Article

Enhancing Robust Adaptive Dynamic Positioning of Full-Actuated Surface Vessels: Reinforcement Learning Approach for Unknown Hydrodynamics

Navigation College, Dalian Maritime University, Dalian 116026, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(5), 993; https://doi.org/10.3390/jmse13050993
Submission received: 15 April 2025 / Revised: 13 May 2025 / Accepted: 19 May 2025 / Published: 21 May 2025
(This article belongs to the Special Issue Optimal Maneuvering and Control of Ships—2nd Edition)

Abstract

In this article, a robust adaptive dynamic positioning (DP) control problem is addressed for fully actuated surface vessels under coupled uncertainties arising from unmodeled hydrodynamic effects and time-varying external disturbances. To obtain a high-performance DP controller, a reinforcement learning (RL) weight-update law involving actor and critic networks is designed without knowledge of the model dynamics or the disturbance parameters, which enhances the robustness of the closed-loop control system. Furthermore, dynamic surface control is integrated to reduce the design complexity caused by differentiating the kinematics, while ensuring semi-globally uniformly ultimately bounded (SGUUB) stability through Lyapunov-based synthesis. Simulations are carried out to evaluate the superiority and feasibility of the proposed algorithm.

1. Introduction

In recent years, artificial intelligence has found exploratory applications in the field of marine engineering [1]. New approaches have been introduced into the autonomous control of marine vehicles for the high-efficiency implementation of engineering missions, such as path following, formation control, and dynamic positioning (DP), to name but a few. Among these tasks, the DP of fully actuated surface vehicles in particular demands high-precision operation under unpredictable external disturbances and uncertain hydrodynamic model parameters [2]. To this end, neural networks and Takagi–Sugeno (T-S) fuzzy systems [3] have been utilized to address uncertainties [4], which involves a large number of computational parameters. An explicit fact is that the RL strategy [5,6] should be further studied to enhance the control performance of the DP system [7], highlighting the need for more adaptive solutions.
In the literature, a large number of advanced DP control algorithms have been presented for fully actuated surface vehicles. By dynamically adjusting control parameters in real time, adaptive control methods [8,9] enhance the robustness and stability of systems and effectively address variations in system parameters and external disturbances [10]. In controller designs using event-triggered approaches [11], computational resource utilization is optimized and operational efficiency improved by designing appropriate triggering conditions that reduce the update frequency of control signals while maintaining system performance. Additionally, through the accurate modeling and compensation of nonlinear dead zones, dead-zone compensation control [12] is used to mitigate the impact of actuator nonlinearities within the system. Along similar lines, by predicting and compensating for communication and actuation delays, time-delay compensation control enhances the system's reliability and response speed. Despite these advancements, the aforementioned methods do not achieve the expected results satisfactorily, and they still have significant limitations when dealing with uncertainties in the hydrodynamic model and time-varying external disturbances.
It should be pointed out that the above-mentioned control strategies do not address the strong nonlinear properties resulting from low-velocity operations, such as the complex hydrodynamic interaction between the hull and the ocean environment. Around these issues, although traditional nonlinear control approaches, such as controller designs based on Lyapunov stability analysis, have achieved satisfactory performance in several applications, they still have limitations for complex, high-dimensional, and dynamically changing nonlinear systems. To overcome this, neural networks, with their strong approximation capabilities, are used in system modeling and control strategy design [13,14], and they possess strong fault tolerance and robustness [15]. Similarly, by utilizing fuzzy rules to handle uncertainties, the fuzzy control approaches in [16] simplify the design by representing the dynamics with local linear systems. Currently, as an important branch of artificial intelligence, machine learning [17] has broad application prospects. Furthermore, RL is an automatic learning approach [18] that interacts with the environment to adjust behavioral strategies through reward and punishment signals, as described in [19]. In particular, compared to traditional methods that explore a simulated environment through trial and error, RL enables the direct acquisition of the optimal control strategy through interaction with the environment, without the need for a preexisting hydrodynamic model. It allows for real-time adaptation of the strategy through online learning, effectively handling time-varying disturbances and parameter uncertainties, and thereby enhancing positioning accuracy, energy efficiency, and resilience to interference [20]. Although RL has been widely applied in various fields, it remains largely untapped in DP. Through its adaptive learning ability, RL can optimize control strategies in real time in a dynamically changing environment, and it is particularly suitable for addressing challenges such as hydrodynamic uncertainty and external disturbances. The training efficiency and stability of the algorithm have been improved, and the dynamic response and disturbance suppression ability of the system have also been enhanced. By approximating solutions to the Hamilton–Jacobi–Bellman (HJB) equation, RL can address the nonlinearities inherent in DP systems [21], which offers a promising solution to their control challenges. RL provides a solution for DP systems that goes beyond traditional methods through model-free learning, online adaptation, complex disturbance suppression capabilities, and reduced training data requirements. Its unique advantages in scenarios of model uncertainty, time-varying disturbances, and multi-objective optimization make it an ideal choice for DP control in a highly dynamic marine environment.
Motivated by the above observations, this paper proposes an enhanced robust adaptive DP control algorithm for fully actuated surface vehicles by employing the actor–critic (A-C) RL mechanism and the dynamic surface control technique. The major contributions of this article are twofold:
(1)
Compared with conventional neural networks [13], the core innovation lies in an RL framework where actor–critic mechanisms are presented for the DP system: the actor network explicitly estimates multi-source model uncertainties, while the critic network dynamically optimizes the coupled interactions between positioning accuracy and environmental disturbances via a value function;
(2)
By constructing a gain-related adaptive law, a high-efficiency thrust allocation algorithm for the multiple actuators of fully actuated vessels is proposed, in which the complex calculation of derivative terms is avoided by employing the dynamic surface control technique. The proposed methodology significantly enhances operational autonomy while maintaining compatibility with standard marine control hardware platforms.

2. Problem Formulation and Preliminaries

2.1. Preliminaries

Throughout this paper, $\mathrm{diag}\{y_1,\ldots,y_n\}$ denotes the diagonal matrix with diagonal elements $y_1,\ldots,y_n$. For a given matrix $X = [x_{i,j}] \in \mathbb{R}^{m\times n}$, where $\mathbb{R}^{m\times n}$ is the set of all $m\times n$ real matrices with $m$ rows and $n$ columns, $\|X\|_F^2 = \mathrm{tr}(X^TX) = \sum_{i=1}^{m}\sum_{j=1}^{n}x_{i,j}^2$. $|\cdot|$ denotes the absolute value of a scalar, $\|\cdot\|$ is the Euclidean norm of a vector, and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, whose square is the sum of the squares of all its elements; $\mathrm{tr}(X)$ is the sum of the main diagonal elements of $X$, so $\mathrm{tr}(X^TX)$ is mathematically equivalent to the squared Frobenius norm. The symbol $(\cdot)$ is a placeholder indicating a certain variable or parameter, $\hat{(\cdot)}$ is its estimate, and the estimation error is $\tilde{(\cdot)} = \hat{(\cdot)} - (\cdot)$.
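As a quick numerical illustration of the norm and trace conventions above (an added example, not part of the original text), the following Python snippet checks that $\mathrm{tr}(X^TX)$ equals the squared Frobenius norm:

```python
import numpy as np

# Illustrative check of the notation: ||X||_F^2 = tr(X^T X) = sum of squared entries.
X = np.array([[1.0, -2.0, 3.0],
              [0.5,  4.0, -1.0]])           # an arbitrary 2x3 real matrix

fro_sq_direct = np.sum(X**2)                 # sum of squares of all elements
fro_sq_trace = np.trace(X.T @ X)             # tr(X^T X)
fro_sq_norm = np.linalg.norm(X, 'fro')**2    # built-in Frobenius norm, squared

assert np.isclose(fro_sq_direct, fro_sq_trace)
assert np.isclose(fro_sq_direct, fro_sq_norm)
print(fro_sq_direct, fro_sq_trace, fro_sq_norm)  # all three agree
```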

2.2. Dynamic Model of Marine Vessels

The purpose of the DP of vessels at sea is to accurately control the vessel propulsion system, which can adjust the position of the vessel in real time to meet the high stability requirements of offshore operations and ensure the accuracy and stability of the working location. According to seakeeping and maneuvering theory, the model of the marine vessel for 3 degrees of freedom (DOFs) can be expressed as
$\dot{\eta} = R(\psi)\mu$ (1)
$M\dot{\mu} + N(\mu)\mu = \tau + \tau_w$ (2)
$\tau = T(\beta)\kappa(n)\mu_p$ (3)
where $\eta = [x, y, \psi]^T \in \mathbb{R}^3$ represents the position and heading angle vector in the geodetic (earth-fixed) coordinate frame, $\mu = [u, v, r]^T \in \mathbb{R}^3$ represents the velocity and rotational angular velocity vector in the hull (body-fixed) coordinate frame, and the rotation matrix between the body-fixed and earth-fixed coordinate frames is expressed as Equation (4). Note that $R^T(\psi) = R^{-1}(\psi)$ and $\|R(\psi)\| = 1$.
$R(\psi) = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (4)
In Equation (2), $M \in \mathbb{R}^{3\times3}$ is the inertial mass matrix, and $N(\mu)\mu$ is a nonlinear fluid-mechanics term, which includes Coriolis and centripetal forces (moments) as well as nonlinear damping forces. $\tau = [\tau_u, \tau_v, \tau_r]^T$ represents the control forces and torque provided by the actuators, whose detailed distribution model is given in Equation (3). $\tau_w = [\tau_{wu}, \tau_{wv}, \tau_{wr}]^T$ is a bounded disturbance caused by the wind, wave, and current environment.
$M = \begin{bmatrix} m - X_{\dot{u}} & 0 & 0 \\ 0 & m - Y_{\dot{v}} & m x_G - Y_{\dot{r}} \\ 0 & m x_G - Y_{\dot{r}} & I_z - N_{\dot{r}} \end{bmatrix}$ (5)
$N(\mu)\mu = \begin{bmatrix} -X_u u - X_{uu}|u|u + Y_{\dot{v}} v r + Y_{\dot{r}} r^2 \\ -X_{\dot{u}} u r - Y_v v - Y_r r - Y_{vv}|v|v - Y_{vr}|v|r \\ (X_{\dot{u}} - Y_{\dot{v}}) u v - Y_{\dot{r}} u r - N_v v - N_r r - N_{vv}|v|v - N_{vr}|v|r \end{bmatrix}$ (6)
$I_z$ represents the moment of inertia about the body-fixed $z_b$ axis. $Y_{\dot{v}}$, $Y_{\dot{r}}$, and $N_{\dot{r}}$ characterize the added mass and added inertia moment. In $N(\mu)\mu$, $X_u$, $X_{uu}$, $Y_{\dot{v}}$, etc., are all hydrodynamic force derivatives.
In Equation (3), $T(\beta) \in \mathbb{R}^{3\times q}$ is the thrust allocation matrix, $q$ is the number of installed thrusters, and $\beta$ is the equivalent azimuth angle of the thrusters. $\kappa(n) = \mathrm{diag}\{\kappa_1(n_1),\ldots,\kappa_q(n_q)\} \in \mathbb{R}^{q\times q}$ is the unknown thrust coefficient matrix that depends on the propeller rotation speeds $n_i$, $i = 1,\ldots,q$, and $\kappa_i(n_i)$ are its elements. In the actual operating environment, changes in the wake flow and the influence of hull motion make the performance of the propellers difficult to predict and quantify. $\mu_p = [p_1|p_1|,\ldots,p_q|p_q|]^T$ is the actual controllable input, where $p_i \in [-1, 1]$, $i = 1,\ldots,q$, is the pitch ratio of the $i$-th thruster.
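To make the 3-DOF model of Equations (1)-(6) concrete, the short Python sketch below propagates the kinematics and dynamics one Euler step; the inertia and damping values are made-up placeholders (they are not the vessel parameters of Section 4), and rot(psi) implements Equation (4):

```python
import numpy as np

def rot(psi: float) -> np.ndarray:
    """Rotation matrix R(psi) of Equation (4), body frame -> earth frame."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Placeholder inertia and damping; the real M and N(mu)mu follow Equations (5)-(6).
M = np.diag([1.2e3, 1.5e3, 2.0e3])           # assumed positive definite (Assumption 1)

def N_mu(mu: np.ndarray) -> np.ndarray:
    """Very rough linear damping stand-in for N(mu)mu."""
    D = np.diag([50.0, 80.0, 100.0])
    return D @ mu

def step(eta, mu, tau, tau_w, dt=0.01):
    """One explicit-Euler step of eta_dot = R(psi) mu, M mu_dot + N(mu)mu = tau + tau_w."""
    eta_dot = rot(eta[2]) @ mu
    mu_dot = np.linalg.solve(M, tau + tau_w - N_mu(mu))
    return eta + dt * eta_dot, mu + dt * mu_dot

eta = np.array([10.0, 10.0, np.deg2rad(140.0)])   # [x, y, psi]
mu = np.zeros(3)                                   # [u, v, r]
eta, mu = step(eta, mu, tau=np.array([1e3, 0.0, 0.0]), tau_w=np.zeros(3))
print(eta, mu)
```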
Assumption 1.
The mass matrix $M$ is positive definite and invertible. Due to the inherent port–starboard symmetry and approximate fore–aft symmetry of surface vessels, this assumption is typically satisfied.
Assumption 2.
In Equation (2), $\tau_w$ represents a bounded disturbance; i.e., there exists a positive constant vector $\bar{\tau}_w = [\bar{\tau}_{wu}, \bar{\tau}_{wv}, \bar{\tau}_{wr}]^T > 0$ that bounds it elementwise. Note that $\bar{\tau}_w$ is an unknown vector used solely for the purposes of analysis.
Lemma 1.
In practical engineering, the thrust generated by a propulsor is always limited. Therefore, for a given vessel, the force coefficient of each propulsor is a constant that must satisfy the condition $0 < \underline{\kappa}_i \le \kappa_i(n_i) \le \bar{\kappa}_i$, where $i = 1,\ldots,q$. Both $\underline{\kappa}_i$ and $\bar{\kappa}_i$ are unknown and are used solely for stability analysis.

2.3. NN Function Approximation

In a control system, for any given nonlinear continuous function $f(x)$ satisfying the initial condition $f(0) = 0$, the NN serves as an efficient approximator to model $f(x)$ [22]. From [23,24,25], $f(x)$ is expressed approximately as
$f(x) = W^{*T}\phi(x) + \varepsilon(x)$ (7)
where the matrix $W^{*T}$ represents the ideal weight matrix, $\varepsilon(x)$ denotes the approximation error, and $\phi(x)$ represents the basis function vector, described by
$\phi_i(x) = \frac{1}{\sqrt{2\pi}\,\zeta_i}\exp\left(-\frac{(x - o_{ri})^T(x - o_{ri})}{\zeta_i^2}\right), \quad i = 1,\ldots,l$ (8)
with $o_{ri} \in \mathbb{R}^n$ representing the center vector and $\zeta_i$ denoting the width of the Gaussian function.
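A minimal Python sketch of the Gaussian RBF approximation in Equations (7) and (8) is given below; the centers, widths, and the toy target function are assumptions for illustration, with the ideal weights replaced by a least-squares fit:

```python
import numpy as np

def rbf_basis(x: np.ndarray, centers: np.ndarray, widths: np.ndarray) -> np.ndarray:
    """Gaussian basis phi_i(x) of Equation (8); centers o_ri and widths zeta_i are assumed."""
    phi = np.empty(len(centers))
    for i, (o, z) in enumerate(zip(centers, widths)):
        d = x - o
        phi[i] = np.exp(-(d @ d) / z**2) / (np.sqrt(2.0 * np.pi) * z)
    return phi

# f(x) ~= W^T phi(x) + eps(x): fit the weights of a 1-D toy function by least squares.
centers = np.linspace(-2.0, 2.0, 9).reshape(-1, 1)    # l = 9 centers (illustrative)
widths = 0.5 * np.ones(9)
xs = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
Phi = np.array([rbf_basis(x, centers, widths) for x in xs])
f_true = np.sin(1.5 * xs[:, 0]) * xs[:, 0]             # some smooth f with f(0) = 0
W, *_ = np.linalg.lstsq(Phi, f_true, rcond=None)       # surrogate for the ideal weights W*
print("max approximation error:", np.max(np.abs(Phi @ W - f_true)))
```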

3. Robust Adaptive Neural Cooperative Controller

In this section, an RL approach based on the actor–critic framework is proposed to develop an adaptive control method [26] for the DP system of vessels. This method is capable of adapting to complex and dynamic marine environmental changes through continuous learning and the optimization of control policies. Under Assumptions 1 and 2, a robust neural adaptive control scheme is designed for the DP system, as described by Equations (1) and (2), utilizing dynamic surface control (DSC) techniques. The schematic diagram and detailed block diagram of the proposed DP system are illustrated in Figure 1 and Figure 2, respectively, providing a comprehensive overview of the system architecture.

3.1. Control Design

Let η d be the target position; the entire control synthesis consists of the following several steps. Define the position error vector as
$\eta_e = \eta - \eta_d$ (9)
Taking the time derivative of Equation (9), one has
$\dot{\eta}_e = R(\psi)\mu - \dot{\eta}_d$ (10)
Based on the position error dynamics in Equation (10), the intermediate (virtual) control $\alpha_\mu \in \mathbb{R}^3$ can be directly selected as
$\alpha_\mu = R^{-1}(\psi)\left(-K_\eta\eta_e + \dot{\eta}_d\right)$ (11)
where $K_\eta$ is a positive definite diagonal matrix. Obviously, the differential expression of $\alpha_\mu$ is complicated and difficult to compute directly. To solve this problem, the DSC technique [14,27] is applied, and the first-order filter of Equation (12) is introduced with the time constant matrix $t_\mu = \mathrm{diag}\{t_u, t_v, t_r\}$.
$\alpha_\mu - \beta_\mu = t_\mu\dot{\beta}_\mu, \quad \alpha_\mu(0) = \beta_\mu(0)$ (12)
The output $\beta_\mu$ is the reference signal for the velocity vector $\mu$. Meanwhile, one can define the filter error vector as
$q_\mu = \alpha_\mu - \beta_\mu$ (13)
and the velocity error vector as
$\mu_e = \mu - \beta_\mu$ (14)
The derivative of $q_\mu$ is obtained from Equations (13) and (14) as
$\dot{q}_\mu = \dot{\alpha}_\mu - \dot{\beta}_\mu = \Omega_\mu(\psi, r, \eta_e, \dot{\eta}_e) - t_\mu^{-1}q_\mu$ (15)
where $\Omega_\mu = [\Omega_u, \Omega_v, \Omega_r]^T$ is a vector whose elements are bounded continuous functions. The position error derivative is further obtained from Equations (10) and (15) as
$\dot{\eta}_e = R(\psi)\left(\alpha_\mu - q_\mu + \mu_e\right) - \dot{\eta}_d$ (16)
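To give a feel for how the DSC filter of Equation (12) and the error signals of Equations (13) and (14) behave in discrete time, the following Python sketch (an illustrative addition with assumed time constants and an assumed virtual-control signal, not part of the original design) propagates the filter with explicit Euler integration:

```python
import numpy as np

# First-order DSC filter of Equation (12): t_mu * beta_dot = alpha - beta, beta(0) = alpha(0).
# Discretized with explicit Euler; t_mu and the alpha signal below are illustrative values only.
t_mu = np.diag([0.01, 0.01, 0.01])
dt = 0.001

def filter_step(beta: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    beta_dot = np.linalg.solve(t_mu, alpha - beta)   # beta_dot = t_mu^{-1} (alpha - beta)
    return beta + dt * beta_dot

beta = np.array([0.5, -0.2, 0.1])                    # beta_mu(0) = alpha_mu(0)
for k in range(1000):
    alpha = np.array([0.5, -0.2, 0.1]) * np.cos(0.5 * k * dt)  # stand-in virtual control alpha_mu
    beta = filter_step(beta, alpha)
    q_mu = alpha - beta                              # filter error of Equation (13)
# mu_e = mu - beta would be formed the same way once the measured velocity mu is available.
print("final filter error q_mu:", q_mu)
```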
The primary control variable $\alpha_{\mu p} = [\alpha_{up}, \alpha_{vp}, \alpha_{rp}]^T$ in the proposed design is chosen as in Equation (20) for the desired thrust term $T(\beta)\kappa(n)\mu_p$.
Due to system complexity and dynamic environmental/operational conditions, the servo system governing propeller thrust faces substantial uncertainties, which include load variations, environmental disturbances, and fluctuations in system parameters [25], inducing variations in the thrust coefficient and compromising the stability of the propulsion system [28]. To address these challenges, a robust adaptive control strategy is proposed for adjusting controller parameters in real time to adapt to changes in propeller dynamics or the vessel's state [29]. The estimation of $1/\kappa_i(n_i)$ is accomplished using $\hat{\lambda}_k$ [30]. The control law, expressed as $p = [p_1,\ldots,p_q]^T$, is derived based on Equations (17) and (18), with the corresponding adaptive law detailed in Equation (19). This strategy ensures enhanced stability and operational efficiency under varying conditions.
$p = \mathrm{sgn}(\mu_p)\sqrt{|\mu_p|}$ (17)
$\mu_p = \mathrm{diag}\{\hat{\lambda}_1,\ldots,\hat{\lambda}_q\}\,T^{\dagger}\alpha_{\mu p}$ (18)
$\dot{\hat{\lambda}}_k = \chi_k\sum_{i=u,v,r}\sum_{j=u,v,r}T^{\dagger}_{k,j}T_{i,k}\,i_e\,\alpha_{jp} - \chi_k\sigma_k\left(\hat{\lambda}_k - \lambda_{k0}\right), \quad k = 1,\ldots,q$ (19)
$\alpha_{\mu p} = -K_\mu\mu_e + \hat{W}_a^T\phi_a + \dot{\beta}_\mu - R^T(\psi)\eta_e$ (20)
where $K_\mu$, $\chi_k$ ($\chi = [\chi_1,\ldots,\chi_q]^T$), and $\sigma_k$ ($\sigma = [\sigma_1,\ldots,\sigma_q]^T$) are positive design parameters; in particular, the $\sigma$-modification term $-\sigma_k(\hat{\lambda}_k - \lambda_{k0})$ protects the estimate $\hat{\lambda}_k$ from drifting divergence. $T^{\dagger}$ denotes the pseudo-inverse of $T$. According to the adaptive law in Equation (19), the gain correlation coefficient is added to compensate for the influence of model uncertainty and interference [31].
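To illustrate the structure of the allocation and adaptation in Equations (17)-(19), the following Python sketch can be considered; the allocation matrix T, the number of thrusters, the gains, and the sign conventions are assumptions made purely for demonstration and do not reproduce the paper's configuration:

```python
import numpy as np

# Illustrative thrust allocation following Equations (17)-(19); all numbers are assumed.
q = 4                                            # number of thrusters (assumed)
T = np.array([[1.0, 1.0, 0.0, 0.0],              # surge contributions
              [0.0, 0.0, 1.0, 1.0],              # sway contributions
              [0.3, -0.3, 1.1, -1.1]])           # yaw moment arms
T_pinv = np.linalg.pinv(T)                       # T^dagger
chi = 0.1 * np.ones(q)
sigma = 3.0 * np.ones(q)
lam0 = np.ones(q)
lam_hat = np.ones(q)                             # estimates of 1/kappa_i(n_i)

def allocate(alpha_mu_p: np.ndarray, mu_e: np.ndarray, dt: float = 0.01):
    """Map the commanded force alpha_mu_p to pitch ratios p and update lam_hat (Eq. (19))."""
    global lam_hat
    mu_p = np.diag(lam_hat) @ (T_pinv @ alpha_mu_p)           # Equation (18)
    p = np.sign(mu_p) * np.sqrt(np.abs(mu_p))                 # Equation (17), since mu_p = p|p|
    p = np.clip(p, -1.0, 1.0)                                 # pitch ratio limits
    # sigma-modified update; k-th entry of 'drive' is sum_{i,j} T_{i,k} T+_{k,j} mu_ei alpha_jp
    drive = (T.T @ mu_e) * (T_pinv @ alpha_mu_p)
    lam_hat = lam_hat + dt * (chi * drive - chi * sigma * (lam_hat - lam0))
    return p

p = allocate(alpha_mu_p=np.array([2.0, -1.0, 0.5]), mu_e=np.array([0.1, -0.05, 0.02]))
print(p, lam_hat)
```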

3.2. Critic and Actor NN Design

This section introduces the RL [32] utilizing the critic and actor method to control the DP of vessels [33,34]. At the same time, in order to construct the Bellman error of nonlinear systems, the long-term strategy performance index function [35] is defined as follows:
$J_c(t) = \int_{t}^{\infty}\gamma^{\frac{x-t}{\omega}}h(\eta_e(x))\,dx$ (21)
where $\omega > 0$ is the integral reinforcement interval, and $\gamma$ is the discount factor that reduces the weight of costs incurred further in the future. Based on the delay characteristics of the system, a value of 0.9 was selected through parameter tuning in order to pay sufficient attention to the long-term return. $J_c(t) = [J_{c1}(t), J_{c2}(t), J_{c3}(t)]^T$ integrates future information, related to the position error vector, that is not known at the current time.
$h_i = \begin{cases} 0, & |\eta_{ei}| \le C_i \\ 1, & |\eta_{ei}| > C_i \end{cases}, \quad i = 1, 2, 3$ (22)
where $C_i$ denotes a small positive threshold associated with the tracking accuracy (typically set to 0.2), and $h_i$ and $\eta_{ei}$ denote the $i$-th elements of $h(\eta_e)$ and $\eta_e$, respectively. Under the current control strategy, $h_i = 1$ reflects a significant tracking error, i.e., a decrease in tracking performance and an increase in the long-term performance index, while $h_i = 0$ indicates the opposite.
At the same time, the long-term strategy performance index over the interval $[t-\omega, t]$ is defined as $h_c(t) = [h_{c1}(t), h_{c2}(t), h_{c3}(t)]^T$, $h_c(t) = \int_{t-\omega}^{t}\gamma^{\frac{x-t}{\omega}}h(\eta_e(x))\,dx$.
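As a rough illustration of how the long-term cost of Equation (21), the windowed cost $h_c(t)$, and the indicator of Equation (22) can be evaluated numerically, the following Python sketch discretizes the integrals for an assumed, exponentially decaying error trajectory (all signals, thresholds, and step sizes here are illustrative):

```python
import numpy as np

# Discrete approximation of Equation (21) and the windowed cost h_c, with the indicator of
# Equation (22). The error trajectory and discretization are assumptions for illustration.
gamma, omega, dt = 0.9, 8.0, 0.1
C = np.array([0.3, 0.3, 0.3])                        # thresholds C_i

def h(eta_e: np.ndarray) -> np.ndarray:
    """h_i = 0 if |eta_ei| <= C_i, else 1 (Equation (22))."""
    return (np.abs(eta_e) > C).astype(float)

ts = np.arange(0.0, 200.0, dt)
eta_e_traj = np.outer(np.exp(-0.05 * ts), np.array([10.0, 10.0, 0.3]))  # decaying error (assumed)

def J_c(t_idx: int) -> np.ndarray:
    """Forward discounted cost from ts[t_idx] onward (truncated at the horizon end)."""
    taus = ts[t_idx:] - ts[t_idx]
    weights = gamma ** (taus / omega)                # discounts cost further in the future
    return (weights[:, None] * h(eta_e_traj[t_idx:])).sum(axis=0) * dt

def h_c(t_idx: int) -> np.ndarray:
    """Windowed cost over [t - omega, t], used to form the temporal-difference error."""
    n = int(omega / dt)
    lo = max(0, t_idx - n)
    taus = ts[lo:t_idx] - ts[t_idx]
    weights = gamma ** (taus / omega)
    return (weights[:, None] * h(eta_e_traj[lo:t_idx])).sum(axis=0) * dt

print(J_c(0), h_c(200))
```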
However, for the highly nonlinear and coupled system described in Equation (3), the long-term utility function $J_c(t)$ incorporates future information that is unavailable at the current time, making it challenging to solve directly, even for linear systems; only a limited class of nonlinear systems with specific functional designs and appropriate parameters admits an explicit evolution of $J_c(t)$. To address this issue, a critic NN is utilized as the approximator, expressed as $J_c(t) \approx W_c^{*T}\Delta\phi_c(t)$, where $W_c^*$ is the ideal weight matrix of the critic NN, $\eta_e$ is the input vector of the critic NN, and $\phi_c$ is the given critic NN basis vector. The actor NN operates based on the control law, driven by the critic NN's evaluation of the control performance [36].
From Equation (7), the actor NN $f(\mu) = W_a^{*T}\phi_a(\mu) + \varepsilon_a(\mu)$ is designed. The update mechanism aims to maintain closed-loop stability and optimize the performance index $J_c(t)$. However, since $W_c^*$ and $W_a^*$ are unknown, $\hat{W}_c$ and $\hat{W}_a$ are utilized to approximate $J_c(t)$ and $f(\mu)$ in real time, respectively.
As outlined in [37], the strategic utility function is constructed as $\varpi_a(t) = \hat{W}_a^T\phi_a(\mu) + \hat{J}_c + \mu_e$, where $\mu_e$ is the input vector of the actor NN at time $t$, $\hat{W}_a$ is the estimate of the actor NN weight matrix, and $\varepsilon_a(\mu)$ is the NN approximation error. Meanwhile, the temporal difference error $\varpi_c$ is defined as $\varpi_c(t) = \hat{W}_c^T\Delta\phi_c(t) + h_c$, where $\Delta\phi_c = \phi_c(\eta_e(t)) - \gamma\phi_c(\eta_e(t-\omega))$. Positive constants $K_{hc}$, $K_{Wc}$, $K_{\phi c}$, $K_{\varepsilon c}$, $K_{Wa}$, $K_{\phi a}$, and $K_{\varepsilon a}$ exist such that $\|h_c\|_F \le K_{hc}$, $\|W_c\|_F \le K_{Wc}$, $\|\phi_c(\eta_e)\|_F \le K_{\phi c}$, $\|\varepsilon_c(\eta_e)\|_F \le K_{\varepsilon c}$, $\|\Delta\phi_c\|_F \le (1+\gamma)K_{\phi c}$, $\|W_a\|_F \le K_{Wa}$, $\|\phi_a(\mu)\|_F \le K_{\phi a}$, and $\|\varepsilon_a(\mu)\|_F \le K_{\varepsilon a}$. Using a gradient descent algorithm, the adaptive update laws of the critic and actor NN parts are respectively defined as
$\dot{\hat{W}}_{cj} = -\gamma_c\frac{\partial(\varpi_c^T\varpi_c)}{\partial\hat{W}_{cj}} = -\gamma_c\,\Delta\phi_c(t)\left(\frac{\hat{W}_{cj}^T\Delta\phi_c(t)}{\rho_j} + h_{ck}\right)^T$ (23)
$\dot{\hat{W}}_{ai} = -\gamma_a\frac{\partial(\varpi_a^T\varpi_a)}{\partial\hat{W}_{ai}} = -\gamma_a\,\phi_a\left(\frac{\phi_a^T\hat{W}_{ai}}{\rho_i} + \mu_e^T + \frac{\phi_c^T\hat{W}_{cj}}{\rho_j}\right)$ (24)
where $\rho_i$, $\rho_j$ are gain coefficients, $\hat{W}_c = [\hat{W}_{cx}, \hat{W}_{cy}, \hat{W}_{c\psi}]^T$, $\hat{W}_a = [\hat{W}_{au}, \hat{W}_{av}, \hat{W}_{ar}]^T$, $k = 1, 2, 3$, $i = u, v, r$, and $j = x, y, \psi$. By repeatedly adjusting the gain coefficients in the simulation, stability and convergence during the training process are guaranteed.
To prevent one network from dominating while the other stagnates, uncoordinated updates between the actor and critic NNs are mitigated by the inclusion of the gain coefficients. $\gamma_c = \mathrm{diag}\{\gamma_{cx}, \gamma_{cy}, \gamma_{c\psi}\} > 0$ and $\gamma_a = \mathrm{diag}\{\gamma_{au}, \gamma_{av}, \gamma_{ar}\} > 0$ are the user-defined learning rate matrices. One can then define the weight errors as $\tilde{W}_c = \hat{W}_c - W_c^*$ and $\tilde{W}_a = \hat{W}_a - W_a^*$. These parameters play a crucial role in the training and performance of the networks. To accelerate training, the network architecture is adjusted so that, within the actor–critic framework, the critic network can effectively estimate the value function while the actor network stably learns the optimal strategy.
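The following Python sketch illustrates one possible discrete-time realization of the gradient-descent updates in Equations (23) and (24); the basis dimensions, normalization, learning rates, and test signals are assumptions chosen only to show the update structure, not the tuned values of the paper:

```python
import numpy as np

# Gradient-descent style updates sketching Equations (23) and (24). All numbers are assumed.
l_c, l_a = 6, 6                                    # basis sizes (assumed)
W_c = np.zeros((l_c, 3))                           # critic weights, columns j = x, y, psi
W_a = np.zeros((l_a, 3))                           # actor weights, columns i = u, v, r
gamma_c, gamma_a, gamma, rho_c, rho_a = 0.01, 0.01, 0.9, 200.0, 60.0

def critic_update(phi_c_t, phi_c_prev, h_c, dt=0.01):
    """Minimize the TD error  varpi_c = W_c^T (phi_c(t) - gamma*phi_c(t-omega)) + h_c."""
    global W_c
    d_phi = phi_c_t - gamma * phi_c_prev
    varpi_c = (W_c.T @ d_phi) / rho_c + h_c
    W_c = W_c - dt * gamma_c * np.outer(d_phi, varpi_c)   # descent direction of varpi_c^2
    return varpi_c

def actor_update(phi_a, mu_e, J_hat, dt=0.01):
    """Minimize  varpi_a = W_a^T phi_a + J_hat + mu_e  (strategic utility of the actor)."""
    global W_a
    varpi_a = (W_a.T @ phi_a) / rho_a + J_hat + mu_e
    W_a = W_a - dt * gamma_a * np.outer(phi_a, varpi_a)
    return varpi_a

# One illustrative step with random basis activations and a small tracking error.
rng = np.random.default_rng(0)
critic_update(rng.random(l_c), rng.random(l_c), h_c=np.array([0.8, 0.8, 0.0]))
actor_update(rng.random(l_a), mu_e=np.array([0.05, -0.02, 0.01]), J_hat=W_c.T @ rng.random(l_c))
print(np.linalg.norm(W_c), np.linalg.norm(W_a))
```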
Remark 1.
NNs are widely recognized as effective tools for the adaptive approximation of complex unknowns, such as unmodeled dynamics and uncertainties, within intelligent learning-based control frameworks. RL techniques, particularly those employing actor–critic architectures, are designed to optimize long-term rewards through a continuous interaction with online control performance. This performance is inherently influenced by both internal control strategies and external disturbances, which collectively shape the system’s behavior and adaptability. Through multiple repeated experiments, the parameters are evaluated, and the balance of each parameter is adjusted. Due to the complex design of the dynamic adjustment strategy, the hyperparameters need to be adjusted multiple times. Relying on empirical adjustment and repeated trial and error, the values of the parameters are finally determined in Equation (39).

3.3. Stability Analysis

Theorem 1.
Consider the closed-loop system composed of the vessel dynamics, the robust neural control law, the actor–critic NNs, and the adaptive laws. Assume that the initial conditions satisfy
$\eta_e^T(0)\eta_e(0) + \mu_e^T(0)\mu_e(0) + q_e^T(0)q_e(0) + \tilde{W}_a^T(0)\gamma_a^{-1}\tilde{W}_a(0) + \tilde{W}_c^T(0)\gamma_c^{-1}\tilde{W}_c(0) \le 2\Lambda_0$
for a given positive constant $\Lambda_0$,
and there exist adjustable parameters $K_\eta$, $K_\mu$, $t_\mu$, $\lambda_k$, $\chi_k$, $\gamma_c$, $\gamma_a$. Then, the closed-loop system consisting of the vessel model in Equations (1) and (2), the control laws in Equations (11) and (20), and the adaptive law in Equation (19) is stable. The proposed adaptive NN controller, integrating RL, ensures the following closed-loop properties for all such initial conditions:
(1)
Semi-Global Uniform Ultimate Boundedness: The state variables of the closed-loop system are semi-globally uniformly ultimately bounded;
(2)
Tightly Concentrated Errors: the tracking position error $\eta_e$, the tracking velocity error $\mu_e$, and the NN weight errors $\tilde{W}_c$ and $\tilde{W}_a$ are kept within a tight region;
(3)
The optimal control strategy in RL is determined by approximating the ideal unknown weights. Estimators of the actor and critic weights are designed to allow the NNs to update recursively in parallel.
The proof is as follows: According to the characteristics of vessel DP and control design, the following Lyapunov function is used:
$V = \frac{1}{2}\eta_e^T\eta_e + \frac{1}{2}\mu_e^TM\mu_e + \frac{1}{2}q_e^Tq_e + \frac{1}{2}\sum_{k=1}^{q}\frac{\kappa_k(n_k)\tilde{\lambda}_k^2}{\chi_k} + \frac{1}{2}\mathrm{tr}\left\{\gamma_a^{-1}\tilde{W}_a^T\tilde{W}_a\right\} + \frac{1}{2}\mathrm{tr}\left\{\gamma_c^{-1}\tilde{W}_c^T\tilde{W}_c\right\}$ (25)
The time derivative of Equation (25) is
$\dot{V} = \eta_e^T\dot{\eta}_e + \mu_e^TM\dot{\mu}_e + q_e^T\dot{q}_e + \sum_{k=1}^{q}\frac{\kappa_k(n_k)\tilde{\lambda}_k\dot{\hat{\lambda}}_k}{\chi_k} + \mathrm{tr}\left\{\gamma_a^{-1}\tilde{W}_a^T\dot{\hat{W}}_a\right\} + \mathrm{tr}\left\{\gamma_c^{-1}\tilde{W}_c^T\dot{\hat{W}}_c\right\}$ (26)
Similar to the former, we further have
$\eta_e^T\dot{\eta}_e = \eta_e^T\left(R(\psi)\mu - \dot{\eta}_d\right) \le -\eta_e^TK_\eta\eta_e + \eta_e^TR(\psi)\mu_e + \|\eta_e\|_F^2 + \frac{1}{4}\|q_e\|_F^2$ (27)
According to Equations (4) and (14) and Young’s inequality,
$\mu_e^TM\dot{\mu}_e = \mu_e^T\left(\tau + \tau_w - N(\mu)\mu\right) - \mu_e^TMt_\mu^{-1}q_e$ (28)
where the $N(\mu)\mu - \tau_w$ term is treated with the approximation $f(\mu) = \hat{W}_a^T\phi_a(\mu) + \varepsilon_a(\mu)$. And according to Equation (28),
$\mu_e^TM\dot{\mu}_e = \mu_e^T\left(\tau - \hat{W}_a^T\phi_a(\mu) - \varepsilon_a(\mu)\right) - \mu_e^TMt_\mu^{-1}q_e = \mu_e^T\left(-K_\mu\mu_e + \varepsilon_a(\mu) + T\kappa\tilde{\lambda}_kT^{\dagger}\alpha_{\mu p}\right) \le -\left(K_\mu - \frac{1}{4}I\right)\mu_e^T\mu_e + \mu_e^TT\kappa\tilde{\lambda}_kT^{\dagger}\alpha_{\mu p} + \|\bar{\varepsilon}_a(\mu)\|_F^2$ (29)
Moreover, we also have
$q_\mu^T\dot{q}_\mu = q_\mu^T\left(\dot{\alpha}_\mu - \dot{\beta}_\mu\right) \le -q_\mu^Tt_\mu^{-1}q_\mu + \sum_{i=u,v,r}\frac{K_{\Omega i}^2q_i^2}{2b} + \frac{3b}{2}$ (30)
Substituting the designed adaptive law of Equation (19), one further obtains
$\sum_{k=1}^{q}\frac{\kappa_k(n_k)\tilde{\lambda}_k\dot{\hat{\lambda}}_k}{\chi_k} = \sum_{k=1}^{q}\kappa_k(n_k)\tilde{\lambda}_k\left(\sum_{i=u,v,r}\sum_{j=u,v,r}T^{\dagger}_{k,j}T_{i,k}\,i_e\,\alpha_{jp} - \sigma_k\left(\hat{\lambda}_k - \lambda_{k0}\right)\right)$ (31)
The first term cancels out the term $\mu_e^TT\kappa\tilde{\lambda}_kT^{\dagger}\alpha_{\mu p}$, and the second term is bounded by Young's inequality:
$\mu_e^TT\kappa\tilde{\lambda}_kT^{\dagger}\alpha_{\mu p} + \sum_{k=1}^{q}\frac{\kappa_k(n_k)\tilde{\lambda}_k\dot{\hat{\lambda}}_k}{\chi_k} \le -\frac{1}{2}\sum_{k=1}^{q}\kappa_k(n_k)\sigma_k\tilde{\lambda}_k^2 + \frac{1}{2}\sum_{k=1}^{q}\kappa_k(n_k)\sigma_k\left(\lambda_k - \lambda_{k0}\right)^2$ (32)
Finally, according to Equation (24), the error of RL weight is analyzed as
$\mathrm{tr}\left\{\gamma_a^{-1}\tilde{W}_a^T\dot{\hat{W}}_a\right\} \le -\frac{1}{4}K_{\phi a}^2\,\mathrm{tr}\left\{\tilde{W}_a^T\tilde{W}_a\right\} + \mu_e^T\mu_e + K_{\phi c}^2\,\mathrm{tr}\left\{\tilde{W}_c^T\tilde{W}_c\right\} + K_{\phi a}^2K_{Wa}^2 + K_{\phi c}^2K_{Wa}^2$ (33)
$\mathrm{tr}\left\{\gamma_c^{-1}\tilde{W}_c^T\dot{\hat{W}}_c\right\} \le -\frac{1}{2}(1+\gamma)^2K_{\phi c}^2\,\mathrm{tr}\left\{\tilde{W}_c^T\tilde{W}_c\right\} + \frac{1}{2}K_{hc}^2 + (1+\gamma)^2K_{\phi c}^2K_{Wc}^2$ (34)
To sum up, the final form is obtained by combining Equations (27) to (34):
$\dot{V} \le -\eta_e^T\left(K_\eta - I\right)\eta_e - \mu_e^T\left(K_\mu - \frac{5}{4}I\right)\mu_e - \sum_{i=u,v,r}\left(t_{\mu i}^{-1} - \frac{1}{4} - \frac{K_{\Omega i}^2}{2b}\right)q_i^2 - \frac{1}{2}\sum_{k=1}^{q}\kappa_k(n_k)\sigma_k\tilde{\lambda}_k^2 - \frac{1}{4}K_{\phi c}^2\left((1+\gamma)^2 - 2\right)\mathrm{tr}\left\{\tilde{W}_c^T\tilde{W}_c\right\} - \frac{1}{4}K_{\phi a}^2\,\mathrm{tr}\left\{\tilde{W}_a^T\tilde{W}_a\right\} + \frac{3b}{2} + \bar{\varepsilon}_a^2 + \frac{1}{2}\sum_{k=1}^{q}\kappa_k(n_k)\sigma_k\left(\lambda_k - \lambda_{k0}\right)^2 + \frac{1}{2}K_{hc}^2 + K_{\phi a}^2K_{Wa}^2 + K_{\phi c}^2K_{Wa}^2 + (1+\gamma)^2K_{\phi c}^2K_{Wc}^2 \le -\lambda_{\min}\left(K_\eta - I\right)\eta_e^T\eta_e - \lambda_{\min}\left(K_\mu - \frac{5}{4}I\right)\mu_e^T\mu_e - \sum_{i=u,v,r}\left(t_{\mu i}^{-1} - \frac{1}{4} - \frac{K_{\Omega i}^2}{2b}\right)q_i^2 - \frac{1}{2}\sum_{k=1}^{q}\kappa_k(n_k)\sigma_k\tilde{\lambda}_k^2 - \frac{1}{4}K_{\phi c}^2\left((1+\gamma)^2 - 2\right)\mathrm{tr}\left\{\tilde{W}_c^T\tilde{W}_c\right\} - \frac{1}{4}K_{\phi a}^2\,\mathrm{tr}\left\{\tilde{W}_a^T\tilde{W}_a\right\} + \varrho$ (35)
The design parameters are appropriately selected, satisfying
$a_1 \le \lambda_{\min}\left(K_\eta - I\right), \quad a_2 \le \lambda_{\min}\left(K_\mu - \frac{5}{4}I\right), \quad a_3 \le \sum_{i=u,v,r}\left(t_{\mu i}^{-1} - \frac{1}{4} - \frac{K_{\Omega i}^2}{2b}\right), \quad a_4 \le \frac{1}{2}K_{\phi c}^2\left((1+\gamma)^2 - 2\right), \quad a_5 \le \frac{1}{4}K_{\phi a}^2$ (36)
where
$a = \min\{a_1, a_2, a_3, a_4, a_5\}, \quad \varrho = \frac{3b}{2} + \bar{\varepsilon}_a^2 + \frac{1}{2}\sum_{k=1}^{q}\kappa_k(n_k)\sigma_k\left(\lambda_k - \lambda_{k0}\right)^2 + \frac{1}{2}K_{hc}^2 + K_{\phi a}^2K_{Wa}^2 + K_{\phi c}^2K_{Wa}^2 + (1+\gamma)^2K_{\phi c}^2K_{Wc}^2$ (37)
By appropriately adjusting the parameters $K_\eta$, $K_\mu$, $t_\mu^{-1}$, and $\gamma$, all tracking errors of the closed-loop control system converge to a compact set. Integrating both sides of Equation (35), one obtains $V(t) \le \varrho/(2a) + \left(V(0) - \varrho/(2a)\right)\exp(-2at)$. Thus, $V(t)$ converges to the set bounded by $\varrho/(2a)$ as $t \to \infty$, and the bound $\varrho$ can be made small enough by tuning the control parameters appropriately. Therefore, all state variables of the closed-loop control system are SGUUB.
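For completeness, the comparison-lemma step behind the bound quoted above can be spelled out as follows (a standard argument, added here for readability):
$\dot{V} \le -2aV + \varrho \;\Longrightarrow\; \frac{d}{dt}\left(e^{2at}V(t)\right) \le \varrho\,e^{2at} \;\Longrightarrow\; V(t) \le \frac{\varrho}{2a} + \left(V(0) - \frac{\varrho}{2a}\right)e^{-2at}.$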

4. Numerical Simulation

In this section, two representative case studies are compared to illustrate the effectiveness and merits of the proposed control scheme, i.e., a comparative experiment against the result in [38]. In order to verify the DP adaptive control method based on integral RL, MATLAB 2024a is used as the simulation platform to conduct experiments with a marine environment model. The DP system model used in the experiment includes the thrusters, the servo system, the control system, and the external disturbance model. The environmental disturbances are as follows: the wind is at level 6, the number of wave frequency segments is 10, and the wave velocity is $10 \times 1852/3600$ m/s (i.e., 10 kn). The nominal physical parameters [39] of the vessel are given in Equation (38):
$X_{\dot{u}} = 0.7212\times10^6$, $Y_{\dot{v}} = 3.6921\times10^6$, $Y_{\dot{r}} = 1.0234\times10^6$, $I_z - N_{\dot{r}} = 3.7454\times10^9$, $X_u = 5.0242\times10^4$, $Y_v = 2.7229\times10^5$, $Y_r = 4.3933\times10^6$, $Y_{vv} = 1.7860\times10^4$, $X_{uu} = 1.0179\times10^3$, $Y_{vr} = 3.0068\times10^5$, $N_v = 4.3821\times10^6$, $N_r = 4.1894\times10^8$, $N_{vv} = 2.4684\times10^5$, $N_{vr} = 6.5759\times10^6$ (38)
With the initial states $\eta(0) = [10\ \mathrm{m}, 10\ \mathrm{m}, 140\ \mathrm{deg}]^T$ and $\mu(0) = [10\ \mathrm{m/s}, 10\ \mathrm{m/s}, 140\ \mathrm{deg/s}]^T$, the desired position and heading are chosen as $\eta_d = [0\ \mathrm{m}, 0\ \mathrm{m}, 156\ \mathrm{deg}]^T$. The design parameters utilized for the model are chosen as in Equation (39):
$K_\eta = \mathrm{diag}\{0.4, 0.4, 0.2\}$, $K_\mu = \mathrm{diag}\{0.4, 0.4, 0.2\}$, $\chi_k = [0.1, 0.1, 0.05, 0.05, 0.28, 0.32, 0.21]^T$, $\sigma_k = [3.2, 3.2, 4.1, 3.5, 1.8, 1.2, 3.1]^T$, $t_\mu = \mathrm{diag}\{0.01, 0.01, 0.01\}$, $\gamma_c = \mathrm{diag}\{0.01, 0.01, 0.01\}$, $\gamma_a = \mathrm{diag}\{0.01, 0.01, 0.01\}$, $\rho_x = 230$, $\rho_y = 220$, $\rho_\psi = 200$, $\rho_u = 66$, $\rho_v = 58$, $\rho_r = 59$, $C_x = 0.3$, $C_y = 0.3$, $C_\psi = 0.3$, $\gamma = 0.9$, $\omega = 8$ (39)
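For readers who wish to transcribe the setup, the gains of Equation (39) and the initial conditions translate, for example, into the following Python definitions (a convenience transcription only; the original simulations were carried out in MATLAB 2024a, not with this script):

```python
import numpy as np

# Controller and learning gains of Equation (39), transcribed for convenience.
K_eta = np.diag([0.4, 0.4, 0.2])
K_mu = np.diag([0.4, 0.4, 0.2])
chi = np.array([0.1, 0.1, 0.05, 0.05, 0.28, 0.32, 0.21])   # one entry per thruster, q = 7
sigma = np.array([3.2, 3.2, 4.1, 3.5, 1.8, 1.2, 3.1])
t_mu = np.diag([0.01, 0.01, 0.01])
gamma_c = np.diag([0.01, 0.01, 0.01])
gamma_a = np.diag([0.01, 0.01, 0.01])
rho = {"x": 230.0, "y": 220.0, "psi": 200.0, "u": 66.0, "v": 58.0, "r": 59.0}
C = np.array([0.3, 0.3, 0.3])                               # tracking thresholds C_x, C_y, C_psi
gamma, omega = 0.9, 8.0                                     # discount factor and RL interval

eta_0 = np.array([10.0, 10.0, np.deg2rad(140.0)])           # initial position/heading
eta_d = np.array([0.0, 0.0, np.deg2rad(156.0)])             # desired position/heading
```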
The comparative experimental results are illustrated in Figure 3, Figure 4, Figure 5, and Figure 6. In Figure 3, with its trajectory exhibiting a narrower error range and a smoother curve, the vessel's position ultimately stabilizes within the desired attitude domain. As the simulation progresses, the system demonstrates high accuracy, with the proposed algorithm outperforming the comparison method in terms of control and positioning precision. The proposed trajectory converges tightly within the desired error boundary, whereas the other curve shows persistent oscillations and a larger maximum deviation.
The comparison of the specific position parameters $x$, $y$, $\psi$ in Figure 4 reveals that both algorithms attain fast convergence within the target region. However, the algorithm from [38] exhibits significant fluctuations in the attitude and velocity errors, whereas the proposed algorithm maintains smoother variations, achieving precise targeting of the desired attitude. Figure 5 presents a remarkable demonstration of control robustness, with the proposed algorithm reducing speed variations by an order of magnitude compared to the oscillatory behavior characteristic of the conventional method [38]. Under substantial external disturbances, the proposed algorithm drives the vessel to the target point with smaller speed fluctuations and greater steadiness. Based on the adaptive control algorithm's ability, this outcome highlights how RL optimizes the system's response to disturbance, reducing the necessity for abrupt adjustments in propeller output to accommodate environmental changes. During the initial stage, there is a significant transient overshoot in [38], while the proposed algorithm has an overshoot of no more than 0.1 m/s. Regarding safety improvement, the steady-state fluctuation range of $u$ and $v$ is reduced, indicating that the vessel is less susceptible to drift caused by wind and waves during positioning operations. Its low fluctuation and high-precision characteristics are particularly suitable for demanding marine engineering scenarios. In the comparison in Figure 6, the proposed algorithm demonstrates smoother temporal variations in force and torque, minimizing sudden adjustments or fluctuations in the control inputs. For DP systems requiring high precision and stability, the proposed method significantly enhances operational stability and control efficiency, thereby improving overall system performance metrics. The attenuation rate $a$ determines the speed at which the system state tends to stability, and the theoretical analysis indicates a good convergence effect. In Figure 3 and Figure 4, both the velocity vector and the position vector converge to a neighborhood of zero near 100 s, and dynamic indicators such as the overshoot and the settling time perform well. RL and adaptive control methods are adopted to further improve the convergence speed and enhance the transient response characteristics of the system. This scheme ensures that the system reaches a stable state quickly and smoothly, reduces overshoot, damps oscillations so that their amplitude is smaller, and improves the stability of the system.
Figure 7 depicts the pitch ratios of the controlled propellers in the simulation experiments, with all values falling within reasonable ranges. The two subplots represent the pitch ratio and the azimuth angle of the azimuth thrusters, respectively [30]. The azimuth angle of a propeller directly affects the anti-drift ability of the vessel by controlling the thrust direction of the thruster; its dynamic adjustment can align the thrust vector against environmental disturbances (such as wind loads), significantly enhancing positioning stability. Multiple thrusters are combined through differentiated azimuth angles to form a cooperative thrust vector field, achieving high-precision torque control. This azimuth–pitch joint control mechanism is optimized in real time through the adaptive algorithm: it can not only adjust the thrust phase in advance, according to the environmental interference prediction, to avoid overloading, but also dynamically reconstruct the thrust distribution when a thruster fails. Figure 8 illustrates significant variations in the long-term utility function. Over time, the control system progressively minimizes the heading angle error, rendering the changes negligible and thus achieving the required performance and stability. The primary objective of both networks is to minimize the utility function, optimizing the control strategy and reducing the impact of wind and wave variations to ensure high-precision positioning.
Figure 9 presents the parameters of the robust adaptive control, where the incorporation of the RL network significantly smooths their fluctuations, reflecting improved control performance. Subsequently, both the position and velocity vector fluctuations remain minimal, maintaining precise positioning and robustness against disturbances in complex environments. Notably, due to the larger parameter space required for policy learning, the weight norm of the actor network is slightly larger than that of the critic network, reflecting its higher structural complexity. In contrast, the critic, which estimates the value function, typically operates within a less complex space, leading to a smaller weight norm. This observation underscores the distinct structural and training demands of the two networks during the learning process. Figure 10 and Figure 11 present the cost function trends and the norms of the current weights for the two networks. With more fluctuations in the early learning phase, the critic's cost quantifies the error in estimating the long-term utility. The norm of the actor weights, whose updates focus on optimizing the strategy to enhance future expected rewards, shows greater variation due to uncertainty in the training environment. The primary objective of the actor–critic RL method in tuning the actor–critic NNs is to minimize the cost functions $V_a = \varpi_a^T\varpi_a$ and $V_c = \varpi_c^T\varpi_c$, effectively tracking the target heading and maintaining small heading errors despite environmental disturbances. This method dynamically adjusts the control strategy based on historical data and environmental feedback, demonstrating high accuracy in practical applications. The critic network evaluates the long-term value function and directs the actor to focus on actions that minimize cumulative future errors, while the actor network learns a strategy that balances tracking accuracy and energy efficiency. The proposed RL framework ensures adaptability to non-stationary disturbances while avoiding excessive penalization of the control effort; this is achieved through collaborative actor-driven strategy optimization and critic-guided value learning.
As shown in Figure 12, the position and velocity error norms rapidly decrease and then settle near zero; even when the system is confronted with disturbances under harsh sea conditions, it can still converge rapidly and maintain stability and robustness.
Through the RL process, the agent learns the most energy-efficient control strategy while maintaining control precision, particularly in unstable sea conditions, where the system optimally manages propeller operations to avoid excessive adjustments. The method in [38] could be heavily impacted by harsh sea conditions, and its capacity to mitigate such disturbances was relatively weak. In contrast, each parameter of the proposed algorithm can be finely adjusted, enabling the gradual stabilization of the critic network's value estimation. With adequate training and sufficient interaction with the environment, the algorithm demonstrates improved performance.

5. Conclusions

Even though robust adaptive control boasts strong stability and the ability to handle uncertainties, when facing extreme conditions, especially rapidly changing sea states, its response is slow and its energy consumption is high. By combining RL with traditional robust control, the proposed scheme performs well in dynamic maritime environments, significantly reducing position and heading errors while optimizing energy efficiency and ensuring real-time adaptability, which guarantees smaller error fluctuations and enhanced stability. Moreover, it addresses the problems of response delay and performance degradation in extreme conditions. This synergy improves control accuracy and stability, making it highly effective in challenging maritime operations.

Author Contributions

Conceptualization, J.L., W.H. and C.H.; methodology, W.H.; software, W.H.; validation, J.L. and C.H.; formal analysis, J.L.; investigation, J.L.; writing—original draft preparation, W.H.; writing—review and editing, J.L. and G.Z.; supervision, J.L.; project administration, G.Z.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

The paper is partially supported by the National Excellent Youth Science Fund of China (52322111), the National Natural Science Foundation of China (52171291), the Dalian Science and Technology Program for Distinguished Young Scholars (2022RJ07), and the Fundamental Research Funds for the Central Universities (3132023137, 3132023502). The authors would like to thank the anonymous reviewers for their valuable comments.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhai, M.; Wu, W.; Tsai, S. The Effects of Artificial Intelligence Orientation on Inefficient Investment: Firm-Level Evidence from China’s Energy Enterprises. Energy Econ. 2025, 141, 108048. [Google Scholar] [CrossRef]
  2. Alagili, O.; Fernando, E.; Ahmed, S.; Imtiaz, S.; Murrant, K.; Gash, B.; Islam, M.; Zaman, H. Experimental Investigations of An Energy-Efficient Dynamic Positioning Controller for Different Sea Conditions. Ocean Eng. 2024, 299, 117297. [Google Scholar] [CrossRef]
  3. Xie, W.; Liang, J.; Wang, Z.; Yang, J. Non-Monotonic Lyapunov Function Based Membership Function Dependent Robust Control of Takagi-Sugeno Fuzzy Systems. Eng. Appl. Artif. Intell. 2025, 152, 110785. [Google Scholar] [CrossRef]
  4. Hatami, E.; Salarieh, H. Adaptive Critic-Based Neuro-Fuzzy Controller for Dynamic Position of Ships. Sci. Iran. 2015, 22, 272–280. [Google Scholar]
  5. Wan, L.; Fu, L.; Li, C.; Li, K. Flexible Job Shop Scheduling via Deep Reinforcement Learning with Meta-Path-Based Heterogeneous Graph Neural Network. Knowl.-Based Syst. 2024, 296, 111940. [Google Scholar] [CrossRef]
  6. Wang, D.; Zhao, H.; Zhang, L.; Chen, K. Learning to Dispatch for Flexible Job Shop Scheduling Based on Deep Reinforcement Learning via Graph Gated Channel Transformation. IEEE Access 2024, 12, 50935–50948. [Google Scholar]
  7. Zhao, L.; Bai, Y. Unlocking the Ocean 6G: A Review of Path-Planning Techniques for Maritime Data Harvesting Assisted by Autonomous Marine Vehicles. J. Mar. Sci. Eng. 2024, 12, 126. [Google Scholar] [CrossRef]
  8. Liu, Z.; Zhang, O.; Gao, Y.; Zhao, Y.; Sun, Y.; Liu, J. Adaptive Neural Network-Based Fixed-Time Control for Trajectory Tracking of Robotic Systems. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 241–245. [Google Scholar] [CrossRef]
  9. Li, J.; Zhang, G.; Cabecinhas, D.; Pascoal, A.; Zhang, W. Prescribed Performance Path Following Control of USVs via An Output-Based Threshold Rule. IEEE Trans. Veh. Technol. 2024, 73, 6171–6182. [Google Scholar] [CrossRef]
  10. Ma, D.; Chen, X.; Ma, W.; Zheng, H.; Qu, F. Neural Network Model-Based Reinforcement Learning Control for AUV 3-D Path Following. IEEE Trans. Intell. Veh. 2024, 9, 893–904. [Google Scholar] [CrossRef]
  11. Li, J.; Zhang, G.; Zhang, X.; Zhang, W. Integrating Dynamic Event-Triggered and Sensor-Tolerant Control: Application to USV-UAVs Cooperative Formation System for Maritime Parallel Search. IEEE Trans. Intell. Transp. Syst. 2025, 25, 3986–3998. [Google Scholar] [CrossRef]
  12. Wang, Y.; Bai, W.; Zhang, W.; Chen, S.; Zhao, Y. Optimal Course Tracking Control of USV with Input Dead Zone Based on Adaptive Fuzzy Dynamic Programing. Proc. Inst. Mech. Eng. Part J. Syst. Control Eng. 2024. [CrossRef]
  13. Zhang, G.; Sun, Z.; Li, J.; Huang, J.; Bin, Q. Iterative Learning Control for Path-following of ASV with the Ice Floes Auto-select Avoidance Mechanism. IEEE Trans. Intell. Transp. Syst. 2025; early access. [Google Scholar] [CrossRef]
  14. Qu, C.; Cheng, L.; Gao, S.; Huang, X. Experience Replay Enhances Excitation Condition of Neural-Network Adaptive Control Learning. J. Guid. Control Dyn. 2025, 48, 496–507. [Google Scholar] [CrossRef]
  15. Ning, J.; Ma, Y.; Chen, C. Event-Triggered-Based Distributed Formation Cooperative Tracking Control of Under-Actuated Unmanned Surface Vehicles with Input and State Quantization. IEEE Trans. Intell. Transp. Syst. 2025, 26, 7081–7097. [Google Scholar] [CrossRef]
  16. Gao, Y.; Su, S.; Zong, Y.; Zhang, L.; Guo, X. Adaptive Fuzzy Gain-Scheduling Robust Control for Stability of Quadrotors. Appl. Math. Model. 2025, 138, 115816. [Google Scholar] [CrossRef]
  17. He, Y.; Liu, Y.; Yang, L.; Qu, X. Deep Adaptive Control: Deep Reinforcement Learning-Based Adaptive Vehicle Trajectory Control Algorithms for Different Risk Levels. IEEE Trans. Intell. Veh. 2024, 9, 1654–1666. [Google Scholar] [CrossRef]
  18. Wang, S.; Li, J.; Jiao, Q.; Ma, F. Design Patterns of Deep Reinforcement Learning Models for Job Shop Scheduling Problems. J. Intell. Manuf. 2024. [CrossRef]
  19. Yang, Y.; Geng, S.; Yue, D.; Gorbachev, S.; Korovin, I. Event-Triggered Approximately Optimized Formation Control of Multi-Agent Systems with Unknown Disturbances via Simplified Reinforcement Learning. Appl. Math. Comput. 2025, 489, 129149. [Google Scholar] [CrossRef]
  20. Abreu, M.; Reis, L.; Lau, N. Addressing Imperfect Symmetry: A Novel Symmetry-Learning Actor-Critic Extension. Neurocomputing 2025, 614, 128771. [Google Scholar] [CrossRef]
  21. Tagliaferri, F.; Viola, I. A Real-Time Strategy Decision Program for Sailing Yacht Races. Ocean Eng. 2017, 134, 129–139. [Google Scholar] [CrossRef]
  22. Ning, J.; Wang, Y.; Chen, C.; Li, T. Neural Network Observer Based Adaptive Trajectory Tracking Control Strategy of Unmanned Surface Vehicle with Event-Triggered Mechanisms and Signal Quantization. IEEE Trans. Emerg. Top. Comput. Intell. 2025; early access. [Google Scholar] [CrossRef]
  23. Zhang, G.; Yin, S.; Li, J.; Zhang, W.; Zhang, W. Game-Based Event-Triggered Control for Unmanned Surface Vehicle: Algorithm Design and Harbor Experiment. IEEE Trans. Cybern. 2025; early access. [Google Scholar] [CrossRef]
  24. He, W.; Dong, Y.; Sun, C. Adaptive Neural Impedance Control of a Robotic Manipulator with Input Saturation. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 334–344. [Google Scholar] [CrossRef]
  25. Liu, Z.; Zhao, Y.; Zhang, O.; Chen, W.; Wang, J.; Gao, Y.; Liu, J. A Novel Faster Fixed-Time Adaptive Control for Robotic Systems with Input Saturation. IEEE Trans. Ind. Electron. 2024, 71, 5215–5223. [Google Scholar] [CrossRef]
  26. Shen, H.; Wu, J.; Wang, Y.; Wang, J. Reinforcement Learning-Based Robust Tracking Control for Unknown Markov Jump Systems and Its Application. IEEE Trans. Circuits Syst. II-Express Briefs 2024, 71, 1211–1215. [Google Scholar] [CrossRef]
  27. Li, H.; Zhang, T. Neural Adaptive Dynamic Event-Triggered Practical Fixed-Time Dynamic Surface Control for Non-Strict Feedback Nonlinear Systems. Int. J. Adapt. Control Signal Process. 2022, 36, 3066–3086. [Google Scholar] [CrossRef]
  28. Liu, A.; Wang, D.; Qiao, J. An Advanced Robust Integral Reinforcement Learning Scheme with The Fuzzy Inference System. Int. J. Robust Nonlinear Control 2024, 34, 11745–11759. [Google Scholar] [CrossRef]
  29. Chen, Y.; Ding, J.; Chen, Y.; Yan, D. Nonlinear Robust Adaptive Control of Universal Manipulators Based on Desired Trajectory. Appl. Sci. 2024, 15, 2219. [Google Scholar] [CrossRef]
  30. Chwa, D. Tracking Control of Differential Drive Wheeled Mobile Robots Using a Backstepping Like Feedback Linearization. IEEE Trans. Syst. Man Cybern. Part A-Syst. Hum. 2010, 40, 1285–1295. [Google Scholar] [CrossRef]
  31. Zhang, G.; Cai, Y.; Zhang, W. Robust Neural Control for Dynamic Positioning Ships with The Optimum-Seeking Guidance. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 1500–1509. [Google Scholar] [CrossRef]
  32. Zheng, L.; Fiez, T.; Alumbaugh, Z.; Chasnov, B.; Ratliff, L. Stackelberg Actor-Critic: Game-Theoretic Reinforcement Learning Algorithms. IEEE Trans. Neural Netw. Learn. Syst. 2022, 36, 9217–9224. [Google Scholar] [CrossRef]
  33. Zhang, G.; Li, Z.; Li, J.; Zhang, W.; Bin, Q. Prescribed Performance Path Following Control for Rotor-Assisted Vehicles via an Improved Reinforcement Learning Mechanism. IEEE Trans. Neural Netw. Learn. Syst. 2025; early access. [Google Scholar] [CrossRef]
  34. Xu, B.; Yang, C.; Shi, Z. Reinforcement Learning Output Feedback NN Control Using Deterministic Learning Technique. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 635–641. [Google Scholar]
  35. Qin, C.; Zhu, T.; Jiang, K.; Wu, Y. Integral Reinforcement Learning-Based Dynamic Event-Triggered Safety Control for Multiplayer Stackelberg-Nash Games with Time-Varying State Constraints. Eng. Appl. Artif. Intell. 2024, 133, 108317. [Google Scholar] [CrossRef]
  36. Zheng, Z.; Ruan, L.; Zhu, M.; Guo, X. Reinforcement Learning Control for Underactuated Surface Vessel with Output Error Constraints and Uncertainties. IEEE Trans. Veh. Technol. 2020, 399, 479–490. [Google Scholar] [CrossRef]
  37. Hou, Y.; Lin, M.; Anjidani, M.; Nik, H. Robust Optimal Control of Point-Feet Biped Robots Using a Reinforcement Learning Approach. IETE J. Res. 2024, 70, 7831–7846. [Google Scholar] [CrossRef]
  38. Song, W.; Zuo, Y.; Tong, S. Fuzzy Optimal Event-Triggered Control for Dynamic Positioning of Unmanned Surface Vehicle. IEEE Trans. Syst. Man Cybern.-Syst. 2025, 55, 2302–2311. [Google Scholar] [CrossRef]
  39. Zhang, G.; Xing, Y.; Zhang, W.; Li, J. Prescribed Performance Control for USV-UAV via a Robust Bounded Compensating Technique. IEEE Trans. Control. Netw. Syst. 2025; early access. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the control effect of the vessel DP system.
Figure 2. Flowchart of the entire process of the vessel DP system based on reinforcement learning under time-varying interference.
Figure 3. Comparison of position control performance under different algorithms [38].
Figure 4. Comparison of the vessel position vector variables [38].
Figure 5. Comparison of the vessel velocity vector variables [38].
Figure 6. Comparison of the vessel control inputs $\tau_u$, $\tau_v$, $\tau_r$ [38].
Figure 7. Schematic diagram of the timing sequence for adjusting the pitch and azimuth angles of the propellers during the analysis of the thruster parameter response and steady-state performance, $p_i$, $i = 1,\ldots,6$.
Figure 8. Dynamic adjustment analysis of the adaptive parameters $\hat{\lambda}_k$, $k = 1,\ldots,7$, under the proposed algorithm.
Figure 9. Time evolution analysis of the long-term strategy function $\hat{J}_{ci}$, $i = 1, 2, 3$.
Figure 10. The variation in the cost function based on actor–critic RL.
Figure 11. Adaptive weight matrix norm variation curve based on the actor–critic framework.
Figure 12. Norm of the position and velocity errors.


