Article

Nonlinear Probabilistic Model Predictive Control Design for Obstacle Avoiding Uncrewed Surface Vehicles

by Nurettin Çerçi 1,* and Yaprak Yalçın 2
1 Mechatronics Engineering Department, Istanbul Technical University, Reşitpaşa Street, Istanbul 34475, Turkey
2 Control and Automation Engineering Department, Istanbul Technical University, Reşitpaşa Street, Istanbul 34475, Turkey
* Author to whom correspondence should be addressed.
Automation 2026, 7(1), 10; https://doi.org/10.3390/automation7010010
Submission received: 25 September 2025 / Revised: 28 November 2025 / Accepted: 10 December 2025 / Published: 1 January 2026
(This article belongs to the Section Control Theory and Methods)

Abstract

The primary objective of this research is to develop a probabilistic nonlinear model predictive control (NMPC) structure that efficiently operates uncrewed surface vehicles (USVs) in an environment with probabilistic disturbances, such as wind, waves, and water currents, while simultaneously maneuvering the vehicle so that it avoids stationary or moving stochastic obstacles in its path. The proposed controller structure considers the means and covariances of the inputs and state variables of the vehicle in the cost function to handle probabilistic disturbances, where an extended Kalman filter (EKF) is utilized to calculate the mean, and the covariances are calculated dynamically via a linear matrix equality based on this mean and the system matrices obtained with successive linearization at every sampling instance. The proposed control structure deals with non-zero-mean probabilistic disturbances, such as water currents, via an innovative approach that treats the mean of the disturbance as a deterministic part, which is estimated by a disturbance observer and eliminated by a control term added to the control signal obtained via MPC optimization; the effect of the remaining zero-mean part is handled through its covariance during the probabilistic MPC optimization. The probabilistic constraints are also dealt with by converting them to deterministic constraints, as in linear probabilistic MPC. However, unlike in linear MPC, these constraints are updated at each sampling instance with the information obtained via successive linearization. The control structure incorporates the velocity obstacle (VO) method for collision avoidance. In order to ensure stability, the proposed NMPC adopts a dual-mode strategy, and a stability analysis is presented. For the second mode, an LQG design that ensures stability in the presence of non-zero-mean disturbance is also provided. The simulation results demonstrate that the proposed probabilistic NMPC framework effectively handles probabilistic disturbances as well as both stationary and moving obstacles, ensuring collision avoidance while reaching the desired position and orientation through optimal path tracking, and outperforming conventional NMPC.

1. Introduction

Uncrewed autonomous vehicles have gained significant popularity in various industries over the past few decades. These vehicles operate on land as uncrewed ground vehicles (UGVs) [1], in the air as uncrewed aerial vehicles (UAVs) [2], in water as uncrewed surface vehicles (USVs) [3], and even underwater as uncrewed underwater vehicles (UUVs) [4]. Among these vehicles, USVs are the most difficult to control and navigate due to numerous disturbances in their working environment, such as wind, waves, and currents in the water. Furthermore, heavy traffic on waterways poses a significant risk as the vehicle must move with caution to avoid collisions with other objects, especially on narrow waterways such as canals, straits, or inland waterways [5,6,7,8,9,10]. As a result, a collision avoidance technique must be incorporated into the control algorithm. Hence, the controller design algorithm must consider safety constraints to ensure that the watercraft navigates without colliding with any obstacles.
Collision avoidance is almost always a concern when there is at least one moving vehicle and other vehicles or objects in the environment, whether stationary or in motion. Therefore, it is important to follow the desired trajectory without colliding with other vehicles or objects. Collision avoidance is crucial in various areas such as self-driving cars [11], ground vehicles [12], aircraft [13], uncrewed aerial vehicles [14], underwater vehicles [15], and ships [16]. The same collision risk is also important for USVs or other types of vessels that move on water [17].
There are various methods used to avoid collisions in control problems, with two of the most common being the artificial potential field [18] and the velocity obstacle (VO) [19,20,21,22,23,24] techniques. Both of these approaches are easy to implement and can handle stationary and moving obstacles in the environment. The VO technique is particularly valuable in MPC problems, where only minor adjustments to the cost function are needed to prevent collisions.
Model predictive control (MPC) is a widely used optimization-based control method across various industries. It handles constraints systematically and finds an optimal solution within a reasonable finite horizon [25,26]. MPC is the second-most commonly used control method in industrial systems after proportional–integral–derivative (PID) control [27]. As an optimal control method, the MPC design yields an optimal control problem to be solved. In linear systems with a quadratic cost function and no constraints, an analytical solution can be obtained. However, when dealing with nonlinear systems that have constraints, numerical optimization algorithms are required. One approach for the numerical optimization of MPC problems is sequential quadratic programming (SQP).
In real-life applications, many systems are subject to internal or environmental disturbances that introduce inputs and states in a probabilistic manner. These probabilistic effects must be taken into account in the cost function and the constraints that lead to probabilistic MPC [28,29]. Although some research has been conducted on probabilistic MPC [30,31], these studies primarily focus on linear systems and assume that the disturbances have a zero mean. However, in certain aquatic environments, such as narrow water channels or straits, some disturbances are non-zero-mean. For example, water currents in these areas may flow at a certain speed, indicating that the mean disturbance torque is not zero.
The dynamic model of uncrewed surface vehicles includes six control inputs: forces applied in the X–Y–Z directions, as well as torques applied around Euler angles. This model has twelve outputs. While the nonlinear model can be linearized around an equilibrium point or simplified to reduce its degrees of freedom to three or four, we propose using the three-degree-of-freedom nonlinear vehicle model in this paper. We introduce an extension to probabilistic linear MPC for managing these vehicles, taking into account the nonlinear nature of the system and the probabilistic disturbances it encounters.
As mentioned before, there are works in the literature, such as [30,31], that consider linear probabilistic MPC for linear systems under the effect of zero-mean disturbances. In these works, nonlinear dynamics and non-zero-mean disturbances are not considered. Moreover, they include numerical examples that are not based on real systems and are therefore not directly related to USV tracking and obstacle avoidance problems.
Additionally, several linear stochastic MPC research papers have considered USV tracking and obstacle avoidance [32,33,34]. However, they also consider linear dynamics and zero-mean disturbances and do not address non-zero-mean disturbances. Note that [32,33] consider avoidance of static obstacles, while [34] considers dynamic obstacles.
On the other hand, there are some linear MPC methods presented in the recent literature for tracking and obstacle avoidance, such as [35,36], and also some nonlinear MPC methods, such as [37,38], but they do not consider any means of probabilistic behavior, probabilistic disturbance, or probabilistic constraints.
All the above-mentioned works present linear MPC methods. In the probabilistic MPC framework, some papers consider nonlinear models, i.e., present nonlinear probabilistic MPC methods, such as [39,40]. The paper [39] considers Gaussian-process models in which no external probabilistic disturbances are taken into account, where the uncertainty passes through the nonlinear dynamics and leads to non-Gaussian state distributions. The paper [40] presents a nonlinear probabilistic MPC method for a general stochastic nonlinear system with process noise, but similarly does not consider external probabilistic disturbances; in addition, it is not directly concerned with tracking and obstacle avoidance problems of uncrewed surface vehicles.
As stated above, we propose an extension of linear probabilistic MPC for nonlinear systems that enables it to work at all operating points and to operate under the effect of non-zero-mean probabilistic disturbances, implemented on uncrewed surface vehicles and considering stationary and moving obstacle avoidance. Note that when a probabilistic disturbance with constant mean and variance is applied to a linear system, the states and outputs exhibit probabilistic signals with altered mean and variance, but these values remain the same at every operating point. However, when applied to a nonlinear system, the mean and variance of the probabilistic signals in the states and outputs change across different operating points. As a result, probabilistic linear MPC cannot be directly applied to nonlinear systems. To overcome this issue, we propose an extended Kalman filter (EKF) to estimate the mean value of the states across all operating points. Afterward, we propose utilizing this mean value in a dynamic equation that exists in probabilistic linear MPC, together with successive linearization (linearization at every sampling instance), to calculate the covariance matrix at every sampling instance or every state (operating point). With this information, the cost function and probabilistic constraints are updated at every sampling instance.
On the other hand, to address the case of non-zero-mean disturbances, we recognize that these disturbances consist of both a deterministic component and a probabilistic component. The mean of the disturbance represents the deterministic part. To manage this, we employ a deterministic disturbance observer that helps us estimate the mean. This information is then sent to the MPC optimization to handle it as we address deterministic disturbances. The remaining portion of the disturbance is treated as a zero-mean probabilistic disturbance in the probabilistic MPC framework.
In summary, the contribution lies in utilizing an EKF, a disturbance observer, and successive linearization to propose an extension of probabilistic linear MPC that is applicable to nonlinear systems under the effect of non-zero-mean probabilistic disturbances and in the presence of stationary and moving obstacles, where the underlying NMPC problem is solved. The cost function and constraints are updated using successive linearization and an extension of the linear methods, while predictions are made based on the nonlinear system equations. The contributions, and hence the novelty points of this work, are clarified below.
  • The combined probabilistic MPC problem addresses nonlinear dynamics, non-zero mean disturbance, and obstacle avoidance for both static and moving obstacles and is applied to USVs.
  • Linear probabilistic MPC is extended to nonlinear systems, such that it can deal with the nonlinear effects of probabilistic disturbances on nonlinear systems. The nonlinear effects of probabilistic disturbance are dealt with by obtaining the varying mean value of states with an EKF and by computing the covariance of states dynamically using a linear matrix equality, which is updated across all operating points based on successive linearization.
  • Non-zero-mean external disturbances are considered in the probabilistic MPC context and handled by assuming they are composed of a deterministic part, which is the mean of the disturbance, and a probabilistic part, which is a zero-mean disturbance. The deterministic part is estimated by utilizing a deterministic observer.
  • Feasibility and stability proofs based on the dual-mode control strategy are provided, where derivations are worked out with the consideration of non-zero-mean disturbance in the equations and the corresponding utilized disturbance observer equations.
  • Based on a dual-mode control strategy, in the second mode, an LQG design that ensures stability in the existence of non-zero-mean disturbance is also provided in Appendix A.
The remainder of the paper is organized as follows. In Section 2, the detailed model of the nonlinear USV is presented. Section 3 presents the formulation of the considered probabilistic NMPC problem. Section 4 discusses the collision avoidance method, VO. Section 5 presents the proposed control structure, detailing its components, namely the EKF and the deterministic disturbance observer. After that, Section 6 examines the feasibility and stability of the proposed controller. Section 7 provides the simulation results and shows the improvements. Section 8 contains a discussion of the paper and future insights. Finally, Section 9 concludes the paper.

2. Uncrewed Surface Vehicle Model

As mentioned in Section 1, this work utilizes a nonlinear USV model [41] whose schematic is given in Figure 1.
The considered USV model has a length of L, a width of W, and also four thrusters on the right, left, front, and back of the USV body. These thrusters generate the control inputs of the system, F1, F2, F3, and F4, and “COM” is short for center of mass.
Furthermore, the model has a position and direction vector $\eta = [x \;\; y \;\; \psi]^\top$, where $x$, $y$ denote the positions and $\psi$ denotes the yaw angle (direction of the USV). The USV also has the velocity state vector $\nu = [u \;\; v \;\; r]^\top$, where $u$ and $v$ denote the surge and sway velocities, respectively, and $r$ denotes the yaw (angular) velocity of the USV.
The model of the nonlinear USV consists of two parts, one of which is the kinematic model, and is given as follows:
\dot{\eta} = J(\eta) \nu
where the matrix $J(\eta)$ contains the transformation terms and can be written as
J(\eta) = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}
The second part of the model is the dynamical model, which is
M \dot{\nu} + C(\nu) \nu + D(\nu) \nu = \tau + \tau_{disturbance}
where the M matrix contains the mass and inertia terms, C is made up of the Coriolis and centripetal terms, and D denotes the linear damping matrix. All these matrices are given as
M = \begin{bmatrix} m - X_{\dot{u}} & 0 & -m y_g \\ 0 & m - Y_{\dot{v}} & m x_g - Y_{\dot{r}} \\ -m y_g & m x_g - N_{\dot{v}} & I_{zz} - N_{\dot{r}} \end{bmatrix}
C(\nu) = \begin{bmatrix} 0 & 0 & -m ( x_g r + v ) \\ 0 & 0 & -m ( y_g r - u ) \\ m ( x_g r + v ) & m ( y_g r - u ) & 0 \end{bmatrix}
D = \begin{bmatrix} -X_u & 0 & 0 \\ 0 & -Y_v & 0 \\ 0 & 0 & -N_r \end{bmatrix}
where $m$ is the mass of the USV, $I_{zz}$ is the inertia about the $z$ axis, and $x_g$, $y_g$ are the offsets between the center of mass and the body origin of the USV, which are all assumed to be zero. Additionally, the input vector is $\tau = [X \;\; Y \;\; N]^\top$, where $X$, $Y$ are the forces along the surge and sway directions, respectively, and $N$ is the torque exerted about the yaw ($z$) axis of the USV.
\tau = \begin{bmatrix} X \\ Y \\ N \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ \frac{W}{2} & -\frac{W}{2} & \frac{L}{2} & -\frac{L}{2} \end{bmatrix} \begin{bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \end{bmatrix}
Finally, $\tau_{disturbance}$ represents the disturbance forces and torques arising from wind, waves, and water currents. All inertia terms except $I_{zz}$ are assumed to be zero or are ignored.
Consequently, for simulation and later calculations, the equations of motion can be obtained by reformulating Equations (1) and (2), and the perturbation model is obtained as
\begin{aligned} \dot{\eta} &= J(\eta) \nu \\ \dot{\nu} &= M^{-1} ( \tau + \tau_{disturbance} ) - M^{-1} C(\nu) \nu - M^{-1} D(\nu) \nu \end{aligned}
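To make the model concrete, the following minimal Python sketch integrates the perturbation model (4) with an explicit Euler step. The parameter values, sign conventions, and helper name `usv_dot` are illustrative placeholders only (they are not the values of Table 1), so this is a sketch under assumed parameters rather than the authors' simulation model.

```python
import numpy as np

# Illustrative 3-DOF USV dynamics in the form of Equation (4).
# All numerical parameters below are assumed, not taken from Table 1.
m, Izz = 20.0, 8.0                  # mass [kg], yaw inertia [kg m^2] (assumed)
Xu, Yv, Nr = -5.0, -5.0, -2.0       # linear damping coefficients (assumed)
Xud, Yvd, Nrd = -1.0, -1.0, -0.5    # added-mass terms (assumed)

M = np.diag([m - Xud, m - Yvd, Izz - Nrd])   # xg = yg = 0, as in Section 2
D = -np.diag([Xu, Yv, Nr])                   # positive damping matrix

def usv_dot(eta, nu, tau, tau_dist=np.zeros(3)):
    """Continuous-time model: eta = [x, y, psi], nu = [u, v, r]."""
    x, y, psi = eta
    u, v, r = nu
    J = np.array([[np.cos(psi), -np.sin(psi), 0.0],
                  [np.sin(psi),  np.cos(psi), 0.0],
                  [0.0,          0.0,         1.0]])
    C = np.array([[0.0,   0.0,  -m * v],
                  [0.0,   0.0,   m * u],
                  [m * v, -m * u, 0.0]])      # Coriolis terms with xg = yg = 0
    eta_dot = J @ nu
    nu_dot = np.linalg.solve(M, tau + tau_dist - C @ nu - D @ nu)
    return eta_dot, nu_dot

# One explicit-Euler step with the 0.1 s sampling time used in Section 7
eta, nu = np.zeros(3), np.zeros(3)
tau = np.array([10.0, 0.0, 0.0])             # surge force only
eta_dot, nu_dot = usv_dot(eta, nu, tau)
eta, nu = eta + 0.1 * eta_dot, nu + 0.1 * nu_dot
```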

3. Probabilistic Nonlinear Model Predictive Control

3.1. Problem Formulation

We formulate the extended probabilistic MPC analogously to probabilistic linear MPC. In probabilistic linear MPC, the cost function incorporates the mean values of the system states and includes a term to minimize variations in both the states and the control inputs. We formulate the cost function similarly, but with the consideration of the following nonlinear system equations:
x_{t+1} = f( x_t, u_t ) + d_t, \quad t \geq 0
where $f(\cdot)$ represents the compact form of the nonlinear equations of motion of the USV given in (4) but without disturbances, $x$ is the state vector, $u$ represents the control input increment (often denoted $\Delta u$ in the literature), and $d$ represents the disturbances caused by wind, waves, and currents, which are normally distributed with either a zero or non-zero mean. The cost function is
J = J_m( \bar{x}_t, \bar{u}_{t:t+N-1} ) + J_v( X_t, K_{t:t+N-1} )
and this cost function’s parts can be written as
J_m = \sum_{i=t}^{t+N-1} \left( \| \bar{x}_i - r_i \|_Q^2 + \| \bar{u}_i \|_R^2 \right)
J_v = \sum_{i=t}^{t+N-1} \mathrm{trace}( Q X_i + R U_i )
where $\bar{x}_i$ is the mean value of the state vector, $\bar{u}_i$ is the optimal control input increment, $r_i$ is the reference signal, $X_i$ is the covariance matrix of the state vector, $U_i$ is the covariance matrix of the control input, $Q$, $R$ are the weighting matrices for the system states and control input, respectively, and $N$ is the prediction horizon [30,31]. In contrast to probabilistic linear MPC, this formulation allows the covariance matrix of the system variables to change along the state trajectory (across operating points). When we add a terminal cost and constraints for stability, a cost term for obstacle avoidance, and probabilistic constraints on the system variables and control input, the following probabilistic NMPC formulation is obtained:
\bar{J}_m = \sum_{i=t}^{t+N-1} \left( \| \bar{x}_i - r_i \|_Q^2 + \| \bar{u}_i \|_R^2 \right) + \| \bar{x}_{t+N} - r_{t+N} \|_{\bar{Q}_1}^2 + \| \tilde{d}_{t+N} \|_{\bar{Q}_2}^2
\bar{J}_v = \sum_{i=t}^{t+N-1} \mathrm{trace}( Q X_i + R U_i ) + \mathrm{trace}( \bar{Q}_1 X_{t+N} ) + \mathrm{trace}( \bar{Q}_2 D_{t+N} )
\min_u \; \bar{J}_m + \bar{J}_v + J_{Obstacle}
\mathrm{s.t.} \quad x_{t+1} = f( x_t, u_t ) + d_t, \quad t \geq 0
P( b_r^\top x_{t+k} \geq x_{r,max} ) \leq p_{x_r}, \quad k > 0
P( c_s^\top u_{t+k} \geq u_{s,max} ) \leq p_{u_s}, \quad k \geq 0
x_{t+N} \in \Omega
where $P(\cdot)$ denotes probability, $b_r$, $c_s$ are constant vectors, $x_{r,max}$, $u_{s,max}$ are the constraint limits on the state variables and control inputs, $p_{x_r}$, $p_{u_s}$ are the allowed violation probabilities, $\Omega$ represents the terminal region, and $J_{Obstacle}$ is the obstacle avoidance cost obtained from the VO method, which is explained in Section 4.
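For illustration, the sketch below shows how the mean cost (9) and variance cost (10), including the terminal terms, could be evaluated once the predicted means and covariances are available; the obstacle term of Section 4 would be added on top. The function signature and argument names are our own and are not taken from the paper.

```python
import numpy as np

def probabilistic_cost(x_mean, u_mean, refs, X_cov, U_cov, d_tilde, D_cov,
                       Q, R, Q1_bar, Q2_bar):
    """Evaluate the mean cost plus the variance cost for one candidate input
    sequence. Shapes: x_mean (N+1, nx), u_mean (N, nu), refs (N+1, nx),
    X_cov (N+1, nx, nx), U_cov (N, nu, nu)."""
    N = u_mean.shape[0]
    Jm, Jv = 0.0, 0.0
    for i in range(N):
        e = x_mean[i] - refs[i]
        Jm += e @ Q @ e + u_mean[i] @ R @ u_mean[i]
        Jv += np.trace(Q @ X_cov[i]) + np.trace(R @ U_cov[i])
    eN = x_mean[N] - refs[N]                         # terminal terms
    Jm += eN @ Q1_bar @ eN + d_tilde @ Q2_bar @ d_tilde
    Jv += np.trace(Q1_bar @ X_cov[N]) + np.trace(Q2_bar @ D_cov)
    return Jm + Jv
```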

3.2. Deterministic Formulation of Probabilistic Constraints

In this work, to deal with the probabilistic constraints on the system variables and control inputs, the method proposed by Farina et al. [30] and Magni et al. [31], which converts probabilistic constraints into deterministic constraints for probabilistic linear MPC, is utilized and extended by applying successive linearization of the nonlinear USV model at every sampling instance. Since successive linearization is used, the constraints are updated at each sampling instance, unlike in linear MPC. The method in the mentioned papers can be summarized as follows: consider the probabilistic constraints given in (13) and (14); these can be transformed into deterministic constraints using the one-sided Cantelli inequality [30,31].
Definition 1
(Cantelli’s Inequality). Assume $y$ is a scalar random variable with mean $\bar{y}$ and variance $Y$. Then, for every $\varepsilon \in \mathbb{R}^+$, it satisfies
P( y \geq \bar{y} + \varepsilon ) \leq \frac{Y}{Y + \varepsilon^2}
To use this inequality, first assume that, for $\delta_x > 0$, the following inequality holds:
b^\top \bar{x} \leq x_{max} - \delta_x
Using both this equation and Definition 1, (13) can be written as
P\{ b^\top x \geq x_{max} \} \leq P\{ b^\top x \geq b^\top \bar{x} + \delta_x \} \leq \frac{b^\top X b}{b^\top X b + \delta_x^2} \leq p_x
Considering the right-hand side of the inequality above, the following inequality can be obtained:
b^\top X b \left( \frac{1 - p_x}{p_x} \right) \leq \delta_x^2
Considering (17) and (19) together, the following deterministic constraint inequality is obtained:
b^\top \bar{x} \leq x_{max} - \sqrt{b^\top X b}\,\sqrt{\frac{1 - p_x}{p_x}}
The same procedure is valid for (14), so it will also yield the following result:
c^\top \bar{u} \leq u_{max} - \sqrt{c^\top U c}\,\sqrt{\frac{1 - p_u}{p_u}}
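As a rough illustration of the conversion, the following helper computes the tightened deterministic bound of (20)/(21); the numerical values in the usage line are arbitrary and only indicate the order of magnitude of the tightening.

```python
import numpy as np

def tightened_bound(limit, b, X, p):
    """Deterministic bound of Equations (20)/(21): the mean b' x-bar must stay
    at least sqrt(b' X b) * sqrt((1 - p)/p) below the original limit, where p
    is the allowed violation probability."""
    return limit - np.sqrt(b @ X @ b) * np.sqrt((1.0 - p) / p)

# Illustrative numbers (not from the paper): a scalar constraint with
# variance 0.04 and a 5% violation probability
print(tightened_bound(1.0, np.array([1.0]), np.array([[0.04]]), 0.05))  # ~0.128
```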

4. Velocity Obstacle Method

In this work, we consider the VO method for collision avoidance, which can be summarized as follows: The VO method utilizes the velocities and positions of both the ego vehicle (the main vehicle) and the obstacle vehicle to identify a collision cone defined by velocity obstacles. This technique calculates a range of possible velocities for the ego vehicle at any given moment, ensuring that it can avoid colliding with the obstacle vehicle. If the velocity of the ego vehicle falls within the collision set, the two vehicles will inevitably collide after a certain time. To illustrate this, let us consider an example of two vehicles navigating an environment. The ego vehicle has a velocity of vA, while the obstacle vehicle travels at a velocity of vB. Figure 2 depicts the scenario.
The collision cone is then translated by the velocity of the obstacle vehicle, resulting in the creation of the “Velocity Obstacle” set. This set represents the range of ego-vehicle velocities that would lead to a collision with the obstacle vehicle; velocities outside this set are collision-free. If
v_A \notin CC_{A,B} \oplus v_B
where $CC_{A,B}$ is the collision cone, $CC_{A,B} \oplus v_B$ denotes the velocity obstacle cone, and “$\oplus$” denotes the Minkowski sum (translation of the cone by $v_B$), then vehicle A moves without colliding with vehicle B. Therefore, a proper $v_A$ value must be chosen. This also means that a set of positions needs to be avoided according to the time of the anticipated collision.
Let $p_A$ be the set of positions to be avoided. This position set can be calculated as $p_A = v_A \odot t_c$, where $t_c$ is the time to collision and “$\odot$” denotes scaling of the velocity set by $t_c$. So, the obstacle cost $J_{Obstacle}$ can be obtained as
J_{Obstacle} = \frac{K_{collision}}{\| x_p - p_A \|}
where $x_p$ denotes the first two states (the N and E positions) of the state vector and $K_{collision}$ represents how close the vehicle may come to the outer limit of the radius of the obstacle vehicle. Also, if there is no collision possibility, $J_{Obstacle}$ is zero.
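A minimal geometric sketch of the VO membership test is given below. Here the time to collision is approximated by the time of closest approach and a combined obstacle radius is assumed, so this is an illustration of the idea rather than the exact construction used in the paper.

```python
import numpy as np

def in_velocity_obstacle(p_a, v_a, p_b, v_b, r_comb):
    """Check whether the ego velocity v_a lies inside the velocity obstacle
    induced by obstacle B (combined radius r_comb). Returns (collides, t_c)."""
    p_rel = p_b - p_a                  # relative position
    v_rel = v_a - v_b                  # relative velocity
    dist = np.linalg.norm(p_rel)
    if dist <= r_comb:
        return True, 0.0               # already overlapping
    closing = v_rel @ p_rel
    if closing <= 0.0:
        return False, np.inf           # moving apart
    # inside the cone if the angle between v_rel and p_rel is below the
    # cone half-angle, whose cosine is sqrt(1 - (r/d)^2)
    cos_cone = np.sqrt(1.0 - (r_comb / dist) ** 2)
    cos_angle = closing / (np.linalg.norm(v_rel) * dist + 1e-12)
    if cos_angle < cos_cone:
        return False, np.inf
    t_c = closing / (v_rel @ v_rel)    # time of closest approach ~ time to collision
    return True, t_c
```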

5. Proposed Control Structure

The proposed controller for the nonlinear uncrewed surface vehicle is a probabilistic NMPC that utilizes SQP as an optimizer to calculate the optimal sequence of the control input. The cost function of the controller is designed to incorporate the mean value and covariance of the state variables, which are obtained through successive linearization using an EKF and a linear matrix difference equation (referred to here as the covariance calculator). The EKF calculates the mean value of the state vector and feeds it to the covariance calculator. The proposed controller block diagram is shown in Figure 3.
One of the unique features of the proposed controller is its ability to handle non-zero-mean disturbances, which is essential for a water-based environment where currents can exert non-zero-mean disturbances on the vehicle. To overcome this challenge, a disturbance observer is used to eliminate the effect of the non-zero-mean disturbance. We regard the non-zero-mean disturbances as comprising both a deterministic and a probabilistic component. The deterministic component is represented by the mean of the disturbance, which is estimated using a deterministic disturbance observer. This observed mean is then sent to the MPC so that it can be treated as a deterministic disturbance. The remaining part is regarded in the probabilistic MPC as a zero-mean probabilistic disturbance.
The proposed structure involves an EKF and a disturbance observer. It utilizes successive linearization to compute the mean and covariance of the states and outputs, to update the constraints at each sampling instance, and to handle non-zero-mean disturbances. This constitutes an extension of probabilistic linear MPC, making it applicable to nonlinear systems affected by non-zero-mean probabilistic disturbances and capable of addressing stationary and moving obstacles, essentially solving an NMPC problem. The proposed controller ensures that the vehicle can handle different scenarios with ease and can navigate through obstacles and other challenges in a water-based environment with precision.

5.1. Mean and Covariance Calculation with EKF

The Kalman filter allows for the estimation of state variable values even in noisy environments. Typically, it functions as a state observer when state measurements are absent or when outputs are noisy. Additionally, it can estimate the mean values of the system states in the presence of zero-mean disturbances, even if state measurements are available. The mean value of the state vector is represented by $\bar{x}_{t+k|t} = E\{ x_{t+k|t} \}$ (the estimated state). However, since USV control involves nonlinear equations of motion, an EKF [42] is necessary to estimate the system states. In the EKF, the Jacobian matrices $A_t$ and $B_t$ given in (24) are calculated from the nonlinear equations of motion at each sampling instance. Then, these matrices are utilized in the linear Kalman filter to determine the estimate (the mean value) of the nonlinear uncrewed surface vehicle's state variables at that sampling instance.
A_t = \left. \frac{\partial f(x,u)}{\partial x} \right|_{x = x_t,\, u = u_t}, \qquad B_t = \left. \frac{\partial f(x,u)}{\partial u} \right|_{x = x_t,\, u = u_t}
Namely, at each sampling instance, the nonlinear ship model is linearized around the system state variables at that time interval, as given below.
x_{t+1} = A_t x_t + B_t u_t + d_t
y_t = C_t x_t + n_t
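The paper does not state whether the Jacobians in (24) are obtained analytically or numerically; a generic finite-difference sketch such as the following could serve for the successive-linearization step (the step size `eps` is an arbitrary choice).

```python
import numpy as np

def jacobians_fd(f, x, u, eps=1e-6):
    """Numerical Jacobians A_t = df/dx and B_t = df/du of Equation (24),
    evaluated at the current operating point (x_t, u_t).
    f maps (x, u) -> next state."""
    nx, nu = x.size, u.size
    f0 = f(x, u)
    A = np.zeros((f0.size, nx))
    B = np.zeros((f0.size, nu))
    for i in range(nx):
        dx = np.zeros(nx)
        dx[i] = eps
        A[:, i] = (f(x + dx, u) - f0) / eps
    for j in range(nu):
        du = np.zeros(nu)
        du[j] = eps
        B[:, j] = (f(x, u + du) - f0) / eps
    return A, B
```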
Based on the linearized system matrices, the EKF proceeds with the following steps:
  • Prediction Step:
    P_{t|t-1} = A_t P_{t-1|t-1} A_t^\top + D_t
  • Correction Step:
    Z_t = P_{t|t-1} C_t^\top ( C_t P_{t|t-1} C_t^\top + N_t )^{-1}
    P_{t|t} = ( I - Z_t C_t ) P_{t|t-1}
    \hat{x}_{t+1} = \hat{x}_t + Z_t ( y_t - C_t \hat{x}_t )
    where $D_t$ is the covariance matrix of the process noise (in our case, the probabilistic disturbance effect on the states), $N_t$ is the covariance matrix of the measurement noise, $Z_t$ is the Kalman gain, and $P$ is the prediction-error covariance matrix. In this way, the estimate $\hat{x}_t$ of the system state vector under zero-mean Gaussian disturbances is obtained; indeed, the estimated values are the mean of the state variables ($\bar{x}_t = \hat{x}_t$). A compact sketch of one such filter iteration is given below.
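The following is a minimal sketch of one EKF iteration. Here the mean is propagated through the nonlinear model before the correction, which is a standard EKF choice and an assumption on our part about the implementation; the function names are ours.

```python
import numpy as np

def ekf_step(x_hat, P, u, y, f, jacobians, C, D_cov, N_cov):
    """One EKF iteration: predict the error covariance with the local
    Jacobian A_t, then correct with the Kalman gain Z_t.
    `f` is the discrete-time model, `jacobians` returns (A_t, B_t)."""
    A, _ = jacobians(x_hat, u)
    P_pred = A @ P @ A.T + D_cov                      # prediction step
    S = C @ P_pred @ C.T + N_cov
    Z = P_pred @ C.T @ np.linalg.inv(S)               # Kalman gain
    P_new = (np.eye(len(x_hat)) - Z @ C) @ P_pred     # covariance update
    x_new = f(x_hat, u) + Z @ (y - C @ x_hat)         # mean (state) update
    return x_new, P_new
```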
After this mean calculation, the covariance matrix is calculated based on the mean state variable vector. To calculate the covariance matrix of the state variables, we utilize the following matrix equality for linear systems with successive linearization at every sampling instance [29]:
X_{t+k+1} = ( A_t + B_t K_c ) X_{t+k} ( A_t + B_t K_c )^\top + D_t
where $X_{t+k} = E\{ ( x_{t+k} - \bar{x}_{t+k} )( x_{t+k} - \bar{x}_{t+k} )^\top \}$ and $K_c$ is the control gain. Within the prediction horizon, $K_c$ corresponds to $K_{MPC}$, the MPC control gain calculated with the SQP optimization at every time instant, and beyond the prediction horizon it takes the form of $K_{LQG}$, the linear quadratic Gaussian (LQG) control gain. For the calculation of the covariance in Equation (31), the mean value of the state vector is required, which we obtain using the EKF.
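A sketch of propagating Equation (31) over the horizon is given below; for simplicity it assumes a single fixed feedback gain across the horizon, and the function name is ours.

```python
import numpy as np

def propagate_covariance(X0, A, B, Kc, D_cov, N):
    """Propagate the state covariance over the prediction horizon with
    Equation (31): X_{k+1} = (A + B Kc) X_k (A + B Kc)' + D, using the
    Jacobians of the current successive-linearization step."""
    Acl = A + B @ Kc
    X = [X0]
    for _ in range(N):
        X.append(Acl @ X[-1] @ Acl.T + D_cov)
    return X
```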

5.2. Estimation of the Mean of the Non-Zero-Mean Disturbance

A novel aspect of this work addresses non-zero-mean disturbances by treating them as a combination of deterministic and probabilistic parts. The mean of the non-zero-mean probabilistic disturbance is assumed to be the deterministic part, which we propose to observe using a deterministic disturbance observer. Therefore, to estimate the deterministic part of the disturbances, a linear disturbance observer is integrated into the proposed control structure. The disturbance observer given below is adopted based on the linear system model in (25) and (26). The disturbance observer utilized is [43]
\hat{d}_t = K_{OB} x_t - z_t
z_{t+1} = z_t + K_{OB} \{ ( A_t - I_n ) x_t + B_t u_t + \hat{d}_t \}
where $\hat{d}_t$ is the estimated deterministic part of the disturbance $d_t$, $z_t$ is the state variable of the disturbance observer, and $K_{OB}$ is the gain matrix of the observer.
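A direct transcription of the observer recursion (32)–(33) as a sketch (the signs follow the reconstruction above, which is our assumption):

```python
import numpy as np

def observer_step(z, x, u, A, B, K_ob):
    """One update of the disturbance observer of Equations (32)-(33):
    d_hat estimates the deterministic (mean) part of the disturbance."""
    n = x.size
    d_hat = K_ob @ x - z                                        # Equation (32)
    z_next = z + K_ob @ ((A - np.eye(n)) @ x + B @ u + d_hat)   # Equation (33)
    return d_hat, z_next
```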

6. Feasibility and Stability Analysis

In order to ensure the stability of the considered control system, a dual-mode strategy is adopted. Beyond the prediction horizon, an LQG controller that guarantees stability under the effect of the non-zero-mean stochastic disturbance (its mean is treated as a deterministic disturbance and the remaining part as a zero-mean Gaussian disturbance) is utilized, $u_t = -K_t x_t - ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t$, which minimizes the tracking error. Here, $K_t$ is the general representation of the feedback control gain, and in the stability analysis $x_t$ denotes the tracking error in the general notation.
To start, we need to verify the feasibility of the controller system. Assume that, for $u_{LQG} = -K_{LQG} x_t - ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t$, the closed-loop system is stable in $\Omega$, satisfying constraints (20) and (21), and that $\Omega$ is invariant under the $u_{LQG}$ control law (see Appendix A). If the MPC problem is feasible at time $t$, with $x_{t+N} \in \Omega$, then the optimization problem is feasible at time $t+1$, based on the following analysis.
Note that $x_{t+N} : ( \bar{x}_{t+N}, X_{t+N} ) \in \Omega \equiv ( \Omega_m, \Omega_v )$, where $A_K \bar{x} \in \Omega_m$, in which $A_K$ is the closed-loop system matrix of the dual-mode control obtained with LQG, and $X_{t+N} \leq \bar{X} \in \Omega_v$. Here, $\bar{X}$ is the steady-state solution of (31) with noise variance $\bar{D} > D$, namely $\bar{X} = A_K \bar{X} A_K^\top + \bar{D}$.
Owing to invariant property of Ω under LQG control,
\bar{x}_{t+N+1} = A_{t+N} \bar{x}_{t+N} + B_{t+N} \bar{u}_{LQG} \in \Omega_m
\bar{u}_{LQG} = -K_{LQG} \bar{x} - ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t
is valid, and because of the Gaussian white noise acting on the system, one can state that
X_{t+N+1} = A_K X_{t+N} A_K^\top + D_{t+N} \leq \bar{X} = A_K \bar{X} A_K^\top + \bar{D}
Therefore, the pair $( \bar{x}_{t+N+1}, X_{t+N+1} )$ is also in $\Omega$, and we can conclude that the optimization problem is also feasible at time $t + 1$. Since $t$ can be chosen as any time instant, this feasibility statement is valid at every time instant, leading to recursive feasibility.
After verifying recursive feasibility, stability analysis can be performed as follows: For this purpose, the cost of mode 2 (LQG) is added as a terminal cost to the finite-horizon cost to obtain infinite-horizon cost, so that the obtained infinite-horizon cost can be considered as a Lyapunov function for the considered closed-loop system obtained with dual-mode MPC and LQG control. A final constraint, as the terminal region Ω , is also added to the MPC formulation to ensure system states converge to an invariant set of LQG control. LQG ensures that once the system states enter the terminal region, they remain in the region.
Recalling the problem formulation which is given in Equations (9)–(15), we then consider the following infinite-horizon cost, obtained by adding the infinite-time state trajectory cost after the prediction horizon to the finite-horizon cost:
\bar{J}_m = \sum_{i=t}^{t+N-1} \left( \| \bar{x}_i \|_Q^2 + \| \bar{u}_i \|_R^2 \right) + \sum_{i=t+N}^{\infty} \| \bar{x}_i \|_Q^2
\bar{J}_v = \sum_{i=t}^{t+N-1} \mathrm{trace}( Q X_i + R U_i ) + \sum_{i=t+N}^{\infty} \mathrm{trace}( Q X_i )
Now, we calculate the sum of all the output values of the system from the end of the prediction horizon to infinity. Then, we can write down the $x_{t+N+i}$ values for $i = 0, 1, 2, 3, \ldots$ and obtain
\bar{x}_{t+N} = \bar{x}_{t+N}
\bar{x}_{t+N+1} = A_K \bar{x}_{t+N} + \tilde{d}_{t+N}
\bar{x}_{t+N+2} = A_K ( A_K \bar{x}_{t+N} + \tilde{d}_{t+N} ) + \tilde{d}_{t+N+1} = A_K^2 \bar{x}_{t+N} + A_K \tilde{d}_{t+N} + \tilde{d}_{t+N+1}
\vdots
\bar{x}_{t+N+j} = A_K^j \bar{x}_{t+N} + A_K^{j-1} \tilde{d}_{t+N} + A_K^{j-2} \tilde{d}_{t+N+1} + \cdots + A_K \tilde{d}_{t+N+j-2} + \tilde{d}_{t+N+j-1}
where $\hat{d}_t$ is the predicted mean value of the disturbance $d_t$ and $\tilde{d}_t$ is the remaining zero-mean stochastic disturbance. The next step is to calculate the $\tilde{d}_{t+1}$ term using the disturbance observer Equations (32) and (33).
\hat{d}_{t+1} = K_{OB} x_{t+1} - z_{t+1}
= K_{OB} ( A_K x_t + \tilde{d}_t ) - ( z_t + K_{OB} ( A_t x_t + B_t u_t + \hat{d}_t ) )
= K_{OB} A_K x_t + K_{OB} \tilde{d}_t + \hat{d}_t - K_{OB} A_t x_t + K_{OB} B_t K_t x_t + K_{OB} \hat{d}_t - K_{OB} \hat{d}_t
= K_{OB} \tilde{d}_t + \hat{d}_t
From here, using the two disturbance equalities $\tilde{d}_t = \bar{d}_t - \hat{d}_t$ and $\bar{d}_{t+1} = \bar{d}_t$, we obtain
\tilde{d}_{t+1} = \bar{d}_{t+1} - ( K_{OB} \tilde{d}_t + \hat{d}_t ) = \bar{d}_t - K_{OB} \tilde{d}_t - \hat{d}_t
This yields
\tilde{d}_{t+1} = \tilde{d}_t - K_{OB} \tilde{d}_t
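Since (41) gives $\tilde{d}_{t+1} = ( 1 - K_{OB} ) \tilde{d}_t$, the residual disturbance estimation error decays geometrically whenever $| 1 - K_{OB} | < 1$ (for a scalar gain, $0 < K_{OB} < 2$); the gain $K_{OB} = 0.7$ used in Section 7 satisfies this condition.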
Considering any $j$-th equation of the above prediction equations and the disturbance observer equality obtained in (41), we can rewrite the equation as
\bar{x}_{t+N+j} = A_K^j \bar{x}_{t+N} + A_K^{j-1} \tilde{d}_{t+N} + A_K^{j-2} ( 1 - K_{OB} ) \tilde{d}_{t+N} + A_K^{j-3} ( 1 - K_{OB} )^2 \tilde{d}_{t+N} + \cdots + A_K ( 1 - K_{OB} )^{j-2} \tilde{d}_{t+N} + ( 1 - K_{OB} )^{j-1} \tilde{d}_{t+N}
and this summation can be written as
\bar{x}_{t+N+j} = A_K^j \bar{x}_{t+N} + \sum_{i=0}^{j-1} A_K^{j-1-i} ( 1 - K_{OB} )^i \, \tilde{d}_{t+N}
Now, defining $\tilde{A} = A_K^j$ and $\check{A} = \sum_{i=0}^{j-1} A_K^{j-1-i} ( 1 - K_{OB} )^i$, (43) becomes
\bar{x}_{t+N+j} = \tilde{A} \bar{x}_{t+N} + \check{A} \tilde{d}_{t+N}
Starting from t + N , as j goes to infinity, the infinite-horizon trajectory tracking cost can be written as follows:
\sum_{i=t+N}^{\infty} \| \bar{x}_i \|_Q^2 = \bar{x}_{t+N}^\top \left[ \sum_{i=0}^{\infty} ( \tilde{A}^i )^\top Q \, \tilde{A}^i \right] \bar{x}_{t+N} + 2 \bar{x}_{t+N}^\top \left[ \sum_{i=0}^{\infty} ( \tilde{A}^i )^\top Q \, \check{A}^i \right] \tilde{d}_{t+N} + \tilde{d}_{t+N}^\top \left[ \sum_{i=0}^{\infty} ( \check{A}^i )^\top Q \, \check{A}^i \right] \tilde{d}_{t+N}
After this step, we can use the Young inequality on the second term with coefficient $\gamma_1 \in \mathbb{R}^+$ and obtain
\sum_{i=t+N}^{\infty} \| \bar{x}_i \|_Q^2 \leq ( 1 + \gamma_1 ) \, \bar{x}_{t+N}^\top \left[ \sum_{i=0}^{\infty} ( \tilde{A}^i )^\top Q \, \tilde{A}^i \right] \bar{x}_{t+N} + \left( 1 + \frac{1}{\gamma_1} \right) \tilde{d}_{t+N}^\top \left[ \sum_{i=0}^{\infty} ( \check{A}^i )^\top Q \, \check{A}^i \right] \tilde{d}_{t+N}
Now, we are making the following assignments to obtain the terminal cost weight matrices
\bar{Q}_1 = ( 1 + \gamma_1 ) \sum_{i=0}^{\infty} ( \tilde{A}^i )^\top Q \, \tilde{A}^i
\bar{Q}_2 = \left( 1 + \frac{1}{\gamma_1} \right) \sum_{i=0}^{\infty} ( \check{A}^i )^\top Q \, \check{A}^i
From these matrices, we can write the following compact formulation for the cost as
\sum_{i=N+1}^{\infty} \| \bar{x}_{t+i} \|_Q^2 \leq \bar{x}_{t+N}^\top \bar{Q}_1 \bar{x}_{t+N} + \tilde{d}_{t+N}^\top \bar{Q}_2 \tilde{d}_{t+N}
Similarly, we can calculate the $\mathrm{trace}( X_{t+N+i} )$ values for $i = 0, 1, 2, 3, \ldots$ using almost the same procedure as above, together with the properties of the trace: $\mathrm{tr}(A+B) = \mathrm{tr}(A) + \mathrm{tr}(B)$, $\mathrm{tr}(AB) = \mathrm{tr}(BA)$:
\mathrm{trace}( X_{t+N} ) = \mathrm{trace}( X_{t+N} )
\mathrm{trace}( X_{t+N+1} ) = \mathrm{trace}( A_K X_{t+N} A_K^\top + D_{t+N} )
\mathrm{trace}( X_{t+N+2} ) = \mathrm{trace}( A_K^2 X_{t+N} ( A_K^2 )^\top + A_K D_{t+N} A_K^\top + D_{t+N+1} )
\vdots
\mathrm{trace}( X_{t+N+j} ) = \mathrm{trace}( A_K^j X_{t+N} ( A_K^j )^\top + A_K^{j-1} D_{t+N} ( A_K^{j-1} )^\top + A_K^{j-2} D_{t+N+1} ( A_K^{j-2} )^\top + \cdots + A_K D_{t+N+j-2} A_K^\top + D_{t+N+j-1} )
In a compact form, it can be rewritten as
\mathrm{trace}( X_{t+N+j} ) = \mathrm{trace}\!\left( A_K^j X_{t+N} ( A_K^j )^\top + \sum_{i=0}^{j-1} A_K^{j-1-i} ( 1 - K_{OB} )^i D_{t+N} ( A_K^{j-1-i} )^\top \right)
Following steps similar to those in Equations (44)–(48), the summation (50) leads to
\sum_{i=N+1}^{\infty} \mathrm{trace}( Q X_i ) = \mathrm{trace}( \bar{Q}_1 X_{t+N} ) + \mathrm{trace}( \bar{Q}_2 D_{t+N} )
So, the objective functions $J_m$ and $J_v$ in Equations (37) and (38) take the forms given in Equations (9) and (10). In the next phase, we need to calculate $V_{t+1} - V_t$, where $V_t = x_t^\top \bar{Q}_1 x_t + \tilde{d}_t^\top \bar{Q}_2 \tilde{d}_t$. So,
V_{t+1} - V_t = x_{t+1}^\top \bar{Q}_1 x_{t+1} - x_t^\top \bar{Q}_1 x_t + \tilde{d}_{t+1}^\top \bar{Q}_2 \tilde{d}_{t+1} - \tilde{d}_t^\top \bar{Q}_2 \tilde{d}_t
x_{t+1}^\top \bar{Q}_1 x_{t+1} = ( A_K x_t + \tilde{d}_t )^\top \bar{Q}_1 ( A_K x_t + \tilde{d}_t ) = x_t^\top A_K^\top \bar{Q}_1 A_K x_t + 2 x_t^\top A_K^\top \bar{Q}_1 \tilde{d}_t + \tilde{d}_t^\top \bar{Q}_1 \tilde{d}_t
\tilde{d}_{t+1}^\top \bar{Q}_2 \tilde{d}_{t+1} = ( \tilde{d}_t - K_{OB} \tilde{d}_t )^\top \bar{Q}_2 ( \tilde{d}_t - K_{OB} \tilde{d}_t ) = ( 1 - K_{OB} )^2 \tilde{d}_t^\top \bar{Q}_2 \tilde{d}_t
In view of these equations, we can write
V_{t+1} - V_t = x_t^\top A_K^\top \bar{Q}_1 A_K x_t + 2 x_t^\top A_K^\top \bar{Q}_1 \tilde{d}_t + \tilde{d}_t^\top \bar{Q}_1 \tilde{d}_t - x_t^\top \bar{Q}_1 x_t + ( 1 - K_{OB} )^2 \tilde{d}_t^\top \bar{Q}_2 \tilde{d}_t - \tilde{d}_t^\top \bar{Q}_2 \tilde{d}_t
Using the Young inequality again on $2 x_t^\top A_K^\top \bar{Q}_1 \tilde{d}_t$ with a coefficient $\gamma_2 \in \mathbb{R}^+$, we obtain
V_{t+1} - V_t \leq - x_t^\top \bar{Q}_1 x_t + ( 1 + \gamma_2 ) x_t^\top A_K^\top \bar{Q}_1 A_K x_t + \left( 1 + \frac{1}{\gamma_2} \right) \tilde{d}_t^\top \bar{Q}_1 \tilde{d}_t + ( K_{OB}^2 - 2 K_{OB} ) \tilde{d}_t^\top \bar{Q}_2 \tilde{d}_t
At this point, to guarantee stability, the following equality should hold:
V_{t+1} - V_t = - x_t^\top Q x_t - u_t^\top R u_t
We implement $u_t = -K_t x_t - ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t$ and use the Young inequality again for the right-hand side of the equality (56) with coefficient $\gamma_3 \in \mathbb{R}^+$. Therefore, one can obtain
x_t^\top \bar{Q}_1 x_t + \tilde{d}_t^\top \bar{Q}_2 \tilde{d}_t \geq ( 1 + \gamma_2 ) x_t^\top A_K^\top \bar{Q}_1 A_K x_t + \left( 1 + \frac{1}{\gamma_2} \right) \tilde{d}_t^\top \bar{Q}_1 \tilde{d}_t + ( 1 + K_{OB}^2 - 2 K_{OB} ) \tilde{d}_t^\top \bar{Q}_2 \tilde{d}_t + ( 1 + \gamma_3 ) x_t^\top K_{LQG}^\top R K_{LQG} x_t + \left( 1 + \frac{1}{\gamma_3} \right) \hat{d}_t^\top L_t^\top R L_t \hat{d}_t + x_t^\top Q x_t
where $L_t = ( B_t^\top B_t )^{-1} B_t^\top$, and the equation above yields the following Riccati inequalities:
\bar{Q}_1 - ( 1 + \gamma_2 ) A_K^\top \bar{Q}_1 A_K \geq ( 1 + \gamma_3 ) K_{LQG}^\top R K_{LQG} + Q
( 2 K_{OB} - K_{OB}^2 ) \bar{Q}_2 \geq \left( 1 + \frac{1}{\gamma_2} \right) \bar{Q}_1 + \left( 1 + \frac{1}{\gamma_3} \right) L_t^\top R L_t
For chosen Q and R matrices, Q ¯ 1 and Q ¯ 2 can be calculated.
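The paper does not specify the procedure used to compute $\bar{Q}_1$; one possible sketch, assuming (59) is solved with equality by fixed-point iteration, is the following (the convergence condition in the comment is an added observation, and the function name is ours).

```python
import numpy as np

def terminal_weight(A_K, K_lqg, Q, R, gamma2, gamma3, iters=2000):
    """Fixed-point iteration for a Q1_bar satisfying inequality (59) with
    equality: Q1_bar = (1 + gamma2) A_K' Q1_bar A_K
                        + (1 + gamma3) K' R K + Q.
    The iteration converges when sqrt(1 + gamma2) * rho(A_K) < 1."""
    Qb = Q.copy()
    for _ in range(iters):
        Qb = (1.0 + gamma2) * A_K.T @ Qb @ A_K \
             + (1.0 + gamma3) * K_lqg.T @ R @ K_lqg + Q
    return Qb
```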
To show the convergence of the output of the system, we need to show that $J(t+1) - J(t) \leq 0$, where $J(t)$ is the Lyapunov function and can be written as $J(t) = J_m(t) + J_v(t)$, with $J_m(\cdot)$ and $J_v(\cdot)$ being the mean and covariance parts of the Lyapunov function. For the first part of this Lyapunov function, we can write
J_m(t+1) = J_m(t) - \| \bar{x}_{t|t} \|_Q^2 - \| \bar{u}_{t|t} \|_R^2 + \| \bar{x}_{t+N|t} \|_Q^2 + \| K_{LQG} \bar{x}_{t+N|t} \|_R^2 + \| L_{t+N} \hat{d}_{t+N} \|_R^2 - \| \bar{x}_{t+N|t} \|_{\bar{Q}_1}^2 + \| ( A + B K_{LQG} ) \bar{x}_{t+N|t} \|_{\bar{Q}_1}^2
Now, in view of (59), we can obtain
J_m(t+1) - J_m(t) \leq - E\{ \| x_t \|_Q^2 + \| u_t \|_R^2 \} - \gamma_3 \| K_{LQG} \bar{x}_{t+N|t} \|_R^2 - \gamma_2 \| ( A + B K_{LQG} ) \bar{x}_{t+N|t} \|_{\bar{Q}_1}^2
For this inequality, we can say
J_m(t+1) - J_m(t) < - E\{ \| x_t \|_Q^2 + \| u_t \|_R^2 \}
Also, for the second part of the Lyapunov function
J_v(t+1) = J_v(t) - \mathrm{tr}\{ ( Q + K_{MPC}^\top R K_{MPC} ) X_t \} + \mathrm{tr}\{ ( Q + K_{LQG}^\top R K_{LQG} ) X_{t+N} + \bar{Q}_1 ( A + B K_{LQG} ) X_{t+N} ( A + B K_{LQG} )^\top + \bar{Q}_1 D - \bar{Q}_1 X_{t+N} \}
where K L Q G is the gain calculated from the LQ problem. Now, using properties of the trace again and in view of (59), we can write
\mathrm{tr}\{ ( Q + K_{LQG}^\top R K_{LQG} ) X_{t+N} + \bar{Q}_1 ( A + B K_{LQG} ) X_{t+N} ( A + B K_{LQG} )^\top \} = \mathrm{tr}\{ [ ( Q + K_{LQG}^\top R K_{LQG} ) + ( A + B K_{LQG} )^\top \bar{Q}_1 ( A + B K_{LQG} ) ] X_{t+N} \} \leq \mathrm{tr}\{ - \gamma_3 K_{LQG}^\top R K_{LQG} X_{t+N} - \gamma_2 ( A + B K_{LQG} )^\top \bar{Q}_1 ( A + B K_{LQG} ) X_{t+N} + \bar{Q}_1 X_{t+N} \}
In view of this manipulation, (64) becomes
J_v(t+1) - J_v(t) \leq \mathrm{tr}\{ \bar{Q}_1 D - \gamma_3 K_{LQG}^\top R K_{LQG} X_{t+N} - \gamma_2 ( A + B K_{LQG} )^\top \bar{Q}_1 ( A + B K_{LQG} ) X_{t+N} \}
Then, we can say
J_v(t+1) - J_v(t) < \mathrm{tr}\{ \bar{Q}_1 D \}
As a result, we can put together Equations (63) and (67), which yields the following inequality:
J(t+1) - J(t) < - E\{ \| x_t \|_Q^2 + \| u_t \|_R^2 \} + \mathrm{tr}\{ \bar{Q}_1 D \}
The inequality (68) is the same as (32) in the study [30]. The argument showing that it implies $J(t+1) - J(t) < 0$ is very similar to the proof in [30]; therefore, we do not repeat it here, and the stability proof is complete.

7. Simulation and Results

In this section, we present the simulation results. To demonstrate the positive effects of the proposed controller on tracking performance, a simulation was carried out using the Simulink and Optimization toolboxes in the 2023a version of MATLAB®.
In our simulation, we employ SQP to solve the formulated NMPC problem. SQP is a robust optimization method specifically designed for constrained nonlinear optimization problems [44,45]. This method effectively handles both initial and dynamic constraints, as well as various equality constraints. It is particularly adept at finding optimal solutions for complex nonlinear systems.
First of all, we need to allocate the system parameters to the simulation, which are given in Table 1 below.
Now, we need to specify the disturbances acting on the system and other parameters. The disturbance forces acting on the system inputs have Gaussian white noise with a variance value of 0.1. However, the mean value of these disturbance forces is −7.5 N for all control inputs, F 1 , F 2 , F 3 , and F 4 , and there is no torque disturbance because the considered disturbance is only the current of the water.
Normally, the control input constraints without probabilistic effects are chosen as $-15\,\mathrm{N} \leq u \leq 15\,\mathrm{N}$. However, if we take the probabilistic effects into account and calculate the probabilistic constraints described in (20) and (21), in which $b_r$, $c_s$ are both chosen as 1 and $p_u$ is chosen as 0.1, the control input constraints become $-14.051312\,\mathrm{N} \leq u \leq 14.051312\,\mathrm{N}$.
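As a quick check, assuming the scalar input covariance used in the tightening equals the disturbance variance of 0.1, the margin in (21) reproduces this bound:
u_{max} - \sqrt{c^\top U c}\,\sqrt{\frac{1 - p_u}{p_u}} = 15 - \sqrt{0.1}\,\sqrt{\frac{0.9}{0.1}} \approx 15 - 0.949 \approx 14.05\ \mathrm{N}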
The weight matrices in the cost function are selected as $Q = 100 I_{3 \times 3}$ and $R = 0.1 I_{4 \times 4}$, and the terminal weight matrix $\bar{Q}_1$ is calculated from (59). Additionally, $K_{collision} = 3$ for the moving obstacle, $K_{collision} = 6$ for the stationary obstacle, and $K_{OB} = 0.7$ for the observer.
In Figure 4, we illustrate the tracking and obstacle avoidance performance of traditional NMPC compared to the proposed probabilistic NMPC in the presence of obstacles and stochastic disturbances. By traditional NMPC, we mean a control system with no disturbance observer, no EKF, and no probabilistic effects taken into account in the optimization. It is also important to stress that the reference path in this study is a straight line (not drawn in the figure) from the point (0 m, 0 m) to the point (50 m, 50 m); namely, we consider a ramp reference. Simulations are performed with a sampling time of 0.1 s.
Upon examining Figure 4, in view of the negative mean of the disturbance, the vehicle under traditional NMPC tracks the reference path slightly to the left of and behind the vehicle managed by the probabilistic NMPC. Both vehicles encounter the moving obstacle between 11 and 12 s and navigate past it quite closely, as the collision gain for this situation is small. Nonetheless, when both vehicles confront the stationary obstacle around 26 to 27 s, they diverge more to avoid it due to the larger collision gain associated with this obstacle. During this avoidance maneuver, the vehicle operated by traditional NMPC does not have to change its position much, as it is already situated to the left of the desired trajectory, whereas the probabilistic one needs more maneuvering because it is situated almost exactly on the trajectory. Both vehicles then arrive at their final position around the coordinates (50 m, 50 m).
While tracking the reference trajectory, the traditional NMPC exhibits greater error than the probabilistic NMPC. To illustrate this, we present the x and y positions along with the corresponding error signals for both vehicles. Figure 5 and Figure 6 depict the positions and their errors, demonstrating that the probabilistic NMPC allows the system to adhere to the reference path more closely than the traditional NMPC, resulting in a smaller steady-state error.
Regarding the error at the coordinate (50 m, 50 m), the probabilistic NMPC achieves an error close to zero. On the other hand, the traditional NMPC results in steady-state errors with average values of approximately 2 and 1.25 m for x and y, respectively.
Since both traditional and probabilistic NMPC control simulations have Gaussian noise acting on them, there is always some tracking error; however, the probabilistic NMPC eliminates the mean value of the disturbance because of the observer. In contrast, with the traditional NMPC, the system fails to reach the target position, leading to excessive cumulative error, as also shown in Figure 7.
Consequently, the probabilistic NMPC controls the system with smaller integral absolute errors (IAEs) than the traditional NMPC. For x, over 70 s, the probabilistic NMPC yields an IAE of 124.115 m, whereas the traditional NMPC yields 248.284 m, approximately 100% more. For y, over 70 s, the probabilistic NMPC yields an IAE of 105.418 m, whereas the traditional NMPC yields 149.93 m, approximately 42.22% more.
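These percentages follow directly from the reported IAE values:
\frac{248.284 - 124.115}{124.115} \approx 1.00 \;(\approx 100\%), \qquad \frac{149.93 - 105.418}{105.418} \approx 0.422 \;(\approx 42.2\%)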
Furthermore, Figure 8 and Figure 9 show the control inputs and the observed control input disturbances of the probabilistic NMPC for the corresponding input signals $F_1$, $F_2$, $F_3$, and $F_4$. When the probabilistic NMPC-controlled vehicle tracks the ramp reference up to the target point, the vehicle thrusters do not exceed the limits of the input constraints. However, when the vehicle encounters the obstacles, especially the stationary obstacle, the control inputs take larger values, and unlike the traditional NMPC, which has no constraints, the probabilistic NMPC operates the vehicle within the calculated constraints. One can see that the outputs of all disturbance observers follow the mean of the input disturbance, which is −7.5 N.
Finally, Figure 10 shows a close-up view of both the traditional and probabilistic NMPC controllers as they reach the target point. As can be seen, the probabilistic NMPC produces a smoother trajectory than the traditional one. In addition, Figure 11 shows the heading angles of both controllers while they avoid the obstacles and reach the target point. One can see that the probabilistic NMPC tracks the 45° angle quite closely while avoiding the obstacles at approximately 11 and 26 s, whereas the traditional one struggles due to the water current.

8. Discussion and Future Work

The proposed method achieves better tracking performance under non-zero-mean probabilistic constraints with less steady-state error, simultaneously avoiding stationary and moving obstacles compared to the traditional NMPC. However, while Cantelli’s inequality provides a framework for probabilistic constraints, it can be conservative; therefore, it would be beneficial to investigate less conservative methods to address the probabilistic constraints.
The magnitude of the disturbances that can be attenuated is limited by the actuator reaction and working torque range. In addition, the actuators are assumed to provide an ideal control signal. In real applications, however, actuators have their own dynamics and limitations that can degrade performance.
In addition, in this study, it is assumed that the positions and velocities of the obstacles are known; uncertainties in the positions and velocities of dynamic obstacles are not considered. If the vehicle is equipped to measure the positions and velocities of dynamic obstacles, the proposed method remains applicable by updating the velocity obstacle cones at each sampling instance with the updated measurements. Otherwise, the considered velocity obstacle method cannot be applied in the presence of dynamic obstacles, since the cones are established based on the velocities of the obstacles.
Additionally, it is important to note that this work focuses solely on controlling a single ego vehicle. However, in certain operations, it may be necessary to manage a group of autonomous vehicles to achieve consensus, and this should therefore be considered in future work. Future work can also explore scenarios that involve uncertain or time-varying vehicle parameters.
For real implementation, an industrial PC equipped with a GPU can be considered, where the industrial PC interfaces with the sensors and actuators and the GPU handles the heavy computational load of the NMPC. Note that ship control systems are comparatively slow, allowing the controller to be implemented with larger sampling periods, so even without a GPU, high-computational-power industrial PCs can solve the proposed NMPC optimization within these sampling periods.

9. Conclusions

In this paper, we propose an extension of probabilistic linear MPC for nonlinear systems, enabling efficient operation across all operating points and under the effect of non-zero-mean disturbances. This extension is particularly tailored for uncrewed surface vehicles, incorporating probabilistic disturbance attenuation and stationary and moving obstacle avoidance. We highlighted the challenges of applying probabilistic linear MPC to nonlinear systems, where probabilistic signals in states and outputs change their mean and variance across different operating points. To address this, we utilized an EKF to estimate the mean value of the states at all operating points and a linear matrix difference equation to calculate the covariance matrix at each sampling instance via successive linearization. This method allows for the cost function and probabilistic constraints to be updated dynamically at each sampling instance.
We addressed non-zero-mean disturbances by separating them into deterministic and probabilistic components. The deterministic part, represented by the mean of the disturbance, was handled using a deterministic disturbance observer. Furthermore, the remaining zero-mean probabilistic component was integrated into the probabilistic MPC framework.
The simulation results showed that the proposed probabilistic NMPC allows the system to follow the reference path more accurately than traditional NMPC, largely eliminating the steady-state error. In contrast, traditional NMPC exhibits significant steady-state errors and is unable to reach the target reference, as indicated by the absolute error and integral absolute error results.

Author Contributions

Conceptualization: Y.Y.; methodology: N.Ç., Y.Y.; formal analysis and investigation: N.Ç., Y.Y.; writing—original draft preparation: N.Ç.; writing—review and editing: Y.Y.; supervision: Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

The authors did not receive support from any organization for the submitted work.

Data Availability Statement

The data presented in this study are available within the article. The MATLAB code developed for this study is not publicly available. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors have no competing interests to declare that are relevant to the content of this article.

Appendix A

For the terminal region, let $V_t = x_t^\top P_1 x_t + \tilde{d}_t^\top P_2 \tilde{d}_t$ be our Lyapunov candidate function. Now, we need to calculate $V_{t+1} - V_t = x_{t+1}^\top P_1 x_{t+1} - x_t^\top P_1 x_t + \tilde{d}_{t+1}^\top P_2 \tilde{d}_{t+1} - \tilde{d}_t^\top P_2 \tilde{d}_t$, where $P_1$ and $P_2$ are positive definite.
For the $x_{t+1}^\top P_1 x_{t+1}$ term, we need to examine the dynamics of the system. To begin with, we assume that the system is $x_{t+1} = f( x_t ) + g( x_t ) u_t + d_t$, where the control input is $u_t = -K_{LQG} x_t - ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t$; with the help of some small manipulations, this equation can be rewritten as follows:
x_{t+1} = f( x_t ) + g( x_t ) [ -K_{LQG} x_t - ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t ] + \bar{d}_t + d_0
x_{t+1} = f( x_t ) - g( x_t ) K_{LQG} x_t - g( x_t ) ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t + \bar{d}_t + d_0 + A_K x_t - A_K x_t + \tilde{d}_t - \tilde{d}_t
x_{t+1} = f( x_t ) - g( x_t ) K_{LQG} x_t - g( x_t ) ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t + \bar{d}_t + d_0 + A_K x_t - A_K x_t + \tilde{d}_t - ( \bar{d}_t - \hat{d}_t )
x_{t+1} = f( x_t ) - g( x_t ) K_{LQG} x_t - g( x_t ) ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t + d_0 + A_K x_t - A_K x_t + \tilde{d}_t + \hat{d}_t
where $A_K$ is assumed to have all of its poles inside the unit circle and $d_0$ is the white noise. At this stage, let $\phi( x_t, d_t ) = f( x_t ) - g( x_t ) K_{LQG} x_t - g( x_t ) ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t + \hat{d}_t - A_K x_t$ be a function that provides a compact version of the equations above, and we obtain the following equation:
x_{t+1} = \phi( x_t, d_t ) + A_K x_t + \tilde{d}_t
At this stage, it is important to note that $\phi( x_t, d_t ) / \| x_t \| \to 0$ and $\phi( x_t, d_t ) / \| d_t \| \to 0$ as $x_t \to 0$, because it is not easy to satisfy the stability condition if we have a large terminal region [46]. As a result, around the origin, we can assume $\phi( x_t, d_t ) = 0$ and obtain the system equations as
x_{t+1} = A_K x_t + \tilde{d}_t
Now, we are ready to calculate V t + 1 V t using (A2) and (41).
V_{t+1} - V_t = x_{t+1}^\top P_1 x_{t+1} - x_t^\top P_1 x_t + \tilde{d}_{t+1}^\top P_2 \tilde{d}_{t+1} - \tilde{d}_t^\top P_2 \tilde{d}_t
x_{t+1}^\top P_1 x_{t+1} = ( A_K x_t + \tilde{d}_t )^\top P_1 ( A_K x_t + \tilde{d}_t ) = x_t^\top A_K^\top P_1 A_K x_t + 2 x_t^\top A_K^\top P_1 \tilde{d}_t + \tilde{d}_t^\top P_1 \tilde{d}_t
\tilde{d}_{t+1}^\top P_2 \tilde{d}_{t+1} = ( \tilde{d}_t - K_{OB} \tilde{d}_t )^\top P_2 ( \tilde{d}_t - K_{OB} \tilde{d}_t ) = \tilde{d}_t^\top P_2 \tilde{d}_t - 2 \tilde{d}_t^\top P_2 K_{OB} \tilde{d}_t + \tilde{d}_t^\top K_{OB}^\top P_2 K_{OB} \tilde{d}_t
V_{t+1} - V_t = x_t^\top A_K^\top P_1 A_K x_t + 2 x_t^\top A_K^\top P_1 \tilde{d}_t + \tilde{d}_t^\top P_1 \tilde{d}_t - x_t^\top P_1 x_t + \tilde{d}_t^\top P_2 \tilde{d}_t - 2 \tilde{d}_t^\top P_2 K_{OB} \tilde{d}_t + \tilde{d}_t^\top K_{OB}^\top P_2 K_{OB} \tilde{d}_t - \tilde{d}_t^\top P_2 \tilde{d}_t
In this step, we use the Young inequality on $2 x_t^\top A_K^\top P_1 \tilde{d}_t$ and $2 \tilde{d}_t^\top P_2 K_{OB} \tilde{d}_t$ with coefficients $\alpha \in \mathbb{R}^+$ and 1, respectively, and then obtain
V_{t+1} - V_t \leq x_t^\top A_K^\top P_1 A_K x_t - x_t^\top P_1 x_t + \alpha^2 x_t^\top A_K^\top P_1 A_K x_t + \frac{1}{\alpha^2} \tilde{d}_t^\top P_1 \tilde{d}_t + \tilde{d}_t^\top P_1 \tilde{d}_t - \tilde{d}_t^\top K_{OB}^\top P_2 K_{OB} \tilde{d}_t - \tilde{d}_t^\top P_2 \tilde{d}_t + \tilde{d}_t^\top K_{OB}^\top P_2 K_{OB} \tilde{d}_t
and we obtain the two Riccati inequalities below.
P_1 - ( 1 + \alpha^2 ) A_K^\top P_1 A_K \geq Q_1
P_2 - P_1 - \frac{1}{\alpha^2} P_1 \geq Q_2
where $Q_1$ and $Q_2$ are symmetric and positive definite. If we choose appropriate $P_1$ and $P_2$ values, $V_{t+1} - V_t$ is negative, and the condition is satisfied. Therefore, the system converges to zero with the control input $u_t = -K_{LQG} x_t - ( B_t^\top B_t )^{-1} B_t^\top \hat{d}_t$, and once the system enters the terminal region $\Omega$, it stays in this terminal region.

References

  1. Ni, J.; Hu, J.; Xiang, C. A review for design and dynamics control of unmanned ground vehicle. Proc. Inst. Mech. Eng. Part J. Automob. Eng. 2021, 235, 1084–1100. [Google Scholar] [CrossRef]
  2. Chaurasia, R.; Mohindru, V. Unmanned aerial vehicle (UAV): A comprehensive survey. In Unmanned Aerial Vehicles for Internet of Things (IoT) Concepts, Techniques, and Applications; Scrivener Publishing LLC: Beverly, MA, USA, 2021; pp. 1–27. [Google Scholar] [CrossRef]
  3. Bai, X.; Li, B.; Xu, X.; Xiao, Y. A review of current research and advances in unmanned surface vehicles. J. Mar. Sci. Appl. 2022, 21, 47–58. [Google Scholar] [CrossRef]
  4. Wibisono, A.; Piran, M.J.; Song, H.K.; Lee, B.M. A survey on unmanned underwater vehicles: Challenges, enabling technologies, and future research directions. Sensors 2023, 23, 7321. [Google Scholar] [CrossRef] [PubMed]
  5. Huang, M.; Abel, D. A*-guided incremental sampling for trajectory planning of inland vessels in narrow ship canals. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 4–9 June 2022; pp. 658–663. [Google Scholar] [CrossRef]
  6. Moser, M.M.; Huang, M.; Abel, D. Model Predictive Control for Safe Path Following in Narrow Inland Waterways for Rudder Steered Inland Vessels. In Proceedings of the 2023 European Control Conference (ECC), Bucharest, Romania, 13–16 June 2023; pp. 1–6. [Google Scholar] [CrossRef]
  7. Tang, Y.; Zhang, R.; Mao, Y.; Shi, T.; Fan, C. Research on decision optimization method of avoiding collision of inland ships based on intelligent calculation. Iop Conf. Ser. Mater. Sci. Eng. 2020, 780, 072023. [Google Scholar] [CrossRef]
  8. Kim, J.; Lee, C.; Chung, D.; Cho, Y.; Kim, J.; Jang, W.; Park, S. Field experiment of autonomous ship navigation in canal and surrounding nearshore environments. J. Field Robot. 2024, 41, 470–489. [Google Scholar] [CrossRef]
  9. Lee, C.; Chung, D.; Kim, J.; Kim, J. Nonlinear model predictive control with obstacle avoidance constraints for autonomous navigation in a canal environment. IEEE/ASME Trans. Mechatron. 2023, 29, 1985–1996. [Google Scholar] [CrossRef]
  10. de Vries, J.; Trevisan, E.; van der Toorn, J.; Das, T.; Brito, B.; Alonso-Mora, J. Regulations aware motion planning for autonomous surface vessels in urban canals. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 3291–3297. [Google Scholar] [CrossRef]
  11. Badue, C.; Guidolini, R.; Carneiro, R.V.; Azevedo, P.; Cardoso, V.B.; Forechi, A.; Jesus, L.; Berriel, R.; Paixão, T.M.; Mutz, F.; et al. Self-driving cars: A survey. Expert Syst. Appl. 2021, 165, 113816. [Google Scholar] [CrossRef]
  12. Wang, H.; Liu, B. Path planning and path tracking for collision avoidance of autonomous ground vehicles. IEEE Syst. J. 2021, 16, 3658–3667. [Google Scholar] [CrossRef]
  13. Zhao, P.; Wang, W.; Ying, L.; Sridhar, B.; Liu, Y. Online multiple-aircraft collision avoidance method. J. Guid. Control. Dyn. 2020, 43, 1456–1472. [Google Scholar] [CrossRef]
  14. Yasin, J.N.; Mohamed, S.A.; Haghbayan, M.H.; Heikkonen, J.; Tenhunen, H.; Plosila, J. Unmanned aerial vehicles (uavs): Collision avoidance systems and approaches. IEEE Access 2020, 8, 105139–105155. [Google Scholar] [CrossRef]
  15. Kot, R. Review of collision avoidance and path planning algorithms used in autonomous underwater vehicles. Electronics 2022, 11, 2301. [Google Scholar] [CrossRef]
  16. Huang, Y.; Chen, L.; Chen, P.; Negenborn, R.R.; Van Gelder, P.H.A.J.M. Ship collision avoidance methods: State-of-the-art. Saf. Sci. 2020, 121, 451–473. [Google Scholar] [CrossRef]
  17. Kufoalor, D.K.M.; Johansen, T.A.; Brekke, E.F.; Hepsø, A.; Trnka, K. Autonomous maritime collision avoidance: Field verification of autonomous surface vehicle behavior in challenging scenarios. J. Field Robot. 2020, 37, 387–403. [Google Scholar] [CrossRef]
  18. Hwang, Y.K.; Ahuja, N. A potential field approach to path planning. IEEE Trans. Robot. Autom. 1992, 8, 23–32. [Google Scholar] [CrossRef]
  19. Fiorini, P.; Shiller, Z. Motion planning in dynamic environments using velocity obstacles. Int. J. Robot. Res. 1998, 17, 760–772. [Google Scholar] [CrossRef]
  20. Zhang, W.; Yan, C.; Lyu, H.; Wang, P.; Xue, Z.; Li, Z.; Xiao, B. COLREGS-based path planning for ships at sea using velocity obstacles. IEEE Access 2021, 9, 32613–32626. [Google Scholar] [CrossRef]
  21. Chen, P.; Li, M.; Mou, J. A velocity obstacle-based real-time regional ship collision risk analysis method. J. Mar. Sci. Eng. 2021, 9, 428. [Google Scholar] [CrossRef]
  22. Zhang, G.; Wang, Y.; Liu, J.; Cai, W.; Wang, H. Collision-avoidance decision system for inland ships based on velocity obstacle algorithms. J. Mar. Sci. Eng. 2022, 10, 814. [Google Scholar] [CrossRef]
  23. Wang, J.; Wang, R.; Lu, D.; Zhou, H.; Tao, T. USV dynamic accurate obstacle avoidance based on improved velocity obstacle method. Electronics 2022, 11, 2720. [Google Scholar] [CrossRef]
24. Shaobo, W.; Yingjun, Z.; Lianbo, L. A collision avoidance decision-making system for autonomous ship based on modified velocity obstacle method. Ocean Eng. 2020, 215, 107910. [Google Scholar] [CrossRef]
  25. Camacho, E.F.; Bordons, C. Constrained Model Predictive Control; Springer: London, UK, 2007; pp. 177–216. [Google Scholar] [CrossRef]
  26. Maciejowski, J.M.; Huzmezan, M. Predictive control. In Robust Flight Control: A Design Challenge; Springer: Berlin/Heidelberg, Germany, 2007; pp. 125–134. [Google Scholar] [CrossRef]
  27. Samad, T. A survey on industry impact and challenges thereof [technical activities]. IEEE Control Syst. Mag. 2017, 37, 17–18. [Google Scholar] [CrossRef]
  28. Mesbah, A. Stochastic model predictive control: An overview and perspectives for future research. IEEE Control Syst. Mag. 2016, 36, 30–44. [Google Scholar] [CrossRef]
29. Heirung, T.A.N.; Paulson, J.A.; O’Leary, J.; Mesbah, A. Stochastic model predictive control—How does it work? Comput. Chem. Eng. 2018, 114, 158–170. [Google Scholar] [CrossRef]
  30. Farina, M.; Giulioni, L.; Magni, L.; Scattolini, R. A probabilistic approach to model predictive control. In Proceedings of the 52nd IEEE Conference on Decision and Control, Firenze, Italy, 10–13 December 2013; pp. 7734–7739. [Google Scholar] [CrossRef]
  31. Magni, L.; Pala, D.; Scattolini, R. Stochastic model predictive control of constrained linear systems with additive uncertainty. In Proceedings of the 2009 European Control Conference (ECC), Budapest, Hungary, 23–26 August 2009; pp. 2235–2240. [Google Scholar] [CrossRef]
  32. Tan, Y.; Cai, G.; Li, B.; Teo, K.L.; Wang, S. Stochastic model predictive control for the set point tracking of unmanned surface vehicles. IEEE Access 2019, 8, 579–588. [Google Scholar] [CrossRef]
  33. Park, K.; Lee, C.; Kim, J. Robust Path Tracking and Obstacle Avoidance of Autonomous Ship using Stochastic Model Predictive Control. In Proceedings of the 2023 20th International Conference on Ubiquitous Robots (UR), Honolulu, HI, USA, 25–28 June 2023; pp. 179–182. [Google Scholar] [CrossRef]
  34. Cao, L.; Chi, H. Dynamic Obstacle Avoidance of UAV Using Chance Constrained Model Predictive Control. Optim. Control Appl. Methods 2025, 46, 1914–1931. [Google Scholar] [CrossRef]
  35. Li, W.; Zhang, J.; Wang, F.; Zhou, H. Model predictive control based on state space and risk augmentation for unmanned surface vessel trajectory tracking. J. Mar. Sci. Eng. 2023, 11, 2283. [Google Scholar] [CrossRef]
  36. Zhang, P.; Ding, Y.; Du, S. Obstacle Avoidance Control for Autonomous Surface Vehicles Using Elliptical Obstacle Model Based on Barrier Lyapunov Function and Model Predictive Control. J. Mar. Sci. Eng. 2024, 12, 1035. [Google Scholar] [CrossRef]
37. Liu, C.; Hu, Q.; Wang, X.; Yin, J. Event-triggered-based nonlinear model predictive control for trajectory tracking of underactuated ship with multi-obstacle avoidance. Ocean Eng. 2022, 253, 111278. [Google Scholar] [CrossRef]
  38. Luo, Q.; Wang, H.; Li, N.; Su, B.; Zheng, W. Model-free predictive trajectory tracking control and obstacle avoidance for unmanned surface vehicle with uncertainty and unknown disturbances via model-free extended state observer. Int. J. Control. Autom. Syst. 2024, 22, 1985–1997. [Google Scholar] [CrossRef]
  39. Bradford, E.; Imsland, L. Stochastic nonlinear model predictive control using Gaussian processes. In Proceedings of the 2018 European Control Conference (ECC), Limassol, Cyprus, 12–15 June 2018; pp. 1027–1034. [Google Scholar] [CrossRef]
  40. Mesbah, A.; Streif, S.; Findeisen, R.; Braatz, R.D. Stochastic nonlinear model predictive control with probabilistic constraints. In Proceedings of the 2014 American Control Conference, Portland, OR, USA, 4–6 June 2014; pp. 2413–2419. [Google Scholar] [CrossRef]
  41. Fossen, T.I. Handbook of Marine Craft Hydrodynamics and Motion Control; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar] [CrossRef]
42. Kim, Y.; Bang, H. Introduction to Kalman filter and its applications. In Introduction and Implementations of the Kalman Filter; IntechOpen: London, UK, 2018; Volume 1, pp. 1–16. [Google Scholar] [CrossRef]
  43. Kim, K.S.; Rew, K.H. Reduced order disturbance observer for discrete-time linear systems. Automatica 2013, 49, 968–975. [Google Scholar] [CrossRef]
  44. Grandia, R.; Jenelten, F.; Yang, S.; Farshidian, F.; Hutter, M. Perceptive locomotion through nonlinear model-predictive control. IEEE Trans. Robot. 2023, 39, 3402–3421. [Google Scholar] [CrossRef]
  45. Li, S.; Liu, J.; Negenborn, R.R.; Wu, Q. Automatic docking for underactuated ships based on multi-objective nonlinear model predictive control. IEEE Access 2020, 8, 70044–70057. [Google Scholar] [CrossRef]
  46. Chen, H.; Allgöwer, F. A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability. Automatica 1998, 34, 1205–1217. [Google Scholar] [CrossRef]
Figure 1. Scheme of the uncrewed surface vehicle model.
Figure 2. (a) Two vehicles with different radii and different velocities. (b) The ego vehicle is reduced to a point, and its radius is added to that of the moving obstacle. (c) The collision cone. (d) The collision cone is shifted by the velocity of the moving obstacle to form the “Velocity Obstacle”.
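As a complement to the construction illustrated in Figure 2, the following is a minimal sketch of the velocity obstacle test for circular footprints under the assumption of planar motion; the function name and interface are illustrative and are not taken from the paper’s implementation. The ego vehicle is reduced to a point, the two radii are summed, and the relative velocity is tested against the collision cone, which is equivalent to testing the ego velocity against the cone shifted by the obstacle velocity.

```python
import numpy as np

def in_velocity_obstacle(p_ego, p_obs, v_ego, v_obs, r_ego, r_obs):
    """Return True if the ego velocity lies inside the velocity obstacle
    induced by a circular moving obstacle (planar, illustrative sketch)."""
    rel_pos = np.asarray(p_obs, dtype=float) - np.asarray(p_ego, dtype=float)
    dist = np.linalg.norm(rel_pos)
    r_sum = r_ego + r_obs
    if dist <= r_sum:                      # footprints already overlap
        return True
    # Testing v_ego against the cone shifted by v_obs is equivalent to
    # testing the relative velocity against the unshifted collision cone.
    rel_vel = np.asarray(v_ego, dtype=float) - np.asarray(v_obs, dtype=float)
    speed = np.linalg.norm(rel_vel)
    if speed < 1e-9:                       # ego keeps pace with the obstacle
        return False
    half_angle = np.arcsin(r_sum / dist)   # half-angle of the collision cone
    cos_gap = np.dot(rel_vel, rel_pos) / (speed * dist)
    gap = np.arccos(np.clip(cos_gap, -1.0, 1.0))
    return gap <= half_angle

# Obstacle 20 m ahead, closing head-on: the current ego velocity is unsafe.
print(in_velocity_obstacle(p_ego=[0.0, 0.0], p_obs=[20.0, 0.0],
                           v_ego=[2.0, 0.0], v_obs=[-1.0, 0.0],
                           r_ego=1.0, r_obs=2.0))   # -> True
```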
Figure 3. Block diagram of the proposed probabilistic NMPC control structure.
Figure 4. Traditional NMPC vs. probabilistic NMPC: (a) 10 s; (b) 12 s; (c) 15 s; (d) 25 s; (e) 30 s; (f) 40 s; (g) 50 s; (h) 70 s.
Figure 5. (a) x positions of traditional and probabilistic NMPC controllers; (b) y positions of traditional and probabilistic NMPC controllers.
Figure 6. (a) x position errors of traditional and probabilistic NMPC controllers; (b) y position errors of traditional and probabilistic NMPC controllers.
Figure 7. (a) IAEs of x positions of traditional and probabilistic NMPC controllers; (b) IAEs of y positions of traditional and probabilistic NMPC controllers.
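The IAE reported in Figure 7 is the time integral of the absolute tracking error; a minimal sketch of how it can be approximated from sampled error data is given below, with an illustrative sampling period and error sequence rather than the paper’s data.

```python
import numpy as np

def iae(errors, dt):
    """Integral of absolute error, approximated by a rectangle rule."""
    return float(np.sum(np.abs(errors)) * dt)

# Illustrative position-error sequence sampled every 0.1 s.
e_x = np.array([0.5, 0.3, 0.1, 0.0, -0.05])
print(iae(e_x, dt=0.1))   # -> 0.095
```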
Figure 8. (a–d) Control inputs of the traditional and probabilistic NMPC controllers.
Figure 9. (a–d) Observed input disturbances of the probabilistic NMPC for F1, F2, F3, and F4, respectively.
Figure 10. Close-up view of the traditional NMPC and probabilistic NMPC trajectories around the target point.
Figure 11. Direction angles of traditional NMPC and probabilistic NMPC.
Table 1. Model parameters for the considered controller system.

Parameter   Value   Unit             Parameter   Value   Unit
m           15      kg               Yv          10      kg/s
W           0.5     m                Nr          2       kg·m²/(rad·s)
L           1       m                Xu̇         −1      kg
Izz         5       kg·m²            Yv̇         −10     kg
Xu          5       kg/s             Nṙ         −4      kg·m²
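For readers reproducing the simulations, the following is a minimal sketch of how the Table 1 parameters could be collected and assembled into the diagonal rigid-body-plus-added-mass inertia matrix and linear damping matrix of a 3-DOF surface vessel in Fossen-style notation [41]; the variable names, the decoupled diagonal structure, and the sign convention for the added-mass derivatives are illustrative assumptions, not the authors’ exact model.

```python
import numpy as np

# Table 1 parameters (illustrative container; names are assumptions).
params = {
    "m": 15.0,       # mass [kg]
    "W": 0.5,        # beam [m]
    "L": 1.0,        # length [m]
    "Izz": 5.0,      # yaw moment of inertia [kg*m^2]
    "Xu": 5.0,       # surge linear damping [kg/s]
    "Yv": 10.0,      # sway linear damping [kg/s]
    "Nr": 2.0,       # yaw linear damping [kg*m^2/(rad*s)]
    "Xudot": -1.0,   # added mass in surge [kg]
    "Yvdot": -10.0,  # added mass in sway [kg]
    "Nrdot": -4.0,   # added moment of inertia in yaw [kg*m^2]
}

# Diagonal inertia (rigid body + added mass) and linear damping matrices,
# assuming a decoupled 3-DOF (surge, sway, yaw) model.
M = np.diag([params["m"] - params["Xudot"],
             params["m"] - params["Yvdot"],
             params["Izz"] - params["Nrdot"]])   # diag(16, 25, 9)
D = np.diag([params["Xu"], params["Yv"], params["Nr"]])

print(M)
print(D)
```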