Article

Whole-Body Vision/Force Control for an Underwater Vehicle–Manipulator System with Smooth Task Transitions

School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(8), 1447; https://doi.org/10.3390/jmse13081447
Submission received: 30 June 2025 / Revised: 23 July 2025 / Accepted: 27 July 2025 / Published: 29 July 2025
(This article belongs to the Section Ocean Engineering)

Abstract

Robots with multiple degrees of freedom (DOFs), such as underwater vehicle–manipulator systems (UVMSs), are expected to optimize system performance by exploiting redundancy across various basic tasks while still fulfilling the primary objective. Multiple tasks that a robot is expected to carry out simultaneously with prescribed priorities can be referred to as a set of tasks (SOT). In this work, a hybrid vision/force control method with continuous task transitions is proposed for a UVMS to simultaneously track reference visual and force trajectories during manipulation. Several tasks with expected objectives and specific priorities are established and combined into SOTs for hybrid vision/force tracking. At different stages, various SOTs are carried out with different emphases. A hierarchical optimization-based whole-body control framework is constructed to obtain the solution in a strictly hierarchical fashion. A continuous transition method is employed to mitigate oscillations during the task switching phase. Finally, comparative simulation experiments are conducted, and the results verify the improved convergence of the proposed tracking controller for UVMSs.

1. Introduction

Exploring and utilizing ocean resources has become a matter of strategic importance around the world against the backdrop of global resource scarcity. Underwater equipment is being increasingly investigated given the rising demands of underwater operations [1]. The underwater vehicle–manipulator system (UVMS), which is composed of an underwater vehicle and one or more manipulators, is designed to expand the capacity and enhance the functionality of underwater operations. However, the attached manipulator(s) increases the complexity of the system, which brings new challenges in terms of whole-system design, multibody kinematics and dynamics, and control algorithm design [2]. Significant efforts have been devoted to developing a reliable and capable UVMS that can operate in harsh underwater conditions.
Precise trajectory tracking control is the key technology for improving the autonomy of UVMSs. Generally, the control algorithms of UVMSs fall into two main categories. In the most straightforward approach, the control algorithms of the main vehicle and the manipulator are implemented for the corresponding degrees of freedom (DOFs) independently. In [3], the authors combined a standard proportional–integral–derivative (PID) controller with a prior model to track the desired vehicle velocity, while a cascade joint controller was employed to control the velocity and angle of each joint. This approach does not consider the couplings, so the system performance is necessarily limited. In improved methods, the dynamic couplings between the vehicle and manipulator are treated as disturbances and handled by feedforward compensation [4]. In [5], the authors discussed the coupling property and proposed a Slotine sliding mode approach to reduce the coupling effect. In [6], active compensation is implemented in the control law to counteract the undesirable effect by measuring or estimating the interaction force from the manipulator. In general, the simplicity of this approach makes it attractive, and some control techniques developed for manipulators [7,8] and unmanned aerial vehicles (UAVs) [9] can be readily referenced and transferred. However, it implicitly relies on the main vehicle being much heavier than the attached manipulator(s), so that the reactive forces from the manipulator(s) remain small relative to the vehicle's own weight, as in the Phoenix+SMART3S [2].
Another control scheme is to model the vehicle and the attached manipulator(s) as a serial chain of rigid bodies by regarding the UVMS as a whole system. However, more challenges will arise due to the increased complexity of the whole system, such as inaccurate parameters of the hydrodynamic effects, kinematic redundancy of the system, etc.
Several classical control methodologies have been successfully applied to trajectory tracking in UVMSs, including robust PID control in [10,11,12,13,14], sliding mode control (SMC) in [15,16,17,18] and adaptive controllers in [19,20,21,22,23], which are capable of effectively addressing external disturbances and unmodeled dynamics. Furthermore, advanced nonlinear control strategies, such as H∞ control in [24,25], prescribed performance control in [26,27] and neural network techniques in [28,29,30,31], have also been implemented to control underwater vehicles. However, these methods demonstrate limited capabilities in handling state constraints. Consequently, model predictive control (MPC) has been extensively adopted in the field of underwater vehicles due to its inherent constraint-handling advantages, as shown in [32,33,34,35,36].
The previously mentioned works focus on tracking trajectories defined in terms of position, which is difficult to measure accurately in the underwater environment. Vision sensors are an important means of localization in GPS-denied environments, and vision-based control, referred to as visual servoing, has been widely used in underwater vehicles for dynamic positioning [37,38], path following [39], docking [40,41] and underwater manipulation [42,43].
The capacity to handle interactions is one of the fundamental requirements for accomplishing a manipulation task successfully [44]. The contact force at the manipulator's end-effector is employed to describe the state of the interaction. Thus, force control strategies are often employed in scenarios where a robot intervenes in the environment, and they can be classified into 'direct force control' and 'indirect force control' according to whether the force is controlled directly or not [45]. Both are mainly constructed on top of position control schemes and have been widely used for fixed-base manipulators.
However, the velocity and position of free-floating robots are difficult to measure accurately and rapidly, making it challenging to practically implement force control algorithms for freely moving manipulators [46], which require even higher precision and update rates than position control. Therefore, current force control studies on mobile robots, including wheeled mobile robots [47,48,49] and aerial manipulators [50,51], remain largely confined to laboratory environments, with limited real-world applications. The same situation applies to UVMSs, for both impedance control [52,53] and direct force control [54,55,56].
In line with the distinction between position-based visual servoing (PBVS) and image-based visual servoing (IBVS), force control strategies can also bypass the reconstruction of the position and directly regulate contact forces using visual information. In image-based force control, the translational positions are replaced by visual features; then, visual features and forces are simultaneously tracked in a unique control law to handle the interaction with the environment. A vision-based impedance force control algorithm was introduced for a laboratory injection system in [57]. The injection force was estimated by utilizing visual feedback via the concept of visual–force integration. Meanwhile, three types of hybrid visual–impedance control schemes that directly relate image feature errors to external forces by projecting the force component onto the image plane were proposed in [58]. This approach enables stable physical interaction between an unmanned aerial manipulator and its task environment.
Challenges also arise from the complexity of handling the multiple DOFs of the whole UVMS system. A set of tasks (SOT) is expected to be carried out simultaneously with prescribed priorities by utilizing the redundant DOFs of the UVMS. Task-based control methods for UVMSs have been proposed in [59,60,61] with consideration of the priorities between tasks. In [62], a whole-body control (WBC) framework for a dual-arm UVMS is proposed to deal with multidimensional inequality control objectives; in that work, WBC was first introduced into the control of UVMSs. Although it is similar to some task-priority-based control methods, it offers a distinct perspective on the problem by utilizing the whole system's DOFs.
Considering the advantages of IBVS, a vision-based force control task is established without the position reconstruction process. By employing image moments instead of point features, this method achieves enhanced robustness. Furthermore, a multitask framework considering smooth transitions can preserve the priority of the force tracking task while effectively mitigating fluctuations.
In our previous work, an image-based visual servoing method for UVMSs was introduced [42]. That hierarchical control architecture is composed of a kinematic model predictive IBVS controller and a dynamic velocity controller. In this work, a whole-body vision/force control framework is proposed to simultaneously track reference visual and force trajectories, fully exploiting the whole-body DOFs of the UVMS. The contributions of this work are summarized as follows:
  • A whole-body multitask control framework is proposed to simultaneously accomplish the visual trajectory tracking and contact force tracking of a UVMS. This approach allows flexible task combinations in different scenarios while consistently maintaining strict priorities.
  • By reprojecting the image points and choosing proper image moment combinations, decoupled visual features aligned along the tangential and normal directions of the target plane can be obtained, which enhances the independence among individual tasks.
  • A continuous transition method for SOT switching is presented to optimize the performance in the transition process, particularly by effectively reducing fluctuations in the contact force.
This paper is organized as follows. The system formulation and visual servo model are introduced in Section 2. Several tasks for hybrid vision/force control based on WBC are established and a hierarchical WBC framework is reviewed in Section 3. In Section 4, a continuous transition method for SOT switching in WBC is proposed. The simulation results are demonstrated in Section 5. Lastly, we draw the conclusions of our paper and discuss further research problems.

2. System Model and Problem Statement

In this part, the whole system and the modeling process of the UVMS are first reviewed. Then, the contact model and the visual servo model for the hybrid vision/force control are introduced as the research basis.

2.1. UVMS Model

This work focuses on the control and analysis of a free-floating UVMS equipped with an n-link manipulator. In Figure 1, the schematic provides an illustration of the system and the frame definitions. The notations of the relevant frames are listed in Table 1.
The modeling process of the UVMS follows the approach described in [2], in which the system is regarded as a serial chain of rigid bodies with hydrodynamics. In this part, a brief summary is provided for completeness, including the kinematics and the dynamics. The global pose vector of the UVMS is defined as
$$ \eta = \begin{bmatrix} \eta_1^T & \eta_2^T & q^T \end{bmatrix}^T \in \mathbb{R}^{6+n} $$
including the position vector $\eta_1 = [x, y, z]^T \in \mathbb{R}^3$; the orientation vector $\eta_2 = [\psi, \theta, \varphi]^T \in \mathbb{R}^3$, described by Euler angles; and the joint vector $q = [q_1, q_2, \ldots, q_n]^T \in \mathbb{R}^n$, representing the joint angles of the manipulator with $n$ joints.
The following equation defines the kinematics of the UVMS:
$$ \begin{bmatrix} \dot{\eta}_1 \\ \dot{\eta}_2 \\ \dot{q} \end{bmatrix} = \begin{bmatrix} R_B^I(\eta_2) & O & O \\ O & J_{k,e}(\eta_2) & O \\ O & O & I_n \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \dot{q} \end{bmatrix} $$
where $\zeta = [v_1^T, v_2^T, \dot{q}^T]^T \in \mathbb{R}^{6+n}$ is the system velocity vector, consisting of the body-fixed linear velocity $v_1$ and angular velocity $v_2$ of the base vehicle and the joint velocity $\dot{q}$. $R_B^I(\eta_2)$ is the rotation matrix expressing the transformation from the body-fixed frame to the inertial frame, and $J_{k,e}(\eta_2)$ is a Jacobian matrix describing the connection between $\dot{\eta}_2$ and $v_2$.
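As a concrete illustration of this kinematic mapping, the following Python sketch evaluates $R_B^I(\eta_2)$ and $J_{k,e}(\eta_2)$. The paper's simulation is implemented in MATLAB, and the roll–pitch–yaw (ZYX) Euler convention assumed here is an illustrative choice that may differ from the authors' ordering.

```python
import numpy as np

def uvms_kinematics(eta2, v1, v2, qdot):
    """Map body-fixed velocities to inertial-frame pose rates (Section 2.1 kinematics).
    Assumes eta2 = (roll, pitch, yaw) with the ZYX convention."""
    phi, theta, psi = eta2
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)

    # Rotation from the body-fixed frame B to the inertial frame I
    R_BI = np.array([
        [cps*cth, cps*sth*sph - sps*cph, cps*sth*cph + sps*sph],
        [sps*cth, sps*sth*sph + cps*cph, sps*sth*cph - cps*sph],
        [-sth,    cth*sph,               cth*cph]])

    # Euler-angle rate Jacobian J_{k,e}(eta2): body angular rate -> Euler-angle rates
    J_ke = np.array([
        [1, sph*np.tan(theta), cph*np.tan(theta)],
        [0, cph,              -sph],
        [0, sph/cth,           cph/cth]])

    eta1_dot = R_BI @ np.asarray(v1)
    eta2_dot = J_ke @ np.asarray(v2)
    return eta1_dot, eta2_dot, np.asarray(qdot)   # joint rates pass through (identity block)
```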
Define $\eta_e = [\eta_{e1}^T, \eta_{eq}^T]^T \in \mathbb{R}^7$ as the pose vector of the end-effector in the inertial frame $I$, where $\eta_{e1} \in \mathbb{R}^3$ is the position and $\eta_{eq} \in \mathbb{R}^4$ is the orientation in quaternion form. According to [2], we have
$$ \begin{bmatrix} \dot{\eta}_{e1} \\ w_e \end{bmatrix} = J_m^I(\eta_2, q)\, \zeta $$
where $\dot{\eta}_{e1}$ and $w_e$ are the end-effector velocities expressed in the $I$-frame, and $J_m^I(\eta_2, q)$ is the mapping matrix, which maps the $(6+n)$-dimensional system velocities into the 6-dimensional end-effector velocities.
Remark 1. 
The system is redundant as  ( 6 + n ) > 6 , which means that kinematic redundancy can be exploited to achieve additional task(s) besides the expected end-effector task.
The velocity of the end-effector in the reference frame $E$ is defined as $v_e = [v_{e1}^T, v_{e2}^T]^T$. Then, the relationship between $v_e$ and $\zeta$ can be obtained by
$$ v_e = \begin{bmatrix} R_I^E(\eta_{eq}) & O \\ O & R_I^E(\eta_{eq}) \end{bmatrix} \begin{bmatrix} \dot{\eta}_{e1} \\ w_e \end{bmatrix} = J_m^E(\eta_2, q, \eta_{eq})\, \zeta $$
where $R_I^E(\eta_{eq})$ is the rotation matrix expressing the transformation from the inertial frame $I$ to the end-effector frame $E$.
In the underwater environment, the total forces and moments acting on the generic part of the serial chain, including hydrodynamics, can be derived. Then, we write them in matrix form, and the dynamic equation of the UVMS is given by
$$ M(q)\dot{\zeta} + C(q,\zeta)\zeta + D(q,\zeta)\zeta + g(q, R_B^I) = \tau + (J_m^I)^T f^I + (J_m^I)^T f_d^I $$
where $M \in \mathbb{R}^{(6+n)\times(6+n)}$ and $C \in \mathbb{R}^{(6+n)\times(6+n)}$ are the inertia and Coriolis/centripetal matrices, including the added-mass contributions from hydrodynamic effects, and $D \in \mathbb{R}^{(6+n)\times(6+n)}$ represents the linear and quadratic damping matrix. $g$ is the restoring force vector due to gravity and buoyancy. $\tau = [\tau_v^T, \tau_q^T]^T \in \mathbb{R}^{6+n}$ is the vector of control inputs, including the vehicle control input $\tau_v$ and the joint torque vector $\tau_q$. $f^I$ and $f_d^I$ are the external contact force and the disturbance force expressed in the inertial frame, respectively. The global pose vector $\eta$, the system velocity $\zeta$ and the control input $\tau$ are constrained as
$$ \eta_i^{min} \le \eta_i \le \eta_i^{max}, \quad i = 1, 2, \ldots, 6+n $$
$$ \zeta_i^{min} \le \zeta_i \le \zeta_i^{max}, \quad i = 1, 2, \ldots, 6+n $$
$$ \tau_i^{min} \le \tau_i \le \tau_i^{max}, \quad i = 1, 2, \ldots, 6+n $$
where $(\cdot)^{min}$ and $(\cdot)^{max}$ are the lower and upper bounds of the corresponding variables.

2.2. Contact Model

An object–environment contact model describes the relationship between the contact force and the deformation, which is a very complex physical process. In [63], a stiffness–damping environmental model that matches real contact behavior well is used, as shown in Figure 2.
The dynamic relationship is simplified as
$$ f = \begin{cases} 0, & x \le x_e \\ b_e \dot{x} + k_e (x - x_e), & x > x_e, \end{cases} $$
where $b_e$ and $k_e$ are the damping and stiffness coefficients; $x$ and $x_e$ are the actual position and the equilibrium position, respectively; and $f$ is the contact force.
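For reference, a minimal Python sketch of this piecewise spring–damper model is given below; the parameter values in the usage line are the ones adopted later in the simulation of Section 5.

```python
def contact_force(x, x_dot, x_e, k_e, b_e):
    """Stiffness-damping environment model: zero force before contact,
    spring-damper reaction once the end-effector moves past x_e."""
    if x <= x_e:
        return 0.0
    return b_e * x_dot + k_e * (x - x_e)

# Example with the values used in Section 5: x_e = 1.14, k_e = 1000, b_e = 300
f_n = contact_force(x=1.15, x_dot=0.01, x_e=1.14, k_e=1000.0, b_e=300.0)
```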

2.3. Modeling of Visual Servo System

Image moments are selected as visual features due to their favorable decoupling characteristics. To further isolate the force control task from the visual servo tasks, an image reprojection technique is employed, projecting the image points onto a virtual image plane that is always parallel to the target plane.
The detailed reprojection process can be found in [36] and is omitted here. For any image point, its original pixel coordinate is represented by $p_c = (u_c, n_c)^T$ and its reprojected coordinate in frame $A$ is written as $p_a = (u_a, n_a)^T$. Considering that $N$ virtual image points $p_{a,i}\ (i = 1, 2, \ldots, N)$ constitute a visual target $V_t$ on the virtual image plane, we can define the image feature vector
$$ s_a = [x_n, y_n, a_n]^T, $$
where $x_n$, $y_n$ and $a_n$ are three normalized image moments [64] calculated from the central moments.
Given that the virtual camera frame and the target frame are always aligned, it is easy to obtain the body-fixed velocity of frame A by
$$ v_a = \begin{bmatrix} v_{a1} \\ v_{a2} \end{bmatrix} = \begin{bmatrix} R_C^A v_{c1} \\ 0_{3\times1} \end{bmatrix} = \begin{bmatrix} R_C^A & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} \end{bmatrix} \begin{bmatrix} v_{c1} \\ v_{c2} \end{bmatrix} = J_c^a v_c $$
and the rotation matrix between frame C and A is given by
$$ R_C^A = (R_A^I)^T R_C^I. $$
The dynamic relationship between s a and the relative motion of frame A and frame C can be described by
$$ \dot{s}_a = L_a v_a = L_a J_c^a v_c $$
where $v_a = [v_{a1}^T, v_{a2}^T]^T$ is the velocity of the virtual camera in the reference frame $A$, including the linear velocity $v_{a1}$ and angular velocity $v_{a2}$, and $v_c$ is the velocity of the real camera. $L_a \in \mathbb{R}^{3\times6}$ is the interaction matrix, which is defined as
$$ L_a = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}. $$
It is clear that L a shows good decoupling characteristics between the three translational motions, and the image moments computed from the reprojected image are fully decoupled.
Since the camera is fixed on the end-effector, the relationship between the camera velocity and the end-effector velocity can be obtained by the velocity Jacobian matrix
$$ v_c = J_e^c v_e $$
with
$$ J_e^c = \begin{bmatrix} I_{3\times3} & S(l_c) \\ O_{3\times3} & I_{3\times3} \end{bmatrix} \in \mathbb{R}^{6\times6} $$
where S ( · ) is the skew-symmetric matrix operator, and l c is the position vector of the camera relative to the end-effector in frame E.
By substituting (4), (9) and (13) into (11), the derivative of visual features and the system velocity can be connected by
$$ \dot{s}_a = L_a J_c^a J_e^c J_m^E\, \zeta = J_{s_a,\xi}(\eta_2, \xi, q)\, \zeta, $$
where $J_{s_a,\xi} \in \mathbb{R}^{3\times(6+n)}$ is the Jacobian matrix associated with $\dot{s}_a$.
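The chained mapping above is straightforward to assemble numerically. The sketch below (Python/NumPy rather than the MATLAB used by the authors) composes the individual blocks into the feature Jacobian; the sign convention for $S(l_c)$ follows the $J_e^c$ block defined above and flips if $l_c$ is measured in the opposite direction.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S(v) such that S(v) @ w = v x w."""
    x, y, z = v
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def feature_jacobian(L_a, R_CA, R_IE, J_mI, l_c):
    """Compose s_a_dot = L_a J_c^a J_e^c J_m^E zeta from its individual blocks.
    L_a  : 3x6 interaction matrix
    R_CA : rotation from the real camera frame C to the virtual frame A
    R_IE : rotation from the inertial frame I to the end-effector frame E
    J_mI : 6x(6+n) mapping from system velocities to end-effector velocities in I
    l_c  : camera position relative to the end-effector, expressed in E"""
    J_ca = np.zeros((6, 6))
    J_ca[:3, :3] = R_CA                       # only the translational part is kept
    J_ec = np.block([[np.eye(3), skew(l_c)],  # sign of S(l_c) follows the text above
                     [np.zeros((3, 3)), np.eye(3)]])
    J_mE = np.block([[R_IE, np.zeros((3, 3))],
                     [np.zeros((3, 3)), R_IE]]) @ J_mI
    return L_a @ J_ca @ J_ec @ J_mE           # shape: 3 x (6 + n)
```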

2.4. Problem Statement

The control objective of this work is to simultaneously track the reference visual trajectory of the UVMS while maintaining a desired contact force with the target plane.

3. Vision/Force Hybrid Control for UVMSs

A hybrid vision/force control framework based on WBC for UVMSs is established in this part. The operation is decomposed into several tasks, and a hierarchical optimization-based WBC is established to construct a unified output among the tasks.

3.1. Hybrid Vision/Force Control Tasks for UVMS

The general definition of a task includes the generic task variable $x_t$ and its corresponding Jacobian matrix $J_t(\eta, q)$ relating its time derivative to the system velocity $\zeta$. The differential relationship is then given by
$$ \dot{x}_t = J_t(\eta, q)\, \zeta $$
where the subscript $(\cdot)_t$ indicates a variable related to the task. Thus, the task is accomplished by finding an appropriate $\zeta^*$ that tracks the reference $\dot{x}_t^*$, which is a simple inverse kinematic (IK) problem, as (15) is defined at the kinematic level. However, such tasks are not limited to IK; they can also be employed at the dynamic level.
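For a single task of this form, the reference $\zeta^*$ can be recovered with a (damped) pseudo-inverse, as in the sketch below; the damping term is an addition for numerical robustness and is not part of the formulation above.

```python
import numpy as np

def solve_single_task(J_t, x_t_dot_ref, damping=1e-3):
    """Resolve one kinematic task: find zeta* such that J_t @ zeta* tracks x_t_dot_ref.
    A damped least-squares inverse keeps the solution bounded near singularities."""
    m = J_t.shape[0]
    J_pinv = J_t.T @ np.linalg.inv(J_t @ J_t.T + damping**2 * np.eye(m))
    return J_pinv @ x_t_dot_ref
```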
In our work, the tasks in hybrid vision/force tracking for UVMSs will be defined as follows.

3.1.1. End-Effector Orientation

The orientation of the end-effector is represented in unit quaternion form, and the task of end-effector orientation is implemented as
$$ e_{tq} = \mathrm{qerr}(\eta_{eq}^*, \eta_{eq}) $$
$$ J_{tq} = J_{eq} $$
$$ \dot{x}_{tq}^* = \lambda_{tq}\, e_{tq}, $$
where $\eta_{eq}^*$ and $e_{tq}$ are the reference value and the state error of the end-effector orientation. The quaternion error is calculated by $\mathrm{qerr}(\cdot)$, which has the advantage of working directly with the end-effector angular velocity $w_e$ and the corresponding orientation Jacobian $J_{eq}$ extracted from $J_m^I$ [2]. $\lambda_{tq}$ is a positive control gain.
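The article does not spell out $\mathrm{qerr}(\cdot)$; one common convention, sketched below in Python, takes the vector part of the relative quaternion $\eta_{eq}^* \otimes \eta_{eq}^{-1}$. The sign handling and the $[w, x, y, z]$ component ordering are assumptions and may differ from the implementation in [2].

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of unit quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qerr(q_des, q):
    """Orientation error as the vector part of q_des * q^{-1}."""
    q_rel = quat_mul(q_des, quat_conj(q))
    if q_rel[0] < 0:              # enforce the shortest rotation
        q_rel = -q_rel
    return q_rel[1:4]
```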

3.1.2. End-Effector Trajectory of UVMS

Visual servoing is employed to track the reference trajectory of the end-effector. The tangential movement of the end-effector along the target plane is described by $s_{ps} = (x_n, y_n)^T$, and the task is defined as
$$ e_{tps} = s_{ps}^* - s_{ps} $$
$$ J_{tps} = J_{s_a}(x_n, y_n) $$
$$ \dot{x}_{tps}^* = \lambda_{tps}\, e_{tps}. $$
The normal motion is associated with $a_n$ and controlled separately because it is highly coupled with the contact force $f_n$ in the normal direction of the plane; the task for $f_n$ will be defined later.
$$ e_{tpn} = a_n^* - a_n $$
$$ J_{tpn} = J_{s_a}(a_n) $$
$$ \dot{x}_{tpn}^* = \lambda_{tpn}\, e_{tpn}. $$

3.1.3. Normal Contact Force

The contact force along the normal direction, $f_n$, is considered in this work. The reprojection process also allows the Jacobian matrix of $a_n$ to be reused, and the control task is obtained as
$$ e_{tf_n} = f_n^* - f_n $$
$$ J_{tf_n} = J_{s_a}(a_n) $$
$$ \dot{x}_{tf_n}^* = \lambda_{tf_n}\, e_{tf_n}. $$
Only one of the two tasks $a_n$ and $f_n$, which act along the same direction, is activated at any given time.

3.1.4. Main Vehicle Attitude Holding

In most cases, the main vehicle is preferably kept horizontal because of the requirements of the attached sensors (e.g., the Doppler velocity logger (DVL) and camera). By defining $\eta_{2,pr} = [\psi, \theta]^T \in \mathbb{R}^2$ and the corresponding Jacobian $J_{\eta_{2,pr}}$, a horizontal attitude holding task is considered.
$$ e_{tpr} = \eta_{2,pr}^* - \eta_{2,pr} $$
$$ J_{tpr} = J_{\eta_{2,pr}} $$
$$ \dot{x}_{tpr}^* = \lambda_{tpr}\, e_{tpr} $$

3.1.5. Camera’s Field of View

The image features are constrained states limited by the camera's field of view (FOV), which leads to an inequality-constrained task. An indirect method is to convert the inequality constraints into equality tasks. Considering any constrained state $x_c \in [x_{cl}, x_{cu}]$, its lower-bound rejection task can be formulated as
$$ e_{tcl} = \begin{cases} x_{cl} + d^* - x_c, & x_c < x_{cl} + d^* \\ \emptyset, & x_c \ge x_{cl} + d^* \end{cases} $$
$$ J_{tc} = J_c(x_c) $$
$$ \dot{x}_{tcl}^* = \lambda_{tcl}\, e_{tcl} $$
where $d^*$ is a safety buffer and $e_{tcl} = \emptyset$ means that the task is deactivated when unnecessary. In the same way, the upper-bound rejection task is defined as
$$ e_{tcu} = \begin{cases} x_{cu} - d^* - x_c, & x_c > x_{cu} - d^* \\ \emptyset, & x_c \le x_{cu} - d^* \end{cases} $$
$$ J_{tc} = J_c(x_c) $$
$$ \dot{x}_{tcu}^* = \lambda_{tcu}\, e_{tcu} $$
In this way, the constraint is ensured by equality tasks. However, it is more intuitive to express such tasks directly as inequalities:
$$ \dot{x}_{tcl} + d^* \le J_{tc}\, \zeta \le \dot{x}_{tcu} - d^*. $$
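A possible way to turn such a bound into velocity-level limits is sketched below (Python); the specific activation logic and the gains $\lambda_l$, $\lambda_u$ are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def fov_velocity_bounds(x_c, x_cl, x_cu, d_star, lam_l, lam_u):
    """Velocity-level bounds keeping x_c inside [x_cl, x_cu] with buffer d*.
    Outside the buffer zones the task is inactive (bounds are +/- inf)."""
    lo, hi = -np.inf, np.inf
    if x_c < x_cl + d_star:                  # lower-bound rejection active
        lo = lam_l * (x_cl + d_star - x_c)   # positive: push the feature back inside
    if x_c > x_cu - d_star:                  # upper-bound rejection active
        hi = lam_u * (x_cu - d_star - x_c)   # negative: push the feature downward
    return lo, hi
```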
For convenience, the $i$th task is defined as $L_i(J_{ti}, \dot{x}_{ti}^l, \dot{x}_{ti}^u)$, including the Jacobian matrix and the expected lower and upper derivatives ($\dot{x}_{ti}^l = \dot{x}_{ti}^u$ for an equality task). For two tasks $L_i$ and $L_j$, if $L_i$ has a higher priority than $L_j$, this is denoted $L_i \prec L_j$; otherwise, $L_j \prec L_i$. In a hierarchical controller, $N$ tasks with the priority order $L_1 \prec L_2 \prec \cdots \prec L_N$ are combined into an SOT, $S = \{L_1, \ldots, L_N\}$, and deployed to accomplish the whole mission collectively.

3.2. Hierarchical Optimization-Based Whole-Body Control

A single task is a simple inverse kinematic problem, which can be solved using a pseudo-inverse matrix. When considering $n$ tasks with different priorities, a closed-form WBC based on null-space projection can find a solution that always satisfies the hierarchy, but it can only handle equality constraints. To deal with inequality constraints, an optimization-based WBC (OWBC) is constructed.
Generally, the problem can be expressed in a general optimization-based form:
$$ \mathcal{P}: \min_{\zeta}\ \| J_t \zeta - \dot{x}_t^* \|^2 $$
When considering $n$ tasks with different priorities, a weighting matrix $Q_n$ can be added to the optimization problem to encode the precedence rules:
$$ \mathcal{P}: \min_{\zeta}\ (J_n \zeta - \dot{x}_n)^T Q_n (J_n \zeta - \dot{x}_n), $$
where
$$ J_n = \begin{bmatrix} J_{t1} \\ \vdots \\ J_{tn} \end{bmatrix}, \quad \dot{x}_n = \begin{bmatrix} \dot{x}_{t1}^* \\ \vdots \\ \dot{x}_{tn}^* \end{bmatrix} $$
are the stacked Jacobian matrices and reference task rates, and
$$ Q_n = \mathrm{blockdiag}(Q_{t1}, \ldots, Q_{tn}) $$
consists of the weighting matrices of the individual tasks.
To obtain the solution to an optimization problem in a strictly hierarchical manner, a hierarchical optimization-based WBC (HWBC) can be formulated as in the following recursive optimization problem.
For $n = 1, \ldots, N$, the optimization problem is constructed and solved in recursive form:
$$ \mathcal{P}_o: \min_{\zeta}\ \| w_n \|^2 $$
s.t.
$$ \dot{x}_{tn}^l \le J_n \zeta - w_n \le \dot{x}_{tn}^u $$
$$ \dot{x}_{tk}^l \le J_k \zeta - w_k^* \le \dot{x}_{tk}^u, \quad k = 1, \ldots, n-1 $$
where $w^* = \{w_1^*, \ldots, w_{n-1}^*\}$ are the slack variables obtained after solving the first $n-1$ tasks, and $\dot{x}_{tn}^l$ and $\dot{x}_{tn}^u$ are the lower and upper constraints of task $n$. The implementation, following this condensed procedure, is shown in Algorithm 1.
Algorithm 1 HWBC for UVMS
Input:  Initial SOT of the UVMS: $S = \{L_1, \ldots, L_N\}$;
Output:  Reference value: $\zeta_N^*$;
       Initialization: $w^* = \{\}$;
       for each $i \in [1, N]$ do
           Set $n = i$ and initialize the optimization problem (42);
           Solve the optimization problem and obtain $\zeta_n^*$;
           Compute the slack variable $w_n^* = J_n \zeta_n^* - \dot{x}_{tn}$;
           $w^* = \{w^*, w_n^*\}$;
       end for
       return $\zeta_N^*$;
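A compact Python sketch of Algorithm 1 is shown below, using cvxpy as a stand-in QP solver (the paper solves the problem with CasADi from MATLAB). The task representation and the assumption of finite bounds are illustrative choices, not the authors' implementation.

```python
import numpy as np
import cvxpy as cp

def hwbc(tasks, n_dof):
    """Hierarchical optimization-based WBC (Algorithm 1 sketch).
    `tasks` is an ordered list of (J, x_dot_lo, x_dot_hi), highest priority first;
    equality tasks simply use x_dot_lo == x_dot_hi. Bounds are assumed finite."""
    frozen = []                                   # (J_k, lo_k, hi_k, w_k*) of solved levels
    zeta_star = np.zeros(n_dof)
    for J, lo, hi in tasks:
        zeta = cp.Variable(n_dof)
        w = cp.Variable(J.shape[0])               # slack of the current priority level
        cons = [J @ zeta - w >= lo, J @ zeta - w <= hi]
        for Jk, lok, hik, wk in frozen:           # higher-priority tasks keep their optimum
            cons += [Jk @ zeta - wk >= lok, Jk @ zeta - wk <= hik]
        prob = cp.Problem(cp.Minimize(cp.sum_squares(w)), cons)
        prob.solve()
        zeta_star = zeta.value
        frozen.append((J, lo, hi, w.value))
    return zeta_star
```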

4. Continuous and Smooth Task Transitions for Vision/Force Control

If S does not change in the working process, the hierarchical controller can be solved based on the proposed HWBC. However, in practical operations, the controller needs to incorporate different SOTs at different stages, e.g., tasks being added or removed, priorities changing, etc. If  S a directly switches to S b , the control output may oscillate. Inspired by the continuous transition approach in [65], a smooth transition method for SOT switching in hybrid vision/force control for UVMSs is proposed in this part, which is called continuous HWBC (CWBC).

4.1. Task Addition/Removal

Consider a single task $L_p$ to be inserted into $S_1 = \{L_1, \ldots, L_N\}$ with priority $L_m \prec L_p \prec L_{m+1}$, so that $\mathcal{P}_1$ with $S_1$ switches to $\mathcal{P}_2$ with $S_2 = \{L_1, \ldots, L_p, \ldots, L_N\}$. The transition method can be expressed as
$$ \mathcal{P}_\alpha: \min_{\zeta}\ \| w_n \|^2 $$
s.t.
$$ \dot{x}_{tn}^l + w_n \le J_n \zeta \le \dot{x}_{tn}^u + w_n $$
$$ \dot{x}_{tk}^l + w_k^* \le J_k \zeta \le \dot{x}_{tk}^u + w_k^*, \quad k = m+1, \ldots, n-1 $$
$$ \alpha \dot{x}_{tp}^l + w_p^* \le J_p \zeta - (1-\alpha) J_p \zeta_1^* \le \alpha \dot{x}_{tp}^u + w_p^* $$
$$ \dot{x}_{tk}^l + w_k^* \le J_k \zeta \le \dot{x}_{tk}^u + w_k^*, \quad k = 1, \ldots, m $$
where $\alpha$ is the transition coefficient, increasing from 0 to 1. When $\alpha = 0$, Equation (45c) becomes
$$ w_p^* \le J_p \zeta - J_p \zeta_1^* \le w_p^* $$
in which case $\mathcal{P}_\alpha$ has the same solution as $\mathcal{P}_1$. Moreover, $\zeta_\alpha^* = \zeta_2^*$ when $\alpha = 1$, which marks the end of the transition process. By increasing $\alpha$ from 0 to 1 during the transition, the feasible solution set of $\mathcal{P}_1$ is moved to that of $\mathcal{P}_2$ continuously and smoothly. The implementation of (44) is described in Algorithm 2.
Algorithm 2 CWBC for UVMS: Adding a Task
Input:  Initial SOT of the UVMS: $S_1 = \{L_1, \ldots, L_N\}$; task to be added: $L_p(J_p, \dot{x}_{tp}^l, \dot{x}_{tp}^u)$;
Output:  Reference value: $\zeta_\alpha^*$;
  1:  if isempty($\alpha$) then
  2:      Initialization: $\alpha = 0$;
  3:  end if
  4:  Solve: $\zeta_1^* = solve(\mathcal{P}_1)$;
  5:  Set transition task:
        $L_{p,\alpha} = L_{p,\alpha}\big(J_p,\ \alpha \dot{x}_{tp}^l + (1-\alpha) J_p \zeta_1^*,\ \alpha \dot{x}_{tp}^u + (1-\alpha) J_p \zeta_1^*\big)$;
  6:  Set transition SOT for $\mathcal{P}_\alpha$:
       $S_\alpha = \{L_1, \ldots, L_m, L_{p,\alpha}, L_{m+1}, \ldots, L_N\}$;
  7:  Solve: $\zeta_\alpha^* = solve(\mathcal{P}_\alpha)$;
  8:  Update: $\alpha \leftarrow next(\alpha, t)$;
  9:  if $\alpha \ge 1$ then
 10:      End of transition;
 11:  end if
 12:  return $\zeta_\alpha^*$;
The transition coefficient α can be computed by a nonlinear activation function with respect to the start time of the period.
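The article does not specify which activation function is used; a smoothstep ramp is one plausible choice, sketched below in Python. The duration $T$ and the cubic polynomial are assumptions.

```python
import numpy as np

def next_alpha(t, t_start, T):
    """Smoothstep-style transition coefficient: 0 before t_start, 1 after t_start + T.
    Any monotone, smooth ramp from 0 to 1 satisfies the requirement stated above."""
    s = np.clip((t - t_start) / T, 0.0, 1.0)
    return 3.0 * s**2 - 2.0 * s**3        # zero slope at both ends of the transition
```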

4.2. Task Priority Swapping

Consider two existing tasks $L_i$ and $L_j$ with priority $L_i \prec L_j$ in $S_1 = \{L_1, \ldots, L_i, L_j, \ldots, L_N\}$. The following swapping process is constructed:
$$ \mathcal{P}_\alpha: \min_{\zeta}\ \| w_n \|^2 $$
s.t.
$$ \dot{x}_{tn}^l + w_n \le J_n \zeta \le \dot{x}_{tn}^u + w_n $$
$$ \dot{x}_{tk}^l + w_k^* \le J_k \zeta \le \dot{x}_{tk}^u + w_k^*, \quad k = j+1, \ldots, n-1 $$
$$ \alpha \dot{x}_{tj}^l + w_j^* \le J_j \zeta - (1-\alpha) J_j \zeta_{ij}^* \le \alpha \dot{x}_{tj}^u + w_j^* $$
$$ (1-\alpha) \dot{x}_{ti}^l + w_i^* \le J_i \zeta - \alpha J_i \zeta_{ji}^* \le (1-\alpha) \dot{x}_{ti}^u + w_i^* $$
$$ \dot{x}_{tk}^l + w_k^* \le J_k \zeta \le \dot{x}_{tk}^u + w_k^*, \quad k = 1, \ldots, i-1 $$
where $\alpha$ is again the transition coefficient increasing from 0 to 1, and $\zeta_{ij}^*$ is the solution of $\mathcal{P}_\alpha$ when (48c) and (48d) are set as
$$ \dot{x}_{tj}^l + w_j^* \le J_j \zeta \le \dot{x}_{tj}^u + w_j^* $$
$$ (1-\alpha) \dot{x}_{ti}^l + w_i^* \le J_i \zeta \le (1-\alpha) \dot{x}_{ti}^u + w_i^*, $$
while $\zeta_{ji}^*$ is the solution of $\mathcal{P}_\alpha$ when (48c) and (48d) become
$$ \dot{x}_{ti}^l + w_i^* \le J_i \zeta \le \dot{x}_{ti}^u + w_i^* $$
$$ \alpha \dot{x}_{tj}^l + w_j^* \le J_j \zeta \le \alpha \dot{x}_{tj}^u + w_j^*. $$
The implementation of (47) is described in Algorithm 3.
Algorithm 3 CWBC for UVMS: Switching Task Priorities
Input: Initial SOT of the UVMS: $S = \{L_1, \ldots, L_{i-1}, L_i, L_j, L_{j+1}, \ldots, L_N\}$; tasks to be switched: $L_i$ and $L_j$;
Output: Reference value: $\zeta_\alpha^*$
   1:  if isempty($\alpha$) then
   2:      Initialization: $\alpha = 0$;
   3:  end if
   4:  Set transition task: $L_{i,\alpha} = L_{i,\alpha}\big(J_i,\ (1-\alpha) \dot{x}_{ti}^l,\ (1-\alpha) \dot{x}_{ti}^u\big)$;
   5:  Set transition SOT for $\mathcal{P}_{ij,\alpha}$: $S_{ij,\alpha} = \{L_1, \ldots, L_{i-1}, L_{i,\alpha}, L_j, L_{j+1}, \ldots, L_N\}$;
   6:  Solve: $\zeta_{ij}^* = solve(S_{ij,\alpha})$;
   7:  Set transition task: $L_{j,\alpha} = L_{j,\alpha}\big(J_j,\ \alpha \dot{x}_{tj}^l,\ \alpha \dot{x}_{tj}^u\big)$;
   8:  Set transition SOT for $\mathcal{P}_{ji,\alpha}$: $S_{ji,\alpha} = \{L_1, \ldots, L_{i-1}, L_{j,\alpha}, L_i, L_{j+1}, \ldots, L_N\}$;
   9:  Solve: $\zeta_{ji}^* = solve(S_{ji,\alpha})$;
  10:  Set transition task: $L_{ci,\alpha} = L_{ci,\alpha}\big(J_i,\ (1-\alpha) \dot{x}_{ti}^l + \alpha J_i \zeta_{ji}^*,\ (1-\alpha) \dot{x}_{ti}^u + \alpha J_i \zeta_{ji}^*\big)$;
  11:  Set transition task: $L_{cj,\alpha} = L_{cj,\alpha}\big(J_j,\ \alpha \dot{x}_{tj}^l + (1-\alpha) J_j \zeta_{ij}^*,\ \alpha \dot{x}_{tj}^u + (1-\alpha) J_j \zeta_{ij}^*\big)$;
  12:  Set transition SOT for $\mathcal{P}_\alpha$: $S_\alpha = \{L_1, \ldots, L_{i-1}, L_{ci,\alpha}, L_{cj,\alpha}, L_{j+1}, \ldots, L_N\}$;
  13:  Solve: $\zeta_\alpha^* = solve(S_\alpha)$;
  14:  Update: $\alpha \leftarrow next(\alpha, t)$;
  15:  if $\alpha \ge 1$ then
  16:      End of transition;
  17:  end if
  18:  return $\zeta_\alpha^*$;
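As a small illustration of steps 10–11 of Algorithm 3, the helper below (Python; the tuple representation of a task is an illustrative assumption) builds the two blended tasks from the intermediate solutions $\zeta_{ij}^*$ and $\zeta_{ji}^*$.

```python
import numpy as np

def blended_swap_tasks(task_i, task_j, zeta_ij, zeta_ji, alpha):
    """Build L_ci,alpha and L_cj,alpha of Algorithm 3 (steps 10-11).
    A task is represented here as a tuple (J, x_dot_lo, x_dot_hi)."""
    J_i, lo_i, hi_i = task_i
    J_j, lo_j, hi_j = task_j
    bias_i = alpha * (J_i @ zeta_ji)          # pulls L_i toward the post-swap solution
    bias_j = (1.0 - alpha) * (J_j @ zeta_ij)  # pulls L_j toward the pre-swap solution
    L_ci = (J_i, (1.0 - alpha) * lo_i + bias_i, (1.0 - alpha) * hi_i + bias_i)
    L_cj = (J_j, alpha * lo_j + bias_j, alpha * hi_j + bias_j)
    return L_ci, L_cj
```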

5. Simulation Results

In this section, a visualized numerical simulation is conducted to evaluate the proposed controller. The system dynamics and control algorithm are computed in MATLAB (version R2024b), and the simulation visualization is performed in CoppeliaSim, as shown in Figure 3. The model of the main vehicle is based on the Kambara underwater vehicle model in [66], and a sway thruster is added to the thrust mapping matrix to render the vehicle fully actuated. The parameters of the attached manipulator are given in Appendix A. The links are assumed to be symmetric cylinders to simplify the calculation of the hydrodynamic forces, and the details can be found in SIMURV 4.1 [2]. The optimization problem is solved by the open-source solver CasADi [67].
The dynamic velocity controller is detailed in [42]. To verify the robustness of the controller, we apply a time-varying external force to the end-effector of the UVMS as a disturbance and add a 10% error to the model parameters. The external force is set as $f_d = [0,\ 10\sin(0.6t),\ 10\cos(0.6t),\ 0,\ 0,\ 0]^T$ (N) in the inertial frame. The parameters of the contact model are set as $x_e = 1.14$, $k_e = 1000$ and $b_e = 300$, and 10 dB Gaussian white noise is added to the force measurement in the simulation.
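A minimal sketch of how this disturbance and the noisy force measurement might be generated is given below (Python); interpreting the 10 dB figure as a signal-to-noise ratio is an assumption, since the article does not state the reference level.

```python
import numpy as np

rng = np.random.default_rng(0)

def disturbance_force(t):
    """Time-varying external disturbance applied at the end-effector (inertial frame)."""
    return np.array([0.0, 10.0 * np.sin(0.6 * t), 10.0 * np.cos(0.6 * t), 0.0, 0.0, 0.0])

def noisy_force(f_true, snr_db=10.0):
    """Corrupt a scalar force sample with zero-mean Gaussian noise at roughly snr_db."""
    sigma = abs(f_true) / (10.0 ** (snr_db / 20.0)) if f_true != 0.0 else 1e-3
    return f_true + rng.normal(0.0, sigma)
```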
Two common cases that require contact force tracking are considered in this section. One involves maintaining a contact force with a fixed point, such as button pressing. The other requires maintaining the desired contact force while moving the contact point, such as welding or flaw detection. The simulation is conducted based on these two cases.
The tasks for the hybrid vision/force control mission are listed in Table 2. Depending on whether task L f is active or not, the SOTs are defined as S 1 = { L c , L q , L p n , L p s , L p r } and S 2 = { L c , L f , L q , L p n , L p s , L p r } .

5.1. Fixed Contact Case

A reference trajectory for the end-effector in the fixed contact case is considered in this part. The initial states of the UVMS are set as
$$ \eta_2 = 0_{3\times1}, \quad s = [0.263,\ 0.530,\ 0.502]^T, \quad \eta_{eq} = [0.500,\ 0,\ 0.866,\ 0]^T, \quad q = [0,\ 60,\ 60,\ 0,\ 0,\ 0]^T \times \pi/180, \quad \zeta = 0_{12\times1} $$
and the fixed reference states are
$$ f_n^* = 10, \quad \theta^* = 0, \quad s^* = [0.86,\ 0.74,\ 0.3516]^T, \quad \eta_{eq}^* = [0.707,\ 0,\ 0.707,\ 0]^T. $$
The results of the fixed contact case based on OWBC and the proposed CWBC are shown in Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. Overall, both of the controllers are capable of driving the end-effector to the desired point and maintaining a stable contact force.
The image moments in the tangential directions of the target plane and the end-effector quaternions are demonstrated in Figure 4 and Figure 5. It can be observed that both the tangential image moments and the end-effector quaternions exhibit similar performance for the two controllers.
The main difference is reflected in the normal direction, as shown in Figure 6. After task $L_f$ is introduced into the SOT at about 4 s (triggered when $a_n < 0.55$), the contact force of CWBC (green curve) converges to the reference value more quickly and stably than that of OWBC (red curve). Although the force control task is the same in the two controllers, the better performance comes from the strictly hierarchical priorities in CWBC.
The system velocities are shown in Figure 7 and Figure 8. It can be observed that CWBC exhibits greater fluctuations in the velocities of the main vehicle, whereas OWBC shows larger fluctuations in the joint motions. This results from the fact that OWBC does not impose strict task prioritization; its weighted solution tends to reduce the overall effort by moving the joints first.

5.2. Moving Contact Case

A circular reference trajectory for the UVMS is considered and designed as
$$ f_n^* = 10, \quad \theta^* = 0, \quad x_n^* = 0.86 + 0.3\sin(0.3t + 2), \quad y_n^* = 0.74 + 0.3\cos(0.3t + 2), \quad a_n^* = 0.3516, \quad \eta_{eq}^* = [0.707,\ 0,\ 0.707,\ 0]^T $$
and the other conditions are the same as in Section 5.1. The results are shown in Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13. The contact condition between the environment and the end-effector is changed during the tracking process in order to evaluate the controller's adaptability to different situations; this is achieved by changing the equilibrium position and stiffness coefficient to $x_e = 1.12$ and $k_e = 600$.
In the moving case, similar performance can be seen, as shown in Figure 9 and Figure 10. One can notice that there is little difference between the curves of the tangential image moments and the end-effector quaternions for the two controllers.
The normal image moment and contact force curves are shown in Figure 11. Although both controllers can maintain the reference force, the result of CWBC is clearly better than that of OWBC, with fewer fluctuations, thanks to the strict priority of the force control task. At about 12 s, the normal contact forces suddenly increase when the contact condition changes and then gradually stabilize to the expected value. This confirms the adaptability of the two controllers.
Figure 12 and Figure 13 exhibit the results regarding the system velocities. The difference between the velocities of the main vehicle and the joint velocities remains similar to that described in Section 5.1, in which CWBC shows greater fluctuations from the main vehicle movements and OWBC shows larger fluctuations in joint motions.

5.3. Parameter Selection

The control performance of the proposed CWBC controller under different control gains in the force control task is examined in this part to illustrate the gain selection process. All the initial and reference states remain consistent with those specified in Section 5.1. The fixed contact case is repeated three times with different values of $\lambda_{tf_n}$: Case 1 with $\lambda_{tf_n}$, Case 2 with $\lambda_{tf_n}/5$ and Case 3 with $5\lambda_{tf_n}$.
The results of the three cases are shown in Figure 14. It can be observed that the performance differs notably. Specifically, Case 2 demonstrates slow tracking accompanied by gradual fluctuations, while Case 3 achieves fast tracking at the cost of noticeable oscillations. In general, Case 1 achieves a balanced overall performance in terms of tracking speed and fluctuations.

5.4. Comparative Results

A comparison of switching from $S_1$ to $S_2$ via smooth transitions with CWBC and via direct switching with HWBC is presented in this part. The reference trajectory and parameters are the same as in Section 5.2, without contact condition changes, and the results are shown in Figure 15 and Figure 16.
The contact forces are shown in Figure 15. In general, both of the curves converge to the desired values stably under the two controllers, while the two methods exhibit different behaviors during the transition phase. The force result in HWBC shows larger fluctuations, whereas CWBC approaches the desired value more smoothly. Significant fluctuations are observed in the velocity along the force control direction in Figure 16, which is consistent with the variation in the contact force. This demonstrates the effectiveness of the smooth transition method in CWBC to optimize the switching performance when the force control task is introduced in the SOT.
The root mean square error (RMSE), settling time (ST) and overshoot (OS) of the two controllers are reported in Table 3. The RMSE and ST values are relatively close, indicating that the convergence rates of the two controllers are similar. A notable distinction is observed in the OS: the OS of CWBC is 1.8 N, while that of HWBC is 8.1 N. This not only reflects improved tracking performance but is also particularly critical in intervention tasks, since excessive overshoot of the contact force could lead to system damage or instability.
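For completeness, a small Python sketch of how such metrics can be computed from a recorded force response is given below; the 5% settling band is an assumption, since the article does not state the band used.

```python
import numpy as np

def force_metrics(t, f, f_ref, settle_band=0.05):
    """RMSE, settling time and overshoot of a force response against a constant reference.
    Settling time is measured with a +/- settle_band * f_ref band."""
    err = f - f_ref
    rmse = np.sqrt(np.mean(err**2))
    overshoot = max(0.0, np.max(f) - f_ref)
    band = settle_band * abs(f_ref)
    outside = np.where(np.abs(err) > band)[0]        # samples still outside the band
    settling_time = 0.0 if outside.size == 0 else t[min(outside[-1] + 1, len(t) - 1)] - t[0]
    return rmse, settling_time, overshoot
```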

6. Conclusions

In this paper, we have proposed a whole-body vision/force controller to accomplish manipulation tasks with a UVMS. A hybrid visual servo model based on reprojected image moments and quaternions was first established. Then, several tasks were constructed to establish a hybrid vision/force control framework for UVMSs, and an HWBC framework was constructed to generate a solution that satisfies the priorities. To obtain a smooth output during the transition phase, a continuous transition method for SOT switching was presented. Simulation results verify the excellent performance and robustness of the proposed hybrid vision/force controller. In the near future, we will integrate trajectory planning into the controller and attempt to conduct real-world experiments.

Author Contributions

Conceptualization, J.L. and J.G.; Data curation, G.C.; Formal analysis, G.C.; Funding acquisition, F.Z.; Investigation, J.G.; Methodology, J.L.; Project administration, F.Z.; Software, J.L.; Supervision, J.G.; Validation, G.C. and J.G.; Visualization, J.L.; Writing—original draft, J.L.; Writing—review and editing, G.C. and J.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 51979228.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
UVMS    Underwater Vehicle–Manipulator System
DOF     Degree of Freedom
SOT     Set of Tasks
WBC     Whole-Body Control
OWBC    Optimization-Based WBC
HWBC    Hierarchical Optimization-Based WBC
CWBC    Continuous HWBC

Appendix A. Manipulator Parameters

Table A1. DH parameters.

Num.     a [m]    d [m]     α [rad]   θ [rad]
link 1   0.075    0         π/2       q1
link 2   0.305    0         0         q2
link 3   0.055    0         π/2       q3
link 4   0        0.305     π/2       q4
link 5   0        −0.057    π/2       q5
link 6   0        0.222     0         q6
The links are assumed to be symmetric cylinders to simplify the calculation of the hydrodynamic forces; the link parameters of the manipulator are listed in Table A2.
Table A2. Link parameters.

Num.     Mass [kg]   Radius [m]   Length [m]   Viscous Friction [Nms]
link 1   5           0.08         0.15         30
link 2   8           0.08         0.35         20
link 3   4           0.08         0.15         5
link 4   4           0.06         0.45         10
link 5   4           0.06         0.1          5
link 6   3           0.05         0.25         6

References

  1. Song, J.; He, X. Robust state estimation and fault detection for autonomous underwater vehicles considering hydrodynamic effects. Control Eng. Pract. 2023, 135, 105497. [Google Scholar] [CrossRef]
  2. Antonelli, G. Underwater Robots; Springer: Berlin/Heidelberg, Germany, 2018; Volume 123. [Google Scholar]
  3. Youakim, D.; Ridao, P.; Palomeras, N.; Spadafora, F.; Ribas, D.; Muzzupappa, M. Moveit!: Autonomous underwater free-floating manipulation. IEEE Robot. Autom. Mag. 2017, 24, 41–51. [Google Scholar] [CrossRef]
  4. Sivčev, S.; Coleman, J.; Omerdić, E.; Dooly, G.; Toal, D. Underwater manipulators: A review. Ocean Eng. 2018, 163, 431–450. [Google Scholar] [CrossRef]
  5. Dannigan, M.; Russell, G.T. Evaluation and reduction of the dynamic coupling between a manipulator and an underwater vehicle. IEEE J. Ocean Eng. 1998, 23, 260–273. [Google Scholar] [CrossRef]
  6. Ryu, J.-H.; Kwon, D.-S.; Lee, P.-M. Control of underwater manipulators mounted on an rov using base force information. In Proceedings of the 2001 ICRA, IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164), Seoul, Republic of Korea, 21–26 May 2001; IEEE: New York, NY, USA, 2001; Volume 4, pp. 3238–3243. [Google Scholar]
  7. Tian, G.; Tan, J.; Li, B.; Duan, G. Optimal fully actuated system approach-based trajectory tracking control for robot manipulators. IEEE Trans. Cybern. 2024, 54, 7469–7478. [Google Scholar] [CrossRef]
  8. Wang, F.; Chen, K.; Zhen, S.; Chen, X.; Zheng, H.; Wang, Z. Prescribed performance adaptive robust control for robotic manipulators with fuzzy uncertainty. IEEE Trans. Fuzzy Syst. 2023, 32, 1318–1330. [Google Scholar] [CrossRef]
  9. Xiong, J.-J.; Chen, Y. Rbfnn-based parameter adaptive sliding mode control for an uncertain tquav with time-varying mass. Int. J. Robust Nonlinear Control 2025, 35, 4658–4668. [Google Scholar] [CrossRef]
  10. Han, J.; Chung, W.K. Active use of restoring moments for motion control of an underwater vehicle-manipulator system. IEEE J. Ocean Eng. 2013, 39, 100–109. [Google Scholar] [CrossRef]
  11. Esfahani, H.N.; Azimirad, V.; Danesh, M. A time delay controller included terminal sliding mode and fuzzy gain tuning for underwater vehicle-manipulator systems. Ocean Eng. 2015, 107, 97–107. [Google Scholar] [CrossRef]
  12. Londhe, P.; Mohan, S.; Patre, B.; Waghmare, L. Robust task-space control of an autonomous underwater vehicle-manipulator system by pid-like fuzzy control scheme with disturbance estimator. Ocean Eng. 2017, 139, 1–13. [Google Scholar] [CrossRef]
  13. Martin, S.C.; Whitcomb, L.L. Nonlinear model-based tracking control of underwater vehicles with three degree-of-freedom fully coupled dynamical plant models: Theory and experimental evaluation. IEEE Trans. Control Syst. Technol. 2017, 26, 404–414. [Google Scholar] [CrossRef]
  14. David, J.; Bauer, R.; Seto, M. Coupled hydroplane and variable ballast control system for autonomous underwater vehicle altitude-keeping to variable seabed. IEEE J. Ocean Eng. 2017, 43, 873–887. [Google Scholar] [CrossRef]
  15. Patre, B.M.; Londhe, P.S.; Waghmare, L.M.; Mohan, S. Disturbance estimator based non-singular fast fuzzy terminal sliding mode control of an autonomous underwater vehicle. Ocean Eng. 2018, 159, 372–387. [Google Scholar] [CrossRef]
  16. Lakhekar, G.; Waghmare, L. Adaptive fuzzy exponential terminal sliding mode controller design for nonlinear trajectory tracking control of autonomous underwater vehicle. Int. J. Dyn. Control 2018, 6, 1690–1705. [Google Scholar] [CrossRef]
  17. Liu, X.; Zhang, M.; Chen, J.; Yin, B. Trajectory tracking with quaternion-based attitude representation for autonomous underwater vehicle based on terminal sliding mode control. Appl. Ocean. Res. 2020, 104, 102342. [Google Scholar] [CrossRef]
  18. Liu, X.; Zhang, M.; Yang, C.; Yin, B. Finite-time tracking control for autonomous underwater vehicle based on an improved non-singular terminal sliding mode manifold. Int. J. Control 2022, 95, 840–849. [Google Scholar] [CrossRef]
  19. Sun, Y.C.; Cheah, C.-C. Adaptive setpoint control of underwater vehicle-manipulator systems. In Proceedings of the IEEE Conference on Robotics, Automation and Mechatronics, Singapore, 1–3 December 2004; IEEE: New York, NY, USA, 2004; Volume 1, pp. 434–439. [Google Scholar]
  20. Antonelli, G.; Caccavale, F.; Chiaverini, S. Adaptive tracking control of underwater vehicle-manipulator systems based on the virtual decomposition approach. IEEE Trans. Robot. Autom. 2004, 20, 594–602. [Google Scholar] [CrossRef]
  21. Vito, D.D.; Palma, D.D.; Simetti, E.; Indiveri, G.; Antonelli, G. Experimental validation of the modeling and control of a multibody underwater vehicle manipulator system for sea mining exploration. J. Field Robot. 2021, 38, 171–191. [Google Scholar] [CrossRef]
  22. Antonelli, G.; Cataldi, E. Recursive adaptive control for an underwater vehicle carrying a manipulator. In Proceedings of the 22nd Mediterranean Conference on Control and Automation, Palermo, Italy, 16–19 June 2014; IEEE: New York, NY, USA, 2014; pp. 847–852. [Google Scholar]
  23. Li, J.; Huang, H.; Wan, L.; Zhou, Z.; Xu, Y. Hybrid strategy-based coordinate controller for an underwater vehicle manipulator system using nonlinear disturbance observer. Robotica 2019, 37, 1710–1731. [Google Scholar] [CrossRef]
  24. Han, J.; Park, J.; Chung, W.K. Robust coordinated motion control of an underwater vehicle-manipulator system with minimizing restoring moments. Ocean Eng. 2011, 38, 1197–1206. [Google Scholar] [CrossRef]
  25. Dai, Y.; Yu, S. Design of an indirect adaptive controller for the trajectory tracking of UVMS. Ocean Eng. 2018, 151, 234–245. [Google Scholar] [CrossRef]
  26. Heshmati-Alamdari, S.; Bechlioulis, C.P.; Karras, G.C.; Nikou, A.; Dimarogonas, D.V.; Kyriakopoulos, K.J. A robust interaction control approach for underwater vehicle manipulator systems. Annu. Rev. Control 2018, 46, 315–325. [Google Scholar] [CrossRef]
  27. Lin, Z.; Wang, H.D.; Karkoub, M.; Shah, U.H.; Li, M. Prescribed performance based sliding mode path-following control of UVMS with flexible joints using extended state observer based sliding mode disturbance observer. Ocean Eng. 2021, 240, 109915. [Google Scholar] [CrossRef]
  28. Xu, B.; Pandian, S.R.; Sakagami, N.; Petry, F. Neuro-fuzzy control of underwater vehicle-manipulator systems. J. Frankl. Inst. 2012, 349, 1125–1138. [Google Scholar] [CrossRef]
  29. Gao, J.; An, X.; Proctor, A.; Bradley, C. Sliding mode adaptive neural network control for hybrid visual servoing of underwater vehicles. Ocean Eng. 2017, 142, 666–675. [Google Scholar] [CrossRef]
  30. Li, X.; Zhu, D. An adaptive som neural network method for distributed formation control of a group of auvs. IEEE Trans. Ind. Electron. 2018, 65, 8260–8270. [Google Scholar] [CrossRef]
  31. Huang, H.; Li, J.; Zhang, G.; Tang, Q.; Wan, L. Adaptive recurrent neural network motion control for observation class remotely operated vehicle manipulator system with modeling uncertainty. Adv. Mech. Eng. 2018, 10, 1687814018804098. [Google Scholar] [CrossRef]
  32. Shen, C.; Shi, Y.; Buckham, B. Integrated path planning and tracking control of an auv: A unified receding horizon optimization approach. IEEE/ASME Trans. Mechatron. 2016, 22, 1163–1173. [Google Scholar] [CrossRef]
  33. Li, H.; Yan, W. Model predictive stabilization of constrained underactuated autonomous underwater vehicles with guaranteed feasibility and stability. IEEE/Asme Trans. Mechatron. 2016, 22, 1185–1194. [Google Scholar] [CrossRef]
  34. Dai, Y.; Yu, S.; Yan, Y.; Yu, X. An ekf-based fast tube mpc scheme for moving target tracking of a redundant underwater vehicle-manipulator system. IEEE/ASME Trans. Mechatron. 2019, 24, 2803–2814. [Google Scholar] [CrossRef]
  35. Wei, H.; Shen, C.; Shi, Y. Distributed lyapunov-based model predictive formation tracking control for autonomous underwater vehicles subject to disturbances. IEEE Trans. Syst. Man, Cybern. Syst. 2019, 51, 5198–5208. [Google Scholar] [CrossRef]
  36. Liu, J.; Gao, J.; Yan, W. Lyapunov-based model predictive visual servo control of an underwater vehicle-manipulator system. IEEE Trans. Intell. Veh. 2024. early access. [Google Scholar] [CrossRef]
  37. Gao, J.; Proctor, A.; Bradley, C. Adaptive neural network visual servo control for dynamic positioning of underwater vehicles. Neurocomputing 2015, 167, 604–613. [Google Scholar] [CrossRef]
  38. Krupínski, S.; Allibert, G.; Hua, M.-D.; Hamel, T. An inertial-aided homography-based visual servo control approach for (almost) fully actuated autonomous underwater vehicles. IEEE Trans. Robot. 2017, 33, 1041–1060. [Google Scholar] [CrossRef]
  39. Allibert, G.; Hua, M.-D.; Krupínski, S.; Hamel, T. Pipeline following by visual servoing for autonomous underwater vehicles. Control Eng. Pract. 2019, 82, 151–160. [Google Scholar] [CrossRef]
  40. Park, J.-Y.; Jun, B.-h.; Lee, P.-m.; Oh, J. Experiments on vision guided docking of an autonomous underwater vehicle using one camera. Ocean Eng. 2009, 36, 48–61. [Google Scholar] [CrossRef]
  41. Myint, M.; Yonemori, K.; Lwin, K.N.; Yanou, A.; Minami, M. Dual-eyes vision-based docking system for autonomous underwater vehicle: An approach and experiments. J. Intell. Robot. Syst. 2018, 92, 159–186. [Google Scholar] [CrossRef]
  42. Gao, J.; Liang, X.; Chen, Y.; Zhang, L.; Jia, S. Hierarchical image-based visual serving of underwater vehicle manipulator systems based on model predictive control and active disturbance rejection control. Ocean Eng. 2021, 229, 108814. [Google Scholar] [CrossRef]
  43. Huang, H.; Bian, X.; Cai, F.; Li, J.; Jiang, T.; Zhang, Z.; Sun, C. A review on visual servoing for underwater vehicle manipulation systems automatic control and case study. Ocean Eng. 2022, 260, 112065. [Google Scholar] [CrossRef]
  44. Siciliano, B.; Sciavicco, L.; Villani, L.; Oriolo, G. Force Control; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  45. Ott, C.; Mukherjee, R.; Nakamura, Y. Unified impedance and admittance control. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; IEEE: New York, NY, USA, 2010; pp. 554–561. [Google Scholar]
  46. Ridao, P.; Carreras, M.; Ribas, D.; Sanz, P.J.; Oliver, G. Intervention auvs: The next challenge. Annu. Rev. Control 2015, 40, 227–241. [Google Scholar] [CrossRef]
  47. Li, Z.; Ge, S.S.; Ming, A. Adaptive robust motion/force control of holonomic-constrained nonholonomic mobile manipulators. IEEE Trans. Syst. Man, Cybern. Part B (Cybern.) 2007, 37, 607–616. [Google Scholar] [CrossRef] [PubMed]
  48. Ali, M.A.; Radzak, M.S.; Mailah, M.; Yusoff, N.; Razak, B.A.; Karim, M.S.A.; Ameen, W.; Jabbar, W.A.; Alsewari, A.A.; Rassem, T.H.; et al. A novel inertia moment estimation algorithm collaborated with active force control scheme for wheeled mobile robot control in constrained environments. Expert Syst. Appl. 2021, 183, 115454. [Google Scholar] [CrossRef]
  49. Rani, S.; Kumar, A.; Kumar, N.; Singh, H.P. Adaptive robust motion/force control of constrained mobile manipulators using rbf neural network. Int. J. Dyn. Control 2024, 12, 3379–3391. [Google Scholar] [CrossRef]
  50. Malczyk, G.; Brunner, M.; Cuniato, E.; Tognon, M.; Siegwart, R. Multi-directional interaction force control with an aerial manipulator under external disturbances. Auton. Robot. 2023, 47, 1325–1343. [Google Scholar] [CrossRef]
  51. Meng, X.; He, Y.; Han, J.; Song, A. Physical interaction oriented aerial manipulators: Contact force control and implementation. IEEE Trans. Autom. Sci. Eng. 2024, 22, 4570–4582. [Google Scholar] [CrossRef]
  52. Cui, Y.; Podder, T.K.; Sarkar, N. Impedance control of underwater vehicle-manipulator systems (uvms). In Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No. 99CH36289), Kyongju, Republic of Korea, 17–21 October 1999; IEEE: New York, NY, USA, 1999; Volume 1, pp. 148–153. [Google Scholar]
  53. Heshmati-Alamdari, S.; Bechlioulis, C.P.; Karras, G.C.; Kyriakopoulos, K.J. Cooperative impedance control for multiple underwater vehicle manipulator systems under lean communication. IEEE J. Ocean. Eng. 2020, 46, 447–465. [Google Scholar] [CrossRef]
  54. Heshmati-alamdari, S.; Nikou, A.; Kyriakopoulos, K.J.; Dimarogonas, D.V. A robust force control approach for underwater vehicle manipulator systems. IFAC-PapersOnLine 2017, 50, 11197–11202. [Google Scholar] [CrossRef]
  55. Barbalata, C.; Dunnigan, M.W.; Petillot, Y. Position/force operational space control for underwater manipulation. Robot. Auton. Syst. 2018, 100, 150–159. [Google Scholar] [CrossRef]
  56. Cieślak, P.; Ridao, P. Adaptive admittance control in task-priority framework for contact force control in autonomous underwater floating manipulation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: New York, NY, USA, 2018; pp. 6646–6651. [Google Scholar]
  57. Huang, H.; Sun, D.; Mills, J.K.; Li, W.J.; Cheng, S.H. Visual-based impedance control of out-of-plane cell injection systems. IEEE Trans. Autom. Sci. Eng. 2009, 6, 565–571. [Google Scholar] [CrossRef]
  58. Lippiello, V.; Fontanelli, G.A.; Ruggiero, F. Image-based visual-impedance control of a dual-arm aerial manipulator. IEEE Robot. Autom. Lett. 2018, 3, 1856–1863. [Google Scholar] [CrossRef]
  59. Casalino, G.; Zereik, E.; Simetti, E.; Torelli, S.; Sperindé, A.; Turetta, A. Agility for underwater floating manipulation: Task & subsystem priority based control strategy. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; IEEE: New York, NY, USA, 2012; pp. 1772–1779. [Google Scholar]
  60. Simetti, E.; Casalino, G.; Torelli, S.; Sperinde, A.; Turetta, A. Floating underwater manipulation: Developed control methodology and experimental validation within the trident project. J. Field Robot. 2014, 31, 364–385. [Google Scholar] [CrossRef]
  61. Simetti, E.; Casalino, G.; Manerikar, N.; Sperindé, A.; Torelli, S.; Wanderlingh, F. Cooperation between autonomous underwater vehicle manipulations systems with minimal information exchange. In Proceedings of the OCEANS 2015-Genova, Genova, Italy, 18–21 May 2015; IEEE: New York, NY, USA, 2015; pp. 1–6. [Google Scholar]
  62. Simetti, E.; Casalino, G. Whole body control of a dual arm underwater vehicle manipulator system. Annu. Rev. Control 2015, 40, 191–200. [Google Scholar] [CrossRef]
  63. Li, L.; Wang, Z.; Zhu, G.; Zhao, J. Position-based force tracking adaptive impedance control strategy for robot grinding complex surfaces system. J. Field Robot. 2023, 40, 1097–1114. [Google Scholar] [CrossRef]
  64. Tahri, O.; Chaumette, F. Point-based and region-based image moments for visual servoing of planar objects. IEEE Trans. Robot. 2005, 21, 1116–1127. [Google Scholar] [CrossRef]
  65. Kim, S.; Jang, K.; Park, S.; Lee, Y.; Lee, S.Y.; Park, J. Continuous task transition approach for robot controller based on hierarchical quadratic programming. IEEE Robot. Autom. Lett. 2019, 4, 1603–1610. [Google Scholar] [CrossRef]
  66. Silpa-Anan, C. Autonomous Underwater Robot: Vision and Control. Master’s Thesis, The Australian National University, Canberra, Australia, 2001. [Google Scholar]
  67. Andersson, J.A.; Gillis, J.; Horn, G.; Rawlings, J.B.; Diehl, M. Casadi: A software framework for nonlinear optimization and optimal control. Math. Program. Comput. 2019, 11, 1–36. [Google Scholar] [CrossRef]
Figure 1. System model.
Figure 2. Environmental contact model.
Figure 3. Visualized simulation.
Figure 4. Tangential image moments.
Figure 5. End-effector quaternions.
Figure 6. Normal image moment and contact force.
Figure 7. Velocities of main vehicle.
Figure 8. Joint velocities.
Figure 9. Tangential image moments.
Figure 10. End-effector quaternions.
Figure 11. Normal image moment and contact force.
Figure 12. Velocities of main vehicle.
Figure 13. Joint velocities.
Figure 14. Contact force with different parameters.
Figure 15. Contact force.
Figure 16. Velocity of end-effector in x direction.
Table 1. The notations of frames.

Notation   Description
I          The inertial reference frame.
B          The body-fixed reference frame located at the center of mass of the vehicle.
E          The end-effector frame attached at the end of the manipulator.
C          The camera frame.
A          The auxiliary virtual camera frame, which is fixed with the camera and aligned with the target.
Table 2. Task list.

Label   Variable                 Description                          Gain
L_c     s_c = (x_n, y_n)^T       Camera's FOV                         —
L_f     f_n                      Normal contact force                 λ_tfn = 0.0036
L_q     η_eq                     End-effector orientation             λ_tq = 1.5
L_pn    a_n                      End-effector normal trajectory       λ_tpn = 0.7
L_ps    s_ps                     End-effector tangential trajectory   λ_tps = 1.5
L_pr    η_2,pr = (ψ, θ)^T        Main vehicle attitude holding        λ_tpr = 1
Table 3. Result comparison.

        RMSE (N)   ST (s)   OS (N)
CWBC    1.6        5.9      1.8
HWBC    1.96       6        8.1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Liu, J.; Chen, G.; Zhang, F.; Gao, J. Whole-Body Vision/Force Control for an Underwater Vehicle–Manipulator System with Smooth Task Transitions. J. Mar. Sci. Eng. 2025, 13, 1447. https://doi.org/10.3390/jmse13081447
