A Self-triggered Position Based Visual Servoing Model Predictive Control Scheme for Underwater Robotic Vehicles †

An efficient position based visual servoing control approach for Autonomous Underwater Vehicles (AUVs) employing Non-linear Model Predictive Control (N-MPC) is designed and presented in this work. In the proposed scheme, a mechanism is incorporated within the vision-based controller that determines when the Visual Tracking Algorithm (VTA) should be activated and new control inputs should be calculated. More specifically, the control loop does not close periodically, i.e., between two consecutive activations (triggering instants), the control inputs calculated by the N-MPC at the previous triggering instant are applied to the underwater robot in an open-loop mode. This results in a significantly smaller number of measurements requested from the visual tracking algorithm, as well as less frequent computations of the non-linear predictive control law, which in turn reduces processing time and energy consumption and, therefore, increases the accuracy and autonomy of the vehicle. The latter is of paramount importance for persistent underwater inspection tasks. Moreover, the Field of View (FoV) constraints, control input saturation, the kinematic limitations due to the underactuated degree of freedom in the sway direction, and the effects of model uncertainties as well as external disturbances have been considered during the control design. In addition, the stability and convergence of the closed-loop system are guaranteed analytically. Finally, the efficiency and performance of the proposed vision-based control framework are demonstrated through a comparative real-time experimental study using a small underwater vehicle.


Introduction
Vision-based control has been extensively investigated in recent decades for the operation of autonomous underwater vehicles [1,2]. However, the vehicles carrying out complex underwater missions, such as surveillance, are usually equipped with a weak computing unit and, in most cases, suffer from limited energy resources, whose recharging is difficult, time consuming, and costly [53]. On the other hand, in an N-MPC setup, a constrained Optimal Control Problem (OCP) must be solved at each sampling time, which is usually considered a very computationally demanding task. In addition, these systems must run both the VTA and the OCP of the N-MPC at each time instant on their weak computing units. This usually results in reduced system accuracy, as larger sampling times are required. The problem, then, is to design an automatic framework that relaxes the rate of control input calculations and visual tracking activations while maintaining the efficiency of the system; in other words, an automatic visual servoing framework that determines when the system requires new visual data and new control inputs while keeping the system performance at the desired level. This motivates the design of the self-triggered visual servoing strategy that is addressed in this work.

The Self-Triggered Control Framework
Nowadays, periodic control is the standard framework used in most applications. Quite recently, though, a novel formulation of control schemes in a self-triggered manner has become popular. The key idea behind self-triggered control is that the control task is not executed blindly at every sampling time; instead, system feedback is used in order to sample as infrequently as possible while guaranteeing that the stability of the system is preserved, see Figure 1. Consequently, this results in an aperiodically sampled system, while the system performance and stability are preserved. In particular, the self-triggered strategy reduces the number of samples requested from the system, a feature that is important and desirable in a variety of applications with operational limitations in sensing, energy, and communication. The self-triggered control framework, along with the closely related event-triggered control framework, comprises the recently introduced event-based control paradigm. Both approaches consist, inter alia, of a feedback control law that calculates the control input and a triggering mechanism that decides when the next control update should occur. However, these frameworks are different: event-triggered control is reactive compared with self-triggered control, as its control inputs are calculated when the robot state deviates more than a certain threshold from a desired value, whereas the self-triggered framework can be considered proactive, as it computes the next triggering time ahead of time. Notice that, in the event-triggered framework, a constant measurement of the system state is required in order to determine the time of the control update, while the self-triggered strategy only requires the latest measurement of the system's state to determine the next triggering instant [54].
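To make the proactive flavor of self-triggering concrete, the following minimal sketch (not the paper's controller; the scalar plant, gains, and error bound are illustrative assumptions) computes, at each triggering instant, how many steps the current input can be held in open loop before the predicted deviation from the ideal closed-loop trajectory exceeds a bound:

```python
# Illustrative self-triggered loop for a scalar system x_{k+1} = a*x_k + b*u_k
# with feedback u = -K*x. The next triggering time is computed ahead of time.

def next_trigger_steps(x, a, b, K, err_bound, max_steps):
    """Predict how long u = -K*x can be held in open loop before the held-input
    trajectory deviates from the ideal closed-loop one by more than err_bound."""
    u = -K * x
    x_held = x_closed = x
    for m in range(1, max_steps + 1):
        x_held = a * x_held + b * u                    # input held (open loop)
        x_closed = a * x_closed + b * (-K * x_closed)  # ideal closed loop
        if abs(x_held - x_closed) > err_bound:
            return m  # trigger at step m
    return max_steps

def run(x0, a=0.9, b=1.0, K=0.5, err_bound=0.05, horizon=50):
    x, k, triggers = x0, 0, 0
    while k < horizon:
        d = next_trigger_steps(x, a, b, K, err_bound, max_steps=10)
        u = -K * x                 # computed once per triggering instant
        triggers += 1
        for _ in range(d):         # applied open loop until the next trigger
            x = a * x + b * u
            k += 1
            if k >= horizon:
                break
    return x, triggers

final_x, n_triggers = run(1.0)
```

With these illustrative numbers the loop stays bounded near the origin while triggering far fewer than once per step, which is exactly the sensing/computation saving the text describes.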
More preliminary information regarding the event-triggering control techniques can be found in [55][56][57] and the papers quoted therein.

Contributions
In this paper (a preliminary version of this work, without the detailed analysis of the methodology, including the detailed stability and convergence analysis, the detailed controller design, and the detailed description of the real-time implementation and experimental results, was reported at the IEEE European Control Conference [54] as well as at the IEEE International Conference on Robotics and Automation [58]), by employing N-MPC, a Self-triggered Position Based Visual Servoing control strategy is designed for the motion control of autonomous underwater vehicles. The purpose of this control framework is to guide and stabilize the underwater robot towards a visual target while ensuring that the target does not leave the camera's field of view (Figure 2). The 3D position of the vehicle with respect to the target is estimated by employing computer vision algorithms and is described in more detail in Section 4. The choice of PBVS instead of IBVS or 2-1/2-D visual servoing is mainly motivated by the inherent advantage of PBVS of controlling the onboard camera, and as a result the vehicle itself, directly in 3D space. This makes the design of the N-MPC framework easier and more efficient. The fact that PBVS cannot guarantee the preservation of the visual target inside the image frame is handled by defining the field-of-view limitations in the N-MPC structure. The main contribution of this work lies in the design of a vision-based control strategy that automatically determines when the controller and the vision algorithm should be activated while maintaining the closed-loop performance and stability of the system. This leads to fewer activations of the vision algorithm, reduced CPU effort, and lower energy consumption, which are of paramount importance for autonomous underwater vehicles in persistent inspection tasks that demand high system autonomy. Experimental results on event-based formulations are scarce in the literature [59][60][61][62][63][64][65][66].
In this work, the efficiency and performance of the proposed control framework is verified through a comparative real-time experimental study using a small underactuated underwater robot in a small water tank. To the best of our knowledge, this work presents the first experimental validation of an event-based visual servoing control framework for underwater robotic vehicles.
In addition, the experimental results are quite satisfactory, as the vehicle reaches and stabilizes at the desired position relative to the visual target, while the number of activations of the visual algorithm is significantly decreased compared to the conventional case employing classical N-MPC.
The remainder of the paper is organized, as follows: Section 2 presents the problem statement of this paper where the system and operation limitations are formulated in detail. In Section 3, the robust stability analysis for the proposed vision-based self-triggered N-MPC framework is accommodated. Section 4 demonstrates the performance of the proposed motion control framework through a set of experimental results. Finally, Section 5 concludes the paper.

Problem Formulation
In this section, initially, the mathematical modeling of the under-actuated underwater vehicle and its constraints are formulated. Subsequently, taking into account the external disturbances and uncertainties of the model, a perturbed version of the system is defined. Finally, the proposed motion control scheme is designed.

Mathematical Modeling
An autonomous underwater vehicle can be defined as a 6 Degree Of Freedom (DOF) free body with position and Euler angle vector x = [χ y z φ θ ψ]⊤. Moreover, v = [u υ w p q r]⊤ is the vector of vehicle body velocities, where its components, according to SNAME [67], are surge, sway, heave, roll, pitch, and yaw, respectively (Figure 3). In addition, τ = [X Y Z K M N]⊤ is the vector of forces and moments acting on the vehicle center of mass. In this spirit, the dynamics of an underwater robotic vehicle are given as [68]:

M v̇ + C(v)v + D(v)v + g(x) = τ,  ẋ = J(x)v  (1)

where: M = M_RB + M_A is the inertia matrix for the rigid body and the added mass, respectively, C(v) = C_RB(v) + C_A(v) is the Coriolis and centripetal matrix for the rigid body and the added mass, respectively, D(v) = D_quad(v) + D_lin(v) is the quadratic and linear drag matrix, respectively, g(x) is the hydrostatic restoring force vector, τ is the thruster input vector, and J(x) is the well-known Jacobian matrix [68]. The underwater robot considered in this paper is a 3 DOF VideoRay Pro ROV (Remotely Operated underwater Vehicle) that is equipped with three thrusters, which make it actuated in surge, heave, and yaw (Figure 3). This means that the considered underwater robot is under-actuated along its sway axis. Here, owing to the robot design, we simply neglect the angles φ, θ and the angular velocities p and q. In addition, because of the robot symmetry with respect to the x-z and y-z planes, we can safely assume that motions in heave, roll, and pitch are decoupled [68]. Furthermore, the coupling effects can safely be considered negligible, since the robot operates at relatively low speeds. Finally, based on the aforementioned considerations, in this work we consider the kinematic model of the robot, which is given as follows [69]:

χ_{k+1} = χ_k + (u_k cos ψ_k − v_k sin ψ_k) dt
y_{k+1} = y_k + (u_k sin ψ_k + v_k cos ψ_k) dt
z_{k+1} = z_k + w_k dt
ψ_{k+1} = ψ_k + r_k dt  (2)

where x_k = [χ_k, y_k, z_k, ψ_k]⊤ is the state vector at time-step k, including the position and orientation of the robot relative to the target frame G.
Moreover, the vector of control inputs is V_k = [u_k, w_k, r_k]⊤ and dt denotes the sampling period. In addition, following the results given in [70] and by employing Input-to-State Stability (ISS), it can be shown that, by applying any bounded control input [u_k, r_k] to the considered nonholonomic robotic system, the velocity along the sway direction v_k can be seen as a bounded disturbance with upper bound ||v_k|| ≤ v̄ that vanishes at the point x = 0. Therefore, the aforementioned point is an equilibrium of the kinematic system of Equation (2). Note that, in this work, we denote the upper bound of each variable by the bar notation (·̄). Therefore, based on the aforementioned discussion, we consider the system:

x_{k+1} = f(x_k, V_k)  (3)

with f(x_k, V_k) collecting the actuated terms of Equation (2), as the nominal kinematic system of the underwater robotic vehicle. It is worth mentioning that the function g(x_k, v_k) ∈ Γ ⊂ R⁴, collecting the sway terms, is considered a bounded inner disturbance of the system that vanishes at the origin, and Γ is a compact set such that Γ = {g ∈ R⁴ : ||g|| ≤ γ̄}. The underwater robot considered in this work moves under the influence of an irrotational current, which acts as an external disturbance on the system. The current has components with respect to the χ, y, and z axes, denoted by δ_χ, δ_y, and δ_z, respectively. Moreover, we denote by δ_c the slowly-varying velocity of the current, which is bounded by ||δ_{c,k}|| ≤ δ̄_c and has direction β in the χ-y plane and α with respect to the z axis of the global frame, see Figure 4. In particular, we define δ_k = [δ_{χ,k}, δ_{y,k}, δ_{z,k}, 0]⊤ ∈ ∆ ⊂ R⁴, with ∆ being a compact set, where ∆ = {δ ∈ R⁴ : ||δ|| ≤ δ̄}. It is straightforward to show that ||δ_k|| ≤ δ̄, with δ̄ = δ̄_c dt. Considering the aforementioned external disturbances, the perturbed model of the underwater robotic system can be given as follows:

x_{k+1} = f(x_k, V_k) + ω_k  (6)

with ω_k = g(x_k, v_k) + δ_k ∈ Ω ⊂ R⁴ being the sum of the inner and external disturbances of the system. Ω is a compact set such that Ω = ∆ ⊕ Γ, where "⊕" denotes the Minkowski addition of the sets ∆ and Γ.
It is worth mentioning that the Minkowski addition C of two sets A, B ⊂ Rⁿ is given as C = A ⊕ B = {a + b : a ∈ A, b ∈ B}. In this respect, since the sets ∆ and Γ are compact, we can conclude that Ω is also a compact set, with ||ω_k|| ≤ ω̄, where ω̄ = δ̄ + γ̄.
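For illustration, the discrete kinematics above can be integrated forward in a single function; this is a sketch under the standard surge/heave/yaw actuation, with the unactuated sway velocity entering as a bounded, vanishing disturbance (the function name and the v_sway argument are our own):

```python
import math

def step(x, V, dt, v_sway=0.0):
    """One step of the discrete kinematics: x = [chi, y, z, psi], V = [u, w, r].
    v_sway models the unactuated sway velocity, treated as a vanishing bounded
    disturbance; v_sway = 0 recovers the nominal system."""
    chi, y, z, psi = x
    u, w, r = V
    return [chi + (u * math.cos(psi) - v_sway * math.sin(psi)) * dt,
            y + (u * math.sin(psi) + v_sway * math.cos(psi)) * dt,
            z + w * dt,
            psi + r * dt]

# Surge of 1 m/s for 0.1 s from the origin moves the robot 0.1 m along chi.
s = step([0.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 0.1)
```

Repeated calls to `step` with the disturbance set to zero give exactly the nominal predictions the N-MPC uses over its horizon.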
The underwater robot is equipped with a pinhole camera with limited angles of view a and b for the χ-y and χ-z planes, respectively. In this respect, the state vector x of the system with respect to the visual target is estimated by employing a proper vision algorithm, see Figure 4. The requirements for the vision system, namely the visibility constraints, are imposed in order to ensure that the target always remains within the image plane during the control operation.
where 2y_T and 2z_T denote the width and height of the visual target. In this context, [f_{χy/1}, f_{χy/2}] and [f_{χz/1}, f_{χz/2}] indicate the camera's field of view on the χ-y and χ-z planes, respectively (Figure 4). Moreover, we consider a maximum distance R_max at which the visual target is visible and recognizable by the vision system. The aforementioned requirements are captured by the state constraint set X of the system, formed by Equations (8b)-(8e). In addition, the control constraint set V_set of the system is formulated in Equation (9); the control input constraints are of the form |u| ≤ ū, |w| ≤ w̄, and |r| ≤ r̄. Thus, we obtain ||V_k|| ≤ V̄ with V̄ = (ū² + w̄² + r̄²)^{1/2} and V̄, ū, w̄, r̄ ∈ R_{≥0}. Therefore, the following can easily be shown: the nominal system Equation (3), subject to the constraints Equations (8b)-(8e) and (9), is locally Lipschitz in x for all x ∈ X, with Lipschitz constant L_f = (max{8, 8(ū dt)²} + 1)^{1/2}. See Appendix A.1 for the proof.
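As a concrete (and simplified) reading of these visibility constraints, the following check verifies that the target's half-extents stay inside the camera cone on both planes and within range R_max; the argument names are hypothetical and the paper's exact constraint set (8b)-(8e) may differ in form:

```python
import math

def in_fov(chi, y, z, a, b, y_T, z_T, R_max):
    """Simplified visibility test: at distance chi, the camera cone has
    half-width chi*tan(a/2) (chi-y plane) and half-height chi*tan(b/2)
    (chi-z plane); the target half-extents y_T, z_T must fit inside, and
    the range must not exceed R_max."""
    if chi <= 0 or chi > R_max:
        return False
    half_y = chi * math.tan(a / 2)   # half-width of the FoV at distance chi
    half_z = chi * math.tan(b / 2)   # half-height of the FoV at distance chi
    return (abs(y) + y_T <= half_y) and (abs(z) + z_T <= half_z)

visible = in_fov(1.0, 0.0, 0.0, math.radians(80), math.radians(60), 0.1, 0.1, 1.5)
too_far = in_fov(2.0, 0.0, 0.0, math.radians(80), math.radians(60), 0.1, 0.1, 1.5)
off_axis = in_fov(1.0, 0.9, 0.0, math.radians(80), math.radians(60), 0.1, 0.1, 1.5)
```

Inequalities of this shape are what enter the OCP as hard state constraints so the optimizer never steers the target out of frame.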

Control Design and Objective
The objective here is to guide the perturbed system Equation (6) to a desired compact set that includes the desired state x_d = [χ_d, y_d, z_d, ψ_d]⊤ ∈ X, while respecting the state and control constraints described in Equations (8b)-(8e) and (9), respectively. We employ a predictive controller in order to achieve this objective. More specifically, the N-MPC consists of solving an Optimal Control Problem (OCP) at time instant k, with respect to a control sequence, for a prediction horizon N. The OCP of the N-MPC is formulated in Equations (10a)-(10d), where F, E, and E_f are the running cost, the terminal cost, and the terminal set, respectively. The solution of the aforementioned OCP Equations (10a)-(10d) at time instant k is an optimal control sequence, denoted by V*_f(·). It should be pointed out that the specifics of the design parameters, such as the running and terminal costs, as well as the state sets, will be provided in more detail in the sequel. In this context, we denote the predicted state of the nominal system Equation (3) at sampling time k + j by x̂(k + j|k), where j ∈ Z_{≥0}. The state prediction is based on the measurement of the real system at sampling time k, denoted by x_k, while applying a sequence of control inputs [V(k|k), V(k + 1|k), . . . , V(k + j − 1|k)]. Thus, x̂(k + j|k) = f(x̂(k + j − 1|k), V(k + j − 1|k)), and therefore x̂(k|k) = x_k. It is worth mentioning that the OCP is formulated and solved for the nominal system and for a specific time horizon, which makes it impossible to address the disturbances beforehand. However, we distinguish the nominal system, denoted as x̂(·), from the actual one, denoted as x(·). Therefore, we can obtain the following preliminary result (Lemma 2): the difference between the actual state x_{k+j} at time-step k + j and the predicted state x̂(k + j|k) at the same time-step, under the same control sequence, is upper bounded by ||x_{k+j} − x̂(k + j|k)|| ≤ ((L_f^j − 1)/(L_f − 1)) ω̄. See Appendix A.2 for the proof.
More specifically, Lemma 2 bounds the difference between the real state of the system Equation (6) and the predicted state of the nominal system Equation (3). In order to address this, we employ a constraint-tightening technique and use a restricted constraint set X_j ⊆ X in Equation (10b) instead of the state constraint set X (more details regarding the constraint-tightening technique can be found in the literature [71,72]). By employing this technique, we guarantee that the evolution of the perturbed system Equation (6), when the control sequence developed in Equations (10a)-(10d) is applied to it, necessarily satisfies the state constraint set X. In particular, the restricted constraint set X_j is obtained by shrinking X at each prediction step j by the deviation bound of Lemma 2. Moreover, we define the running and terminal cost functions F(·), E(·), both of quadratic form, i.e., F(x, V) = x⊤Qx + V⊤RV and E(x) = x⊤Px, respectively, where P, Q, and R are positive definite matrices. In particular, we define Q = diag{q_1, q_2, q_3, q_4}, R = diag{r_1, r_2, r_3}, and P = diag{p_1, p_2, p_3, p_4}. For the running cost function F, we have F(0, 0) = 0, and a Lipschitz-type bound on F(x, V) can also be obtained; see Appendix A.3 for the proof.
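Assuming the standard Lipschitz-based deviation bound of Lemma 2, the per-step margins by which X is tightened can be computed directly; this is a sketch, and the paper's exact definition of X_j may differ:

```python
def tightening_margins(L_f, omega_bar, N):
    """Margin at prediction step j: the nominal prediction may deviate from
    the true state by up to omega_bar * (L_f**j - 1) / (L_f - 1), so the
    state constraint set X is shrunk by this amount at step j."""
    return [omega_bar * (L_f ** j - 1) / (L_f - 1) for j in range(N + 1)]

# Example with illustrative numbers: L_f = 2, disturbance bound 0.1, N = 3.
margins = tightening_margins(2.0, 0.1, 3)
```

The margins grow geometrically with the prediction step, which is why a disturbance bound that is too large (or a horizon that is too long) can empty the tightened sets and render the OCP infeasible.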
As we have already mentioned, the state and input constraint sets are bounded; therefore, the running cost admits an upper bound involving σ_max(Q), where σ_max(Q) denotes the largest singular value of the matrix Q. Moreover, z_max = R_max tan(b/2) − z_T is the maximum feasible value along the z axis.
See Appendix A.4 for the proof.
Before proceeding with the analysis, we employ some standard stability conditions that are used in N-MPC frameworks: Assumption 1. For the nominal system Equation (3), there is an admissible positively invariant set E ⊂ X, such that the terminal region E f ⊂ E , where E = {x ∈ X : ||x|| ≤ ε 0 } and ε 0 being a positive parameter.

Assumption 2.
We assume that, in the terminal set E_f, there exists a local stabilizing controller h(x) ∈ V_set such that the terminal cost decreases along the nominal closed-loop trajectories, i.e., E(x̂(k + 1)) − E(x̂(k)) + F(x̂(k), h(x̂(k))) ≤ 0 for all x̂(k) ∈ E_f.

Problem Statement
At time step k, the solution of the N-MPC Equations (10a)-(10d) provides an optimal control sequence V*_f(k) = [V*(k|k), . . . , V*(k + N − 1|k)]. In a conventional N-MPC framework, only the first control vector, i.e., V*(k|k), is applied to the robotic system and the remaining part of the optimal control sequence V*_f(k) is discarded. At the next sampling time k + 1, a new state measurement is obtained from the vision algorithm and a new OCP based on this measurement is solved. This is repeated iteratively until the robot has reached the desired position. However, the self-triggered strategy proposed in this work suggests that a portion of the computed control sequence V*_f(k), and not only the first vector, may be applied to the underwater robot. Let k_i be a triggering instant. In the proposed self-triggered control strategy, the control input that is applied to the robotic system is of the form:

V(k_i + d_i) = V*(k_i + d_i|k_i)  (14)

for all d_i ∈ [0, k_{i+1} − k_i] ∩ Z_{≥0}, where k_{i+1} is the next triggering instant. Between two consecutive triggering instants, i.e., during [k_i, k_{i+1}), the control inputs calculated by the N-MPC at the previous triggering instant are applied to the underwater robot in an open-loop mode, i.e., the vision algorithm is not activated and no image processing is performed. Obviously, the smallest and largest possible inter-triggering intervals are 1 (i.e., k_{i+1} = k_i + 1) and N − 1, respectively. The self-triggered framework proposed in this work provides sufficient conditions for the activation of the vision algorithm and the triggering of the N-MPC computation. We are now ready to state the problem treated in this paper: Problem 1. Consider the system Equation (6) subject to the constraints Equations (7) and (9).
The control goal is (i) to design a robust position based visual servoing control framework, provided by Equations (10a)-(10d), such that the system Equation (6) converges to the desired terminal set, and (ii) to construct a mechanism that determines when the control updates, the state measurements, and the next VTA activation should occur.
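Before turning to the stability analysis, the OCP (10a)-(10d) invoked in Problem 1 can be sketched numerically. It is a constrained nonlinear program; as a rough, dependency-free stand-in for a real NLP solver, the sketch below approximates it by random shooting over admissible control sequences (the cost weights are those used later in the experiments; the solver itself is purely illustrative and is seeded with the zero sequence so it is never worse than idling):

```python
import math, random

# Quadratic weights: Q (state), R (input), P (terminal), as in the experiments.
Q, R, P = (0.5, 4.5, 4.5, 0.1), (0.17, 0.1, 1.0), (1.0, 1.0, 1.0, 1.0)

def predict(x, V, dt):
    """One nominal kinematic prediction step (sway disturbance set to zero)."""
    chi, y, z, psi = x
    u, w, r = V
    return [chi + u * math.cos(psi) * dt, y + u * math.sin(psi) * dt,
            z + w * dt, psi + r * dt]

def cost(x0, seq, dt):
    """Running cost F(x, V) = x'Qx + V'RV summed over the horizon,
    plus the terminal cost E(x) = x'Px."""
    J, x = 0.0, list(x0)
    for V in seq:
        x = predict(x, V, dt)
        J += sum(q * s * s for q, s in zip(Q, x))
        J += sum(r * v * v for r, v in zip(R, V))
    return J + sum(p * s * s for p, s in zip(P, x))

def solve_ocp(x0, N=6, dt=0.15, bounds=(0.2, 0.3, 0.3), samples=2000, seed=1):
    """Pick the cheapest admissible sequence among random candidates."""
    rng = random.Random(seed)
    candidates = [[(0.0, 0.0, 0.0)] * N] + [
        [tuple(rng.uniform(-b, b) for b in bounds) for _ in range(N)]
        for _ in range(samples)]
    best = min(candidates, key=lambda s: cost(x0, s, dt))
    return best, cost(x0, best, dt)

best, J_best = solve_ocp([0.5, 0.2, 0.0, 0.1])
```

A production controller would replace the random search with a gradient-based NLP solver (the paper uses NLopt), but the structure of the problem, i.e., minimizing the quadratic cost over bounded input sequences propagated through the nominal model, is the same.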

Stability Analysis of Self-Triggering NMPC Framework
The stability analysis for the system Equations (6)-(14) is addressed in this section. It has already been shown in the literature that the closed-loop system in the case of classic N-MPC is Input-to-State Stable (ISS) with respect to the disturbances [71] (more details on the notion of ISS in the discrete-time case can be found in [73]). In the subsequent analysis, we use the ISS notion in order to derive the self-triggering mechanism.
The traditional approach to establishing stability in predictive control consists of two parts, namely feasibility and convergence analysis. The aim of the first part is to prove that initial feasibility implies feasibility thereafter; based on this, the second part shows that the system state converges to a bounded set around the desired state.

Feasibility Analysis
We begin by treating the feasibility property. Before proceeding with the analysis, we provide a necessary definition: X_MPC denotes the set of all state vectors for which there exists a feasible control sequence satisfying the constraints of the optimal control problem. Assume now that, at k_i = k, an event is triggered; thus, an OCP is solved and a new control sequence V*_f(k) = [V*(k|k), . . . , V*(k + N − 1|k)] is provided. Now, consider control inputs at time instants k + m with m = 1, . . . , N − 1, which are based on the solution at sampling time k, V*_f(k). These are defined in Equation (15), i.e., the tails of the optimal sequence, extended inside the terminal set by the local stabilizing controller of Assumption 2. Let the N − 1 control sequences Ṽ^m_f(k) be comprised of the control inputs of Equation (15). Notice that the time-steps k + m are the discrete-time instants after the triggering instant k_i. With the help of Assumption 2, and by taking into account the feasibility of the initial control sequence at sampling time k, it follows that, for m = 1, . . . , N − 1, we have Ṽ(k + j|k + m) ∈ V_set. We can finally prove that x̂(k + N + 1|k + m) ∈ E_f for all m = 1, . . . , N − 1: Proof. From Lemma 2, we can bound the deviation of x̂(k + N|k + m) from x̂(k + N|k); by employing the Lipschitz property of E(·), this yields a bound on the corresponding difference of terminal costs. Having in mind that x̂(k + N|k) ∈ E_f and employing Assumption 4, we obtain E(x̂(k + N|k + m)) ≤ α_ε, i.e., x̂(k + N|k + m) ∈ E, provided that the uncertainty bound of Equation (16) holds. Now, applying the local control law, we get x̂(k + N + 1|k + m) ∈ E_f for all m = 1, . . . , N − 1. From these results, it can be concluded that X_MPC is a robust positively invariant set if the uncertainties are bounded as in Equation (16) for all m = 1, . . . , N − 1. Notice that Equation (16) should hold at least for m = 1 for the problem to be meaningful, in the sense that it should be feasible at least in the time-triggered case.

Convergence Analysis
Herein, we show that the state of the actual system converges to a desired terminal set. In order to prove this, we show that a proper value function is decreasing. First, we define the optimal cost at time-step k as J*_N(k) = J_N(x_k, V*_f(k)), which is evaluated under the optimal control sequence. In the same spirit, the optimal cost at a time-step k + m with m ∈ [1, N − 1] is denoted as J*_N(k + m) = J_N(x_{k+m}, V*_f(k + m)). Now, we denote by J̃_N(k + m) the "feasible" cost, which is evaluated from the control sequence Ṽ^m_f(k), i.e., J̃_N(k + m) = J_N(x_{k+m}, Ṽ^m_f(k)). In the following, we employ this "feasible" cost in order to bound the difference J*_N(k + m) − J*_N(k). More specifically, the difference between the optimal cost at time k and the feasible cost at time-step k + m is obtained by employing Equation (15); see Appendix B for the proof. From the optimality of the solution, we have J*_N(k + m) ≤ J̃_N(k + m). This result, along with the triggering condition derived in the next subsection, will enable us to draw conclusions about the stability and convergence of the closed-loop system.

The Self-Triggered Mechanism
This section presents the self-triggering mechanism proposed in this work. Let us consider that, at time-step k_i, an event is triggered. We assume that the next triggering time k_{i+1} is unknown and should be found. More specifically, the triggering time k_{i+1} = k_i + d_i should be such that the closed loop maintains its predefined desired properties. Therefore, the value function J*_N(·) is required to be decreasing. In particular, given Equations (17) and (18), for a triggering instant k_i and a number of time-steps d_i after k_i, with d_i = 1, 2, . . . , N − 1, the bound of Equation (19) can be obtained. The time instant k_{i+1} should be such that the condition of Equation (20) holds, where 0 < σ < 1. Substituting Equation (20) into (19), we obtain Equation (21). This suggests that, for 0 < σ < 1, the decrease of the value function is guaranteed. In particular, in view of Equation (21), we conclude that the value function J*_N(·) is decreasing at the triggering instants. Next, we study the convergence of the state of the system under the proposed self-triggered framework:
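The mechanism can be sketched as a simple scan over candidate inter-triggering steps: keep increasing d while a decrease condition in the spirit of Equation (20) holds, and trigger at the first violation. The inputs below are synthetic placeholders, not the paper's exact L_Q(d_i) terms:

```python
def next_triggering_step(J_opt, feasible_costs, stage_costs, sigma=0.5):
    """Scan d = 1, 2, ... and keep the largest d for which the feasible cost
    at k + d has decreased by at least sigma times the accumulated stage cost;
    the first violation fixes the next triggering instant.
    feasible_costs[m-1] plays the role of J_tilde(k+m); stage_costs[m-1]
    that of the running cost F at step m."""
    d, accumulated = 0, 0.0
    for J_feas, F in zip(feasible_costs, stage_costs):
        accumulated += F
        if J_feas - J_opt <= -sigma * accumulated:
            d += 1
        else:
            break
    return max(d, 1)  # at least one step is always applied

# Steady decrease of the feasible cost allows three open-loop steps here.
d_i = next_triggering_step(10.0, [9.0, 8.5, 8.4], [1.0, 1.0, 1.0])
```

Because all the quantities involved are predictions from the nominal model, this scan can be performed at the triggering instant itself, which is what makes the scheme self-triggered rather than event-triggered.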

Convergence of System under the Proposed Self-Triggered Framework
We have proven in Equation (20) that the value function J*_N(·) is always decreasing with respect to the previous triggering instant. In other words, the value function cannot be guaranteed to be monotonically decreasing at every time-step, as standard Lyapunov theory dictates. Thus, additional arguments will be provided in order to prove convergence of the state of the closed-loop system to a bounded set. In particular, the following steps are going to be followed: first, we derive a suitable Lyapunov-function candidate; second, we show that this Lyapunov function is an ISS-Lyapunov function. According to standard definitions, if a system admits an ISS-Lyapunov function, then the system is ISS with respect to the external disturbances [73]. Thus, finding a suitable ISS-Lyapunov function immediately implies that our system is ISS with respect to the disturbances and, therefore, that the states of the closed-loop system converge to a bounded set. Proposition 1. Our proposed Lyapunov function candidate is given in Equation (22). Now, if d_i = 1 at every time instant, then our framework boils down to classic time-triggered MPC, for which it has been shown in [71] that the closed-loop system is ISS with respect to the disturbances. However, we are going to show that Equation (22) is also an ISS-Lyapunov function for d_i ≥ 1. This is shown for d_i = 2, and then Equation (22) is derived by induction.
Proof. Now, assume that d_i = 2. From Equation (19), one inequality follows for each of the two time-steps; adding these two inequalities yields Equation (23). Adding and subtracting the appropriate stage-cost terms in Equation (23), we obtain Equation (24). Considering the Lyapunov function of Equation (22), Equation (24) is rewritten as Equation (25). It is now evident that, by induction, we can reach Equation (22) for an arbitrary d_i by following the same procedure. Moreover, from Equation (25), it is obvious that W(k), as defined in Equation (22), is an ISS-Lyapunov function; thus, the proposed framework is ISS with respect to the external disturbances, and the proof is completed.
Thus, having the aforementioned analysis in mind, the next activation of the vision system, as well as the update of the control law, should occur when Equation (20) is violated. This means that, at each triggering instant, the condition of Equation (20) must be checked for each consecutive time-step, i.e., for d_i = 1, 2, . . .; the first time-step that does not meet this condition is set as the next triggering instant k_{i+1}. Based on the above discussion, it can be seen that, in the proposed self-triggered framework, the time step k_{i+1} is found beforehand at time k_i. Moreover, it is worth mentioning that the term L_Q(d_i) only includes predictions of the nominal system, which can easily be computed by forward integration of Equation (3) for time-steps d_i ∈ [1, N − 1]. Now, based on the aforementioned stability results, we state the theorem for the proposed vision-based self-triggered framework: Theorem 1. Consider the autonomous underwater vehicle system described by Equation (6), subject to the state and input constraints given in Equations (7) and (9), under the proposed self-triggered N-MPC framework, summarized in Algorithm 1.
Algorithm 1: The self-triggered N-MPC framework.
1: Triggering time:
2: x(k_i) ← VTA (trigger the VTA, get s(k_i))
3: Run the OCP of Equations (10a)-(10d)
4: Solve Equation (20) for d_i (the next triggering time)
5: for i = 1 → d_i do
6: Apply the control inputs V*(k_i + i|k_i) to the underwater robot.
7: goto Triggering time.
At time k_i, the Vision Tracking Algorithm (VTA) is triggered, the optimal control problem of the N-MPC Equations (10a)-(10d) is solved, and a control sequence for the time interval [k_i, k_i + N − 1] is provided. The solution of Equation (20) provides the next triggering time k_{i+1}, as already stated. During the time interval [k_i, k_{i+1}), the control inputs V*(k_i + i|k_i) are applied to the underwater robot in an open-loop fashion. Next, at k_{i+1}, the vision algorithm is activated and the OCP of the N-MPC Equations (10a)-(10d) is solved again, employing the new state measurement x(k_{i+1}) as the initial value. The controller follows this procedure until the robot converges and stabilizes towards the visual target.
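This procedure can be written as a compact loop. The sketch below uses stubbed-in callables: `vta`, `solve_ocp`, `next_d`, `apply_input`, and `converged` are hypothetical placeholders for the vision tracker, the OCP solver, condition (20), the thruster interface, and the stopping test, respectively:

```python
def self_triggered_loop(vta, solve_ocp, next_d, apply_input, converged,
                        max_iters=100):
    """Trigger the VTA, solve the OCP, pick d_i via the triggering condition,
    then apply d_i inputs open loop before re-triggering."""
    iters = 0
    while iters < max_iters:
        x = vta()                    # triggering time: measure state via VTA
        if converged(x):
            break
        V_seq = solve_ocp(x)         # OCP (10a)-(10d) at the triggering instant
        d = next_d(x, V_seq)         # next triggering time from condition (20)
        for i in range(d):           # open-loop phase: no VTA, no OCP running
            apply_input(V_seq[i])
            iters += 1
    return iters

# Toy closed loop on a scalar plant x+ = 0.9*x + u, to exercise the skeleton.
state = [1.0]
steps = self_triggered_loop(
    vta=lambda: state[0],
    solve_ocp=lambda x: [-0.5 * x] * 5,
    next_d=lambda x, seq: 2,
    apply_input=lambda u: state.__setitem__(0, 0.9 * state[0] + u),
    converged=lambda x: abs(x) < 0.05)
```

In the real system the VTA and OCP calls are the expensive operations, so every iteration of the inner loop that runs without them is a direct saving in image processing and optimization time.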

Experiments
In this section, the efficacy of the proposed position-based self-triggered framework is demonstrated through a real-time comparative experimental study. A real-time stabilization scenario was considered, employing a small and under-actuated underwater vehicle.

System Components
The small underwater robot used in the following experiments is a 3-DOF vehicle (VideoRay PRO, VideoRay LLC, Figure 3), which is equipped with 3 thrusters and a USB camera. The image dimensions are 640 × 480 pixels. A visual target is located on an aluminum surface plane that is fixed inside the tank. The system software runs on the Robot Operating System (ROS, http://www.ros.org), and the code is written in C++ and Python.
The state vector of the underwater robot with respect to the visual target is estimated in real time using the ROS package ar_pose (http://www.ros.org/wiki/ar_pose), which is an Augmented Reality Marker Pose Estimation algorithm based on the ARToolKit software library (http://www.hitl.washington.edu/artoolkit/). The target detection and the robot localization in the initial and desired pose configurations are shown in Figure 5. The constrained N-MPC used in this real-time experiment was designed using the NLopt optimization library [74].

Experimental Results
The goal in the following comparative experimental studies is the stabilization of the underwater robot at the desired configuration towards the visual target. Two experiments were conducted for comparison. More specifically, in the first experiment, we employed a classic N-MPC (i.e., activation at each sampling time), while, in the second experiment, the self-triggered framework proposed in this work was used. The initial and desired positions of the underwater vehicle relative to the target frame were the same in both experiments. In the initial pose, the target appears on the right side of the camera view because of the negative yaw angle of the vehicle with respect to the target frame (see Figure 5). Note that this is a difficult initial pose and, if the visual constraints are not taken into account, the experiment will fail.
The sampling time and prediction horizon were selected as dt = 0.15 s and N = 6, respectively. It is worth mentioning that the sampling time is selected based on the response frequency of the closed-loop system, while the prediction horizon is selected based on the computational capability of the onboard unit to solve the optimization problem; the more capable the computational unit is, the larger the prediction horizon that can be considered. The maximum allowable velocities for the considered underwater robot in the surge, heave, and yaw directions were selected as ū = 0.2 m/s, w̄ = 0.3 m/s, and r̄ = 0.3 rad/s, respectively. Such velocity bounds are typical of several common underwater tasks (e.g., seabed inspection, mosaicking), where the vehicle is required to move at relatively low speeds with a predefined upper bound. The design matrices Q, R, and P are defined as Q = diag(0.5, 4.5, 4.5, 0.1), R = diag(0.17, 0.1, 1), and P = diag(1, 1, 1, 1), respectively. The maximum permissible distance in the referred water tank is R_max = 1.5 m. The results of the experiment are presented in Figures 6-10.
Figure 6 depicts the evolution of the robot coordinates x, y, and ψ for both experiments. Comparing the two experiments, it is evident that the underwater robot in both cases reached and stabilized at the desired position with respect to the visual target, while the operational limitations (FoV and control saturation) remained satisfied. It can also be seen that the system performance with the proposed self-triggered framework is at least as good as that of the classical approach.

Figures 7 and 8 present the camera view and the coordinates of the visual target center during the experiments, respectively. The target clearly remains inside the FoV of the camera. Figure 9 presents the triggering evolution in the case of the proposed self-triggered framework. For the value 1 on the vertical axis, the vision algorithm has been activated: the image has been processed, the state vector estimated, the N-MPC evaluated, and new control inputs calculated. For the value 0, the remainder of the last computed control input sequence is applied to the underwater robot in an open-loop fashion, and therefore no optimization and no image processing is running. In the case of the classic N-MPC, by contrast, vision tracking and the N-MPC run at every sampling time, as already stated. It is worth mentioning that, by employing the proposed self-triggered condition, the activations of the vision tracking algorithm and the N-MPC were reduced by roughly 50% (124 triggerings instead of 253) relative to the classic N-MPC framework. Comparing the triggering instants of Figure 9 with the image target center coordinates in Figure 8, one may notice that when the target is about to leave the image plane (around 6 s and 14-17 s of the experiment), the triggering instants become more frequent. This appears around the 40th and the 80th-110th sampling times, respectively, in Figure 9.
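The alternation between triggering instants (value 1) and open-loop phases (value 0) described above can be sketched as the following control loop. All function names (`plant_step`, `vta_estimate`, `solve_nmpc`) are placeholders for the vehicle actuation, the visual tracking algorithm, and the N-MPC solver, and the triggering rule is abstracted into the number of open-loop steps `n_open` returned by the solver; this is a structural sketch, not the authors' implementation.

```python
def run_self_triggered(plant_step, vta_estimate, solve_nmpc, total_steps):
    """Self-triggered visual servoing loop (sketch).

    At each triggering instant, the VTA processes an image to estimate
    the state, and the N-MPC returns an open-loop input sequence
    together with the number of steps n_open it may be applied before
    the next trigger. Between triggers, the stored inputs are applied
    open loop: no image processing and no optimization run.
    Returns the number of triggering instants."""
    triggers = 0
    k = 0
    while k < total_steps:
        state = vta_estimate()               # image processed only here
        inputs, n_open = solve_nmpc(state)   # OCP solved only here
        triggers += 1
        for v in inputs[:n_open]:            # open-loop phase
            plant_step(v)
            k += 1
            if k >= total_steps:
                break
    return triggers
```

A time-triggered (classic) N-MPC corresponds to `n_open = 1`, i.e., one trigger per sampling time; the self-triggered condition enlarges `n_open` whenever the predicted open-loop behavior remains safe, which is what produced the 124-versus-253 activation count reported above.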
Comparing the triggering instants of Figure 9 with the state evolution of the system in Figure 6, one may notice that the triggering instants become more frequent as the robot approaches the desired position. This is because, close to the desired position, the system becomes more demanding: the visibility limitations tighten as the target grows larger in the camera view, and external disturbances push the robot away from the desired position.
The computational time when new state information from the vision system and a new control sequence are computed (i.e., at a triggering instant) is approximately 0.1 s, while during the open-loop phases of the proposed self-triggered framework it is reduced to 0.0002 s. This is because, in the self-triggered framework, neither the vision tracking algorithm nor the optimization process is executed between two triggering instants. Finally, Figure 10 presents the control inputs during the experiments. It is easy to verify that the control constraints remained satisfied throughout the experiments.

Video
This work is accompanied by a video presenting the experimental procedure of Section 4: https://youtu.be/mdRM2ThaOQM.