
Modeling and Control of a Micro AUV: Objects Follower Approach

Jesus Arturo Monroy-Anieva, Cyril Rouviere, Eduardo Campos-Mercado, Tomas Salgado-Jimenez and Luis Govinda Garcia-Valdovinos
Energy Division, Center for Engineering and Industrial Development-CIDESI, Santiago de Queretaro, Queretaro 76125, Mexico
Automation and Control Department, Universidad Politecnica de Tulancingo-UPT, Tulancingo 43629, Hidalgo, Mexico
Ecole Nationale Supérieure de Techniques Avancées-ENSTA, Bretagne, Brest 29806, France
CONACYT, Universidad del Istmo de Tehuantepec-UNISTMO, Tehuantepec 70760, Oaxaca, Mexico
Author to whom correspondence should be addressed.
Current address: Av. Playa Pie de la Cuesta No. 702, Desarrollo San Pablo, Santiago de Queretaro, Queretaro 76125, Mexico.
Sensors 2018, 18(8), 2574;
Submission received: 14 June 2018 / Revised: 17 July 2018 / Accepted: 19 July 2018 / Published: 6 August 2018
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)


This work describes the modeling, control and development of a low-cost Micro Autonomous Underwater Vehicle (μ-AUV), named AR2D2. The main objective of this work is to make the vehicle detect and follow an object of a defined color by means of the readings of a depth sensor and the information provided by an artificial vision system. A nonlinear PD (Proportional-Derivative) controller is implemented on the vehicle in order to stabilize the heave and surge movements. A formal stability proof of the closed-loop system using Lyapunov's theory is given. Furthermore, the performance of the μ-AUV is validated through numerical simulations in MatLab and real-time experiments.

1. Introduction

Nowadays, the development of autonomous underwater systems is a scientific research topic that attracts great interest from the international community. There are a huge number of possible applications wherein autonomous underwater vehicles can be used to avoid risky tasks for human beings. The proper operation of autonomous vehicles requires the convergence of several disciplines, among which automatic control plays an important role when dealing with complex nonlinear dynamics, as in the case of AUVs (Autonomous Underwater Vehicles) [1]. AUVs have traditionally been used for oceanographic research, where these vehicles map and monitor specific areas; for example, 3D reef reconstruction, marine cartography, ecosystem exploration, pipeline and structure inspection, etc. [2]. There are other water-based applications that take advantage of AUVs, such as the monitoring of nuclear storage ponds, waste water treatment facilities and archaeology enclosures. These environments differ from the ocean in terms of scale, thus they can be considered closed spaces [3]. Traditional AUVs have tended to be large scale (meters in length) and high cost, making them unsuitable for small-scale environments. One interesting topic from the automatic control point of view is that very often the vertical and horizontal movements are coupled, i.e., forward momentum is required for the AUV to move in the vertical plane. This means that traditional AUV designs are unsuitable for use as small-scale sensor platforms [4,5,6]. The AR2D2 prototype (see Figure 1) offers advantages in size and maneuverability compared to traditional vehicles.
The μ-AUV prototype has the shape of a torpedo with the following dimensions: a 35 cm × 20 cm front section and a 30 cm length. Its weight is less than 5 kg; thus, this vehicle is considered a micro AUV [7]. The AUV's degrees of freedom are actuated by four thrusters. The geometric shape of the prototype is such that the vertical and horizontal movements can be considered decoupled.
This paper provides details about the platform development and real-time test of a control strategy for heave and surge movements to follow objects. The effectiveness of the proposed control strategy is validated through numerical simulations as well as real-time experiments. Section 2 presents further details on the design and capabilities of the μ -AUV. In Section 3, we briefly describe the dynamic model of the vehicle, whilst Section 4 shows the control strategy for the depth and forward motion through a nonlinear PD controller. Section 5 details the computer vision algorithm developed in order to detect and follow objects, whilst Section 6 presents the simulation of the control law chosen for depth and forward movement, and the experimental results. Finally, concluding remarks and further work proposals are given in Section 7.

2. Prototype Description

2.1. Embedded System

The hardware architecture of the prototype consists of an embedded system built around a Raspberry Pi 2B, which has a quad-core ARM (Advanced RISC Machines) Cortex-A7 CPU at 900 MHz, 1 GB of RAM, a camera interface (CSI), a display interface (DSI), four USB ports, an Ethernet port, a full HDMI port, 40 GPIO pins, and a micro SD card slot holding a Linux-based operating system on which the middleware is programmed. The embedded system also includes a compass sensor (CMPS10), an ultrasonic sensor with the SRM400 driver, a depth sensor (BMP085) and a Raspberry Pi camera for artificial vision. The Raspberry Pi processes the information from the sensors, computes the control laws and sends Pulse Width Modulation signals to the actuators (DC motors) through a Micro Maestro device (six-channel USB), which manages the four thrusters via Robbe Rookie 25A ESC (Electronic Speed Control) motor drivers. Real-time communication is provided by an Ethernet LAN interface. Figure 2 shows a schematic of the vehicle's hardware components and their interactions.
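As an illustration of how such a servo controller is commanded, the sketch below builds a "set target" packet for the Pololu Micro Maestro's compact serial protocol (command byte 0x84, channel number, then the low and high 7 bits of a target expressed in quarter-microseconds). The channel assignment and the neutral pulse value are assumptions for a typical ESC, not values taken from the paper.

```python
def maestro_set_target(channel: int, pulse_us: float) -> bytes:
    """Build a Micro Maestro 'set target' packet (compact protocol):
    0x84, channel, low 7 bits, high 7 bits of the target, where the
    target is the pulse width in quarter-microseconds."""
    target = int(round(pulse_us * 4))
    return bytes([0x84, channel, target & 0x7F, (target >> 7) & 0x7F])

# 1500 us is a common neutral pulse for hobby ESCs (assumed here);
# on the vehicle the packet would be written to the Maestro's serial port.
packet = maestro_set_target(0, 1500)
```

In practice the packet would be sent with something like pyserial's `Serial.write(packet)`; the Maestro then holds the commanded pulse width on that channel until a new target arrives.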

2.2. Computer Vision

GNSS (Global Navigation Satellite System) signals cannot penetrate water, acoustic communication throughput is limited, and odometry is impractical. Vision, however, is available: the camera provides rich data about the environment. Moreover, in the case of a Remotely Operated Vehicle (ROV), it gives the user a great deal of information to help drive the robot. Computer vision is a way to process these data autonomously, in order to implement obstacle avoidance, object recognition and following, localization and mapping, without any human intervention.
However, as with other sensors, underwater vision poses particular issues compared with vision in air. The main problem is picture quality: depending on the water (clearness, suspended particles, etc.), the camera will see more or fewer details. These details are the key to success for computer vision algorithms, especially feature-based vision. Moreover, many features that the camera detects are not reliable because they are not static (typically particles floating in the water, bubbles, etc.). Thus, we cannot rely on stereo vision, optical flow or common mapping, which need strong feature recognition.
Sometimes picture quality is so bad that even a human cannot use it. Colors often lack contrast, which makes it hard to detect and differentiate objects that are not close enough to the robot. Consequently, the camera is reliable only as a low-range sensor, which is not suited to global navigation.
In fact, underwater vision is problematic only if it tries to mimic vision in air. Indeed, what is seen as an issue can be turned to our advantage to obtain better vision quality than expected.
Classic computer vision complains about the lack of detail under water; in air, however, it has to filter all of its data to select only what is interesting. From an underwater point of view, this heavy filtering is not necessary: it is already done by the environment. Only three colors are visible in water: red, orange and yellow; all others appear blue. This is why the objects we want to detect usually have these colors. Thus, color filtering is enough to detect them (shape and geometry do not matter).
To get around the low contrast level, HSV (Hue-Saturation-Value) filtering can give good results. A lamp also helps to keep the luminosity constant (and avoid moving shadows). It is also a way to estimate the distance to the object: the better the luminosity and contrast, the closer the object.

2.3. Prototype’s Movement Description

The AR2D2 micro submarine, shown in Figure 3 with its body-fixed frame ( O b , x b , y b , z b ) , has been designed and built. The center ( O b ) of this frame corresponds to the vehicle's center of gravity, and its axes are aligned with the main axes of the vehicle's symmetry. The movement in the horizontal plane is referred to as surge (along the x b axis) and sway (along the y b axis), while heave represents the vertical motion (along the z b axis). Roll, pitch and yaw, denoted ( ϕ , θ , ψ ) , are the Euler angles describing the orientation of the vehicle's body-fixed frame with respect to the earth-fixed frame ( O I , x I , y I , z I ) [8], while ( x , y , z ) denote the coordinates of the body-fixed frame center in the earth-fixed frame. The propulsion system consists of four thrusters that generate the rotational and translational motion. Concerning the rotational movement of this prototype, roll motion is performed through differential speed control of thrusters 1 and 2. In the same fashion, yaw motion is obtained using thrusters 3 and 4, while pitch motion is unactuated. The translational movement along the z axis is regulated by decreasing or increasing the combined speed of thrusters 1 and 2. In the same way, the translational movements along the x b - and y b -axes are obtained by using thrusters 3 and 4 and by controlling the yaw angle.

3. Dynamic Model

The dynamics of the vehicle that are expressed in the body-fixed frame can be written in a vectorial setting according to [8]:
$$M\dot{\nu} + C(\nu)\,\nu + D(\nu)\,\nu + g(\eta) = \tau + w_e,$$
$$\dot{\eta} = J(\eta)\,\nu,$$
where $M \in \mathbb{R}^{6\times6}$ is the inertia matrix, $C(\nu) \in \mathbb{R}^{6\times6}$ defines the Coriolis-centripetal matrix, $D(\nu) \in \mathbb{R}^{6\times6}$ represents the hydrodynamic damping matrix, $g(\eta) \in \mathbb{R}^{6}$ describes the vector of gravitational/buoyancy forces and moments, $\tau = [\tau_1, \tau_2]^T = [[X, Y, Z], [K, M, N]]^T \in \mathbb{R}^{6}$ defines the vector of control inputs; $w_e \in \mathbb{R}^{6}$ defines the vector of disturbances; $\nu = [\nu_1, \nu_2]^T = [[u, v, w], [p, q, r]]^T \in \mathbb{R}^{6}$ denotes the linear and angular velocity vector in the body-fixed frame; $\eta = [\eta_1, \eta_2]^T = [[x, y, z], [\phi, \theta, \psi]]^T \in \mathbb{R}^{6}$ is the position and attitude vector decomposed in the earth-fixed frame, and $J(\eta) \in \mathbb{R}^{6\times6}$ is the transformation matrix between the body-fixed and earth-fixed frames (for more details, see [9,10]).

3.1. Gravity/Buoyancy Forces and Torques

According to Archimedes's principle, the buoyant force $f_B$, applied at the center of buoyancy and acting in the direction opposite to the vehicle weight $f_W$, is expressed as follows:
$$f_B = \begin{bmatrix} 0 \\ 0 \\ -\rho g \nabla \end{bmatrix}, \qquad f_W = \begin{bmatrix} 0 \\ 0 \\ m g \end{bmatrix},$$
where $\rho$ represents the fluid density, $g$ the gravitational acceleration, $\nabla$ the displaced fluid volume and $m$ the mass of the vehicle. Now, considering that $W = mg$ and $B = \rho g \nabla$, and using the zyx-convention for navigation and control applications [11], the transformation matrix $J_1(\eta_2) = R_{z,\psi}\,R_{y,\theta}\,R_{x,\phi}$ is applied in order to express the weight and buoyancy forces in the body-fixed coordinate system:
$$F_B = J_1(\eta_2)^{-1} f_B, \qquad F_W = J_1(\eta_2)^{-1} f_W;$$
$$F_B = \begin{bmatrix} B\sin\theta \\ -B\cos\theta\sin\phi \\ -B\cos\theta\cos\phi \end{bmatrix}, \qquad F_W = \begin{bmatrix} -W\sin\theta \\ W\cos\theta\sin\phi \\ W\cos\theta\cos\phi \end{bmatrix}.$$
Thus, the restoring forces acting on the vehicle are $f_g = F_B + F_W$, that is:
$$f_g = \begin{bmatrix} (B-W)\sin\theta \\ (W-B)\cos\theta\sin\phi \\ (W-B)\cos\theta\cos\phi \end{bmatrix}.$$
On the other hand, the restoring moments are described by the following equation:
$$m_g = r_w \times F_W + r_b \times F_B,$$
where $r_w = [x_w, y_w, z_w]^T$ and $r_b = [x_b, y_b, z_b]^T$ represent the positions of the center of gravity (CG) and the center of buoyancy (CB), respectively. Based on the design of the vehicle, and in order to simplify further analysis, the origin of the body-fixed frame is chosen at the center of gravity; this implies that $r_w = [0, 0, 0]^T$, while the center of buoyancy is $r_b = [0, 0, z_b]^T$. For practical purposes, the buoyancy force is greater than the weight, i.e., $B - W = f_b > 0$. Notice that $f_b$ should be smaller than the force produced by the thrusters. Then, from Equations (6) and (7), we have:
$$g(\eta) = \begin{bmatrix} f_g \\ m_g \end{bmatrix} = \begin{bmatrix} f_b\sin\theta \\ -f_b\cos\theta\sin\phi \\ -f_b\cos\theta\cos\phi \\ z_b B\cos\theta\sin\phi \\ z_b B\sin\theta \\ 0 \end{bmatrix}.$$

3.2. Forces and Torques Generated by the Thrusters

Figure 3 shows the forces generated by the thrusters acting on the micro submarine. These are described relative to the body-fixed coordinate system as:
$$\hat{f}_1 = \begin{bmatrix} 0 \\ 0 \\ f_1 \end{bmatrix}; \quad \hat{f}_2 = \begin{bmatrix} 0 \\ 0 \\ f_2 \end{bmatrix}; \quad \hat{f}_3 = \begin{bmatrix} f_3 \\ 0 \\ 0 \end{bmatrix}; \quad \hat{f}_4 = \begin{bmatrix} f_4 \\ 0 \\ 0 \end{bmatrix}.$$
Summarizing, and using the notation of [12], it follows that:
$$\tau_1 = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} f_3 + f_4 \\ 0 \\ f_1 + f_2 \end{bmatrix},$$
and the body-fixed torques generated by the above forces are defined as:
$$\tau_2 = \sum_{i=1}^{4} l_i \times \hat{f}_i,$$
where $l_i = (l_{ix}, l_{iy}, l_{iz})$ is the position vector of the force $\hat{f}_i$, $i = 1, \dots, 4$, with respect to the body-fixed reference frame. Then, the torques generated by the thrusters are described as:
$$\tau_2 = \begin{bmatrix} K \\ M \\ N \end{bmatrix} = \begin{bmatrix} l_{1y}f_1 + l_{2y}f_2 \\ -(l_{1x}f_1 + l_{2x}f_2) \\ -(l_{3y}f_3 + l_{4y}f_4) \end{bmatrix}.$$
Finally, the complete vector of forces and torques is:
$$\tau = \begin{bmatrix} f_3 + f_4 \\ 0 \\ f_1 + f_2 \\ l_{1y}f_1 + l_{2y}f_2 \\ -(l_{1x}f_1 + l_{2x}f_2) \\ -(l_{3y}f_3 + l_{4y}f_4) \end{bmatrix}.$$

4. Control Strategy

For the design of the controller, it is common to assume that the hydrodynamic parameters involved in the dynamical model of the underwater vehicle are unknown. Indeed, they depend on effects and properties that are hard to model or estimate, like added mass, skin friction, vortex shedding, fluid characteristics, etc. Therefore, we propose using a nonlinear PD controller [13].
Let $u(t)$ be a PD controller, given by:
$$u(t) = K_P\, e(t) + K_D\, \frac{de(t)}{dt},$$
where $e(t) = r(t) - y(t)$ is the error, $r(t)$ represents the reference, $y(t)$ is the output measurement, and $(K_P, K_D)$ are the proportional and derivative gains. In Equation (13), we can notice that, if $e(t) \to \infty$, then $u(t) \to \infty$; this could lead to system oscillations or saturation of the actuators. In order to prevent damage to the actuators, we propose using a saturation function on each term of Equation (13).
Now, let $\sigma_{\bar{b}_i}(k_i h_i)$ be a saturation function, $\forall i = 1, 2$, with $\bar{b}_i, k_i > 0$, described in Figure 4 and defined as:
$$\sigma_{\bar{b}_i}(k_i h_i) = \begin{cases} \bar{b}_i, & \text{if } k_i h_i > \bar{b}_i, \\ k_i h_i, & \text{if } |k_i h_i| \le \bar{b}_i, \\ -\bar{b}_i, & \text{if } k_i h_i < -\bar{b}_i. \end{cases}$$
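The saturation function above can be sketched in a few lines of Python (a minimal transcription of Equation (14); the function and variable names are ours):

```python
def sat(x: float, b: float) -> float:
    """Saturation sigma_b(x): returns b if x > b, -b if x < -b,
    and x itself inside the linear band |x| <= b."""
    return max(-b, min(b, x))

# The argument passed in the controller is k_i * h_i, for example:
u1 = sat(2.0 * 3.5, 4.0)  # k = 2, h = 3.5 -> saturates at b = 4
```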

4.1. A Nonlinear PD Controller Based on Saturation Functions

According to Equations (13) and (14), we propose a nonlinear PD controller based on saturation functions as follows:
$$u(t) = \sigma_{\bar{b}_1}\big(k_1 e(t)\big) + \sigma_{\bar{b}_2}\Big(k_2 \frac{de(t)}{dt}\Big).$$
The above equation can be represented as:
$$u(t) = \sum_{i=1}^{2} u_i,$$
where $u_i = \sigma_{\bar{b}_i}(k_i h_i)$ represents the saturation function, $h_1$ is the error and $h_2$ is the error derivative. Then, according to Equation (14), we have:
$$u_i = \sigma_{\bar{b}_i}(k_i h_i) = \begin{cases} \bar{b}_i, & \text{if } k_i h_i > \bar{b}_i, \\ k_i h_i, & \text{if } |k_i h_i| \le \bar{b}_i, \\ -\bar{b}_i, & \text{if } k_i h_i < -\bar{b}_i, \end{cases}$$
which can be rewritten as:
$$u_i = \begin{cases} \operatorname{sign}(h_i)\,\bar{b}_i, & \text{if } |k_i h_i| > \bar{b}_i, \\ k_i h_i, & \text{if } |k_i h_i| \le \bar{b}_i. \end{cases}$$
In Equation (18), the tuning parameters of the controller described by Equation (15) are the gains $k_i$ and the saturation values $\bar{b}_i$, $\forall i = 1, 2$. Alternatively, the tuning parameters could be the saturation values $\bar{b}_i$ and the interval of $h_i$ over which $u_i$ is linear, so that we can choose the value of $h_i$ at which the control law saturates. To this end, we introduce a new parameter. Consider the point $h_i$ where $|u_i| = \bar{b}_i$, that is:
$$|u_i| = k_i |h_i| = \bar{b}_i \;\Rightarrow\; |h_i| = \bar{b}_i / k_i.$$
Then, we define:
$$d_i := \bar{b}_i / k_i$$
as the point beyond which:
$$u_i = \operatorname{sign}(h_i)\,\bar{b}_i \quad \forall\, |h_i| > d_i.$$
According to Equations (20) and (21), the control law given by Equation (18) can be represented in terms of the parameters $\bar{b}_i$ and $d_i$ as follows:
$$u_i = \begin{cases} \operatorname{sign}(h_i)\,\bar{b}_i, & \text{if } |h_i| > d_i, \\ \bar{b}_i\, d_i^{-1}\, h_i, & \text{if } |h_i| \le d_i, \end{cases}$$
where the tuning parameters of the controller are $\bar{b}_i$ and $d_i$, $\forall i = 1, 2$. In order to express Equation (22) in terms of $h_i$ when $|h_i| > d_i$, note that $\operatorname{sign}(h_i) = h_i\,|h_i|^{-1}$, so that:
$$\operatorname{sign}(h_i)\,\bar{b}_i = h_i\,|h_i|^{-1}\,\bar{b}_i = \bar{b}_i\,|h_i|^{-1}\,h_i.$$
Then, Equation (22) can be rewritten as:
$$u_i = \begin{cases} \bar{b}_i\,|h_i|^{-1}\,h_i, & \text{if } |h_i| > d_i, \\ \bar{b}_i\,d_i^{-1}\,h_i, & \text{if } |h_i| \le d_i. \end{cases}$$
Finally, the control law defined by Equation (15) can be represented as:
$$u(t) = u_1 + u_2 = k_p(e)\,e(t) + k_d(\dot{e})\,\dot{e}(t),$$
$$k_p(e) = \begin{cases} \bar{b}_1\,|e(t)|^{-1}, & \text{if } |e(t)| > d_1, \\ \bar{b}_1\,d_1^{-1}, & \text{if } |e(t)| \le d_1, \end{cases}$$
$$k_d(\dot{e}) = \begin{cases} \bar{b}_2\,|\dot{e}(t)|^{-1}, & \text{if } |\dot{e}(t)| > d_2, \\ \bar{b}_2\,d_2^{-1}, & \text{if } |\dot{e}(t)| \le d_2. \end{cases}$$
The advantage of this controller is that the maximum forces and torques are fixed by the parameters $\bar{b}_1$ and $\bar{b}_2$. Thus, we are sure that the actuators will not be damaged; nevertheless, the chosen bounds must still be large enough to correct the system error.
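The variable-gain form above can be transcribed directly into code. The sketch below implements the nonlinear PD law $u = k_p(e)\,e + k_d(\dot e)\,\dot e$ with the two piecewise gains; the numeric gain values in the test call are arbitrary illustrations, not the vehicle's tuning.

```python
def var_gain(x: float, b: float, d: float) -> float:
    """Piecewise gain: b/|x| outside the linear zone (|x| > d),
    constant b/d inside it, so that gain * x never exceeds b."""
    return b / abs(x) if abs(x) > d else b / d

def nonlinear_pd(e: float, e_dot: float, b1: float, d1: float,
                 b2: float, d2: float) -> float:
    """Nonlinear PD control law: each term is bounded in magnitude
    by b1 and b2 respectively, so |u| <= b1 + b2."""
    return var_gain(e, b1, d1) * e + var_gain(e_dot, b2, d2) * e_dot
```

Note that for $|e| > d_1$ the proportional term equals $\operatorname{sign}(e)\,\bar{b}_1$, which is exactly the saturated branch of Equation (22).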

4.2. Stability Proof

Considering Equations (1) and (2), we propose the following control input:
$$\tau = g(\eta) - J^T(\eta)\,\tau_{NPD},$$
$$\tau_{NPD} = \sigma_{\bar{b}_p}\big(k_p\, e(t)\big) + \sigma_{\bar{b}_d}\big(k_d\, \dot{e}(t)\big).$$
The latter equation can be rewritten as:
$$\tau_{NPD} = K_p(\cdot)\,e(t) + K_d(\cdot)\,\dot{e}(t),$$
$$K_p(\cdot) = \operatorname{diag}\big(K_{p1}(\cdot), K_{p2}(\cdot), \dots, K_{pn}(\cdot)\big) > 0,$$
$$K_d(\cdot) = \operatorname{diag}\big(K_{d1}(\cdot), K_{d2}(\cdot), \dots, K_{dn}(\cdot)\big) > 0,$$
$$K_{pi} = \begin{cases} \bar{b}_{pi}\,|e_i(t)|^{-1}, & \text{if } |e_i(t)| > d_{pi}, \\ \bar{b}_{pi}\,d_{pi}^{-1}, & \text{if } |e_i(t)| \le d_{pi}, \end{cases}$$
$$K_{di} = \begin{cases} \bar{b}_{di}\,|\dot{e}_i(t)|^{-1}, & \text{if } |\dot{e}_i(t)| > d_{di}, \\ \bar{b}_{di}\,d_{di}^{-1}, & \text{if } |\dot{e}_i(t)| \le d_{di}. \end{cases}$$
Considering the regulation case:
$$\eta_d = \text{const} \;\Rightarrow\; \dot{\eta}_d = 0,$$
and defining:
$$e = \eta - \eta_d \;\Rightarrow\; \dot{e} = \dot{\eta},$$
Equation (29) can be rewritten as:
$$\tau = g(\eta) - J^T(\eta)\big[K_p(e)\,e + K_d(\dot{e})\,\dot{\eta}\big].$$
Now, the closed-loop system is:
$$M\dot{\nu} + C(\nu)\,\nu + D(\nu)\,\nu = -J^T(\eta)\big[K_p(e)\,e + K_d(\dot{e})\,\dot{\eta}\big].$$
Considering Equation (2), we have:
$$M\dot{\nu} + C(\nu)\,\nu + D(\nu)\,\nu = -J^T(\eta)K_p(e)\,e - J^T(\eta)K_d(\dot{e})J(\eta)\,\nu.$$
Defining $K_{dd}(\eta, \dot{e}) = J^T(\eta)\,K_d(\dot{e})\,J(\eta)$, the previous equation can be rewritten as:
$$M\dot{\nu} + C(\nu)\,\nu + D(\nu)\,\nu = -J^T(\eta)K_p(e)\,e - K_{dd}(\eta, \dot{e})\,\nu.$$
In state-space form:
$$\frac{d}{dt}\begin{bmatrix} e \\ \nu \end{bmatrix} = \begin{bmatrix} J(\eta)\,\nu \\ M^{-1}\big[-J^T(\eta)K_p(e)\,e - K_{dd}(\eta, \dot{e})\,\nu - C(\nu)\,\nu - D(\nu)\,\nu\big] \end{bmatrix}.$$
Observe that the origin is the unique equilibrium point. Now, we propose the following Lyapunov function candidate:
$$V(e, \nu) = \frac{1}{2}\,\nu^T M\,\nu + \int_0^e \xi^T K_p(\xi)\, d\xi.$$
According to Lemma 2 of [14], $V(e, \nu)$ is globally positive definite and radially unbounded. The time derivative of the Lyapunov function candidate is:
$$\dot{V}(e, \nu) = \nu^T M\,\dot{\nu} + e^T K_p(e)\,J(\eta)\,\nu.$$
By substituting the closed-loop Equation (41) into (45), we obtain:
$$\dot{V}(e, \nu) = -\nu^T J^T(\eta)K_p(e)\,e - \nu^T K_{dd}(\eta, \dot{e})\,\nu - \nu^T C(\nu)\,\nu - \nu^T D(\nu)\,\nu + e^T K_p(e)\,J(\eta)\,\nu.$$
Since $K_p(e) = K_p^T(e)$ and $C(\nu) = -C^T(\nu)$, Equation (46) becomes:
$$\dot{V}(e, \nu) = -\nu^T\big[K_{dd}(\eta, \dot{e}) + D(\nu)\big]\,\nu.$$
Assuming that $D(\nu) > 0$, and recalling that $K_d > 0$ implies that $K_{dd} > 0$ is a symmetric matrix, we obtain that $\dot{V}(e, \nu)$ is globally negative semidefinite, and therefore we conclude stability of the equilibrium point. In order to prove asymptotic stability, we apply Krasovskii-LaSalle's theorem; then:
$$\Omega = \left\{ \begin{bmatrix} e \\ \nu \end{bmatrix} \in \mathbb{R}^{2n} : \dot{V}(e, \nu) = 0 \right\} = \left\{ \begin{bmatrix} e \\ \nu \end{bmatrix} = \begin{bmatrix} e \\ 0 \end{bmatrix} \right\}.$$
Introducing $\nu = 0$ and $\dot{\nu} = 0$ into Equation (41), we obtain $e = 0$; therefore, we conclude that the equilibrium point is globally asymptotically stable.

5. Computer Vision Algorithm

5.1. Data Processing Chain

Processing an image requires many steps, each one narrowing the quantity of data until it yields what we need (see Figure 5).
Firstly, a raw image taken from the camera is denoised with a Gaussian blur. Then, only the pixels of the wanted colors are kept, giving a binary image. At this step, the image gives a geometric representation of the interesting object, without any color considerations.
Secondly, the algorithm has to localize the object, in order to obtain coordinates, orientation, size and other variables that can be easily manipulated. It detects all connected groups of pixels (called "blobs"), and returns their size and centroid position. The biggest one is considered the object that the robot has to follow.
Finally, depending on its state, the robot can decide to move toward or follow the detected object.

5.2. Image Preprocessing

Before trying to extract data from an image, we need to apply some filters to remove most of the noise and uninteresting content. This usually involves spatial and temporal blur to reduce the noise variance, and dynamic filters to correct the noise mean.
Images provided by a camera contain noise, which can corrupt computer vision algorithms. For instance, in order to detect features, we need to extract gradients, which can be distorted by noise variance. This kind of noise can be reduced by blur, especially Gaussian blur for Gaussian noise, which dilutes the variance into neighboring pixels. However, this two-dimensional spatial blur should not be too strong, at the risk of producing diplopia. Moreover, camera video also has a third dimension: time. A temporal blur along consecutive images can also reduce noise variance. However, it is only really effective on static video: mixing too many different images produces a ghost effect [15].
These filters are static; they cannot change their parameters (which are calibrated for the hardware). We need dynamic filters to adapt automatically to the environment. Indeed, luminosity and contrast can change over time. A filter can keep the luminosity constant, or change the color distribution, by analyzing and equalizing the hue histogram. As a trade-off, it increases noise variance [16]. The main issue is that the parameters must be chosen depending on the environment. Fully autonomous methods exist, but they consume too many CPU resources: they can be useful to improve a single interesting image, but they are too heavy for autonomous navigation [17].

5.3. Extracting Interesting Data

The two main properties of objects in an image are color and shape. Color recognition is easy to implement, but not discriminating enough (different objects can have the same color). On the contrary, shape recognition is reliable, but not so easy to program.
We can combine advantages with these two methods: first, color recognition only keeps a little data corresponding to the right color, and then shape recognition processes these lightened data.

5.3.1. HSV Filtering

The first step is HSV filtering. It keeps pixels whose hue corresponds to the right color, and whose saturation and value are above a threshold, to eliminate gray and dark areas. The result is a binary image, in which white pixels are kept, while the (blue) background pixels are rejected. The issue is that many good pixels are rejected, which creates little holes. The solution is to dilate the image to recover these pixels, and then erode it to remove the border effect of the dilation (see Figure 6).

5.3.2. Blob Detection

Blob detection consists of linking adjacent pixels to create groups, each one corresponding to an object. This algorithm [18] is implemented in the "findContours" function of OpenCV. Then, the centroid of each blob is computed, whose coordinates are data the navigation algorithm can understand (see Figure 7).

5.4. Decision Making

Orders for motors are generated depending on the state of the FSM (Finite State Machine). Main states are:
  • Remote control (not autonomous; the user sends orders).
  • Stabilize (keep same position and posture).
  • Go up and Go down (change only depth and stabilize).
  • Explore (follow a planned path).
  • Follow an object by the Raspberry Pi camera.

Ball Following through the Algorithm Vision

Blob detection gives the coordinates and size of each blob, given the HSV boundaries of the interesting objects. The robot must therefore know before the mission what it is looking for. The first step is to show the robot an OPI (Object of Potential Interest) and manually select HSV boundaries until it is well detected. This must be done underwater, because colors are not the same as in the laboratory (Figure 6 and Figure 7 show pictures of the same OPI, outside and inside water). Surface reflections caused by the surrounding light must also be taken into account to obtain a better detection of the object. In addition, another object was detected, as shown in Figure 7; owing to its yellow color, its detection was fine as long as sunlight reflected at the surface could be neglected. Therefore, the detection of objects is good as long as they are not close to the surface, where their reflection appears or they are confused with other objects of the same color.
Then, the robot is able to find the coordinates $(x_{opi}, y_{opi})$ of the biggest object seen by its camera, which it will try to follow. Due to the shape of AR2D2, a simple proportional control law is enough. By comparing these coordinates to the center $(x_c, y_c)$ of the image, and the object's size to a threshold $size_{thresh}$, we obtain the following control law:
$$\begin{bmatrix} mot_1 \\ mot_2 \\ mot_3 \\ mot_4 \end{bmatrix} = \begin{bmatrix} +K_{py} \\ +K_{py} \\ +K_{px} \\ -K_{px} \end{bmatrix} \ast \begin{bmatrix} y_{opi} - y_c \\ y_{opi} - y_c \\ x_{opi} - x_c \\ x_{opi} - x_c \end{bmatrix} + K_{size} \ast \begin{bmatrix} 0 \\ 0 \\ size_{thresh} - size \\ size_{thresh} - size \end{bmatrix},$$
where $\ast$ denotes the element-wise product. $K_{px}$ and $K_{py}$ try to minimize the horizontal and vertical error of the OPI's centroid with respect to the center of the image, while $K_{size}$ keeps the robot at a good distance from the object (here, the size gives an approximation of the distance). In Figure 8, we can observe an experimental test of the detection of the ball.
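The follower law above can be transcribed directly; the gain values below are illustrative placeholders, not the ones used on AR2D2.

```python
def follower_commands(x_opi, y_opi, size, x_c, y_c, size_thresh,
                      kp_x=0.5, kp_y=0.5, k_size=0.01):
    """Proportional object-follower law: motors 1-2 correct the vertical
    image error, motors 3-4 combine the horizontal error (differentially,
    to yaw toward the object) with the size-based distance term.
    The gains are hypothetical placeholders."""
    ey = y_opi - y_c          # vertical image error
    ex = x_opi - x_c          # horizontal image error
    es = size_thresh - size   # distance proxy: smaller blob = farther away
    mot1 = kp_y * ey
    mot2 = kp_y * ey
    mot3 = kp_x * ex + k_size * es
    mot4 = -kp_x * ex + k_size * es
    return mot1, mot2, mot3, mot4
```

When the blob is centered and at the threshold size, all four commands vanish; a blob to the right of center drives motors 3 and 4 differentially, yawing the vehicle toward it.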

6. Simulation and Results

Matlab Simulink (R2016a, MathWorks) was used to apply the chosen control law to the dynamic model equations in order to stabilize the surge and heave movements.
Simulation results are presented in order to observe the performance of the translational movements under the proposed control law. The nonlinear PD control was tuned to obtain the best behavior against disturbances, with the saturation parameters of the surge and heave controllers set to $b = \pm 4$ and $d = \pm 2$. The time evolution of the vehicle's $x$ position is shown in Figure 9, while the $z$ position of the closed-loop system is plotted in Figure 10. Observe that the state positions $(x, z)$ are externally perturbed and the nonlinear PD control is able to restore the vehicle position. The initial conditions used for this simulation are $x(0) = 0$, $z(0) = 0$, $\dot{x}(0) = 0$, $\dot{z}(0) = 0$, and the desired values are $x_d = 1.5$ and $z_d = 1$.
Control input $U_1$ is applied directly to the vertical forces and control input $U_2$ to the horizontal forces. Figure 9 and Figure 10 show both control inputs. Observe that control input $U_1$ converges to the weight value $f_W = 1$, whilst control input $U_2$ converges to zero, due to the combination of the horizontal forces.
In order to characterize the thrusters, we performed tests in a fish tank, measuring the reaction of a thruster on the z-axis to the PWM (Pulse Width Modulation) signal for the maneuverability index (see Figure 11). The linear equation obtained by fitting the curve is F = (PWM/50) − 1. On the other hand, as shown in recent works such as [19], it is important to have a deeper knowledge of the dynamics of all the thrusters in order to obtain the best performance and efficiency in the maneuvers of the vehicle for control applications. In this case, the micro AUV's thrusters are laid out near the center of gravity to obtain better maneuverability.
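The fitted characteristic and its inverse can be written down directly. We assume here that the linear fit F = (PWM/50) − 1 can be inverted over the usable range to command a desired force; the PWM quantity is the commanded value from the Figure 11 characterization.

```python
def thrust_from_pwm(pwm: float) -> float:
    """Linear fit from the fish-tank tests: F = (PWM / 50) - 1."""
    return pwm / 50.0 - 1.0

def pwm_from_thrust(force: float) -> float:
    """Inverse mapping, assuming the same linear fit holds over the
    usable range: PWM = 50 * (F + 1)."""
    return 50.0 * (force + 1.0)
```

Under this fit, a PWM value of 50 corresponds to zero net thrust, which is the neutral point the controller works around.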
Figure 12 and Figure 13 show the experimental results for the surge and heave movements and their respective regulation error signals, while the thruster torques on both axes are depicted in Figure 14. Notice that the proposed control law presents good performance. The gains were tuned to obtain the best behavior of the micro submarine when exposed to external disturbances. Concerning the stability improvement, it is possible to tune the gains with more accuracy, taking advantage of the controller's ability to saturate the control input and protect the actuators.
The ball followed a random path (it is not a stabilization test) with random velocity. The robot managed to follow it, even when the object went out of the image. Indeed, the inertia of the robot in the water preserves its movement, allowing the robot to find the OPI again.

7. Conclusions

The development of this prototype was motivated by the need for a small autonomous vehicle for operation in closed environments. We developed and presented the embedded control system of the AR2D2 micro AUV. In this initial work, we considered the problem of set-point regulation of the heave and surge movements. The vehicle was designed with the aim of having a small platform able to perform complex maneuvers; to this end, the dynamic decoupling of surge and sway turned out to be a key issue. In turn, the embedded navigation system was reduced as much as possible: on the one hand, one is restricted to working in closed volumes, and, on the other hand, we are looking toward cooperative tasks as the major goal of our future research work [20]. In this paper, we used a typical nonlinear controller with saturation functions, and the closed-loop stability was demonstrated on the basis of Lyapunov's theory. The desired behavior was validated by Matlab simulations and experiments. In addition, the vehicle performs real-time embedded image processing in order to recognize and follow an object of interest. The results show a good ability to do so, without losing the OPI. In the future, we will implement another control strategy aiming to improve the performance along the yaw angle and the regulation in x, in order to stay close to the object for further inspection. Introducing filters along time can also improve the behavior of the robot and stabilize the recognition of the object (by processing sequential video images instead of single pictures). Moreover, we will conduct experiments in a natural environment with less visibility.

Author Contributions

J.A.M.-A. and C.R. designed the prototype, implemented the algorithms of artificial vision and control strategy, obtained the experimental results and wrote this paper; E.C.-M. validated the control scheme and provided guidance for experiments, T.S.-J. guided in the dynamic model process, and provided supervision and support of the work, L.G.G.-V. performed the simulations and revised the article.


Funding

This study is part of project number 201441, "Implementation of oceanographic observation networks (physical, geochemical, ecological) for generating scenarios of possible contingencies related to the exploration and production of hydrocarbons in the deepwater Gulf of Mexico", granted by the SENER-CONACyT Hydrocarbons Sectorial Fund.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Antonelli, G. Underwater Robots, 3rd ed.; Springer Tracts in Advanced Robotics; Springer: Berlin, Germany, 2014; ISBN 978-3-319-02876-7.
  2. Carreras, M.; Ridao, P.; García, R.; Ribas, D.; Palomeras, N. Inspección visual subacuática mediante robótica submarina. Rev. Iber. Aut. Inf. Ind. 2012, 9, 34–45.
  3. Clark, C.; Olstad, C.; Buhagiar, K.; Gambin, T. Archaeology via Underwater Robots: Mapping and Localization within Maltese Cistern Systems. In Proceedings of the 10th International Conference on Control, Automation, Robotics and Vision, Hanoi, Vietnam, 17–20 December 2008; pp. 662–667.
  4. Monroy, J.; Campos, E.; Torres, J. Attitude Control of a Micro AUV Through an Embedded System. IEEE Lat. Am. Trans. 2017, 15, 603–612.
  5. Watson, S.; Green, P. Depth Control for Micro-Autonomous Underwater Vehicles (μAUVs): Simulation and Experimentation. Int. J. Adv. Robot. Syst. 2014, 11, 1–10.
  6. Fechner, S.; Kerdels, J.; Albiez, J. Design of a μAUV. In Proceedings of the 4th International Symposium on Autonomous Minirobots for Research and Edutainment, Buenos Aires, Argentina, 2–5 October 2007; pp. 99–106; ISBN 978-3-939350-35-4.
  7. Rodríguez, P.; Piera, J. Mini AUV, a platform for future use on marine research for the Spanish Research Council? Instrum. ViewPo. 2005, 4, 14–15.
  8. Fossen, T. Guidance and Control of Ocean Vehicles; John Wiley and Sons Ltd., University of Trondheim: Trondheim, Norway, 1999; ISBN 0471941131.
  9. Goldstein, H.; Poole, C.; Safko, J. Classical Mechanics, 2nd ed.; Addison-Wesley: Boston, MA, USA, 1983; ISBN 8185015538.
  10. Marsden, J. Elementary Classical Analysis; W.H. Freeman and Company: San Francisco, CA, USA, 1974.
  11. Fossen, T. Marine Control Systems: Guidance, Navigation, and Control of Ships, Rigs and Underwater Vehicles; Marine Cybernetics: Trondheim, Norway, 2002; ISBN 8292356002.
  12. SNAME, Technical and Research Committee. Nomenclature for Treating the Motion of a Submerged Body Through a Fluid; The Society of Naval Architects and Marine Engineers: New York, NY, USA, 1950; pp. 1–5.
  13. Campos, E.; Monroy, J.; Abundis, H.; Chemori, A.; Creuze, V.; Torres, J. A nonlinear controller based on saturation functions with variable parameters to stabilize an AUV. Int. J. Nav. Arch. Ocean Eng. 2018, 10, in press.
  14. Kelly, R.; Carelli, R. A Class of Nonlinear PD-type Controllers for Robot Manipulators. J. Field Robot. 1996, 13, 793–802.
  15. Bascle, B.; Blake, A.; Zisserman, A. Motion Deblurring and Super-Resolution from an Image Sequence. In Proceedings of the 4th European Conference on Computer Vision, Cambridge, UK, 15–18 April 1996; pp. 571–582; ISBN 9783540611233.
  16. Pizer, S.; Amburn, E.; Austin, J.; Cromartie, R.; Geselowitz, A.; Greer, T.; Romeny, B.; Zimmerman, J. Adaptive Histogram Equalization and Its Variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368.
  17. Bazeille, S.; Quidu, I.; Jaulin, L.; Malkasse, J. Automatic Underwater Image Pre-Processing. In Proceedings of the SEA TECH WEEK Caracterisation Du Milieu Marin, Brest, France, 16–19 October 2006.
  18. Suzuki, S.; Abe, K. Topological Structural Analysis of Digitized Binary Images by Border Following. Comput. Vis. Graph. Image Process. 1985, 30, 32–46.
  19. Pugi, L.; Allotta, B.; Pagliai, M. Redundant and reconfigurable propulsion systems to improve motion capability of underwater vehicles. Ocean Eng. 2018, 148, 376–385.
  20. Yoon, S.; Qiao, C. Cooperative Search and Survey Using Autonomous Underwater Vehicles (AUVs). IEEE Trans. Parallel Distrib. Syst. 2011, 22, 364–379.
Figure 1. Experimental prototype AR2D2.
Figure 2. Electronic architecture of the embedded system.
Figure 3. The AR2D2 μ AUV, with the body-fixed frame ( O b , x b , y b , z b ) , the earth-fixed frame ( O I , x I , y I , z I ) , and the forces generated by its four thrusters.
Figure 4. Saturation function.
Figure 5. Data processing chain.
Figure 6. Extraction of the color of interest with an HSV filter.
Figure 7. Segmentation into different blobs and the detection of another object.
Figure 8. Ball-following test in the swimming pool.
Figure 9. Surge movement and its control input U2.
Figure 10. Heave movement and its control input U1.
Figure 11. Thruster test.
Figure 12. Experimental surge movement and its regulation error signal.
Figure 13. Experimental heave movement and its regulation error signal.
Figure 14. Experimental reaction of the torques on the x- and z-axes.

Monroy-Anieva, J.A.; Rouviere, C.; Campos-Mercado, E.; Salgado-Jimenez, T.; Garcia-Valdovinos, L.G. Modeling and Control of a Micro AUV: Objects Follower Approach. Sensors 2018, 18, 2574.