Quadrotor UAV Dynamic Visual Servoing Based on Differential Flatness Theory

In this paper, we propose 2D dynamic visual servoing (Dynamic IBVS), in which a quadrotor UAV tracks a moving target using a single downward-facing perspective camera. As an application, we consider the tracking of a car-type vehicle. In this case, data related to the altitude and the lateral angles are of no importance to the visual system. Indeed, to perform the tracking, we only need to know the longitudinal displacements (along the x- and y-axes) and the orientation about the z-axis. However, those data are necessary for the quadrotor's guidance problem. Thanks to the concept of differential flatness, we demonstrate that if we manage to extract the displacements along the three axes and the orientation about the yaw angle (the vertical axis) of the quadrotor, we can control all the other variables of the system. For this, we consider a camera equipped with a vertical stabilizer that keeps it in a vertical position during its movement (a gimbaled camera). Other specialized sensors measure information regarding altitude and lateral angles. In the case of classic 2D visual servoing, the computation of the kinematic torsor of the quadrotor in no way guarantees the physical realization of the instructions, given that the quadrotor is an under-actuated system. Indeed, the setpoint has dimension six, while the quadrotor is controlled by only four inputs. In addition, the dynamics of a quadrotor are generally very fast, which requires a high-frequency control law. Furthermore, the complexity of the image processing stage can cause delays in motion control, which can lead to target loss. A new dynamic 2D visual servoing method (Dynamic IBVS) is proposed. This method makes it possible to generate in real time the movements the quadrotor needs in order to track the target (vehicle), using a single point of this target as visual information. This point can represent the center of gravity of the target or any other part of it.
A control by flatness has been proposed, which guarantees the controllability of the system and ensures the asymptotic convergence of the generated trajectory in the image plane. Numerical simulations are presented to show the effectiveness of the proposed control strategy.


Introduction
The navigation of unmanned aerial vehicles (UAVs) using a vision system has attracted much interest during the last few decades in several fields of application, such as the military field and civil society [1,2], traffic surveillance [3,4], mapping and exploration [5,6], and agriculture [7,8]. Visual servoing methods use visual information to control a vehicle's pose relative to specific visual targets. They are divided into two main families [9]: position-based visual servoing (PBVS), which regulates the pose with respect to visual targets in Cartesian space, and image-based visual servoing (IBVS), which regulates features directly in the image plane. IBVS therefore does not need a priori information on the geometry of the target, as in the case of PBVS. Moreover, it is easier to compute and more robust than PBVS.
The IBVS technique is a control method that guarantees the convergence of the visual features of a target toward the desired values in the image, as stated in [9]. IBVS methods may face challenges, such as significant tracking inaccuracies or total tracking failure, in situations where the motion of the target changes over time or is not accurately anticipated [10]. Predictive visual control (PVC) tries to solve this problem by incorporating model predictive control constraints [11,12]. These constraints include the field of view (FOV), actuator output limitations, and the workspace. In [13], a nonlinear predictive controller was effectively employed to produce the desired velocity for an underwater vehicle while adhering to visibility limitations. The same approach was investigated in [14] to develop a tracking controller for UAVs. The model predictive control (MPC) scheme has also been applied to a mobile robot [15] and a quadrotor [16]; in those works, MPC was employed to ensure that the visual feature of the target remains at the intended location within the image. In the context of navigation, the IBVS system has been observed to encounter occlusion issues leading to missing feature points. To address this, artificial patterns have been utilized to predict the missing feature points and maintain the proper functioning of the system, as documented in [17]. Nevertheless, MPC methodologies are restricted to immobile targets. Furthermore, a high-performance processing stage is necessary: predictive control solves an optimization problem at every instant in real time, which requires intensive computation. This situation may result in a substantial computational load, particularly for complex or fast systems, as occurs in our scenario.
For effective tracking control, it is crucial to have knowledge of the movement of a dynamic target. This information is frequently inaccessible and challenging to anticipate. Closed-loop control employs various image characteristics to maintain the target within the field of view (FOV), as stated in [18]. Various techniques have been developed for feature extraction and matching in image processing. These include RGB-based methods [19], scale-invariant feature transform (SIFT) [20], and speeded-up robust features (SURF) [21]. Notwithstanding, these techniques exhibit constraints with respect to object detection and the assessment of the camera's motion relative to the target. Quadrotors have been subjected to vision-based optimization techniques [22] for the purpose of tracking a moving target while avoiding obstacles. However, this is only possible if the target's position is predetermined. In [23], alternative model-based optimization techniques were employed to ensure reliable detection of an unmanned aerial vehicle (UAV), but the focus of the study was primarily on utilizing image features for indoor localization instead of target tracking. Several methods have been employed to track humans, including the use of bounding boxes and minutiae [24,25]. Nevertheless, the targets tracked using these methods moved at slow speeds, which could result in their movements being ignored. Furthermore, it has been reported that the target orientation is frequently unavailable [26]. In [27], model-based predictive control was demonstrated for tracking a periodically moving target, which reduced the complexity of the controller design. The aforementioned methods failed to consider the interaction between the unmanned aerial vehicle (UAV) and the intended target. Additionally, the angle between the camera and the target was neither modeled nor quantified.
Quadrotor dynamics are typically fast and unpredictable. To control this type of system, it is necessary to develop a high-frequency controller [28]. On the other hand, visual servoing goes through an image processing step that aims to extract the characteristics of the object. This can have a detrimental effect on the frequency of control-law computation. Furthermore, the complexity of image processing can cause delays in motion control, which can lead to the loss of a target. In order to solve these problems, a new 2D dynamic visual servoing (Dynamic IBVS) method is proposed. Its objective is to generate the necessary movements of the quadrotor to keep the target centered in the image plane. The proposed method transforms the problem into an asymptotic tracking process of a desired trajectory in the image plane, using the inverse dynamics of the estimated model of the vehicle to be followed. Since the proposed method allows the altitude of the quadrotor to be controlled independently of the other variables, it is possible to set the altitude to a high level in order to reduce the risk of losing the target out of sight of the on-board camera, even during discontinuous and significant movements of the target. Moreover, this method only uses a single point on the target as a visual primitive. To increase the robustness and flexibility of the detection, this point can represent either the center of gravity of the target to be tracked or a specific part of the target.
The flatness property of a system is a relatively recent concept in automatic control that was proposed and developed in 1992 by M. Fliess et al. [29]. This property, which makes it possible to parameterize the dynamic behavior of a system in a very simple way, is based on identifying a set of fundamental variables of the system: its flat outputs. This point of view, as we demonstrate, has multiple and interesting consequences for the control of systems. First of all, it puts back at the center of process control the notion of the trajectory that the system must execute; that is to say, the movement requested from a system must above all be achievable by this system. This avoids many of the problems faced by automation engineers. One of the first steps in flatness control is to generate an adequate desired trajectory that implicitly takes the system model into account.
In this work, we consider as an application the tracking of a car-type vehicle (Dynamic IBVS) by a quadrotor UAV equipped with a single downward-facing perspective camera. In our case, the information concerning the altitude and the lateral angles (the roll angle and the pitch angle) is of no importance to the visual system. Indeed, to perform the tracking, we only need to know the longitudinal displacements (along the x- and y-axes) and the orientation about the z-axis. However, those details are necessary for the problem of guiding the quadrotor. In [30–33], the authors proposed to use a rotating image plane, called a "virtual image plane", thus making it possible to obtain decoupled image-feature dynamics. This method is applied to a fixed target and requires the detection of at least three points on the target. Thanks to the concept of differential flatness, we demonstrate that if we manage to extract the displacements along the three axes and the orientation about the yaw angle (the vertical axis) of the quadrotor, we can control all the other variables of the system. For this, we consider the following conditions. The camera is equipped with a vertical stabilizer, which keeps the camera in a vertical position during its movement; in other words, we neglect the lateral angles. It is also assumed that the quadrotor flies at a given altitude. This altitude is not necessarily constant, but it must be known a priori. It should be noted here that these hypotheses only concern the visual system, which makes it possible to generate the movements necessary for the quadrotor to ensure the tracking of the vehicle. We use additional sensors to measure these quantities in order to achieve the trajectory that the visual system has thus generated.
In the case of traditional 2D visual servoing, the computation of the quadrotor's kinematic torsor does not guarantee the physical realization of the control instructions (a controllability issue compounded by under-actuation). In fact, the kinematic torsor has six dimensions, whereas the quadrotor has only four inputs. With only four control inputs, it is nearly impossible to implement the six instructions generated by the visual servoing algorithm. To solve this problem, ref. [34] proposed a linear model predictive control (MPC), but this method uses linear approximations and is not generally suitable for systems with very fast dynamics. Dongliang Zheng et al. [31] proposed a backstepping control; however, many modifications to the model were necessary to render it in a particular form.
The proposed control by flatness takes into account all the variables of the system, guarantees its controllability, and ensures the asymptotic convergence of the resulting trajectory. The contributions of this study can be summarized as follows:
i. Using the concept of differential flatness, we have developed a new method of dynamic visual servoing for quadrotors. This method generates the necessary movements (translation and orientation) in order to keep the target centered in the image plane.
ii. Since quadrotors are fast systems working in outdoor environments, we have simplified the image processing and ensured the robustness of the visual primitive by using only one point of the target.
iii. Since quadrotors are under-actuated and strongly coupled systems, the realization of the kinematic torsor generated by the visual servoing algorithm becomes a problem. To solve this, we have proposed a control by flatness that ensures controllability and asymptotic tracking of the generated trajectory.
iv. In order to ensure robustness against climatic conditions, such as wind, we have added a PD-type correction term to the open-loop flatness control.
This paper is organized as follows: Section 2 presents the dynamic model of the quadrotor. The tracking strategy for a vehicle is detailed in Section 3. This strategy includes three loops: the first controls the altitude of the quadrotor; the second is dedicated to the generation of the trajectory; and the third ensures tracking by flatness. Section 4 displays the simulation results validating the proposed approach.

The Dynamic Model of the Quadrotor
The commonly used quadrotor dynamic model [35–37] is given by Equation (1). This model has been validated by numerous experimental tests.
where (x, y, z) are the three positions; (θ, φ, ψ) are the three Euler angles, representing pitch, roll, and yaw, respectively; g is the acceleration of gravity; l is the distance from the center of gravity to each rotor; m is the total mass of the quadrotor; (I 1 , I 2 , I 3 ) are the moments of inertia along x, y, and z; (K 1 , K 2 , K 3 , K 4 , K 5 , K 6 ) are the drag coefficients (in the rest of our work, we assume that the drag is zero, since drag is negligible at low speed); and (u 1 , u 2 , u 3 , u 4 ) are the control inputs defined in Equation (2) [36], where (T 1 , T 2 , T 3 , T 4 ) are the thrusts generated by the four rotors and can be considered the actual system control inputs; C is the force–moment scaling factor; u 1 represents the total thrust on the quadrotor UAV body along Z; u 2 and u 3 are the pitch and roll inputs; and u 4 is the yaw input.
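Under the zero-drag assumption stated above, the commonly used model takes the following explicit form. This is a standard formulation from the literature, written with the variables just defined; it is a best-effort restatement, and the sign conventions of the paper's Equation (1) may differ slightly:

```latex
% Zero-drag quadrotor model (standard form; a hedged reconstruction,
% not necessarily identical to the paper's Equation (1)).
\begin{align}
\ddot{x} &= \frac{u_1}{m}\left(\cos\varphi\,\sin\theta\,\cos\psi + \sin\varphi\,\sin\psi\right) \\
\ddot{y} &= \frac{u_1}{m}\left(\cos\varphi\,\sin\theta\,\sin\psi - \sin\varphi\,\cos\psi\right) \\
\ddot{z} &= \frac{u_1}{m}\,\cos\varphi\,\cos\theta - g \\
\ddot{\theta} &= \frac{l}{I_1}\,u_2, \qquad
\ddot{\varphi} = \frac{l}{I_2}\,u_3, \qquad
\ddot{\psi} = \frac{1}{I_3}\,u_4
\end{align}
```

The first two lines couple the horizontal accelerations to the attitude angles, which is what makes the system under-actuated: the lateral angles must tilt to produce horizontal motion.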

Tracking Strategy
The control strategy of the quadrotor for ensuring the tracking of a vehicle is given in Figure 1. The quadrotor takes a desired altitude, z d , and as soon as it detects the vehicle to be pursued, it joins it and ensures its tracking. The quadrotor used in this work is equipped with a camera with a stabilizer that keeps the camera upright during its movement. Once we have generated the movements necessary for the quadrotor to ensure the tracking of the vehicle, a flatness control technique is proposed to achieve this objective.
We demonstrate in this work that a single point of the object to be tracked is enough for our proposed algorithm to achieve visual servoing. The trajectory generation block uses the coordinates, in the image plane, of this point to generate the necessary movements for the quadrotor in order to track the vehicle. In this control strategy, we develop three control loops, namely: the loop that controls the altitude of the quadrotor; the loop that provides the 2D dynamic visual servoing, generating in real time the correct movements for the quadrotor in order to perform the tracking of the vehicle; and the loop that ensures the asymptotic convergence of the desired trajectory, with a given degree of robustness, using flatness control.

Loop 1: Altitude Control
As mentioned in Section 2, the control input, u 1 , is responsible for movement along the Z-axis. By applying the input/output linearization method to the third line of Equation (1), the linearizing control is given by

u 1 = m (Nu z + g) / (cos θ cos φ)     (3)

where Nu z is the new input of the linearized system given by Equation (4):

z̈ = Nu z     (4)

To make the altitude, z(t), track the desired altitude, z d (t), we just take the new input as follows:

Nu z = z̈ d + k 11 (ż d − ż) + k 12 (z d − z)     (5)
The coefficients k 11 and k 12 are chosen so that the polynomial p² + k 11 p + k 12 is a Hurwitz polynomial.
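As an illustration, the altitude loop can be sketched numerically. The snippet below assumes the zero-drag vertical dynamics z̈ = (cos θ cos φ / m) u₁ − g and uses illustrative values for the mass and the Hurwitz gains; it sketches the structure of Loop 1, not the paper's implementation:

```python
import numpy as np

# Sketch of the altitude loop (Loop 1). Gains k11, k12 make
# p^2 + k11 p + k12 Hurwitz (here a double pole at -2, so (p + 2)^2).
m, g = 1.0, 9.81          # illustrative mass [kg] and gravity [m/s^2]
k11, k12 = 4.0, 4.0       # (p + 2)^2 = p^2 + 4p + 4

def altitude_control(z, zdot, zd, zd_dot=0.0, zd_ddot=0.0, theta=0.0, phi=0.0):
    """Input/output linearizing altitude control u1."""
    nu_z = zd_ddot + k11 * (zd_dot - zdot) + k12 * (zd - z)   # new input Nu_z
    return m * (nu_z + g) / (np.cos(theta) * np.cos(phi))     # linearizing law

# Simple Euler simulation: climb from z = 0 to the desired z_d = 10 m.
z, zdot, zd, dt = 0.0, 0.0, 10.0, 0.001
for _ in range(int(10.0 / dt)):          # 10 s of flight
    u1 = altitude_control(z, zdot, zd)
    zddot = u1 / m - g                   # vertical dynamics at theta = phi = 0
    zdot += zddot * dt
    z += zdot * dt
print(round(z, 3))
```

With both poles at −2, the altitude error decays well within the simulated 10 s.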

Loop 2: Trajectory Generation
By using a vertical camera stabilizer, the image plane will always be parallel to the (X w , Y w ) plane of the Cartesian world coordinate system, as shown in Figure 2.

We denote (x, y, z, θ, φ, ψ) as the quadrotor coordinates and (x c , y c , z c , θ c , φ c , ψ c ) as the camera coordinates. So, we have x c = x; y c = y; z c = z; ψ c = ψ; θ c = 0; and φ c = 0. We assume that the quadrotor flies at a given constant altitude, z d , and we impose that the orientation of the quadrotor is the same as that of the vehicle to be tracked (the orientation is managed by the yaw angle, ψ c = ψ). The translation movements along the X w - and Y w -axes as well as the rotation about the Z w -axis of the quadrotor are independent. In other words, to go from an initial situation to a final situation, there is an infinity of possible trajectories. We are therefore faced with an under-determined problem: we have three variables to determine, (x d , y d , ψ d ), using only two equations (the coordinates of the point, P, in the image plane). To remedy this problem, we choose a trajectory that connects the two situations in a way similar to that carried out by a differential mobile robot. This choice has legitimacy since we aim to track a car-type vehicle. We can therefore assimilate the behavior of the camera on board the quadrotor to that of a differential mobile robot that moves in the plane parallel to the plane (X w , Y w ), located at a distance, z d , from this plane, and rotating about the axis Z w by an angle, ψ, according to the following dynamics:

ẋ r = υ r cos ψ;  ẏ r = υ r sin ψ;  ψ̇ = ω r     (6)

Appl. Sci. 2023, 13, 7005
where ẋ r and ẏ r are the translation velocities along the X w - and Y w -axes of the robot, and υ r and ω r are, respectively, the linear speed and the angular speed of the robot.
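The trajectory-generation idea can be sketched by integrating these differential-robot dynamics: feeding the commands (υ r , ω r ) through the model yields the motions (x d , y d , ψ d ) the quadrotor must realize. The speeds and time step below are arbitrary illustrative values:

```python
import math

# Differential-robot dynamics used for trajectory generation:
#   x' = v cos(psi),  y' = v sin(psi),  psi' = w
def integrate_unicycle(x, y, psi, v_r, w_r, dt):
    x += v_r * math.cos(psi) * dt
    y += v_r * math.sin(psi) * dt
    psi += w_r * dt
    return x, y, psi

# Constant speeds trace a circular arc of radius v_r / w_r = 2 m.
x, y, psi, dt = 0.0, 0.0, 0.0, 0.01
for _ in range(1000):                          # 10 s of motion
    x, y, psi = integrate_unicycle(x, y, psi, v_r=1.0, w_r=0.5, dt=dt)
print(round(psi, 2))                           # heading = 0.5 rad/s * 10 s
```

The choice of this unicycle structure is what removes the under-determination noted above: the heading ψ is tied to the direction of the translational motion.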

Characteristics of the Descriptor
The tracking problem using a camera as a visual sensor is a 2D dynamic visual servoing problem. In this case, tracking is guaranteed if we manage to keep the vehicle (the target) centered in the image plane, as shown in Figure 3.

According to the behavior of the dynamics proposed in Equation (6), the knowledge of the displacements (∆x r , ∆y r ) along the axes X w and Y w makes it possible to deduce the orientation, ψ, about the Z w -axis. In effect,

tan ψ = ∆y r / ∆x r     (7)

Since we know z c = z = z d , we can deduce the displacements (∆x r , ∆y r ) from the coordinates of a single point, P, of the vehicle (the target). This point can be chosen arbitrarily, belonging to the vehicle but other than the one that coincides with the center of projection of the camera, because the latter is invariant to the rotation about the axis Z w .
It is interesting here to take the point, P, on the horizontal axis, H, of the image (Figure 3) passing through the center of projection, o, because all the points located on this axis are invariant to displacement along Z w . To simplify the extraction task, this point can represent the center of gravity of the vehicle or even the center of gravity of the vehicle's hood, as shown in Figure 3. Let X = (X c , Y c , Z c = z d = constant) be the coordinates of the point, P, in the 3D Cartesian coordinate system; the projection of this point onto the image plane is the point p with coordinates (x m , y m ), expressed in millimeters. The visual information considered in this work is S = (x m , y m ). The expressions of these coordinates are given by the following relations:

x m = f X c / Z c = (u − c u ) / α u ;  y m = f Y c / Z c = (v − c v ) / α v     (8)

where (u, v) represents the coordinates of the point p in the image, expressed in pixels, and a = (c u , c v , f , α u , α v ) is the set of intrinsic parameters of the camera, with (c u , c v ) the coordinates of the principal point of the image, f the focal length, and (α u , α v ) the horizontal and vertical scale factors expressed in pixel/mm. By differentiating the projection equations in (8) with respect to time, we obtain

Ṡ = L s V     (9)

where V is the kinematic torsor of the camera, formed by the translation velocities, v c , and the rotation velocities, ω c . L s denotes the interaction matrix, also known as the image Jacobian, and is given by Equation (10). Since the movement of the robot is assumed to be planar and, using Equation (7), we can conclude that if we know the translational speed along the X c -axis and the rotational speed about the Z c -axis, we can deduce the translation speed along the Y c -axis. The interaction matrix can then be reformulated as in Equation (11), and Equation (9) can be rewritten in the form of Equation (12).
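A minimal sketch of this projection model: the point P at constant depth Z c = z d is projected to metric image coordinates (x m , y m ) and then to pixels via the intrinsic parameters. The numeric intrinsics below are illustrative assumptions, not values from the paper:

```python
# Pinhole projection of a point at constant depth Z_c = z_d.
# All intrinsic values here are illustrative, not taken from the paper.
f = 0.008          # focal length [m] (8 mm)
au, av = 1e5, 1e5  # scale factors [pixel/m]
cu, cv = 320, 240  # principal point [pixel]
zd = 10.0          # camera altitude = point depth [m]

def project(Xc, Yc, Zc=zd):
    xm = f * Xc / Zc              # metric image coordinates
    ym = f * Yc / Zc
    u = cu + au * xm              # pixel coordinates
    v = cv + av * ym
    return (xm, ym), (u, v)

(xm, ym), (u, v) = project(2.5, -1.25)
print(xm, ym, u, v)
```

Because Z c is fixed by the altitude loop, the mapping between ground displacements and image displacements is a simple scaling by f / z d , which is what lets a single point carry enough information.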

Creation of the Trajectory
It should be remembered that our objective is to carry out vehicle (target) tracking using a camera on board a quadrotor UAV. This problem can be converted into a problem of asymptotic tracking of a desired trajectory, in the image plane, of the point p resulting from the projection of the point P belonging to the vehicle. Let (x*(t), y*(t)) be this desired trajectory, as shown in Figure 2. Using Equation (12), the two mobile robot control inputs are given by Equation (13). Assuming that the point p does not coincide with the center of projection (i.e., x m ≠ 0), and calculating the inverse of the interaction matrix, we get Equation (14). In this case, we have an invertible relationship between outputs and inputs. We use the exact linearization presented by Hagenmeyer and Delaleau in [38]. The resulting linearized system is equivalent to a chain of integrators of the form

ẋ m = ϑ x ;  ẏ m = ϑ y     (15)
where ϑ x and ϑ y are the two auxiliary control inputs to be specified, which ensure the asymptotic tracking of the desired trajectory:

ϑ x = ẋ* + k 1 (x* − x m );  ϑ y = ẏ* + k 2 (y* − y m )     (16)

The control law of the mobile robot is finally obtained by substituting these auxiliary inputs into Equation (14). The gains k 1 and k 2 are chosen so that the error dynamics are asymptotically stable; in this case, it suffices to take k 1 > 0 and k 2 > 0, which ensures asymptotic pursuit of the desired trajectory (x*, y*). The variables (x*, y*, ẋ*, ẏ*) represent the desired metric position and velocity, in the image plane, of the point P, while (x m , y m ) are the measured metric coordinates of the tracked point in the image plane. Once the two commands (υ r , ω r ) ensuring the asymptotic tracking of the point P of the vehicle (target) are defined, we can deduce, using Equation (6), the necessary movements (x d , y d , ψ d ) that the quadrotor must achieve to ensure the vehicle tracking.
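Structurally, this tracking law maps the auxiliary inputs through the inverse of the reduced interaction matrix. The sketch below uses a placeholder invertible 2×2 matrix L in place of the paper's Equation (11), so it only illustrates the shape of the computation v = L⁻¹ϑ with ϑ = ṡ* + K(s* − s):

```python
import numpy as np

# Image-plane tracking law: auxiliary inputs theta = s*' + K (s* - s)
# mapped through the inverse of the (reduced) interaction matrix to get
# the robot commands (v_r, w_r). L below is an illustrative placeholder
# with the right shape, NOT the matrix of the paper's Equation (11).
k1, k2 = 2.0, 2.0

def tracking_law(s, s_star, s_star_dot, L):
    e = s_star - s                          # image-plane error
    theta = s_star_dot + np.diag([k1, k2]) @ e
    return np.linalg.solve(L, theta)        # (v_r, w_r) = L^{-1} theta

L = np.array([[-0.8, 0.1],                  # illustrative invertible matrix
              [0.0, -0.5]])
s = np.array([0.002, -0.001])               # current feature (x_m, y_m)
s_star = np.array([0.0, 0.0])               # desired: target centered
cmd = tracking_law(s, s_star, np.zeros(2), L)
print(cmd.shape)
```

The invertibility condition mirrors the x m ≠ 0 assumption above: when the reduced interaction matrix loses rank, the commands are no longer well defined.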

Loop 3: Flatness-Based Tracking Control
In this subsection, we propose a control by flatness to achieve and ensure an asymptotic convergence of the trajectory generated in Section 3.2.

Flatness Theory
Flat systems theory is an area of research in differential algebra and differential geometry. Differential flatness is introduced as follows [29]: a nonlinear system is given by

ẋ = f (x, u),  x ∈ ℝ n , u ∈ ℝ m     (18)
This system is differentially flat if there exists a vector, F ∈ ℝ m , such that

F = ξ (x, u, u̇, . . . , u (r) )     (19)

whose components are differentially independent, and two functions, η(·) and Γ(·), such that

x = η (F, Ḟ, . . . , F (α−1) )     (20)

u = Γ (F, Ḟ, . . . , F (α) )     (21)

where α and r are finite multi-indices and ξ, η, and Γ are vectors of smooth functions. The vector F that appears in this definition is called the flat output of the system. In other words, a flat system is a system whose state and control variables can be written in terms of this flat output and its derivatives. The open-loop flatness control given by Equation (21) is known as the Brunovsky control because it provides an exact linearization of the system. For a differentially flat system, when the desired trajectory, F d , is known, the desired state, x d , and the desired open-loop control, u d , can be defined as follows:

x d = η (F d , Ḟ d , . . . , F d (α−1) );  u d = Γ (F d , Ḟ d , . . . , F d (α) )     (22)

If the system is naturally stable, it will behave well and follow the desired trajectory. For unstable systems, or to accelerate convergence, it is necessary to add to this open-loop control a small closed-loop correction term to ensure trajectory tracking.
In this work, a closed-loop flatness control is proposed. We denote it by FTC: flatness-based tracking control. This control contains two parts: the open-loop control given by Equation (21) and a loop term, ϑ, which represents a linear control capable of stabilizing the obtained linearized system. The FTC is given as follows:

u = Γ (F, Ḟ, . . . , F d (α) + ϑ)     (23)

where ϑ(t) represents the new command. When ∂Γ(·)/∂F (α) is locally invertible, this leads to the following decoupled system:

F (α) = F d (α) + ϑ     (24)

Let K(p) = diag( ∑ i=0..α−1 k i p i ) be a diagonal matrix whose elements are polynomials with negative-real-part roots; choosing ϑ accordingly allows for asymptotic trajectory tracking, with lim t→∞ (F d (t) − F(t)) = 0.

Control Strategy
As shown in Figure 1, the control used to ensure the realization of the movements necessary for the quadrotor to satisfy the tracking of the vehicle (target) is based on differential flatness. This control uses an open-loop control that linearizes the system and a closed-loop correction term that ensures the asymptotic convergence of the desired trajectory, even in the presence of disturbances. Here, we replace the command u 1 of Equation (3) in the model that describes the dynamics of the quadrotor (Equation (1)), which yields Equation (28). We prove that this system is flat and has the flat outputs F 1 = z; F 2 = x; F 3 = y; F 4 = ψ. Indeed, using the first and second lines of Equation (28), we can express the variables θ and φ in terms of the flat outputs (Equation (29)).
The control expressions can then be written in terms of the flat outputs and their derivatives (Equation (30)).
We have just expressed all the variables of the system as a function of the dynamics of (z, x, y, ψ). The system of Equation (28) is therefore flat and has as flat outputs F 1 = z; F 2 = x; F 3 = y; F 4 = ψ. To achieve the desired trajectory (x d , y d , ψ d ) generated by the trajectory-generation block, and using Equation (30), we can deduce the open-loop control ensuring this desired trajectory (Equation (31)).
Until now, flatness has been used to calculate the commands corresponding to the open-loop trajectories of the system. If the system is intrinsically stable, it will behave appropriately and pursue the desired trajectory. For unstable systems, or to accelerate convergence, a small closed-loop correction term must be added to this open-loop control to guarantee trajectory tracking. To generate this correction term, we will consider some simplifying assumptions. It should be noted that these assumptions relate only to the development of a correction term, which is considered in the vicinity of the desired trajectory.
When the quadrotor joins the desired trajectory, we can assume that the angles θ, φ, and ψ become small. The expressions for the second derivatives of θ and φ are then given by Equation (32). By using the theorem given in [39], which neglects all terms of the polynomial equation of degree greater than four, Equation (32) becomes Equation (33). Assuming the quadrotor reaches its desired altitude (z − z d = 0), the control expressions are then given by Equation (34). The expressions of the closed-loop control laws (FTC), which ensure asymptotic convergence toward the desired trajectory even in the presence of disturbances, are given by Equation (35), with e i = F id − F i (i = 2, 3, 4), and the k ij values are deduced using the pole placement technique.
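The pole-placement step for the PD gains can be made concrete: each error channel obeys ë i + k i1 ė i + k i2 e i = 0, so choosing a double pole at −λ fixes both gains. A minimal sketch (λ = 3 is an arbitrary illustrative choice):

```python
import numpy as np

# PD-correction gains of the FTC by pole placement: the error dynamics
# e'' + ki1 e' + ki2 e = 0 with a double pole at -lam give
# ki1 = 2*lam and ki2 = lam**2.
def pd_gains(lam):
    """Gains placing a double pole of the error dynamics at -lam (lam > 0)."""
    return 2.0 * lam, lam ** 2

ki1, ki2 = pd_gains(3.0)
print(ki1, ki2)                           # 6.0 9.0

# Check: the roots of p^2 + ki1 p + ki2 are both at -3.
roots = np.roots([1.0, ki1, ki2])
print(sorted(roots.real))
```

Larger λ speeds up convergence but amplifies the effect of measurement noise and disturbances, which is the usual trade-off when tuning the correction term.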

Simulation Results
To demonstrate the effectiveness of our proposed control strategy, we divide this section into two parts. In the first part, we exclusively test the algorithm responsible for generating the necessary movements of the quadrotor based on its dynamics, which are similar to those of a differential robot. It is essential to assess its ability to keep the target in the center of the image plane. Then, in the second part, we integrate this algorithm with the other control loops using the complete model of the quadrotor. The tool used to perform the simulations is the RVCTOOLS library from MATLAB 9.2 R2017b.

Proposed Algorithm Performance Related to the FOV Constraint
Since we are in a simulated environment, in order to generate the displacements necessary for the quadrotor to ensure the pursuit of a vehicle (target), we must propose a model for this vehicle. This model will be used to generate the trajectory of a point in the image plane with variable dynamics. We seek to prove that the proposed approach generates a trajectory for the quadrotor that fully copies the dynamics of the vehicle (target) without a priori knowledge of these dynamics. The only knowledge available is the instantaneous position (as well as the history of this position), in pixels, of the point p of the vehicle (target). This vehicle is a car-type vehicle, as described in Figure 4. We assume that this vehicle is located just below the quadrotor and that it starts to move according to the following kinematic model, where υ v is the linear speed of the vehicle, given by υ v = (r/4)(ω 1 + ω 2 ), and ω v is the angular speed of the vehicle, given by ω v = r (ω 2 − ω 1 ) / (2R).
The position of the vehicle is given by X_v = [x_v, y_v, θ_v]^T. The two control inputs are υ_v and ω_v. (x_v, y_v) are the abscissa and the ordinate of the middle of the axis of the two driving wheels, and θ_v is the orientation of the vehicle. ω_1 and ω_2 are the speeds of the two driving wheels, R is the distance between the two wheels, and r is the diameter of a wheel. The movement of the vehicle is thus managed by the two rotational speeds (ω_1, ω_2) of the driving wheels. In order to achieve movement at variable speed and orientation and to ensure a variation of the linear translational and angular rotational velocities, we have chosen time-varying profiles for (ω_1, ω_2) over t ∈ [0, 120 s].

In order to facilitate the detection of the descriptor point of the target object (the vehicle), we have chosen the center of gravity of the vehicle's hood. Our objective is to keep the vehicle (target) centered in the image plane of the quadrotor camera throughout its movement. So that the coordinates of this point are not strongly affected by displacements along the Z-axis, it is preferable that this point lie on the horizontal axis that passes through the center of projection, without coinciding with the center of projection itself. Indeed, the center of projection is invariant with respect to rotation about the Z-axis. The desired image that must be satisfied throughout the movement of the vehicle is given in Figure 5.

The simulation results are given in Figure 6. Figure 6a-d illustrate the displacement along the X- and Y-axes as well as the orientation about the Z-axis, both real and generated by the visual servo loop. Figure 6e-g show the linear translational velocities and the angular rotational velocities, respectively. It is clear that despite variations in vehicle speed, the desired trajectory is followed faithfully. Figure 6h shows the continuous detection of point p in the image plane along the trajectory: the object always remains in the field of view of the camera. In a practical situation, it would also be possible to adjust the altitude of the quadrotor to widen the field of view, because our method allows this variable (the altitude) to be controlled independently of the others.
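The choice of descriptor point can be checked with a minimal pinhole-camera sketch (Python rather than the paper's MATLAB; the focal length and principal point below are arbitrary illustrative values): a point on the optical axis through the center of projection keeps the same pixel coordinates under any yaw rotation, whereas an off-axis point does not.

```python
import math

def project(X, Y, Z, f=800.0, cu=320.0, cv=240.0):
    """Pinhole projection of a camera-frame point onto the image plane (pixels)."""
    return (cu + f * X / Z, cv + f * Y / Z)

def yaw(X, Y, Z, psi):
    """Rotate a camera-frame point by psi about the optical (Z) axis."""
    c, s = math.cos(psi), math.sin(psi)
    return (c * X - s * Y, s * X + c * Y, Z)
```

This is why the descriptor point must lie near, but not exactly on, the axis through the center of projection: exactly on it, the projection carries no information about the yaw rotation.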


Global Tracking Strategy's Performance
To validate the effectiveness of the global tracking strategy, we consider the following experiment. The quadrotor takes off to reach a given altitude. The vehicle (target) is located just below the quadrotor. At a given moment, it begins to move with variable dynamics. As mentioned above, to ensure the pursuit of the vehicle, we have built three servo loops. In this section, we detail the simulation results of each loop.
We include in this experiment the fact that the quadrotor cannot maintain a constant altitude throughout its flight. We choose the descriptor point P such that its projection on the image plane is located on the horizontal axis H that passes through the center of projection; this point is therefore less affected by displacement along the Z-axis. We impose a variable altitude given by the following expression: z_d(t) = 0.1 sin(0.04πt) + 5 for t ∈ [0, 200 s]. The control law responsible for altitude control is given by Equations (3) and (5). The parameters of Equation (5) are chosen following experimental tests: k_11 = 10; k_12 = 25. The simulation results are given in Figure 7.
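The altitude loop can be sketched as follows. Equations (3) and (5) are not reproduced here, so this Python fragment (the paper uses MATLAB) stands in with a generic double-integrator altitude model under feedforward-plus-PD control, reusing the gains k_11 = 10 and k_12 = 25 and the imposed profile z_d(t).

```python
import math

K11, K12 = 10.0, 25.0   # gains from the experimental tuning above
DT, T_END = 0.01, 20.0  # integration step and horizon (illustrative)

def z_ref(t):
    """Imposed altitude profile z_d(t) = 0.1 sin(0.04*pi*t) + 5."""
    return 0.1 * math.sin(0.04 * math.pi * t) + 5.0

def zdot_ref(t):
    return 0.1 * 0.04 * math.pi * math.cos(0.04 * math.pi * t)

def zddot_ref(t):
    return -0.1 * (0.04 * math.pi) ** 2 * math.sin(0.04 * math.pi * t)

def simulate(z0=0.0, zdot0=0.0):
    """Track z_d(t) with a double-integrator altitude model (stand-in for Eqs. (3) and (5))."""
    z, zdot, t = z0, zdot0, 0.0
    while t < T_END:
        # feedforward acceleration + PD correction on the altitude error
        u = zddot_ref(t) + K12 * (zdot_ref(t) - zdot) + K11 * (z_ref(t) - z)
        zdot += u * DT
        z += zdot * DT
        t += DT
    return z, z_ref(t)
```

Starting from ground level (an initial error of 5 m), the altitude converges to the sinusoidal reference well before the end of the horizon, consistent with the tracking shown in Figure 7a.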
Figure 7a shows the desired altitude and the altitude achieved by the quadrotor. We can clearly see that the quadrotor ensures the tracking of this altitude. Figure 7b shows the total force responsible for the displacement along the Z-axis; it is a continuous and physically realizable command. In order to achieve movement at variable speed and orientation, we have chosen (ω_1, ω_2) as follows: ω_1(t) = 20 sin(0.00026πt) + 20 and ω_2(t) = 20 sin(0.0002πt) + 20, for t ∈ [20, 200 s].

The 2D dynamic visual servoing is provided by Equations (16) and (17). The coefficients of Equation (17) are given by k_1 = 28 and k_2 = 28. To realize the trajectory thus generated, a flatness-based control is proposed. The gain values that govern the dynamics of the errors are given in Table 1. The simulation results are given in Figure 8.
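Equations (16) and (17) are not reproduced in this excerpt; as a hedged illustration of what gains of this magnitude do, the Python sketch below (names and the error model are ours, not the paper's) imposes the generic second-order error dynamics e″ = −k_1 e′ − k_2 e on a pixel error, with k_1 = k_2 = 28.

```python
K1, K2 = 28.0, 28.0  # coefficients used for Equation (17)
DT = 0.005           # integration step (illustrative)

def settle(e0, edot0=0.0, t_end=10.0):
    """Drive a pixel error to zero under e'' = -K1*e' - K2*e (illustrative stand-in)."""
    e, edot, t = e0, edot0, 0.0
    while t < t_end:
        eddot = -K1 * edot - K2 * e
        edot += eddot * DT
        e += edot * DT
        t += DT
    return e
```

Both roots of s² + 28s + 28 are real and negative, so an initial error of tens of pixels decays smoothly to zero without oscillation, consistent with the error evolution reported in Figure 8d,e.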
Figure 8a-c represent, respectively, the actual trajectory (displacement along X, displacement along Y, and orientation about the Z-axis) performed by the vehicle, the trajectory generated by the visual servo loop, and the trajectory produced by the quadrotor using flatness control. We notice that the visual servoing algorithm generates a trajectory faithful to the real trajectory carried out by the vehicle (target), and that the flatness control ensures exact tracking of the generated trajectory. The evolution of the error, in pixels, between the position of the point p in the image plane and the desired point is given in Figure 8d,e. The evolution of the roll angle and the pitch angle is given in Figure 8f,g; these two variables remain sufficiently small throughout the trajectory tracking.

Conclusions
In this paper, we have presented a new dynamic image-based visual servoing method. We proposed to solve the problem of the pursuit of a car-type vehicle by a quadrotor UAV. Under the specific conditions of this application, we have demonstrated that a single point of the target object suffices to perform the dynamic visual servoing task. This contribution aims to reduce the computation time of the quadrotor control law. To circumvent the problem of controlling an under-actuated system and to achieve the displacements generated by the visual servoing algorithm, a new flatness-based control algorithm has been integrated. The simulation results show the effectiveness of the proposed method, which provides an attractive solution to the problem of vehicle tracking by a quadrotor UAV using an onboard camera. For a practical implementation, it is advisable to pair the method with an additional algorithm that enables the precise selection of the target vehicle from among the other automobiles present; to this end, we suggest a classification algorithm based on artificial intelligence. In another context, the proposed method can be extended to solve the planning problem in the image plane for the 2D visual servoing of a quadrotor. This remains a challenge and would solve many problems related to the integration of vision in UAVs.



Figure 6.
Figure 6. Performances related to the FOV constraint. (a-d) Displacement along the X- and Y-axes as well as the orientation about the Z-axis, both real (in red) and generated by the visual servo loop (in blue). (e-g) Linear translational velocities and angular rotational velocities. (h) Continuous detection of point P (target point) in the image plane.


Figure 8.
Figure 8. Simulation results. (a) Displacement along X of the target trajectory (red), the visual servo-based trajectory (gray), and the flatness-based quadrotor trajectory (blue). (b) Displacement along Y of the target trajectory (red), the visual servo-based trajectory (gray), and the flatness-based quadrotor trajectory (blue). (c) Orientation along Z (yaw angle) of the target trajectory (red), the visual servo-based trajectory (gray), and the flatness-based quadrotor trajectory (blue). (d) Error evolution in pixels along u. (e) Error evolution in pixels along v. (f) The evolution of the roll angle. (g) The evolution of the pitch angle.

Table 1.
Gain values associated with the dynamics of the errors.
