An Innovative Collision-Free Image-Based Visual Servoing Method for Mobile Robot Navigation Based on Path Planning in the Image Plane

In this article, we present an innovative approach to 2D visual servoing, also known as image-based visual servoing (IBVS), aiming to guide an object to its destination while avoiding collisions with obstacles and keeping the target within the camera's field of view. Our method relies solely on the visual data of a single monocular sensor. The fundamental idea is to manage and control the dynamics associated with any trajectory generated in the image plane. We show that the differential flatness of the system's dynamics can be used to parameterize arbitrary trajectories by the coordinates of a limited number of object points in the image plane, creating a link between the current configuration and the desired configuration. The number of required points depends on the number of control inputs of the robot used and determines the dimension of the flat output of the system. For a two-wheeled mobile robot, for instance, the coordinates of a single point on the object in the image plane are sufficient, whereas, for a quadcopter with four rotors, the trajectory needs to be defined by the coordinates of two points in the image plane. By guaranteeing precise tracking of the chosen trajectory in the image plane, we ensure that collisions with obstacles and departures from the camera's field of view are avoided. Our approach is based on the principle of the inverse problem: once any point on the object is selected in the image plane, it will not be occluded by obstacles or leave the camera's field of view during movement. Admittedly, proposing an arbitrary trajectory in the image plane can lead to non-intuitive movements (back and forth) in the Cartesian plane; during backward motion, the robot may collide with obstacles as it navigates without direct vision. Therefore, it is essential to perform optimal trajectory planning that avoids backward movements.
To assess the effectiveness of our method, our study focuses exclusively on the challenge of implementing the generated trajectory in the image plane within the specific context of a two-wheeled mobile robot. We use numerical simulations to illustrate the performance of the control strategy we have developed.


Introduction
In the context of 2D visual servoing, also known as image-based visual servoing (IBVS), one of the major challenges we face is obstacle avoidance. Our main objective is to use visual data from a 2D camera to control the movement of a robotic system, with the image the camera captures serving as our central reference point. The complexity of the task arises when we need to guide the robot to a specific destination while avoiding possible obstacles that may be in its path. This requires real-time analysis of visual data to detect the presence of obstacles, estimate their position and trajectory, and then adjust the robot's path accordingly while ensuring that the visual target remains within the camera's field of view (FOV). This task requires a combination of skills in computer vision, trajectory planning in the Cartesian plane, and control to ensure both environmental safety and mission accomplishment. In summary, obstacle avoidance in the context of 2D visual servoing is an essential and challenging task that requires an innovative and effective approach to ensure the success of robotic applications.
In many research studies, various approaches have been explored to solve this complex problem. For example, in [1], navigation is based on vanishing points to guide a quadcopter through corridors without colliding with walls. Other researchers, such as [2][3][4][5][6], have used different types of sensors, including stereo, monocular, and RGB-D cameras, to obtain three-dimensional information about the environment. They have also implemented techniques such as simultaneous localization and mapping (SLAM) to reconstruct maps of the environment, which are then used for motion planning and obstacle avoidance. Stereo cameras have also been used, as seen in the works of [7][8][9], combining visual, depth, and inertial information for navigation and obstacle avoidance. Studies like [10][11][12][13] have explored optical flow to find obstacles and estimate the speed difference between the robot and the obstacles. This is followed by trajectory planning and adaptive control to achieve navigation tasks.
In parallel, some studies have focused on the "teach and repeat" strategy, where a set of key images of the environment is captured, stored, and ordered for the robot's navigation. This allows the robot to follow the same path multiple times, with examples such as [14][15][16][17][18][19]. A learning phase is required to capture the environment's topology from these key images.
In a recent study, the authors of [20] proposed a hierarchical visual servo control scheme for visual trajectory tracking of quadcopters in indoor environments. The control scheme is capable of handling three tasks, taking into account a hierarchy order: first, the collision avoidance task, then the visibility task, and finally, the visual task.
Path planning and tracking in the control loop can solve the above issues of IBVS and deal with complex scenes, including obstacles. Ideally, the idea is to generate a trajectory in the image plane and force the robot to follow it. According to [21], this is an extremely complex problem: to date, no control law generated, in a deterministic way, from a trajectory imposed in the image plane has been proposed. Despite this, many efforts are being made. In [21], when the target is very far from the robot, the visual servoing algorithm may diverge. As a solution, the authors create a sequence of images between the image captured at the initial instant and the desired (final) image. They use a trajectory in 3D Cartesian space, which they project into the image plane; the trajectory is generated using the potential field method. Refs. [22,23] proposed a driving assistance system in which a trajectory in the image plane is generated; the outcomes of these two projects are used in a remote operation. The works presented in [24][25][26][27][28][29] seek to find a feasible trajectory in Cartesian space (3D) while avoiding obstacles and guaranteeing the visibility of the object in the image plane. An estimation of the pose of the robot and a priori knowledge of the obstacles are necessary. Once the trajectory is generated, an IBVS control algorithm called the Image-Based Tracker algorithm is used to follow this trajectory. In [30], a 2D visual servoing method without camera calibration, based on homographic projection, is proposed. This method uses another form of the interaction matrix based on the homographic projection on a horizontal plane. The control law resembles that of classical visual servoing, with the interaction matrix replaced by the proposed matrix. Since the proposed method does not allow direct control of the robot's motion, leading to undesirable movements, Ref. [30] suggested adding a trajectory optimization method to the homographic projection.
The idea is to generate a feasible trajectory in the projective homographic space. Subsequently, the large initial error is divided into small errors used to follow the discretized trajectory, and the optimization of the trajectory is performed on the image plane. Ref. [31] attempted to determine an optimal trajectory using dynamic programming. The possible trajectories are divided into several smaller trajectories, and a cost function is integrated to penalize undesirable sub-trajectories. Constraints are added on the borders of the image (to guarantee the visibility of the scene), on the limits of the control, and on the limits of the joints of the robot. In this work, the authors did not consider the presence of obstacles; handling them would require a priori knowledge of their location in the Cartesian coordinate system. A navigation algorithm using visual memory is proposed in [32]. A keyframe topology map is generated from the movement of a camera over a model of the real scene. During navigation, a sequential comparison of stored images with captured images is performed. The desired trajectory here is defined by a number of waypoints, each taking the form of a desired intermediate image. No direct planning on the image plane is performed.
In this paper, we propose a new method of path planning and tracking in the image plane in order to perform visual servoing of a differential mobile robot using the concept of differential flatness. For the first time in the field of visual servoing based on path planning, it is possible to generate any trajectory in the image plane and ensure its tracking while guaranteeing the controllability of the robot as well as mechanical feasibility, high tracking accuracy, and control of the position, velocity, and acceleration along any desired trajectory, and consequently of the robot.
We show that using the differential flatness of the system's dynamics lets us parameterize arbitrary trajectories by the coordinates of a limited number of object points in the image plane, creating a link between the current configuration and the desired configuration. The number of required points depends on the number of control inputs of the robot being used and determines the dimension of the flat output of the system. For a two-wheeled mobile robot, for instance, the coordinates of a single point on the object in the image plane are sufficient, whereas, for a quadrotor with four rotating motors, the trajectory needs to be defined by the coordinates of two points in the image plane. By ensuring precise tracking of the chosen trajectory in the image plane, we ensure that collision problems with obstacles and leaving the camera's field of view are avoided. Our approach is based on the inverse problem principle, meaning that when any point on the object is selected in the image plane, it will in no way be occluded by obstacles or exit the camera's field of view during movement. It is true that proposing any trajectory in the image plane can result in non-intuitive movements (back and forth) in the Cartesian plane. In the case of backward movement, the robot may collide with obstacles as it navigates without direct vision. Therefore, it is essential to perform optimal path planning that avoids backward movements. To evaluate the effectiveness of our method, we will focus on the specific case of a two-wheeled mobile robot. Numerical simulations are presented to demonstrate the efficiency of the proposed control strategy.
The following points outline the major contributions of this work: The proposed trajectory planning algorithm includes, implicitly, a permanent guarantee of target visibility and thus the impossibility of encountering an obstacle, while ensuring robot controllability. The problem of a distant target no longer arises. The FOV is guaranteed even in the presence of obstacles using only a single monocular sensor.
The integration of multiple sensors may require complex data fusion algorithms to combine information from different sources.With a single sensor, software complexity is reduced, which can simplify system development and maintenance.
In the case of a mobile robot, the trajectory of a single point on the target object is sufficient. Any curve in the image plane between the two boundary points (initial and desired) can be ensured. Using a single point on the object as a descriptor provides greater robustness to the detection and recognition phases. Since this trajectory uses a certain number of points (called collocation points), we can introduce the concept of time between the points. This implies that we impose a dynamic on this trajectory (in position and speed) and thus a dynamic on the robot.
The imposed trajectory is physically feasible thanks to the concept of differential flatness (there is equivalence between differential flatness and controllability).
No a priori knowledge of the 3D environment is necessary. The proposed algorithm is able to ensure robust tracking of the imposed trajectory.
The time allowed to follow this trajectory can be fixed (within the limits of the robot's physical constraints).
The paper's structure is as follows: Section 2 exposes the problem formulation. In Section 3, we present the general control law design, including the flatness model, path planning, and the path tracking process. The synthesis of the mobile robot control law is detailed in Section 4. Finally, Section 5 discusses simulation results that prove the effectiveness of the proposed method.

Problem Formulation



Differential Mobile Robot
The mobile robot considered in this work is of the unicycle type shown schematically in Figure 1. This robot is equipped with two independently controlled drive wheels and a freewheel, ensuring its stability. (x_r, y_r) are the midpoint coordinates between the two driving wheels, θ is the orientation of the robot, ω_1 and ω_2 are the velocities of the two driving wheels, l is the distance between the two driving wheels, and L is the diameter of a drive wheel. It is assumed that the robot movement occurs without sliding. The robot kinematic model can be expressed as follows:

ẋ_r = υ_r cos θ,  ẏ_r = υ_r sin θ,  θ̇ = ω_r,

where υ_r is the linear speed of the robot, given by υ_r = (L/4)(ω_1 + ω_2), and ω_r is the angular velocity of the robot, expressed by ω_r = (L/(2l))(ω_2 − ω_1). The mobile robot state is given by q = [x_r, y_r, θ]^T, and the two control inputs are υ_r and ω_r.
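To make the model concrete, the following minimal sketch integrates the kinematic model above with a forward-Euler scheme; the time step and velocity values are illustrative assumptions, not taken from the paper.

```python
import math

def unicycle_step(x_r, y_r, theta, v_r, w_r, dt):
    """One forward-Euler step of the unicycle kinematics:
    x_r' = v_r*cos(theta), y_r' = v_r*sin(theta), theta' = w_r."""
    return (x_r + v_r * math.cos(theta) * dt,
            y_r + v_r * math.sin(theta) * dt,
            theta + w_r * dt)

# Illustrative run: drive straight at 0.5 m/s for 1 s (dt = 0.01 s).
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = unicycle_step(*state, v_r=0.5, w_r=0.0, dt=0.01)
x_r, y_r, theta = state   # ends near (0.5, 0.0, 0.0)
```

With a nonzero ω_r, the same loop traces circular arcs, which is the behavior the flatness-based planner exploits later.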

Relations between Coordinate Systems
According to Figure 1, the robot is equipped with a camera. By utilizing the following homogeneous transformation matrix, we can calculate the robot-camera coordinate system transformation, where t_c^r = [t_x, t_y, t_z]^T is the position vector whose components t_x, t_y, t_z are the relative displacements between the robot coordinate system and the camera coordinate system. Since the camera is fixed on the base of the mobile robot, the robot-camera velocity relationship can be expressed as follows: in Equation (5), ζ_c^c = [v_x, v_y, v_z, ω_x, ω_y, ω_z]^T represents the velocity of the camera expressed in the camera coordinate system, ζ_r^r is the velocity of the robot expressed in the robot coordinate system, and s(t_c^r) is the skew-symmetric matrix of the vector t_c^r. υ_r and ω_r are, respectively, the translation and rotation velocities of the robot. By simplifying Equation (5), we obtain Equation (6), where [υ_c ω_c]^T is the camera velocity vector expressed in the camera coordinate system.
The rotational velocities of the two drive wheels are given by the following expression (obtained by inverting the expressions of υ_r and ω_r):

ω_1 = (2υ_r − l ω_r)/L,  ω_2 = (2υ_r + l ω_r)/L.
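A small helper illustrating this wheel-speed mapping; the expressions are derived from the stated definitions of υ_r and ω_r (wheel diameter L, wheel separation l), so the exact convention should be read as an assumption.

```python
def wheel_speeds(v_r, w_r, L=0.1, l=0.3):
    """Invert v_r = (L/4)*(w1 + w2) and w_r = (L/(2*l))*(w2 - w1)
    to recover the two wheel angular velocities (rad/s)."""
    w1 = (2.0 * v_r - l * w_r) / L
    w2 = (2.0 * v_r + l * w_r) / L
    return w1, w2

w1, w2 = wheel_speeds(v_r=0.5, w_r=0.0)   # straight line: w1 == w2
```

Substituting the result back into the definitions of υ_r and ω_r recovers the commanded velocities, which is a quick sanity check on the convention.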


Perspective Transformation Model
Figure 2 shows the perspective transformation model. Let P be any point in space. p represents the projection of P in the 2D plane called π. O is the center of projection. o is the intersection between the optical axis passing through O and the image plane. f is the focal length of the camera. R_c is the camera coordinate system. Let X = (X_c, Y_c, Z_c) be the coordinates of the point P in the 3D Cartesian coordinate system; the projection of this point on the plane π is located at p, whose coordinates (x, y) in the plane π are expressed in mm. These coordinates are given by the following relations:

x = f X_c / Z_c,  y = f Y_c / Z_c.

The pair (u, v) represents the coordinates of the image point p expressed in pixels:

u = c_u + α_u x,  v = c_v + α_v y,

where a = (c_u, c_v, f, α_u, α_v) represents the camera intrinsic parameters: (c_u, c_v) are the coordinates of the principal point, f is the focal distance, and (α_u, α_v) are the vertical and horizontal scale factors expressed in pixel/mm. Taking the time derivative of Equation (8) and using Equation (9), we can connect the velocity of the 3D point P to the velocity of the camera; expanding Equation (10) and combining Equations (9) and (11), we obtain the image-plane velocity of p.
We can deduce the following compact form:

ṡ = L_s V,

where s = (x, y) and L_s is the interaction matrix, commonly known as the image Jacobian. L_s ensures the link between the variation in the position of the point p (and, consequently, the variation of its coordinates in pixels (u, v)) and the velocity of the camera V = [v_x, v_y, v_z, ω_x, ω_y, ω_z]^T. In the interaction matrix expression, Z_c represents the depth of the point P. Any control law using this form of interaction matrix must first estimate the value of Z_c. The intrinsic parameters of the camera are involved in the calculation of x and y. Since the movement of the robot is performed in the 2D plane, with the linear and angular velocities along the Z_c axis and about the Y_c axis, respectively, the interaction matrix can be reformulated by retaining only the two corresponding columns:

L_s = [ x/Z_c    −(f² + x²)/f
        y/Z_c    −x y/f ].
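The reduced matrix can be evaluated numerically as below; the sign convention follows the classical point-feature Jacobian and is an assumption here, and all numeric values are illustrative. Its determinant works out to y·f/Z_c, so the matrix is singular when y = 0, consistent with the controllability issue for points on the horizontal axis discussed in the simulation section.

```python
import numpy as np

def reduced_interaction_matrix(x, y, Z, f):
    """2x2 interaction matrix keeping only the columns associated
    with the camera velocities v_z and w_y (planar robot case)."""
    return np.array([[x / Z, -(f**2 + x**2) / f],
                     [y / Z, -x * y / f]])

# Image-plane velocity induced by v_z = 0.2 m/s and w_y = 0.1 rad/s
Ls = reduced_interaction_matrix(x=0.01, y=0.005, Z=2.5, f=0.008)
s_dot = Ls @ np.array([0.2, 0.1])
```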

Flatness of a Model
A system described by the equation

Ẋ(t) = f(X(t), u(t)),   (16)

where X(t) denotes the state and u(t) denotes the command, is flat if there is a vector z(t) such that

z(t) = h(X(t), u(t), u^(1)(t), …, u^(δ)(t)),   (17)

whose components are differentially independent, and two functions A(.) and B(.) such that

X(t) = A(z(t), z^(1)(t), …, z^(α)(t)),   (18)
u(t) = B(z(t), z^(1)(t), …, z^(β)(t)),   (19)

where α, β, and δ are three finite integers. This formulation refers to the vector z(t) as the flat output of the system. Through the functions A(.) and B(.), this flat output is composed of a set of variables that enables the parameterization of all other system variables: the state, the command, and also the output y(t). Indeed, if the output of the system is defined by a relation of the form y(t) = Ψ(X(t), u(t), …, u^(p)(t)), then the quantities described in (18) and (19) make it possible to affirm that there exists an integer γ such that

y(t) = C(z(t), z^(1)(t), …, z^(γ)(t)).

The flat output groups all of the free (unconstrained) variables of the system, since the components of z(t) are differentially independent. However, we can also consider, on the basis of Equation (17), that the flat output z(t) only depends on the state and the command. This makes it an endogenous variable of the system, in contrast to the state of an observer, which would be an example of an exogenous variable of the observed system. In addition, Lie-Bäcklund's notion of differential equivalence [33] shows that the number of components of z(t) is the same as the number of components of the command:

dim z(t) = dim u(t).

This is a basic property that permits one to know how many free variables must be found in a model to show that it is flat. One of the advantages of the flatness property is that the previous definition is not restricted to state models but applies to any model of the form

Φ(X^(n)(t), …, X^(1)(t), X(t), u^(m)(t), …, u^(1)(t), u(t)) = 0.

This means that we can start directly from the equations that describe how the system works; we do not have to rewrite them as a state-space model.
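As an illustration of these definitions, the unicycle of Section 2 admits the Cartesian position z = (x_r, y_r) as a flat output: the remaining state θ and the inputs (υ_r, ω_r) are recovered from z and its derivatives, which is exactly the role of the functions A(.) and B(.) in Equations (18) and (19). The minimal sketch below is a standard textbook construction (valid only when the linear speed is nonzero), not a formula taken from the paper.

```python
import math

def flat_to_state_and_inputs(xd, yd, xdd, ydd):
    """Recover theta, v_r, w_r from the first and second derivatives
    of the flat output z = (x_r, y_r) of the unicycle."""
    theta = math.atan2(yd, xd)                     # orientation, from A(.)
    v = math.hypot(xd, yd)                         # linear speed, from B(.)
    w = (ydd * xd - xdd * yd) / (xd**2 + yd**2)    # angular speed, from B(.)
    return theta, v, w

# Circular motion x = cos(t), y = sin(t) evaluated at t = 0
theta, v, w = flat_to_state_and_inputs(xd=0.0, yd=1.0, xdd=-1.0, ydd=0.0)
```

Note that dim z = 2 matches the two control inputs, as required by the property dim z(t) = dim u(t).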

Path Planning in the Image Plane
From Equation (19), if we decide to obtain, for the flat system described by Equation (16), the trajectory z_d(t) for t from t_0 to t_f, it is sufficient to apply, on the same time segment, the open-loop control given by

u(t) = B(z_d(t), z_d^(1)(t), …, z_d^(β)(t)).

Assuming a perfect model, we will then have, for t from t_0 to t_f, z(t) = z_d(t), and therefore X(t) = A(z_d(t), …, z_d^(α)(t)) and y(t) = C(z_d(t), …, z_d^(γ)(t)). The only constraint is that the desired trajectory on the flat output must be at least max(α, β, γ) times differentiable on [t_0, t_f]. To ensure the differentiability constraint throughout the entire trajectory, we generally envisage, without being restrictive, piecewise polynomial trajectories, interpolation polynomials, or C^∞ functions, most of the time with continuity conditions at the start and at the arrival. We can also impose passing or reversal points, or avoidance trajectories planned according to possible events.
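One standard way to build such a sufficiently differentiable trajectory is a rest-to-rest quintic between the boundary values; the sketch below (boundary values are illustrative) solves for the coefficients from the six boundary conditions.

```python
import numpy as np

def quintic(t0, tf, z0, zf):
    """Coefficients of a degree-5 polynomial with z(t0)=z0, z(tf)=zf
    and zero velocity and acceleration at both ends."""
    A, b = [], []
    for t, z in ((t0, z0), (tf, zf)):
        A.append([t**k for k in range(6)])                                      # position
        A.append([k * t**(k - 1) if k >= 1 else 0.0 for k in range(6)])         # velocity
        A.append([k * (k - 1) * t**(k - 2) if k >= 2 else 0.0 for k in range(6)])  # acceleration
        b += [z, 0.0, 0.0]
    return np.linalg.solve(np.array(A), np.array(b))

c = quintic(0.0, 10.0, 100.0, 400.0)
z_mid = sum(ck * 5.0**k for k, ck in enumerate(c))   # midpoint of the profile
```

By symmetry the midpoint value is the average of the endpoints, and both boundary derivatives vanish, so the open-loop control of Equation (19) is well defined along the whole segment.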

Asymptotic Trajectory Pursuit
With knowledge of Equation (19), the following command is proposed:

u(t) = B(z(t), z^(1)(t), …, z^(β−1)(t), ϑ(t)),

where ϑ(t) is a new command. When ∂B(.)/∂z^(β) is locally invertible, this leads to the following decoupled system:

z^(β)(t) = ϑ(t).

This result is to be compared to the linearization and decoupling by feedback of nonlinear systems, which are always conditioned by the stability of the zeros of the system [34]. Indeed, we obtain here an unconditional decoupling and linearization (note that this property is at the origin of the choice of the term flatness). However, it is obvious that an additional stabilization loop is necessary; ϑ(t) then becomes

ϑ(t) = z_d^(β)(t) + Σ_{i=0}^{β−1} K_i (z_d^(i)(t) − z^(i)(t)),

where the K_i are diagonal matrices chosen so that the polynomials p^β + Σ_{i=0}^{β−1} K_i p^i have only negative-real-part roots. The command u(t) then becomes

u(t) = B(z(t), z^(1)(t), …, z^(β−1)(t), ϑ(t)),

which makes it possible to ensure an asymptotic trajectory pursuit with

z_d(t) − z(t) → 0 as t → ∞.

As z(t) and all its derivatives are endogenous variables of the process, the feedback u(t) is called an endogenous feedback.
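For β = 1 (the mobile-robot case), the loop reduces to ϑ(t) = ż_d(t) + k(z_d(t) − z(t)) acting on a pure integrator, so the tracking error decays as e^(−kt). A minimal numerical check, with an illustrative gain and a constant reference:

```python
# Pure integrator z' = vtheta under the endogenous feedback
# vtheta = zd' + k*(zd - z), which gives the error dynamics e' = -k*e.
k, dt = 5.0, 1e-3
z, zd = 0.0, 1.0            # constant reference, so zd' = 0
for _ in range(1000):       # simulate 1 s of closed loop
    vtheta = 0.0 + k * (zd - z)
    z += vtheta * dt
err = abs(zd - z)           # close to exp(-5), i.e. well under 1% of e(0)
```

The discrete error contracts by (1 − k·dt) per step, matching the continuous-time exponential decay for small dt.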

Synthesis of the Proposed Mobile Robot Control Law
We showed in [35] that the coordinates of a single point of the object in the image plane, p(x, y), can act as a flat output for the mobile robot. To simplify the problem, we can consider, in Figure 1, the coordinate system of the camera coincident with the coordinate system of the robot. In this case, we can write

[ẋ  ẏ]^T = L_s [υ_r  ω_r]^T.

The two mobile robot control inputs are then obtained by explicitly computing the inverse of the interaction matrix. Utilizing the concept of differential flatness provided by Hagenmeyer-Delaleau in [36], we can achieve an exact linearization. The resulting linearized system is comparable to a system in the following integrator form:

ẋ = ϑ_x,  ẏ = ϑ_y,

where ϑ_x and ϑ_y are the two auxiliary control inputs to be specified, given by Equation (28), which enable the asymptotic pursuit of the planned trajectory. The final form of the control law for the mobile robot is

[υ_r  ω_r]^T = L_s^{−1} [ϑ_x  ϑ_y]^T.

The two auxiliary control inputs ensuring the asymptotic tracking of the desired trajectory are given by

ϑ_x = ẋ* + k_1(x* − x),  ϑ_y = ẏ* + k_2(y* − y).

Let us consider e_x = x* − x and e_y = y* − y. The error dynamics can be written as follows:

ė_x + k_1 e_x = 0,  ė_y + k_2 e_y = 0,

where k_1 and k_2 are chosen so that the error dynamics are asymptotically stable. In this case, it suffices to take k_1 > 0 and k_2 > 0, which ensures an asymptotic pursuit of the desired trajectory (x*, y*).
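Putting the pieces together, one evaluation of the control law might look as follows. This is a sketch: the gains, feature values, and depth are illustrative, and the reduced interaction matrix uses the classical sign convention, which is an assumption; the matrix becomes singular at y = 0, as discussed in the simulation section.

```python
import numpy as np

def ibvs_control(x, y, xs, ys, xs_dot, ys_dot, Z, f, k1=100.0, k2=100.0):
    """Flatness-based IBVS law: auxiliary inputs vtheta for asymptotic
    tracking, mapped to (v_r, w_r) through the inverse of the reduced
    interaction matrix."""
    Ls = np.array([[x / Z, -(f**2 + x**2) / f],
                   [y / Z, -x * y / f]])
    vtheta = np.array([xs_dot + k1 * (xs - x),
                       ys_dot + k2 * (ys - y)])
    v_r, w_r = np.linalg.solve(Ls, vtheta)   # [v_r, w_r] = Ls^{-1} * vtheta
    return v_r, w_r

v_r, w_r = ibvs_control(x=0.010, y=0.005, xs=0.012, ys=0.006,
                        xs_dot=0.0, ys_dot=0.0, Z=2.5, f=0.008)
```

Feeding the returned (v_r, w_r) back through L_s reproduces the auxiliary inputs, which confirms the exact linearization at this operating point.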
Figure 3 shows the proposed control scheme.


Simulation Results
To demonstrate the efficiency of our control algorithm, we propose an arbitrary trajectory in the image plane which connects the initial and final positions of any point P of the object, except those that are situated along the horizontal axis of symmetry of the object. Indeed, since the movement of the mobile robot is conducted in the 2D plane (presented in yellow in Figure 4), the horizontal axis of symmetry of the object remains invariant during the movement of the robot. Consequently, the dynamics of any point situated along the horizontal axis always remain zero or constant (unless the object is centered in the image). This situation contradicts the principle of differential flatness, which asserts that knowledge of the dynamics of the flat output allows for deducing the dynamics of all variables in the system. If the dynamics along the y-axis remain zero or constant, this creates a singularity, also known as a controllability problem. It is important to emphasize that it is possible to select any trajectory connecting the starting point to the endpoint, even by freehand drawing on the screen and using an interpolation algorithm to generate an analytical expression for this trajectory. In our specific example, we choose a polynomial trajectory due to its simplicity in calculating derivatives.
Since we can impose a dynamic (in velocity and acceleration) on any desired trajectory, we propose a polynomial-type trajectory. Let P_i = (x*_i, y*_i) be the initial position, in pixels, of a point P of the object in the image plane at time t_i, and P_f = (x*_f, y*_f) its final position at time t_f. Let us suppose we want the trajectory connecting these two points to pass through a maximum; consider, for example, a coordinate point representing the maximum of a curve between y*_i and y*_f. It should be noted here that the initial situation and the desired final situation must be captured by the camera installed on the real robot. We propose the following dynamic: a slow start, an acceleration in the middle of the trajectory, and finally, a slow convergence. The desired trajectory y*(x*) must therefore satisfy the following constraints:
As an example, we suggest a polynomial equation in x* that satisfies the conditions stated in (41). It is then necessary to create a variation of x*(t) which satisfies the following boundary conditions: x*(t_i) = x*_i, ẋ*(t_i) = 0, …, x*^(5)(t_i) = 0, and similarly at t_f. This produces an 11-degree polynomial. The dynamics of the desired trajectory are described in Figure 5. Figure 5a,b present the desired trajectory, in position, which connects the two boundary points in the image plane and which passes through a maximum. Figure 5c,d show the velocity dynamics of this trajectory, and Figure 5e,f give the acceleration dynamics. We impose on the robot a slow start, an acceleration in the middle of the trajectory, and finally, a slow convergence.
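The 11-degree polynomial follows from the twelve boundary conditions (position plus derivatives one through five at each end). A sketch that solves for its coefficients on the normalized time s = (t − t_i)/(t_f − t_i), with illustrative endpoint values:

```python
import numpy as np
from math import factorial

def smooth_poly(xi, xf, n=5):
    """Degree-(2n+1) polynomial x*(s) on s in [0, 1] with x*(0)=xi,
    x*(1)=xf and derivatives 1..n equal to zero at both ends; n=5
    yields the 11-degree slow-start / slow-arrival profile."""
    deg = 2 * n + 1
    A, b = [], []
    for s, x in ((0.0, xi), (1.0, xf)):
        for d in range(n + 1):
            # d-th derivative of sum(c_k * s**k) evaluated at s
            A.append([factorial(k) / factorial(k - d) * s**(k - d) if k >= d else 0.0
                      for k in range(deg + 1)])
            b.append(x if d == 0 else 0.0)
    return np.linalg.solve(np.array(A), np.array(b))

c = smooth_poly(0.0, 300.0)
x_mid = sum(ck * 0.5**k for k, ck in enumerate(c))   # symmetric profile: midpoint value
```

Working on normalized time keeps the linear system well conditioned; evaluating the polynomial at s = (t − t_i)/(t_f − t_i) recovers the time profile over [t_i, t_f].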

Simulation Conditions
We consider an object described by four points. Figure 5a shows this rectangular-shaped object in its initial position (the small blue rectangle) and in its final position (the large blue rectangle). Our algorithm uses only one point (denoted P in Figure 5a) to generate the control law. The initial position P_i and the desired position P_d of the object in the image plane are given by the coordinates of the characteristic points of the object (in blue, the coordinates of point P).

Simulation Conditions
We consider an object described by four points. Figure 5a shows this rectangular-shaped object in its initial position (the small blue rectangle) and in its final position (the large blue rectangle). Our algorithm uses only one point (denoted P in Figure 5a) to generate the control law. The initial position P_i and the desired position P_d of the object in the image plane are given by the coordinates of the characteristic points of the object (in blue, the coordinates of point P). The simulation parameters are chosen as follows: L = 0.3 m, R = 0.1 m, k_1 = k_2 = 100, T = 10 s, and Z_c = 2.5 m.
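Assuming R is the wheel radius and L the separation between the two driving wheels (the precise meaning of these parameters is our assumption), the wheel angular speeds ω_1 and ω_2 shown later in Figure 6f follow from the control laws v_r and ω_r by standard differential-drive inverse kinematics, sketched below:

```python
# Hypothetical sketch: recover the wheel angular speeds from the
# control laws v_r, w_r, assuming R = wheel radius and L = wheel
# separation, as in the simulation parameters above.
R = 0.1   # wheel radius [m]
L = 0.3   # distance between the two driving wheels [m]

def wheel_speeds(v_r, w_r):
    """Standard differential-drive inverse kinematics."""
    w1 = (v_r + w_r * L / 2.0) / R   # right wheel [rad/s]
    w2 = (v_r - w_r * L / 2.0) / R   # left wheel [rad/s]
    return w1, w2

# Pure translation: both wheels spin at the same rate.
print(wheel_speeds(0.5, 0.0))
# Rotation in place: the wheels spin in opposite directions.
print(wheel_speeds(0.0, 1.0))
```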

Results and Interpretations
The simulation results are given in Figure 6. Figure 6a shows the trajectories performed by the four points. The desired end positions and the reached end positions of the four points are almost identical; this proves that a single object point is enough to perform 2D visual servoing in the case of a mobile robot. Figure 6b represents the performed trajectory and the desired trajectory of a point P of the object in the image plane. The tracking is highly accurate: the percentage errors between the performed trajectory and the desired trajectory do not exceed 0.3% (Figure 6c). Figure 6d represents the movement performed by the robot in the Cartesian plane. We note that the robot moves back and forth to reach the final position, which is expected since we proposed an arbitrary trajectory in the image plane, which necessarily generates an arbitrary trajectory in the Cartesian plane. Figure 6e represents the two control laws v_r and ω_r applied to the robot. Both controls are continuous and smooth, and both begin with a slow variation (a slow start of the robot), accelerate in the middle, and converge slowly. This confirms that we controlled the dynamics of the robot through the choice of the desired trajectory in the image plane. The variation in the two rotational speeds of the two driving wheels is given in Figure 6f.
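The 0.3% error figure can be reproduced with a simple per-axis metric. In this sketch the error is normalized by the 1024-pixel image size; the normalization actually used in the paper is an assumption on our part:

```python
import numpy as np

def max_pct_error(performed, desired, image_size=1024):
    """Maximum tracking error along u and v as a percentage of the
    image size (this normalization is an assumption)."""
    err = np.abs(np.asarray(performed) - np.asarray(desired))
    return 100.0 * err.max(axis=0) / image_size

# Toy data: a desired pixel trajectory plus a bounded offset.
desired = np.stack([np.linspace(100, 900, 50),
                    np.linspace(200, 600, 50)], axis=1)
performed = desired + np.array([2.0, -1.0])   # 2 px / 1 px offset
print(max_pct_error(performed, desired))   # well under 0.3% per axis
```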


Conclusions
In this article, we introduced a novel approach to 2D visual servoing in complex environments containing obstacles. Our method makes use only of visual data obtained by a single monocular sensor. The fundamental idea is to manage and control the dynamics associated with any trajectory generated in the image plane. By ensuring precise tracking of the selected trajectory in the image plane, we prevent issues such as collisions with obstacles and loss of camera visibility. In the specific context of this work, which focuses on a mobile robot, we demonstrated using the concept of differential flatness that a single point on the object in the image plane is a sufficient descriptor. In the future, this approach could be extended to robots with more degrees of freedom, such as manipulator arms or quadcopters, using additional points on the object to match the dimension of the robot's flat output. It is important to note that proposing an arbitrary trajectory in the image plane can lead to non-intuitive movements, including back-and-forth motions, in the Cartesian plane. This can create a problem, especially when the robot moves backward, which could result in collisions with obstacles as the robot moves without direct vision. Therefore, in future work, it would be essential to develop optimal trajectory planning that avoids backward movements. By using the concept of flatness, we were able to impose dynamics in position, velocity, and acceleration on the robot. It thus becomes possible to choose any trajectory in the image plane of a primitive initially visible to the robot, such that the robot executes the movement that guarantees exact tracking of this trajectory. In this paper, we were interested in visual servoing in the case of a static target. Since we have demonstrated that it is possible to control positions, velocities, and accelerations in the image plane, the proposed concept can also be used to carry out dynamic visual servoing.


Figure 4 .
Figure 4. Simulation environment. (a) Three-dimensional environment; (b) image captured by the camera (size 1024 × 1024). The red point represents a single detected point on the object.


Figure 5 .
Figure 5. (a,b) The desired trajectory in the image plane of a point P of the object (the big and small boxes represent, respectively, the desired and initial target positions; only the blue point is used to generate the control laws); (c,d) the velocities (in pixels) of the desired trajectory; (e,f) the accelerations (in pixels) of the desired trajectory.


Figure 6 .
Figure 6. (a) The trajectories made by the four points (the big and small boxes represent, respectively, the desired and initial target positions; blue, green, red, and turquoise thick lines represent the four point trajectories in the image plane as the robot moves); (b) the actual and the desired trajectory in the image plane of a point P of the object; (c) the percentage error of the performed trajectory along u and along v; (d) the trajectory performed by the robot in the 2D Cartesian plane; (e) the evolution of the two commands v r and ω r applied to the robot; (f) the variations in the two angular speeds ω 1 and ω 2 of the two driving wheels.