Leader–Follower Approach for Non-Holonomic Mobile Robots Based on Extended Kalman Filter Sensor Data Fusion and Extended On-Board Camera Perception Controlled with Behavior Tree

This paper presents a leader–follower mobile robot control approach using onboard sensors. The follower robot is equipped with an Intel RealSense camera mounted on a rotating platform. Camera observations and ArUco markers are used to localize the robots relative to each other and to the workspace. The rotating platform expands the perception range, so observations that do not fit in the camera's field of view at one time can still be used in the localization process. The decision-making associated with the control of camera rotation is implemented using behavior trees. In addition, measurements from encoders and IMUs are used to improve the quality of localization. Data fusion is performed with an Extended Kalman Filter (EKF) and determines the robots' poses. A 3D-printed cuboidal tower with four ArUco markers located on its sides is mounted on the leader robot. Fiducial landmarks are placed on vertical surfaces in the workspace to improve the localization process. Experiments were performed to verify the effectiveness of the presented control algorithm. The Robot Operating System (ROS) was installed on both robots.


Introduction
Different types of robots [1,2] are created by researchers according to specific tasks and applications. They have also developed different types of controllers for collision avoidance [3] and for leader–follower configurations [4]. Due to their simplicity and ease of use, fiducial ArUco markers are widely used. Researchers have proposed different types of control algorithms [5,6] with different features depending on their applications.
Pose calculation from an ArUco marker using the Kalman Filter is well described in the paper [7]. The paper [8] provides a tutorial on behavior trees in video games, and the book [9] gives an introduction to behavior trees. Examples of controls for robot formations are described in the papers [10,11]. The paper [12] gives an overview of introducing landmarks into the Extended Kalman Filter.
Control based only on onboard sensors is more challenging, but the robot's operating area is then not limited by the coverage of an external measurement system. The novelty of the paper is a control approach for leader–follower robots that includes independent control of the camera's field of view. While the rotating platform does not cover the full angle (its range is −150 to 150 degrees), it allows observation of a robot and landmarks located to the sides or, to some extent, behind. This is achieved using only one camera. The main task of the behavior tree is to switch between observing the leader and static fiducial markers placed on the wall. This provides correction of positions obtained from intrinsic sensors, which can drift, using features available in the environment. In addition, making the decision to switch between observed objects with a behavior tree will make it possible in the future to easily shape the process of perception of the environment depending on the conditions of the task (for example, the speed of robots and other objects, the quality of perception due to the distances and angles of observation of markers, or lighting conditions).
An additional benefit of using a behavior tree is the simplified management of task initiation. In tasks carried out by more than one robot, it is necessary to distribute information about the readiness of key sub-systems. This problem can be solved in many ways and is usually overlooked in the literature, yet it has been faced by every team implementing a multi-robot system. This paper proposes a neat solution to this problem based on the behavior tree approach.
To the best of the authors' knowledge, there is no reported use of a behavior tree in the literature to expand a robot's perception range. In particular, the use of this approach in a multi-robot system has never been proposed. According to the authors, the proposed combination of hardware and software significantly reduces the cost of the perceptual system because there is no need to use many cameras.
The authors emphasize that the practical usefulness of this approach is not obvious, because the rotation of the camera reduces the quality of the recorded image and, as a result, makes it difficult to track the marker. The research presented in the article shows that under standard laboratory conditions, the camera on a rotating platform does work and can be used in this kind of task. The key points that distinguish the work presented by the authors from already published papers are as follows: (a) A decentralized vision-based system for markers is presented in the paper [13]. In that paper, each robot has a truncated regular octagon and a fixed camera, whereas in this paper, the camera is movable and independent of the movement of the mobile platform. (b) The paper [14] is an extension of the research presented in paper [13]. A dual Unscented Kalman Filter is used to estimate the leader's velocities from fiducial markers. That work, however, has a centralized architecture, and only simulation results are given. (c) In the paper [15], the ArUco markers are placed on the tops of the robots and the camera is placed on a stand observing the robots vertically. The disadvantage of that method is that the vertical camera must cover the entire environment containing the robots, which limits the working area to that covered by the measurement system; the algorithm presented in this paper removes this limitation. (d) The paper [16] presents a hitchhiking robot approach for leader–follower robots. The authors used a QR marker, and the mapping and computation are conducted by the driver robot, but with a fixed camera and a completely different control algorithm from the one used in this paper.
The paper [17] used ArUco markers on the tops of the robots and presents a protocol for a network of multiple robots. The nonholonomic robots in that paper are asymptotically stabilized. The paper [18] presents an algorithm for intermittent vision measurements and odometry with noise. The follower robot reconstructs the trajectory of the leader robot with Bayesian trajectory estimation, and the ArUco marker is on the leader robot.
The paper [19] presents a fusion system combining INS, EKF, IMU, GNSS, and information from ArUco markers. The paper [20] presents a follower robot recognizing an ArUco marker on the clothes worn by a factory worker. A Mecanum platform is used, and a human gait error model is also presented by the authors. The paper [21] presents an algorithm with ArUco markers placed on the facets of a truncated rhombicuboctahedron on a drone; a GPS-denied environment for Unmanned Aerial Systems (UASs) is discussed in that paper. This paper is an extension of the conference paper [22], adding landmark fusion in the EKF and a behavior tree controlling the robots. A leader–follower mobile robot control approach using onboard sensors and fiducial markers is shown in this paper. A 3D-printed cuboidal tower is mounted on the leader robot with four ArUco markers located on its sides. The follower robot is equipped with an Intel RealSense Depth sensor D435 camera mounted on a rotating platform. The depth sensor has an ideal range from 0.3 m to 3 m and works in indoor as well as outdoor environments. Both the leader and follower robots are equipped with MPU9255 IMU sensors, which deliver nine measurement channels: a three-axis gyroscope, a three-axis accelerometer, and a three-axis compass/magnetometer. A Dynamixel 12-A servo motor is used in the one-degree-of-freedom rotating platform. The servo motor has a resolution of 0.29 degrees per pulse and a 0 to 300 degree range of rotation. Both robots also have an Intel NUC single-board PC with a built-in Wi-Fi module for connectivity. Camera observations and ArUco markers are used to localize the robots relative to each other and to the landmarks located in the workspace. A proportional-integral-derivative (PID) controller rotates the camera platform independently from the mobile platform movement of the follower robot. The follower robot detects the ArUco markers on the leader robot and calculates its pose. The data from the wheel encoders and the IMU sensors are fused by the EKF to calculate the global pose of the robots in the environment. The environment has artificial markers (ArUco markers) on vertical surfaces. The EKF of the follower robot also takes the data from the landmarks to update its pose.
The Robot Operating System (ROS) is installed on both robots. Experiments are performed to verify the effectiveness of the control algorithm presented in the paper. The leader robot is shown in Figure 1a and the follower robot in Figure 1b. The behavior tree is explained in Section 2, ArUco marker detection in Section 3, the Extended Kalman Filter and its sensor-specific matrices in Section 4, and the control algorithm in Section 5. The experimental methodology is given in Section 6 and the experimental results in Section 7. In the last section, concluding remarks are given.

Behavior Tree
In general, behavior trees are used in video games, computer science, control systems, and robotics, and are defined as a mathematical model for executing a plan.
The plan execution of the leader and follower robots is controlled by two separate behavior trees implemented in the on-board systems. A behavior tree is used because it is effective and makes it easy to modify and extend tasks within the same tree. Behavior trees only execute the node that is in the current state, which makes them efficient during execution. Another feature is that behavior trees are easily debuggable, as they are visual representations of the sequence of tasks. The C++ library [23] is used to build and execute the behavior trees in ROS. At the heart of a behavior tree is the root node, which acts as the starting point: the tick signal begins its journey through the tree's nodes there. The root node emits periodic "ticks" to its child nodes. As a node within the behavior tree executes, it conveys a status update back to its parent node: if the node's task is still in progress, it communicates "running"; if the objective is accomplished, it communicates "success"; and if the task encounters an obstacle or failure, it relays "failure". The behavior tree of the leader robot is presented in Figure 2 and that of the follower robot in Figure 3. The behavior tree of the leader robot has the condition "Follower robot ready"; only when the follower robot is connected to the ROS master and sends the signal that it is ready to move does this condition relay a "success" status to the upper-level sequence node. After success, the sequence node executes the task "Follow virtual robot", in which the leader robot starts moving and follows the trajectory of the virtual robot. The behavior tree of the follower robot checks the two conditions "Leader robot ready" and "Leader robot moving". The sequence node obtains two successes when the leader robot is connected to the ROS master and moving with some velocity. After success from both conditions, the sequence node executes a fallback node. The fallback node rotates the camera platform of the follower robot towards the landmarks on the walls for 3 s and then towards the leader robot for 5 s. The execution time of the platform rotation is negligible from the point of view of the task.
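The tick semantics described above can be sketched in a few lines. The actual implementation uses the C++ library [23]; the Python classes and the readiness flags below are only illustrative, assuming the standard sequence/fallback semantics.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Condition:
    """Leaf node wrapping a boolean check, e.g. 'Leader robot ready'."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self):
        return Status.SUCCESS if self.predicate() else Status.FAILURE

class Sequence:
    """Ticks children left to right; stops at the first non-SUCCESS child."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Fallback:
    """Ticks children left to right; stops at the first non-FAILURE child."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

# Follower tree: both readiness conditions must succeed before the camera
# actions (watch landmarks / watch leader) would be ticked.
leader_ready = [False]
leader_moving = [False]
follower_tree = Sequence([
    Condition(lambda: leader_ready[0]),
    Condition(lambda: leader_moving[0]),
])
```

Until both flags are set, every tick of the root returns FAILURE, which is exactly the start-up gating behavior described above.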

ArUco Marker
ArUco markers are fiducial markers created by S. Garrido-Jurado [24]. ArUco markers of different IDs and sizes can be created with the online generator website [25]. The distances and angles between the ArUco marker and the camera axes can be extracted using the OpenCV library [26].
Section 3.1 explains the placement of ArUco markers on the leader robot. The data extraction after detection of an ArUco marker is explained in Section 3.3. The calculation of the leader pose expressed in the global coordinate frame is explained in Section 3.4.

ArUco Marker Placement on Leader Robot
The center of the cuboidal tower coincides with the center of the leader robot. ArUco markers with IDs 1 to 4, of size 5 cm × 5 cm, are pasted on the faces of the cuboidal tower (Figure 1a). Different IDs are required to detect which side of the leader robot is observed, which has a direct influence on the angle between the follower and leader robots. Figure 4 shows the top view of the leader and follower robots and the ArUco marker placement on the leader robot.

Artificial Landmarks
ArUco markers of size 10 cm × 10 cm, with IDs 100 to 104, are pasted on vertical surfaces in the environment. The distances between the landmarks are measured in the environment, and these distances are assigned to the respective IDs to give global positions in the environment.

Pose Estimation from ArUco Markers
Precise pose information from the markers requires a calibrated camera, described by its distortion coefficients (D) and camera matrix (K). Intel's ROS library [27] connects the Intel RealSense sensor to the ROS environment. The factory calibration information, such as the D and K values, is published on the ROS topic "/camera/color/camera_info".
Two cases are possible when the follower robot observes the leader robot in the camera frame. In case 1 (Section 3.3.1), the follower robot observes only one ArUco marker in the camera frame; in case 2 (Section 3.3.2), the follower robot observes two ArUco markers at the same time. Translation and rotation of the ArUco markers are conducted to calculate the center of the leader robot.

Case 1-When One Marker Detected
When one ArUco marker on the leader robot comes into the camera frame of the follower robot, the ID information and the translation (t_vec) and rotation (r_vec) vectors are extracted by the ArUco library [26]. The camera rotation matrix relative to the tag (ArUco marker) is calculated by Rodrigues' formula. The transposition of Equation (3) gives the rotation matrix of the tag relative to the camera (R_TC). The Euler angles (φ_R, θ_R, ψ_R) from the tag to the camera are calculated from the rotation matrix R_TC, using the Z-Y-X rotation convention. The axis representation between the camera and the ArUco marker is shown in Figure 5. The leader-follower controller shown in this paper requires the geometric centers of the leader and follower robots. The ArUco markers have some offset distance from the center of the leader robot, so a translation is conducted to move the marker exactly to the center of the leader robot. The algorithm proposed in this paper to calculate the leader robot's pose requires that the XY plane of the ArUco marker be parallel to the camera's XY plane. The rotation of the ArUco markers is conducted first, and then the translations. When the rotation matrix R_y(θ_R) is multiplied by the rotation matrix R_TC, it gives a new matrix Ŕ_TC which is rotated about the y-axis of the marker. To virtually displace the marker from the cuboidal wall to the center of the leader robot, which is also the center of the cuboidal tower, a translation should be added along the Z-axis. The cuboidal tower measures 0.06 m on each side, so 0.03 m is added to the (t_vec) vector of the marker, giving the new vector (t̃_vec) with d_x = 0, d_y = 0, and d_z = 0.03. The translation matrix (t_tag_cam) in the camera coordinate frame is calculated from Ŕ_TC in Equation (6) and t̃_vec.
The components z_marker and x_marker of the transposed translation vector t_tag_cam^T from Equation (9) give the distances between the follower and leader robots in the x and y directions expressed in the global coordinate frame. To calculate the angle between the leader and follower, a modification of the pitch angle θ_R of the ArUco marker is conducted, which is explained in Figure 6. The pose difference between the leader and follower robots is (z_marker, x_marker, θ̃_R).
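Since Equations (3) to (9) are not reproduced in this excerpt, the chain above can be sketched with NumPy only (in practice OpenCV's cv2.Rodrigues would supply the rotation matrix); the exact sign and frame conventions here are assumptions consistent with the description, not the paper's verbatim formulas.

```python
import numpy as np

def rodrigues(rvec):
    """Rotation matrix from a rotation vector (Rodrigues' formula)."""
    rvec = np.asarray(rvec, dtype=float).ravel()
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rot_y(angle):
    """Rotation about the y-axis, used to align the marker's XY plane with the camera's."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def marker_to_tower_center(rvec, tvec, theta_R, d=(0.0, 0.0, 0.03)):
    """Rotate the marker by R_y(theta_R), then shift it 0.03 m along Z to the tower center."""
    R_CT = rodrigues(rvec)          # camera relative to the tag
    R_TC = R_CT.T                   # tag relative to the camera (transpose step)
    R_TC_mod = rot_y(theta_R) @ R_TC
    t_new = np.asarray(tvec, dtype=float).ravel() + R_TC_mod @ np.asarray(d)
    return R_TC_mod, t_new
```

For a marker facing the camera squarely (r_vec ≈ 0, θ_R ≈ 0), the only effect is the 0.03 m shift along Z, which places the estimate at the tower's center.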

Case 2-When Two Markers Detected
The solution proposed by the authors is the translation and rotation of the markers to the center of the leader robot. Theoretically, both detected markers should give the same readings, but due to manufacturing and pasting errors, the markers give slightly different readings of the center of the leader robot. The solution to this problem is to average the readings of the markers depending on the pixel areas captured in the image frame.
The pixel area a_i of the i-th marker is calculated by the Shoelace formula [28]. The Shoelace formula is a mathematical algorithm that finds the area after cross-multiplication of the corresponding coordinates of the vertices. Figure 7 shows the corner points of the ArUco markers. The pixel x coordinates are (x_1, ..., x_4) and the y coordinates are (y_1, ..., y_4). The total area A is the sum of the areas of the N_m detected markers, where N_m is the total number of markers detected. At most, the follower robot may detect only two markers, so N_m = 2. Both detected markers have sets of translation t_vec_i and rotation R_TC_i vectors. The new rotation matrices Ŕ_TC_i are calculated as explained in Equation (6). Both detected markers come into the same XY plane after multiplying by the rotation matrices. To obtain the center of the leader robot, translation distances should be added to the translation vectors t_vec_i. A distance of 0.03 m is added along the Z-axis. As the cuboidal side is 0.06 m, 0.06 m is added or subtracted according to the set of markers detected. When ArUco IDs (ID1 and ID2) or (ID2 and ID3) are detected, the translation distances are added with d_x1 = 0, d_y1 = 0, and d_z1 = 0.03.
The average translation vector t̄_tag_cam is calculated from the per-marker vectors. In this case, the distances z_marker and x_marker are taken from the average matrix t̄_tag_cam from Equation (18). The authors take the pitch angle θ_R from the marker with the larger pixel area. From this pitch angle, the new angle θ̃_R is calculated as explained in Figure 6.
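The averaging step can be sketched as follows. The Shoelace area matches the formula cited above; weighting the translation vectors proportionally to pixel area is an assumption consistent with, but not reproduced verbatim from, Equations (17) and (18).

```python
import numpy as np

def shoelace_area(corners):
    """Pixel area of a marker quadrilateral from its four corners (Shoelace formula)."""
    pts = np.asarray(corners, dtype=float)
    xs, ys = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))

def averaged_tvec(tvecs, areas):
    """Average the per-marker translation vectors, weighted by detected pixel area."""
    tvecs = np.asarray(tvecs, dtype=float)
    w = np.asarray(areas, dtype=float)
    w = w / w.sum()                       # normalize by the total area A
    return (w[:, None] * tvecs).sum(axis=0)
```

A marker seen nearly edge-on projects to a small pixel area and therefore contributes little to the averaged estimate, which matches the motivation given above.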

Pose Calculation of Leader Robot
Let the pose of the follower robot (Pose_F) be represented by x_2, y_2, and θ_2. After ArUco marker detection, z_tag and x_tag give the distances in the x-axis and y-axis, respectively, between the centers of the follower and leader robots. The angle between the camera and the ArUco marker is θ̃_R, and the angle between the rotating platform and the follower robot base is θ_c. The angles θ_c and θ̃_R are added to obtain the resultant angle, as they are opposite in nature. Figure 8 represents the situation of the models after pose estimation (Section 3.3), i.e., the situation of the robots assuming the estimation is perfect. The pose of the leader robot (Pose_L) is written as in Equation (19).
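Equation (19) itself is not reproduced in this excerpt. The sketch below shows one plausible composition consistent with the description above (camera-frame offsets rotated by the camera heading θ_2 + θ_c, and the relative angles summed); it should be read as an assumption, not the paper's exact formula.

```python
import math

def leader_pose(follower_pose, z_marker, x_marker, theta_c, theta_R_mod):
    """Compose the follower pose with the camera-frame offsets to get the leader pose."""
    x2, y2, theta2 = follower_pose
    heading = theta2 + theta_c                 # direction the camera points, global frame
    x1 = x2 + z_marker * math.cos(heading) - x_marker * math.sin(heading)
    y1 = y2 + z_marker * math.sin(heading) + x_marker * math.cos(heading)
    theta1 = theta2 + theta_c + theta_R_mod    # the two angles are added, being opposite in nature
    return x1, y1, theta1
```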

Extended Kalman Filter
A total of three Extended Kalman Filters (EKFs) are used in the robots: two EKFs in the follower robot and one EKF in the leader robot. The EKF of the leader robot (Figure 9) takes the input data from the wheel encoders and the IMU sensor to generate the robot's own pose in the global frame, whereas one of the follower EKFs (Figure 10a) has the same inputs with landmarks in addition. The remaining EKF of the follower robot (Figure 10b) predicts the position of the leader robot after ArUco marker detection. The EKF model is taken from the paper [29]. The system state X of the EKF consists of x_i, y_i, θ_i, v_i, and ω_i, where i = 1, 2 for the leader or follower robot. Variables x_i, y_i, and θ_i represent the position in the x-axis, the position in the y-axis, and the orientation of the robot in the global frame, respectively. The linear and angular velocities are v_i and ω_i, respectively. The discrete-time model of the system comprises a state model and a measurement model, where the non-linear system is represented as f, the measurement model as h, the dynamic system Gaussian noise as w(k − 1) (w(k) ∼ N(0, Q(k))), and the measurement Gaussian noise as υ(k) (υ(k) ∼ N(0, R(k))). The covariance matrices are Q(k) and R(k). To linearize the system, the Jacobian matrix of f is calculated, where t is the time difference between two consecutive samples. The prediction and update equations follow the standard EKF form, where the Kalman gain is represented as K(k), and X(k)−, P(k)−, X(k)+, and P(k)+ are the a priori and a posteriori state estimates and covariances.
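A minimal sketch of the predict/update cycle for the five-dimensional state X = [x, y, θ, v, ω], assuming a standard discrete unicycle motion model; the exact f, h, Q, and R used in [29] may differ.

```python
import numpy as np

def f(X, dt):
    """Discrete unicycle motion model for state [x, y, theta, v, omega]."""
    x, y, th, v, w = X
    return np.array([x + v * dt * np.cos(th),
                     y + v * dt * np.sin(th),
                     th + w * dt,
                     v,
                     w])

def jacobian_F(X, dt):
    """Jacobian of f with respect to the state, evaluated at X."""
    x, y, th, v, w = X
    return np.array([
        [1, 0, -v * dt * np.sin(th), dt * np.cos(th), 0],
        [0, 1,  v * dt * np.cos(th), dt * np.sin(th), 0],
        [0, 0, 1, 0, dt],
        [0, 0, 0, 1, 0],
        [0, 0, 0, 0, 1]], dtype=float)

def ekf_predict(X, P, Q, dt):
    """A priori state and covariance."""
    F = jacobian_F(X, dt)
    return f(X, dt), F @ P @ F.T + Q

def ekf_update(X, P, z, H, R):
    """A posteriori state and covariance for a linear measurement z = H X + noise."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain K(k)
    X_new = X + K @ (z - H @ X)
    P_new = (np.eye(len(X)) - K @ H) @ P
    return X_new, P_new
```

Each sensor (encoders, gyroscope, landmarks) enters through its own H and R in the update step, which is how the three measurement sources described below are fused.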

Wheel Encoder Data of the EKF
Both the leader and follower robots have motors with encoders. The wheel angular velocities of the left and right wheels in radians per second are calculated from the encoder ticks, where w_li and w_ri are the angular velocities (rad/s) of the left and right wheels of the i-th robot, φ_li^c and φ_li^p are the current and previously saved encoder ticks for the left wheels, φ_ri^c and φ_ri^p are those for the right wheels, and t_φl and t_φr are the times between two consecutive readings of the left and right wheels.
The linear velocities (in m/s) of the wheels are calculated by multiplying Equations (31) and (32) by the wheel radius R_wheel_i. Equations (33) and (34) are then used to calculate the linear and angular velocities of the robots, where L_i is the wheel separation of the robots in meters, and v_wheel_i and ω_wheel_i are the linear and angular velocities of the robots calculated from the wheel encoder readings. The calculated linear and angular velocities are used in the EKF to calculate the pose of the robot in the global reference frame, entering through the measurement matrix z_encoder(k) and the corresponding H_encoder(k) matrix.
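The encoder pipeline above can be sketched as follows; the ticks-per-revolution constant is a hypothetical encoder parameter, not a value given in the paper.

```python
import math

def wheel_angular_velocity(ticks_now, ticks_prev, dt, ticks_per_rev):
    """Encoder tick difference over time dt, converted to rad/s."""
    return (ticks_now - ticks_prev) * 2.0 * math.pi / (ticks_per_rev * dt)

def body_velocities(w_left, w_right, wheel_radius, wheel_separation):
    """Differential-drive kinematics: wheel rates to body linear/angular velocity."""
    v_l = w_left * wheel_radius           # left wheel linear velocity (m/s)
    v_r = w_right * wheel_radius          # right wheel linear velocity (m/s)
    v = 0.5 * (v_l + v_r)                 # robot linear velocity
    omega = (v_r - v_l) / wheel_separation  # robot angular velocity
    return v, omega
```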

IMU Data to the EKF
The gyroscope data are the only data used from the IMU sensor in both robots. The gyroscope measurement enters the EKF through the measurement matrix z_gyro(k) and the corresponding H_gyro(k) matrix.

Landmark Data to the EKF
ArUco landmarks give the global x and y positions of the robots, so z_landmark(k) contains these positions, with the corresponding H_landmark(k) matrix.
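With the state ordered as X = [x, y, θ, v, ω], the three measurement matrices reduce to row selectors. The sketch below assumes the encoder measurement is (v, ω), which matches the description above but is not reproduced verbatim from the paper's equations.

```python
import numpy as np

# State ordering assumed: X = [x, y, theta, v, omega]
H_encoder = np.array([[0, 0, 0, 1, 0],     # encoders measure linear velocity v
                      [0, 0, 0, 0, 1]],    # and angular velocity omega
                     dtype=float)
H_gyro = np.array([[0, 0, 0, 0, 1]],       # gyroscope measures omega only
                  dtype=float)
H_landmark = np.array([[1, 0, 0, 0, 0],    # landmarks give the global x position
                       [0, 1, 0, 0, 0]],   # and the global y position
                      dtype=float)
```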

Pose Prediction of Leader Robot
The follower robot has another EKF which predicts the position of the leader robot. The measurement matrix z_leader(k) for leader pose prediction contains the pose of the leader robot (x_1, y_1, θ_1) taken from Equation (19), with the corresponding H_leader(k) matrix.

Control for Rotating Platform
The movement of the rotating platform on the follower robot is independent of the robot body. A proportional-integral-derivative (PID) controller is used to control the movement of the rotating platform. The main aim of the controller is to minimize the distance between the center of the camera and the center of the ArUco marker. The distance between the ArUco marker center and the camera center (x_marker) is taken from Equation (8) or (18), depending on the number of detected ArUco markers.
In the control law, u_PID(t) is the control signal generated at time t; K_p, K_i, and K_d are the proportional, integral, and derivative gains; de is the change in error; dt is the change in time; and the error is e(t) = SP − PV(t), where SP is the setpoint, set to zero since the camera should point at the center of the ArUco marker, and PV(t) is the process variable, equal to the current offset between the center of the camera and the ArUco marker (x_marker).
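A discrete form of the controller described above can be sketched as follows; the gains and the sampling time are illustrative, not the values used on the robot.

```python
class PID:
    """Discrete PID for the camera platform; setpoint SP = 0 means 'marker centered'."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, pv, dt, sp=0.0):
        """pv is the current offset x_marker; returns the platform command u_PID."""
        error = sp - pv
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

A positive offset (marker right of center) yields a negative command, turning the platform back toward the marker under the assumed sign convention.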

Control Leader and Follower Robots
The kinematic model of the mobile robot R_i is the unicycle model, where i = 1, 2 for the leader and follower robots, respectively. The robot pose is represented by the vector [x_i y_i θ_i]^T, where x_i, y_i, and θ_i represent the pose of the robot in the global reference frame. The control vector is [v_i ω_i]^T, where v_i and ω_i are the linear and angular velocity controls of the robot, respectively. The aim of the leader is to mimic the pose [x_0 y_0 θ_0]^T of the virtual leader, whose desired velocity vector is [v_0 ω_0]^T, where v_0 is the linear velocity and ω_0 is the angular velocity. With some constant displacement [d_2x d_2y]^T, the follower has to mimic the motion of the leader and keep the same orientation, which brings the corresponding position and orientation errors to zero. The system errors are expressed with respect to the robot's fixed frame. Due to its simplicity and effectiveness, the trajectory tracking algorithm is taken from [30]. In the original version [30], the algorithm was used for trajectory tracking by a single robot. The same paper presents a stability proof that remains valid for the leader without changes, and for the follower under the assumption that the leader's motion is a trajectory generator for the follower after adding the fixed displacement [d_2x d_2y]^T. The control law for the i-th robot uses constant parameters k_1, k_2, and k_3 greater than zero and the function sgn(·). A detailed analysis of the properties of the control is presented in paper [30].
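The error transformation and a control law of this family can be sketched as follows. The frame-error rotation is standard; the specific law below is one common sgn-based tracking law and should be treated as an assumption, not necessarily the exact Equation (53) from [30].

```python
import math

def sgn(x):
    """Sign function as used in the control law."""
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def frame_errors(pose, ref_pose):
    """Tracking errors expressed in the robot's fixed frame."""
    x, y, th = pose
    xr, yr, thr = ref_pose
    dx, dy = xr - x, yr - y
    ex = math.cos(th) * dx + math.sin(th) * dy
    ey = -math.sin(th) * dx + math.cos(th) * dy
    eth = thr - th
    return ex, ey, eth

def tracking_control(errors, v_ref, w_ref, k1, k2, k3):
    """One common tracking law with gains k1, k2, k3 > 0 (hedged sketch)."""
    ex, ey, eth = errors
    v = v_ref * math.cos(eth) + k1 * ex
    w = w_ref + k2 * sgn(v_ref) * ey + k3 * eth
    return v, w
```

At zero error, the law simply passes through the reference velocities, as any tracking controller of this family must.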

Methodology
The experiments with real robots were performed in an environment without obstacles. A PC with an Intel i7 5th-generation processor, 16 GB RAM, and Ubuntu 20.04 as the operating system was used. ROS Noetic was installed on the PC and on both robots, each of which uses a NUC single-board computer as its hardware. The PC and both robots were connected to the same Wi-Fi network. The PC was selected as the ROS master and both robots were connected as slave nodes. The task for the formation of two robots was to follow a sinusoidal trajectory. Two experiments were performed by the authors.
In experiment 1 (Section 7.1), the follower robot has to follow the leader as it moves along a sinusoidal path. The behavior tree and landmarks are not used in this experiment; it is performed to validate the algorithm presented by the authors, in which the follower robot detects the markers on the leader robot and calculates its global pose. The rotating platform continuously observes the markers on the leader robot with the PID controller described in the paper. All the data generated in the experiments were saved in text (txt) files.
In experiment 2 (Section 7.2), landmarks are placed in the environment. The behavior tree switches the movement of the rotating platform towards the landmarks placed on the wall or towards the leader robot.

Experiment Results
The control gains of the controller (53) between the virtual robot and the leader robot are k_1 = 0.15, k_2 = 2, and k_3 = 2. The control gains of the controller (53) between the leader robot and the follower robot are k_1 = 0.25, k_2 = 1.5, and k_3 = 1.5. Their values were tuned manually. In Equation (49), d_2x = 0.5 and d_2y = 0.

Experiment 1
Experiment 1's duration was 75 s. Figure 11a shows the (x, y) plots of the virtual leader, the leader robot, and the follower robot. The orientation theta of all the robots is shown in Figure 11b, which also shows the EKF output of theta after filtering. The noise visible on the plot of the orientation from the EKF after marker detection is caused by the inaccuracy of mounting and 3D printing of the cuboidal stand and the limited precision of sticking the ArUco markers onto it. The noise is particularly noticeable when two ArUco markers, placed on two perpendicular walls of the stand, are observed at the same time. As the leader robot tries to mimic the pose of the virtual robot, the leader-follower controller tries to minimize the deviation from the desired displacement, which is a design parameter between the robots. The errors of the leader robot in the x-axis, y-axis, and orientation are shown in Figure 11d, Figure 11e, and Figure 11f, respectively. Their values do not exceed 7 cm. The actual linear and angular velocities of the leader robot are shown in Figure 11g and Figure 11h, respectively. For short periods, the angular velocity reaches the platform limit set at 0.2 rad/s. The follower robot detects the ArUco markers on the leader robot and tries to mimic the pose of the leader robot, and the errors generated by the leader-follower controller are saved and plotted. The errors of the follower robot in the x-axis, y-axis, and orientation are shown in Figure 11i,j and Figure 12a, respectively. The actual linear and angular velocities of the follower robot are shown in Figure 12b and Figure 12c, respectively. The noise associated with ArUco marker detection, and consequently with the relative position computation, translates into more variable waveforms than in the case of the leader robot and also into more frequent reaching of the platform velocity limits. The pose of the leader robot as observed by the follower robot through ArUco marker detection, after the EKF, is shown in Figure 11c. The information on whether the follower robot detected the ArUco marker is shown in Figure 12d; 1 means the marker was detected and 0 means it was not. As can be seen, the ArUco marker was not visible only for short periods.

Experiment 2
Experiment 2's duration was 40 s. Figure 13a shows the (x, y) plots of the virtual leader, the leader robot, and the follower robot. The path of the follower robot is shown in green in Figure 13a and shows disturbances in the sinusoidal trajectory: for 3 s, the follower robot watched the landmarks on the walls, and for 5 s it watched the leader robot. Figure 13b shows the (x, y) plots of the virtual leader and of the odometry and data-fusion positions of the leader robot. A slight drift in the odometry data is removed after data fusion. The errors generated by the leader-follower controller algorithm are saved and plotted. The errors of the follower robot while following the leader robot in the x-axis and y-axis are shown in Figure 13c and Figure 13d, respectively. The actual linear and angular velocities of the follower robot are shown in Figure 13e and Figure 13f, respectively. Figure 13g,h show the position of the follower robot in the global coordinate frame when the camera on the rotating platform observes the landmarks. Figure 13i gives the ID of the detected landmark. Figures 13j and 14a show the distance between the follower robot and the leader robot when the camera on the rotating platform of the follower robot watches the leader robot. The information on whether the follower robot detected the ArUco marker is shown in Figure 14b; 1 means the marker was detected and 0 means it was not.

Figure 2. Behavior tree of leader robot.

Figure 4. Top view of the leader and follower robots.

Figure 5. Representation of the axis between the ArUco marker and the camera.

Figure 6. Flow chart of pitch angle θ_R modification of the ArUco marker.

Figure 8. Pose representation of the leader and follower robots.

Figure 11. Experiment 1 results. (a) (x, y) plot of all robots. (b) Theta of all the robots. (c) (x, y) plot of all robots with EKF. (d) Error in x-axis for leader robot. (e) Error in y-axis for leader robot. (f) Error of leader's orientation. (g) Linear velocity of leader robot. (h) Angular velocity of leader robot. (i) Error in x-axis for follower robot. (j) Error in y-axis for follower robot.