Coordination of Multiple Robotic Vehicles in Obstacle-Cluttered Environments

In this work, we consider the motion control problem for a platoon of unicycle robots operating within an obstacle-cluttered workspace. Each robot is equipped with a proximity sensor that allows it to perceive nearby obstacles, as well as a camera to obtain its relative position with respect to the preceding robot. Additionally, no robot other than the leader of the team is able to localize itself within the workspace, and no centralized communication network exists.


Introduction
During the last few decades, the cooperation of autonomous robotic platforms has stood as an important research direction within robotics, owing to the numerous advantages it entails, such as flexibility, increased capabilities, and robustness, to name a few. A particular class of robotic problems involves coordinating the motion of multi-robot systems in order to achieve either a common goal or several independent ones. Centralized control schemes constitute a well-studied solution to the coordination problem; nevertheless, high computational and communication costs render them viable only for teams with a small number of robots. Furthermore, owing to the complexity of the underlying problem, the scenarios dealt with so far impose very strict assumptions, such as the absence of static obstacles and knowledge of the state of the entire system, which cannot be met easily under real working conditions. Moreover, knowledge of the entire state by each robot necessitates a centralized localization system and communication network, which, apart from introducing issues such as time delays and inconsistencies, may not be available in unstructured workspaces.
Another critical issue with explicit inter-robot communication (either through a centralized network or a mobile ad hoc network) is that the information exchanged among the robots (e.g., position/orientation/velocity measurements) must be expressed with respect to a common frame in order to be meaningful to the robots. In that respect, a common frame is vital for the operation of the whole multi-robot system. However, the errors that propagate when a common frame is assumed within a multi-robot system grow rapidly and in some cases diverge, thus injecting a large amount of noise into the closed-loop system. On the other hand, decentralized control schemes for multi-robot systems bypass the aforementioned issues, offering a more efficient and robust solution to this problem [1,2], although in most such schemes direct communication among the robots via a network is still required. In [29], a group of robots aims at tracking a desired trajectory while maintaining a predefined formation in known, constrained workspaces. The proposed control scheme combines artificial potential fields with an optimization algorithm that minimizes a given formation cost. Although rapid obstacle avoidance is achieved, this method relies on global position measurements. In [30], the cooperative motion coordination of leader-follower (L-F) formations of nonholonomic mobile robots under visibility and communication constraints in known obstacle environments is addressed. A state feedback control scheme based on dipolar vector fields is proposed, along with a hybrid feedback motion planner that guarantees obstacle avoidance based on a cell decomposition of the free space. It is assumed that the leader ensures obstacle avoidance while navigating towards the goal configuration, whereas the follower guarantees visibility maintenance and inter-robot collision avoidance.
The proposed scheme requires neither information exchange among the robots nor velocity measurements. However, it applies only to known polygonal obstacle environments. Furthermore, as the number of robots in the formation grows, the turning radius of the leader must increase, resulting in wider paths that may not be feasible in narrow workspaces. In [31], a control scheme is proposed for L-F tracking in obstacle environments while preserving sensing connectivity without explicit communication between the robots. Only the leader knows the path to the target area, and each robot in the group is subject to a limited sensing range. The motion direction is dictated by an artificial potential function, whereas the velocity magnitude is determined according to sensing connectivity, Line-of-Sight (LOS) preservation, and obstacle and inter-robot collision avoidance. A strategy was also proposed that modifies the network connectivity by deactivating sensing links so that the robots can pass through narrow spaces, or by establishing new links to keep the group cohesive in free space. Nevertheless, the aforementioned L-F approach applies only when LOS is preserved, which is compromised by obstacle avoidance maneuvers that may violate the distance and bearing-angle constraints. Moreover, the following robot may get stuck in a corner, even if sensing links are properly deactivated, depending on the shape of the path.
It should be noted that all the aforementioned works assume prior knowledge of the environment, which is rather unrealistic for practical applications, especially in unstructured workspaces. Furthermore, the case where the leader escapes the field of view of its follower was not addressed, as it was excluded by assumption. However, obstacle avoidance maneuvers may break the LOS constraint, thus resulting in the loss of the leader. In [32], a tracking control scheme was proposed that explicitly takes sensor limitations into account, along with safety in unknown obstacle environments. Moreover, the leader-loss situation was tackled by an extra control mode that drives the follower towards the position where its leader was last visually tracked. However, even though experiments demonstrating the effectiveness of the proposed algorithm were presented, there is no guarantee that visual connectivity with the leader will be restored, especially in narrow snaky passages (zigzag-like corridors), since by the time the follower arrives at the position where the leader was last detected, the leader may no longer be visible.

Contribution
In this work, we address the problem of coordinating the motion of a platoon of multiple unicycle robots operating within a workspace occupied by static obstacles (see Figure 1). Each robot is equipped with proximity sensors that allow it to measure its distance to nearby obstacles, and a forward-looking camera with a limited field of view that allows it to detect and compute the relative position of its predecessor. Assuming that the robot leading the platoon traces a safe path inside the workspace, we propose a decentralized control law for the followers based on a modification of the prescribed performance control (PPC) method [33], which ensures safe navigation of the entire team using only local measurements from the aforementioned on-board sensors. Additionally, the proposed control scheme guarantees a priori that visual connectivity between successive robots is not compromised, i.e., each robot-follower maintains its predecessor within its camera field of view and prevents occlusions introduced by the static obstacles. It should be noted that a preliminary version of the present work was accepted for presentation in [34]. Compared to [34], we have re-designed the adaptive laws for the performance functions so as to guarantee a priori visual connectivity and collision avoidance. Moreover, we present extra simulation studies in complex and realistic environments to highlight the intriguing properties of the proposed control scheme. Finally, the main contributions of our work are summarized as follows:

• Contrary to the related literature on multi-robot coordination based on sensors with a limited field of view, we propose a purely decentralized control protocol with guaranteed collision avoidance and visual connectivity maintenance.

• The proposed algorithm is easy to implement since it is of low complexity and does not require any explicit inter-robot information exchange via a communication network.

• Contrary to our previous related works [35,36], the proposed scheme deals with generic workspaces involving static obstacles of irregular shape.

Outline and Notation
The outline of this work is given as follows. In Section 2, we rigorously formulate the multi-robot coordination problem and in Section 3, we present the decentralized control law that allows safe navigation of the robot-team within the workspace while guaranteeing visual connectivity maintenance. In Section 4, we demonstrate the efficacy of the proposed control scheme via extensive simulation results and we conclude in Section 5. Finally, the following table includes a brief description of all symbols employed throughout the manuscript.

Symbol              Description
θ_i                 Heading of the i-th robot
u_i                 Commanded linear speed of the i-th robot
ω_i                 Commanded angular velocity of the i-th robot
L_i                 Line segment (line of sight) between robots i−1 and i
d_i, β_i            Distance and angle of view corresponding to robots i−1 and i
d_con               Maximum allowed distance between two consecutive robots
d_col               Minimum allowed distance between two consecutive robots
d_des               Desired distance between two consecutive robots
β_con               Half-angle of the field-of-view sector
d^W_l,i, d^W_r,i    Distance between robot i and the closest obstacles on the left and right side of L_i, respectively
d_l,i, d_r,i        Distance between L_i and the closest obstacles on its left and right side, respectively
e_d,i, e_β,i        Distance and angle-of-view errors of robot i
ρ̲_d,i, ρ̄_d,i        Performance functions bounding e_d,i from below and above, respectively
ρ̲_β,i, ρ̄_β,i        Performance functions bounding e_β,i from below and above, respectively
λ                   Convergence rate of the performance functions away from obstacles
ρ_d∞, ρ_β∞          Bounds on e_d,i and e_β,i at steady state
F_l,i, F_r,i        Left and right activation terms in the performance function update laws
S(·; δ)             Continuous function vanishing when its argument exceeds δ
ε_d,i, ε_β,i        Transformed distance and angle-of-view errors
k_d, k_β            Positive gains in the control laws u_i and ω_i

Problem Formulation and Preliminaries
Let W ⊂ R² be a planar workspace occupied by n static obstacles O_i, i ∈ J_O, with J_O ≜ {1, 2, . . . , n}, and let W_f ≜ W \ ∪_{i∈J_O} O_i denote the free space. We consider a team of N + 1 disk-shaped robots R_i of radius r_i, for i ∈ J_R with J_R ≜ {0, 1, . . . , N}, which operate within W_f and whose motion obeys the unicycle kinematic model:

ṗ_i = u_i n_i,   θ̇_i = ω_i,   (1)

where p_i = [x_i, y_i]^T ∈ R² and θ_i ∈ R denote the i-th robot's position and orientation w.r.t. an arbitrary inertial frame, respectively, u_i, ω_i ∈ R denote the commanded linear and angular velocities, and n_i = [cos θ_i, sin θ_i]^T ∈ R². We assume that all robots other than R_0 are unable either to localize themselves within W_f or to explicitly exchange information about their state with other robots. Consequently, they have to rely on on-board sensors for obtaining information about their environment and their neighbours. In particular, each robot follower R_i, i ∈ J_F ≜ {1, 2, . . . , N}, is equipped with a forward-looking camera, fixed at its center, which acquires the relative position p̃_i = p_{i−1} − p_i of robot R_{i−1}, expressed in the camera's body-fixed frame, as long as robot R_{i−1} is visible by R_i. Specifically, we say that robot R_{i−1} is visible by R_i if: (i) R_{i−1} lies within the field of view F_i of the camera of robot R_i, defined as a sector of angle 2β_con ∈ (0, π) and radius d_con > 0, and (ii) the line segment L_i connecting R_{i−1} and R_i does not intersect any obstacle O_j, j ∈ J_O (see Figure 2). Moreover, d_col > r_i + r_{i−1} denotes the minimum allowable distance between robots R_i and R_{i−1}.
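For concreteness, the unicycle kinematics above can be integrated with a simple forward-Euler scheme; the step size and trajectory below are illustrative choices, not values from the paper:

```python
import numpy as np

def unicycle_step(p, theta, u, omega, dt):
    """One forward-Euler step of the unicycle model:
    p_dot = u * n_i with n_i = [cos(theta), sin(theta)], theta_dot = omega."""
    n = np.array([np.cos(theta), np.sin(theta)])
    return p + dt * u * n, theta + dt * omega

# Drive straight along the x-axis for 1 s at 1 m/s (100 steps of 10 ms):
p, theta = np.array([0.0, 0.0]), 0.0
for _ in range(100):
    p, theta = unicycle_step(p, theta, u=1.0, omega=0.0, dt=0.01)
# p is now approximately [1.0, 0.0]
```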
Additionally, every robot follower R_i, i ∈ J_F, is equipped with proximity sensors that enable it to perceive the unoccluded outline of nearby obstacles up to distance d_con, thus allowing it to compute the distances d^W_l,i, d^W_r,i between itself and the closest boundary of W_f, as well as the distances d_l,i, d_r,i between the line of sight L_i and the obstacles closest to it, on the left and right side, respectively (see Figure 2).
Notice that the sensing capabilities of the robots described above define a line graph that is directed and rooted at robot R_0. Moreover, let d_i and β_i be, respectively, the distance and angle of view corresponding to robot R_i and its predecessor R_{i−1}, given by:

d_i ≜ ‖p_{i−1} − p_i‖,   (2)
β_i ≜ atan2(ỹ_i, x̃_i),   (3)

where p̃_i = [x̃_i, ỹ_i]^T. We now formally define the problem addressed in this work.

Problem 1. Given a feasible path to be tracked by the leading robot R_0 with bounded linear and angular velocity, design a decentralized control law for the robot followers' velocities u_i, ω_i, i ∈ J_F, such that the entire team navigates safely within the workspace while avoiding inter-robot collisions and collisions with the static obstacles, i.e.,

d_i(t) > d_col  and no robot intersects any obstacle O_j, j ∈ J_O,   (4)

and every preceding robot R_{i−1} remains visible by its following robot R_i, i.e.,

d_i(t) ≤ d_con,  |β_i(t)| ≤ β_con  and  L_i(t) ∩ O_j = ∅, j ∈ J_O,   (5)

for all t > 0 and i ∈ J_F. Additionally, whenever possible (owing to the aforementioned operational constraints), the formation should attain a desired inter-robot distance d_des ∈ (d_col, d_con) with zero angle of view (i.e., each follower keeps its predecessor at the center of its camera field of view at distance d_des).
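A minimal sketch of how the distance and angle of view could be recovered in practice; the world-to-body rotation and the atan2 convention are our own illustrative assumptions, since in the paper the relative position p̃_i is delivered directly by the on-board camera:

```python
import numpy as np

def distance_and_angle_of_view(p_pred, p_i, theta_i):
    """Distance d_i and angle of view beta_i of robot i w.r.t. its
    predecessor, from the relative position expressed in robot i's frame."""
    rel = p_pred - p_i                      # world-frame relative position
    c, s = np.cos(theta_i), np.sin(theta_i)
    x_b = c * rel[0] + s * rel[1]           # rotate into the body-fixed
    y_b = -s * rel[0] + c * rel[1]          # (camera) frame of robot i
    d = np.hypot(x_b, y_b)                  # d_i = ||p_{i-1} - p_i||
    beta = np.arctan2(y_b, x_b)             # bearing w.r.t. the heading
    return d, beta

def is_visible(d, beta, d_con, beta_con):
    """Field-of-view test (i): predecessor inside the sector of half-angle
    beta_con and radius d_con. The occlusion test (ii) is omitted here."""
    return d <= d_con and abs(beta) <= beta_con
```

For example, a predecessor 2 m straight ahead yields d = 2 and beta = 0, which satisfies the visibility test for any d_con > 2.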
Finally, to solve the aforementioned problem, we assume that the path of the leading vehicle is feasible, in the sense that all followers R_i, i ∈ J_F, may track it safely while meeting the visibility constraints, and that the initial robot configuration satisfies:

d_col < d_i(0) < d_con,  |β_i(0)| < β_con  and  L_i(0) ∩ O_j = ∅,  for all i ∈ J_F, j ∈ J_O.

Remark 1.
It should be noted that the aforementioned assumptions are not strict: they establish the feasibility of the problem (i.e., there exists sufficient space for each robot to track the desired path and keep its predecessor visible) and ensure that initially all robots are safe and track their predecessors, so that the proposed control scheme may be applied. Moreover, in case the robot-team is initially folded, which renders the aforementioned problem ill-defined (i.e., collision avoidance and visual connectivity cannot be met simultaneously), an initial reordering of the line graph is needed to alleviate the deadlock. Notice that such a reordering is compatible with our formulation, since all following robots are considered identical with respect to sensing and actuation capabilities.

Remark 2.
In this work, we do not study the motion planning problem of driving the leading robot R_0 towards its goal position. Such a problem has been successfully solved in the past, following either sampling-based [37][38][39] or reactive [40][41][42] approaches, under certain mild assumptions. Our scope herein is to study the leader-follower control problem under safety and visibility constraints. Thus, the aforementioned solutions can be easily adopted in our formulation to dictate the motion of the leading robot, as long as the resulting path meets the feasibility assumption regarding the safety and visibility constraints.

Prescribed Performance Control Preliminaries
This subsection presents a summary of preliminary knowledge regarding the prescribed performance control method that will be adopted in the subsequent control design. The idea of designing controllers that guarantee prescribed transient and steady-state performance specifications was originally introduced in [33]. More specifically, prescribed performance control aims at achieving convergence of a scalar tracking error e(t) to a predetermined, arbitrarily small residual set with speed of convergence no less than a prespecified value, which is modeled rigorously by e(t) evolving strictly within a predefined region that is upper and lower bounded by certain functions of time, as follows:

ρ̲(t) < e(t) < ρ̄(t),  ∀t ≥ 0,   (6)

where ρ̲(t) and ρ̄(t) denote smooth and bounded functions of time that satisfy ρ̲(t) < ρ̄(t), ∀t ≥ 0, and lim_{t→∞} ρ̲(t) < lim_{t→∞} ρ̄(t), called performance functions. Figure 3 illustrates the aforementioned statements for exponentially decaying performance functions, given by:

ρ̲(t) = (ρ̲_0 − ρ̲_∞) e^{−λt} + ρ̲_∞,   ρ̄(t) = (ρ̄_0 − ρ̄_∞) e^{−λt} + ρ̄_∞.

In particular, the constants ρ̲_0, ρ̄_0 are selected such that ρ̲_0 < e(0) < ρ̄_0. Moreover, the parameters ρ̲_∞ ≜ lim_{t→∞} ρ̲(t) and ρ̄_∞ ≜ lim_{t→∞} ρ̄(t) represent the minimum and maximum allowable values of the steady-state error and satisfy ρ̲_∞ < ρ̄_∞. Finally, the positive constant λ determines the convergence rate of ρ̲(t) and ρ̄(t) and is thus used to regulate the transient response of e(t). Therefore, the appropriate selection of the performance functions ρ̲(t), ρ̄(t) imposes certain transient and steady-state performance characteristics on the tracking error e(t). The key point in prescribed performance control is a transformation of the tracking error e(t) that modulates it with respect to the corresponding transient and steady-state performance specifications, encapsulated in the performance functions ρ̲(t) and ρ̄(t). More specifically, we adopt the mapping:

ε = ln( (e − ρ̲) / (ρ̄ − e) ).
Owing to the properties of the aforementioned transformation, it can be easily verified that preserving the boundedness of ε(t) is sufficient to achieve prescribed performance, as described in (6).
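The machinery above can be sketched numerically. The logarithmic mapping is the one stated in the text; the numerical parameter values below are purely illustrative:

```python
import numpy as np

def perf(t, rho_0, rho_inf, lam):
    """Exponentially decaying performance function: (rho_0 - rho_inf)e^{-lam t} + rho_inf."""
    return (rho_0 - rho_inf) * np.exp(-lam * t) + rho_inf

def transform(e, rho_lo, rho_hi):
    """Error transformation eps = ln((e - rho_lower)/(rho_upper - e)); it is
    finite iff rho_lower < e < rho_upper, and it grows without bound as e
    approaches either performance bound."""
    return np.log((e - rho_lo) / (rho_hi - e))

# Envelope rho_lower(t) = perf(t, -1, -0.1, 1), rho_upper(t) = perf(t, 1, 0.1, 1):
eps_mid = transform(0.0, perf(0.0, -1.0, -0.1, 1.0), perf(0.0, 1.0, 0.1, 1.0))  # 0 at the midpoint
eps_edge = transform(0.99, -1.0, 1.0)  # large as e approaches the upper bound
```

Keeping eps bounded therefore keeps e strictly inside the envelope, which is exactly the property exploited by the control design.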

Control Design
In this work, we employ the prescribed performance control (PPC) design methodology [33] in order to meet the multiple safety specifications for collision avoidance and visibility maintenance, which are critical for the operation of the multi-robot team. Hence, let us first define the distance and angle of view errors:

e_d,i ≜ d_i − d_des,   e_β,i ≜ β_i,   (7)

for each robot R_i, i ∈ J_F. Differentiating e_d,i(t) and e_β,i(t) with respect to time and substituting (1)-(3), we obtain the distance and angle of view error dynamics (8) and (9). Notice from Figure 2 that the distance between robots R_i and R_{i−1} is not affected by their angular velocities; thus, the two terms in (8) correspond to the linear velocities of robots R_i and R_{i−1} projected on their line of sight (i.e., the radial direction), which dictate the rate of change of their distance. On the other hand, the rate of change of the angle of view (9) is affected by the angular velocity of robot R_i only, as well as by the cross-radial (i.e., normal to the line of sight) velocities of the robots. Based on the PPC approach, we shall design the velocity commands u_i, ω_i such that:

ρ̲_d,i(t) < e_d,i(t) < ρ̄_d,i(t),   ρ̲_β,i(t) < e_β,i(t) < ρ̄_β,i(t),   (10)

for appropriately selected performance functions that satisfy, for all time, the following properties:

d_col − d_des < ρ̲_d,i(t),   ρ̄_d,i(t) < d_con − d_des,   −β_con < ρ̲_β,i(t),   ρ̄_β,i(t) < β_con.   (11)

Such design specifications for the distance and angle of view performance functions guarantee that each follower maintains the preceding robot within its camera field of view F_i and avoids collisions with it. More specifically, notice that guaranteeing (10) for all time, under the properties described in (11), leads to:

d_col − d_des < e_d,i(t) < d_con − d_des,   −β_con < e_β,i(t) < β_con,

and consequently, owing to (7), to:

d_col < d_i(t) < d_con,   |β_i(t)| < β_con,   for all t ≥ 0.

The aforementioned formulation was adopted successfully with exponential performance functions in [35] for a platoon of multiple unicycle robots operating within an obstacle-free workspace. The parameter λ dictates the exponential rate of convergence of the distance and angle of view errors e_d,i, e_β,i to small neighbourhoods of the origin of size ρ_d∞ and ρ_β∞, respectively.
Notice that the properties in (11) are thus satisfied, and hence the preceding robot is kept within the camera field of view of its follower while collisions between them are simultaneously avoided. However, it should be noted that the presence of static obstacles within the workspace complicates the problem at hand significantly, since: (i) obstacles may break inter-robot visibility by raising occlusions among the robots if they stand between them, even though the preceding vehicle may lie within the camera field of view of its follower, and (ii) performing an obstacle avoidance maneuver may violate visual connectivity, thus compromising the safe operation of the multi-robot team (see Figure 4). Therefore, in this work we propose to modify the control design presented in [35] by adapting the distance and angle of view performance functions ρ̲_d,i(t), ρ̄_d,i(t), ρ̲_β,i(t) and ρ̄_β,i(t) appropriately, so that all operational specifications are met simultaneously.
In particular, when a single obstacle, either from the left or the right depending on the motion of each preceding-following robot pair, tends to intervene between them and raise either a visibility or a collision risk (see the red robot in Figure 1), we propose to deflect the angle of view β_i and the distance d_i by modifying the corresponding performance functions ρ̲_β,i(t), ρ̄_β,i(t) and ρ̲_d,i(t), ρ̄_d,i(t) (positively or negatively, respectively, while still satisfying the safety constraints), so that the line of sight L_i moves away from the corresponding obstacle, thus ensuring that neither a collision nor a visibility break occurs. However, during such a maneuver to avoid an obstacle from the left or the right, another obstacle on the opposite side may intervene (see the blue robot in Figure 1), thus introducing a conflict: the deviation of the angle of view is no longer sufficient to bypass the obstacles, owing to their contradicting effects on the control algorithm (i.e., the obstacle on the left of the line of sight drives the angle of view to positive values, whereas the obstacle on the right drives it to negative values). Fortunately, in such a critical case the solution to the follower's control problem is to approach its preceding robot by reducing the distance performance functions ρ̲_d,i(t), ρ̄_d,i(t), while keeping the inter-robot distance greater than d_col to avoid collision. Similarly, adjusting the inter-robot distance online resolves a potential conflict when the leader performs a circular motion around its follower and an obstacle interferes between them, thus affecting their line of sight. Notice that the aforementioned strategy is viable since we have assumed that the path of the leading robot R_0 is feasible for the whole robot-team under the considered operational specifications.
Based on the aforementioned discussion, we design the update laws (12) for the distance and angle of view performance functions, which employ the left and right activation terms F_l,i, F_r,i, built upon a continuous function S(·; δ) that vanishes when its argument exceeds a positive constant δ. Notice that when the distances of robot R_i and of the line of sight L_i from the surrounding obstacles are large (greater than δ), both terms F_r,i and F_l,i vanish, and consequently the aforementioned update laws yield an exponential response similar to [35]. On the other hand, when a single obstacle intervenes from the left or the right between a follower and its predecessor, the term F_l,i or F_r,i increases, causing the distance performance functions to decrease and the angle of view performance functions to decrease or increase, respectively, and consequently the robot and the line of sight L_i to move away from the obstacle. Moreover, when obstacles are close to robot R_i or the line of sight L_i from both sides, then both F_r,i and F_l,i increase, thus decreasing the distance performance functions, so that the following robot approaches its predecessor, travelling in between the obstacles. Finally, in order to ensure that the properties presented in (11) regarding the distance and angle of view performance functions ρ̲_d,i(t), ρ̄_d,i(t), ρ̲_β,i(t) and ρ̄_β,i(t) are met for all time, we also apply a standard Lipschitz continuous projection operator [43] on the aforementioned update laws over the sets [d_col − d_des, d_con − d_des − 2ρ_d∞], [d_col − d_des + 2ρ_d∞, d_con − d_des], [−β_con, β_con − 2ρ_β∞] and [−β_con + 2ρ_β∞, β_con], respectively. In particular, the adopted projection operator over a compact convex set Ω_ρ ≜ [ρ_min, ρ_max] smoothly suppresses any component of the update that would drive the performance function outside Ω_ρ, and is parameterized by a positive constant ϵ ∈ (0, 1).
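The two building blocks of these update laws can be sketched as follows; the concrete form of S(·; δ) and of the projection's boundary-layer scaling are our own assumptions, chosen only to match the stated properties (S vanishing beyond δ, Lipschitz-continuous suppression of updates near the bounds):

```python
def S(x, delta):
    """Continuous activation: positive when the clearance x drops below delta,
    exactly zero when x > delta (as required of S( . ; delta))."""
    return max(0.0, (delta - x) / delta)

def proj(rho, rho_dot, rho_min, rho_max, eps=0.75):
    """Lipschitz-continuous projection of the update rho_dot onto the compact
    interval [rho_min, rho_max]: outward-pointing updates are scaled down
    linearly inside a boundary layer of relative width eps and vanish at the
    bound, so rho can never be driven outside the interval."""
    width = rho_max - rho_min
    if rho_dot > 0.0:
        factor = (rho_max - rho) / (eps * width)   # -> 0 as rho -> rho_max
    else:
        factor = (rho - rho_min) / (eps * width)   # -> 0 as rho -> rho_min
    return rho_dot * min(1.0, max(0.0, factor))
```

For instance, a positive update applied at the upper bound is suppressed entirely, while updates well inside the interval pass through unchanged.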
Subsequently, we present the velocity control protocol for each robot R_i, i ∈ J_F, that establishes prescribed performance with respect to the aforementioned performance functions (12) by guaranteeing the inequalities (10) for the distance and angle of view errors for all time. More specifically, we first define the transformed errors:

ε_d,i ≜ ln( (e_d,i − ρ̲_d,i) / (ρ̄_d,i − e_d,i) ),   ε_β,i ≜ ln( (e_β,i − ρ̲_β,i) / (ρ̄_β,i − e_β,i) ).

Notice that, owing to the appropriately selected initial values of the performance functions (12) and the assumption that the robot configuration initially meets all operational specifications, the transformed errors are finite at t = 0. Thus, if we manage to keep the transformed error signals ε_d,i(t) and ε_β,i(t) bounded for all time via the appropriate selection of the velocity commands, then it is easy to check that we also guarantee (10) for all time, no matter how large the upper bounds of |ε_d,i(t)| and |ε_β,i(t)| are. Consequently, the problem at hand, as described by (10), has been recast as a simple stabilization problem for the transformed error signals ε_d,i(t) and ε_β,i(t), which is resolved by the velocity control protocol (13) and (14), with positive control gains k_d and k_β.

Theorem 1. Consider a team of unicycle robots that operates within a planar and obstacle-cluttered environment, under the safety and visibility constraints described in Section 2. Moreover, assume that the leading robot R_0 follows a feasible path within the workspace and that initially, at t = 0, all safety and visibility constraints are satisfied. Then, the proposed decentralized control protocol (13) and (14), along with the update laws (12) that modify the performance functions, navigates the robot team safely within the workspace, avoiding any collisions and visibility breaks.
Proof. Based on the formulated problem, the underlying graph of the multi-robot team is a directed line graph rooted at the leading vehicle R_0. Therefore, the analysis may be broken down into pairs of preceding and following robots, starting from the leading one until the last. Thus, let us define the positive definite function of the transformed errors:

V_i = (1/2) (ε_d,i² + ε_β,i²).

Differentiating with respect to time, invoking the error dynamics (8) and (9), and substituting the proposed control protocol (13) and (14), it can be shown that V̇_i becomes negative whenever the transformed errors grow sufficiently large. Notice also that, by design (owing to the projection applied on (12)), the distance and angle of view performance functions guarantee that if the corresponding errors evolve within them as dictated by (10), then d_i > d_col > 0 and |β_i| < β_con < π/2. Moreover, since all safety and visibility constraints are initially satisfied, ε_d,i(0) and ε_β,i(0) are well defined, from which we can deduce that the transformed errors ε_d,i(t) and ε_β,i(t) are uniformly ultimately bounded. As a result, the prescribed performance encapsulated by the inequalities (10) is satisfied for all time, and thus neither collisions nor visibility breaks occur. Furthermore, all closed-loop signals, including the velocity control commands (13) and (14), remain bounded, which completes the proof.
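Since the extracted text does not reproduce (13) and (14) verbatim, the following is a deliberately simplified stand-in that illustrates only their structure: each command feeds back the corresponding transformed error with a positive gain, so the commands grow as the errors approach their performance bounds; any additional state-dependent factors of the actual laws are omitted:

```python
import numpy as np

def transformed_error(e, rho_lo, rho_hi):
    """eps = ln((e - rho_lower)/(rho_upper - e)), finite iff rho_lower < e < rho_upper."""
    return np.log((e - rho_lo) / (rho_hi - e))

def velocity_protocol(d, beta, d_des, rho_d_lo, rho_d_hi, rho_b_lo, rho_b_hi,
                      k_d=4.0, k_beta=4.0):
    """Simplified transformed-error feedback (illustrative stand-in for (13)-(14))."""
    e_d, e_b = d - d_des, beta                       # errors (7)
    eps_d = transformed_error(e_d, rho_d_lo, rho_d_hi)
    eps_b = transformed_error(e_b, rho_b_lo, rho_b_hi)
    u = k_d * eps_d        # speed up when too far, reverse when too close
    omega = k_beta * eps_b  # turn so as to center the predecessor in the FOV
    return u, omega
```

At the desired configuration (d = d_des, beta = 0, symmetric bounds) both commands vanish, and they grow without bound as an error approaches its performance envelope, mimicking the barrier-like behavior noted in the remarks.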

Remark 3.
It should be noted that the proposed control protocol (13) and (14), along with the update laws (12), employs information that is exclusively acquired by the forward-looking camera and the proximity sensors mounted on each robot. Thus, its implementation is purely decentralized and, contrary to other works in the related literature, does not necessitate any explicit network communication among the robots, e.g., communicating the velocity of the preceding robot. Moreover, notice that the operational specifications are satisfied via the appropriate modification of the performance functions (12), hence simplifying the selection of the control gains k_d and k_β. Nevertheless, it should be stressed that their values affect both the response of the distance and angle of view errors within the corresponding performance bounds and the control signal. Therefore, additional fine-tuning might be needed in a real robot implementation to meet actuation constraints.

Remark 4.
Owing to the properties of the adopted error transformation, it is not difficult to verify that simply maintaining the boundedness of the modulated errors ε_d,i and ε_β,i for all t ≥ 0 is equivalent to guaranteeing (10) for all t ≥ 0. Therefore, the problem at hand can be visualized as stabilizing the modulated errors ε_d,i and ε_β,i. A careful inspection of the proposed control scheme (13) and (14) reveals that it actually operates similarly to barrier functions in constrained optimization, admitting large negative or positive values as e_d,i(t) and e_β,i(t) approach their lower or upper performance bounds, respectively, eventually preventing them from reaching the corresponding boundaries.

Simulation Study A
To validate the aforementioned control protocol, we first conducted a simulation study in MATLAB for a team of 7 following robots and a leading one, operating within a workspace that involves narrow passages through which the leading robot safely navigates (see Figure 5). The radius of the robots is r_i = 0.2 m, i ∈ J_F, the desired inter-robot distance is d_des = 2 m, and the operational constraints are set as d_col = 0.5 m, d_con = 4 m and β_con = 45π/180 rad. Moreover, we selected the performance function parameters λ = 1, ρ_d∞ = 0.1 m, ρ_β∞ = 0.1 rad. Finally, the parameters of the control protocol were chosen as k_d = k_β = 4, δ = 0.5 and ϵ = 0.75. The results are depicted in Figures 5-7. More specifically, Figure 5 depicts 8 successive snapshots of the robot team within the workspace, taken every 10 s, along with the camera field of view of each following robot. Notice that initially, at t = 0 s, all preceding robots lie within the camera field of view of their followers, and they are kept within it for all subsequent time, despite the sharp corners of the considered narrow workspace. Additionally, the evolution of the distance and angle of view errors is given in Figure 6, along with the corresponding performance functions and the operational specifications, whereas the required linear and angular velocity control commands are depicted in Figure 7. Apparently, the proposed decentralized control protocol retained the distance and angle of view errors within the performance envelope without compromising the safety of the multi-robot team (i.e., neither collisions nor visibility breaks occurred). However, notice that the linear velocity commands exhibit an oscillatory behavior while trying to meet the tight steady-state performance specifications for the distance errors, whereas the angular velocity commands change quickly to steer the vehicles and avoid imminent occlusions and collisions.
Nevertheless, it has to be stressed that the resulting response can be improved (as mentioned in Remark 3) by fine-tuning the control gains k_d and k_β separately for each robot. Finally, the operation of the multi-robot team is demonstrated by the video at the following link: https://youtu.be/yRBteQSzeVQ (accessed on 21 May 2021).
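For reference, the parameter set of Simulation Study A reported above can be collected in one place (units: meters, radians, seconds); the dictionary keys are our own naming, not the paper's:

```python
import math

# Parameters of Simulation Study A, as reported in the text.
params_study_A = {
    "n_followers": 7,
    "robot_radius": 0.2,             # r_i
    "d_des": 2.0,                    # desired inter-robot distance
    "d_col": 0.5,                    # minimum allowed distance
    "d_con": 4.0,                    # maximum allowed distance (camera range)
    "beta_con": 45 * math.pi / 180,  # half-angle of the field of view
    "lambda": 1.0,                   # performance-function convergence rate
    "rho_d_inf": 0.1,                # steady-state distance-error bound
    "rho_beta_inf": 0.1,             # steady-state angle-error bound
    "k_d": 4.0, "k_beta": 4.0,       # control gains
    "delta": 0.5,                    # activation threshold in S( . ; delta)
    "epsilon": 0.75,                 # projection-operator parameter
}
# Basic consistency check required by the problem formulation:
assert params_study_A["d_col"] < params_study_A["d_des"] < params_study_A["d_con"]
```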

Simulation Study B
A comparative simulation study was conducted against the control scheme presented in [32] for a team of three robots, consisting of two followers and a leader, operating within the workspace depicted in Figure 8. The simulation of the system under the control law of [32] was conducted in MATLAB, whereas the simulation of the system under the control scheme proposed in this work was conducted in Gazebo using ROS and Python. For the latter, appropriate robot models were prepared by equipping each unicycle robot with a forward-looking camera with a 45° angle of view and a 2.5 m range, a forward-looking proximity sensor with a sensing range of [−130°, 130°], and a box with 4 distinct fiducial markers placed on its sides, which were used for estimating the position of each agent's preceding robot via ArUco [44]. The radius of the robots is r_i = 0.3 m, i ∈ J_F, the desired inter-robot distance is d_des = 1.0 m, and the operational constraints are set as d_col = 0.6 m, d_con = 2.5 m and β_con = 0.785 rad. Additionally, we selected the performance function parameters λ = 0.5, ρ_d∞ = 0.1 m, ρ_β∞ = 0.1 rad. Finally, the parameters of the control protocol were chosen as k_d = 0.2, k_β = 0.3, δ = 0.35 and ϵ = 0.75. The results are given in Figures 8-11. In particular, Figure 8 depicts the trajectories of the robots obtained using our method and the control scheme proposed in [32]. In both simulations, the team was initialized at the same configuration and the same trajectory for the leading robot was used, generated via the ROS Navigation Stack [45] to steer the leader towards a predefined goal configuration. The inter-robot distance d_i of each leader-follower pair, the corresponding angle of view β_i, and the distance d_W,i = min{d^W_l,i + 0.1, d^W_r,i + 0.1} between each robot and the static obstacles are illustrated in Figure 9 (for safety purposes, we have augmented the obstacles' boundary by 0.1 m).
Evidently, the trajectories of the system under the proposed decentralized control framework satisfy all operational specifications at all times, whereas the system under the control scheme of [32] periodically violates the angle-of-view constraint, which, in practice, results in loss of visual connectivity between the agents (in such cases, the algorithm of [32] performs a maneuver towards the position where the preceding robot was last detected, assuming that visibility will then be regained). Notice also that the range of commanded velocities depicted in Figures 10 and 11 under the proposed framework is comparable to the one generated by the control scheme of [32], despite the fact that our method does not impose any explicit bounds on the control inputs. Finally, the operation of the multi-robot team under the proposed control scheme is demonstrated in the video at the following link: https://youtu.be/qvYhI_NXvKw (accessed on 21 May 2021).
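The operational specifications checked above reduce to simple set-membership conditions on each leader-follower pair; a minimal sketch (using the Study B values for d_col, d_con and β_con as defaults) makes this explicit:

```python
def constraints_satisfied(d_i, beta_i, d_col=0.6, d_con=2.5, beta_con=0.785):
    """Operational specifications for one leader-follower pair:
    collision avoidance (d_i > d_col), connectivity range (d_i < d_con),
    and camera angle-of-view (|beta_i| < beta_con)."""
    return d_col < d_i < d_con and abs(beta_i) < beta_con

print(constraints_satisfied(1.0, 0.2))   # nominal configuration -> True
print(constraints_satisfied(1.0, 0.9))   # angle of view exceeded -> False
```

The second case corresponds to the angle-of-view violations observed under the scheme of [32], where the preceding robot leaves the camera's field of view.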

Simulation Study C
In this case study, which was also conducted in MATLAB, we consider a team of two following robots and a leading one operating inside a complex workspace consisting of a corridor with sharp corners and arbitrarily shaped, non-convex inner obstacles. The radius of the robots is r_i = 0.15 m, i ∈ J_R, the desired inter-robot distance is d_des = 1 m and the operational constraints are set as d_col = 0.3 m, d_con = 1.5 m and β_con = 0.4 rad. Moreover, we selected the performance function parameters λ = 0.5, ρ_d,∞ = 0.05 m, ρ_β,∞ = 0.1 rad. Finally, the parameters of the control protocol were chosen as k_d = 1, k_β = 2, δ = 0.5 and = 0.75. The trajectories of all three robots can be seen in Figure 12. Additionally, the distance and angle-of-view errors of each following robot, along with the associated performance functions, are shown in Figures 13 and 14, respectively. As can be seen, the proposed control law keeps the errors bounded by the performance functions at all times, thus ensuring safe navigation of the entire team through the workspace as well as visual connectivity maintenance. Finally, it should be noted that the method proposed in [32] failed to solve the corresponding problem (even for a single follower, as shown in Figure 15): in the snaky passage, the leader was lost at the first sharp corner and, by the time the follower arrived at the position where the leader had last been detected, the leader was no longer visible.
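The containment property shown in Figures 13 and 14 can be checked offline on logged error trajectories. The sketch below assumes the exponential envelope form discussed above; both the error log and the initial bound ρ_0 are hypothetical values chosen for illustration, not data from the paper:

```python
import math

def envelope(t, rho0, rho_inf, lam=0.5):
    """Exponential performance envelope (assumed form)."""
    return (rho0 - rho_inf) * math.exp(-lam * t) + rho_inf

def within_envelope(samples, rho0, rho_inf, lam=0.5):
    """Return True iff every logged sample |e(t_k)| stays strictly inside
    rho(t_k), i.e., the error never escapes its performance funnel."""
    return all(abs(e) < envelope(t, rho0, rho_inf, lam) for t, e in samples)

# Hypothetical distance-error log (t [s], e_d [m]) for one follower,
# checked against the Study C steady-state bound rho_d_inf = 0.05 m:
log = [(0.0, 0.4), (2.0, 0.15), (5.0, 0.03), (10.0, 0.02)]
print(within_envelope(log, rho0=0.6, rho_inf=0.05))
```

A log in which any sample touches or crosses the envelope would make the check return False, which is precisely the failure mode exhibited by [32] in this workspace.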

Conclusions
In this work, we tackled the problem of coordinating the motion of a platoon of unicycle robots navigating within an obstacle-cluttered workspace. Given that each robot is equipped with proximity sensors for detecting nearby obstacles and a forward-looking camera for tracking the preceding robot, we developed a safe decentralized control strategy that avoids collisions while maintaining visual connectivity between every pair of successive robots at all times. Finally, simulation results were presented that demonstrate the efficacy of the proposed control scheme.
Future research efforts will be devoted to incorporating hard constraints on the robots' velocity commands, as well as to handling moving obstacles and more complex graph configurations, in order to increase the applicability of our approach. Moreover, we plan to verify the theoretical findings experimentally, employing real unicycle robotic vehicles. Towards this direction, we also intend to address the problems of intermittent (or lost) visual tracking, owing to environmental interference (e.g., lighting conditions or shading) and motion blur, as well as total or partial actuation failures.