Control of a Robotic Swarm Formation to Track a Dynamic Target with Communication Constraints: Analysis and Simulation

Abstract: We describe and analyze the Local Charged Particle Swarm Optimization (LCPSO) algorithm, which we designed to solve the problem of tracking a moving target that releases scalar information in a constrained environment, using a swarm of agents. This method is inspired by flocking algorithms and the Particle Swarm Optimization (PSO) algorithm for function optimization. Four parameters drive LCPSO: the number of agents, the inertia weight, the attraction/repulsion weight, and the inter-agent distance. Using the Artificial Potential Field (APF) approach, we provide a mathematical analysis of the LCPSO algorithm under some simplifying assumptions. First, the swarm aggregates and attains a stable formation, whatever the initial conditions. Second, the swarm moves thanks to an attractor within the swarm, which serves as a guide for the other agents to head for the target. Focusing on a simple application of target tracking with communication constraints, we then remove these assumptions one by one. We show that the algorithm is resilient to constraints on the communication range and to the behavior of the target. Simulation results confirm our theoretical analysis, providing useful guidelines to understand and control the LCPSO algorithm as a function of the swarm's characteristics as well as the nature of the target.


Introduction
Controlling the collective behavior of a large number of robots is a complex task. However, large natural multi-agent systems are known to work very well, such as bird flocks [1-3], fish schools [2,4,5], ants using pheromones [2,6], or aggregations of bacteria [2,7,8]. These self-organized systems have served as a source of inspiration for controlling large formations of robots while preventing collisions between them [9]. The emergence principle, where complex collective behaviors arise from simple, elementary rules governing individuals, is the main topic of interest for artificial swarming systems. However, the emergence of swarm behavior requires some constraints on the number of agents, the environment, and so forth. It is also difficult to design the elementary rules from which a specific desired collective behavior will emerge.
One behavior in particular, flocking or schooling, enjoys growing interest in the scientific community. Ref. [10] established a first landmark, giving the three rules of flocking (alignment, cohesion, separation). Then Tanner, Jadbabaie and Pappas, in a two-part article [11,12], made a fundamental mathematical analysis of the Reynolds rules using the Artificial Potential Field (APF) approach. They proved that, using potentials that are both attractive and repulsive, the flock becomes homogeneous, with equal inter-agent distances and equal speed vectors at equilibrium. This behavior is resilient to external stimuli as long as the agents remain within communication range. Finally, Section 6 concludes this paper and gives some perspectives for future work.

The General MOSL Problem
We extend the OSL problem, where the odor source is static, to the case of a moving source, resulting in the MOSL problem. The objective is to localize a mobile target, characterized by a D-dimensional position p_s(t) varying with time t. This target is assumed to release scalar information into the environment, such as an odor, heat, radioactivity, a sound or a radio wave; without loss of generality we assume the intensity of this information to be positive. We thus write u : R^D × R → R^+; (p, t) → u(p, t). This information u is assumed to be diffused and transported by the environment. This can be modeled by a partial differential equation in the variable u, whose exact formulation depends on the problem at hand. We assume furthermore that at any given time t, the function u has a unique point p_max(t) where the information intensity u is maximum. We also assume that the diffusion and transport mechanism is such that p_max(t) is very close to the position of the target p_s(t), so that for all practical purposes p_s(t) ≈ p_max(t), that is, both terms can be used interchangeably. Our problem is then to find p_max(t), which is equivalent to finding the position of the target at the same date. Another topic close to MOSL is geofencing [42], where Unmanned Aerial Vehicles (UAVs) can be used in a constrained area, typically cities, for tracking and tracing missions, such as tracking stolen cars.
The measurement, made by the system's sensors, will be denoted by the function f : R^D × R → R^+; (p, t) → u(p, t) + β(p, t), where β(p, t) is a noise term. We assume that the Signal-to-Noise Ratio (SNR) is sufficient for the maximum of f to still coincide with the maximum of u for all practical purposes. In our simulations, the function u takes values in [0, 1] and the noise is additive white Gaussian, uncorrelated in space and time, with a standard deviation σ between 0 and 0.25 except where indicated. We leave the issues raised by a poor SNR for future research.

The Toy Problem Used in This Paper
Instead of describing the problem by a partial differential equation, we may instead assume that the solution of this equation is known explicitly. This is the approach we use in the simulations presented in this paper, because it is quicker to simulate. We use for instance the following expression: In these equations, ∆t is a discrete time step. Equation (1) contains three elements representing real-world phenomena:
1. a spatial term which decreases with the distance between the target position p_s and any position p in the workspace; this is the inverse square law induced by conservation of power through propagation, modified with a constant additive term in the denominator to prevent the value from becoming unreasonably large when ‖p − p_s(t)‖ → 0;
2. a temporal term, representing a decay, inspired by the response of a first-order filter parameterized by the time constant τ;
3.
In this paper, the dimension D can be 1, 2 or more, and it will be indicated explicitly when necessary. We do not present simulation results in higher dimensions, but we expect them to be close to the results displayed in dimension 2, because there are no cross-dimensional terms in the PSO equation described below in Expression (2).
We must stress that the exact mechanism behind the generation of function u does not need to be known for the problem to be solved. The only hypotheses that matter, are that (i) there is only one spatial maximum for u at each date and (ii) that the maximum is reached close to the actual position of the target at a given date. This way, all that matters is finding the maximum of u at each date by sampling f at the positions of the agents in the swarm.
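For illustration, a minimal sketch of such a toy field can be written as follows. Since the exact expression of Equation (1) is not reproduced here, the function names (`u_field`, `f_measure`), the softening constant `eps` and the exponential decay form are assumptions consistent with the three elements listed above (inverse square spatial term, first-order temporal decay, additive Gaussian noise on the measurement).

```python
import numpy as np

def u_field(p, t, p_s, tau=50.0, eps=1.0, t0=0.0):
    """Toy information field: inverse-square spatial term, softened by a
    constant eps in the denominator so the value stays bounded when
    p -> p_s(t), times a first-order temporal decay with time constant tau.
    A sketch consistent with the description of Equation (1)."""
    spatial = 1.0 / (np.sum((p - p_s(t)) ** 2) + eps)
    temporal = np.exp(-(t - t0) / tau)
    return spatial * temporal

def f_measure(p, t, p_s, sigma=0.1, rng=None):
    """Noisy measurement f(p, t) = u(p, t) + beta(p, t), with additive
    white Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng() if rng is None else rng
    return u_field(p, t, p_s) + sigma * rng.normal()
```

With `eps = 1`, the field takes values in [0, 1] and its maximum sits exactly at the source, matching the single-maximum hypothesis above.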
This model is less complex than those in the state of the art [31,37,43], where the plume model is dynamic and has multiple local maxima. Environmental sensors, such as an anemometer or an ocean current sensor, help model this plume. However, in OSL the source is static, and the important point we want to highlight is the unpredictability of the source's behavior. Studying our algorithm with both plume and source dynamics would complicate its analysis. For this reason, the analysis takes into account only the dynamics of the source; an analysis with a dynamic plume model will be part of future work. We note that the measurement noise can create multiple local maxima, but this phenomenon can disappear instantaneously at the next time step for agent i.

The PSO Algorithm
PSO is an evolutionary algorithm inspired by flocking birds [27]. We consider N trackers sampling f at their positions p_i(t). This value is given by f(p_i(t), t) from (1). The trackers move to maximise f(p_i(t), t). To do this, the PSO algorithm updates the speed vector v_i(t) as follows: As described in (2), the speed vector at time t + ∆t is a sum of three elements:
1. The previous speed vector of the tracker, v_i(t), weighted by a constant coefficient c_0. For the convergence of the algorithm, we need c_0 ∈ ]−1; 1[ [44]. c_0 is homogeneous to a (pseudo) mass and is sometimes called "inertia" in the community.
2. The difference between the current position of tracker i and its best historical position, noted p_i^b(t) ("b" for "best"). The best historical position p_i^b(t) is the position p_i(t_i), with t_i between time 0 and t, where the measurement f(p_i, t_i) was the greatest. This component is weighted by a constant coefficient c_1.
3. The difference between the position p_g(t) ("g" for "global") of the current swarm's best tracker and the current position of tracker i. The best tracker of the swarm, at position p_g(t), is the tracker j measuring the greatest f(p_j, t) among the N trackers of the swarm. This component is weighted by a constant coefficient c_2.
The second and third components are attractors, weighted by random numbers, respectively α_i1(t) and α_i2(t), uniformly distributed in [0, 1] and specific to agent i. These random numbers provide diversity to the system and improve exploration, preventing the swarm from being trapped in a local optimum [27,45].
Using the Euler integration scheme, the updated position of tracker i is computed as the sum of its previous position and the updated speed vector as follows [27]:
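The update described above can be sketched in a few lines. The parameter values below (`c0=0.7`, `c1=c2=1.5`) are common illustrative defaults, not the settings used in the paper.

```python
import numpy as np

def pso_step(p, v, p_best, p_g, c0=0.7, c1=1.5, c2=1.5, dt=1.0, rng=None):
    """One PSO speed update (2) followed by the Euler position update (3):
    v <- c0*v + a1*c1*(p_best - p) + a2*c2*(p_g - p), then p <- p + dt*v,
    with a1, a2 uniform in [0, 1], drawn independently for each agent."""
    rng = np.random.default_rng() if rng is None else rng
    a1 = rng.uniform(size=(len(p), 1))
    a2 = rng.uniform(size=(len(p), 1))
    v = c0 * v + a1 * c1 * (p_best - p) + a2 * c2 * (p_g - p)
    return p + dt * v, v
```

Iterating this step while maintaining each agent's historical best drives the swarm toward the maximum of a static unimodal field.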

APF Theory and Flocking Principles
In APF methods, the analysis is based on potential functions P(d_i,j(t)), where d_i,j(t) = ‖p_i(t) − p_j(t)‖ is the Euclidean distance between agents i and j. Agent i moves using a gradient descent algorithm based on P(d_i,j(t)). In flocking algorithms, we are interested in a particular potential function P(d_i,j(t)), described in Definition 1.
Definition 1. The potential function P(d_i,j(t)) = P_a(d_i,j(t)) − P_r(d_i,j(t)) is a flocking potential field if:
• P(d_i,j(t)) is a non-negative function of the distance d_i,j(t) between agents i and j,
• P_r(d_i,j(t)) is monotonically increasing on ]0, +∞[ and its gradient is highest when d_i,j(t) → 0,
• P_a(d_i,j(t)) is monotonically increasing on ]0, +∞[ and its gradient is highest when d_i,j(t) → +∞,
• P(d_i,j(t)) is convex and even,
• P(d_i,j(t)) attains its unique minimum when i and j are located at a desired distance r_eq.
Here P_a(d_i,j(t)) and P_r(d_i,j(t)) are, respectively, attractive and repulsive potential fields. An example of such potential fields is shown in Figure 1. This is a reactive approach to guidance, where the trajectory is not planned. Since P(d_i,j(t)) is convex, we can directly use the sign of its gradient to describe the behavior of agents i and j. If the gradient is positive, agents i and j repel each other; if it is negative, they attract each other. An agent stops when it attains the equilibrium distance r_eq to agent j, where the potential function is null and stable in the sense of Lyapunov [12,35,38]. Collisions between neighbors in the swarm are thus avoided, and the inter-agent distance is controlled through the equilibrium distance parameter r_eq. Hence the swarm attains a stable formation when each agent individually minimizes its potential function, whatever the initial positions of the agents. When the swarm is stable, there is velocity matching between the agents [12,46]. Ref. [47] uses the consensus metric V(t) as follows: Hence, if lim_{t→+∞} V(t) = 0, we are sure to attain a consensus on the swarm speed. We define swarm stability as follows.

Definition 2.
The swarm is stable if there is a consensus as follows:
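As a concrete sketch, one can check numerically that a quadratic attraction minus a logarithmic repulsion satisfies the key property of Definition 1 (a unique minimum at a desired distance), and evaluate a consensus metric in the spirit of Definition 2. The exact potentials and the exact normalization of V(t) used in the paper and in [47] may differ from those below.

```python
import numpy as np

def P(d, c2=1.0, c3=4.0):
    """Flocking potential P = P_a - P_r with quadratic attraction
    P_a = (c2/2)*d**2 and logarithmic repulsion P_r = c3*ln(d), which is
    unbounded (-> -inf) as d -> 0. For this particular choice, the unique
    minimum of P sits at d = sqrt(c3/c2)."""
    return 0.5 * c2 * d ** 2 - c3 * np.log(d)

def consensus_metric(v):
    """V(t): mean squared deviation of agent velocities from their average.
    V -> 0 means velocity consensus, i.e., stability in the sense of
    Definition 2 (sketch; normalization is an assumption)."""
    return float(np.mean(np.sum((v - v.mean(axis=0)) ** 2, axis=1)))
```

A direct grid search confirms the minimum location, and a swarm with identical velocity vectors yields V = 0.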

PSO Formulated Using the APF Theory
Having introduced APF theory and flocking principles, we can now rewrite the PSO Equation (2) as a gradient descent strategy: Here, P_i(t) is the potential field of PSO applied to agent i, providing equality between Equations (2) and (5). Since potential fields only use attraction/repulsion phenomena based on inter-agent distance, the weight c_0 does not appear in the potential field P_i(t). So we find: Here, P_a(d_i,j(t)) is a generic attractive potential field known in the APF literature as a quadratic attractor [48], where the amplitude A is a random number uniformly distributed in [0, c_1/2] and [0, c_2/2], respectively, for the two attractors. Hence, when the algorithm converges, all the agents will be located at the same position. So, to apply this algorithm to robotics, we need to include repulsive mechanisms to be coherent with Definition 1.

Adding an Anti-Collision Behavior to PSO: CPSO
The objective here is to determine a potential field P_i(t), inspired by the PSO potential (6), that meets Definition 1. To do this, Ref. [20] introduced a variation of PSO, called CPSO, which was demonstrated experimentally with interesting results [20,37]. To derive the equations of CPSO, we define the following unbounded repulsive potential [39]: This potential verifies P_r(d_i,j(t)) → −∞ when d_i,j(t) → 0. If we sum the attractive potential (7) and the repulsive one (8), we obtain a potential that meets Definition 1.
These potential models are not unique. The state of the art provides good examples of possible potential functions for flocking algorithms [38,39]. However, for the analysis at equilibrium, we need the attractive potential defined in (7), for two reasons. First, it keeps the link with the original PSO algorithm. Second, this particular model is necessary for some theorems, Theorem 4 in particular, which is important to determine the characteristics of our swarm formation.
The repulsive potential is then added to the PSO equation: where c 3 is the constant repulsive weight between trackers i and j.

LCPSO, a CPSO Variant to Deal with Some Real-World Constraints
First, to reflect limitations in communication links, a local communication constraint is added to the model. Indeed, the best tracker position of the swarm, p_g(t) in Equation (9), is global, shared with every agent of the swarm. We use instead the local-best position p_i^l(t) ("l" for local), which is the position of the tracker j in the neighborhood set of i where the measurement f(p_j, t) is the greatest. The neighborhood set N_i(t) is based on the parameter r_com, which denotes the maximum communication range between tracker i and its neighbors [12]. Beyond the maximum communication range r_com, data transmission is impossible; below r_com, transmission is perfect: This decentralization was already proposed by [14,15], but to the best of our knowledge it was never used for mobile target tracking. Each vehicle has its own local best position and moves towards its best neighbor.
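The neighborhood rule behind p_i^l(t) can be sketched as follows; the function name `local_best` and the convention that an agent belongs to its own neighborhood are assumptions consistent with the description above (an isolated agent is then its own local best).

```python
import numpy as np

def local_best(p, meas, r_com):
    """For each tracker i, return the index of the agent (itself included)
    with the greatest measurement within communication range r_com.
    Sketch of the neighborhood rule behind p_i^l(t) in Equation (10)."""
    n = len(p)
    best = np.empty(n, dtype=int)
    for i in range(n):
        d = np.linalg.norm(p - p[i], axis=1)
        neigh = np.flatnonzero(d <= r_com)      # N_i(t), includes i itself
        best[i] = neigh[np.argmax(meas[neigh])]
    return best
```

For instance, an agent farther than r_com from every other agent is returned as its own attractor, which is exactly the isolation phenomenon discussed later in the communication-constraints section.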
Second, the best historical position p b i (t) is removed in the proposed approach. This is because the target is not static: it changes position with time.
Finally, to obtain a stable swarm formation, we set the random component α_i2(t) to 1. The analysis of the algorithm with random components is left for future work. These considerations lead to the following model, originally introduced in [40], which we named LCPSO: From Equation (11), we can deduce the potential function of LCPSO as follows: This potential meets Definition 1 and is illustrated in Figure 1, together with the corresponding attractive (7) and repulsive (8) potentials used for LCPSO. The Euler integration scheme is the same as in Equation (3).
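Since the full Equation (11) is not reproduced here, the following is a sketch under stated assumptions: an attraction c_2(p_l − p_i) toward the local best, plus the gradient of a logarithmic repulsive potential with weight c_3 = c_2·r_eq²/2 (the substitution introduced with Theorem 2 below), restricted to the neighborhood. Both the exact repulsion form and its neighborhood restriction are assumptions.

```python
import numpy as np

def lcpso_step(p, v, meas, r_com, c0=0.0, c2=1.0, r_eq=1.0, dt=0.1):
    """One LCPSO update (sketch of Equations (10)-(11)/(18)): attraction
    toward the local best p_i^l plus pairwise log-potential repulsion whose
    weight c3 = c2*r_eq**2/2 places the two-agent equilibrium at r_eq."""
    n, dim = p.shape
    c3 = 0.5 * c2 * r_eq ** 2
    new_v = np.zeros_like(v)
    for i in range(n):
        d = np.linalg.norm(p - p[i], axis=1)
        neigh = np.flatnonzero(d <= r_com)
        l = neigh[np.argmax(meas[neigh])]          # local best p_i^l
        attract = c2 * (p[l] - p[i])
        repulse = np.zeros(dim)
        for j in neigh:
            if j != i:
                repulse += c3 * (p[i] - p[j]) / d[j] ** 2
        new_v[i] = c0 * v[i] + attract + repulse
    return p + dt * new_v, new_v
```

With two agents and a fixed attractor, iterating this step drives the inter-agent distance to r_eq while the pair translates, matching the behavior analyzed in the next section.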

Metrics and Hypothesis
While we already illustrated the behavior of the LCPSO algorithm through simulation in earlier work [40], these properties were only shown intuitively. We now wish to give some mathematical basis to this intuition. We make the following assumptions, valid throughout the mathematical analysis of this section:
• Communication range is unlimited. As a result, the local-best attractor p_i^l(t) is the best tracker position of the swarm, p_g(t).
• We focus our efforts on the APF analysis and, to ease it, we set c_0 to 0. The speed vector v_i(t + ∆t) is thus updated only with the gradient descent of the potential field equation.
• The target's behavior is not known from the swarm's point of view and can be dynamic. Tracker i measures f(p_i, t) and adjusts its local-best position p_i^l(t) as a function of the maximum measurement in its neighborhood. Since the communication range is small in practice, we assume that information exchange between trackers is instantaneous, and limited to their positions in absolute coordinates and their measurements, without noise.
To quantify the inter-agent distance between nearest neighbors in the swarm, we introduce a new function, the swarm spacing ρ(N). We normalize this spacing by r_eq, the parameter controlling the inter-agent equilibrium distance when the swarm is stable, and by the number of agents N: Another important parameter is the surface area taken up by the swarm. In some cases, this parameter is critical for good tracking of the source. As we will see in Section 4.2, swarm formations have a convex hull inside a ball, and we can thus represent this surface with only one parameter, the radius r_max of this ball. Whatever the dimension, we have: with p̄ the center of gravity of our swarm, given by the following equation: Our swarm model has many mathematical similarities with the models of [39]. For conciseness, proofs that closely follow [39] are therefore omitted, and the others are given in Appendix A. Moreover, all the theorems presented in this paper hold regardless of the repulsive potential P_r(d_i,j), as long as it respects Definition 1. Thus, if this repulsive potential turns out to be unsuitable, others from the state of the art could be tested [38,39].
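The two metrics can be sketched as follows; since the exact normalization of ρ(N) is not reproduced here, the version below (mean nearest-neighbor distance divided by r_eq) is an assumption, while r_max is simply the largest distance from an agent to the center of gravity p̄.

```python
import numpy as np

def swarm_spacing(p, r_eq):
    """rho(N): mean nearest-neighbor distance, normalized by r_eq
    (sketch; the paper's exact normalization may differ)."""
    n = len(p)
    nn = [min(np.linalg.norm(p[j] - p[i]) for j in range(n) if j != i)
          for i in range(n)]
    return float(np.mean(nn)) / r_eq

def swarm_radius(p):
    """r_max: radius of the smallest ball centered on the center of
    gravity p_bar that contains every agent."""
    p_bar = p.mean(axis=0)
    return float(np.max(np.linalg.norm(p - p_bar, axis=1)))
```

On an equilateral triangle of side r_eq, for instance, the spacing is exactly 1 and r_max is the triangle's circumradius.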

Behavior of LCPSO
We suppose that the agents' positions are taken in R^D, with D the dimension. S(t) is the set of agents' positions p_i(t) of the swarm at time t, i ∈ [1, · · · , N]. Let us note C(t) the convex hull of S(t): it is a continuous subset of R^D. Then let us note C̃(t) the convex polygon of S(t). It is a manifold of dimension D − 1 which delimits the surface taken up by the swarm. We set y(t) ∈ R^D the optimum position of f at date t: y(t) = arg max_y f(y, t). We suppose that this optimum is unique and is the position p_s(t) of the target propagating information.
We define the set B(t), which contains the best attractors of the swarm at time t, those which minimize the Euclidean distance to y(t).
The set S(t) being discrete, we introduce the set S̃(t), defined as the set of the points of S(t) which are on the convex polygon C̃(t) of S(t). We summarize the behavior of our swarm with the following theorem: Theorem 1. We assume that each agent follows the LCPSO Equation (11), with r_com → +∞. Then the center of gravity of the swarm heads towards the attractor p_g(t), and its velocity vector is equal to: Hence, if the swarm is stable in the sense of Definition 2, all agents follow the speed vector of the center of gravity. Taking into account the inertia weighted by c_0, the attractor position gives the direction that the swarm will follow. In the MOSL case, we have defined the attractor as the agent that has measured the strongest information f(p_g, t) at time t; it is thus the agent that is the closest to the target.
We distinguish two particular states. The first one is the transition state, with y(t) ∉ C(t). In this state, the attractors are necessarily and intuitively the agents located on the hull of the swarm: B(t) ⊆ S̃(t). Trying to catch up with these attractors, the agents in the swarm accelerate to their maximum speed and then remain at this speed, in a steady regime. Thanks to (17) and r_max, the maximal speed v_max is predictable. This state is illustrated in Figure 2b in dimension 2. The second case is the steady state, with y(t) ∈ C(t).
In this state, all agents of S(t) are potentially attractive, that is, B(t) ⊆ S(t). In this case, the swarm follows a speed close to that of the target: the closer the attractor is to the center of gravity, the slower the swarm's speed. In the case of tracking a static target, our swarm heads towards the source and stops when y(t) = p̄(t), as shown in Figure 2a in dimension 2.

Analysis with N = 2 Agents
We consider, without loss of generality, that the attractor is agent 1. The potential functions derived from (12) become: We can see that the potential functions P_1(t) and P_2(t) depend only on the inter-agent distance d_1,2(t). We deduce the following theorem: Theorem 2. A swarm of N = 2 agents following the potential field P(t) = P_1(t) + P_2(t) in a gradient descent strategy will converge to an inter-agent equilibrium distance r_eq = √(2c_3/c_2).
To use r_eq as a parameter in our algorithm, we replace the parameter c_3 by an expression including the so-called equilibrium distance r_eq. To do this, we set c_3 = c_2 · r_eq²/2. In dimension D, the LCPSO algorithm can then be rewritten as follows:
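Theorem 2 can be checked numerically on the combined pair potential. Under the assumption of a quadratic attraction (7) and a logarithmic repulsion for (8), the pair potential P_1 + P_2 counts the repulsion twice (both agents repel) but the attraction once (only the non-attractor is attracted), which is where the factor 2 in r_eq = √(2c_3/c_2) comes from. The function names below are illustrative.

```python
import numpy as np

def theorem2_equilibrium(c2, c3):
    """Predicted two-agent equilibrium distance r_eq = sqrt(2*c3/c2)."""
    return np.sqrt(2.0 * c3 / c2)

def settle_distance(c2=1.0, c3=0.5, d0=3.0, dt=0.1, steps=2000):
    """Gradient descent on the combined pair potential
    P(d) = (c2/2)*d**2 - 2*c3*ln(d) (attraction once, repulsion twice,
    sketched under the assumptions stated above)."""
    d = d0
    for _ in range(steps):
        d -= dt * (c2 * d - 2.0 * c3 / d)   # step along -dP/dd
    return d
```

With c_2 = 1 and c_3 = 0.5 (i.e., r_eq = 1 after the substitution c_3 = c_2·r_eq²/2), the descent settles at d = 1 as predicted.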

Swarm Stability
Theorem 3. We consider a swarm of agents following Equation (18), with potential functions respecting Definition 1. For any p(0) ∈ R ND , as t → +∞, we have p(t) → Ω e .
Here the vector p(t) contains all the relative positions of the individuals in the swarm, and Ω_e is the invariant equilibrium set of the swarm; both are detailed in Appendix A, together with the proof. Hence, agents following (18) will reach stability in the sense of Definition 2. While this theorem is similar to Theorem 1 of [39], the proof is different, because the LCPSO Equation (18) is not stationary.

Symmetry and Robustness of the Swarm Formation
Due to the nature of the potential functions P_a(d_i,j) and P_r(d_i,j), their gradients are odd. Consequently, there is a reciprocity of the interactions between agents with respect to the origin [39]. These reciprocal interactions naturally lead the swarm, when it is stable in the sense of Definition 2, to a symmetrical formation with respect to its center of gravity p̄ [39].
Contrary to the swarm models detailed in [38,39], not all interactions are bidirectional when looking at the whole system. Indeed, while this is true for repulsion interactions, attraction relationships are unidirectional and directed towards the attractor. One could therefore assume that a change of attractor influences the strength of the formation when it is stable in the sense of Definition 2. However, with the LCPSO Equation (18), the formation is robust to this, regardless of the dimension D.
Theorem 4. By assuming N ≥ 2 agents using the flocking speed vector described in (18), whatever the attractor p g (t), the equilibrium distance of agent i with the other agents will always be the same.
The proof is in the Appendix A. With the help of Theorem 2 from [39], we can see that the formation is bounded whatever the dimension D: Theorem 5. If the agents follow the LCPSO Equation (18), as time progresses, all the members of the swarm will converge to a hyperball: The proof is not present in this paper, because it is too similar to that of Theorem 2 from [39]. increases linearly as a function of N in Theorem 5. We can see in Figure 3b that the evolution of r max as a function of N, in Dimension 1 or 2, does not have a linear evolution, but tends to "flatten" when N increases: thus, this bound is real, but not adapted to approach the size of the swarm in reality. Now, we will look more prospectively at the properties of the stable formation. We thus present conjectures, supported by simulation results, which will remain to be proven mathematically afterwards. We do not display results in Dimension 3, because the remarks would be redundant with those in Dimension 2. We use r max and ρ(N) to illustrate the evolution of the stable formations when N increases, depending on the dimension. The results are shown in Figure 3. In dimension 1, the formation of the agents when the swarm is stable in the sense of Definition 2 is unique, so we do not need several samples. In higher dimension, the multiplicity of emergent formations as a function of N lead up to several possible formations.
In dimension 1, we can see in Figure 3a that as N increases, the swarm spacing ρ(N) decreases; this is rather logical, because the surface taken up by the swarm widens very quickly, as we can see in Figure 3b. Hence, the more N increases, the more distant certain agents are from the attractor, the higher the attraction strength, and the more compact the swarm.
In dimension 2, when N = 3, the equilibrium formation is an equilateral triangle. When N is higher, the possible formations approach a circle whose center of gravity determines a point of symmetry in the interactions between agents. Such a formation presents one or several layers where the positions of the agents are aligned on circles, and it becomes more difficult to predict, as shown in Figure 3f, with 2 layers for N = 15 agents. Moreover, in Figure 3b, we have 1 ≤ ρ(N) ≤ 1.3 whatever N and our samples: our parameter r_eq is a good representation of the swarm spacing. Since there are multiple neighbors, the repulsive potential energy of an agent i is much higher than in dimension 1, and consequently the swarm spacing ρ(N) is higher in higher dimensions. In Figure 3b, we can see that the radius r_max of the ball containing all the agents varies very little across samples. Hence, the swarm surface is predictable with few uncertainties.
In a real robot, the weight c_0 must be taken into account and is set according to the robot's geometry and mass. This parameter influences the speed norm of the agents when the formation is stable according to Definition 2. To illustrate our point, we keep the hypothesis that the attractor p_g is always the same agent. Thanks to Theorem 1, we have v̄_eq = −c_2 (p̄(t) − p_g(t)) when c_0 = 0. v̄_eq is invariant in time because the swarm formation is stable. When c_0 ≠ 0, we have: Hence, while c_0 influences the speed norm of the swarm, it does not influence its direction, which depends only on the position of the attractor. While the mathematical analysis is not impacted by the pseudo-mass when |c_0| < 1 [39,44], the pseudo-mass c_0 greatly influences the convergence time of the swarm during its transitory phase, since it smooths the trajectory of the agents by taking into account the previous velocity vector.
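The effect of c_0 on the steady-state speed can be illustrated directly: with a constant drive u (standing for the invariant attraction term −c_2(p̄ − p_g)), the recursion v ← c_0·v + u has the fixed point u/(1 − c_0) for |c_0| < 1, so the pseudo-mass rescales the speed norm without changing its direction. This is a sketch of the mechanism, not the paper's exact expression.

```python
import numpy as np

def steady_speed(u, c0, steps=500):
    """Fixed point of v <- c0*v + u for a constant drive u (|c0| < 1):
    v converges to u / (1 - c0), so c0 scales the speed norm but not
    its direction."""
    v = np.zeros_like(u)
    for _ in range(steps):
        v = c0 * v + u
    return v
```

For example, c_0 = 0.5 doubles the steady speed norm obtained with c_0 = 0, while leaving the heading unchanged.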
We support these observations with Monte-Carlo simulations as a function of c_0 in Figure 4, for N = 7 and N = 15 agents in dimension 2. For N = 7, the agents are distributed on the same circle around the center of gravity, while for N = 15, we have a distribution on 2 levels, as in Figure 3f. As our simulation time is t_max = 100 s, a convergence time close to this value indicates a high failure rate of the agents to converge, given that on our samples convergence takes between 10 s and 30 s when c_0 = 0. The maximal simulation time is arbitrary, but increasing its value does not change the results: the swarm is still unable to converge.
We display only the most interesting results in Figure 4. We can see that both the number of agents N and the weight c_0 influence the convergence time (Figure 4). For −0.1 ≤ c_0 ≤ 0.5, we have 100% convergence for N = 15, while for N = 7 the range extends to −0.5 ≤ c_0 ≤ 0.7. Hence, convergence to a stable formation becomes more difficult as N increases. These conclusions are more restrictive than those of [44] on the analysis of the PSO algorithm, where the PSO is unable to converge only when |c_0| ≥ 1. Indeed, when c_0 is too large, the previous velocity vector is so influential that it prevents the swarm from stabilizing efficiently.

Communication Constraints
Obviously, communication constraints within the swarm limit our system: our N agents will only be able to communicate with certain other agents, which multiplies the number of attractors and behaviors. Two phenomena constrain the swarm:
• Isolation of individuals: if, at any given time, one or more agents make bad choices, they may be unable to communicate with anyone, and consequently unable to move following Equation (18).
• Emergence of subgroups: two opposite attractors in the group can lead to its fission into two or more subgroups of connected agents, with no remaining direct or indirect link between any agent i of one subgroup and any agent j of another.
We illustrate these cases in Figure 5, with an efficient tracking of a source in Figure 5a and a less efficient one in Figure 5b. The source follows the environment function (1) after t = 100 s. We can see the importance of the setting of r_eq relative to r_com. In Figure 5b, we can see isolated agents, which are attracted only by themselves and consequently unable to move, and subgroups of 2 robots, which are unable to track the target because they are too few. We discuss the choice of these parameters in Section 5, since it is directly related to target tracking performance.

Results
To measure the evolution of the tracking, we use the metric D_100 = ‖p̄ − p_s‖, a variant of the D_25 metric used in [40]. p_s(t) is the target's position, and D_100 is the distance between the center of gravity and the position of the source. The subscript 100 indicates that 100% of the swarm elements are taken into account. If D_100 ≫ r_max · r_eq, we logically consider that the target tracking is "bad" and has failed. If 0 < D_100 < r_max · r_eq, we consider that the tracking is "good".
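The metric itself is a one-liner; the function name `d100` is illustrative.

```python
import numpy as np

def d100(p, p_s):
    """D_100: distance between the swarm's center of gravity, computed
    over 100% of the agents, and the target position p_s."""
    return float(np.linalg.norm(p.mean(axis=0) - p_s))
```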

Dimension 1
The trackers measure information released by the dynamic source following the environment Equation (1) without temporal decrease (τ → +∞). Attractors are represented by green stars in Figure 6; an attractor is the agent with the highest measured information at time t within its communication limits.
The target follows a periodic trajectory. Its speed follows a cosine function with a period T = t_max/4, with t_max the simulation time. In Figure 6, we illustrate such a tracking scenario with agents which follow the LCPSO Equation (18). The distance D_100 between the swarm center of gravity and the source oscillates because the number of attractors and their positions change at each time step. However, the swarm remains centred on the target when t ≥ 1 s, with an error limited in space (D_100 ≤ 1.5 m with r_max larger than 7 m). In Figure 7, we illustrate the average of D_100 over the whole simulation as a function of r_eq and r_com in dimension 1, using a Monte-Carlo method with 100 samples. We can observe in the simulation results that the target tracking is inefficient when r_eq ≥ (8/10) r_com (between "bad" and "correct" results), but is efficient elsewhere. The farther the parameters are from this limit, the better the results.

Dimension 2
In dimension 2, we have already performed an analysis of target tracking in a previous article [40], where we showed in simulation that the LCPSO algorithm was relevant to track an underwater mobile target with communication constraints. A maximal speed constraint is necessary, arbitrarily set to v_max = 5 m·s⁻¹, because the risk of losing the swarm must be controlled. We extend that work by studying two types of trajectories for the source, to illustrate their impact on tracking performance: We can see in Figure 8 that when N > 15, the tracking failure percentage increases gradually, especially for the constant trajectory. This is due to the environment function f: if some agents are too far from the source, they will only be able to measure the noise; even worse, this will also be the case for their neighbors. Consequently, the swarm is dislocated into packets. This phenomenon is illustrated in Figure 5b. In the case of a constant trajectory, the isolated agents have very little chance of finding the group that succeeds in tracking the source, unlike the elliptical trajectory, where the source comes back. Thus, with communication restrictions, it is necessary to limit the number of agents tracking the source, in order to obtain better tracking performance and not waste resources; we can see that between 10 and 15 agents, the source tracking is optimal whatever its trajectory.
We add Figure 9 to give an operational point of view of our algorithm as a function of r_com and r_eq, with the same parameters as [40]. Based on Figure 8, the number of agents is fixed to N = 10. In Figure 9, we can observe more restrictions than in dimension 1. Indeed, there is an area where a too-low r_eq degrades the results whatever r_com, because the agents are too close to each other and the swarm cannot reach a stable formation. There are also stronger restrictions on communication limits than in dimension 1: if r_eq is too large compared to r_com, the number of isolated agents increases and the swarm is unable to track the target. Below this limit, some agents can lose the group, but without consequences for tracking performance, as illustrated in Figure 5b.

Conclusions
In this paper, we carried out an analysis of the LCPSO algorithm, which merges the spirit of PSO and flocking algorithms. This analysis is supported by mathematical theorems that apply regardless of the dimension or, when necessary, by Monte-Carlo results, especially concerning communication constraints. Using only one attractor in a limited area makes it possible to follow a target accurately. We summarise the contributions of this paper in the following points.
First, the formation at equilibrium is resilient to communication limits and to abrupt moves of the target, because Equation (18) is based only on measurements at time t. Moreover, we have proven analytically that the formation stays stable in the sense of Definition 2, whatever the dimension and the location of the attractor(s) in the swarm. The strength of this formation prevents collisions between agents and the loss of agents under communication constraints. Finally, the speed is intrinsically limited and predictable thanks to Equation (17).
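To make this intrinsic speed limit concrete, the centroid recursion of Equation (17) yields a simple bound; the assumption that the distance to the attractor stays below some radius R (for instance of the order of r_com when the attractor is a communicating swarm member) is our illustrative hypothesis:

```latex
\|\bar v(t+\Delta t)\| \le c_0\,\|\bar v(t)\| + c_2\,\|p_g(t)-\bar p(t)\|
\le c_0\,\|\bar v(t)\| + c_2 R,
\qquad\text{hence}\qquad
\limsup_{t\to\infty}\|\bar v(t)\| \le \frac{c_2 R}{1-c_0}
\quad (0 \le c_0 < 1).
```

The geometric series in c_0 thus caps the centroid speed by a quantity predictable from the algorithm parameters alone.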
The stability of the swarm formation whatever the conditions (communication limits and the behavior of the target) makes our algorithm applicable in very constrained environments, such as underwater scenarios. The LCPSO algorithm is resilient to the breakdown of some agents, because the attractor role depends on measurements and can easily be handed over to another agent. Communication limits do not degrade our swarm formation, and the simplicity of LCPSO means the robots need to embed only limited computing power.
Our work still has many limitations, which we acknowledge here. First, the plume model should be made more realistic [31,43], to include the problem of measurement noise with a low SNR and the problem of noise correlation (in our case, the measurement noise was uncorrelated in time and space). Second, we left out some constraints in our algorithm. For instance, we do not consider the localization problems of agents with real sensors: while an exact absolute position is not important for our algorithm, a correct relative position is still necessary. This is an issue in the underwater environment, for instance. However, positioning in challenging environments can often be enhanced using techniques such as Simultaneous Localization And Mapping (SLAM) [49,50], variants of Kalman filters [48], or Interval Analysis [51] to take position uncertainty into account; our work can integrate these methods. Third, robotic constraints on the motion of agents could be applied to our model, in particular on heading and speed, or through a linearisation of the agents' trajectories [12]. Finally, we only used information in scalar form. Considering non-scalar information, when feasible, could simplify our problem; for example, we could measure a local gradient of the function f to indicate the direction of its maximum. All of these limitations and possible solutions are left for future work.

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript:

Theorem A1. We assume that each agent follows the LCPSO Equation (11), with r_com → +∞. Then the center of gravity of the swarm heads towards the attractor p_g(t), and its velocity vector is equal to v̄(t + Δt) = c_0 v̄(t) + c_2 (p_g(t) − p̄(t)). Furthermore, if the swarm is stable in the sense of Definition 2, all agents follow the speed vector of the center of gravity.
Proof. We calculate the speed vector of the center of gravity. In the last line of the development, the second part contains all the gradients of the repulsion potentials. Since this potential meets Definition 1, its gradients are odd; consequently, ∇_{p_j(t)} P_r(d_{i,j}(t)) = −∇_{p_i(t)} P_r(d_{i,j}(t)), so the sum is null. Recalling the definition of the center of gravity (15), we obtain Equation (17), which is equal to the gradient of the model of attractive potential defined in relation (7). Thus, v̄(t + Δt) = −∇_{p̄} P_a(d_{att,p̄}(t)), and we can see that it is indeed the center of gravity that is attracted by the attractor p_g(t). Moreover, if the swarm is stable in the sense of Definition 2, every agent follows the speed vector of the center of gravity.

Theorem A2. A swarm of N = 2 agents following the potential field P(t) = P_1(t) + P_2(t) in a gradient descent strategy will converge to an inter-agent distance at equilibrium r_eq = 2c_3/c_2.
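The cancellation at the heart of Theorem A1 can be checked numerically: with pairwise-odd repulsion gradients, the repulsion terms vanish from the centroid sum, so the centroid velocity obeys exactly v̄(t + Δt) = c_0 v̄(t) + c_2 (p_g(t) − p̄(t)). The inverse-square repulsion and the constants below are illustrative stand-ins for the paper's potentials:

```python
import numpy as np

rng = np.random.default_rng(0)
c0, c2, c3 = 0.6, 0.9, 0.5
p = rng.normal(size=(5, 2))           # 5 agents in dimension 2
v = rng.normal(size=(5, 2))
p_g = np.array([3.0, -1.0])           # single attractor position

# Per-agent update: inertia + attraction to p_g + odd repulsion gradient.
v_new = np.zeros_like(v)
for i in range(5):
    rep = np.zeros(2)
    for j in range(5):
        if j != i:
            diff = p[i] - p[j]
            rep += c3 * diff / np.linalg.norm(diff)**2   # odd in (i, j)
    v_new[i] = c0 * v[i] + c2 * (p_g - p[i]) + rep

# Repulsion cancels pairwise, so the centroid follows Equation (17) exactly.
v_bar_pred = c0 * v.mean(axis=0) + c2 * (p_g - p.mean(axis=0))
assert np.allclose(v_new.mean(axis=0), v_bar_pred)
```

The equality holds for any odd repulsion gradient, which is why the theorem does not depend on the precise repulsion shape.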
Proof. In order not to burden the analysis, we set c_0 to 0, so that there are only interaction forces between the agents. The velocity vectors then lie on the line (p_1(0) p_2(0)), which depends on the initial positions of agents 1 and 2. We can therefore perform our analysis in dimension 1 without loss of generality, and the position and speed of agent i are the scalar values p_i(t) and v_i(t), respectively. We suppose that there is no collision, so d_{1,2}(t) ∈ (0, +∞). On this interval, P(t) is C^∞, so we can use the property of a convex function: if the second derivative of P(t) is null or positive, then the function is convex.
Proof. We define the potential function J(p) for the system, where A is a positive constant chosen so that J is a positive definite function that vanishes where its gradient vanishes. The goal is to manage the area where the potentials balance each other, thanks to the nature of the repulsive and attractive components. Indeed, in Figure 1, we can see that attraction dominates when d is high and repulsion dominates when d is low, and in both cases the global potential is positive. Hence J(p(t)) > 0, and we can use this function as a Lyapunov function for our system. We take the gradient of J(p) with respect to the position p_i of agent i and, with the help of the centroid speed Equation (17) with c_0 = 0 and the LCPSO speed Equation (18), the temporal derivative of the Lyapunov function satisfies J̇(p) ≤ 0 for all t, implying a decrease of J(p) unless ṗ_i = 0 for all i = 1, ..., N; our system is therefore stable in the sense of Lyapunov. In addition, we have [∇_{p_i} J(p)]^T ṗ_i = −‖ṗ_i‖² for all i, which implies that all individuals move in a direction of decrease of J(p). From the attraction and repulsion properties of Definition 1, we know that repulsion dominates over short distances and attraction dominates over longer ranges. This implies that over long distances, a decay of J(p) is due to agents moving closer together, while over short distances, the decay is due to agents repelling each other. In other words, regardless of the initial positions of the agents, the set Ω_0 = {p : J(p) ≤ J(p(0))} is compact. Therefore, the agent states are bounded, the set Ω_p = {p(t) : t ≥ 0} ⊂ Ω_0 is compact, and we can apply LaSalle's invariance principle, arriving at the conclusion that as t → +∞, the state p(t) converges to the largest invariant subset Ω_1 of the set Ω_e = {p : ṗ_i = 0, i = 1, ..., N}. Since Ω_1 is invariant and satisfies Ω_1 ⊂ Ω_e, we have p(t) → Ω_e as t → +∞, which concludes this proof.
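The Lyapunov decrease can likewise be checked numerically on a small instance. The pairwise potential below, quadratic attraction plus a 1/d repulsion with illustrative constants a and b, is our own choice satisfying the attraction/repulsion properties of Definition 1, not the paper's exact potentials; gradient descent on it should never increase J:

```python
import numpy as np

a, b, eta = 1.0, 0.5, 0.005
p = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])  # 4 agents

def J(p):
    """Candidate Lyapunov function: sum over pairs of attraction + repulsion."""
    total = 0.0
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            d = np.linalg.norm(p[i] - p[j])
            total += 0.5 * a * d**2 + b / d    # both terms are positive
    return total

def grad(p):
    """Gradient of J with respect to each agent position p_i."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        for j in range(len(p)):
            if j != i:
                diff = p[i] - p[j]
                d = np.linalg.norm(diff)
                g[i] += a * diff - b * diff / d**3
    return g

values = [J(p)]
for _ in range(500):
    p = p - eta * grad(p)                      # gradient descent step
    values.append(J(p))

# J decreases monotonically along the descent trajectory.
assert values[-1] < values[0]
assert all(v2 <= v1 + 1e-9 for v1, v2 in zip(values, values[1:]))
```

With a small enough step, each descent step moves every agent in a direction of decrease of J, mirroring the [∇_{p_i} J]^T ṗ_i = −‖ṗ_i‖² identity of the continuous-time proof.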