Adaptive Method for Modeling of Temporal Dependencies between Fields of Vision in Multi-Camera Surveillance Systems

Abstract: A method of modeling the time of object transition between given pairs of cameras, based on the Gaussian Mixture Model (GMM), is proposed in this article. Modeling of temporal dependencies is a part of object re-identification based on the multi-camera experimental framework. The previously utilized Expectation-Maximization (EM) approach, which requires setting the number of mixtures arbitrarily as an input parameter, was extended with an algorithm that automatically adapts the model to statistical data. The probabilistic model was obtained by matching to the histogram of transition times between a particular pair of cameras. The proposed matching procedure uses a modified particle swarm optimization (mPSO). A way of using models of transition time in object re-identification is also presented. Experiments with the proposed method of modeling the transition time were carried out, and a comparison between the previous and the novel approach is also presented, revealing that the added swarms approximate normalized histograms very effectively. Moreover, the proposed swarm-based algorithm allows for modeling the same statistical data with a lower number of summands in the GMM.


Introduction
Currently, the number of video cameras in public places is huge and still increasing. Operators of surveillance systems cannot concentrate on many fields of view (FOVs) efficiently for long periods, and browsing through hundreds of hours of video data is time-consuming and arduous. In general, these are the main reasons that imply the need for automated tools for video data analysis to facilitate operating multi-camera surveillance systems. One such useful tool is a re-identification method, whose task is to connect many observations of one real object from multiple cameras. The issues mentioned above became the basis and the motivation for the authors' research on developing re-identification methods.
Re-identification is equivalent to tracking objects between non-overlapping FOVs. In this case, the location of an object that has disappeared from a particular camera can be described with a probability measure. Furthermore, a pair of linked observations from different cameras is also related to a certain probability. This probability is determined on the basis of so-called premises, such as the appearance (visual features) of the object in a certain pair of observations, the physical possibility of a transition between the pair of cameras related to these observations, the time of this transition, and statistical patterns of object movement in the observed area (called a behavior model). The scope of this paper mainly covers the issues related to obtaining the model of transition time based on statistical data. The authors formulate the concept of a modification of particle swarm optimization that uses multiple swarms in order to match the Gaussian Mixture Model to statistical data while adapting the number of mixtures in the model. Moreover, a comparison to the non-adaptive Expectation-Maximization method is also provided.
The paper is organized in the following way: Section 2 contains a short description of the research field and the authors' previous works. It also provides more details about the experimental framework that performs the re-identification method and explains the importance of time dependency modeling. In Section 3.5, the concept of transition time modeling is presented more precisely; the performed experiments and results are included in Section 5; the paper is concluded with Section 6, in which the two used methods are compared and the adaptability of the mPSO algorithm is discussed.

Related Works
Significant efforts were put into automated video data analysis. Such processing is performed as a compilation of many algorithms and methods that depend on each other. In general, video data processing consists of the following phases: background subtraction, object tracking, event detection and re-identification. The first step refers to distinguishing a moving object from a still background (also called background subtraction) [1][2][3]. Thus, the output of this step is the objects detected in a single FOV. The next phase, while the detected object is moving, is related to obtaining a connection between the same object detected in consecutive video frames. The results of this phase of video processing are trackers of the objects [4]. When object tracking has been performed, the directions of objects' movement and object trajectories (routes) can be determined within a single FOV. Having such metadata, event detection methods can be applied. They are mostly based on rules related to the location or movement direction of an object tracker within a given FOV. Moreover, regions, so-called hot areas, are defined within a camera FOV, and they can be used as part of the rules in the event detection methods [5]. On the output of the previous steps of video processing, the re-identification (inter-camera object tracking) methods can be performed. Events of an object entering or leaving given hot areas are essential for the re-identification algorithm because they are transformed into observations of objects, which provide the input metadata for re-identification methods [6,7].
Many efforts have been made to research the possibilities of tracking objects in non-overlapping video surveillance systems. Starting from a comparison of color histograms of objects [8], re-identification methods developed into more sophisticated mechanisms. The main challenges posed to these methods were related to changes in an object's appearance in different cameras, which can be caused by unstable and changing lighting of the observed scene, as well as by various camera settings (such as the white balance). In order to cope with these problems, methods for color compensation or for making visual descriptors independent of these factors were developed [9][10][11][12][13][14]. In general, these approaches to re-identification assume, on the one hand, utilizing additional data such as spatio-temporal dependencies or a kind of behavior model (particle filters [9], Bayesian networks [10], Markov chains [12], probability dispersion function [13], etc.); on the other hand, some methods use visual feature descriptors resistant to color changes [6,15]. The issue of discovering the topology of the camera network is considered in the literature related to tracking objects in multi-camera surveillance systems with non-overlapping cameras' FOVs [7,16]. The authors' previous works were focused on methods for modeling the behavior of objects in the whole observed area, with their results published earlier. This method is based on Pawlak's flow graphs and rough set theory [17,18]. Further works took into account the situation when the spatial dependencies or the behavior of objects change; thus, an adaptation method was proposed [19]. Moreover, the authors considered issues related to computational and memory complexity [20]. Therefore, a better adaptation of the Gaussian mixture model to statistical data related to the time of transition between adjacent cameras is essential in terms of saving processor cycles and memory.
In this article, the authors focus on the issue of temporal dependencies between cameras. The proposed method is a modification of the particle swarm optimization algorithm [21]. The modification of this algorithm is related to the utilization of a multiple-swarm approach in which particular swarms compete with each other in order to find the best match of the Gaussian Mixture Model to the statistical data. Moreover, the mPSO algorithm also optimizes the number of summands in the Gaussian Mixture Model. As a reference, the EM (Expectation-Maximization) methodology is used [22]; it was the previous approach to temporal dependencies modeling, and it requires an a priori assumption of the number of Gaussian Mixture Model summands.

Methodology
The proposed approach is a part of a system for re-identification; therefore, a whole context is delineated. The forms of input and output, as well as data structures, are also described.

Re-Identification
As mentioned above, the re-identification algorithm is based on observations. A single observation of an object contains the elements that are used to obtain the necessary premises: a visual feature descriptor, the location of the observation (the camera and the hot area identifiers) and a timestamp. Essential for this method is the determination of a probability measure corresponding to the situation that a given pair of observations is related to the same object (also called the identity of the object). It is realized by computing the weighted sum of probabilities measuring object identity depending on the above-mentioned premises, as formulated in Equation (1):

P_i(O_in, O_out) = w_v P_v(O_in, O_out) + w_t P_t(O_in, O_out) + w_b P_b(O_in, O_out), (1)

where O_in and O_out are a pair of observations related to the event of entrance into one FOV and the event of an exit from another FOV, respectively; P_v(.), P_t(.) and P_b(.) are probabilities of object identity based on visual, temporal and behavioral premises, respectively; similarly, the weights are expressed as w_v, w_t and w_b for visual, temporal and behavioral premises, respectively.
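The weighted combination in Equation (1) can be sketched in a few lines; a minimal sketch, assuming illustrative names and example weight values that are not taken from the authors' framework:

```python
# Hedged sketch of Equation (1): a weighted sum of the visual, temporal and
# behavioral premise probabilities. The default weights are illustrative
# assumptions; the paper does not prescribe their values.

def identity_probability(p_v, p_t, p_b, w_v=0.4, w_t=0.3, w_b=0.3):
    """Weighted sum of premise probabilities; the weights should sum to 1."""
    assert abs(w_v + w_t + w_b - 1.0) < 1e-9
    return w_v * p_v + w_t * p_t + w_b * p_b
```

In practice, the weights let the system emphasize, for example, the temporal premise when visual conditions between the two cameras differ strongly.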

Spatio-Temporal Dependencies
At the beginning of the re-identification process, an important part of the algorithm is a limitation of the number of pairs of observations. It is obtained due to knowledge of the camera network topology, which is described with a graph. In such a graph, vertices refer to particular cameras and edges describe physically possible transitions between cameras. Having the topology graph, it is possible to determine which cameras are adjacent to the given one. Therefore, during the assessment of observation pairs, only pairs from adjacent cameras are considered.
The topology graph also contains temporal dependencies (description of times of transitions between adjacent pairs of cameras) described with weights on the edges. In order to determine temporal dependencies, statistical data related to transition times between particular pairs of cameras have to be gained. The creation of the transition time model is presented more precisely in the following part of the paper (Section 3.5).
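The limitation to adjacent cameras can be sketched with a simple dictionary-based graph; the camera identifiers and the structure below are illustrative assumptions, with edge values standing in for the transition-time models carried by the graph's weights:

```python
# Hedged sketch: vertices are cameras, edges are physically possible
# transitions. Edge values (None here) would hold transition-time models.
topology = {
    "cam1": {"cam2": None, "cam3": None},
    "cam2": {"cam1": None},
    "cam3": {"cam1": None},
}

def candidate_cameras(exit_camera):
    """Only observations from cameras adjacent to exit_camera are assessed."""
    return sorted(topology.get(exit_camera, {}))
```

Only observation pairs whose cameras are connected by an edge are passed on to the probability assessment, which prunes the quadratic number of candidate pairs.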

Visual Features
Because video input data are used, visual object features cannot be omitted while comparing a pair of observations. Therefore, a region containing the silhouette of a detected object in a camera's FOV (also called a blob) has to be extracted for both observations. In order to assess the visual similarity of a certain pair of observations, visual feature descriptors (mostly color and texture) are calculated. Next, the descriptors are used for comparing the pair of observations and for determining their similarity, which describes the mentioned probability of the object identity (Equation (1)) from the point of view of the visual features.
Input data obtained directly from the multi-camera surveillance system have the form of a vector that contains the times of transitions between a given pair of cameras. This vector is the input for the EM algorithm; however, the input for the modified particle swarm optimization (mPSO) algorithm is a histogram of transition times. The outcome of both algorithms is a Gaussian Mixture Model, which determines the probability density function of the transition time between a given pair of FOVs.

Input Data Description
The input for the EM-based algorithm is the vector mentioned above, described by the following formula:

s = [s_1, s_2, . . . , s_n], (2)

where s_i is the value of the i-th transition time in the vector s and n is the size of the vector s. The input for the mPSO algorithm can be described by two vectors, t and h, presented in Equations (3) and (5). The vector t determines the time resolution of the histogram as well as the minimum and maximum transition time:

t = [t_1, t_2, . . . , t_max], (3)

where the difference between two consecutive elements describes the time resolution, i = {1, 2, . . . , max}, and t_1 and t_max are the minimal and maximal transition times, respectively. Moreover, the value max is the number of interval boundaries used to create the histogram. The vector n contains the numbers of transitions referring to particular time intervals from the vector t. Thus, the vector n is formulated as in Equation (4):

n = [n_1, n_2, . . . , n_max−1], (4)
where n_i is the number of transitions that lasted from t_i to t_i+1. Next, the histogram n is normalized, and in the following calculations this normalized histogram h is taken into consideration:

h = [h_1, h_2, . . . , h_max−1], where h_i = n_i / ∑_{k=1}^{max−1} n_k. (5)
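The construction of the vectors t, n and h can be sketched as follows; a minimal sketch using NumPy, in which the function name and bin layout are assumptions:

```python
import numpy as np

# Hedged sketch of Equations (2)-(5): build bin edges t, counts n and the
# normalized histogram h from raw transition-time samples s. With n_bins
# intervals, t has n_bins + 1 edges, matching "max" edges and max-1 bins.
def build_histogram(s, t_min, t_max, n_bins):
    t = np.linspace(t_min, t_max, n_bins + 1)   # vector t (bin edges)
    n, _ = np.histogram(s, bins=t)              # vector n (counts per interval)
    h = n / n.sum()                             # normalized histogram h
    return t, n, h
```

Normalizing by the total count makes h comparable to a probability mass over the intervals, which is what the mPSO matching operates on.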

Output Description-Transition Time Model
The transition time for each edge of the topology graph is described with the Gaussian Mixture Model. It is determined according to the following formula:

p_e(t) = ∑_{i=1}^{M} w_ei N(t | µ_ei, σ_ei), with ∑_{i=1}^{M} w_ei = 1, (6)

where p_e(t) expresses the probability of a given time of transition t through the edge e; N(t | µ_ei, σ_ei) determines the value of a normal distribution for the given time of transition; the parameters of the distribution are described with the mean value µ_ei and the standard deviation σ_ei; w_ei is the weight assigned to a particular Gaussian; M is the number of summands in the Gaussian Mixture Model.
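The density defined above can be evaluated with a few lines of code; a minimal sketch in which the function names and parameter values are illustrative:

```python
import math

# Hedged sketch of Equation (6): evaluate the GMM density p_e(t) for one
# edge of the topology graph.
def normal_pdf(t, mu, sigma):
    return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gmm_pdf(t, weights, mus, sigmas):
    # The mixture weights must sum to 1 for p_e to be a valid density.
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * normal_pdf(t, m, s) for w, m, s in zip(weights, mus, sigmas))
```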

Expectation-Maximization Algorithm
The creation of the transition time model is an optimization task related to finding a Gaussian Mixture Model that is best fitted to the statistical data (in this case, in the form of time transition samples). Thus, having the samples of transition time (see Equation (2)), the parameters of the Gaussian Mixture Model have to be determined. This task can be realized with the Expectation-Maximization [22] algorithm, which is an iterative algorithm and consists of a few steps:

1. Initialization: initial values of the parameters of the Gaussian Mixture Model are set:
• M: the number of summands in the GMM (j = 1, . . . , M);
• w_j: initial values of the weights;
• µ_j, σ_j: initial parameters of the Gaussians in the mixture.
Moreover, the initial value of the log-likelihood is obtained:

L^(0) = ∑_{i=1}^{n} ln ( ∑_{j=1}^{M} w_j N(s_i | µ_j, σ_j) ),

where n is the number of samples in the vector s and s_i determines the i-th value from the samples vector s (see Equation (2)).

2. Expectation step: the so-called responsibilities ρ^(k)_ij are computed for all samples from s (see Equation (2)) and for all summands of the GMM:

ρ^(k)_ij = w^(k)_j N(s_i | µ^(k)_j, σ^(k)_j) / ∑_{l=1}^{M} w^(k)_l N(s_i | µ^(k)_l, σ^(k)_l),

where ρ^(k)_ij measures how much each Gaussian in the mixture is responsible for each sample from the input. Next, the responsibilities of each summand are also computed:

resp^(k)_j = ∑_{i=1}^{n} ρ^(k)_ij,

where j = {1, . . . , M} and resp^(k)_j describes the degree of responsibility of the j-th summand for the values contained in the input vector s (see Equation (2)).

3. Maximization step: new values of the Gaussian Mixture Model parameters are calculated for each of the GMM summands:

µ^(k+1)_j = (1 / resp^(k)_j) ∑_{i=1}^{n} ρ^(k)_ij s_i, (13)

(σ^(k+1)_j)^2 = (1 / resp^(k)_j) ∑_{i=1}^{n} ρ^(k)_ij (s_i − µ^(k+1)_j)^2, (14)

w^(k+1)_j = resp^(k)_j / n. (15)

In Equations (13)-(15), j = {1, . . . , M} and k + 1 is the index of the consecutive iteration of the algorithm.

4. Assessment of the actual Gaussian Mixture Model match: in order to determine how well the obtained GMM fits the data, the new log-likelihood has to be computed:

L^(k+1) = ∑_{i=1}^{n} ln ( ∑_{j=1}^{M} w^(k+1)_j N(s_i | µ^(k+1)_j, σ^(k+1)_j) ). (16)

The ending condition of the algorithm is described with the following formula:

|L^(k+1) − L^(k)| > δ, (17)

where the value of δ determines the threshold, which refers to the precision of the GMM matching the distribution of values contained in the vector s. After the ending condition is checked, there are two possibilities: (a) when condition (17) is true, the algorithm returns to step 2; (b) otherwise, the algorithm ends.
Thus, the input of this algorithm is the vector of transition times s and the fitness threshold δ, and the output is the set of parameters of the Gaussian Mixture Model. In Section 5, the experiments and results of using this algorithm are presented.
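The four steps above can be sketched for the one-dimensional case as follows; the initialization scheme (means spread over the data range, equal weights) is a simplistic assumption the paper does not prescribe, and all names are illustrative:

```python
import numpy as np

# Hedged sketch of the EM loop for a 1-D GMM with M fixed a priori.
def em_gmm_1d(s, M, delta=1e-6, max_iter=500):
    s = np.asarray(s, dtype=float)
    n = len(s)
    w = np.full(M, 1.0 / M)                        # initial weights
    mu = np.linspace(s.min(), s.max(), M)          # initial means (assumption)
    sigma = np.full(M, s.std() + 1e-9)             # initial std devs
    ll_old = -np.inf
    for _ in range(max_iter):
        # E-step: responsibilities rho[i, j] of each Gaussian for each sample
        dens = w * np.exp(-0.5 * ((s[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        rho = dens / dens.sum(axis=1, keepdims=True)
        resp = rho.sum(axis=0)
        # M-step: updated parameters (Equations (13)-(15))
        mu = (rho * s[:, None]).sum(axis=0) / resp
        sigma = np.sqrt((rho * (s[:, None] - mu) ** 2).sum(axis=0) / resp) + 1e-9
        w = resp / n
        # Assessment: log-likelihood and ending condition (Equation (17))
        ll = np.log(dens.sum(axis=1)).sum()
        if abs(ll - ll_old) <= delta:
            break
        ll_old = ll
    return w, mu, sigma
```

With two well-separated clusters of transition times, the fitted means settle near the cluster centers, which is exactly the behavior the transition-time model relies on.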

Modified Particle Swarm Optimization Algorithm
The second approach to the raised optimization task is the utilization of a swarm algorithm with some modifications. In PSO methods [23,24], each particle describes one probable solution of the optimization problem. A set of particles is distributed in the N-dimensional solution space and is considered a swarm, which can realize a kind of internal communication [25,26]. In each iteration of the algorithm, a new position of each particle (in the solution space) is calculated. In the classical approach, three vector components determine the direction and distance of the particle movement. These components are:
• inertia component: it contains information about the current direction and speed of a particle's movement;
• global component: it is related to the best solution known in the whole set of particles; when one of the particles finds the best solution, the rest of them tend toward this solution;
• cognitive component: it refers to the past experience of a particle; this component is directed toward the best solution known by the particular particle.
The steps of the Particle Swarm Optimization algorithm are presented below:

1. Initialization: an initial position w^(0)_i and a velocity v^(0)_i of the i-th particle are set randomly, where i = {1, . . . , P} and P is the number of particles in the swarm.

2. Assessment of the particles: the assessment function f(w^(k)_i): R^n → R takes as arguments the parameters of a possible solution represented by a given particle; the result of this function is an estimation of the solution quality in the form of a real value.

3. Update of the global and local best solutions: updating the best solution known by the i-th particle, l^(k)_i (needed while calculating the cognitive component of the i-th particle), and updating the global best solution g^(k) (which is used to compute the global component) is performed as follows:
• for each i-th particle, where i = {1, . . . , P}, the following condition has to be considered: if f(w^(k)_i) is better than f(l^(k−1)_i), then a new local best solution for the i-th particle is assigned: l^(k)_i = w^(k)_i; in the case of a just-initialized algorithm, the actual position of the i-th particle is assigned as the best local solution: l^(0)_i = w^(0)_i;
• next, in order to determine the best global solution g^(k), the following condition has to be considered: if f(l^(k)_i) is better than f(g^(k−1)), then the new best global solution is assigned: g^(k) = l^(k)_i; after initialization, g^(0) is determined as the position of the best-assessed particle.

4. Obtaining particle movements: the new translation vectors of the particles in the solution space are obtained as formulated in Equation (18):

v^(k+1)_i = η v^(k)_i + c_1 r_1 (l^(k)_i − w^(k)_i) + c_2 r_2 (g^(k) − w^(k)_i),
w^(k+1)_i = w^(k)_i + v^(k+1)_i, (18)

where η v^(k)_i is the inertia component of the i-th particle movement (v^(k)_i can be considered the velocity vector of the particle), g^(k) is the position (in the solution space) of the best-assessed particle in the whole swarm, l^(k)_i is the best solution known by the i-th particle, c_1 and c_2 are the cognitive and global coefficients, and r_1, r_2 are random values.

5. Termination criterion check: two types of termination criteria are used. The first is the number of iterations of the algorithm, and the second is the difference between solutions calculated in consecutive iterations. This difference can be described with Formula (19). Thus, two threshold values must be determined as termination criteria: the threshold for the difference between consecutive iterations of the algorithm, δ_rms, and the limit of the iteration number, δ_iter. Furthermore, the following condition has to be considered: if rms^(k) < δ_rms ∨ k ≥ δ_iter, then terminate the algorithm; otherwise, return to step 2.
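The classic PSO loop of steps 1-5 can be sketched for a one-dimensional minimization problem; the coefficient names η, c_1, c_2 follow the parameter list given later in the experiments, while the toy target function, the seed and the remaining defaults are assumptions:

```python
import random

# Hedged sketch of the classic PSO loop (steps 1-5) minimizing f over [lo, hi].
def pso(f, lo, hi, P=20, eta=0.7, c1=0.15, c2=0.15, iters=300, seed=0):
    rnd = random.Random(seed)
    w = [rnd.uniform(lo, hi) for _ in range(P)]    # step 1: positions
    v = [0.0] * P                                  # step 1: velocities
    l = list(w)                                    # local best positions
    g = min(w, key=f)                              # global best position
    for _ in range(iters):
        for i in range(P):
            # step 4: inertia + cognitive + global components (Equation (18))
            v[i] = eta * v[i] \
                 + c1 * rnd.random() * (l[i] - w[i]) \
                 + c2 * rnd.random() * (g - w[i])
            w[i] += v[i]
            # steps 2-3: assess and update local/global bests
            if f(w[i]) < f(l[i]):
                l[i] = w[i]
            if f(w[i]) < f(g):
                g = w[i]
    return g
```

Here the iteration limit plays the role of δ_iter; the rms-based criterion is omitted for brevity.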
In order to facilitate understanding of the idea of the PSO algorithm, the method for calculating the movement of a single particle is illustratively presented in Figure 1 (in this case, a 2-dimensional solution space was considered). In the case of transition time modeling, the solution space is one-dimensional, but some modifications must be introduced into the PSO algorithm. Each particle represents a certain transition time as its position w^(k)_i. In general, the proposed method assumes the usage of a few particle swarms that compete with each other in order to obtain the parameters of the Gaussian Mixture Model. Each swarm is used to calculate the parameters of a single GMM summand. Therefore, there is a set of swarms S = {S_1, . . . , S_L}, where L is the number of swarms. Their competition relies on two mechanisms:
• the particles from a given swarm avoid being close to the particles of the other swarms, in order to find different mean values µ of the model summands;
• within a single swarm, the particles also avoid being too close to each other, which allows obtaining the standard deviation value for a given summand of the GMM.
The next change is the introduction of a topology (within the swarm), which determines the adjacency of each particle. For a one-dimensional solution space, as in the case of transition time modeling, the topology has the form of a chain, in which a particle has two neighbors, as schematically presented in Figure 2. Such an approach modifies the method of calculation of the global component g^(k) (in which the two closest particles are taken into consideration) when the particles are moving (see Equation (18)). The vector t (containing transition times) also has to be modified in the way presented in Equation (20),
where u is the vector of normalized transition times. Moreover, a parameter called particle satisfaction φ is introduced. This parameter is determined as Equation (21) shows,
where ĝ(w^(k)_i) determines how high the values of the histogram h are near the i-th particle, the value of d_a describes the average distance between the i-th particle and the adjacent particles from the same swarm, d_o describes the average distance to the other swarms, and α, β are constant coefficients.
Because the normalized transition times contained in u and the normalized histogram h contain discrete values, interpolation needs to be applied. When the location of the i-th particle is determined as w_i, four bins from the histogram h are chosen (two are related to the closest transition times below w_i, denoted as h_low2, h_low1, and the other two refer to the transition times above w_i, denoted as h_up1, h_up2) as values of the function to interpolate. The arguments of the function to interpolate are taken from the vector u. Assuming that the four chosen values are ĥ_0 = h_low2, ĥ_1 = h_low1, ĥ_2 = h_up1 and ĥ_3 = h_up2, the corresponding values from the vector u are û_0 = u_low2, û_1 = u_low1, û_2 = u_up1 and û_3 = u_up2, respectively. It is worth mentioning that the value of w_i fulfills the condition u_low1 ≤ w_i < u_up1. In the proposed mPSO algorithm, Newton interpolation is used, which consists of the following steps:

1. Determine the points (û_j, ĥ_j), for j = 0, . . . , 3, as the input for the interpolation algorithm, where g(.) is the function to interpolate.
2. Compute the divided differences of the chosen points and evaluate the resulting Newton polynomial at the particle position w_i, which yields the interpolated histogram value ĝ(w^(k)_i) (Equations (22)-(24)).
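Newton's divided-difference interpolation over four points can be sketched as follows; this is a generic textbook implementation, not a reproduction of the authors' exact Equations (22)-(24), and the test points are illustrative:

```python
# Hedged sketch: Newton divided-difference interpolation through the four
# chosen histogram points (u_hat, h_hat), evaluated at a particle position x.
def newton_interpolate(xs, ys, x):
    """Evaluate the Newton interpolating polynomial through (xs, ys) at x."""
    coef = list(ys)
    n = len(xs)
    # Build the divided-difference coefficients in place.
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    # Horner-like evaluation of the Newton form.
    result = coef[-1]
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result
```

With four points, the resulting cubic reproduces any polynomial of degree at most three exactly, which is why the four-bin neighborhood around w_i suffices.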
The second part of the particle satisfaction φ^(k+1)_i (see Equation (21)) is related to its avoidance of being close to its adjacency (called the adjacency distance d_a). Thus, d_a can be formulated as Equation (25) shows:

d_a(w^(k)_i) = (1/Ξ_a) ∑_{j∈a} |w^(k)_i − w^(k)_j|, (25)
where Ξ_a is the number of particles adjacent to the i-th particle and a describes the adjacency of the i-th particle (it is a vector containing the indices of the adjacent particles).
The last part of the particle satisfaction φ^(k+1)_i (see Equation (21)), denoted as d_o(w^(k)_i), describes a particle's avoidance of the particles from other swarms:

d_o(w^(k)_i) = (1/(L−1)) ∑_{j≠l} |w^(k)_i − al^(k)_j|, (26)
where L is the number of swarms and the al_j are calculated in accordance with Equation (30). The obtained value of φ^(k+1)_i is directly used to assess the i-th particle in the algorithm (see the step "assessment of the particles") as the value of the assessment function f(w^(k)_i). Additionally, a swarm satisfaction Φ is determined as Equation (27) presents:

Φ^(k)_l = (1/Ξ) ∑_{ξ=1}^{Ξ} φ^(k)_ξ, (27)

where Ξ is the number of particles in the l-th swarm, ξ is the particle index, k is the index of the algorithm iteration, φ_ξ determines the ξ-th particle satisfaction and l is the index of the swarm. The next modification is the addition of two components that have an influence on the movement of the particles:
• inner reluctance component: it is introduced in order to realize the avoidance between particles within a single swarm (denoted as ir_i for the i-th particle in the k-th iteration);
• external avoidance component: it is introduced in order to realize the avoidance between particles of different swarms (denoted as ea_i for the i-th particle in the k-th iteration).
Thus, the step of the algorithm called "obtaining particle movements" (see Equation (18)) has to be changed so that the inner reluctance ir^(k)_i and external avoidance ea^(k)_i components are added to the velocity update. The value of ir^(k)_i is calculated from the positions and satisfactions of the particles adjacent to the i-th particle, where A is the number of particles adjacent to the i-th particle, a is the index of an adjacent particle, φ_a is the satisfaction of an adjacent particle and the difference of positions determines the distance between the i-th particle and the adjacent one. In order to calculate the value of ea^(k)_i, the average location of a particular swarm al^(k)_l has to be determined:

al^(k)_l = (1/Ξ) ∑_{ξ=1}^{Ξ} w^(k)_ξ, (30)

where Ξ is the number of particles in the l-th swarm, ξ is the particle index, k is the index of the algorithm iteration, w^(k)_ξ determines the ξ-th particle location and l is the index of the swarm. The value of the external avoidance component ea^(k)_i is then calculated from the distances between the i-th particle and the average locations of the other swarms, where L is the number of swarms and al^(k)_l is the average location of the swarm to which the i-th particle belongs.
Generally, the modified Particle Swarm Optimization algorithm is performed as it is presented in Figures 3 and 4.
The algorithm is described below in detail:

1. Initialize the vectors h and u (according to Equations (5) and (20)) and set the number of swarms L.

2. While the index of the swarm l is lower than L: (a) initialize a new swarm S_l; (b) perform the modified Particle Swarm Optimization algorithm for the swarm S_l (the rest of the previously added swarms remain unchanged); (c) increase the index l.

3. Obtain the parameters of the Gaussian Mixture Model based on the distributions of particles in the particular swarms (see Equation (33)).
The last step will be described below. A particular swarm is a distribution of particles on the axis of normalized transition time (see Equation (20)). The input vector u was normalized (see Equation (20)); therefore, the inverse operation (see Equation (32)) needs to be performed before the whole algorithm output is obtained.
Thus, the parameters of a given GMM summand can be determined as follows: min(S_l), max(S_l) are the minimum and maximum particle locations within the whole swarm S_l, respectively, and L is the number of swarms.
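The conversion from a converged swarm to one GMM summand can be sketched as below; the exact mapping of Equation (33) is not reproduced here, so taking the mean, the standard deviation and the particle share directly from the particle positions is an assumption made for illustration:

```python
import statistics

# Hedged sketch: derive one GMM summand from a converged swarm's particle
# distribution. The mapping (mean/pstdev/particle share) is an assumption,
# not the paper's Equation (33).
def swarm_to_summand(swarm_positions, total_particles):
    mu = statistics.mean(swarm_positions)        # center of the particle cloud
    sigma = statistics.pstdev(swarm_positions)   # spread of the particle cloud
    weight = len(swarm_positions) / total_particles  # share of all particles
    return weight, mu, sigma
```

Applying this to each of the L swarms yields the weights, means and standard deviations of the output mixture, after the inverse of the normalization (Equation (32)) maps positions back to real transition times.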


Experiments and Results
Video data were recorded with the cameras (AXIS P1377), which were observing the road and the parking lot. The described optimization algorithms are implemented as modules of a video processing framework in C++ and run on a workstation computer. In reference to the mentioned re-identification method, the temporal dependency modeling has to be carried out as an initialization of the re-identification method. Thus, transition time models in the form of the Gaussian Mixture Model are used to determine the probability of object identity based on the temporal premise P_t(O_in, O_out) (see Equation (1)). In the case of the EM algorithm, the number of summands in the GMM has to be set a priori and the results of Equations (13)-(15) from the last step of the algorithm are used to create the transition time model, whereas the mPSO algorithm can adapt the number of mixtures to the histogram data. Each swarm added by the modified Particle Swarm Optimization algorithm determines the parameters of a single Gaussian from the output GMM model (see Equation (33)).
In order to test both proposed algorithms, two adjacent cameras from the video surveillance system were chosen. Their FOVs are presented in Figures 5 and 6. The transition time for each observation was determined as the time between the object's disappearance from the field of vision of the first camera and the moment of the object's reappearance in the area observed by the second camera (see Figure 7). Next, 1028 samples of transition times s were obtained manually from 13.4 h of video data. These transition times were the input data for the presented algorithms (see Equation (2)). The histogram n and the transition times vector t were also created (see Figure 8). Based on n and t, vectors containing normalized values were obtained, too (see Figure 9). Both histograms are shown in Figures 15 and 18. In order to use the normalized histogram in the modified particle swarm optimization, its values should be interpolated (see Figure 10). The interpolation operation is described with Equations (22)-(24). The interpolation error is determined as RMSE and is equal to 4.75%. The proposed mPSO is performed with the following parameters: δ_rms = 0.0005, δ_iter = 1000, L = 3, Ξ = 150, α = 0.015, β = 0.25, η = 0.7, c_1 = 0.15, c_2 = 0.15. These results are also shown (see Figures 16-18). The output GMM parameters are tabulated in Table 1.

[Figure 18. GMM obtained on the basis of the swarm distributions; the plot shows the resulting density p_e(t) over the transition time (vector t [s]), together with the weighted Gaussians contributed by swarms S_1, S_2 and S_3.]

Conclusions
Two approaches to object transition time modeling in multi-camera surveillance systems were developed and tested. The EM algorithm is based on iteratively searching for the best fit of the Gaussian Mixture Model to the transition time samples. Both methods allow for the conversion of statistically abundant data into the form of a Gaussian Mixture Model, which is a quite compact representation of the probability density function. Moreover, approximating the histogram with the Gaussian Mixture Model enables the determination of the probability for a continuous variable as an argument. However, in the EM method, the number of Gaussian Mixture Model summands has to be determined at the start of the algorithm. On the one hand, more mixtures should result in a better match to the histogram data; on the other hand, additional mixtures increase the computational cost. This increase is related to the creation of the model as well as to its usage. In the case of the modified Particle Swarm Optimization algorithm, a consecutive swarm can be added, but this swarm will not change the Gaussian Mixture Model significantly, because the previously added swarms have already occupied the most satisfying places of the normalized histogram. Therefore, the swarm S_3 was insignificant in the experiment, since it adds very little to the Gaussian Mixture Model. The proposed modified Particle Swarm Optimization algorithm thus allows not only for matching the Gaussian Mixture Model parameters to the histogram but, at the same time, adjusts the number of mixtures in the Gaussian Mixture Model.
The future development of the proposed methods is related to testing the modified Particle Swarm Optimization in a multi-dimensional solution space and checking other, more complex topologies of the swarms. Moreover, the computational complexity analysis of the modified Particle Swarm Optimization algorithm is also an interesting issue to pursue.

Funding: This work has been funded by the Polish National Science Centre within the grant belonging to the program "Preludium" No. 2014/15/N/ST6/04905 entitled: "Methods for design of the camera network topology aimed to re-identification and tracking objects on the basis of behavior modelling with the flow graph".

Data Availability Statement:
The data presented in this study are available on request from the corresponding author.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

GMM: Gaussian Mixture Model
EM: Expectation-Maximization
mPSO: modified particle swarm optimization
FOV: field of view

List of Symbols:
P_i: probability of object identity
O_in: observation event related to entrance into the area observed by a camera
O_out: observation event related to disappearance from a camera's field of vision
P_v, P_t, P_b: components of the probability of object identity based on visual, temporal, and behavioral premises, respectively
w_v, w_t, w_b: weights used to determine the importance of a given type of premise (visual, temporal, and behavioral, respectively)
s: vector containing raw input data, i.e., the transition times for all observed objects
resp_j: degree of responsibility of the j-th GMM summand for the values contained in the input vector s
L: log-likelihood used in the EM algorithm
δ: fitness threshold used in the EM algorithm
w_i: position of the i-th particle in the solution space
v_i: velocity of the i-th particle in the solution space
P: number of particles in the swarm
f(w): assessment function for a position in the solution space
l_i: best solution known by the i-th particle
g: best global solution
u: vector of normalized transition times
φ: particle satisfaction
ĝ(w_i): interpolated value of the histogram h for the position w_i of the i-th particle
d_a(w_i): average distance between the i-th particle and the adjacent particles from the same swarm
d_o: average distance to the other swarms for the i-th particle
Φ: swarm satisfaction
ir: inner reluctance
al: average swarm location
ea: external avoidance