Comparison Study of the PSO and SBPSO on Universal Robot Trajectory Planning

Abstract: Industrial robots have been steadily improved over the years, and their benefit is to make production systems more efficient. However, most methods of controlling robots have limitations that can force the robot to stop, for example, when a collision occurs. The goal of this study is to compare two ways of improving the Artificial Potential Field (APF): the traditional Particle Swarm Optimization (PSO) algorithm and the Serendipity-Based PSO (SBPSO) algorithm, applied to controlling the path of a UR5 universal robot with collision avoidance. Metaheuristic algorithms of this kind typically suffer from premature convergence. This paper presents a new approach, based on the concepts of serendipity and premature convergence, applied to the path of the UR5 universal manipulator, and compares it with the traditional PSO. The features of the SBPSO algorithm prototype are formalized in this paper using the concept of serendipity in two dimensions: intelligence and chance. The results show that the SBPSO is more efficient and has better convergence behavior than the traditional PSO for the trajectory planning of the UR5 manipulator with obstacle avoidance.


Introduction
The use of robots in manipulation tasks has grown significantly in industrial production in recent years [1]. This diffusion of robots in industrial environments led, over the years, to the development of various methods to monitor and control manipulators and mobile robots [2]. With this, robots acquired the ability to operate in environments dangerous for humans, for example, beyond Earth's atmosphere, in aquatic exploration, and in the transport of materials [3]. The purpose of this paper is to present the simulation of the Particle Swarm Optimization (PSO) and Serendipity-Based PSO (SBPSO) algorithms used to modify the APF approach, in order to generate a collision-free trajectory for a UR5 universal manipulator robot. These metaheuristics are used to optimize the repulsion constants of the potential fields of the starting point and the obstacles and the attraction constant of the end point, which are directly related to the intensity of the force resulting from the potential field.
Particle Swarm Optimization (PSO) is a heuristic algorithm inspired by the social behavior of a flock of birds. The method was proposed by Kennedy and Eberhart in 1995 [4]. Its objective is to seek the optimal solution in a search space by exchanging information between the individuals of a population to determine which trajectory they should take. In this paper, the PSO was used because it has a number of advantages, including rapid convergence and ease of implementation, but it often faces a problem in which its particles get "stuck" at merely good places. This problem is called premature convergence [5]. In this algorithm, individuals called particles behave like a set of birds searching for a flight formation; the position of each particle within the space evolves according to the collective trend of the population, so each particle's success is defined by that general trend [5]. Many engineering problems study this kind of behavior: typically, a dynamic system is modeled by a set of equations over the same variables and subjected to a certain evolution over time. A set of techniques that has been applied successfully to this kind of problem is known as bio-inspired computing [6]. These techniques have been applied in different domains, such as prediction systems [7,8], power systems [9,10], telecommunications [11,12], intrusion detection systems [13,14], and others. In the context of bio-inspired computing, Swarm Intelligence consists of a set of metaheuristics based on living beings whose emergent collective behavior results in an ability to solve problems.
It is important to mention that the role of the PSO and SBPSO algorithms in this work is to enhance the artificial potential field approach for collision-free obstacle avoidance.
In order to enhance the PSO by delaying premature convergence, the concept of serendipity is used. Serendipity is a term used to describe a fortunate situation that happens by chance. Recommendation systems were developed as a solution to the problem of information overload, and the accuracy of the recommendations has been the focus of recommendation system research [15,16]. Accuracy, however, gives rise to a problem known as overspecialization, which happens when the system only recommends items connected to the user's profile. Serendipity is a strategy that can be used to resolve overspecialization. According to [17], a recommendation system provides products based on the user's interest profile and converges on recommendations even when these items do not meet the real expectations of the user. Similarly, a metaheuristic algorithm may converge to one region without considering other, more suitable solutions. It is thus possible to observe the correlation between overspecialization and premature convergence. Some approaches in the literature use scout particles with the objective of improving the standard PSO algorithm's performance [18,19]. These approaches are stochastic, which is a benefit when combined with serendipity techniques. This work proposes Serendipity-Based Particle Swarm Optimization (SBPSO), which combines scout particles with serendipity to improve the exploration of the algorithm in robotic searches.
Serendipity, according to the Cambridge Dictionary, "is the act of obtaining important or interesting things by chance." Unlike other traditional definitions, which treat serendipity exclusively as a synonym for "chance," serendipity is a mix of chance and intelligence [20]. In [21], a formal approach with multiple categories was used to present the concept of serendipity. Logical equations called Serendipity Equations were used to present four occurrences that cause serendipity. These occurrences are linked to four different kinds of serendipity: (a) pure serendipity; (b) fake serendipity; (c) serendipity with wrong knowledge; and (d) serendipity without an inspiration metaphor.
The contribution of this paper is to obtain collision-free trajectories by using the PSO and SBPSO algorithms, and to compare the effect of the two algorithms, used to modify the APF approach, on the trajectory of a UR5 universal robot.

Artificial Potential Field
The resultant potential energy is composed of an attractive potential field and a repulsive potential field. Equations (1) and (2) represent the attractive potential field and the attraction force, respectively, given the end-effector position x = (x, y)^T and the goal coordinate x_G = (x_g, y_g). The UR5 robot is attracted to the goal object before reaching it; the goal object exerts the largest attraction force.
K_att: attraction potential field constant.
d(x, x_G): distance between the current coordinate of the manipulator's end-effector and the goal coordinate (i.e., in the pick-and-place operation of the end-effector).
The attraction force on the UR5 robot is the negative gradient of the attraction potential field. The direction of the attraction force lies along the line between the end-effector coordinate and the goal coordinate.
Equations (3) and (4) give the repulsive potential field and the repulsive force, respectively [22,23]. The repulsive potential field depends on distance: when the UR5 manipulator enters the obstacle's region of influence, it is subjected to a repulsion force, which is greater for closer obstacles. Given the obstacle coordinate x_o = (x_o, y_o), U_rep(x) is generated between the manipulator and the obstacle based on the distance between them.
d(x, x_o): distance between the current coordinate of the manipulator's end-effector and the obstacle (i.e., in the pick-and-place operation of the end-effector).
d_o: influence range of U_rep(x). The repulsive force on the UR5 robot is the negative gradient of the repulsion potential field U_rep(x).
when d(x, x_o) ≤ d_o, and zero otherwise. The direction of the repulsive force lies along the line between the manipulator and the obstacle. Equations (5) and (6) give the resultant potential field and the resultant force, respectively, acting on the UR5 manipulator.
In the resultant force F(x), F_rep(x) is the repulsive force from one or more obstacles and F_att(x) is the attractive force of the goal.
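Equations (1)–(6) follow the standard APF formulation; a sketch consistent with the quantities defined in this section (the exact scaling used by the authors may differ) is:

```latex
U_{att}(x) = \tfrac{1}{2}\, K_{att}\, d^2(x, x_G) \quad (1)
\qquad
F_{att}(x) = -\nabla U_{att}(x) = -K_{att}\,(x - x_G) \quad (2)

U_{rep}(x) =
\begin{cases}
\tfrac{1}{2}\, K_{rep} \left( \dfrac{1}{d(x, x_o)} - \dfrac{1}{d_o} \right)^{2}, & d(x, x_o) \le d_o \\[2mm]
0, & d(x, x_o) > d_o
\end{cases} \quad (3)

F_{rep}(x) = K_{rep} \left( \dfrac{1}{d(x, x_o)} - \dfrac{1}{d_o} \right) \dfrac{x - x_o}{d^{3}(x, x_o)},
\qquad d(x, x_o) \le d_o \quad (4)

U(x) = U_{att}(x) + U_{rep}(x) \quad (5)
\qquad
F(x) = F_{att}(x) + F_{rep}(x) = -\nabla U(x) \quad (6)
```

Here F_rep(x) is summed over all obstacles whose influence region contains the end-effector, which matches the statement above that F_rep(x) may come from more than one obstacle.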

Particle Swarm Optimization (PSO)
The PSO algorithm can be used to solve trajectory-planning problems using a state function value and Interactive Optimization by Particle Swarms (IPSO) [24,25]. The position of particle i at time t is x_i(t), and at time t + 1 it is updated as in Equation (7): x_i(t + 1) = x_i(t) + v_i(t + 1), where v_i(t) is the velocity of the particle [12]. Each particle has a cognitive component, related to the distance to its own best solution, and a social component, which reflects the collective's knowledge of the particle's situation. For this problem, the global-best variant of PSO was applied, with Equation (8) updating the particle velocity, where v_ij(t) is the particle's velocity in dimension j at time t, C_1 and C_2 are the acceleration parameters, and y_ij and ŷ_j are the particle's best position and the swarm's best position found from the start [12]. A velocity is assigned to each particle in the PSO algorithm. Particles move around the search space at different speeds, which are continually adjusted based on their previous behavior, so that during the search process particles tend to move toward the best regions of the search space [12]. The trajectories of the manipulator's joints can be optimized using a strategy that divides the robot into several fitness functions and performs the optimal fit with a modified PSO with a crossover operator [26]. Simulation results are presented for trajectory planning of the UR5 manipulator using redundant kinematics mounted on a floating base.
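As a sketch (not the authors' implementation), the global-best update of Equations (7) and (8) can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.5, c1=2.0, c2=2.0):
    """One global-best PSO update, following Equations (7) and (8).

    x, v  : (n_particles, dim) current positions and velocities
    pbest : (n_particles, dim) best position found by each particle
    gbest : (dim,) best position found by the whole swarm
    """
    r1 = rng.random(x.shape)  # stochastic cognitive weight
    r2 = rng.random(x.shape)  # stochastic social weight
    # Equation (8): inertia term + cognitive pull + social pull
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Equation (7): x_i(t+1) = x_i(t) + v_i(t+1)
    return x + v_new, v_new
```

A full optimizer repeatedly applies `pso_step`, refreshing `pbest` and `gbest` from the fitness values after each step; the default parameters shown (w = 0.5, C_1 = C_2 = 2) are the values used in the simulations.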
Several simulations of the PSO algorithm were carried out, varying the number of generations and particles. The best result was obtained with 30 particles and 2000 iterations, which produced the best values for the APF, with convergence between the average of all particles and the best particle. For the PSO algorithm, the following parameter values were used: number of particles = 30; cognitive and social parameters (learning rates C_1 and C_2) = 2; iterations = 2000; initial population generation range = [−1, 1]; inertia factor (w) = 0.5. The initial steps of the traditional PSO algorithm are represented in Algorithm 1. The PSO algorithm's stopping criterion is the number of generations. In metaheuristic algorithms, the elements representing the solutions identified in a particular iteration for a given search space form a set E. An element e_ij ∈ E represents a solution i in iteration j, and e*_ij ∈ E is the solution with the best fitness value discovered in iteration j. The set of possible solutions of the search space is denoted S, so E is a subset of S. A solution outside E is considered an occasional solution, given by Equation (9). The element chance_ij is a serendipitous element: when its fitness value is better than that of e*_ij, the SRD forms occasional solutions according to Equation (10). The concept of serendipity presented in this work has two important components: irregularity and acceptability.
(a) Irregularity, because it is an element in iteration j that does not belong to the set E; (b) acceptability, because it indicates a solution with a higher fitness value than the element e*_ij.

Serendipity-Based Particle Swarm Optimization
The creation of serendipity-based recommendation mechanisms is a recent and open research challenge. When the concept of serendipity is applied to metaheuristic algorithms and their optimization applications, "something excellent or useful" can be interpreted as a candidate solution, found after a specific number of iterations, that is good enough to replace the current best.
The PSO algorithm has received a lot of attention from the research community for solving real-world optimization problems because of the simplicity of its implementation and the high quality of its output. PSO is a stochastic approach for modeling the social behavior of birds. A solution in this type of algorithm is represented by a particle that "flies" over the search space looking for the best solution after a set number of iterations.
Particle motion is based on two types of information: pBest and gBest. pBest guides the particle toward its own best location, whereas gBest drives the particle toward the swarm's best position. After determining pBest and gBest, each particle uses Equations (11) and (12) to update its velocity and position. In Equation (11), the term w represents the inertia weight of the particle; X^t_id and V^t_id are the position and velocity of particle i at time t; p^t_id and p^t_gd are, respectively, the best position of particle i and the best position of all particles in the swarm up to time t; C_1 is the coefficient of the particle's self-exploration, while C_2 weights the particle's movement toward the global best of the swarm in the search space; and r_1 and r_2 are random values in the interval [0, 1]. Serendipity is presented as the ability to make lucky discoveries by way of intelligence and chance. In this work, the term "intelligence" should be understood as the quality of perceiving an event. This concept of serendipity is useful and interesting since it defines where serendipity can be applied to a traditional PSO algorithm, in addition to the intelligence dimension, to improve the PSO's behavior through serendipity-inspired choices. Three natural techniques to implement this enhancement are defined. The first is based on modifications of the random-choice points available in the conventional PSO method. The second requires the addition of a mechanism for implementing a PSO algorithm using serendipity [27,28]. The third approach is a hybrid of the previous two. All of these techniques consider the search space's "lucky discovery points," because these matter both to the optimization technique and to the best available solution in the current iteration. This work follows the third approach.
In [17], a strategy was proposed to generate serendipity based on a perceptual model [29] that combines two strategies used in recommendation systems [30]: Pasteur's principle and blind luck. Pasteur's principle is used to detect capabilities and apply insights to seize the moment; it employs scout particles to inspect regions of the search space unexplored by the swarm, according to Equations (11) and (12). The blind-luck strategy is also implemented through the scout particles: they implement the random dimension and complement the space exploration of the traditional PSO, generating additional diversity. These strategies implement the concept of intelligence, or "the prepared mind," resulting in a new algorithm called Serendipity-Based Particle Swarm Optimization (SBPSO). This algorithm uses scout particles to search peaks or valleys around the best point of each iteration. Scout particles slow down the convergence of the optimization algorithm and can be utilized to enhance its scanning capacity; it is worth remembering that adding the scout particles can help the swarm behavior. In the SBPSO method, the scout particles are used to provide solutions that are better than the swarm's best particle. Equation (13) gives the velocity of a scout particle k, where C_3 is the diversity coefficient, r_3 is a random value in the interval [−1, 1], X^t_kd is the position of the scout particle k, and X^t_best is the best particle's position in the swarm at time t. Equation (14) gives the position of a scout particle k. Algorithm 2 describes the initial steps of the SBPSO algorithm. The position and velocity of each swarm particle and each scout particle are randomly initialized (lines 1-2). The fitness value of each particle is then calculated to determine which particle is the best, for the swarm particles (lines 4-11) and for the scout particles (lines 12-19).
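A minimal sketch of the scout-particle update of Equations (13) and (14), assuming the positions are NumPy arrays (this is not the authors' code), is:

```python
import numpy as np

rng = np.random.default_rng(1)

def scout_step(x_scout, x_best, c3=1.5):
    """Scout-particle update following Equations (13) and (14).

    r3 is drawn uniformly from [-1, 1], so a scout may move toward or
    away from the swarm's best position, adding diversity to the search.
    """
    r3 = rng.uniform(-1.0, 1.0, x_scout.shape)
    v = c3 * r3 * (x_best - x_scout)   # Equation (13)
    return x_scout + v, v              # Equation (14)
```

Because r_3 can be negative, scouts explore on both sides of the best particle, which is what lets them find peaks or valleys the swarm itself would skip.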
In the next step, the best particle is found and named the new gBest particle (lines 20-26). After this, the particles start the inspection process in the proximity of the gBest particle (lines 27-31). For this, the algorithm creates n adjacent points (AP), as presented in Equation (15), where best is the current best particle, α is a positive real constant, and R is a 1 × dim matrix with entries randomly distributed in the range [−1, 1], dim being the current particle's dimension.
According to Equation (16), for each AP_n that was created, we define a vector d⃗_n that represents the oriented segment from best to AP_n; best and AP_n thus define a single direction. IP^m_n is an inspection point m defined on the straight line that passes through best and AP_n, u⃗ is the vector d⃗_n that has the smallest norm, and λ is a non-zero real constant. Two criteria are defined to find an inspection point that can replace the current gBest particle: (1) the IP's fitness value is better than that of the current gBest particle, and (2) the IP's fitness value is better than that of the other IPs. If more than one IP meets these two criteria, a lottery is performed to choose one of them. According to Pasteur's principle, luck favors the prepared mind; in this work, the concept of the "prepared mind" is implemented through inspections around the gBest particle. As a result, the intelligence dimension is implemented. Figure 1 presents an example of the choice of random inspection points in a 2D space.
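Equations (15) and (16) can be sketched as follows; the exact step rule for the inspection points is an assumption reconstructed from the description above, not the authors' formula:

```python
import numpy as np

rng = np.random.default_rng(2)

def inspection_points(best, n_adjacent=4, n_inspect=2, alpha=0.1, lam=1e-4):
    """Create adjacent points AP_n = best + alpha * R (Equation (15)),
    with R uniform in [-1, 1]^dim, then place inspection points on the
    line through `best` and each AP_n.  The step IP^m_n =
    best + m * lam * d_n / ||u|| (u = shortest d_n) is an assumed
    reading of Equation (16)."""
    dim = best.shape[0]
    R = rng.uniform(-1.0, 1.0, (n_adjacent, dim))
    ap = best + alpha * R                        # Equation (15)
    d = ap - best                                # oriented segments d_n
    u_norm = np.linalg.norm(d, axis=1).min()     # norm of shortest d_n
    ips = np.array([best + m * lam * d_n / u_norm
                    for d_n in d
                    for m in range(1, n_inspect + 1)])
    return ap, ips
```

In the SBPSO loop, the fitness of each inspection point would then be compared against gBest using the two acceptance criteria described above.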
Algorithm 2 (excerpt), selection of the new gBest:
16: if fit(best_swarm) < fit(best_scout) and fit(best_swarm) < fit(gBest) then gBest ← best_swarm
17: else if fit(best_scout) < fit(best_swarm) and fit(best_scout) < fit(gBest) then gBest ← best_scout
Figure 1. Example of random selection of 8 inspection points around the gBest particle, in a 2D space; this is the result for each of the 4 adjacent points defined.
In Figure 1, the direction of each of the straight lines is stochastic, since they connect the gBest to the inspection points. Over the iterations, the lines give a reasonable perception of the behavior of the fitness function, even in an n-dimensional space.
Again, the swarm particles' velocity is calculated, and the scout particles' position is updated (lines 32-35). Finally, the program assesses the swarm's state in terms of premature convergence and stagnation (line 36). The SBPSO then restarts the position and velocity of the scout particles (line 38).

Methods for Detecting Premature Convergence
The PSO algorithm is a mechanism that can generate good trajectories in a short amount of time. However, the algorithm runs the risk of converging prematurely. The swarm converges prematurely when the proposed solution approaches a merely good location instead of the optimum of the problem being optimized. Premature convergence is caused by a decrease of diversity in the search space, which causes the swarm to stagnate. After premature convergence begins, the particles continue to close in on each other in a very small region of the search space [27]. In order to avoid swarm stagnation, three strategies for detecting early convergence are proposed in [28]:

1.
Determines the greatest Euclidean distance between the gBest particle and a swarm particle. Stagnation develops when this distance is smaller than a threshold distance termed δ_stag, as presented in Equation (17), where ||e_ij − e*_ij|| is the Euclidean distance between a particle of the swarm and the gBest particle in iteration j, and µ_min and µ_max represent the minimum and maximum bounds of particle i, respectively.

2.
Cluster Analysis evaluates the percentage of particles belonging to a cluster C. If the distance between a particle belonging to C and the gBest particle is less than a threshold δ_min, and the percentage of particles in C reaches a threshold δ_max, convergence has occurred.

3.
Objective Function Slope considers the position of particles in the search space based on the normalized value and rate of change of the objective function, obtained through Equation (18), where f(ŷ_j) is the objective function's fitness value in iteration j and f(ŷ_{j−1}) is its value in iteration j − 1. A counter is incremented whenever the value of f_ratio is less than a predefined threshold.
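The first and third detection strategies can be sketched as follows; the exact normalization of Equation (18) is an assumption here:

```python
import numpy as np

def stagnated(swarm, gbest, delta_stag=1e-4):
    """Strategy 1: the swarm is considered stagnant when the largest
    Euclidean distance between gBest and any swarm particle falls
    below the threshold delta_stag (Equation (17))."""
    dmax = np.linalg.norm(swarm - gbest, axis=1).max()
    return dmax < delta_stag

def f_ratio_low(f_j, f_jm1, threshold=1e-8):
    """Strategy 3 (sketch): normalized change rate of the objective
    between iterations j-1 and j (assumed form of Equation (18)).
    The caller increments a counter each time this returns True."""
    return abs(f_j - f_jm1) / (abs(f_j) + 1e-300) < threshold
```

Either test, or both, can be checked at line 36 of Algorithm 2 to decide whether the scout particles should be restarted.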


Computational Experiments
This section presents the computational experiments and the implementation results of the APF+PSO and APF+SBPSO algorithms used to obtain collision-free trajectories, as well as the potential surfaces produced by each algorithm. Figure 2 plots fitness against generations; it can be seen that, with 30 particles and 2000 iterations, the best particle and the average of all particles converge.

A. APF+PSO simulations
The potential field parameters found in the simulation of the PSO algorithm with 30 particles and 2000 iterations are presented in Table 1. Figure 3 shows the lines of the artificial potential field and the collision-free trajectory in Cartesian space for the simulation performed, using the parameters in Table 1 and the locations of the obstacles. Figure 3. Lines of the potential field obtained by the APF algorithm for the parameters in Table 1.
Figure 4 shows the resulting surface of the potential field for the scene of Figure 3.


B. APF+SBPSO simulations
This section presents the benchmark functions used to evaluate the proposed algorithm and the parameters used.
The Benchmark Functions
The functions chosen are often applied in minimization problems to confirm the stagnation and convergence behavior of the algorithm [28]. They are used in [29-31] and several other PSO studies, and are described below:

1. Sphere function—characterized by being simple, convex, and unimodal; mainly used to evaluate the convergence rate of the search process. It is represented in Equation (19).

2. Rosenbrock function—has its global minimum in a parabolic valley, where convergence is difficult [32]; it is represented in Equation (20).
3. Griewank function—has several regularly distributed local minima; it is represented in Equation (21).
4. Rastrigin function—highly multimodal, with many regularly distributed local minima; it is represented in Equation (22).

5. HappyCat function—characterized by being unimodal; this feature makes optimization algorithms find it difficult to escape from a "black groove" [33]. It is represented in Equation (23).
6. Ackley function—has several local minima in a parabolic valley and is characterized by an almost flat outer region; it is represented in Equation (24).
For each of the functions presented, Table 2 shows the search space boundaries and the initialization ranges.
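Under their commonly used definitions (the paper's exact Equations (19)-(24) are not reproduced here, so the constants below are the standard ones), the six benchmark functions can be sketched in Python as:

```python
import numpy as np

def sphere(x):
    """Equation (19): simple, convex, unimodal."""
    return np.sum(x**2)

def rosenbrock(x):
    """Equation (20): global minimum inside a parabolic valley."""
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def griewank(x):
    """Equation (21): many regularly distributed local minima."""
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def rastrigin(x):
    """Highly multimodal, regularly spaced local minima
    (Equation (22), inferred from the numbering)."""
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def happycat(x, alpha=1.0 / 8.0):
    """Equation (23): unimodal, with a narrow curved 'groove'."""
    d, s = x.size, np.sum(x**2)
    return ((s - d)**2)**alpha + (0.5 * s + np.sum(x)) / d + 0.5

def ackley(x):
    """Equation (24): almost flat outer region, many local minima."""
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
```

The known global minima (0 at the origin for Sphere, Griewank, Rastrigin, and Ackley; 0 at the all-ones point for Rosenbrock; 0 at the all-minus-ones point for HappyCat) make these convenient for checking convergence and stagnation.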

Table 2. Search space boundaries and initialization ranges for each benchmark function.
The values associated with the parameters w, C_1, and C_2 are those defined in Equation (11). The performance of the standard PSO, as well as of its modifications, can vary with these parameters [34,35].
To perform a proper comparison, all shared parameters of the SBPSO and the standard PSO were given the same values: the inertia weight w starts at 0.9 and decays linearly to 0.3, and C_1 = C_2 = 2.0. The maximum velocity V_max of the particles was defined relative to the dimensions of the search space [25].
Other parameters are specific to the SBPSO. They are related to the quantity and speed of the scout particles, the detection of premature convergence, and the creation of adjacent and inspection points. The number of scout particles was set to 12 percent of the total number of particles. The additional parameters of Equations (13), (15), and (16) were set as C_3 = 1.5, α corresponding to 2% of the search space, and λ = 10^−4.
Two thresholds, δ_stag and δ_conv, were used to detect premature convergence. δ_stag was assigned the value 10^−4, and δ_conv corresponds to 6% of the total number of iterations; δ_conv sets the number of iterations over which the gBest particle's fitness value has not improved. All of the above parameters were set after repeated simulations and produced good results in the functions studied.

Results
The performance of the SBPSO algorithm is evaluated and compared with the traditional PSO algorithm applied to UR5 trajectory planning. The results show that, in the SBPSO, the combination of serendipity with the scout particles remains more active than the traditional PSO. Figures 5-10 show the behavior of the swarm (with 30 particles) on the Sphere, Rosenbrock, Griewank, Rastrigin, HappyCat, and Ackley functions, respectively, for 2000 iterations. In these figures, the traditional PSO stagnated before iteration 2000, while the SBPSO was still active. Tables 3 and 4 show the population size, the number of iterations, and the best fitness value and standard deviation for the benchmark functions.
Figure 11 shows the contour path with collision avoidance for the UR5 universal manipulator robot. Figure 11. Track contour of the UR5 universal manipulator robot with the SBPSO algorithm.


Conclusions
This paper developed a Serendipity-Based Particle Swarm Optimization (SBPSO) variant of the traditional PSO in order to prevent the premature convergence of optimization procedures. The algorithm is based on an approach with two dimensions of the serendipity concept: intelligence and chance. The intelligence dimension is implemented by inspections in the adjacencies of the gBest particle; the chance dimension is implemented by using the scout particles to improve search space exploration. The studies showed that the SBPSO outperformed the standard PSO. To make these comparisons, two criteria were analyzed: convergence and stagnation. Under the convergence criterion, when the Griewank and Rastrigin functions were evaluated, the SBPSO algorithm found the global solutions faster than the PSO algorithm. For the Sphere, Rosenbrock, HappyCat, and Ackley functions, neither algorithm found the optimal solutions; however, the SBPSO algorithm found very satisfactory solutions when compared with the PSO. The evaluation of the stagnation criterion showed that, for the Sphere function, the SBPSO delayed stagnation by at least 21% compared with the traditional PSO. For the Rosenbrock function, the stagnation delay was at least 30% of the number of iterations compared to the traditional PSO. For the Griewank function, the SBPSO delayed stagnation by at least 20% compared to the traditional PSO. For the Rastrigin function, stagnation was delayed by at least 34% compared to the traditional PSO. For the HappyCat function, the SBPSO delayed stagnation by at least 20% of the maximum number of iterations. Finally, for the Ackley function, the SBPSO delayed stagnation by at least 20% in relation to the traditional PSO algorithm.
In these experiments, the SBPSO algorithm showed better convergence behavior, overcoming some limitations related to the quality of the solution, such as the ability to find the global optimum, the stability of the solutions, and the ability to restart the swarm movement in case of stagnation. In general, the results solve the optimization problem of UR5 universal robot trajectory planning and find the optimal path. In future work, we will apply the SBPSO algorithm to other real-world optimization problems, such as vehicle localization, wireless sensor networks, and velocity estimation.