A Hybrid Particle Swarm Optimization Algorithm Enhanced with Nonlinear Inertial Weight and Gaussian Mutation for Job Shop Scheduling Problems

Abstract: The job shop scheduling problem (JSSP) is of high theoretical significance in academia and high practical significance in manufacturing. It has therefore attracted scholars from many different fields, and many meta-heuristic algorithms have been proposed to solve it. As a meta-heuristic algorithm, particle swarm optimization (PSO) has been used to optimize many practical problems in industrial manufacturing. This paper proposes a hybrid PSO enhanced with a nonlinear inertia weight and Gaussian mutation (NGPSO) to solve the JSSP. The nonlinear inertia weight improves the local search capability of PSO, while the Gaussian mutation strategy improves the global search capability of NGPSO, which helps the population maintain diversity and reduces the probability of the algorithm falling into a local optimal solution. The proposed NGPSO algorithm is applied to 62 benchmark instances of the JSSP, and the experimental results are compared with those of other algorithms. Analysis of the experimental data shows that NGPSO outperforms the comparison algorithms in solving the JSSP.


Introduction
The job shop scheduling problem (JSSP) is a simplified model of many practical scheduling problems, including aircraft carrier scheduling, airport dispatching, high-speed rail scheduling, automobile assembly lines, etc. Therefore, the JSSP has high theoretical and practical significance. The JSSP studies how to process n jobs on m machines so as to optimize processing performance. To this end, it is necessary to determine the start time, completion time, or processing sequence of each job on each machine while satisfying the process constraints. The completion time of all jobs is called the makespan [1].
Similar to most discrete optimization problems, the JSSP is strongly NP-hard: it has non-polynomial time complexity and a complex solution-space structure, and usually requires appropriate strategies to reduce the search range and problem complexity. Traditional operations research methods solve such problems by establishing mathematical models. Typical representatives of these methods include the branch and bound algorithm (BAB) [2], the Lagrangian relaxation method (LR) [3], and dynamic programming (DP) [4]. BAB is an algorithm commonly used to solve integer programming problems and belongs to the class of enumeration methods. In the solution process, BAB maintains upper and lower bounds on the solution and excludes solutions that fall outside these bounds, thereby reducing the dimensionality of the candidate solution subset and finding the target value more quickly. Experimental results show that BAB is effective for small-scale scheduling problems, but its computation time is often too long for large-scale problems. In LR, some constraints of the problem are relaxed to simplify the original problem, and the optimal or an approximate solution is then obtained by updating the Lagrangian multipliers. DP decomposes the problem into several sub-problems, solves the sub-problems first, and then obtains the solution of the original problem from their solutions. In general, the above methods are effective only for small-scale problems, depend strongly on the specific problem being solved, and are not suitable for actual production problems.
Heuristic algorithms, by contrast, can obtain an approximate or exact optimum in acceptable time. They mainly fall into two categories: constructive heuristics and meta-heuristics. Although constructive heuristics can find a solution in the solution space, they depend on local information about the scheduling problem, and the solution obtained is generally only locally optimal. Meta-heuristics are optimization algorithms inspired by natural principles, which they describe formulaically in computer language. They can obtain the optimal or a near-optimal solution in acceptable time and have become a practical and feasible class of algorithms for various scheduling problems. Meta-heuristics mainly include: the artificial bee colony algorithm (ABC) [5,6], clonal selection algorithms (CSA) [7,8], the simulated annealing algorithm (SA) [9], the genetic algorithm (GA) [10-13], the bat algorithm (BA) [14], the whale optimization algorithm (WOA) [15], the biogeography-based optimization algorithm (BBO) [16,17], the particle swarm optimization algorithm (PSO) [18-20], the differential evolution algorithm (DE) [21], the ant colony optimization algorithm (ACO) [22,23], etc.
To obtain good performance, a meta-heuristic algorithm needs strategies that balance its local search capability (exploitation) and global search capability (exploration) [24]. The advantage of meta-heuristic algorithms is that, in theory, they can combine the two in an optimal way. In practice, however, the population size cannot be infinite, the fitness function cannot fully reflect the real adaptability of individuals, and the behavior of individuals cannot perfectly reproduce the intelligence of individuals in nature; these factors limit algorithm performance. Therefore, balancing exploration and exploitation more effectively by mixing different strategies has become a hot issue in meta-heuristic research. In recent years, many hybrid meta-heuristic algorithms have been used to solve the JSSP. Wang and Duan [25] introduced chaos theory into the biogeography-based optimization algorithm (BBO) to improve its stability and accelerate its convergence rate. Lu and Jiang [26] divided the bat algorithm (BA) population into two subpopulations and added a parallel search mechanism, a communication strategy, and an improved discrete population update method to solve the low-carbon JSSP. Rohainejad et al. [27] combined the firefly algorithm (FA) and tabu search (TS), which effectively reduced the overtime cost in the JSSP and made the algorithm more robust. Babukartik et al. [28] added the search strategy of the cuckoo search algorithm into ACO to improve its exploitation efficiency for the JSSP. Yu et al. [29] added a disturbance mechanism on top of DE and chaos theory to reduce the possibility of premature convergence of the teaching-learning-based optimization algorithm (TLBO). Lu et al. [30] improved the social hierarchy and genetic factors of the multi-objective grey wolf optimizer (GWO) to enhance exploration and exploitation, thereby improving its efficiency on the dynamic JSSP. Min Liu et al. [31] added Levy flight and DE into the WOA for solving the JSSP; the Levy flight strategy enhanced the global search capability and convergence rate, while DE enhanced the local optimization capability and kept the scheduling schemes diverse to prevent premature convergence. Nirmala Sharma et al. [32] improved the iterative formula of the onlooker bees in ABC to solve the JSSP. Xueni Qiu et al. [33] combined artificial immune system (AIS) theory and PSO to solve the JSSP; the algorithm adopts clonal selection and immune network theories to simulate the basic immune processes of cloning, hypermutation, and antibody selection, while using PSO to accelerate the search. Atiya Masood et al. [34] proposed a hybrid genetic algorithm for the multi-objective JSSP. Adel Dabah et al. [35] added a parallel strategy to tabu search (TS) to improve its capability on the blocking JSSP. Hong Lu and Yang Jing [36] proposed a new PSO based on clonal selection theory to avoid premature convergence. R. F. Abdel-Kader [37] introduced two neighborhood-based operators into PSO to improve population diversity for the JSSP: the diversity operator improves population diversity by repositioning neighboring particles, and the local search operator performs local optimization on neighboring regions. T.-K. Dao et al. [38] proposed a parallel BA with a mixed communication strategy for the JSSP, in which each subpopulation communicates with the others and shares the computing load on the processor; the algorithm has excellent accuracy and convergence rate. Tianhua Jiang et al. [39] proposed a hybrid WOA for the energy-efficient JSSP.
More and more hybrid meta-heuristic algorithms are being used to solve the JSSP [31]. However, the above algorithms mainly overcome algorithmic shortcomings by increasing population diversity, preventing convergence to local optima, and increasing stability. Few scholars analyze algorithm performance from the perspective of balancing global and local search capabilities. In fact, the performance of a meta-heuristic algorithm largely depends on whether these two capabilities can be effectively balanced. Furthermore, the two strategies we integrate, Gaussian mutation and a nonlinear inertia weight, are common strategies when viewed separately. When integrated together, however, they show the superiority of a combined strategy in expanding the scale of individual variation while keeping the individual scale stable. From the perspective of the stability of individual evolution, the advantages of the integrated strategy are more obvious in the later stages: the spatial dissipation of the population gradually slows down, the internal energy entropy gradually decreases, the population convergence trend becomes more stable, and the evolution of individuals gradually stabilizes. Therefore, in this research, it is important to balance the global and local search capabilities of the algorithm through different strategies to overcome the shortcomings of PSO.
Like other meta-heuristic population intelligence algorithms (PIA), PSO is a global optimization algorithm; it was first proposed by Kennedy and Eberhart in 1995 [40]. Compared with other PIA, PSO has a simple concept, few parameters, and a fast convergence rate. It has therefore become a very successful algorithm and has been used to optimize various practical problems. The unique information exchange and learning mechanism of PSO moves the whole population in the same direction, which endows PSO with strong search performance and high adaptability to optimization problems. At the same time, PSO has strong exploitation but very poor exploration [41-45]; exploitation and exploration are the local and global optimization capabilities of an algorithm, respectively. For a meta-heuristic algorithm to have good search performance, it should effectively balance exploitation and exploration during its iterations. Like other stochastic algorithms, PSO has shortcomings: when the initial solution is far from the global optimum, PSO often exhibits premature convergence and local stagnation. To overcome these shortcomings, it is necessary to introduce strategies that effectively balance exploitation and exploration. This paper proposes a hybrid PSO enhanced by a nonlinear inertia weight and Gaussian mutation (NGPSO) to solve the JSSP. In the exploration stage, PSO mixes in Gaussian mutation to improve the global search capability of the algorithm. In the exploitation stage, the nonlinear inertia weight is used to improve the population update rules and thus the local optimization performance.
The rest of this paper is structured as follows: Section 2 introduces the basic PSO algorithm. Section 3 describes the proposed NGPSO algorithm and analyzes why the nonlinear inertia weight operator and the Gaussian mutation strategy are merged into PSO. Section 4 introduces the application of NGPSO to the JSSP. Section 5 analyzes the experimental results of NGPSO on JSSP benchmark instances. Finally, Section 6 gives a brief summary and an outlook on future work.

An Overview of the PSO
Population intelligence and evolutionary algorithms have long attracted scholars in various research fields. Researchers derive inspiration from the behavior of biological populations in nature or from the laws of biological evolution to design algorithms [46]. As one of many swarm intelligence algorithms, PSO has been proven effective in various optimization fields [47-52].
PSO is a global optimization algorithm first proposed by Kennedy and Eberhart in 1995 [39,53]. It conducts a collaborative global search by simulating the foraging behavior of biological populations such as bird flocks and fish schools [54]. The PSO algorithm enhances the information exchange between individuals in the population by sharing learning information, which promotes a consistent evolution direction of the population. This mode of information exchange gives PSO a strong search capability and a high degree of adaptability to optimization problems. In a d-dimensional solution space [55], each particle i carries two vectors: a speed vector V_i = (v_i1, ..., v_id) and a position vector X_i = (x_i1, ..., x_id) [56]. Each individual in the population iterates through two formulas, which update the particle speed and position as follows:

v_id^(t+1) = ω v_id^t + c_1 r_1 (pbest_id − x_id^t) + c_2 r_2 (gbest_d − x_id^t),  (1)

x_id^(t+1) = x_id^t + v_id^(t+1),  (2)

where ω is the inertia weight, an important parameter affecting the search performance of the algorithm [57]; its value indicates how much of the current individual speed a particle inherits. c_1 and c_2 are called acceleration factors: c_1 is the cognitive coefficient, representing the particle's own cognitive experience, and c_2 is the social coefficient, representing the particle's capability to learn from the current global optimal solution. r_1 and r_2 are two independent random numbers in [0, 1] [37]; pbest_id is the personal best of particle i in the d-th dimension, and gbest_d is the global best of all particles in the d-th dimension.
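The update rules in Equations (1) and (2) can be illustrated with a minimal Python sketch. The function name and the default parameter values are illustrative assumptions, not the paper's settings.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.9, c1=2.0, c2=2.0):
    """One velocity/position update per Equations (1) and (2).

    positions, velocities, pbest: lists of d-dimensional lists, one per particle;
    gbest: best d-dimensional position found by the swarm so far.
    Parameter defaults (w, c1, c2) are illustrative, not the paper's values.
    """
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            # Equation (1): inertia + cognitive + social components
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            # Equation (2): move the particle
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

With both pbest and gbest ahead of the particle, the two stochastic terms pull the particle toward them while the inertia term preserves part of its previous velocity.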

NGPSO Algorithm
The performance of a hybrid meta-heuristic optimization algorithm relies on the complementary advantages of its optimization strategies: some strategies have better local search capabilities, while others have better global search capabilities. How to improve both the local and global optimization capabilities of an algorithm has become the key to improving its performance [58-61]. From the perspective of the iterative process, the global search capability is essentially the full-area search level determined by the breadth-first principle of the iterating individuals: the algorithm can search a larger area of the solution space to obtain a better solution [62]. The local search capability, on the other hand, is essentially the sub-area search level determined by the depth-first principle of the iterating individuals: it exploits the area already searched to improve the solution, prevents the algorithm from stagnating, and makes it possible for the algorithm to keep iterating in the search area and finally obtain a high-precision optimal solution. Therefore, balancing the full-area and sub-area search levels is the key to optimizing the search capability and alleviating premature convergence of the algorithm.

Nonlinear Inertia Weight Improves the Local Search Capability of PSO (PSO-NIW)
PSO is a very successful algorithm, often applied to various practical problems. Nevertheless, like other optimization algorithms, PSO has shortcomings [63], and appropriate strategies are needed to remedy them. The inertia weight ω, as a learning level and distribution proportion for balancing particle behavior, indicates the degree of influence of the previous particle speed on the current particle speed and determines the search capability of the algorithm. A suitable inertia weight can therefore serve as an improvement strategy that enhances the convergence rate and solution accuracy of PSO. At present, a linearly decreasing inertia weight, also called the linear ω-adjustment strategy, is commonly used [64]. In an actual iteration process, ω decreases linearly as the number of iterations increases. In practical optimization, a global search is always used first to quickly converge the population to a certain area of the search space, and a local search is then used to obtain high-precision solutions. The ω update formula is:

ω = ω_max − (ω_max − ω_min) · iter / iter_max,  (3)

where ω_max and ω_min are the maximum and minimum inertia weights, and iter and iter_max are the current and maximum numbers of iterations, respectively. Generally, ω is initialized to 0.9 and then decreased linearly to 0.4, which yields good results. However, the search process in practical problems is often nonlinear, so the linearly decreasing strategy is not well suited to them. Therefore, in this paper, a nonlinear inertia weight adjustment curve is used to adjust ω, as shown in Equation (4).
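The contrast between the two schedules can be sketched in Python. The linear schedule follows Equation (3); the nonlinear curve below is a hypothetical power-law decay (the paper's exact Equation (4) is not reproduced here), chosen only to illustrate the idea of staying near ω_max longer and dropping faster late in the run.

```python
def linear_inertia(it, it_max, w_max=0.9, w_min=0.4):
    # Equation (3): linearly decreasing inertia weight
    return w_max - (w_max - w_min) * it / it_max

def nonlinear_inertia(it, it_max, w_max=0.9, w_min=0.4, k=2.0):
    # Hypothetical nonlinear curve (assumption; not the paper's Equation (4)):
    # a power-law decay that keeps w high early for exploration and drops it
    # faster late in the run to favour local search.
    return w_min + (w_max - w_min) * (1.0 - it / it_max) ** k
```

Both schedules start at 0.9 and end at 0.4, but at the midpoint the nonlinear curve with k = 2 gives ω = 0.525 versus 0.65 for the linear one, i.e. it shifts effort toward exploitation earlier.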

Gauss Mutation Operation Improves the Global Search Capability of PSO (PSO-GM)
The Gaussian distribution, also called the normal distribution, is usually written N(µ, σ²), where µ is the mean and σ² is the variance. The Gaussian distribution obeys the 3σ rule: random numbers generated according to the Gaussian distribution [65] fall into the intervals [µ − σ, µ + σ], [µ − 2σ, µ + 2σ], and [µ − 3σ, µ + 3σ] with probabilities of about 68%, 95%, and 99.7%, respectively. From the point of view of population energy, Gaussian mutation can be regarded as a kind of energy dissipation [66]: as the diversity of the population increases, its entropy becomes higher, which reduces the energy of the population and gives it higher stability and adaptability. The 3σ rule provides good mathematical-theoretical support for optimizing the distribution of individuals in the solution space. If a variable x follows the Gaussian distribution, its probability density function is shown in Equation (5):

f(x) = (1 / (σ√(2π))) exp(−(x − µ)² / (2σ²)).  (5)

The Gaussian distribution function of the random variable x is as follows [67]:

F(x) = ∫_{−∞}^{x} (1 / (σ√(2π))) exp(−(t − µ)² / (2σ²)) dt.  (6)

Gaussian mutation is a mutation strategy that applies random disturbances generated by the Gaussian distribution to the original individual. In this paper, the d-dimensional Gaussian mutation operator can be expressed as:

x'_id = x_id + N(0, σ²),  (7)

where X'_i is the individual after Gaussian mutation. We then use a greedy selection strategy to prevent the population from degenerating; the specific operation is:

X_i^(t+1) = X'_i if f(X'_i) < f(X_i), otherwise X_i^(t+1) = X_i.  (8)
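The mutation-plus-greedy-selection pair can be sketched as follows. The additive disturbance form and the `sigma` value are assumptions for illustration; the greedy step guarantees the population never degenerates, since a mutant is kept only if it improves the (minimized) fitness.

```python
import random

def gaussian_mutate(x, sigma=0.1):
    # Perturb each dimension with N(0, sigma^2) noise (additive form assumed);
    # sigma = 0.1 is an illustrative choice, not the paper's setting.
    return [xd + random.gauss(0.0, sigma) for xd in x]

def greedy_select(x, x_new, fitness):
    # Greedy selection: keep the mutant only if it improves the fitness
    # (minimisation), which prevents the population from degenerating.
    return x_new if fitness(x_new) < fitness(x) else x
```

Repeatedly applying `greedy_select(x, gaussian_mutate(x), f)` yields a sequence whose fitness is monotonically non-increasing, while the Gaussian noise keeps injecting diversity.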

The Main Process of NGPSO
The main process of the proposed hybrid NGPSO is given below:
Step 1: Initialize the population and related parameters, including the population size and the termination conditions of the algorithm;
Step 2: Initialize the speed and position of the particles, and record pbest and gbest;
Step 3: Update the position and speed of all individuals according to the speed update Equation (1) and the position update Equation (2) of PSO;
Step 4: To prevent premature convergence, perform Gaussian mutation on the individuals to generate a new population;
Step 5: Re-evaluate the fitness of the individuals in the new population. If the fitness of a new individual is better than that of the previous generation, replace the previous-generation individual with the new one; otherwise keep the original individual unchanged;
Step 6: Update pbest and gbest;
Step 7: Check whether the algorithm has reached the termination condition. If the iteration accuracy reaches the termination condition, end the algorithm; otherwise go back to Step 3.
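Steps 1-7 above can be sketched end to end for a continuous minimization problem. All parameter defaults, the power-law inertia curve, and the additive mutation form are illustrative assumptions, not the paper's tuned settings.

```python
import random

def ngpso(fitness, dim, n_particles=30, iters=200, lo=-1.0, hi=1.0,
          w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, sigma=0.1):
    """Minimal sketch of NGPSO Steps 1-7 (parameter values are assumptions)."""
    # Steps 1-2: initialise positions, velocities, pbest and gbest
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pfit = [fitness(p) for p in pbest]
    g = min(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for it in range(iters):
        # Nonlinear inertia weight (assumed power-law decay, not the
        # paper's exact Equation (4))
        w = w_min + (w_max - w_min) * (1.0 - it / iters) ** 2
        for i in range(n_particles):
            # Step 3: velocity and position update, Equations (1) and (2)
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # Step 4: Gaussian mutation keeps the population diverse
            mutant = [x + random.gauss(0.0, sigma) for x in pos[i]]
            # Step 5: greedy selection between the particle and its mutant
            if fitness(mutant) < fitness(pos[i]):
                pos[i] = mutant
            # Step 6: update pbest and gbest
            f = fitness(pos[i])
            if f < pfit[i]:
                pbest[i], pfit[i] = pos[i][:], f
                if f < gfit:
                    gbest, gfit = pos[i][:], f
        # Step 7: the iteration bound serves as the termination condition
    return gbest, gfit
```

On a simple test function such as the 2-D sphere, this loop drives the global best fitness close to zero within a few hundred iterations.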

The JSSP Model
The JSSP is a simplified model of various practical scheduling problems and is currently the most widely studied type of scheduling problem. JSSP description: a processing system has m machines that must process n jobs, and each job contains k operations. The processing time of each operation is known, and each job must be processed in its operation order. The task of scheduling is to arrange the processing sequence of all jobs so that the performance indicators are optimized while satisfying the constraints [68]. A common mathematical description of the n/m/C_max scheduling problem (where C_max is the maximum completion time, called the makespan [69]) is as follows [70]:

min C_max = min max{c_ik},  (9)

subject to
c_ik − p_ik + M(1 − a_ihk) ≥ c_ih, i = 1, ..., n; h, k = 1, ..., m,
c_jk − c_ik + M(1 − x_ijk) ≥ p_jk, i, j = 1, ..., n; k = 1, ..., m,
c_ik ≥ 0,  (10)

where Equation (9) is the objective function and the constraints (10) enforce the operation sequence of each job and the processing order on each machine. c_ik and p_ik are the completion time and processing time of job i on machine k; M is a sufficiently large real number; a_ihk and x_ijk are an indicator factor and an indicator variable, with the following meanings [71,72]:
a_ihk = 1 if machine h processes job i before machine k, 0 otherwise;
x_ijk = 1 if job i is processed on machine k before job j, 0 otherwise.
In addition, the disjunctive graph is an important tool for describing the JSSP. For an n × m JSSP (N operations), the corresponding disjunctive graph G = (V, A, E) is shown in Figure 1, where V is the vertex set formed by all operations, including two virtual operations 0 and N + 1 (the start and end states of processing); A is the conjunctive edge set consisting of n sub-paths, where each solid-line sub-path is the processing route of one job across all machines from start to end according to the constraints; and E is the disjunctive arc set composed of m sub-sets, where each dotted-line sub-arc connects the operations processed on the same machine. If the maximum completion time is taken as the indicator, solving the JSSP amounts to fixing a priority order (that is, a direction) for the operations on each arc set (that is, each machine). When several operations conflict on the same machine, the chosen job processing sequence determines the order of the operations. In the end we obtain a conflict-free directed acyclic graph of machine operations, and the length of its critical path is the makespan.
Solving the JSSP is difficult for the following reasons: (1) the coding of the scheduling solution is complex and diverse, and so are the search methods of the algorithms; (2) the solution space is complicated: a JSSP with n jobs and m machines contains (n!)^m possible arrangements; (3) there are technical constraints, so the feasibility of a solution must be considered; (4) computing the scheduling indicators takes longer than the search itself; (5) the optimization hypersurface lacks structural information: the solution space of the JSSP is usually complicated, with multiple, irregularly distributed minima.

Analysis of the Main Process of the NGPSO in Solving JSSP
When the NGPSO algorithm described in this section solves the JSSP, the analysis of its solution space is essentially a combinatorial optimization problem (COP), and the key is to reach an exact solution within the search space. In order for NGPSO to solve the COP well, the evaluation of each solution in the solution space must be determined from the sequences produced by NGPSO. As the individual discretization strategy, the smallest position value (SPV) heuristic rule transforms the continuous optimization problem into a discrete one, so that NGPSO can reasonably be applied to the COP [73]. The processing sequence of the jobs is then determined by the discrete solution, from which the makespan is obtained. The position vector of a particle is not itself a scheduling sequence, but because of the order relationship among its components, each particle corresponds to one scheduling sequence (namely, one scheduling scheme); the SPV rule converts the continuous position vector into a discrete scheduling sequence. Specifically, the SPV rule first sorts the n components of the individual position vector in ascending order and assigns level 1 to the smallest component; the remaining components are then assigned levels 2, ..., n in turn, so that the particle position vector is transformed into a job scheduling sequence. To illustrate the SPV rule more concretely, Table 1 gives an instance of the transformation from a continuous position vector to a discrete scheduling sequence. In this example, the position vector is X_i = [0.36, 0.01, 0.67, 0.69, 1.19, 1.02]. In X_i, x_i2 is the smallest component, so its level is set to 1; similarly, the level of x_i1 is set to 2, and the levels of the remaining components are deduced in the same way. The following steps explain the specific operation process of the SPV rule in detail.

First phase: Algorithm initialization.
Set the initial parameters of the algorithm, including the inertia weight, the maximum number of iterations, and the population size. To solve the discrete JSSP, an individual position vector is generated by a random function, and the randomized position vector is mapped into a discrete scheduling sequence by the SPV rule. Specifically, the position vector is converted into a one-to-one mapping sequence, and the position vector with the constrained mapping sequence is then decoded with the required processing times to generate an initial scheduling scheme. This decoding method can evidently generate a suitable scheduling sequence. In the description of the job processing times and job scheduling sequence of the 3 × 3 JSSP listed in Table 2, each dimension is one independent operation of a job. A schedule list consists of n × m independent operations, so the size of a scheduling list is n × m. After initialization, the fitness of the individuals in the population is calculated, with the makespan as the target value. Figure 2 shows the mapping process between individual position variables and scheduling sequences. A job contains three operations, so each job appears exactly 3 times in the scheduling list. Figure 3 shows a feasible scheduling scheme generated from the scheduling list of Figure 2.
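The SPV ranking described above is a few lines of Python. The function name is illustrative; the example reproduces the Table 1 position vector.

```python
def spv(position):
    """Smallest position value rule: map a continuous position vector to a
    discrete sequence by ranking components in ascending order (level 1 =
    smallest component)."""
    order = sorted(range(len(position)), key=lambda d: position[d])
    ranks = [0] * len(position)
    for level, d in enumerate(order, start=1):
        ranks[d] = level
    return ranks

# Table 1 example: x_i2 = 0.01 is smallest (level 1), x_i1 = 0.36 next (level 2)
print(spv([0.36, 0.01, 0.67, 0.69, 1.19, 1.02]))  # → [2, 1, 3, 4, 6, 5]
```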

Second phase: Update individual scheduling sequence
In this phase, as shown in Figure 4, each individual in the population updates its position through Equations (1) and (2). The position of the current particle is formed from the pbest and gbest of the previous generation and is a continuous vector [31]. A new scheduling list is then generated through the SPV rule as a new discrete vector, which can solve the JSSP. In this solution process, individuals mainly search randomly while seeking the optimal solution; the nonlinear inertia weight better reflects the nonlinear search process of PSO and improves its exploitation. If the scheduling sequence corresponding to the current particle has better fitness than before, the particle's personal best pbest is replaced by the current position. If the fitness of the global extreme point of the current population is better than that of the previous generation, the current extreme point replaces the global best gbest of the previous generation.

Third phase: Increase individual diversity
After the iterations of the previous phase, all individuals tend to move in the same direction. Although this phenomenon contributes to the rapid convergence of NGPSO, it reduces population diversity. All particles therefore perform mutation operations through Equations (6) and (7) to increase the diversity of the population, preventing NGPSO from falling into a local optimal solution prematurely and losing its global search capability. Individuals of the next generation are then selected through the greedy selection strategy: if the fitness of the mutated individual is better than that of the previous generation, the original particle is replaced by the new particle; otherwise the original particle remains unchanged [74].

Experimental Results and Analysis
In this study, nonlinear inertia weight (NIW) and Gaussian mutation (GM) strategies are used to improve the basic PSO. We verify the effectiveness of the two strategies on the JSSP by comparing four algorithms: NGPSO, PSO-NIW, PSO-GM, and PSO. In total, 62 JSSP benchmark instances are taken from Dr. Ling Wang's book JSSP and Genetic Algorithms [75]. They include five ABZ benchmarks, given by Adams in 1988 and comprising five classic problems at two scales, where ABZ5 and ABZ6 are 10 × 10 and ABZ7, ABZ8, and ABZ9 are 20 × 15; three FT benchmarks, put forward by Fisher and Thompson in 1963, namely FT06, FT10, and FT20, with scales of 6 × 6, 10 × 10, and 20 × 5, respectively; 40 LA benchmarks, given by Lawrence in 1984, named LA1–LA40, and divided into eight scales, namely 10 × 5, 15 × 5, 20 × 5, 10 × 10, 15 × 10, 20 × 10, 30 × 10, and 15 × 15; 10 ORB benchmarks, given by Applegate in 1991, named ORB1–ORB10, with a scale of 10 × 10; and four YN benchmarks, put forward by Yamada et al. in 1992, named YN1–YN4, with a scale of 20 × 20.
Table 3 shows the results of 33 JSSP benchmark instances, each run 30 times on a computer with Windows 7 (64-bit), 8 GB RAM, an Intel Core i7 CPU at 3.4 GHz, and MATLAB 2017a. In Table 3, OVCK is the optimal value currently known, Best is the best value over the 30 runs (optimal solutions are marked in bold), and Avg is the average over the 30 runs. In the numerical experiments, the population size was set to 100 and the maximum number of iterations to 10,000. The following conclusions can be drawn from the results in Table 3: NGPSO, PSO-NIW, PSO-GM, and PSO obtained 26, 17, 21, and 14 known optimal solutions, respectively, on the 33 benchmark instances. On simple problems the four algorithms perform similarly, but on complex problems NGPSO is significantly better than the other three. Comparing the average results of the four algorithms, NGPSO obtained the largest number of best averages over the 33 instances, exceeding the other algorithms.
To further verify the effectiveness of the proposed NGPSO, it is compared with PSO1 [36], PSO2 [76], CSA, GA, DE, and ABC. PSO1 is a PSO improved with tabu search (TS) and SA: TS makes the algorithm jump out of local optima by consulting the tabu list, SA prevents the algorithm from settling into a local optimum with a certain probability, and combining local and global search gives the PSO higher search efficiency. PSO2 uses GA and SA to overcome the shortcomings of PSO: the crossover and mutation operations of GA update the search area, and SA improves the local search capability. Tables 4 and 5 describe the performance comparison between the different algorithms. In Tables 4 and 5, OVCK is the optimal value currently known, Best is the best value over 30 runs, and optimal solutions are marked in bold. The parameter settings of the comparison algorithms are as follows: the population size is 150, and the maximum number of iterations is 1000. The following results can be obtained from Tables 4 and 5: (1) for the best value, NGPSO obtained 12 known optimal solutions on 29 benchmark instances, while PSO1, PSO2, CSA, GA, DE, and ABC obtained eight, five, three, five, nine, and six known optimal solutions, respectively; (2) for the average value (Avg), the average obtained by NGPSO is smaller than that of the other algorithms on most instances. In these tables, bold text in the OVCK column marks the optimal value currently known, and bold text in the Best column indicates that the algorithm found the OVCK; it is marked in bold for convenience of comparison.
To evaluate the performance of NGPSO more intuitively, Figure 5 shows the convergence curves of the compared algorithms on the ABZ6 instance, where the vertical axis is the makespan and the horizontal axis is the number of iterations. It can be observed from Figure 5a,b that the convergence rate of NGPSO is slower than that of the other algorithms in the early stage of the iteration; this shows that NGPSO has a stronger global search capability early on and is less likely to fall into a local optimal solution at that stage. In addition, NGPSO still maintains a strong search capability in the later period of the iteration. Over the whole iterative process, the global and local search capabilities of NGPSO are effectively balanced, which shows that introducing NIW and GM into PSO effectively improves it and makes it better suited to the JSSP than the other comparison algorithms.

Figure 2. The mapping between individual position vectors and scheduling sequences in the 3 × 3 JSSP problem.

Figure 3. Scheduling scheme generated from the operation list in Figure 2.

Figure 4. Update process of individual position vector.

Table 1. A simple example of the SPV rule.