Improved Hybrid Heuristic Algorithm Inspired by Tissue-Like Membrane System to Solve Job Shop Scheduling Problem

In real industrial engineering, the job shop scheduling problem (JSSP) is considered one of the most difficult non-deterministic polynomial-time (NP)-hard problems. This study proposes a new hybrid heuristic algorithm for solving JSSP inspired by the tissue-like membrane system. The framework of the proposed algorithm incorporates an improved genetic algorithm (GA), a modified rumor particle swarm optimization (PSO), and a fine-grained local search method (LSM). To effectively alleviate the premature convergence of GA, the improved GA uses adaptive crossover and mutation probabilities. To improve the diversity of the population, the rumor PSO is discretized to interactively optimize the population. In addition, a local search operator incorporating critical path recognition is designed to enhance the local search ability of the population. Experiments with 24 benchmark instances show that the proposed algorithm outperforms other recent comparative algorithms, and that hybrid optimization strategies whose components complement each other in performance can better break through the limitations of a single meta-heuristic method.


Introduction
The production scheduling system plays a vital role in a specialized manufacturing system. It is the core of the technologies that realize the overall management of the manufacturing system, the optimization of target tasks, and the automation of scheduling execution. Under the premise of meeting resource and process constraints, formulating a scientific and reasonable production scheduling plan plays a critical role in controlling finished-product inventory, shortening the maximum completion period, and optimizing machine load. The job shop scheduling problem (JSSP) is recognized as a very challenging and representative NP-hard scheduling problem [1]. Its application field is extremely wide, covering aircraft carrier dispatching, airport aircraft dispatching, port terminal cargo dispatching, automobile assembly lines, etc.
Since Fisher and Thompson [2] gave three benchmark instances of JSSP in 1963, JSSP has attracted wide attention from many scholars. For more than half a century, many methods for solving JSSP have been proposed. These methods can be roughly divided into two categories: exact methods and approximate methods. Some early research focused on exact methods, including branch and bound and mathematical programming [3][4][5], etc. However, these exact methods tend to solve only small-scale problems, and their computational cost on larger-scale problems is unacceptable. As the complexity of JSSPs increases, the excellent performance of some approximate methods has attracted the attention of scholars, including priority dispatching [6], the shifting bottleneck procedure [7], and meta-heuristics. Among them, priority dispatching and shifting bottleneck are considered simple heuristics, and they are undistinguished in terms of effectiveness [8]. Unlike problem-specific simple heuristics, meta-heuristics draw inspiration from random phenomena in nature and incorporate random factors. These random factors give the algorithm a certain probability of escaping local optima and moving toward the global optimum. Therefore, problem-independent meta-heuristic algorithms have been widely used in various fields and have shown considerable performance. Meta-heuristic algorithms commonly used to solve JSSP include the genetic algorithm (GA) [9], particle swarm optimization (PSO) [10,11], ant colony optimization (ACO) [12], simulated annealing (SA) [13], and tabu search (TS) [14], etc.
However, as a classic discrete combinatorial optimization problem, JSSP has an inherently stubborn nature. Whether classic GA or PSO, a single meta-heuristic method always faces the problem of premature convergence, and its search performance has reached its limit. Therefore, since the beginning of the 21st century, more and more scholars have devoted themselves to hybrid optimization strategies in order to integrate the complementary advantages of different meta-heuristic methods. Zhang et al. [15] proposed a hybrid heuristic algorithm merging GA and SA. Zhang et al. [16] proposed the well-known precedence operation crossover (POX) operator and combined the improved GA with a local search algorithm. To better balance diversification and intensification, Kurdi [8] merged the island-model GA (IMGA) with TS. Abdel-Kader [17] combined standard PSO and GA to solve JSSP. Zhou [18] fused the social spider optimization algorithm with a differential-evolution-based mutation operator for solving JSSP. Peng et al. [19] proposed the MAGATS algorithm, which combines a multi-agent GA and TS to solve JSSP. Pongchairerks [20] presented a two-level meta-heuristic method for solving JSSP, in which the upper-level algorithm (UPLA), a population-based algorithm, serves as an input-parameter controller of the lower-level algorithm (LOLA). Aiming at JSSP, a collective-communication hybrid genetic algorithm using distributed processing was proposed in [21].
Membrane systems were formally presented by Păun [22] in 1998. Since then, membrane computing has become an emerging research field in computer science and has attracted the research interest of a large number of scholars due to its outstanding characteristics of distribution, parallelism, and non-determinism. Membrane computing aims to abstract new computing models from biological cells (especially cell membranes); such a model is also called a membrane system or P system. A membrane system contains three basic components: membrane structure, objects, and rules. Up to now, membrane computing models have mainly been summarized into three categories, i.e., cell-like P systems [22], tissue-like P systems [23,24], and neural-like P systems [25]. In recent years, many other variants of membrane systems have been developed as direct extensions of membranes [26,27]. In addition, in indirect applications of membranes, fruitful results have been achieved in combinatorial optimization fields such as clustering [28,29] and neural networks [30].
Considering the above, this work presents a hybrid heuristic algorithm inspired by the tissue-like membrane system for solving JSSP. The algorithm framework combines an improved GA, a modified PSO, and a local search algorithm based on critical paths. Among them, the communication rules between GA and PSO increase the diversity of the population while delaying premature convergence, and the local search further fine-tunes the optimization results. Comparative experiments show that the hybrid algorithm presented in this study outperforms other recent algorithms. Specifically, the contributions of this work are listed below.
(1) A new hybrid heuristic algorithm inspired by the tissue-like membrane system is proposed for JSSP, and the three types of evolution rules of the membrane system are introduced in detail.
(2) To further alleviate the premature convergence of GA, an adaptive crossover and mutation strategy is proposed.
(3) To fully explore the entire solution space while enhancing the diversity of the population, the rumor PSO algorithm [31] is discretized, making it suitable for solving JSSP, which is a discrete problem.
(4) Based on the above optimization results, an algorithm for quickly identifying critical paths is presented, and a local search strategy is built on it.
The remainder of this paper is organized as follows. Section 2 describes the mathematical modeling and disjunctive graph representation of JSSP. Section 3 gives a detailed description of the proposed hybrid heuristic algorithm inspired by the tissue-like membrane system, including three improved sub-algorithms and the rules of the P system. Section 4 reports the results of comparative experiments on several sets of benchmark instances. Section 5 concludes this work with a summary and future research directions.

Mathematical Modeling and Disjunctive Graph Representation of JSSP
An n × m JSSP can be formally described as follows. There are n jobs and m machines, denoted J = {J_1, J_2, ..., J_n} and M = {M_1, M_2, ..., M_m}, respectively. The processing of each job includes m operations, and each operation is processed on a designated machine; in other words, the number of machines equals the number of operations per job. There are sequential constraints between different operations of the same job, but no precedence constraints between different jobs. The operation sequence of each job and its processing time on the corresponding machine are given in advance. The ultimate goal of JSSP is to seek an optimal schedule that minimizes the maximum completion time. The number of possible solutions of an n × m JSSP is up to (n!)^m. In the process of solving JSSP, the restrictions or constraints to be followed are described as follows.
(1) At the beginning of processing, any job may be selected for processing.
(2) Once a job starts to be processed on the corresponding machine, no interruption or preemption is allowed.
(3) At any given moment, a job can be processed on only one machine.
(4) At any given moment, a machine can process only one job.
(5) Every job must be processed in a predetermined sequence of operations (i.e., the machining path), which traverses all machines exactly once.
(6) The transportation time incurred during processing is not considered.
To facilitate the mathematical modeling of JSSP, the notations used and their meanings are described below.
(1) O_ij denotes the j-th operation of job i, where i ∈ [1, n] and j ∈ [1, m].
(2) t_ij denotes the processing time of O_ij.
(3) TJ_ij represents the cumulative completion time of job i corresponding to operation O_ij.
(4) TM_ij denotes the earliest cumulative start time (not including t_ij) of the machine corresponding to operation O_ij.
Therefore, considering the above, the mathematical model of JSSP can be formally expressed as follows.

Min( Max_{1≤i≤n} TJ_im )    (1)

Equation (1) represents the objective function of JSSP, which is to minimize the maximum completion time, i.e., the makespan. Equation (2) expresses the precedence constraints between different operations of the same job. Equation (3) states that preemption on the same machine is not allowed. Equation (4) gives the domains of the variables.
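Under the notation defined above, the model can be sketched in LaTeX as follows. This is our reconstruction using the symbols TJ_ij, TM_ij, and t_ij defined earlier; the exact form of the original constraints may differ.

```latex
\begin{align}
\min \;\; & \max_{1 \le i \le n} TJ_{i,m}
  && \text{(1) minimize the makespan} \\
\text{s.t.}\;\; & TJ_{i,j} \ge TJ_{i,j-1} + t_{i,j}
  && \text{(2) precedence within each job, } j \ge 2 \\
& TJ_{i,j} \ge TM_{i,j} + t_{i,j}
  && \text{(3) machine capacity, no preemption} \\
& TJ_{i,j} \ge 0,\;\; t_{i,j} > 0
  && \text{(4) variable domains}
\end{align}
```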
Any JSSP can be intuitively represented by its disjunctive graph. Figure 1 shows a disjunctive graph representation for a 3 × 3 JSSP (3 jobs and 3 machines). The machine sequence corresponding to each job and the processing time (p_ij) on the corresponding machine refer to Table 1. The disjunctive graph is a triplet G = (V, A, E), where V = {0, O_11, O_12, ..., O_nm, 1} is the set of nodes denoting all operations, and 0 and 1 represent the virtual start and end nodes, respectively. A represents the set of directed conjunctive arcs, which describe the precedence constraints between different operations of the same job. In Figure 1, the conjunctive arcs are marked with black solid lines, and each of the three rows represents a job, where O_ij denotes the j-th operation of job i. E represents the set of non-directed disjunctive arcs, which describe the resource capacity constraints between different operations on the same machine. Specifically, E = ∪_{k=1}^{m} E_k, where E_k represents the subset of disjunctive arcs corresponding to machine k. For example, the operations O_11, O_22, and O_33 connected by disjunctive arcs are all processed on machine 3 (M3). The operations O_12, O_23, and O_32 connected by disjunctive arcs are all processed on machine 1 (M1). Similarly, the operations O_13, O_21, and O_31 are processed on machine 2 (M2). Assuming that the number of operations processed on the same machine k is n, the number of disjunctive arcs in the subset E_k is C(n, 2) = n(n − 1)/2. In other words, the subset E_k is actually a fully connected clique. The dotted lines of different colors in Figure 1 correspond to the disjunctive arcs on different machines.

The Proposed Hybrid Heuristic Algorithm Coupling with Tissue-Like P System
This section will describe the tissue-like membrane system coupling with the proposed hybrid heuristic algorithm. The specific content involved includes the structure of the coupled membrane system, the three improved sub-algorithms corresponding to the structure, and the related evolution-communication mechanism.

The Coupled Tissue-Like P System
The cell-like membrane system studies the computational theory of a single cell, while the tissue-like membrane system explores the mechanism by which multiple cells freely placed in the same environment cooperate and communicate with each other to complete computations. The membrane structure of the tissue-like membrane system is a net structure. Figure 2 shows the membrane structure of the coupled membrane system. The formal expression of a coupled tissue-like P system with degree 3 is given below, where:
(1) O is a finite set of non-empty objects; the objects are individuals in the population.
(2) σ_i (i = 1, 2, 3) stands for cell i, in which there are different objects and evolution rules.
(3) R_i represents the evolution rules corresponding to cell i. Evolution rules are used to improve individuals in a population. An evolution rule is expressed in the form u → v, meaning that the object u, i.e., an individual in the population, evolves into the object v according to a specific rule.
(4) R' denotes a finite set of communication rules between different cells, which are used to transfer objects between cells.
(5) The set of channels between cells in the coupled membrane system is shown by the arcs with arrows in Figure 2.
(6) i_0 = 3 means that cell 3 is the output membrane of the coupled P system.
In the above membrane structure, the evolution rules in cell 1, cell 2 and cell 3 are respectively three improved sub-algorithms. The general procedure of the proposed coupled membrane system is described below.
Step 1: The initial population is produced in cell 1, and the initial population size is N. Initially, cell 2 and cell 3 are both empty.
Step 2: The initial N individuals are saved for subsequent evaluation, and their copies are sent to cell 2 according to communication rule 1.
Step 3: The improved GA and the modified rumor PSO are used as evolution rules in cell 1 and cell 2, respectively. Here, the evolution within cell 1 and cell 2 can be regarded as two independent internal loops to improve the initial population.
Step 4: Individuals that have evolved and updated in cell 2 are sent back to cell 1 according to the exchange rule 1. In other words, after reaching the upper limit of the number of internal iterations, 3N individuals will be collected in cell 1.
Step 5: All individuals in cell 1 are sorted in descending order of fitness value and duplicate individuals are removed. According to the elitist strategy, the population size is maintained as N. The resulting N individuals are used as the initialization of the next generation population.
Step 6: The process from Step 2 to Step 5 is regarded as one external iteration. Repeat Steps 2 to 5 until the maximum number of external iterations is reached. Then the N individuals finally obtained in cell 2 are sent to cell 3 according to exchange rule 2.
Step 7: Use the improved local search method (LSM) in cell 3 to improve N individuals. Repeat this step until the termination condition is reached.
Step 8: According to rule 3, the individual with the best fitness value is output to the environment as the final result.
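Steps 1–8 can be sketched as a generic driver loop. This is our sketch, not the authors' code: the three evolution rules are passed in as callables (in the paper they are the improved GA, the discretized rumor PSO, and the critical-path LSM), and the population bookkeeping is simplified.

```python
def run_membrane_framework(init_pop, fitness, ga_step, pso_step, lsm_step,
                           outer_iters, inner_iters, lsm_iters):
    """Skeleton of the coupled tissue-like P system procedure (Steps 1-8)."""
    cell1 = [list(x) for x in init_pop]         # Step 1: initial population in cell 1
    N = len(cell1)
    for _ in range(outer_iters):                # Steps 2-5: external iterations
        cell2 = [list(x) for x in cell1]        # Step 2: copies sent to cell 2
        for _ in range(inner_iters):            # Step 3: independent internal loops
            cell1 = [ga_step(x) for x in cell1]
            cell2 = [pso_step(x) for x in cell2]
        merged = cell1 + cell2                  # Step 4: cell 2 sends back to cell 1
        merged.sort(key=fitness, reverse=True)  # Step 5: sort by fitness, descending
        seen, cell1 = set(), []
        for x in merged:                        # remove duplicates, keep the N elite
            key = tuple(x)
            if key not in seen:
                seen.add(key)
                cell1.append(x)
            if len(cell1) == N:
                break
    cell3 = cell1                               # Step 6: send to cell 3
    for _ in range(lsm_iters):                  # Step 7: local search refinement
        cell3 = [lsm_step(x) for x in cell3]
    return max(cell3, key=fitness)              # Step 8: output the best individual
```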
The next few sub-sections will introduce three improved sub-algorithms in detail, which correspond to the evolution rules in the three cells.

Evolutionary Rule of Objects in Cell 1
The evolutionary rule in cell 1 is the improved GA. This subsection will specifically give the strategies adopted by the improved GA, including encoding and initialization, selection strategies, and adaptive crossover and mutation.

Encoding and Initialization
The existing encoding methods for solving JSSP include job-based, operation-based, machine-based, and so on. Weighing the pros and cons of the different schemes, operation-based encoding outperforms the others and has received the most extensive attention in solving JSSP [19]. For an n × m JSSP, an operation-based encoding is a sequence of job numbers in which the j-th appearance of job i indicates the j-th operation of job i, namely O_ij; each job number appears exactly m times. Initially, a random permutation of the n × m integers from 1 to n × m is generated, and each element of the sequence is then taken modulo n to obtain the remainder. This ensures that each job number appears exactly m times and forms an operation-based encoding. The advantages of this encoding scheme are:
a. The solutions generated during initialization are all feasible.
b. An arbitrary rearrangement of gene positions in the chromosome still yields a feasible solution, which makes the mutation operation easy.
c. It is easy to decode.
Take the chromosome encoding 1-3-2-2-1-3-3-1-2 of a simple 3 × 3 JSSP as an example, where the machine sequence corresponding to each job and the processing time (p_ij) on the corresponding machine refer to Table 1. The first number '1' indicates the 1st operation of job 1; the second number '3' means the 1st operation of job 3; the fourth number '2', i.e., the 2nd appearance of job 2, indicates the 2nd operation of job 2. As mentioned above, the j-th appearance of a job number means the j-th operation of that job. According to the information provided in advance in Table 1, it is easy to decode and calculate the makespan of any encoding sequence. The completion times of jobs 1, 2, and 3 are 13, 18, and 18, respectively. Therefore, the makespan of this chromosome sequence is 18. Figure 3 below shows the scheduling Gantt chart corresponding to this encoding sequence. Operations of the same job processed on different machines are marked with the same color.
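Decoding an operation-based encoding into a schedule, as described above, can be sketched as follows. The `machines`/`ptimes` table layout is our assumption, and the tiny 2 × 2 instance below is invented for illustration; Table 1's actual data is not reproduced here.

```python
def decode(seq, machines, ptimes):
    """Greedy decoding of an operation-based encoding into a semi-active schedule.
    machines[i][j]: machine index of operation O_{i,j+1} (0-based)
    ptimes[i][j]:   processing time of that operation
    Returns the makespan of the decoded schedule."""
    n, m = len(machines), len(machines[0])
    next_op = [0] * n       # index of the next unscheduled operation of each job
    job_ready = [0] * n     # completion time of each job's last scheduled operation
    mach_ready = [0] * m    # release time of each machine
    for job in seq:         # jobs are numbered from 1 in the encoding
        i = job - 1
        j = next_op[i]
        k = machines[i][j]
        start = max(job_ready[i], mach_ready[k])   # respect both constraints
        job_ready[i] = mach_ready[k] = start + ptimes[i][j]
        next_op[i] = j + 1
    return max(job_ready)

# Hypothetical 2 x 2 instance: job 1 visits M0 then M1; job 2 visits M1 then M0.
print(decode([1, 2, 1, 2], [[0, 1], [1, 0]], [[2, 3], [4, 1]]))  # prints 7
```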

Fitness Function and Selection Strategy
The fitness function is used to measure the quality of individuals in the population; usually it is a function related to the optimization goal. Taking into account the natural law of "survival of the fittest", the higher the fitness of an individual, the higher its probability of survival. Therefore, the fitness function in this work is defined as follows, where x represents an individual in the population.
On the basis of keeping the population size of each generation unchanged, and in order to ensure the stable improvement of the overall quality of the population, this work adopts a roulette wheel selection strategy. For any individual x_j, the probability P(x_j) of being selected is calculated as P(x_j) = F(x_j) / Σ_{k=1}^{N} F(x_k), where F(·) is the fitness function and N is the population size.
When evaluating the fitness of each generation of the population, the elitism strategy is incorporated. This protects the optimal individual from being destroyed by operations such as crossover and mutation; it directly replaces the worst individual in the next generation.
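Roulette wheel selection as described above can be sketched as follows. Function and variable names are ours; fitness values are assumed non-negative, consistent with a selection probability proportional to fitness.

```python
import random

def roulette_select(population, fitness, rng=random):
    """Select one individual with probability proportional to its fitness."""
    total = sum(fitness)                 # normalization constant of P(x_j)
    r = rng.uniform(0, total)            # spin the wheel
    acc = 0.0
    for ind, f in zip(population, fitness):
        acc += f
        if acc >= r:                     # first slot whose cumulative share covers r
            return ind
    return population[-1]                # guard against floating-point round-off
```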

Adaptive Crossover and Mutation Operation
The crossover operation is considered the backbone of GA; its purpose is to inherit characteristics of the parental solutions to generate two offspring solutions. In order to reduce the computational cost, it is desirable that the offspring produced by crossover are feasible solutions. Based on these two considerations, this study uses the POX operator to perform the crossover operation. The specific operation steps of the POX operator are as follows.
Step 1: All jobs are randomly separated into two non-empty subsets, denoted JS1 and JS2. Take a chromosome encoding with 4 jobs as an example: suppose JS1 = {1, 4} and JS2 = {2, 3}; then the newly generated child C1 after crossover is shown in Figure 4. In the same way, a second child C2 is also generated by the crossover operation.
The purpose of the mutation operation is to bring a slight disturbance to the population, which can enhance its diversity. The mutation operator in this work randomly selects one of two mutation operators with equal probability: swap mutation and inversion mutation, as shown in Figure 5.
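The POX operator can be sketched as follows: genes belonging to jobs in JS1 keep their positions from parent 1, and the remaining positions are filled with parent 2's other genes in order. This is a minimal illustration with names of our choosing.

```python
def pox(parent1, parent2, js1):
    """Precedence operation crossover (POX) producing one child.
    Genes of jobs in js1 keep their positions from parent1; the remaining
    positions are filled with parent2's genes (jobs not in js1) in order."""
    child = [g if g in js1 else None for g in parent1]   # fix JS1 positions
    rest = (g for g in parent2 if g not in js1)          # JS2 genes, parent2 order
    return [g if g is not None else next(rest) for g in child]

# Swapping the parents' roles (and/or the job subsets) yields the second child C2.
```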
In classic GA, the crossover probability and mutation probability remain unchanged throughout the evolution process, but research shows that they are key to GA performance [32]. The crossover and mutation probabilities of conventional adaptive GA are expressed as follows:

P_c = α_1 (F_max − F_c) / (F_max − F_avg) if F_c ≥ F_avg, and P_c = α_2 otherwise;
P_m = α_3 (F_max − F_m) / (F_max − F_avg) if F_m ≥ F_avg, and P_m = α_4 otherwise.

Among them, P_c and P_m are the adaptive crossover and mutation probabilities, and α_1, α_2, α_3, α_4 are constants. F_c is the larger fitness value of the two individuals to be crossed. F_m is the fitness value of the current mutant individual. F_max is the maximum fitness value in the population. F_avg is the average fitness value of the population.
The value of the crossover probability P_c determines the update rate of the population. If it is too large, it destroys excellent genetic patterns; if it is too small, it reduces the search efficiency of the algorithm and makes it difficult to effectively improve the population. Therefore, this work combines two considerations to improve the conventional adaptive probability. On the one hand, in the early stage of evolution, P_c should be increased in order to expand the overall search range and speed up the population update rate; in the later stage of evolution, the overall solution set of the population tends to be stable, so P_c should be appropriately reduced in order to better preserve good genes. On the other hand, considering the destructiveness of the crossover operator to the chromosome structure, individuals with poor fitness are given a higher P_c value; conversely, for individuals with higher fitness values, the P_c value should be appropriately reduced to limit the damage to excellent genes. The case of the mutation probability is similar. Considering the above two aspects, the improved adjustment mechanism is set as follows.
Processes 2021, 9, 219

Among them, P_c1 (P_m1) represents the maximum crossover (mutation) probability, which is related to the number of evolution iterations. P_c2 = 0.6 (P_m2 = 0.1) is the minimum crossover (mutation) probability. IT denotes the maximum number of iterations of the evolution process, and t is the current iteration number. The factor 2 in the denominator inside the square brackets ensures that the maximum value of that term does not exceed 1.
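The conventional adaptive rule can be sketched as follows. We assume the classic Srinivas–Patnaik form here; this is the conventional scheme [32], not the paper's improved mechanism with P_c1, P_c2, IT, and t, whose exact equations are not reproduced.

```python
def adaptive_prob(f, f_max, f_avg, alpha_high):
    """Conventional adaptive crossover/mutation probability (assumed
    Srinivas-Patnaik form). f is F_c (or F_m); alpha_high plays the role
    of alpha_1 (or alpha_3); fitter individuals get a lower probability."""
    if f < f_avg:                 # below-average individual: operate at full rate
        return alpha_high
    if f_max == f_avg:            # degenerate population: avoid division by zero
        return alpha_high
    return alpha_high * (f_max - f) / (f_max - f_avg)
```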

Evolutionary Rule of Objects in Cell 2
The evolution rule in cell 2 is a discretized improvement of rumor PSO, a variant of the traditional PSO algorithm. Traditional PSO was proposed by Kennedy [33,34] through observing the social behavior of birds and other biological groups. It is a simple model constructed from swarm intelligence. The initial solutions are generated with randomly given velocities, while new solutions depend on competition and cooperation within the particle swarm. The best position of the particle itself and the best position of the entire population are the two key factors for seeking the optimal or near-optimal solution. This is similar to human decision-making behavior: people focus not only on their own best experiences, but also on the best experiences of others around them. Due to its excellent characteristics, such as fast convergence and high feasibility, PSO has been applied in many fields. The standard PSO equations [35] that reflect the nature of the above swarm-intelligence evolution are as follows.

V_i(t+1) = w·V_i(t) + c_1·r_1·(P_ib − X_i(t)) + c_2·r_2·(P_gb − X_i(t))    (14)
X_i(t+1) = X_i(t) + V_i(t+1)    (15)
where V_i(t) and X_i(t) respectively represent the velocity and position of the i-th particle (chromosome) in the t-th iteration; P_ib denotes the historical best position of the i-th particle; P_gb stands for the best position in the history of all particles; w is the inertia weight; c_1 and c_2 are learning factors; and r_1, r_2 are two independent random numbers uniformly distributed in (0, 1). The rumor PSO proposed by Clerc has shown excellent performance on continuous problems [31]. To avoid the population falling into a local optimum and to improve the diversity of the population as much as possible, rumor PSO, unlike traditional PSO, does not use the global best position (P_gb) to guide the entire population. The equations of rumor PSO are shown in (16) and (17).

V_i(t+1) = w·V_i(t) + c_1·r_1·(P_ib − X_i(t)) + c_2·r_2·(P_Kb − X_i(t))    (16)
X_i(t+1) = X_i(t) + V_i(t+1)    (17)
Among them, P Kb replaces P gb in the standard PSO, which represents the best historical position among K particles randomly selected from the entire population. In this paper, K = 3 is set according to Clerc's research results [31]. As the iteration progresses, the object information is exchanged and spread from K particles to the entire population. This communication mechanism allows the particle trajectory to appear circuitous, and its propagation process is similar to the spread of rumors, so it is called rumor PSO.
By introducing a real-coded ascending mapping method, rumor PSO can be effectively applied to the JSSP which belongs to the discrete combinatorial optimization problem. The specific evolution process is carried out according to the following steps, and Figure 6 shows an evolutionary process taking the partial encoding of a chromosome as an example.
Step 1: Receive the initialized population; the population size is N.
Step 2: Determine whether the iteration threshold is reached. If yes, stop and output the historical best values of the N particles as the result. If not, go to Step 3.
Step 3: For each particle (chromosome), randomly select K particles from the population. Evaluate the historical best fitness values of these K particles to determine P_Kb, and then use Equations (16) and (17) to update the particle.
Step 4: Rearrange the real-valued gene positions in ascending order.
Step 5: Keep the original mapping relationship between real-number codes and integer codes, and reconstruct the gene positions.
Step 6: Evaluate the newly generated particle; if its fitness value is better, replace the P_ib value of this particle. Return to Step 2.
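The ascending mapping of Steps 4–5 can be read as a ranked-order-value (random-key) decoding: the integer genes follow the rank order of the real-valued position vector. A minimal sketch under that interpretation (function and variable names are ours):

```python
def rov_decode(position, base_encoding):
    """Map a real-valued particle position to an operation-based encoding.
    The index holding the k-th smallest real value receives the k-th gene
    of the ascending-sorted base encoding, so the integer sequence follows
    the rank order of the reals."""
    order = sorted(range(len(position)), key=lambda i: position[i])
    genes = sorted(base_encoding)          # e.g. [1, 1, 2, 2, 3, 3] for 3 jobs
    seq = [0] * len(position)
    for rank, idx in enumerate(order):
        seq[idx] = genes[rank]
    return seq
```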
Figure 6. Demonstration of the update process of the discretized rumor PSO (particle swarm optimization): gene positions calculated by Equations (16) and (17) are rearranged in ascending order, and the original mapping between real-number codes and integer codes is kept to reconstruct the gene positions.

Evolutionary Rule of Objects in Cell 3
The evolutionary rules in cell 3 are guided by a local search method (LSM). The LSM proposed in this work consists of two parts: an algorithm for quickly identifying the critical path, and a neighborhood optimization algorithm based on the critical path.

An important feature of a feasible JSSP solution is its critical path, the longest path from the starting point (0) to the ending point (1) in the disjunctive graph. The length of the critical path equals the makespan of the scheduling solution. The operations on the critical path are called critical operations. The critical path can be decomposed into several blocks (B_1, B_2, ..., B_r), where each block is the longest sequence of adjacent critical operations processed on the same machine. Note that for every two consecutive blocks B_j and B_j+1, the last operation of B_j and the first operation of B_j+1 belong to the same job but are processed on different machines. Since permuting non-critical adjacent operations does not improve the makespan and may even produce infeasible solutions, the LSM in this work explores and exploits individual neighborhoods only along the critical path.
To perform an effective neighborhood search based on the critical path, an algorithm for quickly finding and identifying the critical path of a feasible solution is proposed first. The notations S(O_ij) and E(O_ij) represent the start time and end time of operation O_ij, respectively. The specific steps of the critical path identification algorithm are as follows.
Step 1: Let P = {O_ij | i ∈ [1, n], j ∈ [1, m]} and Q = ∅. The set Q stores the operations on the critical path and is initially empty;
Step 2: Check each element τ in P. If no λ ∈ P with λ ≠ τ satisfies E(τ) = S(λ), and E(τ) ≠ makespan, delete τ from P;
Step 3: Choose an operation σ_1 from the updated set P that satisfies S(σ_1) = 0 and add it to Q. Let w = 1;
Step 4: While E(σ_w) ≠ makespan, set w = w + 1, choose an operation σ_w ∈ P satisfying S(σ_w) = E(σ_{w−1}), and add σ_w to Q;
Step 5: Output the elements of Q in order; this is the required critical path.
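The identification steps above can be sketched in Python as follows. This is a minimal sketch under two assumptions not stated in the paper: operation timings are stored in a dictionary mapping each operation id to its (start, end) pair, and the first matching continuation at each step lies on the critical path.

```python
def find_critical_path(times, makespan):
    """Identify one critical path from operation timings (Steps 1-5).

    times    -- dict mapping each operation id O_ij to its (start, end) pair
    makespan -- length of the schedule, i.e. length of the critical path
    """
    # Step 2: keep only operations whose end time is continued by another
    # operation's start time, or that finish exactly at the makespan.
    P = {op: (s, e) for op, (s, e) in times.items()
         if e == makespan
         or any(e == s2 for o2, (s2, _) in times.items() if o2 != op)}
    # Step 3: start from an operation that begins at time 0.
    current = next(op for op, (s, _) in P.items() if s == 0)
    Q = [current]
    # Step 4: chain operations whose start time equals the previous
    # operation's end time, until the makespan is reached.
    while P[current][1] != makespan:
        current = next(op for op, (s, _) in P.items()
                       if s == P[current][1] and op not in Q)
        Q.append(current)
    return Q  # Step 5: the critical path in processing order
```

For instance, with timings {"D": (0, 3), "A": (0, 4), "B": (4, 7), "C": (7, 12)} and makespan 12, the non-critical operation D is filtered out in Step 2 and the chain A → B → C is returned.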
Take another 3 × 3 JSSP instance, presented in Table 2, as an example to illustrate the concepts of critical path and blocks. Assuming an approximate optimal solution of this instance is 2-3-1-2-1-3-1-2-3, the Gantt chart of this feasible solution is shown in Figure 7, and its makespan is 12. The above algorithm identifies the critical path of this feasible solution as the set Q = {O_21, …}. Figure 7. The critical path corresponding to the approximate optimal solution and its three blocks.
With an algorithm for quickly identifying the critical path, further neighborhood optimization can be applied to the already promising individuals. The specific steps of the neighborhood search algorithm are as follows.
Step 1: Receive the N feasible solutions optimized by GA and PSO (regarded as approximate optimal solutions);
Step 2: For each approximate optimal solution, identify the critical path and its blocks using the algorithm given above;
Step 3: According to the start and end times of each block, determine whether it is the first block or the last block. If it is the first block, exchange its last two operations; if it is the last block, exchange its first two operations. For internal blocks, exchange the first two operations and the last two operations, respectively. All exchanged operations are assigned to their machines at the earliest available processing times. If a block contains only one operation, no exchange is performed;
Step 4: Accept the current exchange only if it improves the makespan; otherwise, cancel it;
Step 5: If an exchange is accepted, the original critical path may be destroyed. Go to Step 2 to re-identify the critical path and repeat the neighborhood search;
Step 6: If no exchange in any block of the critical path improves the objective, stop the local search.
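Step 3's block-dependent exchange rule can be sketched as a small helper. This is a hypothetical sketch: a block is assumed to be a list of operation ids in processing order, and the function merely enumerates the candidate swaps, leaving rescheduling and acceptance (Steps 3–4) to the caller.

```python
def block_neighbors(block, position):
    """Candidate swaps for one block of the critical path (Step 3).

    block    -- critical operations processed consecutively on one machine
    position -- 'first', 'last', or 'internal' within the critical path
    Returns the list of operation pairs to exchange; single-operation
    blocks yield no swap.
    """
    if len(block) < 2:
        return []  # nothing to exchange in a one-operation block
    swaps = []
    if position in ('last', 'internal'):
        swaps.append((block[0], block[1]))    # exchange the first two operations
    if position in ('first', 'internal'):
        swaps.append((block[-2], block[-1]))  # exchange the last two operations
    # An internal block of length 2 would list the same pair twice;
    # deduplicate while preserving order.
    return list(dict.fromkeys(swaps))
```

For a block [1, 2, 3], the first block yields the tail swap (2, 3), the last block yields the head swap (1, 2), and an internal block yields both.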
After the feasible solution is optimized by the above algorithm, an improved result can be obtained. For example, the approximate optimal solution given in Figure 7 has a makespan of 12; after critical path identification and neighborhood search, the makespan improves to 11, as shown in Figure 8.

Flow Chart of the Proposed Hybrid Algorithm
This section presents a flow chart integrating the different heuristic algorithms, as shown in Figure 9. The three main sub-algorithms are the improved adaptive GA, the discretized rumor PSO, and the improved LSM that quickly identifies the critical path. As mentioned earlier, these three sub-algorithms correspond to the evolution rules in the three cells of the tissue-like membrane system.
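The overall control flow can be sketched as a plain sequential loop. This is a simplified, hypothetical sketch (the real system is membrane-inspired and could run the cells in parallel): the three cells' evolution rules are passed in as functions, and communication between cells is modeled by handing the whole population from one cell to the next.

```python
def tissue_membrane_scheduler(init_population, ga_step, pso_step, lsm_step,
                              makespan, I_ex=20):
    """Outer communication loop of the hybrid algorithm (sketch).

    ga_step, pso_step, lsm_step -- evolution rules of cells 1-3 (improved GA,
        discretized rumor PSO, critical-path LSM); each takes and returns a
        population of feasible solutions
    makespan -- objective function mapping a solution to its makespan
    I_ex     -- number of external iterations controlling population update
    """
    population = init_population
    for _ in range(I_ex):
        population = ga_step(population)   # cell 1: global exploration
        population = pso_step(population)  # cell 2: interactive optimization
        population = lsm_step(population)  # cell 3: fine-grained local search
    return min(population, key=makespan)   # best solution found
```

With identity steps and a population of plain makespan values, the loop simply returns the smallest value, which makes the orchestration easy to unit-test before plugging in the real cell rules.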


Comparative Experiment and Discussion
The algorithm proposed in this work is implemented in Python. The computer used for computation has an i5-4210H processor with a 2.90 GHz clock speed and 12 GB of RAM. The parameter settings of the algorithm are as follows. The initial population size is N = 128. The maximum number of external iterations used to control the population update is I_ex = 20. The maximum numbers of iterations of the modified PSO and the improved GA, that is, the numbers of internal iterations in cell 1 and cell 2, are both I_in = 50. The inertia weight in Equation (16) is w = 0.7, and the learning factors are c_1 = c_2 = 2.10.
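For reference, these settings can be collected in a small configuration object. This is a hypothetical container for the reported values, not part of the original implementation.

```python
from dataclasses import dataclass

@dataclass
class HybridParams:
    """Parameter settings reported for the experiments."""
    N: int = 128       # initial population size
    I_ex: int = 20     # external iterations controlling population update
    I_in: int = 50     # internal iterations in cell 1 and cell 2
    w: float = 0.7     # inertia weight in Eq. (16)
    c1: float = 2.10   # learning factor c_1
    c2: float = 2.10   # learning factor c_2
```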
The proposed algorithm is first compared with three state-of-the-art heuristic algorithms on 22 JSSP benchmark instances (see Table 3). Among them, one instance (FT20) was designed by Fisher and Thompson [2], and 21 instances (LA01–LA40) were designed by Lawrence [36]. The three heuristic algorithms used for comparison are MAGATS [19], NIMGA [37], and HIMGA [8]. In Table 3, the first column is the instance name, the second column is the instance size n × m, and the third column is the best-known solution (BKS). The remaining four columns are the best solutions obtained by the different algorithms on the corresponding instances. The results show that, except for instances LA36 and LA40, the proposed algorithm finds the BKS of all instances. Figures 10 and 11 show the Gantt charts of the results on instances LA20 and LA29, respectively, on which none of the comparison algorithms reached the BKS. The results of the comparison algorithms come from the corresponding original publications [8,19].
Next, a further comparative analysis is carried out on four instances of different sizes against another three heuristic algorithms: PSO [11], DE [38], and SSO-DM [18]. The four instances are FT10, LA40, ORB10 [39], and YN4 [40]. For 20 independent runs of each algorithm, Table 4 reports the best, worst, and mean makespans and the standard deviation (Std.); the comparison data of the three algorithms in Table 4 come from the literature [18]. Figures 12–15 show box plots of the four algorithms on the four instances. A box plot is a popular visual representation of a data distribution: the horizontal line in the box indicates the median, and the red plus signs indicate possible outliers. Although the stability of the proposed algorithm is slightly inferior to that of SSO-DM, the accuracy of its solutions is better than that of the other three algorithms.
Figure 16 shows a Gantt chart of an optimal solution for the instance ORB10.

Conclusions
This work introduces the inherent mechanism of the tissue-like P system for solving the job shop scheduling problem and proposes a hybrid heuristic algorithm based on this framework. The algorithm framework incorporates an improved genetic algorithm (GA), a modified rumor particle swarm optimization (PSO), and a local search method based on rapid identification of the critical path. Comparative experiments on multiple JSSP benchmark instances show that, overall, the proposed algorithm performs better than the comparison algorithms.
However, although this work couples the traditional hybrid heuristic algorithm with the tissue-like membrane system, it still has a shortcoming: the coupling is only a formal simulation and does not truly exploit the distributed and parallel advantages of membrane computing. With the continuous development of hardware technology, realizing parallel computing will better simulate these parallel advantages and greatly reduce the optimization time of the algorithm. In addition, this work is dedicated to solving the single-objective JSSP, while in actual industrial production the demand for multi-objective optimization is increasing day by day.
Based on this, several meaningful research directions remain for the future. First, to fully exploit and simulate the parallelism of membrane computing, developing multi-threaded parallel algorithms and implementing them on GPUs remains a promising and challenging direction. Second, the multi-objective flexible job shop scheduling problem, as one of the hardest variants of shop scheduling, will be an important continuation of the current work. In addition, as the two most important branches of biological computing, the combination of membrane computing and DNA computing will become an extremely interesting and valuable cross field.