Improved Whale Algorithm for Solving the Flexible Job Shop Scheduling Problem

In this paper, a novel improved whale optimization algorithm (IWOA), based on the integrated approach, is presented for solving the flexible job shop scheduling problem (FJSP) with the objective of minimizing makespan. First, to make the whale optimization algorithm (WOA) adaptive to the FJSP, a conversion method between the whale individual position vector and the scheduling solution is proposed. Secondly, an effective initialization scheme of reasonable quality is obtained using chaotic reverse learning (CRL) strategies. Thirdly, a nonlinear convergence factor (NFC) and an adaptive weight (AW) are introduced to balance the exploitation and exploration abilities of the algorithm. Furthermore, a variable neighborhood search (VNS) operation is performed on the current optimal individual to enhance the accuracy and effectiveness of the local exploration. Experimental results on various benchmark instances show that the proposed IWOA obtains competitive results compared with existing algorithms in a short time.


Introduction
In recent years, scheduling has played a crucial role in almost all manufacturing systems as global competition has become increasingly intense. The classical job shop scheduling problem (JSP) is one of the most important scheduling forms existing in real manufacturing. It became a hotspot in the academic circle and received a large amount of attention in the research literature owing to its wide applicability and inherent complexity [1][2][3]. In the JSP, a group of jobs needs to be processed on a set of machines, where each job consists of a set of operations with a fixed order. Each operation of a job must be performed on a given machine. Each machine is continuously available from time zero and can process only one operation at a time without interruption. The decision concerns how to sequence the operations of all the jobs on the machines so that a given performance indicator is optimized. Makespan, the time by which all the jobs are completed, is a typical performance indicator for the JSP.
The flexible job shop scheduling problem (FJSP) is an extension of the classical JSP in which each operation can be processed by any machine in a given set rather than one specified machine. The FJSP is closer to a real manufacturing environment than the classical JSP. Owing to this practical applicability, the FJSP has become very important in both academic and application fields. However, it is more difficult than the classical JSP because it contains an additional decision problem: assigning operations to appropriate machines. Therefore, the FJSP is a problem of challenging complexity and was proven to be non-deterministic polynomial-time (NP)-hard [4].
In an initial study, Brucker and Schlie proposed a polynomial algorithm for solving the FJSP with two jobs [5]. During the past two decades, the FJSP attracted the interest of many researchers, and many approximation algorithms, mainly metaheuristics, were presented for solving it. Dauzere-Peres and Paulli [6] proposed a tabu search (TS) algorithm based on a new neighborhood structure for the FJSP. Mastrolilli and Gambardella [7] designed two neighborhood functions and presented an improved TS algorithm based on the one proposed in [6]. Mati et al. [8] proposed a genetic algorithm for solving the FJSP with blocking constraints. Regarding the FJSP, Mousakhani [9] developed a mixed-integer linear programming (MILP) model and designed an iterated local search algorithm to minimize total tardiness. Yuan et al. [10] designed a novel hybrid harmony search (HHS) algorithm based on the integrated approach for solving the FJSP with the objective of minimizing makespan. Tao and Hua [11] presented an improved bacterial foraging algorithm (BFOA) based on cloud computing to solve the multi-objective flexible job shop scheduling problem (MOFJSP). Gong et al. [12] proposed a double flexible job shop scheduling problem (DFJSP) with flexible machines and workers, and designed a new hybrid genetic algorithm (NHGA) to solve it. Wang et al. [13] presented a two-stage energy-saving optimization algorithm for the FJSP. In their method, the problem was divided into two subproblems, machine assignment and operation sequencing; an improved genetic algorithm was designed for the former and a genetic particle swarm hybrid algorithm for the latter. An improved particle swarm optimization (PSO) was developed by Marzouki et al. [14]. Yuan and Xu [15] designed memetic algorithms (MAs) for solving the MOFJSP with three objectives: makespan, total workload, and critical workload. Gao et al. [16] proposed a discrete harmony search (DHS) to solve the MOFJSP with two objectives: makespan, and the mean of earliness and tardiness. Piroozfard et al. [17] devised a novel multi-objective genetic algorithm (MOGA) for solving the problem with two conflicting objectives, total carbon footprint and total late work. Jiang et al. [18] presented a gray wolf optimization algorithm with a double-searching mode (DMGWO) to solve the energy-efficient job shop scheduling problem (EJSP). Singh and Mahapatra [19] proposed an improved PSO for the FJSP, in which quantum behavior and a logistic map were introduced. Wu and Sun [20] presented a green scheduling algorithm for solving the energy-saving flexible job shop scheduling problem (EFJSP).
Owing to their potential advantages, many metaheuristic algorithms have been proposed and improved to solve various problems [21][22][23][24]. The whale optimization algorithm (WOA) is a relatively new metaheuristic that imitates the hunting behavior of humpback whales in nature [25]. Because of its characteristics (simple principle, few parameter settings, and strong optimization performance), the WOA has been applied to various optimization problems in different fields, e.g., neural networks [26], feature selection [27], image segmentation [28], photovoltaic cells [29], the energy-efficient job shop scheduling problem [30], and the permutation flow shop scheduling problem [31]. This motivated us to present an improved whale optimization algorithm (IWOA) that minimizes the makespan of the FJSP. In the proposed IWOA, to make the WOA adaptive to the FJSP, the conversion between the whale individual position vector and the scheduling solution is implemented using the converting method proposed in the literature [10]. Then, an effective initialization scheme of reasonable quality is obtained by combining chaotic and opposition-based learning strategies. To converge quickly, a nonlinear convergence factor and an adaptive weight are introduced to balance the exploitation and exploration abilities of the algorithm. Furthermore, a variable neighborhood search operation is performed on the current optimal individual to enhance the accuracy and effectiveness of the local exploration. Experimental results on various benchmark instances show that the proposed IWOA obtains competitive results compared with existing algorithms in a short time.
The rest of this paper is organized as follows: Section 2 introduces the definition of the problem. Section 3 illustrates the original whale optimization algorithm. In Section 4, the proposed IWOA is described in detail. Section 5 presents the empirical results of the IWOA. Conclusions and suggestions for future work are provided in Section 6.

Problem Description
The FJSP is defined in this section. There are a set of n jobs J = {J_1, J_2, ..., J_n} and a set of q machines M = {M_1, M_2, ..., M_q}, where n_i is the number of operations of job J_i, m is the total number of operations, and O_ij represents the jth operation of job J_i. Each operation O_ij can be processed on one machine among its set of alternative machines M_ij, a subset of M. The FJSP can thus be decomposed into two subproblems: the routing subproblem of assigning each operation to a machine among the alternative machines M_ij, and the scheduling subproblem of sequencing the assigned operations on all machines to attain a feasible schedule optimizing a certain objective function.
The FJSP can be classified into the total FJSP (TFJSP) and the partial FJSP (PFJSP). In the TFJSP, each operation can be processed on every machine of M; in the PFJSP, each operation can be processed on only a subset of the machines in M.
Moreover, the following assumptions are made in our study: all jobs are processable at time 0; all machines are available at time 0; each machine can process at most one operation at a time; each operation, once started, must be completed without interruption; the transfer times between operations and the set-up times of machines are negligible.
In this study, the makespan was selected as the objective to be minimized. The mathematical model can be described as follows:

R_ijegk ∈ {0, 1}, i, e = 1, 2, ..., n; j = 1, 2, ..., n_i; g = 1, 2, ..., n_e; k = 1, 2, ..., q, (7)

where C_max is the maximal completion time of the jobs, C_i is the continuous variable for the completion time of job J_i, L_ijk denotes the processing time of operation O_ij on machine M_k, M_k denotes the kth machine of M, C_ijk is the continuous variable for the completion time of operation O_ij processed on machine M_k, S_ijk is the continuous variable for the start time of operation O_ij processed on machine M_k, and Y_ijk is a 0-1 variable: if operation O_ij is processed on machine M_k, Y_ijk = 1; otherwise, Y_ijk = 0. R_ijegk is a 0-1 variable: if operation O_ij is processed on machine M_k prior to operation O_eg when both can be processed on it, R_ijegk = 1; otherwise, R_ijegk = 0. Equation (1) indicates the optimization objective. Equation (2) ensures the operation precedence constraint. Equation (3) states that each operation must be completed once it starts. Equation (4) ensures that each machine can process only one operation at a time. Equation (5) ensures that each operation is processed exactly once. Equations (6) and (7) define the relevant 0-1 variables. Equation (8) denotes the non-negativity of the relevant variables.
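Equations (1)-(6) and (8) do not reproduce legibly in this copy; under the variable definitions above, a plausible big-M reconstruction is given below. The constant H (a sufficiently large number) and the exact inequality forms are our assumptions, not necessarily the authors' formulation:

```latex
\begin{aligned}
&\min\; C_{\max}, \qquad C_{\max} \ge C_i \;\; \forall i, && (1)\\
&S_{ijk} \ge C_{i(j-1)h} - H\,(2 - Y_{ijk} - Y_{i(j-1)h}) \;\; \forall i,\; j > 1,\; k,\; h, && (2)\\
&C_{ijk} \ge S_{ijk} + L_{ijk} - H\,(1 - Y_{ijk}) \;\; \forall i, j, k, && (3)\\
&S_{egk} \ge C_{ijk} - H\,(1 - R_{ijegk}) \;\; \forall (i,j) \ne (e,g),\; k, && (4)\\
&\sum_{k=1}^{q} Y_{ijk} = 1 \;\; \forall i, j, && (5)\\
&Y_{ijk} \in \{0,1\}, && (6)\\
&R_{ijegk} \in \{0,1\}, && (7)\\
&S_{ijk},\; C_{ijk},\; C_i,\; C_{\max} \ge 0. && (8)
\end{aligned}
```

Constraint (2) enforces precedence between consecutive operations of a job on their chosen machines, and (4) prevents two operations assigned to the same machine from overlapping, with R deciding their order.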

Whale Optimization Algorithm
The whale optimization algorithm (WOA) is a recent intelligent optimization algorithm that mimics the foraging behavior of humpback whales. After discovering the prey, the humpback whales swim toward it in a spiral way to surround it while emitting a bubble net for foraging. There are three kinds of predation methods, namely "encircling prey", "bubble-net attacking", and "search for prey"; among them, "bubble-net attacking" includes two approaches, namely the "shrinking encircling mechanism" and "spiral updating position". The humpback whale's foraging behavior can be described mathematically as shown below.

Encircling Prey
Since the position of the prey (best position) is unknown in the search space, the WOA assumes that the current optimal individual is the target prey or the individual closest to the target prey. After the optimal individual is discovered, the other individuals update their positions toward it; this behavior can be represented as follows:

→X(t + 1) = →X*(t) − →A·→D, (9)

→D = |→C·→X*(t) − →X(t)|, (10)

→A = 2→a·→r − →a, (11)

→C = 2→r, (12)

a = 2 − 2t/t_max, (13)

where t defines the current iteration, →X*(t) represents the position vector of the optimal individual attained so far, →X(t) defines the position vector of an individual whale, |·| represents the absolute value, and · means element-by-element multiplication. Furthermore, →r indicates a random vector in [0, 1], and a is a factor that linearly decreases from 2 to 0 according to Equation (13) over the course of the iterations, where t_max defines the maximum number of iterations.
The position of an individual whale can be updated according to the position of the current optimal individual. Different places around the current optimal individual can be reached from the current position by adjusting the values of →A and →C. It is possible to reach any position within the feasible solution domain by varying the random vector →r. Therefore, Equation (9) allows any individual whale to update its position in the neighborhood of the current optimal solution, simulating encircling the prey.
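In code, the encircling update amounts to a few vector operations. The sketch below (NumPy, with our own naming) assumes whale positions are real-valued vectors:

```python
import numpy as np

def encircle(x, x_best, a, rng):
    """Encircling-prey update (Eqs. (9)-(12)): move position x toward
    the current best position x_best using element-wise random vectors."""
    r = rng.random(x.shape)          # r ~ U[0, 1]
    A = 2 * a * r - a                # A in [-a, a], Eq. (11)
    C = 2 * rng.random(x.shape)      # C in [0, 2], Eq. (12)
    D = np.abs(C * x_best - x)       # distance to the best individual, Eq. (10)
    return x_best - A * D            # Eq. (9)

rng = np.random.default_rng(0)
x_new = encircle(np.zeros(6), np.ones(6), a=1.0, rng=rng)
```

Note that when a has shrunk to 0, A vanishes and the update collapses onto the current best position.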

Bubble-Net Attacking
In the exploitation phase, the humpback whales swim around the prey in a shrinking circle and along a spiral path simultaneously.To model these two mechanisms, it is assumed that there is a probability of 50% to choose between them to update the position of whales during the optimization process.

Shrinking Encircling Mechanism
This behavior is obtained by decreasing the fluctuation range of A in Equation (9).According to Equation (11), the fluctuation range of A can be decreased by a. Specifically, A is a random value in the interval [−a, a].Setting random values for A in [−1,1], the new position of an individual whale can be defined anywhere in between the original position of the individual and the position of the current optimal individual.

Spiral Updating Position
To model this mechanism, the distance between the whale and the prey (the current optimal individual position) is firstly calculated, and then a spiral path is built between the position of the whale and the prey to simulate the helix-shaped movement of the humpback whales, which can be defined as follows:

→D′ = |→X*(t) − →X(t)|, (14)

→X(t + 1) = →D′·e^{bl}·cos(2πl) + →X*(t), (15)

where →D′ is the absolute value of the distance between the current optimal individual →X*(t) and the current individual whale →X(t) at iteration t, b is a constant that defines the shape of the logarithmic spiral, l is a random number in [−1, 1], and · is an element-by-element multiplication.
Thus, the mathematical model of the bubble-net attacking behavior of humpback whales can be defined by Equation (16):

→X(t + 1) = →X*(t) − →A·→D, if p < 0.5;
→X(t + 1) = →D′·e^{bl}·cos(2πl) + →X*(t), if p ≥ 0.5, (16)

where p is a random number inside [0, 1].

Search for Prey
In contrast to the exploitation phase, the humpback whales also search for prey randomly; this mechanism is driven by the variation of the vector →A. When |→A| < 1, exploitation is achieved by updating the positions toward the current optimal individual; when |→A| ≥ 1, exploration is adopted by updating the positions toward a randomly chosen individual to search for the global optimum, which can be denoted as follows:

→X(t + 1) = →X_rand(t) − →A·→D, (17)

→D = |→C·→X_rand(t) − →X(t)|, (18)

where →X_rand(t) is an individual position vector randomly selected from the current population.
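Putting the three mechanisms together, one whale's position update can be sketched as below. Treating the |A| < 1 test as applying to all components of A is an implementation choice on our part, and all names are our own:

```python
import numpy as np

def woa_update(x, x_best, pop, a, b, rng):
    """One WOA position update, choosing among the three mechanisms:
    spiral move (p >= 0.5), encircling (p < 0.5 and |A| < 1),
    or random search (p < 0.5 and |A| >= 1)."""
    p = rng.random()
    r = rng.random(x.shape)
    A = 2 * a * r - a
    C = 2 * rng.random(x.shape)
    if p >= 0.5:                               # spiral updating, Eq. (15)
        l = rng.uniform(-1, 1)
        D = np.abs(x_best - x)
        return D * np.exp(b * l) * np.cos(2 * np.pi * l) + x_best
    if np.all(np.abs(A) < 1):                  # exploitation, Eq. (9)
        D = np.abs(C * x_best - x)
        return x_best - A * D
    x_rand = pop[rng.integers(len(pop))]       # exploration, Eqs. (17)-(18)
    D = np.abs(C * x_rand - x)
    return x_rand - A * D
```

In a full run, this update is applied to every individual each iteration while a shrinks from 2 to 0.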

Scheduling Solution Denotation
As mentioned above, the FJSP contains two subproblems, i.e., machine assignment and operation sequencing. To reflect this, a two-segment string of size 2m is used to represent the scheduling solution, where m is the total number of operations. The first segment chooses an appropriate machine for each operation, and the second segment represents the processing sequence of the operations on the machines. Taking a 3 × 3 (three jobs, three machines) FJSP as an example, where each job has two operations, the scheduling solution is shown in Figure 1. For the first segment, the element j means that the operation chooses the jth machine in its alternative machine set, where all elements are stored in a fixed order. For the second segment, each element represents a job code, where elements with the same value i denote different operations of the same job i, and O_ik presents the kth operation of job i.
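As a concrete illustration, the two-segment encoding for the 3 × 3 example can be written out as plain lists; the values below are illustrative, not those of Figure 1:

```python
from collections import Counter

# Two-segment encoding for a 3-job, 3-machine FJSP where each job
# has two operations (m = 6 operations in total).
machine_segment = [1, 2, 1, 3, 2, 1]    # j-th machine in each operation's alternative set
operation_segment = [1, 2, 1, 3, 2, 3]  # job codes; k-th occurrence of job i = operation O_ik

# Decode the operation segment into explicit operations O_ik:
seen = Counter()
ops = []
for job in operation_segment:
    seen[job] += 1
    ops.append((job, seen[job]))
# ops lists (job, operation index) pairs in processing order
```

Reading `operation_segment` left to right, the first 1 is O_11, the second 1 is O_12, and so on.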

Individual Position Vector
In our proposed IWOA, the individual position is still denoted as a multi-dimensional real vector, which also consists of a two-segment string of size 2m, i.e., X = {x(1), x(2), ..., x(2m)}. The first segment denotes the information of machine assignment, and the second segment presents the information of operation sequencing. For the above 3 × 3 FJSP, the individual position vector can be represented by Figure 2, where element values are listed in the given order. In addition, the intervals [x_min(j), x_max(j)] are all set as [−δ, δ], where δ presents the number of jobs.

Conversion Mechanism
Since the original WOA was proposed to tackle continuous problems while the FJSP is a discrete combinatorial problem, some measures must be implemented to construct the mapping relationship between the individual position vector and the discrete scheduling solution. In a previous study, Yuan et al. [10] proposed a method to implement the conversion between the continuous individual position vector and the discrete scheduling solution for the FJSP. Therefore, the conversion method in the literature [10] is used in this study.

Conversion from Scheduling Solution to Individual Position Vector
For the machine assignment segment, the conversion process can be represented by Equation (19). Here, x(i) denotes the ith element of the individual position vector, s(i) presents the number of alternative machines for the operation corresponding to the ith element, and n(i) means the serial number of the chosen machine in its alternative machine set; if s(i) = 1, then x(i) can be achieved by choosing a random value in the interval [−δ, δ].
For the operation sequence segment, it is first necessary to randomly generate m real numbers in the range [−δ, δ] corresponding to the scheduling solution. According to the ranked-order-value (ROV) rule, a unique ROV value is assigned to each random number in increasing order, so that each ROV value corresponds to an operation. Secondly, the ROV values are rearranged according to the coding order of the operations, and the random number corresponding to each rearranged ROV value becomes the value of the corresponding element of the individual position vector. The conversion process is shown in Figure 3.
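As a concrete illustration of the ROV step described above, the following sketch builds the operation-sequence segment of a position vector; the function name and the use of 0-based ranks are our own convention, not the paper's:

```python
import numpy as np

def sequence_to_vector(seq_ranks, delta, rng):
    """Map an operation-sequence segment to real values in [-delta, delta]
    via the ROV rule: sorted random numbers are assigned so that the
    ranks of the resulting elements reproduce the given operation order.
    seq_ranks[i] = rank (0-based) of operation i in the schedule."""
    vals = np.sort(rng.uniform(-delta, delta, len(seq_ranks)))
    return vals[np.asarray(seq_ranks)]   # smaller value = earlier operation

rng = np.random.default_rng(1)
x = sequence_to_vector([2, 0, 3, 1], delta=4, rng=rng)
```

Sorting `x` afterwards recovers exactly the ranks that were passed in, which is the property the decoding step relies on.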

Conversion from Individual Position Vector to Scheduling Solution
For the machine assignment segment, the conversion can be achieved by the reverse derivation of Equation (19), which is denoted by Equation (20). For the operation sequence segment, the ROV value is first assigned to each element of the individual position vector in increasing order of element value and then used as the fixed ID. Therefore, a new operation sequence can be obtained by mapping the ROV values to the operations, as shown in Figure 4.
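The reverse direction can be sketched the same way: sorting the elements of the segment yields the ROV order, and reading the fixed operation list in that order gives the new job sequence. The helper below is illustrative, not the paper's code:

```python
import numpy as np

def vector_to_sequence(x_seq, op_jobs):
    """Decode the operation-sequence segment of a position vector:
    ROV values are the ascending ranks of the elements; reading the
    fixed operation list in ROV order yields the new job sequence.
    op_jobs[i] = job code of the i-th element (fixed coding order)."""
    order = np.argsort(x_seq)            # indices sorted by value = ROV order
    return [op_jobs[i] for i in order]

seq = vector_to_sequence([0.7, -1.2, 0.1], op_jobs=[1, 2, 3])
# elements ordered by value: -1.2, 0.1, 0.7, so their jobs come out in that order
```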


Population Initialization
For a swarm intelligence optimization algorithm, the quality of the initial population is crucial for computational performance. In light of the characteristics of the FJSP, the population initialization process is implemented in two phases. In the machine assignment phase, better initial assignment schemes are generated by utilizing a chaotic reverse learning method. In the operation sequence phase, operation sequences are randomly generated. By combining each operation sequence with one of the initial assignment schemes, candidate scheduling solutions are generated and the fitness value of each is calculated. Then, the initial population is obtained by repeatedly choosing the scheduling solution with the best fitness value.
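The chaotic reverse (opposition-based) initialization described above can be sketched as follows. The logistic map with parameter 4 and the keep-best-half rule are common choices in the chaotic reverse learning literature and are assumptions here, since the paper's exact formulas are not reproduced:

```python
import numpy as np

def chaotic_opposition_init(pop_size, dim, lo, hi, fitness, seed=0.37):
    """Sketch of chaotic reverse (opposition-based) initialization:
    a logistic map generates candidates, their opposites are added,
    and the best pop_size candidates by fitness are kept."""
    z = seed
    chaotic = np.empty((pop_size, dim))
    for i in range(pop_size):
        row = np.empty(dim)
        for j in range(dim):
            z = 4.0 * z * (1.0 - z)        # logistic map on (0, 1)
            row[j] = z
        chaotic[i] = lo + (hi - lo) * row  # map into [lo, hi]
    opposite = lo + hi - chaotic           # opposition-based candidates
    pool = np.vstack([chaotic, opposite])
    scores = np.array([fitness(ind) for ind in pool])
    return pool[np.argsort(scores)[:pop_size]]  # keep the best half

pop = chaotic_opposition_init(10, 6, -3.0, 3.0, fitness=lambda v: np.sum(v * v))
```

In the IWOA itself, the fitness would be the makespan of the decoded scheduling solution rather than the toy quadratic used here.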

Nonlinear Convergence Factor
Like other swarm intelligence optimization algorithms, the coordination between the abilities of exploration and exploitation is important for the performance of the algorithm. In the original WOA, these abilities mainly depend on the convergence factor a. The larger the value of a is, the stronger the exploration ability is, and the WOA can search for the optimal solution in a large space. The smaller the value of a is, the stronger the exploitation ability is, and the algorithm concentrates the search in a small space around promising solutions. Therefore, to improve search efficiency, the value of a should be larger in the early stage of the iterations, which is beneficial for exploring a larger space, and smaller in the later stage, which is beneficial for refining the search around the current optimal solution. However, the value of a decreases linearly over the course of the iterations according to Equation (13), which cannot support such a nonlinear search process. Therefore, a nonlinear formulation of a is adopted in Equation (21), where t_max and t denote the maximum iteration and the current iteration, respectively.

Adaptive Weight
The improvement of a can enhance the optimization ability of the algorithm to some extent, but it cannot by itself effectively balance exploitation and exploration. Therefore, an adaptive weight and the nonlinear convergence factor a are combined to coordinate the exploitation and exploration abilities of the algorithm. The adaptive weight proposed in the literature [32] is used to improve the optimization performance, with the formula given by Equation (22), where t_max and t denote the maximum iteration and the current iteration, respectively. The improved iterative formulas in the WOA can be defined by Equations (23) and (24).
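Since Equations (21)-(24) themselves are not legible in this copy, the following sketch uses representative forms for the nonlinear convergence factor and the adaptive weight (both forms are our assumptions, not the paper's) to show how the two improvements cooperate in the update:

```python
import numpy as np

def nonlinear_a(t, t_max):
    """A representative nonlinear convergence factor: decreases from 2
    to 0, slowly at first and faster late in the run (assumed form,
    standing in for Eq. (21))."""
    return 2.0 * (1.0 - (t / t_max) ** 2)

def adaptive_weight(t, t_max):
    """A representative adaptive weight shrinking from 1 to 0 over the
    run (assumed cosine form, standing in for Eq. (22) from [32])."""
    return 0.5 * (1.0 + np.cos(np.pi * t / t_max))

def weighted_encircle(x, x_best, t, t_max, rng):
    """Improved encircling update in the spirit of Eqs. (23)-(24):
    the best position is scaled by the adaptive weight w."""
    a = nonlinear_a(t, t_max)
    w = adaptive_weight(t, t_max)
    A = 2 * a * rng.random(x.shape) - a
    C = 2 * rng.random(x.shape)
    D = np.abs(C * w * x_best - x)
    return w * x_best - A * D
```

Early in the run, a and w are large, favoring wide moves; late in the run both shrink, so updates stay close to the current best.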

Variable Neighborhood Search
In the local exploration phase, the whale individuals update their positions toward the current optimal individual →X* using Equation (16). Therefore, →X* determines the accuracy and effectiveness of the local exploration to some extent. Taking this into account, the variable neighborhood search strategy is used to improve the quality of the current optimal scheduling solution W*, so that the quality of the current optimal individual →X* is ameliorated as well. At the same time, an "iterative counter" is maintained for W* and initialized to 0. If W* does not change in an iteration, the "iterative counter" increases by 1; otherwise, it remains the same. When the "iterative counter" equals the stability threshold Ts (15 in this paper), the individuals are considered to have reached a steady state, and the variable neighborhood search strategy is performed on W*, allowing it to escape from the local optimum. To implement the strategy, three neighborhood structures were designed, as outlined below.
For the neighborhood structure N_1, two random positions holding different jobs are chosen in the second segment of the scheduling solution, and the jobs at the two positions are exchanged.
For the neighborhood structure N_2, two random positions holding different jobs are chosen in the second segment of the scheduling solution, and the job at the first position is inserted immediately behind the second position.
For the neighborhood structure N_3, a random position whose operation has more than one alternative machine is chosen in the first segment of the scheduling solution, and the current machine is replaced by a different machine from the alternative machine set.
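The three neighborhood structures can be sketched as simple list operations; for brevity, the check that the two chosen positions hold different jobs is omitted, and position arguments are assumed valid:

```python
import random

def n1_swap(seq, i, j):
    """N1: exchange the jobs at two positions of the sequence segment."""
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def n2_insert(seq, i, j):
    """N2: remove the job at position i and reinsert it immediately
    behind position j (assuming i < j)."""
    s = seq[:]
    job = s.pop(i)
    s.insert(j, job)   # after popping i, the old position j sits at j-1
    return s

def n3_reassign(machine_seg, alt_sizes, pos, rng=random):
    """N3: at a position with more than one alternative machine,
    replace the current machine index with a different one."""
    s = machine_seg[:]
    choices = [m for m in range(1, alt_sizes[pos] + 1) if m != s[pos]]
    s[pos] = rng.choice(choices)
    return s
```

For example, `n1_swap([1, 2, 1, 3], 0, 3)` exchanges the first and last jobs of the sequence.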
The new scheduling solution is evaluated after each variable neighborhood search operation. If the new scheduling solution is better than the original one, it replaces the original one. The procedure of the variable neighborhood search operation is illustrated in Algorithm 1.
Algorithm 1. The procedure of VNS.
Step 1: Set the current optimal scheduling solution W* as the initial solution W, with η = 1, q = 1, q_max = 3, and η_max representing the maximum number of iterations.
Step 2: If q = 1, set W′ = N_1(W); if q = 2, set W′ = N_2(W); if q = 3, set W′ = N_3(W), where W′ represents the new scheduling solution and N_i(W) represents applying the ith neighborhood structure to W, i = 1, 2, 3.
Step 3: Execute the local search operation on W′ to obtain the local optimal scheduling solution W″.
Step 4: If W″ is better than W, then set W = W″ and q = 1; otherwise, set q = q + 1.
Step 5: If q > q_max, then set η = η + 1 and go to step 6; otherwise, go to step 2.
In this study, the threshold acceptance method is used for the local search operation, which is shown as Algorithm 2.
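Algorithm 1 can be sketched as the following loop. Here `local_search` stands in for the threshold-acceptance procedure of Algorithm 2, which is not reproduced in this text, and the loop-back after exhausting the neighborhoods returns to the shaking step:

```python
def vns(w0, neighborhoods, local_search, evaluate, eta_max):
    """Sketch of Algorithm 1: cycle through the neighborhood
    structures, apply a local search to each shaken solution, and
    restart from the first neighborhood whenever an improvement is
    found (minimization of `evaluate`)."""
    w, q, eta = w0, 0, 0
    q_max = len(neighborhoods)
    while eta < eta_max:
        w_prime = neighborhoods[q](w)          # shaking (Step 2)
        w_second = local_search(w_prime)       # local search (Step 3)
        if evaluate(w_second) < evaluate(w):   # improvement test (Step 4)
            w, q = w_second, 0
        else:
            q += 1
        if q >= q_max:                         # all neighborhoods tried (Step 5)
            q, eta = 0, eta + 1
    return w

# Toy usage on a numeric "solution": minimize |w| with three simple moves.
best = vns(10,
           [lambda w: w - 1, lambda w: w + 2, lambda w: w],
           local_search=lambda w: w,
           evaluate=abs,
           eta_max=2)
```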

The Procedure of the Proposed IWOA
The detailed steps of the proposed IWOA can be described as Algorithm 3.
Algorithm 3. The procedure of IWOA.
Step 1: Set parameters and generate the initial population by utilizing the chaotic reverse learning strategy and search method.
Step 2: Calculate the fitness value of each scheduling solution in the population, and then find and retain the optimal scheduling solution W*.
Step 3: Judge whether the termination condition is met. If not, perform steps 4-7; otherwise, perform step 8.
Step 4: Judge whether the value of the "iterative counter" is equal to the stability threshold Ts. If so, go to step 5; otherwise, go to step 6.
Step 5: Employ the variable neighborhood search operation on W * , and update W * .
Step 6: Execute the conversion from scheduling solution to individual position vector, and retain the optimal individual position vector X* corresponding to W*.
Step 7: Update each individual position vector using Equations (17), (23), and (24), and execute the conversion from individual position vector to scheduling solution; set t = t + 1, and then go to step 2.
Step 8: The algorithm ends and outputs the optimal scheduling solution W * .
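The main loop of Algorithm 3 can be sketched as below. All operators are placeholders for the paper's components: the chaotic-reverse-learning initialization, the vector/schedule conversions, the WOA position update of Equations (17), (23), and (24), and the VNS of Algorithm 1. Resetting the iterative counter after the VNS call is an assumption, since the pseudocode does not state what happens to the counter after step 5.

```python
def iwoa(pop_size, max_iter, init_population, fitness,
         encode, decode, update_position, vns):
    """Skeleton of Algorithm 3; every argument is a placeholder operator."""
    population = init_population(pop_size)        # Step 1: CRL initialization
    best = min(population, key=fitness)           # Step 2: retain W*
    counter = 0                                   # the "iterative counter"
    for t in range(max_iter):                     # Step 3: termination test
        counter += 1
        if counter == 15:                         # Step 4
            best = vns(best)                      # Step 5: VNS on W*
            counter = 0                           # assumed: counter resets
        x_best = encode(best)                     # Step 6: W* -> X*
        vectors = [encode(w) for w in population]
        vectors = [update_position(x, x_best, t, max_iter)
                   for x in vectors]              # Step 7: WOA update (Eqs. 17, 23, 24)
        population = [decode(x) for x in vectors]
        candidate = min(population, key=fitness)  # back to Step 2
        if fitness(candidate) < fitness(best):
            best = candidate
    return best                                   # Step 8: output W*
```

With identity conversions and a toy continuous update rule that moves each individual halfway toward the current best, this skeleton steadily improves the best fitness, which is all the sketch is meant to demonstrate.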

Experimental Settings
To evaluate the performance of the proposed IWOA for solving the FJSP, the algorithm was coded in MATLAB 2016a and run on a computer configured with an Intel Core i5-8250 central processing unit (CPU) at 1.80 GHz, 8 GB of random-access memory (RAM), and a Windows 10 operating system. Fifteen well-known benchmark instances, comprising 10 instances taken from Brandimarte (BRdata) [33] and five instances taken from Kacem et al. (KAdata) [34], were chosen to test the proposed algorithm. These benchmark instances were used by many researchers to evaluate their approaches. For each benchmark instance, experimental simulations were run 20 times for each algorithm. After several preliminary experiments, the parameters of the proposed IWOA were set as follows: a population size of 100, maximum iterations of 1000, spiral constant b of one, and η_max and γ_max both set to 10.

Effectiveness of the Improvement Strategies
In this paper, three strategies were employed to enhance the performance of the IWOA, i.e., CRL, NFC and AW, and VNS. In this subsection, the effectiveness of these three strategies is evaluated first. In Table 1, the first and second columns present the name and size of the problems, and the computational data are listed in the following columns. "WOA" denotes the original whale optimization algorithm. "IWOA-1" is the WOA with both the nonlinear convergence factor and the adaptive weight applied. "IWOA-2" is the WOA with the variable neighborhood search strategy introduced. "IWOA" is the algorithm presented in this study. In addition, "Best" represents the best result in the 20 runs, "Avg" the average value over the 20 runs, and "Time" the mean computational time (in seconds) over the 20 runs. "LB" denotes the best value of makespan found so far. Boldface denotes the best mean result in the 20 runs. For a fair comparison, the same parameters were set for all compared algorithms; for instance, the population size was 100 and maximum iterations were 1000.
From the experimental results in Table 1, the following conclusions can be drawn: (1) in comparisons of the "Best" value, the IWOA was better than the other three algorithms, obtaining seven optimal values and outperforming IWOA-1 in 12 out of 15 instances, IWOA-2 in nine out of 15 instances, and WOA in 13 out of 15 instances; (2) in comparisons of the "Time" value, WOA required less time than the other three algorithms; compared with IWOA-1, the increase in the IWOA's computation time was mainly the result of the added variable neighborhood search operation, which increased the time complexity of the algorithm; (3) in comparisons of the "Avg" value, the IWOA obtained all the optimal values, outperforming WOA and IWOA-1 in 15 out of 15 instances and IWOA-2 in 13 out of 15 instances.

Effectiveness of the Proposed IWOA
To demonstrate the effectiveness of the proposed IWOA, the second experiment was executed on KAdata. In Table 2, the proposed algorithm is compared with the knowledge-based ant colony optimization (KBACO) [35], the hybrid tabu search algorithm (TSPCB) [36], and the hybrid gray wolf optimization algorithm (HGWO) [37]. The first column presents the name of the problems. "Best" represents the best makespan, "Avg" the average makespan, and "Time" the mean computational time of the instance. "LB" denotes the best value of makespan found so far, and "Avg-T" is the mean computational time over KAdata. As can be seen, the proposed IWOA obtained three optimal values in solving KAdata, compared with five for KBACO, five for TSPCB, and four for HGWO. However, the average computational time of the IWOA was very low, at only 4 s (on a Lenovo ThinkPad E480 with an i5-8250 CPU at 1.80 GHz and 8 GB RAM), compared to 4978.8 s (in MATLAB on a Dell Precision 650 workstation with a Pentium IV 2.4 GHz CPU and 1 GB RAM) for KBACO, 214.8 s (in C++ on a Pentium IV 1.6 GHz CPU and 512 MB RAM) for TSPCB, and 19 s (in Fortran on a Pentium G2030 CPU at 3.0 GHz and 2 GB RAM) for HGWO. Because the computers used to run the programs were different, a direct comparison among the running times of the different algorithms is difficult. However, even allowing for some differences in processor speed, the IWOA was clearly faster than the other three algorithms.
Another experiment was implemented on BRdata. Table 3 compares our proposed IWOA with the following six algorithms: KBACO [35], TSPCB [36], HGWO [37], the artificial immune algorithm (AIA) [38], particle swarm optimization combined with tabu search (PSO+TS) [39], and the tabu search metaheuristic with a new neighborhood structure called the "golf neighborhood" (TS3) [40]. The first column stands for the name of the problems, and the second column represents the best value found so far. "Best" represents the best makespan. "Mean" represents the average "RPD" over the 20 runs. Boldface denotes the best "RPD" in the 20 runs. "RPD" represents the relative percentage deviation from "LB" and is calculated as RPD = (Best − LB)/LB × 100. From Table 3, the following conclusions can be easily drawn: (1) in comparisons of the "Best" value, the proposed IWOA showed competitive performance on BRdata, obtaining four optimal values and outperforming KBACO in seven out of 10 instances, TS3 and PSO+TS in nine out of 10 instances, and HGWO in eight out of 10 instances, while it was equal to both AIA and TSPCB in six out of 10 instances; (2) in comparisons of the "RPD" value, the proposed IWOA obtained five optimal values, outperforming KBACO in seven out of 10 instances, both TS3 and PSO+TS in nine out of 10 instances, and HGWO in eight out of 10 instances, while it was inferior to both AIA and TSPCB in three out of 10 instances; (3) in comparisons of the "Mean" value, the value for the proposed IWOA was very low at only 4.91, outperforming the 5.65 for KBACO, 10.12 for HGWO, 23.89 for PSO+TS, and 13.34 for TS3, while it was inferior to the 2.78 for TSPCB and 2.22 for AIA. Overall, the IWOA obtained the best values in an acceptable time.
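The RPD metric used in Table 3 can be computed as follows; the closed-form expression is the standard definition of relative percentage deviation with respect to a lower bound, consistent with how "RPD" is described in the text.

```python
def rpd(best_makespan, lb):
    """Relative percentage deviation of a makespan from the lower bound LB.

    Assumes the usual definition, RPD = (Best - LB) / LB * 100, matching
    the verbal description of "RPD" in the paper.
    """
    return (best_makespan - lb) / lb * 100.0
```

For example, a best makespan of 75 against a lower bound of 50 yields an RPD of 50%, and an RPD of 0 means the algorithm reached the lower bound.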

Figure 3. The conversion process from operation sequence to individual position vector.

Figure 4. The conversion from individual position vector to operation sequence.


Table 1. Effectiveness analysis of the improvement strategies. See Section 5.2 for column descriptions. WOA: whale optimization algorithm.

Table 2. Comparison between the proposed improved WOA (IWOA) and existing algorithms on KAdata. See Section 5.3 for column descriptions.