Article

Equilibrium Optimizer and Slime Mould Algorithm with Variable Neighborhood Search for Job Shop Scheduling Problem

1 Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
2 Xiangsihu College, Guangxi University for Nationalities, Nanning 530225, China
3 College of Artificial Intelligence, Guangxi University for Nationalities, Nanning 530006, China
4 Guangxi Key Laboratories of Hybrid Computation and IC Design Analysis, Nanning 530006, China
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(21), 4063; https://doi.org/10.3390/math10214063
Submission received: 23 September 2022 / Revised: 23 October 2022 / Accepted: 25 October 2022 / Published: 1 November 2022

Abstract: Job Shop Scheduling Problem (JSSP) is a well-known NP-hard combinatorial optimization problem. In recent years, many scholars have proposed various metaheuristic algorithms to solve JSSP, which play an important role in solving small-scale instances. However, as the problem size increases, these algorithms usually take too long to converge. In this paper, we propose a hybrid algorithm, EOSMA, which mixes the update strategy of Equilibrium Optimizer (EO) into Slime Mould Algorithm (SMA) and adds Centroid Opposition-based Computation (COBC) in some iterations. The hybridization of EO with SMA strikes a better balance between exploration and exploitation. The addition of COBC strengthens exploration and exploitation, increases the diversity of the population, improves convergence speed and accuracy, and helps avoid local optima. To solve discrete problems efficiently, a Sort-Order-Index (SOI)-based encoding method is proposed. To solve JSSP more efficiently, a neighborhood search strategy based on two-point exchange is added to the iterative process of EOSMA to improve its exploitation capability. EOSMA is then applied to 82 JSSP benchmark instances, and its performance is compared with that of EO, Marine Predators Algorithm (MPA), Aquila Optimizer (AO), Bald Eagle Search (BES), and SMA. The experimental results and statistical analysis show that the proposed EOSMA outperforms the competing algorithms.

1. Introduction

Job Shop Scheduling Problem (JSSP) has become a hot topic in the manufacturing industry; a reasonable JSSP solution can effectively help manufacturers improve productivity and reduce production costs. However, JSSP has been proved to be an NP-hard problem, among the most difficult problems to solve [1]. This means that even medium-sized JSSP instances cannot be guaranteed an optimal solution in finite time with exact solution methods [2]. Therefore, many researchers have turned their attention to metaheuristic algorithms. According to their algorithmic principles, metaheuristic algorithms may be grouped into three categories: evolution-based, physics-based, and swarm-based [3]. The Genetic Algorithm (GA) [4] and Differential Evolution (DE) [5] are the two primary evolution-based algorithms, developed to simulate Darwinian biological evolution. The most common physics-based algorithms include Simulated Annealing (SA) [6], Gravitational Search Algorithm (GSA) [7], Multi-verse Optimizer (MVO) [8], Atom Search Optimization (ASO) [9], and Equilibrium Optimizer (EO) [10]; all are inspired by the principles of physics. Swarm-based algorithms mainly simulate the cooperative properties of natural biological communities. Typical algorithms include Particle Swarm Optimization (PSO) [11], Artificial Bee Colony (ABC) [12], Social Spider Optimization (SSO) [13], Gray Wolf Optimizer (GWO) [14], Whale Optimization Algorithm (WOA) [15], Seagull Optimization Algorithm (SOA) [16], Salp Swarm Algorithm (SSA) [17], Harris Hawks Optimization (HHO) [18], Teaching Learning-based Optimization (TLBO) [19], Aquila Optimizer (AO) [20], Bald Eagle Search (BES) [21], Slime Mould Algorithm (SMA) [22], Marine Predators Algorithm (MPA) [23], Chameleon Swarm Algorithm (CSA) [24], and Adolescent Identity Search Algorithm (AISA) [25].
In recent years, algorithms that have been used to solve JSSP include GA [26], Taboo Search Algorithm (TSA) [27], SA [28], PSO [29], Ant Colony Optimization (ACO) [30], ABC [31], TLBO [32], Bat Algorithm (BA) [33], Biogeography-based Optimization (BBO) [34], Harmony Search (HS) [35], WOA [36], and HHO [37]. An increasing number of metaheuristic and hybrid algorithms have been developed and improved, providing new ideas and directions for solving the JSSP. However, to the authors' knowledge, no published studies apply EO or SMA to solve JSSP-related problems.
A novel bio-inspired optimization technique called Slime Mould Algorithm (SMA) was proposed by Li et al. in 2020 [22]. It is inspired by the oscillatory behavior of slime mould during foraging. Because it is easy to understand and implement, it has attracted the attention of many scholars since it was proposed and has been applied in various fields. For example, Wei et al. [38] proposed an improved SMA (ISMA) to solve the optimal reactive power dispatch problem in power systems. Abdel-Basset et al. [39] applied SMA mixed with WOA (HSMA-WOA) to X-ray image detection of COVID-19, evaluating the performance of HSMA-WOA on 12 chest X-ray images against 5 other algorithms. Liu et al. [40] proposed an SMA integrating a Nelder–Mead simplex strategy and chaotic mapping (CNMSMA) and applied it to the photovoltaic parameter extraction problem, testing it on three photovoltaic modules. Yu et al. [41] proposed an enhanced SMA (ESMA) based on an opposition-based learning strategy and an elite chaotic search strategy, which was used to predict the water demand of Nanchang city and tested on four models, showing a prediction accuracy of 97.705%. Hassan et al. [42] proposed an improved SMA (ISMA) combined with the Sine Cosine Algorithm (SCA) and applied it to single- and bi-objective economic and emission dispatch problems on five test systems; the results showed that the proposed algorithm is more robust than other well-known algorithms. Zhao et al. [43] proposed an improved SMA (DASMA) based on a diffusion mechanism and an association strategy and applied it to Renyi's entropy multilevel thresholding image segmentation based on a two-dimensional histogram with nonlocal means; the experimental results show that the proposed algorithm performs well. Yu et al.
[44] proposed an improved SMA (WQSMA), which employs a quantum rotating gate and a water cycle operator to improve the robustness of basic SMA and keep the algorithm balanced between exploration and exploitation. Rizk-Allah et al. [45] proposed a Chaos Opposition SMA (CO-SMA) to minimize the energy cost of wind turbines at high-altitude sites. Houssein et al. [46] proposed a hybrid algorithm called SMA-AGDE by mixing SMA with adaptive guided DE (AGDE), combining the exploitation capability of SMA with the exploration capability of AGDE; SMA-AGDE was validated on CEC2017 and three engineering design problems. Premkumar et al. [47] proposed a multi-objective SMA (MOSMA) to solve multi-objective engineering optimization problems. Although SMA has been applied in many fields, researchers have found that it also has shortcomings, such as insufficient global search capability and a tendency to fall into local optima. In this paper, in order to broaden the application of SMA, we first hybridize the search strategy of EO with SMA (EOSMA), which balances exploration and exploitation, increases population diversity, improves robustness, and enhances the generalization capability of the algorithm. Then, we introduce Centroid Opposition-based Computation (COBC) [48] into the hybrid algorithm to strengthen its performance, help search agents jump out of local optima, improve the probability of finding the global optimum, and accelerate convergence. Since the search space of JSSP is large and the problem is discrete, a local search operator based on the Two-point Exchange Neighborhood (TEN) [31] is incorporated into EOSMA to solve JSSP quickly and efficiently, and a Sort-Order-Index (SOI)-based encoding method is designed to handle the discrete decision variables.
The main contributions of this paper answer the following questions:
  • Whether the proposed SOI-based encoding method is effective for JSSP;
  • How SMA can be efficiently combined with the EO algorithm and COBC strategy;
  • Whether neighborhood structure combined with EOSMA is more efficient for JSSP;
  • Whether EOSMA can solve high-dimensional JSSP instances quickly and efficiently.
In this paper, 82 JSSP instance datasets from Operations Research Library (OR Library) are used to test the performance of the proposed EOSMA in comparison with SMA, EO and the newly proposed MPA, AO, and BES with the same neighborhood search. In order to facilitate readers to read and understand the research content of this paper, Table 1 lists the abbreviations used in subsequent sections.

2. Preliminaries

2.1. Job Shop Scheduling Problem

JSSP can be described as follows: there are m machines and n jobs; each job contains k operations, so the total number of operations is n × k. Each operation has a specified processing time T_i and a processing machine M_i, each machine can process only one operation at a time, and each job must be produced according to a predefined production sequence P_i. The completion time of operation i is denoted C_i, and the total completion time of all jobs is denoted C_max = max(C_i). The objective of JSSP is to generate a reasonable operation scheduling scheme X that minimizes the maximum completion time C_max over all jobs. The parameters are explained in Table 2.
In summary, the mathematical model of JSSP can be described by Equation (1):
Minimize: C_max
s.t.  C_q ≤ C_i − T_i,  i = 1, 2, …, n × k;  q ∈ P_i
      Σ_{j ∈ A(t)} R_{j,m} ≤ 1,  m ∈ M;  t ≥ 0
      C_i ≥ 0,  i = 1, 2, …, n × k
where q denotes the predecessor operation before operation i , C q denotes the completion time of operation q , and M denotes the set of processing machines. The first constraint indicates the priority relationship between operations, that is, the completion time of any operation in front of the operation plus the processing time of the current operation should be less than or equal to the completion time of the current operation; the second constraint indicates that at most one operation can be processed at the same time on a machine; the third constraint indicates that the completion time of any operation must be a non-negative number.
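To make the constraints concrete, the following sketch decodes a given operation sequence into its makespan C_max by enforcing the two scheduling rules above: precedence within each job, and at most one operation per machine at a time. The 2-job, 2-machine instance data are hypothetical, not taken from the paper.

```python
def makespan(op_sequence, machine, ptime, n_jobs):
    """Semi-active decoding: scan operations left to right; each operation
    starts as soon as both its job and its machine become free, which
    satisfies the precedence and machine-capacity constraints of Eq. (1)."""
    job_ready = {j: 0 for j in range(1, n_jobs + 1)}   # completion of job's last op
    mach_ready = {}                                     # completion time per machine
    next_op = {j: 0 for j in range(1, n_jobs + 1)}      # next operation index per job
    for job in op_sequence:
        k = next_op[job]
        m, t = machine[job][k], ptime[job][k]
        start = max(job_ready[job], mach_ready.get(m, 0))
        job_ready[job] = mach_ready[m] = start + t
        next_op[job] = k + 1
    return max(job_ready.values())

# Hypothetical 2-job, 2-machine instance
machine = {1: [1, 2], 2: [2, 1]}   # machine of each operation, per job
ptime   = {1: [3, 2], 2: [2, 4]}   # processing time of each operation
print(makespan([1, 2, 1, 2], machine, ptime, 2))   # -> 7
```

Interleaving the jobs as [1, 2, 1, 2] yields C_max = 7, whereas the sequential order [1, 1, 2, 2] yields 11, illustrating how the operation order determines the makespan.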

2.2. Encoding and Decoding Mapping

Since both SMA and EO were proposed for continuous problems, they cannot be used directly to solve the discrete JSSP. Therefore, in this paper, we propose a novel heuristic rule called the Sort-Order-Index (SOI)-based encoding method, which maps the real-number encoding to an integer encoding, making the proposed EOSMA applicable to JSSP. The solution vector of EOSMA does not directly represent the processing order of the jobs, but its components have an order relationship. The SOI-based encoding method uses this ordering relationship to map the continuous locations of the slime mould into a discrete processing order, i.e., the processing order of all operations of all jobs, as shown in Figure 1. The method works as follows: first, the components of the search agent are sorted in ascending order, and each component is assigned the rank of its value in the sorted order, forming the sort index sortX; then the sort index sortX is taken modulo the number of jobs n to obtain the integer-encoded solution vector X, as shown in Equation (2).
X = (sortX mod n) + 1
Each solution vector X corresponds to a processing order, which is a scheduling scheme. By this transformation, the feasibility of the scheduling scheme can be guaranteed without modifying the evolutionary operation of the algorithm. For example, in the scheduling sequence shown in Figure 1, there are three jobs, each containing three operations, and the number of occurrences of each job represents the corresponding operation from left to right. Since the processing machine and time for each operation are pre-specified (as shown in Table 3), the sequence of operations in Figure 1 can be decoded into the scheduling Gantt chart shown in Figure 2.
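The SOI mapping of Equation (2) can be sketched in a few lines; the sample component values below are hypothetical, chosen only to illustrate a 3-job, 3-operation case. Because every rank 0, …, n × k − 1 appears exactly once, each job index is produced exactly k times, so the decoded sequence is always feasible.

```python
import numpy as np

def soi_encode(position, n_jobs):
    """Sort-Order-Index encoding (Equation (2)): rank each component of the
    real-valued position vector, then map (rank mod n_jobs) + 1 to a job."""
    sort_index = np.argsort(np.argsort(position))   # 0-based rank of each component
    return (sort_index % n_jobs) + 1

# 3 jobs x 3 operations -> 9 real-valued components (hypothetical values)
x = np.array([0.62, 0.15, 0.88, 0.40, 0.05, 0.73, 0.31, 0.95, 0.51])
print(soi_encode(x, 3))   # -> [3 2 2 1 1 1 3 3 2]; each job appears k = 3 times
```

The evolutionary operators keep working on the continuous vector x; only the fitness evaluation passes through this mapping.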

3. Related Works

3.1. Slime Mould Algorithm

Slime Mould Algorithm (SMA) is a swarm-based metaheuristic algorithm developed by Li et al. in 2020 [22]. It simulates the behavioral and morphological changes of slime mould during foraging to find the best food source. The mathematical model for updating the location of slime mould is seen in Equation (3):
X(t+1) = { rand · (UB − LB) + LB,                 rand < z
           X_b(t) + vb · (W · X_A(t) − X_B(t)),   r < p
           vc · X(t),                             r ≥ p
where LB and UB denote the lower and upper bounds of the search range; rand and r denote random numbers in [0, 1]; z = 0.03 is an adjustable parameter; X_b(t) is the best location found so far; vb and vc are random parameter vectors, with vb taking values in [−a, a] and the range of vc decreasing linearly from 1 to 0 as the number of iterations t increases; W is the weight simulating the thickness of the vein-like vessels; X_A(t) and X_B(t) are the locations of two randomly selected individuals in the population; and X(t) is the location of the slime mould.
The value of p is calculated as Equation (4):
p = tanh|S(i) − DF|
where i ∈ {1, 2, …, n}, S(i) denotes the fitness of X_i, and DF denotes the best fitness value obtained so far.
The value of a in the range of v b is calculated as Equation (5):
a = arctanh(1 − t/max_t)
where max _ t is the maximum number of iterations.
The formula of W is calculated as Equation (6):
W(SmellIndex(i)) = { 1 + r · log((bF − S(i))/(bF − wF) + 1),   condition
                     1 − r · log((bF − S(i))/(bF − wF) + 1),   others
SmellIndex = sort(S)
where c o n d i t i o n denotes the individuals whose fitness S ( i ) ranks in the top half, r denotes a random number in [0, 1], b F denotes the best fitness of the current iteration, w F denotes the worst fitness of the current iteration, and S m e l l I n d e x denotes the result of ranking the fitness S ( i ) in ascending order (in the minimization problem). The pseudo-code of SMA is shown in Algorithm 1 [22].
Algorithm 1: Pseudo-code of SMA
1. Initialize the parameters z, max_t, N, Dim;
2. Initialize the locations of slime mould X_i (i = 1, 2, …, N);
3. While (t ≤ max_t)
4.   Check the boundary and calculate the fitness S;
5.   Sort the fitness S;
6.   Update bF, wF, DF, X_b;
7.   Calculate W by Equation (6);
8.   Update p, vb, vc, A, B;
9.   For each search agent
10.    Update locations by Equation (3);
11.  End For
12.  t = t + 1;
13. End While
14. Return DF, X_b;
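A single SMA position update, following Equations (3)-(6), can be sketched as below. This is a minimal illustration under a few assumptions: minimization, t running from 1 to max_t, log10 used in the weight formula, and a greedy bound clip at the end; the function name and signature are hypothetical.

```python
import numpy as np

def sma_step(X, fitness, X_best, best_fit, t, max_t, lb, ub, z=0.03):
    """One SMA position update (Equations (3)-(6)); minimal sketch."""
    N, D = X.shape
    order = np.argsort(fitness)                      # ascending: best first
    bF, wF = fitness[order[0]], fitness[order[-1]]
    eps = 1e-12
    # Fitness weights W, Eq. (6): top half amplified, bottom half damped
    W = np.empty_like(X)
    for rank, i in enumerate(order):
        term = np.log10((bF - fitness[i]) / (bF - wF + eps) + 1)
        r = np.random.rand(D)
        W[i] = 1 + r * term if rank < N / 2 else 1 - r * term
    a = np.arctanh(1 - t / max_t) if t < max_t else 0.0   # Eq. (5)
    vc_bound = 1 - t / max_t                         # range of vc shrinks to 0
    new_X = np.empty_like(X)
    for i in range(N):
        if np.random.rand() < z:                     # random relocation branch
            new_X[i] = lb + np.random.rand(D) * (ub - lb)
        elif np.random.rand() < np.tanh(abs(fitness[i] - best_fit)):  # Eq. (4)
            vb = np.random.uniform(-a, a, D)
            A, B = np.random.randint(N, size=2)      # two random individuals
            new_X[i] = X_best + vb * (W[i] * X[A] - X[B])
        else:
            vc = np.random.uniform(-vc_bound, vc_bound, D)
            new_X[i] = vc * X[i]
    return np.clip(new_X, lb, ub)
```

In EOSMA, described later, the first (random relocation) branch is the part that gets replaced by the EO concentration update.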

3.2. Equilibrium Optimizer

Faramarzi et al. [10] proposed the Equilibrium Optimizer (EO) in 2020, a novel optimization algorithm inspired by physical phenomena of control volume mass balance models. The mass balance equation is usually described by a first-order ordinary differential equation, as shown in Equation (8), which embodies the physical processes of entrance, departure, and generation of mass inside the control volume.
V · dC/dt = Q · C_eq − Q · C + G
Here, V denotes the control volume, C denotes the concentration within the control volume, Q denotes the volume flow rate into or out of the control volume, C e q denotes the concentration when equilibrium is achieved, and G denotes the mass generation rate in the control volume.
Equation (9) can be obtained by solving the ordinary differential equation described by Equation (8):
C = C_eq + (C_0 − C_eq) · F + (G/(λV)) · (1 − F)
where C 0 is the concentration of the control volume at the initial start time t 0 , λ is the flow rate, and F is the exponential term coefficient, which can be calculated by Equation (10).
F = exp[−λ(t − t_0)]
EO performs its iterative search mainly based on Equation (9). For an optimization problem, a concentration represents an individual solution: C represents the solution generated by the current iteration, C_0 represents the solution obtained in the previous iteration, and C_eq represents the best solution found so far.
To meet the optimization needs of different problems, the specific operation procedure and parameters of EO are designed as follows.
(1) Initialization: the algorithm performs random initialization within the upper and lower bounds of each optimization variable, as Equation (11):
C_i^0 = C_min + rand_i · (C_max − C_min),  i = 1, 2, …, N
where C min and C max are the lower and upper bound of the optimization variables, respectively, and r a n d i represents the random number vector for individual i , each element in [0, 1];
(2) Equilibrium pool: In order to improve the exploration capability of the algorithm and avoid falling into local optimum, the equilibrium state (i.e., the optimal individual) in Equation (9) will be selected from the five candidate solutions of the equilibrium pool, which is shown in Equation (12):
C_eq,pool = {C_eq,1, C_eq,2, C_eq,3, C_eq,4, C_eq,ave}
where C e q , 1 , C e q , 2 , C e q , 3 , C e q , 4 are the four best solutions found so far, and C e q , a v e represents the average concentration of the four optimal solutions. The five solutions in the equilibrium pool are chosen as C e q with equal probability;
(3) Exponential term factor F : In order to better balance the exploration and exploitation capabilities of the algorithm, Equation (10) is improved as Equation (13).
F = a1 · sign(r − 0.5) · (e^(−λ·t1) − 1)
t1 = (1 − t/max_t)^(a2 · t/max_t)
where a 1 means the weight constant coefficient of the global search, the larger a 1 the stronger the exploration ability of the algorithm and the weaker the exploitation ability, s i g n is the sign function, r and λ represent the random number vector, each element in [0, 1], t is the number of current iterations, and max _ t is the maximum number of iterations;
(4) Mass generation rate G : In order to enhance the exploitation capability of the algorithm, the generation rate is designed as Equation (14):
G = GCP · (C_eq − λ · C) · F
GCP = { 0.5 · r1,   r2 ≥ GP
        0,          r2 < GP
where G C P is the vector of generation rate control parameter, r 1 and r 2 are random numbers in [0, 1], and G P = 0.5 is the generation probability.
Finally, the individual solution can be updated as shown in Equation (15):
C = C_eq + (C − C_eq) · F + (G/(λV)) · (1 − F)
where V = 1 is considered as a unit.
The pseudo-code of EO is shown in Algorithm 2 [10].
Algorithm 2: Pseudo-code of EO
1. Initialize the parameters a1, a2, V, GP, max_t, N, Dim;
2. Initialize the concentrations in the control volume C_i (i = 1, 2, …, N);
3. Initialize the equilibrium pool C_eq,pool;
4. While (t ≤ max_t)
5.   Check the boundary and calculate the fitness FitC;
6.   Update the equilibrium pool C_eq,pool;
7.   Update C and FitC with greedy strategy;
8.   For each search agent
9.     Update the random variables λ, r, r1, r2;
10.    Randomly select C_eq from C_eq,pool;
11.    Calculate F and G by Equations (13) and (14);
12.    Update concentrations C by Equation (15);
13.  End For
14.  t = t + 1;
15. End While
16. Return C_eq,1 and its fitness;
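One EO concentration update, following Equations (12)-(15), can be sketched as follows. This is a minimal illustration, not the reference implementation: the equilibrium pool is passed in as the four best solutions found so far, the default parameter values a1 = 2, a2 = 1, GP = 0.5, V = 1 follow the settings described above, and the function name is hypothetical.

```python
import numpy as np

def eo_step(C, eq_pool, t, max_t, a1=2.0, a2=1.0, GP=0.5, V=1.0):
    """One EO concentration update (Equations (12)-(15)); minimal sketch.
    C: (N, D) population; eq_pool: the four best solutions found so far."""
    N, D = C.shape
    pool = list(eq_pool) + [np.mean(eq_pool, axis=0)]   # Eq. (12): add C_eq,ave
    t1 = (1 - t / max_t) ** (a2 * t / max_t)            # Eq. (13): time factor
    new_C = np.empty_like(C)
    for i in range(N):
        Ceq = pool[np.random.randint(len(pool))]        # equal-probability pick
        lam, r = np.random.rand(D), np.random.rand(D)
        F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t1) - 1)   # Eq. (13)
        r1, r2 = np.random.rand(), np.random.rand()
        GCP = 0.5 * r1 if r2 >= GP else 0.0             # Eq. (14): control parameter
        G = GCP * (Ceq - lam * C[i]) * F                # Eq. (14)
        new_C[i] = Ceq + (C[i] - Ceq) * F + G * (1 - F) / (lam * V)  # Eq. (15)
    return new_C
```

Note that F and λ are drawn per dimension, so each component of a solution moves with its own step size; GCP is drawn per individual, so generation is switched on or off for a whole solution at once.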

3.3. Centroid Opposition-Based Computation

Centroid Opposition-based Computation (COBC) is an opposition-based computation scheme proposed by Rahnamayan et al. in 2014 [48]. Experimental results have shown that the average performance of COBC improves by 15% over the conventional opposition-based computation method, making it a better improvement strategy. Interested readers can find a detailed description of COBC in [48]. The pseudo-code of COBC is shown in Algorithm 3.
Algorithm 3: Pseudo-code of COBC
1. Get the initial locations X_i (i = 1, 2, …, N);
2. Centroid point evaluation: M = mean(X);
3. Centroid opposite population calculation: OX_i = 2M − X_i (i = 1, 2, …, N);
4. For each search agent
5.   Calculate the fitness of OX;
6.   If FitOX < FitX
7.     Update X and FitX with greedy strategy;
8.   End If
9. End For
10. Return X, FitX;
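Algorithm 3 can be sketched compactly: reflect each individual through the population centroid and greedily keep the better of the pair. This is a minimal illustration for a minimization objective; the function name is hypothetical, and boundary handling is omitted.

```python
import numpy as np

def cobc(X, fit, objective):
    """Centroid Opposition-based Computation (Algorithm 3); minimal sketch.
    Each individual is reflected through the population centroid M and the
    better of the original/opposite pair is kept (greedy, minimization)."""
    M = X.mean(axis=0)                       # centroid point M = mean(X)
    OX = 2.0 * M - X                         # centroid-opposite population
    fit_OX = np.apply_along_axis(objective, 1, OX)
    better = fit_OX < fit                    # greedy selection mask
    X = np.where(better[:, None], OX, X)
    fit = np.where(better, fit_OX, fit)
    return X, fit
```

Because selection is greedy, the fitness of every individual is non-increasing after a COBC step, which is what makes it safe to apply only in some iterations (with probability Jr in Algorithm 5).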

3.4. Variable Neighborhood Search

Variable Neighborhood Search (VNS) [49] is a local search algorithm that alternates among neighborhood structures composed of different moves to achieve a good balance between intensification and diversification. VNS is often used to solve combinatorial optimization problems and relies on two facts: (1) a locally optimal solution of one neighborhood structure is not necessarily a locally optimal solution of another neighborhood structure; (2) a globally optimal solution is a locally optimal solution of all possible neighborhoods. In order to enhance the local search capability of the metaheuristic algorithm for solving JSSP, a simplified VNS is introduced into the hybrid algorithm in this paper. The pseudo-code of the VNS is shown in Algorithm 4.
Algorithm 4: Pseudo-code of VNS
1. Get the initial solution X = X_0;
2. Calculate the fitness FitX of X;
3. Set L = length(X);
4. While (step ≤ L)
5.   Take random integers i and j from 1 to L, with i ≠ j;
6.   Update XX = Exchanging(X, i, j);
7.   Calculate the fitness FitXX of XX;
8.   If FitXX < FitX
9.     Update X = XX and FitX = FitXX;
10.  End If
11.  step = step + 1;
12. End While
13. If FitX < FitX_0
14.   Return X;
15. Else
16.   Return X_0;
17. End If
Exchanging(X, i, j) executes a Two-point Exchange Neighborhood (TEN) move [50], which swaps the job operations in solution X between the ith and jth dimensions; its pseudo-code is shown in [32]. An example of the exchanging process is shown in Figure 3. It is worth noting that, unlike the setup in [32], in order to reduce the time complexity of VNS, Exchanging(X, i, j) in this paper does not evaluate all solutions of the TEN (a total of L(L − 1)/2 solutions); only L solutions are randomly selected for evaluation, and the best solution among them is chosen.
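The simplified VNS of Algorithm 4 amounts to L random two-point swaps with greedy acceptance. The sketch below illustrates this on a toy objective (counting out-of-order adjacent pairs); the function names, the seed parameter, and the toy objective are illustrative assumptions, not the paper's implementation.

```python
import random

def exchanging(X, i, j):
    """Two-point exchange (TEN): swap the operations at dimensions i and j."""
    Y = list(X)
    Y[i], Y[j] = Y[j], Y[i]
    return Y

def vns(X0, objective, seed=0):
    """Simplified VNS of Algorithm 4: try L random two-point swaps and
    greedily keep each improving move (minimization)."""
    rng = random.Random(seed)
    X, fX = list(X0), objective(X0)
    L = len(X0)
    for _ in range(L):
        i, j = rng.sample(range(L), 2)   # random i != j
        XX = exchanging(X, i, j)
        fXX = objective(XX)
        if fXX < fX:                     # greedy acceptance
            X, fX = XX, fXX
    return X, fX

# Toy objective: number of out-of-order adjacent pairs in the sequence
obj = lambda s: sum(s[i] > s[i + 1] for i in range(len(s) - 1))
best, best_fit = vns([3, 1, 2, 3, 1, 2], obj)
```

Since swaps only permute the sequence, the multiset of job indices, and hence feasibility of the encoded schedule, is preserved, and the greedy acceptance guarantees the returned fitness is never worse than the input's.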

4. Proposed EOSMA for JSSP

The shortcomings of the original SMA are unbalanced exploration and exploitation, weak exploration ability, and a tendency to fall into local optima. Changing the simple random search strategy in SMA to the equilibrium optimizer strategy not only enhances the exploration ability but also improves the diversity of the population. The search agent of EOSMA performs a heuristic search based on Equation (16):
X(t+1) = { X_eq(t) + (X(t) − X_eq(t)) · F + G · (1 − F)/(λV),   rand < z
           X_eq,1(t) + vb · (W · X_A(t) − X_B(t)),              r < p
           X(t) + vc · X(t),                                    r ≥ p
where z = 0.6 is an empirical value; X_eq denotes a solution randomly selected from the equilibrium pool; X_eq,1 denotes the first solution in the equilibrium pool, i.e., the best solution found so far; and the remaining parameters in Equation (16) use the settings of the original algorithms.
It is worth noting that all components of the solution vector of the first equation of Equation (16) are updated synchronously, independent of the next two equations, while the components of the solution vector of the second and third equations are updated separately; i.e., the same solution vector X may be updated using the second or third equation. Experiments show that asynchronous updates possess better performance than synchronous updates. In addition, EOSMA needs to update the equilibrium pool and the fitness weights of individuals at each generation, which increases the computational effort but does not increase the time complexity of the algorithm. Updating the equilibrium pool requires O ( n ) and updating the fitness weights requires sorting the fitness and, therefore, requires O ( n log n ) . Finally, EOSMA uses greedy selection repeatedly during iterations to speed up convergence, while SMA does not use the greedy strategy.
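The branch structure of Equation (16) for a single search agent can be sketched as follows. This is a minimal illustration under stated assumptions: the helper quantities W_i, F, G, λ, and p are presumed to be computed per the SMA and EO formulas of Section 3, t_frac stands for t/max_t, and the function name and signature are hypothetical.

```python
import numpy as np

def eosma_update(x, X, eq_pool, W_i, F, G, lam, p, t_frac, z=0.6, V=1.0):
    """Branch selection of Equation (16) for one agent x; minimal sketch.
    F, G, lam follow the EO operator (Eqs. (13)-(14)); W_i and p follow
    SMA (Eqs. (4) and (6)); eq_pool[0] is the best solution found so far."""
    N, D = X.shape
    if np.random.rand() < z:                   # EO concentration update branch
        Xeq = eq_pool[np.random.randint(len(eq_pool))]
        return Xeq + (x - Xeq) * F + G * (1 - F) / (lam * V)
    if np.random.rand() < p:                   # SMA oscillation around the best
        a = np.arctanh(1 - t_frac) if t_frac < 1 else 0.0
        vb = np.random.uniform(-a, a, D)
        A, B = np.random.randint(N, size=2)
        return eq_pool[0] + vb * (W_i * X[A] - X[B])
    vc = np.random.uniform(-(1 - t_frac), 1 - t_frac, D)
    return x + vc * x                          # modified SMA contraction branch
```

The first branch replaces the random relocation of SMA with the EO update, while the third branch adds the agent's own position to the contraction term, as described above.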
To further improve the performance of the hybrid algorithm for solving JSSP, the COBC strategy and the VNS strategy are introduced. The former enhances the exploration capability of the algorithm by selecting some individuals in each generation to perform the opposing computation; the latter is often used to solve combinatorial optimization problems, which is a local search algorithm framework that enhances the exploitation capability of the algorithm. The flow chart of EOSMA for solving JSSP is shown in Figure 4 and its pseudo-code is shown in Algorithm 5.
Algorithm 5: Pseudo-code of EOSMA for JSSP
1. Initialize the parameters z, a1, a2, V, GP, max_t, N, Dim, Jr;
2. Initialize the locations of search agents X_i (i = 1, 2, …, N);
3. Calculate the fitness FitX of X;
4. Execute COBC to update the initial locations;
5. While (t ≤ max_t)
6.   Calculate the fitness FitX;
7.   If (rand < Jr)
8.     Execute COBC to update individual locations;
9.   End If
10.  Retain better solutions compared to the previous iteration;
11.  Sort the fitness FitX;
12.  Update the equilibrium pool C_eq,pool;
13.  Update bF, wF;
14.  Calculate W by Equation (6);
15.  For each search agent
16.    Update locations X by Equation (16);
17.    Execute VNS to update individual locations;
18.  End For
19.  t = t + 1;
20. End While
21. Return X_eq,1 and its fitness;

5. Experimental Results and Discussions

In this paper, the performance of the EOSMA is evaluated by testing it on 82 test datasets taken from the OR Library. They are low-dimensional FT and ORB from [51,52], higher-dimensional LA and ABZ from [53,54], and high-dimensional YN and SWV from [55,56]. All experiments were executed on Win 10 Operating System and all algorithm codes were run in MATLAB R2019a with hardware details: Intel® Core™ i7-9700 CPU (3.00 GHz) and 16 GB RAM.
For a fair comparison, the population size of all comparison algorithms was set to 25, the maximum number of iterations was set to 100, and all comparison algorithms were run 20 times independently on each dataset. In this paper, five algorithms were selected for comparison experiments with the EOSMA, namely SMA [22], EO [10], MPA [23], AO [20], and BES [21], which are the latest proposed algorithms with superior performance. For a fair comparison, all comparison algorithms incorporate the VNS strategy described in Algorithm 4. The specific parameters of the comparison algorithms are kept consistent with the original paper, as shown in Table 4. The performance of the algorithms is evaluated using the best fitness and the average fitness. The performance metrics are then ranked and the Friedman mean rank of the algorithms on different test instances is tallied; the experimental results are shown in Table 5, Table 6 and Table 7. In these tables, Instance denotes the case name; Size denotes the problem size, i.e., the number of jobs and machines; BKS denotes the best-known solution for that instance as reported by Liu et al. [36]; Best denotes the best fitness obtained by the algorithm; and Mean denotes the average fitness.
From Table 5, we can see that EOSMA obtains better results on the low-dimensional JSSP. For the FT instances, EOSMA achieves the best results on all three instances and finds the BKS on FT06. For the ORB instances, EOSMA achieves the best average performance on all 10 instances and finds better solutions than the other algorithms on 8 of them, obtaining the BKS on ORB07, while VMPA and VBES find the best solutions on ORB2 and ORB10, respectively. Thus, EOSMA performs well in solving JSSP-related problems compared to the recently proposed metaheuristic algorithms.
From Table 6, it can be seen that EOSMA can effectively solve JSSP. For the 45 instances of LA and ABZ, EOSMA obtains better performance metrics than the other algorithms on all instances except LA18, where the best result is obtained by VBES, and it finds the BKS on 24 LA instances. This shows that EOSMA overcomes the exploration shortcomings of SMA; its global search capability is stronger than that of the recently proposed algorithms, helping it avoid falling into local optima.
Table 7 presents the algorithms' results on the high-dimensional JSSP; the optimal solutions for these instances have not yet been found, so only the approximate solutions obtained by the different algorithms can be compared. The experimental results show that VAO is competitive on the high-dimensional instances, achieving better solutions than EOSMA on six SWV instances, and VMPA achieves the best solutions on YN4 and SWV10; however, their average solution performance is inferior to that of EOSMA. Therefore, EOSMA still performs better than the other comparison algorithms on high-dimensional JSSP, verifying its effectiveness, accuracy, and robustness on the JSSP.
The execution times of the algorithms are shown in Figure 5 and Figure 6. Figure 5 represents the total time consumed by the six algorithms running 20 times on the six case datasets and Figure 6 shows the average single run times of the six algorithms on the 82 datasets.
As can be seen from Figure 5 and Figure 6, the execution time of EOSMA is the shortest among the six comparison algorithms on ABZ, LA, YN, and SWV. VAO has the shortest execution time on FT and ORB, followed by EOSMA, VEO, and VMPA. Since all six algorithms introduce the neighborhood search strategy, the main time consumption comes from VNS, but the execution time of EOSMA is significantly lower than that of VSMA on all instances. In particular, the execution time of EOSMA is the shortest when solving the high-dimensional JSSP. This shows that EOSMA not only outperforms the well-known comparison algorithms in convergence accuracy and robustness but also has a shorter execution time.
To further analyze the convergence process of EOSMA and the comparison algorithms, two instances are selected from each instance set, and their convergence curves and box plots are drawn in Figure 7 and Figure 8. It can be concluded that the convergence speed of EOSMA is faster than that of VSMA and VEO, and its final convergence accuracy is also better. It is worth noting that VSMA, which lacks the hybridized EO operator, has the slowest convergence speed, mainly because SMA does not use a greedy selection strategy during the iterative process and the second equation of Equation (3) uses the best solution of the current generation instead of the best solution found so far. Although EOSMA does not converge as fast as VAO and VBES in the early stages, the latter two tend to fall into local optima later in the iteration, suggesting that EOSMA strikes a better balance between exploration and exploitation. The box plots likewise show that EOSMA finds better solutions than the other algorithms, with average performance better than the comparison algorithms and far better than VSMA.
Because metaheuristic algorithms are stochastic, statistical comparison is required to confirm the validity of the results [57]. The Wilcoxon rank-sum test [37] was therefore performed to examine whether there is a statistically significant difference between two sets of results, i.e., whether the results obtained by an algorithm are merely due to random factors. The smaller the p-value, the greater the confidence that the two data sets differ; a p-value below 0.05 indicates that the results of the two algorithms differ significantly at the 95% confidence level. Table 8 exhibits the Wilcoxon p-values for EOSMA against the other comparison algorithms.
The results of the Wilcoxon rank-sum test indicate that there are few instances without significant differences (shown in bold); NaN indicates that the two algorithms found exactly the same solutions, which usually occurs when both find the optimal solution of that instance. Moreover, EOSMA significantly outperforms VSMA, VEO, VMPA, VAO, and VBES on 76, 60, 49, 69, and 53 instances, respectively, indicating a performance advantage across different JSSP instances. In conclusion, the differences between EOSMA and the comparison algorithms on JSSP are statistically significant, meaning the improvements achieved by EOSMA hold at the 95% confidence level.
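The pairwise comparison described above can be sketched as follows. This is a minimal, self-contained Wilcoxon rank-sum test using the normal approximation (adequate for the 20-run samples used in the paper); the two sample lists are illustrative placeholders, not the paper's measured data.

```python
import math

def wilcoxon_rank_sum(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns (z statistic, p-value); ties receive average ranks."""
    data = sorted([(v, "x") for v in x] + [(v, "y") for v in y])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(data):            # assign average ranks to tied values
        j = i
        while j < len(data) and data[j][0] == data[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0     # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    n1, n2 = len(x), len(y)
    w = sum(r for r, (_, lab) in zip(ranks, data) if lab == "x")
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

# Illustrative makespans from 20 runs of two algorithms (not real data)
runs_a = [942, 951, 978, 960, 949, 970, 955, 981, 962, 948,
          973, 958, 944, 967, 979, 953, 961, 976, 950, 969]
runs_b = [1054, 1101, 1128, 1090, 1075, 1112, 1066, 1134, 1083, 1059,
          1120, 1098, 1071, 1105, 1140, 1087, 1094, 1117, 1062, 1109]
z, p = wilcoxon_rank_sum(runs_a, runs_b)
print(p, p < 0.05)  # significant at the 95% confidence level if True
```

A full-precision implementation would use an exact distribution for small samples; the normal approximation above suffices to reproduce the decision rule p < 0.05.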

6. Conclusions and Future Work

SMA is a novel swarm-based optimization algorithm inspired by the foraging behavior of slime mould; EO is a high-performing physics-based optimization algorithm inspired by the control volume mass balance equation. Although SMA has been applied in various fields owing to its novel metaheuristic rules, it still suffers from slow convergence, poor robustness, an unbalanced trade-off between exploration and exploitation, and a tendency to fall into local optima. To overcome these drawbacks, we propose a hybrid algorithm, EOSMA, which combines centroid opposition-based computation and a VNS strategy with an SOI-based encoding method for the fast and efficient solution of the job shop scheduling problem. In EOSMA, the random search strategy of SMA is first replaced by the concentration update operator of EO, and the third equation of Equation (3) is replaced by the third equation of Equation (16). Then, centroid opposition-based computation is introduced into the hybrid algorithm. With these changes, the search agents have a higher probability of finding better solutions with fewer invalid searches, which improves the exploration and exploitation capabilities of SMA. Finally, to solve JSSP more effectively, a two-point exchange neighborhood search strategy is added to EOSMA, enhancing its local search capability. The performance of EOSMA was tested on 82 JSSP instances from the OR-Library and compared with recently proposed, high-performing algorithms. The experimental results show that EOSMA exhibits better search capability than SMA, EO, MPA, AO, and BES in solving JSSP.
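Two of the components summarized above, the SOI-based encoding and the two-point exchange neighborhood, can be sketched as follows. This is a minimal illustration of a sort-order-index style rule (rank the continuous position values, then fold ranks into job indices so that each job appears exactly as many times as it has operations); the paper's exact SOI mapping may differ in detail, and the position vector below is made up for demonstration.

```python
import random

def soi_decode(position, n_jobs):
    """Sketch of an SOI-style rule: sort the continuous position vector,
    take each element's rank, and map rank % n_jobs to a job index, so
    every job occurs len(position) // n_jobs times in the sequence."""
    order = sorted(range(len(position)), key=lambda i: position[i])
    seq = [0] * len(position)
    for rank, idx in enumerate(order):
        seq[idx] = rank % n_jobs
    return seq

def two_point_exchange(seq):
    """Two-point exchange neighbor: swap the entries at two random
    positions, retrying until the swap actually changes the sequence.
    Assumes seq contains at least two distinct job indices."""
    while True:
        a, b = random.sample(range(len(seq)), 2)
        if seq[a] != seq[b]:
            neighbor = seq[:]
            neighbor[a], neighbor[b] = neighbor[b], neighbor[a]
            return neighbor

pos = [0.62, 0.13, 0.87, 0.45, 0.29, 0.71]   # 3 jobs x 2 machines
seq = soi_decode(pos, n_jobs=3)
print(seq)          # a valid operation sequence: each job appears twice
print(two_point_exchange(seq))
```

Because the decoding always yields a feasible repetition-based permutation, the continuous EO/SMA update operators can run unmodified while the neighborhood search works directly on the discrete sequence.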
JSSP is a well-known NP-hard problem: as the problem size grows, the search space expands dramatically, and the execution time of many existing algorithms increases so sharply that they cannot handle large-scale instances well. EOSMA, by contrast, can effectively solve larger-scale JSSP within a reasonable running time, mainly because it does not rely on the local search capability of VNS alone but also provides a strong global search before the local search, which better guides the VNS strategy toward the optimal solution. EOSMA is therefore a promising algorithm. Future work will consider more practical scheduling problems, such as the flow shop scheduling problem with material handling times and the permutation flow shop dynamic scheduling problem with dynamically changing raw materials.

Author Contributions

Conceptualization, methodology, Y.W. and S.Y.; software, Q.L.; writing—original draft preparation, Y.W.; writing—review and editing, Z.O., K.M.D. and Y.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant Nos. U21A20464 and 62066005.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editor and the three anonymous referees for their positive comments and useful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Garey, M.R.; Johnson, D.S.; Sethi, R. The complexity of flowshop and jobshop scheduling. Math. Oper. Res. 1976, 1, 330–348.
2. Gong, G.; Deng, Q.; Chiong, R.; Gong, X.; Huang, H. An effective memetic algorithm for multi-objective job-shop scheduling. Knowl.-Based Syst. 2019, 182, 104840.
3. Tang, C.; Zhou, Y.; Tang, Z.; Luo, Q. Teaching-learning-based pathfinder algorithm for function and engineering optimization problems. Appl. Intell. 2021, 51, 5040–5066.
4. Grefenstette, J.J. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99.
5. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
6. Kirkpatrick, S. Optimization by simulated annealing: Quantitative studies. J. Stat. Phys. 1984, 34, 975–986.
7. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
8. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513.
9. Zhao, W.; Wang, L.; Zhang, Z. A novel atom search optimization for dispersion coefficient estimation in groundwater. Future Gener. Comput. Syst. 2018, 91, 601–610.
10. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190.
11. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, MHS'95, Nagoya, Japan, 4–6 October 1995; IEEE: Nagoya, Japan, 1995; pp. 39–43.
12. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial Bee Colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
13. Cuevas, E.; Cienfuegos, M.; Zaldívar, D.; Pérez-Cisneros, M. A swarm optimization algorithm inspired in the behavior of the social-spider. Expert Syst. Appl. 2013, 40, 6374–6384.
14. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
15. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
16. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2018, 165, 169–196.
17. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
18. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
19. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
20. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
21. Alsattar, H.A.; Zaidan, A.A.; Zaidan, B.B. Novel meta-heuristic bald eagle search optimisation algorithm. Artif. Intell. Rev. 2020, 53, 2237–2264.
22. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
23. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
24. Braik, M.S. Chameleon swarm algorithm: A bio-inspired optimizer for solving engineering design problems. Expert Syst. Appl. 2021, 174, 114685.
25. Bogar, E.; Beyhan, S. Adolescent Identity Search Algorithm (AISA): A novel metaheuristic approach for solving optimization problems. Appl. Soft Comput. 2020, 95, 106503.
26. Kurdi, M. An effective new island model genetic algorithm for job shop scheduling problem. Comput. Oper. Res. 2016, 67, 132–142.
27. Song, X.Y.; Meng, Q.H.; Yang, C. Improved taboo search algorithm for job shop scheduling problems. Syst. Eng. Electron. 2008, 30, 93–96.
28. Aydin, M.E.; Fogarty, T.C. A distributed evolutionary simulated annealing algorithm for combinatorial optimisation problems. J. Heuristics 2004, 10, 269–292.
29. Zhang, J.; Wang, W.; Xu, X. A hybrid discrete particle swarm optimization for dual-resource constrained job shop scheduling with resource flexibility. J. Intell. Manuf. 2017, 28, 1961–1972.
30. Huang, R.-H.; Yu, T.-H. An effective ant colony optimization algorithm for multi-objective job-shop scheduling with equal-size lot-splitting. Appl. Soft Comput. 2017, 57, 642–656.
31. Banharnsakun, A.; Sirinaovakul, B.; Achalakul, T. Job shop scheduling with the best-so-far ABC. Eng. Appl. Artif. Intell. 2012, 25, 583–593.
32. Keesari, H.S.; Rao, R.V. Optimization of job shop scheduling problems using teaching-learning-based optimization algorithm. OPSEARCH 2014, 51, 545–561.
33. Dao, T.-K.; Pan, T.-S.; Nguyen, T.-T.; Pan, J.-S. Parallel bat algorithm for optimizing makespan in job shop scheduling problems. J. Intell. Manuf. 2018, 29, 451–462.
34. Wang, X.; Duan, H. A hybrid biogeography-based optimization algorithm for job shop scheduling problem. Comput. Ind. Eng. 2014, 73, 96–114.
35. Zhao, F.; Liu, Y.; Zhang, Y.; Ma, W.; Zhang, C. A hybrid harmony search algorithm with efficient job sequence scheme and variable neighborhood search for the permutation flow shop scheduling problems. Eng. Appl. Artif. Intell. 2017, 65, 178–199.
36. Liu, M.; Yao, X.; Li, Y. Hybrid whale optimization algorithm enhanced with Lévy flight and differential evolution for job shop scheduling problems. Appl. Soft Comput. 2020, 87, 105954.
37. Liu, C. An improved Harris hawks optimizer for job-shop scheduling problem. J. Supercomput. 2021, 77, 14090–14129.
38. Wei, Y.; Zhou, Y.; Luo, Q.; Deng, W. Optimal reactive power dispatch using an improved slime mould algorithm. Energy Rep. 2021, 7, 8742–8759.
39. Abdel-Basset, M.; Chang, V.; Mohamed, R. HSMA_WOA: A hybrid novel slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images. Appl. Soft Comput. 2020, 95, 106642.
40. Liu, Y.; Heidari, A.A.; Ye, X.; Liang, G.; Chen, H.; He, C. Boosting slime mould algorithm for parameter identification of photovoltaic models. Energy 2021, 234, 121164.
41. Yu, K.; Liu, L.; Chen, Z. An improved slime mould algorithm for demand estimation of urban water resources. Mathematics 2021, 9, 1316.
42. Hassan, M.H.; Kamel, S.; Abualigah, L.; Eid, A. Development and application of slime mould algorithm for optimal economic emission dispatch. Expert Syst. Appl. 2021, 182, 115205.
43. Zhao, S.; Wang, P.; Heidari, A.A.; Chen, H.; Turabieh, H.; Mafarja, M.; Li, C. Multilevel threshold image segmentation with diffusion association slime mould algorithm and Renyi's entropy for chronic obstructive pulmonary disease. Comput. Biol. Med. 2021, 134, 104427.
44. Yu, C.; Asghar Heidari, A.; Xue, X.; Zhang, L.; Chen, H.; Chen, W. Boosting quantum rotation gate embedded slime mould algorithm. Expert Syst. Appl. 2021, 181, 115082.
45. Rizk-Allah, R.M.; Hassanien, A.E.; Song, D. Chaos-opposition-enhanced slime mould algorithm for minimizing the cost of energy for the wind turbines on high-altitude sites. ISA Trans. 2021, 121, 191–205.
46. Houssein, E.H.; Mahdy, M.A.; Blondin, M.J.; Shebl, D.; Mohamed, W.M. Hybrid slime mould algorithm with adaptive guided differential evolution algorithm for combinatorial and global optimization problems. Expert Syst. Appl. 2021, 174, 114689.
47. Premkumar, M.; Jangir, P.; Sowmya, R.; Alhelou, H.H.; Heidari, A.A.; Chen, H. MOSMA: Multi-objective slime mould algorithm based on elitist non-dominated sorting. IEEE Access 2021, 9, 3229–3248.
48. Rahnamayan, S.; Jesuthasan, J.; Bourennani, F.; Salehinejad, H.; Naterer, G.F. Computing opposition by involving entire population. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: Beijing, China, 2014; pp. 1800–1807.
49. Hansen, P.; Mladenović, N.; Moreno Pérez, J.A. Variable neighbourhood search: Methods and applications. 4OR 2008, 6, 319–360.
50. Gao, L.; Li, X.; Wen, X.; Lu, C.; Wen, F. A hybrid algorithm based on a new neighborhood structure evaluation method for job shop scheduling problem. Comput. Ind. Eng. 2015, 88, 417–429.
51. Fisher, H.; Thompson, G.L. Probabilistic Learning Combinations of Local Job-Shop Scheduling Rules. In Industrial Scheduling; Prentice-Hall: Hoboken, NJ, USA, 1963; pp. 225–251.
52. Applegate, D.; Cook, W. A computational study of the job-shop scheduling problem. ORSA J. Comput. 1991, 3, 149–156.
53. Adams, J.; Balas, E.; Zawack, D. The shifting bottleneck procedure for job shop scheduling. Manag. Sci. 1988, 34, 391–401.
54. Lawrence, S. Resource Constrained Project Scheduling: An Experimental Investigation of Heuristic Scheduling Techniques (Supplement); Graduate School of Industrial Administration, Carnegie-Mellon University: Pittsburgh, PA, USA, 1984.
55. Yamada, T.; Nakano, R. A genetic algorithm applicable to large-scale job-shop problems. Parallel Problem Solving from Nature 1992, 2, 281–290.
56. Storer, R.H.; Wu, S.D.; Vaccari, R. New search spaces for sequencing problems with application to job shop scheduling. Manag. Sci. 1992, 38, 1495–1509.
57. Yin, S.; Luo, Q.; Du, Y.; Zhou, Y. DTSMA: Dominant swarm with adaptive t-distribution mutation-based slime mould algorithm. Math. Biosci. Eng. 2022, 19, 2240–2285.
Figure 1. SOI-based encoding mapping.
Figure 2. SOI-based decoding mapping.
Figure 3. Exchanging process in VNS method [31].
Figure 4. Flow chart of the EOSMA for JSSP.
Figure 5. Execution time of six algorithms running 20 times.
Figure 6. Average execution time of six algorithms on all instances.
Figure 7. Average convergence curves of all comparison algorithms.
Figure 8. Box plots of all algorithms executed 20 times on instances.
Table 1. List of abbreviations.
Abbreviation: Meaning
BKS: Best-known solution
COBC: Centroid opposition-based computation
EO: Equilibrium optimizer
EOSMA: Proposed algorithm
JSSP: Job shop scheduling problem
SMA: Slime mould algorithm
SOI: Sort-order-index
TEN: Two-point exchange neighborhood
VAO: Aquila optimizer with VNS
VBES: Bald eagle search with VNS
VEO: Equilibrium optimizer with VNS
VMPA: Marine predators algorithm with VNS
VNS: Variable neighborhood search
VSMA: Slime mould algorithm with VNS
Table 2. Detailed description of parameters [36].
Parameter: Meaning
n: Number of jobs
m: Number of machines
C_i: Completion time of operation i
T_i: Processing time of operation i on its assigned machine
P_i: Set of all predecessor operations of operation i
A(t): Set of operations being processed at time t
R_{j,m}: Token indicating that operation j requires processing on machine m
C_max: Maximum completion time over all operations (makespan)
Table 3. A case of JSSP [36].
Jobs | O1 (Mi, Ti) | O2 (Mi, Ti) | O3 (Mi, Ti)
1 | (1, 10) | (3, 15) | (2, 5)
2 | (2, 15) | (1, 8) | (3, 20)
3 | (3, 9) | (2, 10) | (1, 15)
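The 3 × 3 case of Table 3 can be decoded into a concrete schedule as follows. This is a minimal semi-active decoding sketch (each operation starts as soon as both its job and its machine are free); the operation sequence passed in is illustrative and is not claimed to be optimal for this instance.

```python
# jobs[j] = list of (machine, processing_time) in technological order,
# transcribed from the Table 3 case (jobs are 0-indexed here)
jobs = [
    [(1, 10), (3, 15), (2, 5)],   # job 1
    [(2, 15), (1, 8), (3, 20)],   # job 2
    [(3, 9), (2, 10), (1, 15)],   # job 3
]

def makespan(op_sequence):
    """Semi-active decoding of a repetition-based operation sequence:
    each occurrence of job index j schedules that job's next unscheduled
    operation at the earliest time both the job and the machine allow.
    Returns the maximum completion time C_max."""
    next_op = [0] * len(jobs)      # next operation index per job
    job_ready = [0] * len(jobs)    # completion time of each job's last op
    mach_ready = {}                # completion time per machine
    for j in op_sequence:
        machine, ptime = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0))
        finish = start + ptime
        job_ready[j] = finish
        mach_ready[machine] = finish
        next_op[j] += 1
    return max(job_ready)

# Each job index appears 3 times (one per operation)
print(makespan([0, 1, 2, 1, 0, 2, 0, 2, 1]))  # prints 45
```

Tracing the sequence by hand gives the same value: for example, job 2's second operation (machine 1, 8 time units) must wait until job 1 releases machine 1 at time 10, but its own first operation finishes later, at time 15, so it starts at 15.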
Table 4. Parameter settings of algorithms.
Algorithm: Parameter settings
EOSMA: z = 0.6; a1 = 2; a2 = 1; V = 1; GP = 0.5; Jr = 0.3
VSMA: z = 0.03
VEO: a1 = 2; a2 = 1; V = 1; GP = 0.5
VMPA: FADs = 0.2; P = 0.5
VAO: α = 0.1; δ = 0.1
VBES: α = 2; a = 10; R = 1.5; c1 = 2; c2 = 2
Table 5. Comparison of solution results of algorithms on FT and ORB.
Instances | Size | BKS | EOSMA (Best/Mean) | VSMA (Best/Mean) | VEO (Best/Mean) | VMPA (Best/Mean) | VAO (Best/Mean) | VBES (Best/Mean)
FT06 | 6 × 6 | 55 | 55/55 | 55/58.05 | 55/55 | 55/55 | 55/56.65 | 55/55
FT10 | 10 × 10 | 930 | 942/978.95 | 954/1128.5 | 976/998.25 | 971/997.9 | 981/1033 | 983/1009.55
FT20 | 20 × 5 | 1165 | 1180/1212.45 | 1199/1335.15 | 1199/1234.45 | 1198/1240.2 | 1207/1245.4 | 1213/1254.45
Friedman mean rank | | | 1.83/1.50 | 3.00/6.00 | 3.67/2.50 | 2.83/2.50 | 4.50/4.67 | 5.17/3.83
ORB01 | 10 × 10 | 1059 | 1086/1137.2 | 1090/1216.9 | 1104/1145.45 | 1122/1162 | 1104/1193.7 | 1141/1161.9
ORB02 | 10 × 10 | 888 | 899/926.5 | 902/986.85 | 921/936.65 | 894/945.3 | 931/965.6 | 920/940.35
ORB03 | 10 × 10 | 1005 | 1027/1089.2 | 1053/1223.7 | 1064/1109 | 1076/1108.25 | 1080/1166.8 | 1095/1126.8
ORB04 | 10 × 10 | 1005 | 1011/1043.4 | 1042/1195.25 | 1032/1059.7 | 1036/1064.65 | 1040/1084.65 | 1046/1063.55
ORB05 | 10 × 10 | 887 | 899/932.25 | 918/1019.35 | 909/952 | 899/951.95 | 920/996.15 | 910/961.1
ORB06 | 10 × 10 | 1010 | 1031/1060.45 | 1035/1154.45 | 1034/1082.75 | 1034/1090.9 | 1070/1156.7 | 1046/1098
ORB07 | 10 × 10 | 397 | 397/412.45 | 407/471.8 | 406/419.5 | 408/422.7 | 408/435.2 | 412/426.1
ORB08 | 10 × 10 | 899 | 916/966.15 | 942/1118.2 | 931/976.6 | 924/983.55 | 970/1037.35 | 934/986.9
ORB09 | 10 × 10 | 934 | 950/977.4 | 970/1097.15 | 969/984.5 | 962/993.75 | 957/1019.45 | 963/981.75
ORB10 | 10 × 10 | 944 | 957/983.3 | 983/1081.2 | 967/998.1 | 973/1003.5 | 1009/1047.8 | 955/1004.4
Friedman mean rank | | | 1.25/1.00 | 4.00/5.90 | 3.20/2.30 | 3.05/3.20 | 4.90/5.10 | 4.60/3.50
The optimal values are shown in bold.
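The Friedman mean ranks reported in Table 5 can be reproduced with a short computation: rank the algorithms on each instance (1 = best, ties receive average ranks) and average the ranks over all instances. The sketch below uses illustrative scores, not the paper's full result matrix.

```python
def mean_ranks(results):
    """results[i] = one score per algorithm on instance i (lower is
    better). Returns the Friedman mean rank of each algorithm."""
    n_alg = len(results[0])
    totals = [0.0] * n_alg
    for scores in results:
        order = sorted(range(n_alg), key=lambda a: scores[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:            # average ranks over tied scores
            j = i
            while j < n_alg and scores[order[j]] == scores[order[i]]:
                j += 1
            avg = (i + 1 + j) / 2.0
            for k in range(i, j):
                ranks[order[k]] = avg
            i = j
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(results) for t in totals]

# Three algorithms on three instances (illustrative makespans)
print(mean_ranks([[55, 58, 55], [930, 954, 976], [1165, 1199, 1199]]))
```

The lowest mean rank identifies the algorithm that is best on average across instances, which is how the per-set rankings in Tables 5–7 are read.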
Table 6. Comparison of solution results of algorithms on ABZ and LA.
Table 6. Comparison of solution results of algorithms on ABZ and LA.
InstancesSizeBKSEOSMAVSMAVEOVMPAVAOVBES
BestMeanBestMeanBestMeanBestMeanBestMeanBestMean
ABZ510 × 10123412421259.5512451340.512481271.512491266.4512601297.512511272.15
ABZ610 × 10943947964.759481007.8948975.05958974.6951994.9951971.65
ABZ720 × 15656717743.8724812.8744773.2729751.95736770.8740755.05
ABZ820 × 15665723760.65742858.3772800.3752774759792.6768780.3
ABZ920 × 15678742777.25770877.3776810.9763787.4773809.7789802.75
Friedman mean rank1.001.002.306.004.504.403.602.204.504.404.503.00
LA0110 × 5666666666666692666666666666666670.7666666
LA0210 × 5655655665.3655718.05655669.2655674.55655690.35655666.7
LA0310 × 5597597612.55611667.5606619.5605624.35617640.75604617.75
LA0410 × 5590590598.15590638.75590601.8590604.25599618.45590601.8
LA0510 × 5593593593593596.8593593593593593593593593
LA0615 × 5926926926926948.35926926926926926926926926
LA0715 × 5890890890890958.4890890890891.65890894.35890890
LA0815 × 5863863863863894.9863863863863863866.45863863
LA0915 × 5951951951951979.65951951951951951951951951
LA1015 × 5958958958958973.2958958958958958958958958
LA1120 × 512221222122212221263.212221222122212221222122212221222
LA1220 × 510391039103910391081.1510391039103910391039103910391039
LA1320 × 511501150115011501186.9511501150115011501150115011501150
LA1420 × 512921292129212921302.1512921292129212921292129212921292
LA1520 × 512071207120712071248.812071212.3512071208.312071225.0512071209.1
LA1610 × 10945946975.659591090.25946982.2959990.759881014.8978987.95
LA1710 × 10784784792.85784860.05787797.35784801789821.45792801.35
LA1810 × 10848852866.8853937.25854869852883.8861917.1849875.25
LA1910 × 10842842877.1852990.6852883.6866883.5875926.1869887.15
LA2010 × 10902907916.059071005.45907928.65907929.05924961.85914934.2
LA2115 × 10104610811119.9511171213.2511111142.710891128.9511051166.211041130.1
LA2215 × 10927951980.659681071.759641007.79511011.159971048.89671008.4
LA2315 × 1010321032104710321202.5510421064.9510321064.610371085.310321060.65
LA2415 × 109359701008.959841107.59901029.59871019.459931042.259941018.75
LA2515 × 109779981046.510231114.8510251070.91015106010571111.710401066.35
LA2620 × 10121812251276.6512511420.212901326.5512561291.612731334.2512491295.6
LA2720 × 10123512881346.413131513.951347138013051358.0513101383.9513111361.3
LA2820 × 10121612561304.2512791397.5513111351.3512841326.1512851357.7512851330.7
LA2920 × 10115212451288.312471411.712781338.8512561310.5512921345.312871315.4
LA3020 × 10135513551407.5513871542.4513991453.2514071442.514071473.313961438.1
LA3130 × 10178417841785.417841947.617841818.0517841794.217841800.417841786.1
LA3230 × 10185018501853.4518501930.518631910.7518501860.3518501865.7518501853.75
LA3330 × 1017191719172217191879.617321774.917191725.6517191748.817191722.65
LA3430 × 10172117211758.117501881.617901832.117221779.517471801.317431789.35
LA3530 × 10188818881895.8518882043.6518941935.918881909.6518881935.618951932.75
LA3615 × 15126813111348.9513361469.3513591386.3513341375.0513621429.713501399.35
LA3715 × 15139714641524.6514841717.715091559.4514901535.5515241602.115191565.15
LA3815 × 15119612801329.812811469.413091364.2513001341.413341398.813351365.95
LA3915 × 1512331276133713071479.813031366.313271376.413281424.2513451377.5
LA4015 × 15122212691331.1512771510.413001351.9131213541327138113161352.6
Friedman mean rank2.231.503.246.004.013.263.112.854.514.483.902.91
The optimal values are shown in bold.
Table 7. Comparison of solution results of algorithms on YN and SWV.
Table 7. Comparison of solution results of algorithms on YN and SWV.
InstancesSizeEOSMAVSMAVEOVMPAVAOVBES
BestMeanBestMeanBestMeanBestMeanBestMeanBestMean
YN120 × 209851013.39931121.8510341064.959901035.29911047.410021038.25
YN220 × 2010041052.2510241148.210611095.510131055.110241070.6510331067.6
YN320 × 209821034.1510171245.410451086.610181050.059951062.310311061.05
YN420 × 2010841124.310941268.511381179.9510701142.111361180.511271156.85
Friedman mean rank1.251.003.386.006.004.752.252.003.384.254.753.00
SWV0120 × 1015901684.5516391747.8516931754.8516461728.816991791.917201765.75
SWV0220 × 1016531723.3516521908.6517301777.717091756.117161806.517451785.85
SWV0320 × 1015881650.716271790.3516401727.2516211688.4516541754.8516741712.9
SWV0420 × 1016551727.917131884.517351789.4517051760.216611785.717331777.25
SWV0520 × 1016201706.216491941.216841743.616811722.1516661747.116781718.05
SWV0620 × 1519642036.9520092286.9520452147.8520012069.0519572098.3520442080.25
SWV0720 × 1518321935.9519072148.919282022.119061960.651890198519121962.15
SWV0820 × 1520242152.0521202473.121822252.920762173.721292211.0521222176.75
SWV0920 × 1518931981.41942228819922101.919732034.919752066.720072047.45
SWV1020 × 152007208520712246.321322192.2520012123.0520332149.1520732118.65
SWV1150 × 1037503877.3537223951.238434063.837863913.2535743739.139914042.8
SWV1250 × 1037553924.6538454190.939064052.7538513989.9537023784.9539554046.8
SWV1350 × 1038183933.337713991.2540124128.6539534041.3536224041.540514137
SWV1450 × 1036903759.436763959.838153955.2537743863.635143614.438253941.75
SWV1550 × 1036623829.5537374008.2538483991.7537833879.6535593669.4539144020.75
SWV1650 × 102924292429243112.9529242924.3292429242924292429242924
SWV1750 × 102794279427942932.527942810.252794279427942799.1527942796.1
SWV1850 × 102852285228523028.928522852.752852285228522852.928522852
SWV1950 × 1028432847.328433014.528712955.228432853.7528432878.5528432864.25
SWV2050 × 102823282328233005.252823282728232823.228232828.428232823
Friedman mean rank2.151.382.905.504.954.653.252.402.803.684.953.40
The optimal values are shown in bold.
Table 8. The p-value generated by Wilcoxon rank-sum test (two-tailed).
Table 8. The p-value generated by Wilcoxon rank-sum test (two-tailed).
InstancesVSMAVEOVMPAVAOVBESInstancesVSMAVEOVMPAVAOVBES
FT063.10 × 10−6NaNNaN1.62 × 10−4NaNLA241.06 × 10−33.53 × 10−41.80 × 10−15.06 × 10−43.71 × 10−2
FT105.24 × 10−51.61 × 10−34.67 × 10−31.28 × 10−61.31 × 10−5LA256.00 × 10−24.68 × 10−31.67 × 10−14.48 × 10−72.29 × 10−3
FT201.24 × 10−51.41 × 10−33.54 × 10−42.02 × 10−51.77 × 10−6LA266.84 × 10−41.32 × 10−51.72 × 10−12.03 × 10−53.59 × 10−2
ORB011.02 × 10−24.49 × 10−11.78 × 10−36.23 × 10−68.31 × 10−4LA272.34 × 10−31.78 × 10−42.79 × 10−11.28 × 10−32.94 × 10−2
ORB022.21 × 10−23.81 × 10−24.47 × 10−38.96 × 10−73.92 × 10−3LA281.06 × 10−33.04 × 10−67.05 × 10−39.18 × 10−44.13 × 10−4
ORB039.62 × 10−46.98 × 10−22.47 × 10−23.26 × 10−62.87 × 10−4LA292.15 × 10−28.54 × 10−63.37 × 10−27.07 × 10−69.62 × 10−4
ORB045.41 × 10−64.84 × 10−34.54 × 10−47.97 × 10−61.20 × 10−4LA306.21 × 10−48.25 × 10−54.83 × 10−43.72 × 10−69.18 × 10−4
ORB051.43 × 10−43.25 × 10−22.56 × 10−25.86 × 10−62.21 × 10−4LA313.76 × 10−51.19 × 10−71.07 × 10−22.03 × 10−43.50 × 10−1
ORB061.52 × 10−45.54 × 10−32.20 × 10−41.91 × 10−73.28 × 10−5LA329.93 × 10−47.53 × 10−81.76 × 10−22.43 × 10−26.96 × 10−1
ORB076.93 × 10−51.70 × 10−21.40 × 10−32.72 × 10−58.46 × 10−6LA337.85 × 10−63.88 × 10−84.03 × 10−12.87 × 10−41.62 × 10−1
ORB082.92 × 10−51.85 × 10−14.10 × 10−21.79 × 10−66.81 × 10−3LA341.28 × 10−41.52 × 10−71.43 × 10−25.21 × 10−59.72 × 10−5
ORB093.27 × 10−55.46 × 10−21.65 × 10−26.20 × 10−53.16 × 10−1LA352.08 × 10−25.62 × 10−77.78 × 10−21.69 × 10−47.96 × 10−7
ORB103.48 × 10−54.23 × 10−21.73 × 10−23.65 × 10−77.68 × 10−3LA364.14 × 10−44.82 × 10−47.68 × 10−31.80 × 10−69.65 × 10−6
ABZ51.01 × 10−39.66 × 10−31.55 × 10−14.09 × 10−57.86 × 10−4LA375.62 × 10−42.44 × 10−33.30 × 10−11.47 × 10−61.36 × 10−4
ABZ62.80 × 10−22.24 × 10−22.10 × 10−23.87 × 10−57.52 × 10−2LA381.73 × 10−21.15 × 10−41.99 × 10−12.35 × 10−67.50 × 10−6
ABZ73.63 × 10−31.69 × 10−51.01 × 10−11.51 × 10−41.48 × 10−2LA391.14 × 10−21.28 × 10−22.60 × 10−49.09 × 10−71.36 × 10−4
ABZ83.20 × 10−43.15 × 10−71.54 × 10−28.70 × 10−44.62 × 10−5LA407.57 × 10−44.24 × 10−24.38 × 10−28.75 × 10−51.85 × 10−2
ABZ91.25 × 10−58.47 × 10−61.48 × 10−17.49 × 10−65.44 × 10−6YN11.35 × 10−31.63 × 10−75.78 × 10−32.21 × 10−43.55 × 10−4
LA014.01 × 10−4NaNNaN9.54 × 10−3NaNYN21.60 × 10−21.91 × 10−67.87 × 10−17.85 × 10−21.10 × 10−2
LA027.00 × 10−31.77 × 10−12.13 × 1021.23 × 10−46.81 × 10−1YN32.58 × 10−54.81 × 10−65.77 × 10−33.95 × 10−32.21 × 10−4
LA035.04 × 10−62.84 × 10−23.09 × 10−41.26 × 10−61.40 × 10−1YN41.14 × 10−21.92 × 10−63.71 × 10−24.20 × 10−62.42 × 10−5
LA044.03 × 10−64.76 × 10−22.45 × 10−27.67 × 10−78.19 × 10−2SWV012.73 × 10−11.04 × 10−48.68 × 10−35.81 × 10−62.84 × 10−6
LA054.53 × 10−3NaNNaNNaNNaNSWV027.87 × 10−23.74 × 10−43.37 × 10−24.83 × 10−42.15 × 10−5
LA069.30 × 10−4NaNNaNNaNNaNSWV031.18 × 10−22.44 × 10−52.00 × 10−21.67 × 10−65.99 × 10−7
LA071.10 × 10−6NaN8.06 × 10−29.58 × 10−3NaNSWV045.56 × 10−33.92 × 10−53.85 × 10−22.80 × 10−38.71 × 10−5
LA081.67 × 10−4NaNNaN3.42 × 10−1NaNSWV053.96 × 10−31.67 × 10−23.72 × 10−11.61 × 10−23.58 × 10−1
LA096.68 × 10−5NaNNaNNaNNaNSWV068.33 × 10−42.94 × 10−78.01 × 10−31.03 × 10−41.29 × 10−4
LA101.67 × 10−4NaNNaNNaNNaNSWV075.34 × 10−43.05 × 10−62.08 × 10−13.48 × 10−29.07 × 10−2
LA111.67 × 10−4NaNNaNNaNNaNSWV081.12 × 10−32.30 × 10−52.79 × 10−11.23 × 10−21.33 × 10−1
LA126.68 × 10−5NaNNaNNaNNaNSWV091.16 × 10−46.65 × 10−61.95 × 10−34.66 × 10−51.89 × 10−4
LA132.57 × 10−5NaNNaNNaNNaNSWV107.95 × 10−42.20 × 10−63.96 × 10−32.60 × 10−43.19 × 10−3
LA144.53 × 10−3NaNNaNNaNNaNSWV114.82 × 10−13.98 × 10−63.98 × 10−26.86 × 10−46.77 × 10−8
LA159.58 × 10−39.58 × 10−31.63 × 10−16.67 × 10−58.06 × 10−2SWV121.86 × 10−21.17 × 10−51.86 × 10−21.91 × 10−51.33 × 10−5
LA162.16 × 10−62.24 × 10−12.19 × 10−32.40 × 10−72.60 × 10−2SWV131.26 × 10−12.56 × 10−73.48 × 10−51.89 × 10−41.65 × 10−7
LA171.07 × 10−51.57 × 10−27.11 × 10−35.16 × 10−67.84 × 10−4SWV143.60 × 10−22.56 × 10−71.25 × 10−53.93 × 10−52.22 × 10−7
LA186.13 × 10−36.31 × 10−19.84 × 10−31.32 × 10−66.56 × 10−2SWV156.79 × 10−28.06 × 10−62.15 × 10−21.17 × 10−51.57 × 10−6
LA191.89 × 10−51.43 × 10−19.66 × 10−28.85 × 10−71.31 × 10−2SWV161.67 × 10−43.42 × 10−1NaNNaNNaN
LA201.23 × 10−33.58 × 10−37.82 × 10−31.35 × 10−78.98 × 10−5SWV174.53 × 10−31.67 × 10−4NaN8.06 × 10−28.06 × 10−2
LA211.10 × 10−55.30 × 10−41.04 × 10−11.36 × 10−41.13 × 10−1SWV186.68 × 10−58.06 × 10−2NaN1.63 × 10−1NaN
LA221.47 × 10−33.36 × 10−44.58 × 10−41.64 × 10−71.98 × 10−4SWV191.62 × 10−53.46 × 10−84.38 × 10−21.75 × 10−33.65 × 10−4
LA237.89 × 10−45.90 × 10−41.99 × 10−29.08 × 10−69.34 × 10−3SWV201.67 × 10−42.09 × 10−33.42 × 10−18.06 × 10−2NaN
No significant differences are shown in bold.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Wei, Y.; Othman, Z.; Daud, K.M.; Yin, S.; Luo, Q.; Zhou, Y. Equilibrium Optimizer and Slime Mould Algorithm with Variable Neighborhood Search for Job Shop Scheduling Problem. Mathematics 2022, 10, 4063. https://doi.org/10.3390/math10214063