Article

A Hybrid Local Search Algorithm for the Sequence Dependent Setup Times Flowshop Scheduling Problem with Makespan Criterion

1 School of Information Science and Technology, Northeast Normal University, Changchun 130117, China
2 Department of Computer Science, College of Humanities & Sciences of Northeast Normal University, Changchun 130117, China
* Authors to whom correspondence should be addressed.
Sustainability 2017, 9(12), 2318; https://doi.org/10.3390/su9122318
Submission received: 25 October 2017 / Revised: 7 December 2017 / Accepted: 9 December 2017 / Published: 14 December 2017

Abstract:
This paper focuses on the flowshop scheduling problem with sequence dependent setup times (FSSP-SDST), which has been studied for decades as one of the most popular scheduling problems in manufacturing systems. A novel hybrid local search algorithm called HLS is presented to solve FSSP-SDST with the criterion of minimizing the makespan. First, the population is initialized by the Nawaz-Enscore-Ham based problem-specific method (NEHBPS) to generate high quality individuals. Then, a global search embedded with a light perturbation is applied to produce a new population. After that, a single insertion-based local search is applied to improve the quality of the individuals in the current population, and a further local search strategy based on the insertion-based local search is used to find better solutions for the individuals that were not improved. Finally, a heavy perturbation is used to explore potential solutions in the neighboring region. To validate the performance of HLS, we compare the proposed algorithm with other competitive algorithms on the Taillard benchmark problems. The experimental results show that the proposed algorithm outperforms the benchmark algorithms.

1. Introduction

Permutation flowshop scheduling has been one of the classical combinatorial optimization problems since the seminal work of Johnson [1], and it is widely applied in real life, for example in computing, industrial engineering, and mathematics. In particular, the permutation flowshop scheduling problem with sequence dependent setup times has attracted the attention of many researchers. For this problem, sequence dependent means that the setup time of a job depends on both the preceding job and the job to be processed, unlike the sequence independent case, in which it depends only on the job to be processed [2,3]. Meanwhile, the setup time is regarded as negligible in many cases, so it is often ignored or treated as part of the processing time [4,5,6,7].
Recently, many heuristic methods have been introduced to solve this problem, since exact methods [8] may incur too much computational cost. Nevertheless, exact methods are widely applied within hybrid algorithms for many combinatorial optimization problems. For example, Nishi and Hiranaka employed a Lagrangian relaxation and cut generation technique to minimize the total weighted tardiness of sequence-dependent setup time flowshop scheduling problems effectively [9], and in the book on the use of metaheuristics in optimization [10], the chapters [11,12] surveyed integer linear programming techniques and metaheuristics derived from branch and bound for combinatorial optimization, respectively. Furthermore, Andreagiovanni and Nardin [13] proposed an improved ant colony algorithm for designing sensor networks in 2015, in which linear programming relaxations were employed to make better variable fixing decisions. Andreagiovanni et al. also proposed a very effective and fast algorithm [14] that integrates an ant colony-like algorithm with an approximation algorithm and linear relaxations for solving multiperiod network design. Besides, Gambardella et al. [15] showed how ant colony algorithms combined with solution improvement phases following some simple but effective rules can lead to good improvements in solution quality. The advantage of heuristic algorithms is that good solutions can be obtained quickly; the disadvantage is that solution quality cannot be guaranteed in general. In the literature, three types of heuristics deal with FSSP-SDST: constructive heuristics, meta-heuristics, and hybrid heuristics. A constructive heuristic comprises a stochastic construction and a greedy construction, and builds feasible solutions by adding jobs one by one to a partial permutation. The NEH algorithm was proposed by Nawaz et al.
[16] and proved effective in finding good solutions for FSSP-SDST. Based on it, Rios-Mercado and Bard [17] proposed an extended constructive deterministic heuristic (NEHT_RMB) that is highly effective for FSSP-SDST with the makespan objective. A greedy randomized adaptive search procedure (GRASP) was also presented to improve the fitness of solutions. Ruiz et al. [18] used two algorithms, namely a novel genetic algorithm (GA) and a memetic algorithm (MA), based on the classical genetic algorithm, and both performed better than the method of Rios-Mercado and Bard [19]. Rajendran and Ziegler [20] proposed an ant colony optimization algorithm (PACO), using the NEHT_RMB method, to enhance solutions for FSSP-SDST with the objectives of makespan and total flowtime of jobs, respectively. Another ant colony optimization algorithm (ACO) was developed by Gajpal et al. [21] for FSSP-SDST with the makespan criterion. Tseng et al. [22] presented an inventive heuristic algorithm and compared it with an existing index heuristic algorithm. The migrating birds optimization meta-heuristic was proposed by Benkalai et al. [23], and its good performance was demonstrated experimentally. The last type of heuristic is the hybrid algorithm, which covers many algorithms blended with local search methods. Rios-Mercado and Bard [19] developed a hybrid heuristic, a modification of four heuristics, and compared it with three benchmarks presented by Simons [24]. Besides, a new heuristic method called the iterative greedy algorithm (IG) was proposed by Jacobs and Brusco [25]; it has been applied widely in other heuristics and showed better performance for FSSP-SDST in the algorithm developed by Ruiz et al. [26]. Based on the basic IG (IG_RS), an effective local search was added to the destruction and construction phases for FSSP-SDST.
In the same way, that effective local search was also added to the original MA, yielding MA_LS, to solve the same problem. The experiments show that these two hybrid algorithms, IG_RS_LS and MA_LS, outperformed the others, including earlier heuristics and the method of Rajendran and Ziegler [27], in obtaining better solutions. In addition, an effective iterated local search algorithm (ILS) developed by Wang et al. showed high efficiency for the makespan objective of FSSP-SDST [28]. However, most of the above algorithms still cannot find high quality solutions within a reasonable computational time. In this paper, we propose a hybrid local search algorithm for the sequence dependent setup times flowshop scheduling problem with the makespan criterion. One difference of the proposed algorithm lies in the initialization: in most of the above algorithms, the initial individuals are generated randomly to keep the diversity of the population, but most of them are not of high quality, which leads to low quality solutions in subsequent generations. In HLS, a NEH based problem-specific method is applied to guarantee both the diversity and the quality of the initial solutions. Besides, some of the above methods focus only on global search capability, whereas others, such as ILS and ACO, pay close attention only to local search ability. Hence, in HLS we combine global search with local search to enhance solution quality. Furthermore, without effective perturbation methods, some algorithms, such as IG_RS and IG_RS_LS, easily become trapped in a local optimum after a number of local search iterations without further improving solution quality. Thus, in HLS an efficient perturbation method is carried out to guide the search towards another area of the solution space and achieve better exploration.
Based on the above analyses, the hybrid local search algorithm can be summarized as follows. First of all, an overall framework of HLS based on local search and perturbation operators is presented to solve FSSP-SDST. During the search, the NEH based problem-specific method is used to initialize the population. Then, to improve the performance of the current population, a global search with a perturbation is applied. Next, we use an insertion-based local search to enhance the solutions for FSSP-SDST. Moreover, a further local search method is used to exploit neighbors of good quality if an individual cannot be improved by the single local search method. At last, an update method is applied to update the current population. To show the performance of the proposed algorithm, it is compared with six state-of-the-art algorithms, namely GA, MA, MA_LS, IG_RS, IG_RS_LS and PACO. For the experiments, we use the Taillard benchmark problems for FSSP-SDST at four levels, namely SDST10, SDST50, SDST100 and SDST125, with 12 problem sizes to analyze the performance of all seven algorithms. The statistical results show the high efficiency and competitive performance of HLS.
The rest of this paper is organized as follows. Section 2 describes the flowshop scheduling problem with sequence dependent setup times. The details of the proposed algorithm are given in Section 3. In Section 4, the computational experiments are conducted. Finally, we draw conclusions and outline future work in Section 5.

2. The Flowshop Scheduling Problem with Sequence Dependent Setup Times

For the flowshop scheduling problem with sequence dependent setup times, a set of n jobs J = {1, 2, 3, …, n} has to be processed on a set M = {1, 2, 3, …, m} of m machines sequentially: first on machine 1, then on machine 2, and so on until machine m. The objective is to find a processing sequence that minimizes the makespan (C_max). The processing order of the jobs is identical on all machines. For a job permutation π = (π_1, π_2, …, π_n), the setup time depends on the job currently processed on a given machine and the next job in the permutation. Each job can be processed on at most one machine at a time, and each machine can process at most one job at a time. All n × m operations are non-preemptive, and no operation has priority on any machine. The flowshop scheduling problem with sequence dependent setup times can be defined as follows:
C(π_1, 1) = p_{π_1, 1}
C(π_i, 1) = C(π_{i-1}, 1) + p_{π_i, 1} + S_{1, π_{i-1}, π_i},   i = 2, …, n
C(π_1, j) = C(π_1, j-1) + p_{π_1, j},   j = 2, …, m
C(π_i, j) = max(C(π_{i-1}, j) + S_{j, π_{i-1}, π_i}, C(π_i, j-1)) + p_{π_i, j},   i = 2, …, n; j = 2, …, m
C_max(π) = C(π_n, m)
where S_{j, π_i, π_k} denotes the setup time incurred on machine j when job π_k is processed immediately after job π_i, p_{π_i, j} denotes the processing time of job π_i on machine j, and C(π_i, j) is the completion time of job π_i on machine j. In this paper, we assume that S_{j, 0, π_i} = 0, C(π_i, 0) = 0 and C(π_0, j) = 0 (j ∈ M; i, k ∈ J).
Next, to illustrate the problem simply, an instance with three jobs and three machines is introduced. The setup times and processing times are presented in Table 1 and Table 2, respectively. A scheduling chart is outlined in Figure 1, from which we can calculate that the makespan is 20.
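The completion-time recurrence above can be sketched in Python as follows. The instance data used in the usage example are hypothetical (Tables 1 and 2 are not reproduced here), so the numbers serve only to illustrate the computation:

```python
def makespan(perm, p, setup):
    """C_max of permutation `perm` under the FSSP-SDST recurrence.

    p[job][machine]  -- processing times
    setup[machine]   -- dict mapping (prev_job, job) -> setup time;
                        the first job on each machine needs no setup,
                        matching the assumption S_{j,0,pi_1} = 0.
    """
    n, m = len(perm), len(p[0])
    C = [[0] * m for _ in range(n)]
    for i, job in enumerate(perm):
        for j in range(m):
            s = setup[j].get((perm[i - 1], job), 0) if i > 0 else 0
            from_prev_job = C[i - 1][j] + s if i > 0 else 0      # same machine
            from_prev_machine = C[i][j - 1] if j > 0 else 0      # same job
            C[i][j] = max(from_prev_job, from_prev_machine) + p[job][j]
    return C[-1][-1]
```

For instance, with toy data `p = [[3, 2], [2, 4], [4, 1]]` and setups `{0: {(0, 1): 1, (1, 2): 2}, 1: {(0, 1): 1, (1, 2): 1}}`, `makespan([0, 1, 2], p, setup)` evaluates the recurrence machine by machine exactly as in the equations.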

3. A Hybrid Local Search Algorithm

3.1. Overall Framework

In this section, the hybrid local search framework for solving FSSP-SDST with the objective of makespan is proposed by integrating some problem-specific methods. Each operator plays an important role in improving the efficiency of HLS.
At the beginning of the proposed algorithm, a NEH based problem-specific heuristic is used to initialize the population with N individuals; in other words, N permutation sequences for FSSP-SDST are formed. Then an improved population with good neighboring individuals is produced by the global search method. In addition, an effective single insertion-based local search is applied to each individual to exploit good individuals around it. If a better individual is found, the previous one is replaced. Otherwise, the multiple insertion-based local search is executed as a further, deeper search for the individuals that are trapped in local optima. The resulting solution is then stored in PotentialInd to preserve the good individual intact. To explore the search space more broadly, the heavy perturbation is applied to partly guide the direction of the search. At last, an effective update method is applied to the current population. The detailed scheme of HLS is given in Algorithm 1.
Algorithm 1 HLS.
Input: The population size N; the number of jobs n; the maximum number of further searches FSNum_max.
Output: The best individual s_best.
1: Initialize the current population P_c = {s_1, s_2, …, s_N} and PotentialInd;
2: Initialize s_best = s_1;
3: while the stopping criterion is not met do
4:   GlobalSearch;
5:   for i = 1 to N do
6:     position set R = {1, 2, …, n};
7:     isReplacing = false;
8:     select r ∈ R;
9:     R = R \ {r};
10:    s'_i = Insertion-based Local Search(r, s_i);
11:    if C_max(s'_i) < C_max(s_i) then
12:      s_i = s'_i; isReplacing = true;
13:    else
14:      iterNum = 0;
15:      while iterNum ≤ FSNum_max do
16:        select r ∈ R;
17:        R = R \ {r};
18:        s'_i = Insertion-based Local Search(r, s_i);
19:        if C_max(s'_i) < C_max(s_i) then
20:          s_i = s'_i; isReplacing = true; break;
21:        else
22:          iterNum = iterNum + 1;
23:        end if
24:      end while
25:    end if
26:    if isReplacing = true then
27:      Update the PotentialInd;
28:    else
29:      s_i = HeavyPerturbation(s_i);
30:    end if
31:    P_c = UpdateP_c(s_i);
32:    Update s_best using s_i;
33:  end for
34:  Update s_best using PotentialInd;
35: end while
36: return s_best

3.2. Initialization

The initialization stage is divided into three parts: the population initialization, the PotentialInd initialization, and the initialization of s_best. First, the population is generated randomly. Then, to produce some high quality solutions, the NEH based problem-specific method is applied to this population of N random individuals, so that HLS starts the search process with some high quality solutions. In detail, the NEH method proceeds as follows:
(1)
Select a solution s from P_c at random. Extract its first two jobs and evaluate the two possible partial schedules of these jobs; the better one by makespan is chosen as the current sequence.
(2)
For each unscheduled job j in s, put it into all the possible positions of the current scheduled sequence to generate all the possible partial sequences. The best one is selected as the current sequence for scheduling the next job.
(3)
A new individual s_new is formed after all jobs are scheduled; if s_new is better than s, it replaces s.
In NEHBPS, a random individual is chosen to undergo NEH in each of N iterations. Since not all individuals are processed by NEH, the diversity of the initial population is preserved. In summary, a population P_c with both quality and diversity is generated.
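The NEH insertion loop of steps (1)-(3) can be sketched as follows. The `evaluate` function is assumed to return the makespan (or another cost) of a partial permutation, and the random selection of individuals performed by NEHBPS is omitted:

```python
def neh(jobs, evaluate):
    """NEH: insert each job, in the given order, at its best position.

    Starting from the first job, inserting the second job into both
    positions covers step (1); the loop over the remaining jobs covers
    step (2). `evaluate` maps a partial permutation to its cost.
    """
    seq = [jobs[0]]
    for job in jobs[1:]:
        # try the job in every possible position of the current sequence
        candidates = [seq[:k] + [job] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=evaluate)
    return seq
```

With a toy single-machine total-flowtime `evaluate`, the heuristic recovers the shortest-processing-time order, which illustrates the greedy insertion mechanism.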
Secondly, HLS initializes PotentialInd by assigning it a determinate sequence to store some problem-specific heuristic information. This ensures that a good individual is stored intact, ready for finding the best individual during the search process.
Lastly, we initialize s_best with the first individual in the population. It is updated over the iterations of the search process in HLS, and at the end of HLS, s_best is output with the minimum makespan for FSSP-SDST.

3.3. Global Search Method

The goal of this method is to obtain an improved population by perturbing good individuals and then updating them. The top 25% of individuals are chosen as a new elite population to undergo the light perturbation; this setting aims to explore a large search space using appropriate solutions. Furthermore, we update the current population using the individuals that have undergone the light perturbation, to enhance the overall fitness of the population. The framework of GlobalSearch is shown in Algorithm 2.
Algorithm 2 GlobalSearch.
Input: The number of jobs n; the current population P_c; the maximal number of GlobalSearch iterations GSNum_max; the mutation probability MP.
Output: The improved population P_b.
1: i = 0, j = 0;
2: Sort the individuals in P_c in ascending order of makespan;
3: while i ≤ GSNum_max do
4:   for each individual s in the first 25% of P_c do
5:     while j < MP × n do
6:       s' = LightPerturbation(s);
7:       P_c = UpdateInds(s');
8:       if C_max(s') < C_max(s) then
9:         s = s';
10:      end if
11:      j = j + 1;
12:    end while
13:  end for
14:  Sort the individuals in P_c in ascending order of makespan;
15:  i = i + 1;
16: end while
17: P_b = P_c;
18: return P_b
This method allows a perturbed solution to maintain some characteristics of the previous solution while acquiring new characteristics, improving the diversity of the population. It also helps HLS find other optimal solutions in the neighborhoods. The light perturbation strength is sufficient to lead the search trajectory to another neighboring region, which can result in a different solution. In conclusion, GlobalSearch can both enhance the diversity of the population and improve the convergence rate of HLS.

3.4. Update Method

In this section, two algorithms, namely Algorithms 3 and 4, are presented. In Algorithm 3, we use a specified individual to update a randomly selected individual in the current population. The population update method is presented in Algorithm 4; after it is executed, an elite population containing individuals with top fitness is obtained.
Algorithm 3 UpdateInds.
Input: The current population P_c; the individual s used for updating.
Output: The current population with updated individuals P_c.
1: TempP_c = P_c;
2: while TempP_c ≠ ∅ do
3:   Randomly select an individual s' in P_c;
4:   if C_max(s) < C_max(s') then
5:     s' = s; break;
6:   else
7:     Remove s' from TempP_c;
8:   end if
9: end while
10: return P_c
Algorithm 4 UpdateP_c.
Input: The population size N; the individual s_i of the current population; the maximal update number UpdateNum_max.
Output: The updated population P_c.
1: iter = 0;
2: t = {1, 2, …, N};
3: while iter < UpdateNum_max do
4:   Randomly select a position k in t;
5:   t = t \ {k};
6:   if C_max(s_i) < C_max(s_k) then
7:     s_k = s_i; break;
8:   end if
9:   iter = iter + 1;
10: end while
11: return P_c
Apart from the above two update methods, another update method is used for PotentialInd. As Algorithm 1 indicates, if the current solution has been improved in the single insertion-based local search or in the further search phase, PotentialInd is updated.

3.5. Perturbation and Local Search Methods

In the single insertion-based local search and the further search, if the current individual has been replaced by a better one, the better one is stored. Otherwise, the solution has become trapped in a local optimum, and the heavy perturbation is applied to it. Compared with the light perturbation, the heavy perturbation has a larger interference strength, providing enough disturbance for high quality solutions. The heavy perturbation retains some characteristics of the previous solution, but it is not suitable for all the local search methods; in GlobalSearch, for instance, the light perturbation is more suitable, since its relatively mild strength is better for disturbing good solutions. The light perturbation and the heavy perturbation operations are defined as follows:
  • The heavy perturbation:
    (1) Input a solution s. Three different positions p_1, p_2, p_3 of s are randomly selected, where p_1 < p_2 < p_3.
    (2) Let S_1 represent the partial sequence between p_1 and p_2, and S_2 the other partial sequence between p_2 and p_3 (not including p_2); exchange S_1 and S_2 to generate the new solution s_new, which is output.
  • The light perturbation:
    (1) Input a solution s. Two different positions p_1, p_2 of s are randomly selected, where p_1 < p_2.
    (2) Let S represent the partial sequence between p_1 and p_2 (not including p_1); move the job at position p_1 behind S to generate the new solution s_new, which is output.
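The two operators can be sketched as follows. The cut positions are passed as parameters here for clarity (HLS draws them at random), and the segment boundaries reflect one reading of the inclusive/exclusive wording in the definitions above:

```python
def light_perturbation(s, p1, p2):
    """Move the job at position p1 behind the segment (p1, p2]."""
    assert p1 < p2
    return s[:p1] + s[p1 + 1:p2 + 1] + [s[p1]] + s[p2 + 1:]

def heavy_perturbation(s, p1, p2, p3):
    """Exchange S1 = s[p1..p2] with S2 = s[p2+1..p3] (p2 excluded from S2)."""
    assert p1 < p2 < p3
    s1, s2 = s[p1:p2 + 1], s[p2 + 1:p3 + 1]
    return s[:p1] + s2 + s1 + s[p3 + 1:]
```

The heavy operator relocates two whole segments, so it disturbs the permutation far more than the single-job move of the light operator, matching the intended difference in perturbation strength.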
In the first phase, a single insertion-based local search is employed in HLS to explore better solutions. In [29,30,31,32,33,34,35,36,37], this local search has shown higher effectiveness in obtaining good quality solutions than local search based on swap operators. The crucial idea of this method is to insert a job into all positions of the sequence and compare the new makespan with the previous one; the better sequence is then accepted. If a solution cannot be improved, it undergoes the further search in the second phase to help it escape the local optimum. Moreover, the further search, whose purpose is to exploit good individuals in the neighborhood, can increase the convergence speed of HLS. The effective insertion-based local search is described as follows:
(1)
Input a solution s and a position r.
(2)
Let j be the job at position r. Insert j into each of the remaining possible positions of s to generate n − 1 neighborhood solutions.
(3)
Let s_best be the solution with the minimal makespan among the n − 1 neighborhood solutions.
(4)
Output s_best.
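Assuming again an `evaluate` function that returns the makespan of a full permutation, steps (1)-(4) can be sketched as:

```python
def insertion_local_search(s, r, evaluate):
    """Remove the job at position r, reinsert it at every other position,
    and return the best of the resulting n - 1 neighbors by makespan."""
    job = s[r]
    rest = s[:r] + s[r + 1:]
    # skipping k == r excludes the original position, giving n - 1 neighbors
    neighbors = [rest[:k] + [job] + rest[k:]
                 for k in range(len(rest) + 1) if k != r]
    return min(neighbors, key=evaluate)
```

Note that the returned neighbor may still be worse than s; as in Algorithm 1, the caller compares its makespan against C_max(s) before accepting it.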

3.6. How to Balance between Exploitation and Exploration

In designing a hybrid heuristic algorithm, it is well known that diversity and convergence are the two basic issues. In terms of convergence, exploitation is executed in HLS to improve the convergence rate; in terms of diversity, exploration of other search directions in breadth is applied in HLS to maintain or enhance the diversity of the population. HLS trades off convergence and diversity efficiently: diversity is pursued through exploration, and convergence is increased by means of exploitation.
(1)
Regarding exploitation: this means that a deeper search of a large search space can be executed. To find better solutions, we first use the single insertion-based local search to begin the evolution of the population. If the current individual cannot be replaced with a new one after the single local search, we then apply the further search, based on the insertion-based local search, to search the solutions more deeply. With these methods, the convergence rate can be improved quickly.
(2)
Regarding exploration: in other words, its central idea is to maintain the diversity of the population. In the search process, we apply the heavy perturbation method to individuals that are trapped in local optima. This effectively prevents the current population from being trapped in local optima and steers the population in a promising direction for the next generation. It is worth mentioning that the light perturbation in GlobalSearch keeps the diversity of the improved population while retaining high convergence towards good solutions.
As expected, a good population with high quality and diversity is formed. Moreover, based on the above analyses, HLS can successfully balance exploitation and exploration in the search process.

4. Experimental Results

4.1. Environmental Setup

HLS is run on a PC with 4 GB RAM and a 3.40 GHz CPU under Windows 7. It is programmed in C++ with Microsoft Visual Studio 2013. Obviously, an algorithm given more running time can produce better results. To compare with other algorithms fairly, the stopping criterion of HLS is a CPU time limit of (n × m / 2) × f milliseconds, where f is set to 30, 60 and 90, respectively, the same as used by Ruiz in [38].
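The stopping criterion is a simple function of the instance size, which can be expressed as:

```python
def time_limit_ms(n, m, f):
    """CPU-time budget (n * m / 2) * f in milliseconds, shared by all
    compared algorithms; f takes the values 30, 60 or 90."""
    return (n * m / 2) * f
```

For example, a 100-job, 10-machine instance with f = 30 gets a budget of 15 seconds, so larger instances are automatically granted proportionally more time.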

4.2. Benchmark Problem Instances and Benchmark Algorithms

To verify the performance of the proposed algorithm, Taillard-based sets from Vallada et al. (2003), comprising 4 sets and 480 benchmark instances, are used. The instances in each set range from 20 jobs with 5 machines to 500 jobs with 20 machines, with sizes 20 × 5, 20 × 10, 20 × 20, 50 × 5, 50 × 10, 50 × 20, 100 × 5, 100 × 10, 100 × 20, 200 × 10, 200 × 20 and 500 × 20, and each size has 10 specific instances. Note that the sets differ in their setup times relative to the processing times. The first two sets are SDST10 and SDST50, in which the setup times are 10% and 50% of the average processing times p_{π_i, j}; in other words, the setup times are generated uniformly from the ranges [1, 9] and [1, 49], because p_{π_i, j} in this benchmark is generated uniformly from the range [0, 99]. Similarly, the last two sets are SDST100 and SDST125, in which the setup times are generated uniformly from the ranges [1, 99] and [1, 124], respectively 100% and 125% of p_{π_i, j}.
Then, to show the effectiveness of HLS, six efficient algorithms for FSSP-SDST, namely GA, MA, MA_LS, PACO, IG_RS, and IG_RS_LS [18,38], are compared with the proposed algorithm. Furthermore, a response variable called the relative percentage deviation (RPD) is used to measure the gap between a solution produced by a given algorithm and the best bound found at http://soa.iti.es/problem-instances:
RPD = (SOME_sol − BEST_sol) / BEST_sol × 100 (%)
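The measure is straightforward to compute from a pair of makespans:

```python
def rpd(some_sol, best_sol):
    """Relative percentage deviation of a solution's makespan (some_sol)
    from the best known bound (best_sol); 0 means the bound was matched."""
    return (some_sol - best_sol) / best_sol * 100.0
```

A lower RPD is better, and averaging it over instances (the ARPD reported below) makes results comparable across instance sizes with very different absolute makespans.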

4.3. Experimental Parameter Settings

HLS has five control parameters, namely N, MP, GSNum_max, UpdateNum_max and FSNum_max. Owing to space limitations, instances Ta071–Ta080 (100 × 10) in SDST50 are used as the base case to calibrate the parameters in the following experiments. Each experiment is run ten times independently with a CPU time equal to (n × m / 2) × 30. Relatively good results are marked in bold.

4.3.1. The Influence of the Population Size N

Obviously, the population size is an essential parameter of heuristic algorithms, and selecting an appropriate size is known to be a challenging task. If the size is too small, the algorithm converges quickly and reaches local optima prematurely; however, too large a size brings additional computational cost. To assess the influence of the population size on search performance clearly, it is varied from 10 to 40, with the RPD results summarized in Table 3.
From the average RPD results in Table 3, a population size of 20 provides a better average RPD (ARPD) on some instances, including Ta071, Ta075, Ta078 and Ta080. On Ta078, a population size of 10 yields the same lowest ARPD as a population size of 20, with a value of 0.41, and population sizes of 10 and 30 yield the same lowest ARPD (0.56) on Ta079. Moreover, HLS with a population size of 30 gains a better RPD on instances Ta074, Ta077 and Ta079. Figure 2 gives empirical insight into the influence of each population size using 95% confidence intervals of ARPD for Ta071–Ta080, and it shows that a population size of 20 beats the other settings with the best ARPD. Therefore, we conclude that a population size of 20 outperforms the other settings in producing better ARPD results.

4.3.2. The Influence of the Maximal Number of GlobalSearch GSNum m a x

In this experiment, we discuss the effect of GSNum_max. In GlobalSearch, GSNum_max is the maximum number of iterations of the search conducted on the selected individuals, which can be further perturbed. To demonstrate the effect of GSNum_max on the proposed algorithm, Ta071–Ta080 are again used, with performance measured as RPD and GSNum_max chosen from the set {3, 4, 5, 6}. The other parameters are temporarily set as in the previous experiment. Table 4 summarizes the RPD results for each GSNum_max.
From Table 4, it is easily observed that GSNum_max = 3 gains only one best result, on Ta071, while GSNum_max = 4 performs better on Ta075, Ta076, Ta078, Ta079 and Ta080. Moreover, GSNum_max = 5 provides better results on three instances, namely Ta073, Ta074 and Ta077, whereas GSNum_max = 6 beats the other settings only on Ta072. In general, GSNum_max = 4 gives the best average RPD on the instance set Ta071–Ta080.
Figure 3 provides the 95% confidence interval error graph of ARPD for Ta071–Ta080 with different GSNum_max. We find that GSNum_max = 4 has strong adaptability with higher convergence ability and solves FSSP-SDST well, with the best ARPD among all the settings. Based on the above analyses, GSNum_max = 4 yields the better performance for most instances in terms of RPD and is therefore the best choice for HLS.

4.3.3. The Effect of the Mutation Probability M P

As introduced before, in GlobalSearch a mutation probability is used to control the number of iterations of the perturbation and update process. The setting is important: too large an MP moves the search at an angle across the space of better results, while small values of MP guide the search parallel to the axes of the exploration space. Hence, MP values ranging from 0.1 to 0.4 are used to compare the relative performance on Ta071–Ta080. Table 5 details the results for the different values of MP.
Table 5 shows that MP = 0.1 gains the best RPD only on Ta075, while MP = 0.2 provides a better RPD on Ta071 and Ta074. Besides, MP = 0.3 gives better results on most of the instances, including Ta072, Ta073, Ta076, Ta079 and Ta080, with the smallest average RPD. On Ta077 and Ta078, MP = 0.4 performs best among all the settings. The ARPD with average 95% CI is presented in Figure 4. Generally speaking, the experiments imply that MP = 0.3 is a good setting for solving FSSP-SDST, with the better ARPD.

4.3.4. The Influence of the Maximal Number of Update UpdateNum m a x

In the update method UpdateP_c of HLS, UpdateNum_max denotes the maximal number of update iterations performed with a single individual. To maintain the diversity of the population, its value cannot be too large and should be less than the population size N, according to the framework of UpdateP_c; however, if it is too small, the update method has little effect on obtaining better results in the search space. No single value suits all instances. Here UpdateNum_max is set to 4, 6, 8 and 10 to test its suitability, with the other settings as in the preliminary experiments. Table 6 shows the detailed results.
From Table 6, UpdateNum_max = 4 is suitable for Ta072, Ta073, Ta079 and Ta080. For Ta071, Ta074 and Ta077, UpdateNum_max = 6 beats the other settings in RPD. Likewise, Ta072 and Ta075 obtain better results with UpdateNum_max = 8, while UpdateNum_max = 10 provides better RPD on Ta076 and Ta078; note that UpdateNum_max = 4 and UpdateNum_max = 8 give the same result on Ta072. In terms of average RPD, UpdateNum_max = 6 outperforms the other values. Figure 5 shows the average RPD with confidence intervals at the 95% confidence level, and it is easy to see that UpdateNum_max = 6 performs best among all settings. It can be concluded that the value of UpdateNum_max affects the performance of the proposed algorithm, and UpdateNum_max is set to 6 for its good achievement.

4.3.5. The Effect of the Maximum Number of Further Searches FSNum_max

If some individuals cannot be improved by the single insertion-based local search, a further search method is applied to them. This section discusses the effect of FSNum_max, which defines the maximal number of further exploitation steps. Ta071–Ta080, measured by RPD, are used to present the effect. FSNum_max varies over the set {10, 15, 20, 25}, and the other parameter settings are the same as in the previous experiments. The results are summarized in Table 7.
From Table 7, FSNum_max = 10 provides a good RPD on Ta072, whereas FSNum_max = 20 and FSNum_max = 25 perform best on Ta078 and Ta073 respectively. For the remaining instances, including Ta071, Ta074, Ta075, Ta076, Ta077, Ta079 and Ta080, FSNum_max = 15 beats the other values in RPD. It is thus clear that FSNum_max = 15 outperforms the other settings on most instances. Figure 6 shows the ARPD with 95% confidence intervals over all ten instances, from which it is easily seen that FSNum_max = 15 gives the best result for HLS.

4.4. Effect of the NEH-Based Problem-Specific Heuristic

In the initial stage of the proposed algorithm, an NEH-based problem-specific heuristic is applied to a randomly generated population. It plays an important part in enhancing the results of the algorithm; without it, the search for good individuals would spread out in far too broad a direction. The instances Ta071–Ta080 (100 × 10) in SDST50 and SDST125 are each run ten times to demonstrate its vital effect, with the rest of the framework unchanged for a fair comparison. Table 8 shows the results for SDST50 and SDST125, with the better RPD results marked in bold.
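The classical NEH heuristic [16] that NEHBPS builds on can be sketched as follows. This simplified version ignores setup times, which the problem-specific variant additionally accounts for; it is an illustration, not the paper's NEHBPS implementation.

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine for permutation
    `seq`, where p[j][k] is the processing time of job j on machine k
    (no setup times in this simplified sketch)."""
    m = len(p[0])
    c = [0] * m  # running completion times per machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    """Plain NEH: order jobs by decreasing total processing time, then
    insert each job at the position minimising the partial makespan."""
    n = len(p)
    order = sorted(range(n), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        seq = min(
            (seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
            key=lambda s: makespan(s, p),
        )
    return seq
```

For example, with three jobs on two machines, p = [[3, 4], [2, 5], [4, 1]], the heuristic builds the sequence job by job, keeping the insertion position with the smallest partial makespan at each step.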
From Table 8, with the exception of Ta071, the RPD of every SDST50 instance is better with NEHBPS than without it, and NEHBPS behaves the same on all SDST125 instances except Ta073. In other words, the method with NEHBPS outperforms the method without it on every instance except Ta071 in SDST50 and Ta073 in SDST125. The ARPD plot with 95% confidence intervals for SDST50 and SDST125 is shown in Figure 7, which demonstrates that the performance of HLS degrades seriously when NEHBPS is not adopted. In summary, the NEH-based problem-specific heuristic strongly affects the proposed algorithm, and using it to initialize the population is an effective strategy.

4.5. Effect of Different Perturbation Operators

In GlobalSearch, a perturbation is applied to individuals of the current population that are trapped in local optima, in order to balance exploration and exploitation. It plays an important role in finding high-quality individuals and reducing the need for further local search. However, different perturbation operators affect the proposed algorithm differently. Here, the light and heavy perturbation operators mentioned before are compared experimentally on instances Ta071–Ta080 in SDST50 and SDST125. For a fair comparison, the rest of the framework is kept the same. The effect of the two perturbations is detailed in Table 9, where the better results are marked in bold.
From Table 9, for the instances in SDST50, the heavy perturbation performs better on Ta072, Ta074, Ta078 and Ta080, while the light perturbation behaves better on the other six instances. For the instances in SDST125, the light perturbation also performs better on most instances, including Ta072, Ta073, Ta074, Ta075, Ta076, Ta077, Ta078 and Ta080, whereas the heavy perturbation beats the light one only on Ta071 and Ta079. Figure 8 provides a direct comparison between the two methods, showing that the light perturbation achieves the superior overall ARPD. To conclude, the light perturbation is an efficient method for helping individuals escape local optima.

4.6. Effectiveness Evaluation of Different Local Search Operators

In the initial search process and the further search of the proposed algorithm, an insertion-based local search is executed to find individuals with better fitness in the neighbouring search space. Different types of local search guide individuals in different directions, whether good or bad. In this section, two other local search operators are compared with the insertion-based local search on instances Ta071–Ta080 (100 × 10) in SDST50 and SDST125, each run ten times. The two compared operators are described as follows.
(1)
exchange-based local search: For each position from 1 to n − 1 of the scheduling permutation, exchange the job at that position with the job at the adjacent next position.
(2)
swap-based local search: For every position of the scheduling permutation (from 1 to n), swap the job at that position with the job at a given position.
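Using 0-based indexing, the three moves underlying these operators can be sketched as below; the function names are illustrative, and each returns a new permutation rather than modifying its input.

```python
def insertion_neighbour(seq, i, j):
    """Insertion move: remove the job at position i and reinsert it at
    position j (the move used by the insertion-based local search)."""
    s = list(seq)
    job = s.pop(i)
    s.insert(j, job)
    return s

def exchange_neighbour(seq, i):
    """Exchange move: swap the job at position i with its adjacent
    successor (valid for positions 0 .. n - 2)."""
    s = list(seq)
    s[i], s[i + 1] = s[i + 1], s[i]
    return s

def swap_neighbour(seq, i, j):
    """Swap move: exchange the jobs at positions i and j."""
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return s
```

A local search then scans such neighbours of the current permutation and accepts a move whenever it improves the makespan.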
Note that the rest of the framework is kept the same to make the comparison fair. Table 10 gives the detailed results of the three operators, with the better results marked in bold. There are clear differences among the three local search operators. In terms of the standard deviation (SD) over the instances in SDST50 and SDST125, the insertion-based local search has the smallest values, 0.26 and 0.47 respectively, meaning it is considerably more stable at obtaining high-quality solutions. Furthermore, the ARPD obtained by the insertion-based local search within HLS is much smaller. Hence, the insertion-based local search, which generates high-quality solutions, performs better than the other two methods on all instances in SDST50 and SDST125. Figure 9 demonstrates that the insertion-based local search yields the better overall mean, namely ARPD. In conclusion, the insertion-based local search exhibits superior performance and high efficiency in the proposed algorithm.

4.7. Comparison Results with Some State-of-the-Art Approaches

In this subsection, the proposed algorithm with the best parameter settings obtained in the preceding sections is compared against six benchmark algorithms, GA, MA, MA_LS, IG_RS, IG_RS_LS and PACO, taken from the literature [38], to present a comprehensive performance comparison. Among them, IG_RS_LS and MA_LS integrate IG_RS and MA respectively with an effective local search. To examine the relative ranking of all algorithms as a function of computation time, three series of experiments are carried out with the aforementioned factor f set to 30, 60 and 90. For each benchmark instance, HLS runs independently ten times and thirty times; HLS(10) denotes that HLS is run ten times, while HLS(30) denotes thirty runs. RPD is calculated to compare the performance of the algorithms on each instance, and the standard deviations for each type of instance are calculated to show the robustness of the compared algorithms. The comparative results are shown in Table 11, Table 12, Table 13, Table 14, Table 15 and Table 16, grouped by instance type and size. Owing to space limitations, only the average RPD of the problems in 12 different scales is presented for the sets SDST10, SDST50, SDST100 and SDST125; that is, the corresponding tables give the ARPD for each scale of 10 instances, together with the ARPD and SD over all scales in each instance set. The best ARPD results are marked in bold. Negative values indicate that the result found by HLS is better than the current best solution provided at http://soa.iti.es/problem-instances.
Note that in the following analyses the ARPD values are those achieved by HLS running ten times, and an improvement in ARPD means a decrease of that value, since a lower ARPD represents better algorithm performance.
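Assuming the standard definition of the relative percentage deviation, the metric used throughout these comparisons can be computed as below (a sketch; the paper defines RPD formally in an earlier section).

```python
def rpd(makespan_alg, makespan_best):
    """Relative percentage deviation of an algorithm's makespan from the
    best-known makespan: 100 * (C - C_best) / C_best. Negative values
    indicate a new best-known solution."""
    return 100.0 * (makespan_alg - makespan_best) / makespan_best

def arpd(makespans_alg, makespans_best):
    """Average RPD over a set of instances (lower is better)."""
    pairs = zip(makespans_alg, makespans_best)
    return sum(rpd(a, b) for a, b in pairs) / len(makespans_alg)
```

For instance, a makespan of 110 against a best-known 100 gives an RPD of 10, while 95 against 100 gives −5, i.e. a new bound.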
From the ARPD in Table 11, where the computation time is (n × m / 2) × 30, the best cross ARPD among the algorithms other than HLS is always obtained by IG_RS_LS; that is, IG_RS_LS performs best among the compared algorithms. In SDST10, HLS provides slightly better results than IG_RS_LS for the scales from 20 × 5 to 200 × 20; the maximum improvement over IG_RS_LS is 0.41 at scale 100 × 20 and the minimum is 0.01 at scale 20 × 10. For the large scale 500 × 20, IG_RS_LS gives a better ARPD of 0.46 against 0.72 for the proposed algorithm. Against the remaining algorithms, HLS obtains a lower, hence better, ARPD in every case. Overall, HLS improves on IG_RS_LS, the second-best algorithm, by an ARPD of 0.17. In SDST50, the improvement achieved by HLS is slightly larger than in SDST10. Except at scale 500 × 20, HLS gives the best performance at every scale among all algorithms; at 500 × 20 it is second only to IG_RS_LS and still superior to the rest. The least and greatest improvements over the suboptimal RPD obtained by IG_RS_LS are 0.04 and 0.99 respectively. On overall ARPD, HLS beats all other algorithms with a value of 0.45, better than the 1.00 of IG_RS_LS. The ranking from best to worst performance is HLS, IG_RS_LS, MA_LS, MA, IG_RS, PACO and GA.
From Table 12, for the large sets SDST100 and SDST125, HLS generates better results than for the small sets SDST10 and SDST50. For SDST100, the smallest improvement over IG_RS_LS is 0.19, the difference between 0.27 and 0.08 at scale 20 × 20, and the largest is 1.30. The ARPD values obtained by HLS are the best at every scale except 500 × 20, and considering the cross average in this table, HLS ranks best. For SDST125, the smallest and largest improvements over IG_RS_LS are 0.19 and 1.58 respectively. In terms of overall ARPD, HLS achieves 0.62, better than all other algorithms, an improvement of 0.82 over the suboptimal ARPD of IG_RS_LS. Notably, this is a much higher average percentage improvement than over the other five algorithms, which implies that HLS performs increasingly well as the ratio of setup times to processing times increases.
Table 13 and Table 14 give the ARPD results for all instance types when the computation time is set to (n × m / 2) × 60, and the same pattern is observed across the four data sets. With the exception of the 500 × 20 instances, all other ARPD values improve gradually as the setup times increase over SDST10, SDST50, SDST100 and SDST125. It is clear that HLS exceeds all other algorithms owing to its smallest overall ARPD.
The same holds for Table 15 and Table 16, where the computation time is (n × m / 2) × 90. For SDST50, except at scale 500 × 20, the ARPD of the algorithms at each scale decreases in the order GA, PACO, IG_RS, MA, MA_LS, IG_RS_LS, HLS, confirming the best performance of HLS among all the compared algorithms; SDST100 and SDST125 show the same pattern. However, the trend is weaker in SDST10, with the exceptions of the scales 20 × 10 and 500 × 20. For brevity, the detailed analyses of the minimum and maximum ARPD differences between the two best algorithms, HLS and IG_RS_LS, at each scale are omitted, but the performance trend of each algorithm is the same as in the previous tables. From the overall ARPD it is evident that HLS attains the smallest value and surpasses all other algorithms, attributable to the high effectiveness of each component of HLS. Furthermore, the standard deviations of each compared algorithm in Table 11, Table 12, Table 13, Table 14, Table 15 and Table 16 demonstrate the high stability of HLS in obtaining good solutions.
In addition, every algorithm clearly benefits from additional computation time on all instance types, SDST10, SDST50, SDST100 and SDST125. Figure 10, Figure 11, Figure 12 and Figure 13 illustrate the importance of computation time at different instance scales, using the ARPD of HLS over ten runs. The trajectory of each curve declines overall and approaches the X axis as f increases, representing a better cross ARPD. It is also observed that the average RPD shows an increasing trend with the number of machines, grouped by the number of jobs (20, 50, 100, 200 and 500). From these figures, it can be summarized that the cross ARPD decreases, and thus improves, as f increases, regardless of instance type and size.
Furthermore, the interaction between the ratio of setup times to processing times and the instance scales is studied to verify the effectiveness of HLS. Figure 14, Figure 15 and Figure 16 present the different instance types for f = 30, 60 and 90 respectively, with HLS running ten times. In fact, the average RPD falls considerably more sharply, with a larger gap between algorithms, for the large types SDST100 and SDST125 than for SDST10 and SDST50. Moreover, the curves in Figure 15 and Figure 16 show the performance ranking of the compared algorithms in ascending order as GA, PACO, IG_RS, MA, MA_LS, IG_RS_LS, HLS, whereas Figure 14 presents the descending order HLS, IG_RS_LS, MA, IG_RS, MA_LS, PACO and GA. In summary, HLS benefits more than the others from increased computation time and from a larger ratio between setup times and processing times.
Table 17, Table 18, Table 19, Table 20, Table 21, Table 22 and Table 23 present the new bounds on makespan discovered by HLS over ten runs, together with the previously best-known bounds, grouped by scale within each instance type. The differences between the two values, also given in the tables, show the high effectiveness of HLS in obtaining high-quality solutions; they grow as the ratio of setup times to processing times and the instance scales increase, and the number of new bounds rises accordingly. For example, at scale 50 × 5, the difference on Ta039 of SDST10 is −2, whereas on Ta038 of SDST100 it is −27. Table 18 provides the new bounds, the best-known bounds and the differences between them for the instances at scale 50 × 10. Overall, the differences are larger in SDST50 and SDST100, meaning HLS is even more effective there, and the tables show that HLS is statistically superior to the other well-performing algorithms. For illustration, Table 24 lists some job permutations chosen from scale 50 × 5 in SDST10 and SDST100, owing to space limitations. Each permutation, with jobs numbered from 0 to 49, is the best solution achieving the new makespan bound found by HLS.
To illustrate the robustness of HLS, the instances in SDST50 are chosen as the base examples. Figure 17, Figure 18 and Figure 19 show ARPD with confidence intervals at the 95% confidence level for CPU times of (n × m / 2) × 30, (n × m / 2) × 60 and (n × m / 2) × 90 respectively; the ARPD values are obtained by HLS over ten runs. The length of each error bar represents the stability of the corresponding algorithm: a shorter bar indicates better robustness. HLS clearly shows the strongest robustness, with the shortest interval among all compared algorithms regardless of CPU time. The robustness order of the other algorithms, from worst to best, is PACO, GA, IG_RS, MA, MA_LS, IG_RS_LS. Moreover, these figures also show the best effectiveness of HLS, with the smallest average RPD; the effectiveness ranking of the other algorithms, from last to first, is GA, PACO, IG_RS, MA, MA_LS, IG_RS_LS. Although PACO is more effective than GA, with a lower ARPD, GA is more stable than PACO in terms of robustness. Considering effectiveness and robustness together, HLS proves to be the best of all the compared algorithms.
From the above tables and figures, it can be concluded that the overall ranking from worst to best is GA, PACO, IG_RS, MA, MA_LS, IG_RS_LS, HLS, particularly since the ARPD over all 480 instances decreases in that order. Although IG_RS_LS provides a better ARPD at scale 500 × 20, HLS presents a better overall ARPD regardless of instance type. Moreover, HLS performs increasingly well as the ratio of setup times to processing times grows. Based on these results, HLS performs better than the other six algorithms, verifying that it is an effective and robust algorithm for solving the flowshop scheduling problem with sequence dependent setup times.

4.8. Comparison Results with Some Recent Algorithms

To further illustrate the effectiveness of the proposed algorithm, an additional experiment is conducted to compare HLS with two recent algorithms, the adaptive hybrid algorithm (AHA) [39] and the enhanced migrating birds optimization (EMBO) [40], for FSSP-SDST with the makespan criterion.
In AHA, each job is assigned an inheriting factor, and a novel operator is constructed to update this factor dynamically so that both good and bad genes can be identified. A new crossover operator then passes good genes to the offspring and destroys bad genes with high probability, so that after some generations the offspring accumulate more and more good genes. This improves the effectiveness of AHA in obtaining high-quality solutions, and AHA is therefore chosen for comparison with HLS. As for the migrating birds optimization (MBO) [41], it is a metaheuristic inspired by the V-shaped flight formation of migrating birds, which saves energy. A basic migrating birds optimization (BMBO) [23] was subsequently proposed to solve FSSP-SDST with the makespan criterion. Since the performance of BMBO degrades as instance size increases, EMBO extends BMBO with an or-opt neighbourhood, originally designed for the travelling salesman problem (TSP), and with well-known heuristics to generate its leader bird. The or-opt neighbourhood operator moves a block of one, two, three or four jobs and inserts it elsewhere in the sequence.
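The or-opt move described above can be sketched as follows; this is a hypothetical helper for illustration, not the EMBO implementation.

```python
def or_opt_move(seq, start, length, dest):
    """Or-opt move: remove the block seq[start:start + length]
    (length typically 1 to 4) and reinsert it so that it begins at
    index `dest` of the remaining sequence."""
    s = list(seq)
    block = s[start:start + length]
    rest = s[:start] + s[start + length:]
    return rest[:dest] + block + rest[dest:]
```

For example, moving the two-job block starting at position 1 of the permutation [0, 1, 2, 3, 4] to the end of the remaining sequence yields [0, 3, 4, 1, 2].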
HLS runs 30 times for comparison with the above algorithms. The RPD results for each scale in SDST10, SDST50, SDST100 and SDST125 for each compared algorithm are presented in Table 25, with the relatively good results marked in bold. From Table 25, HLS is better suited than the other compared algorithms for achieving good results. In detail, HLS achieves the best ARPD values of 0.31 and 0.40 in SDST10 and SDST50 respectively among all the compared algorithms. For SDST100 and SDST125, its overall ARPD values of 0.47 and 0.52 are clearly better than those of EMBO and AHA. In addition, Figure 20 shows intuitively the effectiveness of HLS in obtaining high-quality results.
In conclusion, HLS is highly competitive with these well-performing algorithms for solving FSSP-SDST under the makespan criterion.

5. Conclusions and Future Work

In this paper, the flowshop scheduling problem with sequence dependent setup times is addressed with the objective of minimizing the makespan. A hybrid local search algorithm based on novel local search methods is presented to deal with this problem. First, an effective NEH-based problem-specific method is applied to initialize the population. Second, a global search embedded with a light perturbation is used to generate a better population. Then, an insertion-based local search is adopted to find good individuals in the current population. Next, a further local search is applied to individuals that are trapped in local optima. Last, a heavy perturbation is used to explore better neighbours in new search regions.
To demonstrate the performance of the proposed algorithm, extensive experiments are conducted on all 120 benchmark instances under the four different setup-time levels. The comparative experiments clearly show that HLS gives better results than six other state-of-the-art algorithms, namely GA, MA, MA_LS, IG_RS, IG_RS_LS and PACO. The experimental results confirm that the combination of local search and perturbation methods in HLS is appropriate.
In the future, the proposed HLS algorithm is expected to be applied to other combinatorial problems, such as the no-wait flowshop scheduling problem (NWFSSP), the hybrid flowshop scheduling problem (HFSSP) and the blocking flowshop scheduling problem (BFSSP) under the makespan criterion. Different parameter values for HLS can be studied comprehensively to enhance solution quality, and different neighbourhood structures can be developed to improve its effectiveness. Meanwhile, the interaction between exploitation and exploration strategies should be investigated in depth; in other words, other local search and perturbation techniques better suited to FSSP-SDST can be studied to guide the search into wider spaces and enhance the performance of HLS. Moreover, other metaheuristics could be constructed by adding effective ingredients to the basic HLS, for example a restart strategy, in which a new population is generated randomly when the current best individual has not improved after a given number of iterations, and an elite strategy, which keeps some elite individuals in the current population for a number of iterations.
Furthermore, HLS can be integrated with other heuristics, such as variable neighbourhood search and tabu search. This would better balance intensification and diversification and further improve the quality of solutions.

Acknowledgments

The authors would like to thank all anonymous reviewers for their constructive comments, which have helped improve the study in numerous ways. This research is supported by the National Natural Science Foundation of China under Grant No. 61603087 and also funded by the Natural Science Foundation of Jilin Province under Grant No. 20160101253JC. Meanwhile, this research is also supported by the Fundamental Research Funds for the Central Universities No. 2412017FZ026.

Author Contributions

Yunhe Wang operated the experiments and drafted the manuscript. Xiangtao Li designed the research and Zhiqiang Ma checked the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FSSP-SDST: The flowshop scheduling problem with sequence dependent setup times
HLS: Hybrid local search algorithm
NEHBPS: Nawaz-Enscore-Ham based problem-specific method
RPD: Relative percentage deviation

References

  1. Johnson, S.M. Optimal two- and three-stage production schedules with setup times included. Nav. Res. Logist. Q. 1954, 1, 61–68. [Google Scholar] [CrossRef]
  2. Cheng, T.C.E.; Gupta, J.N.D.; Wang, G. A review of flowshop scheduling research with setup times. Prod. Oper. Manag. 2000, 9, 262–282. [Google Scholar] [CrossRef]
  3. Vanchipura, R.; Sridharan, R. Development and analysis of constructive heuristic algorithms for flow shop scheduling problems with sequence-dependent setup times. Int. J. Adv. Manuf. Technol. 2013, 67, 1337–1353. [Google Scholar] [CrossRef]
  4. Kheirkhah, A.; Navidi, H.; Bidgoli, M.M. Dynamic facility layout problem: A new bilevel formulation and some metaheuristic solution methods. IEEE Trans. Eng. Manag. 2015, 62, 396–410. [Google Scholar] [CrossRef]
  5. Balouka, N.; Cohen, I.; Shtub, A. Extending the Multimode Resource-Constrained Project Scheduling Problem by Including Value Considerations. IEEE Trans. Eng. Manag. 2016, 63, 4–15. [Google Scholar] [CrossRef]
  6. Li, S.; Wang, N.; Jia, T.; He, Z.; Liang, H. Multiobjective Optimization for Multiperiod Reverse Logistics Network Design. IEEE Trans. Eng. Manag. 2016, 63, 223–236. [Google Scholar] [CrossRef]
  7. Kheirkhah, A.; Navidi, H.; Bidgoli, M.M. An Improved Benders Decomposition Algorithm for an Arc Interdiction Vehicle Routing Problem. IEEE Trans. Eng. Manag. 2016, 63, 259–273. [Google Scholar] [CrossRef]
  8. Rios-Mercado, R.Z.; Bard, J.F. The flow shop scheduling polyhedron with setup times. J. Comb. Optim. 2003, 7, 291–318. [Google Scholar] [CrossRef]
  9. Nishi, T.; Hiranaka, Y. Lagrangian relaxation and cut generation for sequence-dependent setup time flowshop scheduling problems to minimise the total weighted tardiness. Int. J. Prod. Res. 2013, 51, 4778–4796. [Google Scholar] [CrossRef]
  10. Christian, B.; Andrea, R.; Michael, S. Hybrid Metaheuristics: An Emerging Approach to Optimization; Studies in Computational Intelligence; Springer: Berlin, Germany, 2008; Volume 114. [Google Scholar]
  11. Raidl, G.R.; Puchinger, J. Combining (Integer) Linear Programming Techniques and Metaheuristics for Combinatorial Optimization. Hybrid Metaheuristics 2008, 114, 31–62. [Google Scholar]
  12. Blum, C.; Cotta, C.; Fernandez, A.; Sampels, M. Hybridizations of metaheuristics with branch & bound derivates. Hybrid Metaheuristics 2008, 114, 85–116. [Google Scholar]
  13. D’Andreagiovanni, F.; Nardin, A. Towards the fast and robust optimal design of wireless body area networks. Appl. Soft Comput. 2015, 37, 971–982. [Google Scholar] [CrossRef]
  14. D’Andreagiovanni, F.; Krolikowski, J.; Pulaj, J. A fast hybrid primal heuristic for multiband robust capacitated network design with multiple time periods. Appl. Soft Comput. 2015, 26, 497–507. [Google Scholar] [CrossRef]
  15. Gambardella, L.M.; Montemanni, R.; Weyland, D. Coupling ant colony systems with strong local searches. Eur. J. Oper. Res. 2012, 220, 831–843. [Google Scholar] [CrossRef]
  16. Nawaz, M.; Enscore, E.E.; Ham, I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 1983, 11, 91–95. [Google Scholar] [CrossRef]
  17. Rios-Mercado, R.Z.; Bard, J.F. Heuristics for the flow line problem with setup costs. Eur. J. Oper. Res. 1998, 110, 76–98. [Google Scholar] [CrossRef]
  18. Ruiz, R.; Maroto, C.; Alcaraz, J. Solving the flowshop scheduling problem with sequence dependent setup times using advanced metaheuristics. Eur. J. Oper. Res. 2005, 165, 34–54. [Google Scholar] [CrossRef]
  19. Rios-Mercado, R.Z.; Bard, J.F. An Enhanced TSP-Based Heuristic for Makespan Minimization in a Flow Shop with Setup Times. J. Heuristics 1999, 5, 53–70. [Google Scholar] [CrossRef]
  20. Rajendran, C.; Ziegler, H. Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. Eur. J. Oper. Res. 2004, 155, 426–438. [Google Scholar] [CrossRef]
  21. Gajpal, Y.; Rajendran, C.; Ziegler, H. An ant colony algorithm for scheduling in flowshops with sequence-dependent setup times of jobs. Int. J. Adv. Manuf. Technol. 2006, 30, 416–424. [Google Scholar] [CrossRef]
  22. Tseng, F.T.; Gupta, J.N.D.; Stafford, E.F. A penalty-based heuristic algorithm for the permutation flowshop scheduling problem with sequence-dependent set-up times. J. Oper. Res. Soc. 2006, 57, 541–551. [Google Scholar] [CrossRef]
  23. Benkalai, I.; Rebaine, D.; Gagne, C.; Baptiste, P. The migrating birds optimization metaheuristic for the permutation flow shop with sequence dependent setup times. IFAC PapersOnLine 2016, 49, 408–413. [Google Scholar] [CrossRef]
  24. Simons, J.V. Heuristics in flow shop scheduling with sequence dependent setup times. Omega 1992, 20, 215–225. [Google Scholar] [CrossRef]
  25. Jacobs, L.W.; Brusco, M.J. Note: A local-search heuristic for large set-covering problems. Nav. Res. Logist. 1995, 42, 1129–1140. [Google Scholar] [CrossRef]
  26. Ruiz, R.; Stützle, T. A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. Eur. J. Oper. Res. 2007, 177, 2033–2049. [Google Scholar]
  27. Rajendran, C.; Ziegler, H. A heuristic for scheduling to minimize the sum of weighted flowtime of jobs in a flowshop with sequence-dependent setup times of jobs. Comput. Ind. Eng. 1997, 33, 281–284. [Google Scholar] [CrossRef]
  28. Wang, Y.; Dong, X.; Chen, P.; Lin, Y. Iterated Local Search Algorithms for the Sequence-Dependent Setup Times Flow Shop Scheduling Problem Minimizing Makespan. In Foundations of Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2014; pp. 329–338. [Google Scholar]
  29. Li, X.; Yin, M. An opposition-based differential evolution algorithm for permutation flow shop scheduling based on diversity measure. Adv. Eng. Softw. 2013, 55, 10–31. [Google Scholar] [CrossRef]
  30. Li, X.; Yin, M. A hybrid cuckoo search via Levy flights for the permutation flow shop scheduling problem. Int. J. Prod. Res. 2013, 51, 4732–4754. [Google Scholar] [CrossRef]
  31. Li, X.; Yin, M. A discrete artificial bee colony algorithm with composite mutation strategies for permutation flow shop scheduling problem. Sci. Iran. 2012, 19, 1921–1935. [Google Scholar] [CrossRef]
  32. Wang, L.; Fang, C. A hybrid estimation of distribution algorithm for solving the resource-constrained project scheduling problem. Expert Syst. Appl. 2012, 39, 2451–2460. [Google Scholar] [CrossRef]
  33. Fang, C.; Wang, L. An effective shuffled frog-leaping algorithm for resource-constrained project scheduling problem. Inf. Sci. 2011, 181, 4804–4822. [Google Scholar] [CrossRef]
  34. Pan, Q.K.; Wang, L. Effective heuristics for the blocking flowshop scheduling problem with makespan minimization. Omega 2012, 40, 218–229. [Google Scholar] [CrossRef]
  35. Li, X.; Wang, Q.; Wu, C. Efficient composite heuristics for total flowtime minimization in permutation flow shops. Omega 2009, 37, 155–164. [Google Scholar] [CrossRef]
  36. Li, X.; Ma, S. Multi-Objective Memetic Search Algorithm for Multi-Objective Permutation Flow Shop Scheduling Problem. IEEE Access 2016, 4, 2154–2165. [Google Scholar] [CrossRef]
  37. Li, X.; Li, M. Multiobjective Local Search Algorithm-Based Decomposition for Multiobjective Permutation Flow Shop Scheduling Problem. IEEE Trans. Eng. Manag. 2015, 62, 544–557. [Google Scholar] [CrossRef]
  38. Ruiz, R.; Stützle, T. An Iterated Greedy heuristic for the sequence dependent setup times flowshop problem with makespan and weighted tardiness objectives. Eur. J. Oper. Res. 2008, 187, 1143–1159. [Google Scholar]
  39. Li, X.; Zhang, Y. Adaptive hybrid algorithms for the sequence-dependent setup time permutation flow shop scheduling problem. IEEE Trans. Autom. Sci. Eng. 2012, 9, 578–595. [Google Scholar] [CrossRef]
  40. Benkalai, I.; Rebaine, D.; Gagne, C.; Baptiste, P. Improving the migrating birds optimization metaheuristic for the permutation flow shop with sequence-dependent set-up times. Int. J. Prod. Res. 2017, 1–13. [Google Scholar] [CrossRef]
  41. Duman, E.; Uysal, M.; Alkaya, A.F. Migrating Birds Optimization: A new metaheuristic approach and its performance on quadratic assignment problem. Inf. Sci. 2012, 217, 65–77. [Google Scholar] [CrossRef]
Figure 1. Gantt chart of the sequence J2-J1-J3 for the example problem.
Figure 2. ARPD plot for Ta071–Ta080 with different values of N.
Figure 3. ARPD plot for Ta071–Ta080 with different values of GSNum_max.
Figure 4. ARPD plot for Ta071–Ta080 with different values of MP.
Figure 5. ARPD plot for Ta071–Ta080 with different values of UpdateNum_max.
Figure 6. ARPD plot for Ta071–Ta080 with different values of FSNum_max.
Figure 7. ARPD plot for SDST50 and SDST125 with or without NEHBPS.
Figure 8. ARPD plot for SDST50 and SDST125 with different types of local search.
Figure 9. ARPD plot for SDST50 and SDST125 with different types of local search.
Figure 10. ARPD plot for SDST10 with different values of f.
Figure 11. ARPD plot for SDST50 with different values of f.
Figure 12. ARPD plot for SDST100 with different values of f.
Figure 13. ARPD plot for SDST125 with different values of f.
Figure 14. ARPD comparison of different methods on different types of instances (f = 30).
Figure 15. ARPD comparison of different methods on different types of instances (f = 60).
Figure 16. ARPD comparison of different methods on different types of instances (f = 90).
Figure 17. ARPD with 95% CI for each algorithm in SDST50 (f = 30).
Figure 18. ARPD with 95% CI for each algorithm in SDST50 (f = 60).
Figure 19. ARPD with 95% CI for each algorithm in SDST50 (f = 90).
Figure 20. ARPD comparison for methods on different types of instances.
Table 1. Processing times of jobs on each machine.

      J1   J2   J3
M1    2    3    1
M2    4    2    5
M3    3    1    2
Table 2. Sequence dependent setup times of jobs on each machine (row: preceding job; column: following job).

         M1               M2               M3
      J1   J2   J3     J1   J2   J3     J1   J2   J3
J1    –    2    3      –    3    1      –    2    4
J2    2    –    4      1    –    3      3    –    8
J3    3    4    –      3    2    –      6    7    –
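The example above can be made concrete with a short makespan computation for the sequence J2-J1-J3 using the data of Tables 1 and 2. This is a minimal sketch, not the paper's implementation: it assumes anticipatory setups (setup on a machine may start as soon as the previous job leaves that machine), no initial setup for the first job, and the row-equals-preceding-job reading of Table 2.

```python
# Hedged sketch: SDST flowshop makespan for the example problem.
# Assumptions (not from the paper): anticipatory setups, zero initial setup.
# Jobs indexed J1=0, J2=1, J3=2.

# Processing times p[machine][job] (Table 1).
p = [[2, 3, 1],
     [4, 2, 5],
     [3, 1, 2]]

# Setup times s[machine][from_job][to_job] (Table 2); diagonal unused.
s = [[[0, 2, 3], [2, 0, 4], [3, 4, 0]],
     [[0, 3, 1], [1, 0, 3], [3, 2, 0]],
     [[0, 2, 4], [3, 0, 8], [6, 7, 0]]]

def makespan(seq, p, s):
    m_count = len(p)
    c = [0] * m_count      # completion time of the previous job on each machine
    prev = None
    for job in seq:
        done = 0           # completion of the current job on the previous machine
        for m in range(m_count):
            setup = s[m][prev][job] if prev is not None else 0
            # job can start once the machine is set up and the job has arrived
            done = max(c[m] + setup, done) + p[m][job]
            c[m] = done
        prev = job
    return c[-1]

print(makespan([1, 0, 2], p, s))  # sequence J2-J1-J3 -> 20 under these assumptions
```

Under a different setup model (e.g., non-anticipatory setups or an initial setup for the first job) the value would differ, so the number is illustrative only.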
Table 3. The ARPD for different values of N. The bold numbers are the relatively good ARPD results.

Data No.   10     20     30     40
Ta071      0.72   0.47   0.54   0.50
Ta072      0.80   0.81   0.94   1.17
Ta073      0.65   0.58   0.59   0.56
Ta074      0.62   0.68   0.58   0.89
Ta075      0.70   0.46   0.77   0.76
Ta076      0.67   1.11   1.13   0.78
Ta077      0.86   0.65   0.63   0.66
Ta078      0.41   0.41   0.88   0.80
Ta079      0.56   0.67   0.56   0.78
Ta080      1.02   0.83   1.05   0.91
Average    0.70   0.67   0.77   0.78
Table 4. The ARPD for different values of GSNum_max. The bold numbers are the relatively good ARPD results.

Data No.   3      4      5      6
Ta071      0.42   0.55   0.62   0.60
Ta072      1.22   0.91   0.94   0.90
Ta073      0.76   0.73   0.59   0.65
Ta074      1.10   0.65   0.23   0.83
Ta075      0.86   0.22   0.77   0.62
Ta076      0.92   0.62   1.05   0.86
Ta077      0.87   0.69   0.63   0.91
Ta078      0.82   0.29   0.40   0.55
Ta079      0.93   0.54   0.55   0.74
Ta080      1.05   0.54   1.05   0.90
Average    0.89   0.57   0.68   0.76
Table 5. The ARPD for different values of MP. The bold numbers are the relatively good ARPD results.

Data No.   0.1    0.2    0.3    0.4
Ta071      0.77   0.46   0.63   0.85
Ta072      0.94   0.90   0.55   1.02
Ta073      0.63   0.62   0.58   0.81
Ta074      0.94   0.46   0.57   0.56
Ta075      0.51   0.91   0.72   0.72
Ta076      1.19   0.82   0.75   0.79
Ta077      1.16   1.08   1.08   0.93
Ta078      0.93   0.43   0.48   0.21
Ta079      0.82   0.47   0.35   0.38
Ta080      0.87   0.39   0.15   1.00
Average    0.88   0.65   0.59   0.73
Table 6. The ARPD for different values of UpdateNum_max. The bold numbers are the relatively good ARPD results.

Data No.   4      6      8      10
Ta071      0.67   0.54   0.83   0.66
Ta072      0.78   0.80   0.78   0.91
Ta073      0.33   0.59   0.37   0.39
Ta074      0.94   0.58   0.74   0.99
Ta075      0.79   0.77   0.64   0.68
Ta076      0.92   1.13   0.78   0.69
Ta077      1.08   0.63   1.18   0.97
Ta078      0.78   0.40   0.82   0.37
Ta079      0.40   0.56   0.81   0.79
Ta080      0.62   1.05   0.86   0.91
Average    0.73   0.71   0.78   0.74
Table 7. The ARPD for different values of FSNum_max. The bold numbers are the relatively good ARPD results.

Data No.   10     15     20     25
Ta071      0.71   0.28   0.54   0.78
Ta072      0.67   0.84   0.80   0.95
Ta073      0.39   0.41   0.59   0.32
Ta074      0.83   0.56   0.58   0.64
Ta075      0.52   0.22   0.77   0.76
Ta076      1.21   0.62   1.05   0.88
Ta077      0.91   0.49   0.63   0.86
Ta078      0.86   0.88   0.40   1.08
Ta079      0.59   0.50   0.55   0.62
Ta080      0.69   0.49   1.05   0.96
Average    0.74   0.53   0.70   0.78
Table 8. The ARPD for SDST50 and SDST125 with and without NEHBPS. The bold numbers are the relatively good ARPD results.

            HLS_noNEHBPS         HLS_NEHBPS
Data No.    SDST50    SDST125    SDST50    SDST125
Ta071       0.08      0.43       0.59      0.55
Ta072       0.80      0.86       0.38      0.60
Ta073       0.29      1.05       0.14      1.41
Ta074       0.60      0.99       0.52      0.70
Ta075       0.33      0.90       0.15      0.41
Ta076       0.42      1.09       0.23      0.78
Ta077       0.90      1.13       0.21      0.79
Ta078       0.77      0.78       0.47      0.42
Ta079       0.28      0.63       −0.05     0.58
Ta080       0.84      0.79       0.72      −0.38
Average     0.53      0.87       0.34      0.59
Table 9. The ARPD of different perturbation operators in SDST50 and SDST125. The bold numbers are the relatively good ARPD results.

            HLS_heavy            HLS_light
Data No.    SDST50    SDST125    SDST50    SDST125
Ta071       0.63      0.59       0.59      0.75
Ta072       0.55      1.61       0.64      0.60
Ta073       0.84      1.85       0.48      1.41
Ta074       0.57      0.92       0.74      0.73
Ta075       0.72      1.78       0.64      0.69
Ta076       0.75      1.30       0.69      1.00
Ta077       1.11      1.14       0.28      1.07
Ta078       0.43      0.92       0.58      0.42
Ta079       0.35      0.46       0.25      0.79
Ta080       0.15      0.95       0.75      0.32
Average     0.61      1.15       0.56      0.78
Table 10. The ARPD and SD of different local search operators for instances in SDST50 and SDST125. The bold numbers are the relatively good ARPD results.

            HLS_exLS             HLS_swLS             HLS_inLS
Data No.    SDST50    SDST125    SDST50    SDST125    SDST50    SDST125
Ta071       4.89      6.64       1.02      1.93       0.63      0.59
Ta072       5.69      7.67       2.29      2.58       0.55      1.61
Ta073       5.11      7.60       1.39      3.21       0.58      1.85
Ta074       4.82      6.78       2.05      2.03       0.57      0.92
Ta075       4.94      6.22       1.59      2.67       0.72      1.78
Ta076       5.60      7.69       1.95      3.16       0.75      1.30
Ta077       5.85      7.24       2.06      1.66       1.11      1.14
Ta078       4.83      5.74       1.55      2.68       0.43      1.05
Ta079       5.69      7.19       1.44      2.48       0.31      0.46
Ta080       5.50      6.42       1.40      2.65       0.15      0.95
SD          0.41      0.67       0.40      0.50       0.26      0.47
Average     5.29      6.92       1.67      2.50       0.58      1.16
Table 11. ARPD and SD for each algorithm in SDST10 and SDST50 with f = 30; HLS(10) and HLS(30) denote the results over 10 and 30 runs, respectively. The bold numbers are the relatively good ARPD results for each type of algorithm.

Set      Scale     Instances      GA    MA    MA_LS  PACO  IG_RS  IG_RS_LS  HLS(10)  HLS(30)
SDST10   20 × 5    Ta001–Ta010    0.43  0.49  0.12   0.18  0.21   0.08      0.04     0.04
         20 × 10   Ta011–Ta020    0.59  0.55  0.13   0.33  0.28   0.08      0.07     0.06
         20 × 20   Ta021–Ta030    0.44  0.59  0.14   0.20  0.30   0.07      0.03     0.02
         50 × 5    Ta031–Ta040    1.04  0.77  0.43   0.53  1.00   0.37      0.13     0.12
         50 × 10   Ta041–Ta050    2.10  1.21  1.12   1.23  1.58   0.76      0.60     0.57
         50 × 20   Ta051–Ta060    2.23  1.38  1.16   1.27  1.85   0.91      0.57     0.49
         100 × 5   Ta061–Ta070    1.28  0.76  0.54   0.87  1.44   0.43      0.11     0.12
         100 × 10  Ta071–Ta080    1.48  0.91  0.78   0.99  1.49   0.61      0.30     0.25
         100 × 20  Ta081–Ta090    2.07  1.49  1.27   1.49  1.75   0.88      0.47     0.46
         200 × 10  Ta091–Ta100    1.63  0.81  0.79   1.04  1.50   0.58      0.33     0.30
         200 × 20  Ta101–Ta110    2.00  1.14  1.11   1.31  1.45   0.79      0.61     0.59
         500 × 20  Ta111–Ta120    1.38  0.74  0.69   0.82  1.01   0.46      0.72     0.69
         SD                       0.65  0.33  0.42   0.45  0.59   0.31      0.25     0.24
         ARPD                     1.39  0.90  0.69   0.86  1.16   0.50      0.33     0.31
SDST50   20 × 5    Ta001–Ta010    1.34  0.44  0.37   0.58  0.83   0.26      0.12     0.09
         20 × 10   Ta011–Ta020    1.21  0.92  0.41   0.49  0.66   0.28      0.05     0.03
         20 × 20   Ta021–Ta030    0.57  0.87  0.20   0.35  0.60   0.10      0.06     0.05
         50 × 5    Ta031–Ta040    3.85  2.27  1.79   2.52  2.99   1.41      0.60     0.48
         50 × 10   Ta041–Ta050    3.24  1.81  1.49   2.15  2.44   1.33      0.44     0.40
         50 × 20   Ta051–Ta060    2.57  1.93  1.33   1.65  2.34   1.16      0.41     0.32
         100 × 5   Ta061–Ta070    4.64  2.64  2.23   4.37  2.93   1.51      0.66     0.63
         100 × 10  Ta071–Ta080    3.61  2.20  1.84   3.44  2.69   1.37      0.38     0.22
         100 × 20  Ta081–Ta090    2.96  2.00  1.73   2.87  2.38   1.29      0.30     0.12
         200 × 10  Ta091–Ta100    3.95  1.98  1.88   3.62  2.59   1.33      0.71     0.54
         200 × 20  Ta101–Ta110    3.04  1.62  1.61   2.89  2.07   1.10      0.54     0.65
         500 × 20  Ta111–Ta120    2.14  1.29  1.23   2.00  1.79   0.86      1.19     1.28
         SD                       1.24  0.66  0.67   1.30  0.87   0.50      0.32     0.36
         ARPD                     2.76  1.66  1.34   2.24  2.03   1.00      0.45     0.40
Table 12. ARPD and SD for each algorithm in SDST100 and SDST125 with f = 30; HLS(10) and HLS(30) denote the results over 10 and 30 runs, respectively. The bold numbers are the relatively good ARPD results for each type of algorithm.

Set      Scale     Instances      GA    MA    MA_LS  PACO  IG_RS  IG_RS_LS  HLS(10)  HLS(30)
SDST100  20 × 5    Ta001–Ta010    2.01  1.29  0.43   0.82  1.53   0.30      0.02     0.02
         20 × 10   Ta011–Ta020    1.48  0.87  0.31   0.66  1.42   0.35      0.07     0.07
         20 × 20   Ta021–Ta030    1.08  0.62  0.29   0.52  0.95   0.27      0.08     0.05
         50 × 5    Ta031–Ta040    5.00  3.12  2.37   3.79  3.83   1.95      0.85     0.73
         50 × 10   Ta041–Ta050    4.19  2.59  1.98   3.05  3.10   1.57      0.50     0.44
         50 × 20   Ta051–Ta060    3.39  1.78  1.66   2.51  2.76   1.41      0.42     0.40
         100 × 5   Ta061–Ta070    6.49  3.63  3.20   6.86  3.93   2.16      1.00     0.88
         100 × 10  Ta071–Ta080    4.58  3.03  2.26   5.14  3.28   1.61      0.31     0.13
         100 × 20  Ta081–Ta090    3.73  2.37  2.12   4.04  2.76   1.41      0.27     0.17
         200 × 10  Ta091–Ta100    5.12  2.56  2.53   5.48  2.98   1.67      0.92     0.71
         200 × 20  Ta101–Ta110    3.59  1.99  1.93   3.70  2.27   1.26      0.91     0.57
         500 × 20  Ta111–Ta120    2.50  1.53  1.53   2.50  1.87   0.96      1.77     1.45
         SD                       1.61  0.93  0.93   2.00  0.96   0.64      0.51     0.43
         ARPD                     3.60  2.11  1.72   3.26  2.56   1.24      0.59     0.47
SDST125  20 × 5    Ta001–Ta010    2.06  1.69  0.67   0.88  1.96   0.46      0.02     0.02
         20 × 10   Ta011–Ta020    1.74  1.02  0.51   0.85  1.62   0.53      0.07     0.04
         20 × 20   Ta021–Ta030    1.06  1.37  0.28   0.47  0.94   0.26      0.07     0.04
         50 × 5    Ta031–Ta040    6.09  3.71  2.97   4.59  4.57   2.37      1.00     0.80
         50 × 10   Ta041–Ta050    4.64  3.14  2.07   3.60  3.95   1.94      0.36     0.22
         50 × 20   Ta051–Ta060    3.32  2.16  1.59   2.55  2.77   1.42      0.44     0.21
         100 × 5   Ta061–Ta070    7.33  4.38  3.55   8.19  4.70   2.41      1.06     1.00
         100 × 10  Ta071–Ta080    5.33  3.24  2.78   6.02  3.66   2.07      0.53     0.34
         100 × 20  Ta081–Ta090    3.99  2.56  2.31   4.37  2.91   1.52      0.06     −0.07
         200 × 10  Ta091–Ta100    5.53  2.81  2.73   5.80  3.33   1.79      0.99     0.98
         200 × 20  Ta101–Ta110    3.86  2.08  2.04   3.93  2.51   1.38      0.89     0.77
         500 × 20  Ta111–Ta120    2.71  1.71  1.70   2.77  2.13   1.08      1.95     1.92
         SD                       1.90  1.00  1.03   2.33  1.17   0.73      0.58     0.59
         ARPD                     3.97  2.49  1.93   3.67  2.92   1.44      0.62     0.52
Table 13. ARPD and SD for each algorithm in SDST10 and SDST50 with f = 60; HLS(10) and HLS(30) denote the results over 10 and 30 runs, respectively. The bold numbers are the relatively good ARPD results for each type of algorithm.

Set      Scale     Instances      GA    MA    MA_LS  PACO  IG_RS  IG_RS_LS  HLS(10)  HLS(30)
SDST10   20 × 5    Ta001–Ta010    0.46  0.90  0.10   0.21  0.19   0.05      0.02     0.02
         20 × 10   Ta011–Ta020    0.57  0.28  0.13   0.26  0.22   0.05      0.05     0.05
         20 × 20   Ta021–Ta030    0.37  0.52  0.09   0.16  0.22   0.05      0.03     0.02
         50 × 5    Ta031–Ta040    0.93  0.57  0.31   0.44  0.88   0.32      0.10     0.07
         50 × 10   Ta041–Ta050    2.07  1.38  0.83   1.02  1.58   0.60      0.53     0.49
         50 × 20   Ta051–Ta060    2.18  1.21  0.96   1.06  1.70   0.64      0.51     0.42
         100 × 5   Ta061–Ta070    1.10  0.70  0.40   0.80  1.36   0.38      0.02     0.01
         100 × 10  Ta071–Ta080    1.39  0.81  0.60   0.84  1.37   0.44      0.20     0.13
         100 × 20  Ta081–Ta090    1.93  1.11  0.97   1.25  1.48   0.71      0.26     0.24
         200 × 10  Ta091–Ta100    1.42  0.73  0.61   0.94  1.39   0.43      0.18     0.17
         200 × 20  Ta101–Ta110    1.79  0.93  0.87   1.10  1.25   0.53      0.45     0.39
         500 × 20  Ta111–Ta120    1.31  0.54  0.54   0.69  0.88   0.31      0.54     0.52
         SD                       0.63  0.32  0.33   0.38  0.56   0.23      0.21     0.19
         ARPD                     1.29  0.81  0.53   0.73  1.04   0.38      0.24     0.21
SDST50   20 × 5    Ta001–Ta010    1.30  1.21  0.35   0.53  0.69   0.18      0.12     0.05
         20 × 10   Ta011–Ta020    1.16  0.87  0.31   0.43  0.62   0.20      0.05     0.03
         20 × 20   Ta021–Ta030    0.57  0.23  0.16   0.32  0.41   0.09      0.04     0.04
         50 × 5    Ta031–Ta040    3.57  1.65  1.39   2.05  2.61   1.13      0.42     0.30
         50 × 10   Ta041–Ta050    3.15  1.96  1.24   1.81  2.23   1.17      0.31     0.19
         50 × 20   Ta051–Ta060    2.49  1.61  1.07   1.42  2.06   0.93      0.23     0.12
         100 × 5   Ta061–Ta070    4.06  2.35  1.72   4.11  2.67   1.27      0.29     0.24
         100 × 10  Ta071–Ta080    3.24  1.82  1.53   3.19  2.23   1.04      0.03     −0.10
         100 × 20  Ta081–Ta090    2.71  1.66  1.35   2.66  2.01   0.96      −0.07    −0.22
         200 × 10  Ta091–Ta100    3.64  1.71  1.43   3.48  2.19   0.88      0.10     0.00
         200 × 20  Ta101–Ta110    2.82  1.34  1.17   2.78  1.77   0.74      0.12     0.15
         500 × 20  Ta111–Ta120    2.09  0.99  0.96   2.00  1.47   0.50      0.78     0.89
         SD                       1.09  0.56  0.51   1.24  0.78   0.41      0.23     0.28
         ARPD                     2.57  1.45  1.06   2.06  1.75   0.76      0.20     0.14
Table 14. ARPD and SD for each algorithm in SDST100 and SDST125 with f = 60; HLS(10) and HLS(30) denote the results over 10 and 30 runs, respectively. The bold numbers are the relatively good ARPD results for each type of algorithm.

Set      Scale     Instances      GA    MA    MA_LS  PACO  IG_RS  IG_RS_LS  HLS(10)  HLS(30)
SDST100  20 × 5    Ta001–Ta010    1.88  1.73  0.37   0.71  1.48   0.25      0.02     0.02
         20 × 10   Ta011–Ta020    1.26  0.88  0.28   0.47  1.01   0.25      0.07     0.07
         20 × 20   Ta021–Ta030    1.00  0.28  0.26   0.41  0.92   0.18      0.02     0.02
         50 × 5    Ta031–Ta040    5.35  2.65  2.24   3.40  3.89   1.95      0.44     0.31
         50 × 10   Ta041–Ta050    4.21  2.72  1.66   2.73  3.09   1.48      0.29     0.10
         50 × 20   Ta051–Ta060    3.23  2.11  1.35   2.14  2.58   1.28      0.18     0.10
         100 × 5   Ta061–Ta070    5.99  3.50  2.69   6.89  3.82   1.95      0.55     0.23
         100 × 10  Ta071–Ta080    4.39  2.67  2.01   4.96  2.95   1.44      −0.36    −0.48
         100 × 20  Ta081–Ta090    3.67  2.31  2.03   4.04  2.65   1.35      −0.22    −0.36
         200 × 10  Ta091–Ta100    4.95  2.31  2.19   5.62  2.86   1.25      0.00     −0.13
         200 × 20  Ta101–Ta110    3.65  1.81  1.68   3.87  2.08   0.93      0.25     −0.06
         500 × 20  Ta111–Ta120    2.66  1.44  1.35   2.75  1.70   0.73      1.34     0.94
         SD                       1.59  0.88  0.82   2.06  0.99   0.62      0.44     0.36
         ARPD                     3.52  2.03  1.51   3.17  2.42   1.09      0.22     0.06
SDST125  20 × 5    Ta001–Ta010    1.80  2.05  0.34   0.64  1.40   0.35      0.02     0.02
         20 × 10   Ta011–Ta020    1.66  1.48  0.42   0.68  1.39   0.41      0.07     0.04
         20 × 20   Ta021–Ta030    0.97  0.96  0.22   0.39  0.84   0.22      0.05     0.03
         50 × 5    Ta031–Ta040    5.83  3.97  2.47   4.07  4.25   2.18      0.57     0.47
         50 × 10   Ta041–Ta050    4.73  2.13  1.78   3.16  3.60   1.67      0.12     0.02
         50 × 20   Ta051–Ta060    3.41  2.50  1.43   2.43  2.71   1.45      0.16     −0.03
         100 × 5   Ta061–Ta070    6.86  4.45  3.02   7.89  4.58   2.27      0.32     0.18
         100 × 10  Ta071–Ta080    5.14  3.10  2.37   5.89  3.43   1.65      −0.16    −0.31
         100 × 20  Ta081–Ta090    3.79  2.40  1.80   4.32  2.69   1.22      −0.42    −0.55
         200 × 10  Ta091–Ta100    5.65  2.76  2.51   6.27  3.17   1.60      0.12     0.08
         200 × 20  Ta101–Ta110    3.88  1.94  1.74   4.20  2.40   1.06      0.21     0.12
         500 × 20  Ta111–Ta120    2.89  1.66  1.53   3.03  1.91   0.83      1.44     1.39
         SD                       1.84  1.01  0.91   2.36  1.17   0.69      0.46     0.47
         ARPD                     3.88  2.45  1.64   3.58  2.70   1.24      0.21     0.12
Table 15. ARPD and SD for each algorithm in SDST10 and SDST50 with f = 90; HLS(10) and HLS(30) denote the results over 10 and 30 runs, respectively. The bold numbers are the relatively good ARPD results for each type of algorithm.

Set      Scale     Instances      GA    MA    MA_LS  PACO  IG_RS  IG_RS_LS  HLS(10)  HLS(30)
SDST10   20 × 5    Ta001–Ta010    0.41  0.70  0.08   0.18  0.14   0.04      0.02     0.02
         20 × 10   Ta011–Ta020    0.56  0.36  0.13   0.22  0.24   0.04      0.05     0.05
         20 × 20   Ta021–Ta030    0.39  0.56  0.10   0.12  0.19   0.04      0.03     0.02
         50 × 5    Ta031–Ta040    0.92  0.77  0.30   0.42  0.84   0.27      0.09     0.04
         50 × 10   Ta041–Ta050    2.01  1.26  0.81   1.06  1.43   0.53      0.50     0.44
         50 × 20   Ta051–Ta060    2.10  1.28  0.82   1.01  1.54   0.60      0.44     0.36
         100 × 5   Ta061–Ta070    1.03  0.63  0.31   0.76  1.34   0.33      −0.02    −0.04
         100 × 10  Ta071–Ta080    1.33  0.90  0.48   0.77  1.32   0.38      0.12     0.05
         100 × 20  Ta081–Ta090    1.83  1.06  0.82   1.12  1.47   0.54      0.16     0.16
         200 × 10  Ta091–Ta100    1.32  0.65  0.48   0.85  1.33   0.32      0.06     0.06
         200 × 20  Ta101–Ta110    1.71  0.87  0.76   0.95  1.12   0.38      0.35     0.29
         500 × 20  Ta111–Ta120    1.27  0.48  0.43   0.61  0.82   0.21      0.43     0.40
         SD                       0.60  0.29  0.29   0.36  0.53   0.20      0.19     0.17
         ARPD                     1.24  0.79  0.46   0.67  0.98   0.31      0.19     0.15
SDST50   20 × 5    Ta001–Ta010    1.15  1.50  0.30   0.51  0.58   0.10      0.04     0.02
         20 × 10   Ta011–Ta020    1.17  0.77  0.32   0.44  0.58   0.19      0.05     0.03
         20 × 20   Ta021–Ta030    0.49  0.78  0.16   0.25  0.37   0.07      0.04     0.03
         50 × 5    Ta031–Ta040    3.43  2.18  1.13   1.98  2.42   1.04      0.29     0.22
         50 × 10   Ta041–Ta050    3.01  1.68  1.08   1.62  2.12   0.92      0.14     0.07
         50 × 20   Ta051–Ta060    2.43  1.69  0.89   1.28  2.03   0.82      0.17     0.07
         100 × 5   Ta061–Ta070    3.98  2.34  1.38   3.95  2.33   1.09      0.03     0.00
         100 × 10  Ta071–Ta080    3.07  1.52  1.21   3.10  2.13   0.88      −0.10    −0.25
         100 × 20  Ta081–Ta090    2.51  1.54  1.03   2.45  1.82   0.81      −0.34    −0.42
         200 × 10  Ta091–Ta100    3.49  1.35  1.21   3.37  1.90   0.63      −0.39    −0.40
         200 × 20  Ta101–Ta110    2.67  1.19  1.02   2.64  1.51   0.53      −0.20    −0.12
         500 × 20  Ta111–Ta120    2.07  0.76  0.79   2.00  1.28   0.31      0.51     0.65
         SD                       1.06  0.51  0.40   1.20  0.73   0.37      0.25     0.29
         ARPD                     2.46  1.44  0.88   1.97  1.59   0.62      0.02     −0.01
Table 16. ARPD and SD for each algorithm in SDST100 and SDST125 with f = 90; HLS(10) and HLS(30) denote the results over 10 and 30 runs, respectively. The bold numbers are the relatively good ARPD results for each type of algorithm.

Set      Scale     Instances      GA    MA    MA_LS  PACO  IG_RS  IG_RS_LS  HLS(10)  HLS(30)
SDST100  20 × 5    Ta001–Ta010    1.82  1.43  0.39   0.61  1.24   0.17      0.02     0.02
         20 × 10   Ta011–Ta020    1.27  1.09  0.29   0.48  1.03   0.18      0.07     0.07
         20 × 20   Ta021–Ta030    0.94  1.14  0.17   0.48  0.74   0.17      0.02     0.02
         50 × 5    Ta031–Ta040    5.26  3.02  1.99   3.31  3.70   1.82      0.18     0.09
         50 × 10   Ta041–Ta050    4.18  2.55  1.50   2.49  2.99   1.30      −0.07    −0.14
         50 × 20   Ta051–Ta060    3.11  1.77  1.18   1.98  2.40   1.11      0.04     −0.06
         100 × 5   Ta061–Ta070    6.00  3.04  2.16   6.65  3.48   1.63      −0.10    −0.20
         100 × 10  Ta071–Ta080    4.15  2.45  1.61   4.89  2.77   1.02      −0.60    −0.68
         100 × 20  Ta081–Ta090    3.49  2.39  1.53   3.91  2.46   1.05      −0.56    −0.61
         200 × 10  Ta091–Ta100    4.71  2.19  1.77   5.53  2.49   0.92      −0.33    −0.50
         200 × 20  Ta101–Ta110    3.48  1.68  1.40   3.82  1.92   0.76      −0.12    −0.39
         500 × 20  Ta111–Ta120    2.64  1.16  1.14   2.75  1.50   0.46      1.01     0.61
         SD                       1.56  0.71  0.66   2.01  0.95   0.56      0.41     0.36
         ARPD                     3.42  1.99  1.26   3.07  2.23   0.88      −0.04    −0.15
SDST125  20 × 5    Ta001–Ta010    1.90  1.40  0.32   0.65  1.24   0.30      0.02     0.02
         20 × 10   Ta011–Ta020    1.52  1.24  0.37   0.56  1.44   0.36      0.05     0.03
         20 × 20   Ta021–Ta030    0.95  1.21  0.24   0.39  0.81   0.19      0.02     0.02
         50 × 5    Ta031–Ta040    5.63  3.48  1.97   3.67  4.00   2.01      −0.02    0.04
         50 × 10   Ta041–Ta050    4.59  3.35  1.50   2.96  3.47   1.54      −0.35    −0.34
         50 × 20   Ta051–Ta060    3.25  1.63  1.26   2.06  2.59   1.18      −0.15    −0.16
         100 × 5   Ta061–Ta070    6.82  3.65  2.52   7.75  4.14   1.91      −0.78    −0.28
         100 × 10  Ta071–Ta080    4.80  2.84  1.94   5.61  3.26   1.34      −1.16    −0.85
         100 × 20  Ta081–Ta090    3.50  2.16  1.50   4.15  2.60   1.00      −1.27    −1.05
         200 × 10  Ta091–Ta100    5.37  2.63  2.14   6.20  2.94   1.17      −1.42    −0.84
         200 × 20  Ta101–Ta110    3.69  1.69  1.49   4.16  2.24   0.76      −0.84    −0.55
         500 × 20  Ta111–Ta120    2.83  1.36  1.23   3.02  1.64   0.52      1.37     1.15
         SD                       1.78  0.93  0.74   2.33  1.09   0.61      0.78     0.58
         ARPD                     3.74  2.22  1.37   3.43  2.53   1.02      −0.38    −0.23
Table 17. NewBounds for instances with scale 50 × 5.

Dataset   Instance   NewBound   Best-Known Bound   Difference
SDST10    Ta031      2813       2814               −1
          Ta039      2671       2673               −2
SDST100   Ta034      4019       4020               −1
          Ta037      3995       3999               −4
          Ta038      3939       3966               −27
SDST125   Ta031      4212       4226               −14
          Ta034      4348       4356               −8
          Ta035      4340       4342               −2
          Ta039      4117       4145               −28
Table 18. NewBounds for instances with scale 50 × 10.

Dataset   Instance   NewBound   Best-Known Bound   Difference
SDST50    Ta045      3932       3939               −7
          Ta048      3942       3950               −8
          Ta050      3981       3983               −2
SDST100   Ta041      4796       4812               −16
          Ta044      4813       4830               −17
          Ta045      4787       4812               −25
          Ta046      4809       4816               −7
          Ta047      4889       4898               −9
          Ta048      4839       4849               −10
SDST125   Ta041      5215       5275               −60
          Ta042      5145       5177               −32
          Ta043      5164       5193               −29
          Ta044      5269       5286               −17
          Ta047      5309       5340               −31
          Ta048      5315       5317               −2
          Ta049      5173       5194               −21
          Ta050      5325       5334               −9
Table 19. NewBounds for instances with scale 100 × 5.

Dataset   Instance   NewBound   Best-Known Bound   Difference
SDST10    Ta061      5645       5647               −2
          Ta062      5463       5465               −2
          Ta063      5405       5406               −1
          Ta064      5208       5213               −5
          Ta065      5463       5466               −3
          Ta069      5636       5641               −5
          Ta070      5536       5537               −1
SDST50    Ta061      6525       6542               −17
          Ta064      6175       6182               −7
          Ta066      6248       6270               −22
          Ta067      6385       6390               −5
          Ta069      6556       6576               −20
SDST100   Ta061      7697       7714               −17
          Ta062      7591       7610               −19
          Ta063      7497       7539               −42
          Ta070      7664       7735               −71
SDST125   Ta061      8246       8339               −93
          Ta062      8121       8230               −109
          Ta063      8085       8168               −83
          Ta064      7995       8005               −10
          Ta065      8197       8231               −34
          Ta066      8009       8082               −73
          Ta067      8188       8267               −79
          Ta068      7959       7993               −34
          Ta069      8324       8393               −69
          Ta070      8232       8290               −58
Table 20. NewBounds for instances with scale 100 × 10.

Dataset   Instance   NewBound   Best-Known Bound   Difference
SDST10    Ta072      5675       5683               −8
          Ta076      5606       5607               −1
SDST50    Ta071      7442       7450               −8
          Ta072      7011       7033               −22
          Ta073      7254       7262               −8
          Ta074      7518       7549               −31
          Ta075      7232       7240               −8
          Ta076      6955       6964               −9
          Ta078      7273       7290               −17
          Ta079      7443       7452               −9
SDST100   Ta071      9138       9201               −63
          Ta072      8758       8794               −36
          Ta073      8937       9004               −67
          Ta074      9186       9276               −90
          Ta075      8954       9002               −48
          Ta076      8671       8689               −18
          Ta077      8811       8858               −47
          Ta078      8948       9028               −80
          Ta079      9082       9133               −51
          Ta080      9075       9114               −39
SDST125   Ta071      9930       10070              −140
          Ta072      9511       9631               −120
          Ta073      9772       9808               −36
          Ta074      10040      10168              −128
          Ta075      9770       9852               −82
          Ta076      9426       9529               −103
          Ta077      9580       9696               −116
          Ta078      9737       9891               −154
          Ta079      9885       10004              −119
          Ta080      9864       10013              −149
Table 21. NewBounds for instances with scale 100 × 20.

Dataset   Instance   NewBound   Best-Known Bound   Difference
SDST50    Ta081      8415       8437               −22
          Ta082      8364       8387               −23
          Ta084      8303       8389               −86
          Ta085      8431       8471               −40
          Ta086      8520       8548               −28
          Ta087      8457       8482               −25
          Ta088      8622       8662               −40
          Ta089      8464       8473               −9
          Ta090      8505       8519               −14
SDST100   Ta081      10513      10578              −65
          Ta082      10492      10535              −43
          Ta083      10528      10552              −24
          Ta084      10451      10479              −28
          Ta085      10499      10539              −40
          Ta086      10600      10679              −79
          Ta087      10580      10645              −65
          Ta088      10694      10794              −100
          Ta089      10508      10612              −104
          Ta090      10605      10651              −46
SDST125   Ta081      11572      11694              −122
          Ta082      11539      11679              −140
          Ta083      11566      11701              −135
          Ta084      11450      11634              −184
          Ta085      11489      11675              −186
          Ta086      11631      11740              −109
          Ta087      11630      11784              −154
          Ta088      11713      11883              −170
          Ta089      11581      11731              −150
          Ta090      11617      11753              −136
Table 22. NewBounds for instances with scale 200 × 10.

Dataset   Instance   NewBound   Best-Known Bound   Difference
SDST10    Ta095      11195      11207              −12
          Ta100      11276      11284              −8
SDST50    Ta091      13908      14005              −97
          Ta092      13769      13902              −133
          Ta093      13972      14087              −115
          Ta094      13795      13873              −78
          Ta096      13625      13653              −28
          Ta097      14083      14115              −32
          Ta098      13988      14018              −30
          Ta099      13811      13857              −46
          Ta100      13885      13894              −9
SDST100   Ta091      17199      17307              −108
          Ta092      17132      17210              −78
          Ta093      17354      17386              −32
          Ta094      17167      17206              −39
          Ta095      17185      17244              −59
          Ta096      16995      17022              −27
          Ta098      17298      17407              −109
          Ta099      17129      17194              −65
          Ta100      17195      17263              −68
SDST125   Ta091      18589      18930              −341
          Ta092      18584      18876              −292
          Ta093      18747      19059              −312
          Ta094      18610      18934              −324
          Ta095      18608      18906              −298
          Ta096      18451      18659              −208
          Ta097      18857      19118              −261
          Ta098      18819      19058              −239
          Ta099      18576      18819              −243
          Ta100      18615      18793              −178
Table 23. NewBounds for instances with scale 200 × 20.

Dataset   Instance   NewBound   Best-Known Bound   Difference
SDST50    Ta102      15627      15644              −17
          Ta103      15585      15689              −104
          Ta104      15619      15627              −8
          Ta105      15435      15470              −35
          Ta106      15480      15514              −34
          Ta107      15640      15669              −29
          Ta108      15627      15645              −18
          Ta109      15538      15544              −6
          Ta110      15632      15694              −62
SDST100   Ta101      19599      19618              −19
          Ta102      19787      19816              −29
          Ta103      19830      19881              −51
          Ta104      19714      19810              −96
          Ta107      19814      19888              −74
          Ta108      19797      19826              −29
          Ta109      19666      19757              −91
SDST125   Ta101      21513      21765              −252
          Ta102      21714      21973              −259
          Ta103      21844      21975              −131
          Ta104      21810      21984              −174
          Ta105      21568      21773              −205
          Ta106      21625      21829              −204
          Ta107      21789      22055              −266
          Ta108      21814      21902              −88
          Ta109      21684      21821              −137
          Ta110      21854      21975              −121
Table 24. Permutations of jobs for some instances (scale 50 × 5).

SDST10:
Ta031 (makespan 2813): 30 16 40 0 39 49 25 5 17 29 31 12 35 9 28 7 45 19 33 44 38 10 20 24 4 37 23 36 13 3 6 14 21 8 1 42 26 43 27 46 15 41 48 22 47 32 18 11 2 34
Ta039 (makespan 2671): 45 43 23 48 14 40 0 12 32 29 46 49 41 9 6 19 10 31 20 44 24 3 21 22 37 35 4 33 39 28 17 2 47 5 34 15 8 42 18 1 25 30 36 38 27 26 16 7 11 13

SDST100:
Ta034 (makespan 4019): 25 49 12 21 23 43 16 47 9 11 0 2 29 8 45 27 31 13 26 34 19 41 35 14 7 30 4 48 46 15 44 6 32 1 17 24 3 40 39 36 10 42 28 38 22 37 18 33 5 20
Ta037 (makespan 3995): 36 48 4 21 18 29 20 14 17 44 26 32 38 46 8 30 42 35 34 13 31 9 39 47 45 10 24 3 19 40 1 43 23 2 28 12 49 7 16 6 41 25 0 5 33 11 22 27 37 15
Ta038 (makespan 3939): 33 38 13 17 4 1 24 15 9 25 7 16 19 45 34 46 14 35 29 8 39 21 27 0 26 23 48 37 30 49 36 6 18 20 42 40 5 22 32 44 2 43 11 28 47 31 12 10 41 3
Table 25. ARPD for the compared algorithms in each dataset. The bold numbers are the relatively good ARPD results for each type of instances.

          SDST10               SDST50               SDST100              SDST125
DataSet   EMBO   AHA    HLS    EMBO   AHA    HLS    EMBO   AHA    HLS    EMBO   AHA    HLS
20 × 5    0.67   0.34   0.04   1.88   0.51   0.09   3.71   0.62   0.02   3.05   1.01   0.02
20 × 10   1.10   0.46   0.06   1.87   0.92   0.03   2.64   0.99   0.07   3.63   1.29   0.04
20 × 20   1.09   0.71   0.02   1.46   1.04   0.05   2.17   1.23   0.05   2.12   1.38   0.04
50 × 5    1.83   0.92   0.12   5.88   0.99   0.48   9.27   1.05   0.73   10.73  1.32   0.80
50 × 10   3.50   1.30   0.57   5.92   1.20   0.40   7.54   1.82   0.44   8.91   2.04   0.22
50 × 20   3.93   1.48   0.49   5.05   1.78   0.32   5.82   1.92   0.40   6.31   2.41   0.21
100 × 5   2.15   0.78   0.12   7.35   1.58   0.63   10.36  1.75   0.88   11.96  2.00   1.00
100 × 10  2.91   1.23   0.25   5.75   2.56   0.22   7.83   2.56   0.13   9.47   2.29   0.34
100 × 20  3.48   1.74   0.46   5.28   2.54   0.12   6.87   2.29   0.17   6.60   2.66   −0.07
200 × 10  2.24   0.94   0.30   5.22   1.32   0.54   7.26   1.55   0.71   8.08   2.11   0.98
200 × 20  3.03   1.35   0.59   3.74   1.75   0.65   5.18   1.81   0.57   5.55   1.98   0.77
500 × 20  1.63   1.17   0.69   2.48   1.43   1.28   3.37   1.50   1.45   3.66   1.73   1.92
ARPD      2.30   1.04   0.31   4.32   1.47   0.40   6.00   1.59   0.47   6.67   1.85   0.52
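The ARPD values reported throughout these tables follow the usual relative-percentage-deviation convention; this is a minimal sketch assuming the standard definition RPD = 100 × (C_alg − C_best) / C_best averaged over instances, which also explains why negative entries appear wherever an algorithm improves on a best-known bound.

```python
# Hedged sketch of the ARPD metric (assumed standard definition, not a
# verbatim formula from the paper): average of per-instance relative
# percentage deviations from the best-known makespans.
def arpd(makespans, best_known):
    rpds = [100.0 * (c - b) / b for c, b in zip(makespans, best_known)]
    return sum(rpds) / len(rpds)

# 10% worse than best-known on a single instance -> ARPD of 10.0
print(arpd([110.0], [100.0]))

# Beating the best-known bounds yields a negative ARPD,
# e.g. the Ta031/Ta039 NewBounds from Table 17 against the old bounds:
print(arpd([2813, 2671], [2814, 2673]))
```

A negative ARPD for HLS in the tables therefore indicates that, on average, the obtained makespans are below the best-known bounds used as the reference.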
Wang, Y.; Li, X.; Ma, Z. A Hybrid Local Search Algorithm for the Sequence Dependent Setup Times Flowshop Scheduling Problem with Makespan Criterion. Sustainability 2017, 9, 2318. https://doi.org/10.3390/su9122318