Article

A Variable Block Insertion Heuristic for Solving Permutation Flow Shop Scheduling Problem with Makespan Criterion

1
Department of Industrial Engineering, Yasar University, Izmir 35100, Turkey
2
Department of Industrial and System Engineering, Istinye University, Istanbul 34010, Turkey
3
Department of Industrial and Manufacturing System Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
4
School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
*
Author to whom correspondence should be addressed.
Algorithms 2019, 12(5), 100; https://doi.org/10.3390/a12050100
Submission received: 8 April 2019 / Revised: 3 May 2019 / Accepted: 6 May 2019 / Published: 9 May 2019
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications (volume 2))

Abstract:
In this paper, we propose a variable block insertion heuristic (VBIH) algorithm to solve the permutation flow shop scheduling problem (PFSP). The VBIH algorithm removes a block of jobs from the current solution and applies an insertion local search to the resulting partial solution. It then inserts the block into all possible positions of the partial solution sequentially and chooses the best of the solutions generated by these block insertion moves. Finally, an insertion local search is applied to the complete solution. If the new solution is better than the current solution, it replaces the current solution. As long as it improves, the block size is retained. Otherwise, the block size is incremented by one, and a simulated annealing-based acceptance criterion is employed to accept the new solution in order to escape from local minima. This process is repeated until the block size reaches its maximum. To verify the computational results, mixed integer programming (MIP) and constraint programming (CP) models are developed and solved on the recently proposed small VRF benchmark suite. Optimal solutions are found for 108 out of 240 instances. Extensive computational results on the large VRF benchmark suite show that the proposed algorithm outperforms two variants of the iterated greedy algorithm; 236 out of the 240 large VRF instances are further improved for the first time in this paper. Finally, we run the algorithms on Taillard's benchmark suite and compare them; three instances of Taillard's benchmark suite are also further improved for the first time since 1993.

1. Introduction

Sustainability in manufacturing industries is mainly measured by competitiveness in the marketplace. Competitiveness means delivering products on time with the best quality, the shortest manufacturing time, and the best price to customers. Minimum manufacturing time can be obtained through production sequences that minimize makespan or total flowtime. Note that a manufacturing company can fail to satisfy its production plans for lack of optimal or near-optimal production sequences on the shop floor, even when the other production entities, such as operators, maintenance, inventory, and quality control, are under control. For this reason, seeking optimal or near-optimal production sequences and schedules is vital to manufacturing companies: minimizing the makespan also minimizes idle times on the machines and maximizes machine utilization.
The permutation flow shop scheduling problem (PFSP) has been widely studied in the literature and extensively applied in industry, and there are many real-life fields where the PFSP arises [1]. It remains an exceptionally active topic of investigation, especially because flow shop environments are at the center of real-life scheduling problems in fields of high social or economic impact. In addition, the flow shop layout is a common configuration in many manufacturing companies. The basic PFSP consists of a set of n jobs processed by m machines. The jobs follow the same route, and their operations on the machines cannot be interrupted. All jobs must be processed in the same order on the machines, and the aim is to find the best permutation π = (π_1, π_2, …, π_n) of these jobs with respect to the given objective.
In this study, our aim is to maximize the throughput of the system by maximizing the utilization rate of the machines, i.e., by minimizing the makespan. To compute the makespan, let π denote a given solution, where π_i is the job at the i-th position of π, and let C_{i,k} denote the completion time of job π_i on machine k. With this notation, the completion times of the jobs on each machine are computed as in the following Equations (1)–(5), where p_{π_i,k} is the processing time of job π_i on machine k. The makespan of solution π, denoted C_max(π), is the completion time of the last job (i.e., the job at position n) on the last machine (i.e., machine m). It is simply denoted C_{n,m} and calculated as follows:
C_{1,1} = p_{π_1,1}    (1)
C_{i,1} = C_{i−1,1} + p_{π_i,1},    i = 2, …, n    (2)
C_{1,k} = C_{1,k−1} + p_{π_1,k},    k = 2, …, m    (3)
C_{i,k} = max{C_{i−1,k}, C_{i,k−1}} + p_{π_i,k},    i = 2, …, n; k = 2, …, m    (4)
C_max(π) = C_{n,m}.    (5)
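As a concrete illustration, the recursion in Equations (1)–(5) can be sketched in Python; the function name and the tiny 3-job, 2-machine instance below are ours, not taken from the paper's benchmarks:

```python
def makespan(perm, p):
    """C_max of permutation `perm` for processing times p[job][machine],
    following the recursion of Equations (1)-(5)."""
    m = len(p[perm[0]])
    C = [0] * m  # C[k]: completion time on machine k of the last scheduled job
    for job in perm:
        C[0] += p[job][0]  # first machine: jobs are processed back to back
        for k in range(1, m):
            # a job starts on machine k once it leaves machine k-1 and machine k is free
            C[k] = max(C[k - 1], C[k]) + p[job][k]
    return C[-1]  # C_max = completion of the last job on the last machine

# Hypothetical instance: 3 jobs, 2 machines
p = [[2, 3], [1, 2], [3, 1]]
print(makespan([0, 1, 2], p))  # -> 8
```

For the sequence [0, 1, 2], the machine-1 completions are 2, 3, 6 and the machine-2 completions are 5, 7, 8, so C_max = 8.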
The PFSP with makespan criterion is denoted F_m|permu|C_max in the notation of [2] and has been proven NP-hard [3], so it is challenging to solve with exact methods. Therefore, metaheuristic algorithms have been employed to obtain near-optimal solutions. In recent years, various metaheuristics have been presented for variants of the PFSP with different objectives. One of the state-of-the-art algorithms for PFSPs is the iterated greedy (IG) algorithm presented by [4]. We focus on the recent literature whose solution approaches build on the IG algorithm.
The IG algorithm was applied to the PFSP with makespan criterion in [4,5,6,7,8,9]. In [5], a local search was applied to the partial solution after the destruction step of the IG algorithm to improve solution quality, while in [6] sequence-dependent setup times were considered for the PFSP with makespan criterion. In [7], the authors studied the PFSP with makespan and proposed a tie-breaking mechanism for the IG algorithm, while in [8] an IG and a discrete differential evolution algorithm were proposed and compared. In this study, we employ the new hard VRF instances first introduced in [9], where an IG algorithm was also applied. In addition, the same problem was studied in [10] to minimize the makespan over Taillard's benchmark suite.
The IG algorithm was also applied to other variants of the PFSP: no-wait flow shops in [11,12,13]; blocking flow shops in [14,15,16,17]; no-idle flow shops in [18,19,20]; energy-efficient PFSP in [21,22]; and multi-objective PFSP in [23,24], where both studies presented a restarted iterated Pareto greedy algorithm. In the no-wait variant, the distributed no-wait flow shop problem [11], a tabu-based reconstruction strategy [12], and sequence-dependent setup times [13] were addressed with the IG algorithm. In the blocking variant, IG algorithms were combined with local search algorithms [14] and constructive heuristics [15,16], and also embedded in a differential evolution framework [17]; in [25], profile fitting and NEH heuristic algorithms were proposed for the same problem. In the no-idle variant, an iterated reference greedy algorithm [18] and a variable IG algorithm [19] were presented, and the IG algorithm was employed for the mixed no-idle PFSP [20].
The IG algorithm was also applied to the PFSP with other objective functions, such as the total tardiness criterion [26,27] and the total flowtime criterion [28]. The authors of [1] carried out an exhaustive review and computational evaluation of heuristics and metaheuristics published until 2017 for the PFSP with makespan minimization; we refer the reader to [1] for further analysis of the PFSP literature.
In traditional search algorithms, swap and insertion neighborhood structures are generally employed. The swap neighborhood exchanges two jobs in a solution, whereas the insertion neighborhood removes a job from a solution and inserts it into another position. Recently, block move-based local search algorithms have been presented for single machine-scheduling problems [29,30,31,32]. Xu et al. [31] developed a block move neighborhood structure in which l consecutive jobs (called a block) are inserted into another position in the solution. They represent a block move by a triplet (i, k, l), where i denotes the position of the first job of the block, k the target position where the block is inserted, and l the size of the block. One-edge insertion, two-edge insertion, and 3-block insertion are then the block move neighborhoods with l = 1, l = 2, and l = 3, respectively. Similarly, Gonzales and Vela [32] developed a variable neighborhood descent algorithm with three distinct block move neighborhoods and employed it in a memetic algorithm. A memetic algorithm with a block insertion heuristic was then presented in [29]. Moreover, in [33], a variable block insertion heuristic (VBIH) algorithm was employed to solve the blocking PFSP with makespan criterion.
In IG algorithms, some solution components are removed from the current solution and reinserted into the partial solution. In other words, a number d_S of jobs are removed randomly, which is known as the destruction phase. Then, in the construction phase, these d_S jobs are reinserted into the partial solution in the order they were removed; for each of the d_S jobs, n − d_S + 1 insertion positions are evaluated. The VBIH algorithm, by contrast, removes a single block of jobs π_b of size b from the current solution and makes only n − b + 1 block insertions. That is the difference between the IG and VBIH algorithms.
The main contributions of the paper can be outlined as follows. VBIH is employed to solve the PFSP with makespan criterion on the new hard VRF benchmark sets [9]. Detailed computational results show that the VBIH algorithm outperforms two variants of the iterated greedy algorithm; 236 out of 240 instances of the large VRF benchmark suite are further improved for the first time in this paper, while the results for the remaining four instances match the current best-known results. In addition, two mathematical models are formulated and solved on the small benchmark set in order to verify the results of our proposed VBIH algorithm; 108 out of 240 small instances are proven optimal. This paper therefore proposes new lower bounds together with an efficient algorithm, which differentiates the study from the current literature. We also show that the speed-up method of Taillard is substantially effective when solving the PFSP with makespan criterion.
The remainder of the paper is organized as follows: Section 2 introduces the formulation of the PFSP, including the mixed integer programming (MIP) and constraint programming (CP) models, whereas Section 3 presents the heuristic algorithms. Section 4 explains the computational results of the MIP and CP models on the small VRF instances, showing the solution quality of the heuristic algorithms and the limitations of the models. Section 5 reports the experimental results of the heuristic algorithms and the improvements on the large VRF instances. Finally, Section 6 presents concluding remarks.

2. Mathematical Model Formulation

This paper proposes MIP and CP models to solve the small VRF instances of the PFSP with makespan criterion in order to verify the solution quality of the proposed heuristic algorithms. The input parameters used in the models are as follows:
Parameters:
n: total number of jobs, i = 1, …, n
m: total number of machines, k = 1, …, m
p_{i,k}: processing time of job i on machine k
M: a sufficiently large constant integer.

2.1. The MIP Model

The MIP decision variables, objective function, and constraints are given in the following equations. The MIP formulation of the PFSP proposed by Manne [34] is used.
Decision Variables:
C_max: makespan
C_{i,k}: completion time of job i on machine k
D_{i,j}: binary variable equal to 1 if job i is scheduled before job j, and 0 otherwise; i < j
MIP Model: Objective and Constraints:
Min C_max    (6)
s.t.:
C_max ≥ C_{i,m},    i = 1, …, n    (7)
C_{i,1} ≥ p_{i,1},    i = 1, …, n    (8)
C_{i,k} − C_{i,k−1} ≥ p_{i,k},    i = 1, …, n; k = 2, …, m    (9)
C_{i,k} − C_{j,k} + M·D_{i,j} ≥ p_{i,k},    1 ≤ i < j ≤ n; k = 1, …, m    (10)
C_{j,k} − C_{i,k} + M·(1 − D_{i,j}) ≥ p_{j,k},    1 ≤ i < j ≤ n; k = 1, …, m    (11)
D_{i,j} ∈ {0, 1}.    (12)
The objective function (6) minimizes the makespan, while Constraint (7) makes the makespan at least the completion time of every job on the last machine. In the PFSP, all jobs follow the same route through the machines, so their final operations are performed on the last machine. Constraint (8) computes the completion time of each job on machine 1, ensuring that it cannot occur earlier than the job's processing time on machine 1, the starting machine for all jobs. Constraint (9) ensures that a job cannot complete on a machine before its completion time on the previous machine plus its processing time. Constraints (10) and (11) specify the relationship between two jobs processed on the same machine. Constraint (11) states that if job i precedes job j in the permutation, then job i should be completed before job j on each machine; otherwise, job j precedes job i on each machine, which is enforced by Constraint (10).

2.2. The CP Model

The CP decision variables, objective function, and constraints are presented in the following equations using the OPL API of CP Optimizer. To express the processing times of the jobs on the machines, the model uses interval variables denoted JobInt. In addition, a sequence variable Machine_k, which collects the interval variables of machine k, is defined for each machine.
Decision Variables:
JobInt_{i,k}: interval variable for job i on machine k with duration p_{i,k}
Machine_k: sequence variable for machine k over {JobInt_{i,k} | 1 ≤ i ≤ n}.
CP Model: Objective and Constraints:
Min (max_{i∈J} endOf(JobInt_{i,m}))    (13)
endBeforeStart(JobInt_{i,k}, JobInt_{i,k+1}),    i = 1, …, n; k = 1, …, m−1    (14)
noOverlap(Machine_k),    k = 1, …, m    (15)
sameSequence(Machine_1, Machine_k),    k = 2, …, m.    (16)
The CP model minimizes the makespan by computing the maximum end date of the jobs on the last machine (13). Constraint (14) imposes the precedence constraints between the consecutive operations of each job along the sequence of machines. Machines are disjunctive resources that can process only one job at a time, which is expressed by the noOverlap Constraint (15) over the sequence variables associated with the machines. The relationship between the sequence variables and the interval variables is established when defining the decision variables. The last constraint, sameSequence (16), guarantees that all jobs are processed in the same order on each machine, so the permutation of the jobs is the same for every machine.
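For the small instances, the optima certified by the MIP and CP models can also be cross-checked by plain enumeration; the sketch below (function names and the tiny instance are ours, not from the VRF suite) enumerates all n! permutations and is feasible only for very small n:

```python
from itertools import permutations

def makespan(perm, p):
    """C_max of a permutation for processing times p[job][machine]."""
    m = len(p[perm[0]])
    C = [0] * m
    for job in perm:
        C[0] += p[job][0]
        for k in range(1, m):
            C[k] = max(C[k - 1], C[k]) + p[job][k]
    return C[-1]

def optimal_makespan(p):
    """Exact optimum by exhaustive enumeration of all n! permutations."""
    n = len(p)
    return min(makespan(perm, p) for perm in permutations(range(n)))

# Hypothetical instance: 3 jobs, 2 machines
p = [[2, 3], [1, 2], [3, 1]]
print(optimal_makespan(p))  # -> 7
```

For this instance, the minimum makespan of 7 is attained by the permutation (1, 0, 2); any heuristic claiming optimality on the instance must match this value.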

3. Meta-Heuristic Algorithms

3.1. Taillard’s Speed Up Method for PFSP with Makespan Criterion

The insertion neighborhood structure is very effective for makespan minimization. The size of the insertion neighborhood is (n − 1)². Since each objective function evaluation takes O(nm) time, the naive computational complexity is O(n³m). In [35], a speed-up method is proposed that reduces this complexity from O(n³m) to O(n²m) for the PFSP with makespan criterion by evaluating all insertion positions of a job in O(nm) time. Suppose that a job will be inserted at position l. The speed-up method can be described as follows:
  • Compute the head e_{i,k}, the earliest completion time of the job at position i on machine k. The starting time of the first job on the first machine is 0:
    e_{0,k} = e_{i,0} = 0,    i = 1, …, l−1; k = 1, …, m
    e_{i,k} = max{e_{i,k−1}, e_{i−1,k}} + p_{π_i,k},    i = 1, …, l−1; k = 1, …, m.
  • Compute the tail q_{i,k}, the duration between the starting time of the job at position i on machine k and the end of all operations:
    q_{i,m+1} = 0,    i = n−1, …, l
    q_{n,k} = 0,    k = m, …, 1
    q_{i,k} = max{q_{i,k+1}, q_{i+1,k}} + p_{π_i,k},    i = n−1, …, l; k = m, …, 1.
  • Compute the earliest relative completion time f_{i,k}, where position l holds the inserted job π_j. The completion time on the (fictitious) machine 0 is zero:
    f_{i,0} = 0,    i = 1, …, l
    f_{i,k} = max{f_{i,k−1}, e_{i−1,k}} + p_{π_i,k},    i = 1, …, l; k = 1, …, m.
  • The value of the makespan C_{max,l} when inserting job π_j at position l is:
    C_{max,l} = max_{k=1,…,m} (f_{l,k} + q_{l,k}).
To illustrate the speed-up procedure, we give a 7-job, 2-machine example. Note that Johnson's algorithm [36] solves this problem to optimality. In Table 1, we provide the problem instance with the processing times as well as the optimal solution.
According to Johnson's algorithm [36], the optimal solution is {1, 2, 7, 3, 5, 4, 6} with C_max = 36. Now, suppose that we remove job 7 and obtain the partial solution {1, 2, 3, 5, 4, 6}, and that we insert job 7 into position l = 3 of the partial solution to recover the optimal solution. We now follow the speed-up method:
  • Compute heads:
    e_{0,k} = e_{i,0} = 0
    e_{i,k} = max{e_{i,k−1}, e_{i−1,k}} + p_{π_i,k},    i = 1, …, l−1; k = 1, …, m
    e_{1,1} = max{e_{1,0}, e_{0,1}} + p_{1,1} = max{0, 0} + 1 = 1
    e_{1,2} = max{e_{1,1}, e_{0,2}} + p_{1,2} = max{1, 0} + 8 = 9
    e_{2,1} = max{e_{2,0}, e_{1,1}} + p_{2,1} = max{0, 1} + 2 = 3
    e_{2,2} = max{e_{2,1}, e_{1,2}} + p_{2,2} = max{3, 9} + 9 = 18.
  • Compute tails (in the partial solution {1, 2, 3, 5, 4, 6}, position 4 holds job 5 and position 5 holds job 4, so p_{5,k} appears at i = 4 and p_{4,k} at i = 5):
    q_{i,3} = 0,    q_{7,k} = 0
    q_{i,k} = max{q_{i,k+1}, q_{i+1,k}} + p_{π_i,k},    i = 6, …, 3; k = 2, 1
    q_{6,2} = max{q_{6,3}, q_{7,2}} + p_{6,2} = max{0, 0} + 1 = 1
    q_{6,1} = max{q_{6,2}, q_{7,1}} + p_{6,1} = max{1, 0} + 7 = 8
    q_{5,2} = max{q_{5,3}, q_{6,2}} + p_{4,2} = max{0, 1} + 3 = 4
    q_{5,1} = max{q_{5,2}, q_{6,1}} + p_{4,1} = max{4, 8} + 5 = 13
    q_{4,2} = max{q_{4,3}, q_{5,2}} + p_{5,2} = max{0, 4} + 4 = 8
    q_{4,1} = max{q_{4,2}, q_{5,1}} + p_{5,1} = max{8, 13} + 5 = 18
    q_{3,2} = max{q_{3,3}, q_{4,2}} + p_{3,2} = max{0, 8} + 5 = 13
    q_{3,1} = max{q_{3,2}, q_{4,1}} + p_{3,1} = max{13, 18} + 7 = 25.
Speed-up calculation of the partial solution is given in Figure 1.
  • Compute the earliest relative completion times f_{i,k}:
    f_{i,0} = 0,    i = 1, …, l
    f_{i,k} = max{f_{i,k−1}, e_{i−1,k}} + p_{π_i,k},    i = 1, …, l; k = 1, …, m
    f_{1,1} = max{f_{1,0}, e_{0,1}} + p_{1,1} = max{0, 0} + 1 = 1
    f_{1,2} = max{f_{1,1}, e_{0,2}} + p_{1,2} = max{1, 0} + 8 = 9
    f_{2,1} = max{f_{2,0}, e_{1,1}} + p_{2,1} = max{0, 1} + 2 = 3
    f_{2,2} = max{f_{2,1}, e_{1,2}} + p_{2,2} = max{3, 9} + 9 = 18
    f_{3,1} = max{f_{3,0}, e_{2,1}} + p_{7,1} = max{0, 3} + 4 = 7
    f_{3,2} = max{f_{3,1}, e_{2,2}} + p_{7,2} = max{7, 18} + 5 = 23.
Speed-up calculation of the complete solution is given in Figure 2.
  • The value of the makespan C_{max,l} when inserting job 7 at position l = 3 is:
    C_{max,3} = max_k (f_{3,k} + q_{3,k}) = max{(f_{3,1} + q_{3,1}), (f_{3,2} + q_{3,2})} = max{(7 + 25), (23 + 13)} = max{32, 36} = 36.
The above speed-up method reduces the complexity of exploring the whole insertion neighborhood from O(n³m) to O(n²m) and is key to the success of any algorithm for the PFSP with makespan criterion. For this reason, we have also chosen the Car8 instance from the literature to illustrate the speed-up method in more detail. From the literature, the best-known solution is {7, 3, 8, 5, 2, 1, 6, 4} with C_max = 8366. In Appendix A, we remove job 2 from this solution and re-insert it into the 5th position; a detailed implementation of Taillard's speed-up method is given there to ease understanding.
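The head/tail/f computations of this section can be sketched as follows. The processing times are read off the worked 7-job, 2-machine example above, positions are 1-indexed as in the text, the function name is ours, and C_max,l is returned for every insertion position l at once:

```python
def makespan(perm, p):
    """Direct O(nm) makespan computation, used here only as a cross-check."""
    m = len(p[perm[0]])
    C = [0] * m
    for job in perm:
        C[0] += p[job][0]
        for k in range(1, m):
            C[k] = max(C[k - 1], C[k]) + p[job][k]
    return C[-1]

def taillard_insertion(partial, job, p):
    """Makespans C_max,l of inserting `job` at every position l of `partial`,
    computed in O(nm) total time via heads e, tails q, and relative times f."""
    n, m = len(partial), len(p[job])
    # Heads: e[i][k] = earliest completion of the job at position i (1-indexed)
    e = [[0] * m for _ in range(n + 1)]
    for i in range(1, n + 1):
        for k in range(m):
            left = e[i][k - 1] if k > 0 else 0
            e[i][k] = max(left, e[i - 1][k]) + p[partial[i - 1]][k]
    # Tails: q[i][k], with boundary rows/columns kept at zero
    q = [[0] * (m + 1) for _ in range(n + 2)]
    for i in range(n, 0, -1):
        for k in range(m - 1, -1, -1):
            q[i][k] = max(q[i][k + 1], q[i + 1][k]) + p[partial[i - 1]][k]
    # Relative completion times f and makespan for each insertion position l
    result = []
    for l in range(1, n + 2):
        f = [0] * m
        for k in range(m):
            left = f[k - 1] if k > 0 else 0
            f[k] = max(left, e[l - 1][k]) + p[job][k]
        result.append(max(f[k] + q[l][k] for k in range(m)))
    return result

# Processing times (job: (machine 1, machine 2)) from the worked example
p = {1: (1, 8), 2: (2, 9), 3: (7, 5), 4: (5, 3), 5: (5, 4), 6: (7, 1), 7: (4, 5)}
cmax = taillard_insertion([1, 2, 3, 5, 4, 6], 7, p)
print(cmax[2])  # C_max,3 for inserting job 7 at position 3 -> 36
```

Each entry of the returned list agrees with a direct makespan evaluation of the corresponding complete sequence, while the whole list is obtained at the cost of a single evaluation.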

3.2. IG Algorithms

IG algorithms have four main components, namely, initial solution, destruction-construction (DC) procedure, local search, and acceptance criterion. The traditional IGRS was proposed by [4]. In this algorithm, the initial solution is constructed by the NEH heuristic [37]. In the destruction step, d_S jobs are randomly removed from the solution π without repetition and stored in π_D; the remaining jobs are stored in π_P, the partial solution. In the construction step, each job in π_D is inserted into the partial solution π_P, in the order in which it was removed, until a complete solution of n jobs is constructed. After the destruction and construction procedure, a local search is employed to further enhance solution quality. After the local search, the new solution is accepted if it is better than or equal to the incumbent solution. Otherwise, it is accepted with a simple simulated annealing-type acceptance criterion, as suggested by [38]:
T = (Σ_{j=1}^{n} Σ_{k=1}^{m} p_{j,k}) / (10 · n · m) × τ_P
where τ P is a parameter to be adjusted. The pseudo-code of the traditional IGRS is given in Algorithm 1, where r is a uniform random number between 0 and 1.
Algorithm 1: Traditional IGRS algorithm
π = NEH
π_best = π
while (NotTermination) do
    (π_D, π_P) = Destruction(π)
    π_1 = Construction(π_D, π_P)
    π_1 = LocalSearch(π_1)                    // Algorithm 4
    if (f(π_1) ≤ f(π)) then
        π = π_1
        if (f(π_1) < f(π_best)) then
            π_best = π_1
        endif
    elseif (r < exp{−(f(π_1) − f(π)) / T}) then
        π = π_1
    endif
endwhile
return π_best and f(π_best)
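The destruction and construction steps at the heart of this loop can be sketched in Python as below; the helper names (`insert_best`, `destruct_construct`) and the 4-job instance are ours, and the NEH start, the local search, and the acceptance criterion are omitted:

```python
import random

def makespan(perm, p):
    """C_max of a permutation for processing times p[job][machine]."""
    m = len(p[perm[0]])
    C = [0] * m
    for job in perm:
        C[0] += p[job][0]
        for k in range(1, m):
            C[k] = max(C[k - 1], C[k]) + p[job][k]
    return C[-1]

def insert_best(partial, job, p):
    """Insert `job` at the makespan-minimizing position of `partial`."""
    candidates = (partial[:pos] + [job] + partial[pos:]
                  for pos in range(len(partial) + 1))
    return min(candidates, key=lambda s: makespan(s, p))

def destruct_construct(perm, d, p, rng=random):
    """Destruction: remove d random jobs. Construction: greedily
    reinsert them, one by one, in the order they were removed."""
    perm = list(perm)
    removed = [perm.pop(rng.randrange(len(perm))) for _ in range(d)]
    for job in removed:
        perm = insert_best(perm, job, p)
    return perm

# Hypothetical instance: 4 jobs, 2 machines
p = [[2, 3], [1, 2], [3, 1], [2, 2]]
new = destruct_construct([0, 1, 2, 3], 2, p, random.Random(0))
print(sorted(new))  # still a permutation of all jobs -> [0, 1, 2, 3]
```

Whatever jobs the destruction step removes, the construction step reinserts exactly those jobs, so the result is always a complete permutation of the n jobs.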
The IGRS algorithm for the PFSP under makespan minimization employs an initial solution generated by the NEH heuristic. The NEH heuristic was later extended to the FRB5 heuristic by adding a local search on the partial solutions [39]. Both heuristics are simple and very effective for minimizing the makespan; their pseudo-code is given in Algorithm 2. In the first phase, the sum of the processing times over all machines is calculated for each job, and the jobs are sorted in decreasing order of this sum to obtain δ. In the second phase, the first job in δ is selected to establish a partial solution π^1, and the remaining jobs in δ are inserted into the partial solution one by one. After each insertion, optionally, a local search is applied to the partial solution and repeated as long as the partial solution improves. After all jobs have been inserted, a complete solution is obtained. Note that the NEH heuristic with this optional local search on partial solutions is the FRB5 heuristic.
Algorithm 2: NEH and FRB5 constructive heuristics
δ = DecreasingOrder(Σ_{k=1}^{m} p_{i,k})
π^1 = δ_1
for i = 2 to n do
    π^i = InsertJobInBestPosition(π^{i−1}, δ_i)
    π^i = ApplyLocalSearch(π^i, f(π^i))       // Algorithm 3, for the FRB5 heuristic
endfor
return π with n jobs and f(π)
The IGRS algorithm employs the insertion neighborhood structure as a local search after the destruction and construction procedure. The insertion neighborhood is very effective for makespan minimization with the speed-up method explained in Section 3.1. It can be deterministic or stochastic, depending on how the job to be removed from the solution is chosen. The deterministic variant is given in Algorithm 3. This procedure removes job π_i from the solution π and inserts it into all possible positions of the incumbent solution. When the best-improving insertion position is found, job π_i is inserted into that position. These steps are repeated for all jobs, and if an improvement is observed, the local search is re-run until no better solution is obtained.
Algorithm 3: First improvement insertion neighborhood(π)
for i = 1 to n do
    π* = InsertJobInBestPosition(π, π_i)
    if (f(π*) < f(π)) then
        π = π*
    endif
endfor
return π and f(π)
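A Python sketch of this deterministic insertion local search follows; the helper names and the 3-job instance are ours:

```python
def makespan(perm, p):
    """C_max of a permutation for processing times p[job][machine]."""
    m = len(p[perm[0]])
    C = [0] * m
    for job in perm:
        C[0] += p[job][0]
        for k in range(1, m):
            C[k] = max(C[k - 1], C[k]) + p[job][k]
    return C[-1]

def insert_best(partial, job, p):
    """Try every insertion position for `job` and keep the best one."""
    candidates = (partial[:pos] + [job] + partial[pos:]
                  for pos in range(len(partial) + 1))
    return min(candidates, key=lambda s: makespan(s, p))

def insertion_local_search(perm, p):
    """Remove each job in turn and reinsert it at its best position;
    restart the scan as long as the solution improves (Algorithm 3 style)."""
    improved = True
    while improved:
        improved = False
        for i in range(len(perm)):
            job = perm[i]
            cand = insert_best(perm[:i] + perm[i + 1:], job, p)
            if makespan(cand, p) < makespan(perm, p):
                perm, improved = cand, True
    return perm

# Hypothetical instance: 3 jobs, 2 machines
p = [[2, 3], [1, 2], [3, 1]]
print(insertion_local_search([2, 1, 0], p))  # -> [1, 0, 2], with C_max = 7
```

Starting from the sequence [2, 1, 0] with makespan 9, the first removal-reinsertion of job 2 already reaches the optimum [1, 0, 2] with makespan 7, after which no insertion move improves further.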
In the stochastic variant, given in Algorithm 4, the jobs are randomly chosen from the solution for insertion. Job π_k at position k is randomly chosen from the solution π without repetition, yielding the partial solution π_P. Then, job π_k is inserted into all possible positions of π_P. When the best-improving insertion position is found, job π_k is inserted into that position, and a complete solution π* is obtained. These steps are repeated for all jobs, and if an improvement is found, the local search is re-run until no better solution is obtained.
Algorithm 4: First improvement insertion neighborhood(π)
for i = 1 to n do
    π_P = remove a job π_k from π, chosen randomly and without repetition
    π* = InsertJobInBestPosition(π_P, π_k)
    if (f(π*) < f(π)) then
        π = π*
    endif
endfor
return π and f(π)
Recently, the IGALL algorithm was presented in the literature [5] with excellent results for the PFSP with makespan minimization. The difference between IGALL and IGRS is that IGALL applies an additional local search to the partial solution after destruction, which substantially enhances solution quality: in IGRS, local search is applied only to the complete solution after the construction phase, whereas IGALL also applies it to the partial solution after the destruction phase. This idea was applied in heuristic algorithms by [39], who studied the vehicle routing problem and applied local search to the routes during the construction phase. Applying local search to the partial solution is advantageous in terms of computational time and in providing different search directions: since the partial solution is smaller than a complete solution, the search can be conducted quickly. Another difference between IGRS and IGALL is that the initial solution of IGALL is constructed by the FRB5 heuristic. The pseudo-code of the IGALL algorithm is presented in Algorithm 5.
Algorithm 5: IGALL algorithm
π = FRB5
π_best = π
while (NotTermination) do
    (π_D, π_P) = Destruction(π)
    π_P = LocalSearchToPartialSolution(π_P)       // Algorithm 4
    π_1 = Construction(π_P, π_D)
    π_1 = LocalSearchToCompleteSolution(π_1)      // Algorithm 4
    if (f(π_1) ≤ f(π)) then
        π = π_1
        if (f(π_1) < f(π_best)) then
            π_best = π_1
        endif
    elseif (r < exp{−(f(π_1) − f(π)) / T}) then
        π = π_1
    endif
endwhile
return π_best and f(π_best)
Note that Algorithm 3 is used inside the FRB5 heuristic to construct the initial solution in a single run, owing to its deterministic property. In both IG algorithms, Algorithm 4 is employed as the local search applied to partial and complete solutions.

3.3. Variable Block Insertion Algorithm

In this paper, we propose a VBIH algorithm as follows. The VBIH algorithm employs the FRB5 heuristic to generate the initial solution. It has a minimum block size (b_min) and a maximum block size (b_max). It removes a block of jobs (π_b) of size b from the current solution and obtains a partial solution (π_P). Similar to the IGALL algorithm, it applies the local search in Algorithm 4 to the partial solution. Then, it performs n − b + 1 block insertion moves sequentially in the partial solution and chooses the best of the resulting solutions. The well-known RIS local search from the literature is then applied to the complete solution found after the block insertion moves. If the new solution obtained after the local search is better than or equal to the current solution, it replaces the current solution, and as long as it improves, the block size is retained. Otherwise, the block size is incremented by one (b = b + 1), and a simulated annealing-based acceptance criterion, similar to the one in the IGRS and IGALL algorithms, is employed to accept the new solution and escape from local minima. This process is repeated until the block size exceeds its maximum (b > b_max). The outline of the VBIH algorithm is given in Algorithm 6, where π_R is the reference sequence, τ_P is the temperature parameter of the acceptance criterion, and r is a uniform random number between 0 and 1.
Algorithm 6: VBIH algorithm
π = FRB5
π_best = π
π_R = π_best
while (NotTermination) do
    b = b_min = 2
    do
        (π_P, π_b) = RemoveBlock(π, b)                // remove block π_b of size b from π, leaving partial solution π_P
        π_P = LocalSearchToPartialSolution(π_P)       // Algorithm 4
        π_1 = InsertBlockInBestPosition(π_P, π_b)
        π_1 = RISLocalSearchToCompleteSolution(π_1)   // Algorithm 5
        if (f(π_1) < f(π)) then
            π = π_1
            b = b                                     // retain the same block size
            if (f(π_1) < f(π_best)) then
                π_best = π_1
                π_R = π_best
            endif
        else
            b = b + 1
            if (r < exp{−(f(π_1) − f(π))/T}) then
                π = π_1
            endif
        endif
    while (b ≤ b_max)
endwhile
return π_best and f(π_best)
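For illustration, the loop above can be sketched as a short runnable program (Python here for brevity; the paper's implementation is in C++). The components are simplified: a longest-total-processing-time ordering stands in for the FRB5 initial solution, and the partial-solution and RIS local searches are omitted, so this is only a sketch of the control flow under those assumptions, not the authors' implementation. The constant temperature follows the formula used by the IG algorithms of Ruiz and Stützle [4].

```python
import math
import random

def makespan(seq, p):
    # Standard completion-time recursion for a permutation flow shop.
    c = [0] * len(p[0])
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, len(c)):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def vbih(p, b_min=2, b_max=4, tau=0.5, iters=100, seed=1):
    # Illustrative defaults; the tuned setting in the paper is b_max = 2, tau = 0.5.
    rng = random.Random(seed)
    n, m = len(p), len(p[0])
    T = tau * sum(map(sum, p)) / (10.0 * n * m)      # constant temperature as in [4]
    pi = sorted(range(n), key=lambda j: -sum(p[j]))  # stand-in for FRB5
    best = list(pi)
    for _ in range(iters):
        b = b_min
        while b <= b_max:
            i = rng.randrange(n - b + 1)             # remove a random block of size b
            block, partial = pi[i:i + b], pi[:i] + pi[i + b:]
            # best of the n - b + 1 block insertion moves
            cand = min((partial[:k] + block + partial[k:]
                        for k in range(len(partial) + 1)),
                       key=lambda s: makespan(s, p))
            if makespan(cand, p) < makespan(pi, p):
                pi = cand                            # improvement: keep the block size
                if makespan(pi, p) < makespan(best, p):
                    best = list(pi)
            else:
                b += 1                               # no improvement: grow the block
                if rng.random() < math.exp(-(makespan(cand, p) - makespan(pi, p)) / T):
                    pi = cand                        # SA-type acceptance
    return best
```

The sketch terminates because each improving move strictly decreases the (bounded, integer-valued) makespan, after which the block size grows past b_max and the inner loop exits.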
To explain the block insertion procedure, we give the following example. Suppose that we are given a current solution π = {1, 2, 3, 4, 5}. Furthermore, assume that the block size is b = 2. Let us randomly choose a block π_b = {2, 5}, thus ending up with a partial solution π_P = {1, 3, 4}. After applying the local search to the partial solution, suppose that we have π_P = {3, 1, 4}. Now, the block π_b is inserted into four positions as follows: π_1 = {2, 5, 3, 1, 4}, π_2 = {3, 2, 5, 1, 4}, π_3 = {3, 1, 2, 5, 4} and π_4 = {3, 1, 4, 2, 5}. Among these four solutions, the best one is chosen as the final solution.
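The n − b + 1 candidate sequences in this example can be generated with a one-line helper (Python for illustration; the selection of the best candidate by makespan is omitted here):

```python
def block_insertion_candidates(partial, block):
    # Insert the block, kept contiguous, into every position of the partial solution.
    return [partial[:i] + block + partial[i:] for i in range(len(partial) + 1)]

cands = block_insertion_candidates([3, 1, 4], [2, 5])
# -> [[2, 5, 3, 1, 4], [3, 2, 5, 1, 4], [3, 1, 2, 5, 4], [3, 1, 4, 2, 5]]
```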
Regarding the local search that is applied only to complete solutions, we use the well-known referenced insertion scheme (RIS) local search [8,40]. RIS is guided by a reference solution π_R, which is the best solution obtained so far during the search. For instance, suppose the reference solution is π_R = {3, 5, 1, 4, 2} and the current solution is π = {1, 2, 3, 4, 5}. The reference solution implies that job 3 in the current solution might not be in a proper position. For this reason, the RIS local search first removes job 3 from the current solution π and inserts it into all possible slots of the resulting partial solution π_P. The solution with the best insertion slot replaces the current solution, and the iteration counter is reset to one if any improvement occurs; otherwise, the iteration counter is incremented by one. Then, RIS removes job 5 from the current solution and inserts it into all possible positions of the partial solution. This procedure is repeated until the iteration counter exceeds the number of jobs n, and a new solution is obtained. The pseudo-code of the RIS local search is given in Algorithm 7.
After the local search phase, it must be decided whether the new solution is accepted as the incumbent solution for the next iteration. A simple simulated annealing-type acceptance criterion with a constant temperature is used, similar to the IGRS and IGALL algorithms. Note that Taillard's speed-ups are employed wherever possible in our code.
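A minimal sketch of this acceptance test follows (Python for illustration). The constant temperature T = τP · ΣΣ p_ij / (10 · n · m) is the formula used by the IG algorithms of [4]; we assume the same form here.

```python
import math
import random

def constant_temperature(p, tau):
    # tau * (total processing time) / (10 * n * m), as in Ruiz and Stuetzle [4].
    n, m = len(p), len(p[0])
    return tau * sum(map(sum, p)) / (10.0 * n * m)

def accept(f_new, f_cur, temperature, rng=random):
    # Better-or-equal solutions are always kept; worse ones pass with SA probability.
    if f_new <= f_cur:
        return True
    return rng.random() < math.exp(-(f_new - f_cur) / temperature)
```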
Algorithm 7: Referenced insertion neighborhood, RIS(π)
Count = 1
pos = 1
π_R = π_best
while (Count ≤ n) do
    k = 1
    while (π_k != π_R(pos)) do k = k + 1 endwhile   // find the position k in π of the job at position pos of π_R
    pos = pos + 1
    if (pos = n + 1) then
        pos = 1
    endif
    π_P = remove job π_k from π
    π* = InsertJobInBestPosition(π_P, π_k)
    if (f(π*) < f(π)) then
        π = π*
        Count = 1
    else
        Count = Count + 1
    endif
endwhile
return π and f(π)
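A compact runnable version of this RIS loop (Python for illustration; the paper's code is C++ and evaluates insertions with Taillard's speed-up, which this sketch omits):

```python
def makespan(seq, p):
    # Standard completion-time recursion for a permutation flow shop.
    c = [0] * len(p[0])
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, len(c)):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def ris(pi, pi_ref, p):
    """Referenced insertion scheme: remove jobs in the order given by the
    reference solution and reinsert each at its best position, stopping
    after n consecutive non-improving reinsertions."""
    n, count, pos = len(pi), 0, 0
    pi = list(pi)
    while count < n:
        job = pi_ref[pos]
        pos = (pos + 1) % n
        partial = [j for j in pi if j != job]
        best = min((partial[:i] + [job] + partial[i:] for i in range(n)),
                   key=lambda s: makespan(s, p))
        if makespan(best, p) < makespan(pi, p):
            pi, count = best, 0   # improvement: reset the counter
        else:
            count += 1
    return pi
```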

4. Design of Experiment for Parameter Tuning

In this section, we present a Design of Experiments (DOE) approach [41] for the parameter settings of the VBIH algorithm. In order to carry out the experiments, we generate random instances with the method proposed in [9]. In other words, random instances are generated for each combination of n ∈ {100, 200, 300, 400, 500, 600, 700, 800} and m ∈ {20, 40, 60}, with five instances for each job-machine combination, giving 120 instances in total. We consider three parameters in the DOE approach: the maximum block size (b_max), the temperature adjustment parameter (τP), and the decision of whether or not to apply the local search to the partial solution after the removal of a block of jobs. We take the maximum block size with seven levels, b_max ∈ {2, 3, 4, 5, 6, 7, 8}; the temperature adjustment parameter with five levels, τP ∈ {0.1, 0.2, 0.3, 0.4, 0.5}; and the local search decision with two levels, p_L ∈ {1, 2}, where p_L = 1 means that the local search is applied to partial solutions and p_L = 2 means that it is not. Hence, there are 7 × 5 × 2 = 70 algorithm configurations, i.e., treatments. The VBIH algorithm is coded in C++ on Microsoft Visual Studio 2013, and a full factorial design of experiments is carried out on a Core i5, 3.40 GHz, 8 GB RAM computer. Each instance is run for all 70 treatments with a maximum CPU time of T_max = 10 × n × m milliseconds; in total, it took 18 days to run the full factorial design. We calculate the relative percent deviation (RPD) for each instance-treatment pair as follows:
RPD = ((CMAXi − CMAXmin) / CMAXmin) × 100
where C M A X i is the makespan value generated by the VBIH algorithm in each treatment and C M A X m i n is the minimum makespan value found amongst 70 treatments. For each job size-treatment pair, the average RPD value is calculated by taking the average of five instances in each job size. Then, the response variable (ARPD) of each treatment is obtained by averaging these RPD values of all job sizes. After determining the ARPD values for each treatment as mentioned above, the main effects plots of the parameters are analyzed and given in Figure 3.
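The RPD and ARPD computations above can be expressed as (Python for illustration):

```python
def rpd(c, c_min):
    # Relative percent deviation of one makespan from the best value found.
    return (c - c_min) / c_min * 100.0

def arpd(makespans, references):
    # Average RPD over a set of instances (the treatment response variable).
    return sum(rpd(c, r) for c, r in zip(makespans, references)) / len(makespans)
```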
As can be seen from Figure 3, the following parameter levels yield better ARPD values than the others: b_max = 2, τP = 0.5, and p_L = 1. Furthermore, in order to see whether there is an interaction effect between the parameters, an ANOVA is also given in Table 2.
Table 2 indicates that b_max, τP, and p_L are statistically significant, since their F values are large and their p-values are below the significance level α = 0.05. The large F value for p_L also suggests that applying the local search to partial solutions has a significant impact on solution quality, as noted in [5]. In terms of interaction effects, the b_max × τP interaction is not significant because its p-value is much higher than α = 0.05. However, the b_max × p_L and τP × p_L interactions are significant since their p-values are below α = 0.05. The interaction effects plot for b_max × p_L is given in Figure 4.
Figure 4 indicates that the maximum block size should be taken as b_max = 2 and that the local search should be applied to the partial solution. Since the τP × p_L interaction is also significant, we provide its interaction plot in Figure 5.
Figure 5 likewise suggests that these parameters should be taken as τP = 0.5 and p_L = 1. Ultimately, we set the parameters of the VBIH algorithm as follows: b_max = 2, τP = 0.5, and p_L = 1.

5. Computational Results

In this section, the computational results for the small and large VRF benchmark sets are provided. The MIP and CP models were written in OPL and run on the IBM ILOG CPLEX 12.8 software suite, while all the heuristic algorithms were written in Visual C++ 13 and run on an Intel Core i5, 3.40 GHz, 8 GB RAM computer. The proposed VBIH algorithm is compared to the IGRS and IGALL algorithms. In addition, the results of these algorithms are also obtained without Taillard's speed-up method, in which case they are denoted as IGRS*, IGALL* and VBIH*. Regarding their parameters, the destruction size d_s and the temperature adjustment factor t_P are taken as d_s = 4 and t_P = 0.4 for IGRS and IGRS*, as suggested in [4], and as d_s = 2 and t_P = 0.7 for IGALL and IGALL*, as indicated in [5]. As explained in the previous section, a DOE is conducted for the VBIH algorithm and its parameters are determined as b_max = 2, τP = 0.5, and p_L = 1, which are also used for the VBIH* algorithm.

5.1. Small VRF Instances

5.1.1. MIP Versus CP

Computational results are given in Table 3 for each combination, giving a total of 240 small VRF instances. For each combination, the table summarizes the number of optimal solutions (nOpt) found for the ten instances of each job-machine combination (n × m), the average relative percent deviation (ARPD%) from the upper bounds given in [9], the average CPU time over its ten instances, and the optimality gap percentage (GAP%) on termination, i.e., the gap between the best lower and best upper bounds. The maximum CPU time is restricted to one hour (3600 s). The results of the CP and MIP models are compared for job sizes 10 and 20. While the MIP model can find solutions for very small instances (10 jobs) in a shorter time than the CP model, it becomes hard for the MIP model to solve larger problems (20 jobs and more). Neither model can always find optimal solutions when the number of machines is greater than 5, but the MIP model leaves larger gaps than the CP model. The results show that CP is more efficient than MIP on the PFSP, except for very small instances. The results of the remaining instances are obtained only from the CP model because of the very large gaps left by the MIP model. The CP model always finds optimal solutions when the number of machines is five, regardless of the number of jobs. Besides, CP can find optimal solutions for some of the instances with 10 machines. Overall, within the time limit, the CP model verifies optimality for 108 out of 240 instances.

5.1.2. Comparison of Heuristic Algorithms with Exact Solutions

In order to compare the performance of the heuristic algorithms with the exact CP method, we run all algorithms for five independent replications with different seeds. Relative percent deviation values from the upper bounds for the ten instances of each job-machine combination are calculated as follows:
RPD = ((M − MUB) / MUB) × 100
where M is the makespan value generated by any heuristic, and MUB is the upper bound provided in [9]. Note that, for each instance, we record the average RPD of the five replications for statistical analysis purposes, especially for the interval plots. The solutions of the CP model are limited to 3600 s and its average CPU times are given in Table 4. The IGALL, IGRS, and VBIH algorithms are run for five replications with three different time limits: 15, 30, and 45 × n × m milliseconds. As expected, these algorithms perform much better than the exact CP model, and they improve the upper bounds provided in [9], which means that the proposed algorithm and the other IG algorithms can find good (in some cases optimal) solutions in a very short time. As the running time increases, the solution quality of the VBIH algorithm increases, and according to the RPD it gives the best solutions amongst all algorithms. It should be noted that the VBIH algorithm further improves 64 out of 240 upper bounds for the small VRF instances within a very short time.

5.2. Large VRF Instances

Note that both the IGALL and VBIH algorithms employ the FRB5 heuristic for constructing the initial solution, whereas IGRS uses the traditional NEH heuristic. For the large VRF instances, Table 5 summarizes the ARPD values of the constructive heuristics: NEH, NEH without the speed-up method (denoted NEH*), and the extended NEH heuristic with a local search on partial solutions (denoted FRB5).
As shown in Table 5, NEH is very fast, with 0.04 s overall average CPU time; however, its overall average ARPD is 3.33%. Although the FRB5 heuristic is computationally expensive, at 43.76 s overall average CPU time, its average ARPD is only 0.89% from the upper bounds. It is obvious from Table 5 that the FRB5 heuristic is substantially better than NEH by a very large margin, at the expense of increased CPU time. It is also interesting to observe the CPU time of the NEH heuristic without Taillard's speed-up method: Table 5 clearly indicates that the speed-up is substantially effective, since the overall average CPU time jumps from 0.04 s to 4.95 s without it. In addition, we present the interval plot of both constructive heuristics in Figure 6. Figure 6 indicates that the differences in ARPDs are statistically significant in favor of the FRB5 heuristic, since the confidence intervals do not overlap.
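For reference, the NEH construction evaluated in Table 5 can be sketched as follows (Python for illustration, without Taillard's speed-up, so each insertion is evaluated by a full makespan computation; FRB5 additionally runs an insertion local search on the partial sequence after every insertion):

```python
def makespan(seq, p):
    # Standard completion-time recursion for a permutation flow shop.
    c = [0] * len(p[0])
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, len(c)):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    # Order jobs by nonincreasing total processing time, then insert each
    # job at the position that minimizes the partial makespan.
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for job in order[1:]:
        seq = min((seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq
```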

5.3. Computational Results of Metaheuristics

In this section, the performance of the VBIH algorithm is compared to the best-performing algorithms from the literature, IGRS and IGALL. All algorithms are run for five replications on the large VRF instances. In Table 6, we present the average, minimum and maximum ARPD values for the CPU time limit T_max = 15 × n × m milliseconds.
As seen in Table 6, VBIH generates better Avg, Min and Max RPD values on the overall average. On overall average, it improves the upper bounds by 0.05% (an ARPD of −0.05%); its best performance, −0.18%, indicates that the upper bounds are improved by 0.18% on average when the best of the five replications is taken, and its worst-case performance is 0.08%. In order to see whether the differences in ARPDs are statistically significant, we provide the 95% confidence interval plot of the algorithms in Figure 7, where we can observe that the differences are statistically significant in favor of VBIH against the IGRS and IGALL algorithms, because the confidence intervals do not overlap.
Computational results for the Avg, Min and Max ARPD values with the CPU time limit T_max = 30 × n × m milliseconds are given in Table 7. As seen in Table 7, VBIH again generates better Avg, Min and Max ARPD values on the overall average: it improves the upper bounds by 0.11% on the Avg value and by 0.24% on the Min value, and its worst-case performance is 0.02%. However, as CPU times increase, the performance of the IGALL algorithm is also remarkable. Briefly, both VBIH and IGALL outperform IGRS in almost every problem set.
In order to see whether these results are statistically significant, we provide the 95% confidence interval plot of the algorithms in Figure 8, where we can observe that the differences in ARPD values are statistically significant in favor of both VBIH and IGALL against the IGRS algorithm, because their confidence intervals do not overlap with that of IGRS. In other words, VBIH and IGALL are statistically equivalent but significantly superior to IGRS.
Computational results for the average, minimum and maximum RPD values with the CPU time limit T_max = 45 × n × m milliseconds are given in Table 8, where VBIH outperforms the IGRS and IGALL algorithms with respect to the average, minimum and maximum RPD values on the overall average. On overall average, it improves the upper bounds by 0.25% on the average value and by 0.36% on the minimum value, and even its worst-case performance is −0.13%. These statistics indicate that VBIH generates much better results than both the IGRS and IGALL algorithms.
In order to see whether these results are statistically significant, we provide the 95% confidence interval plot of the algorithms in Figure 9, where we can observe that the differences in ARPD values are statistically significant in favor of the VBIH algorithm against both IGRS and IGALL, because the confidence intervals do not overlap. In other words, the VBIH algorithm is statistically superior to both the IGRS and IGALL algorithms.
In the Supplementary Materials, we summarize all the best-known solutions found for the first time by IGALL and VBIH algorithms. The VBIH algorithm further improves 230 out of 240 instances. In addition, 173 out of 240 instances are improved by the IGRS algorithm, while the IGALL algorithm further improves 222 out of 240 instances. The IGALL algorithm improves six instances that are not improved by VBIH algorithm. Ultimately, 236 out of 240 instances are further improved by all algorithms within 45 × n × m time limits with the remaining four solutions being equal.
As mentioned before, the IGALL algorithm was presented in [5], where the performances of IGRS and IGALL were analyzed on both Taillard's [42] and the large VRF instances. They observed that on Taillard's benchmark set the two algorithms do not present very significant differences with respect to the RPDs obtained; in fact, they showed that the differences are not statistically significant. However, statistically significant differences between IGRS and IGALL appear when the large VRF instances are employed. In order to validate this observation, we run the three algorithms on Taillard's benchmark set with a stopping criterion of T_max = 45 × n × m milliseconds. Furthermore, we run the three algorithms without Taillard's speed-up method, denoting them IGRS*, IGALL* and VBIH*. The computational results are given in Table 9. As seen in Table 9, VBIH produces better RPDs than the IGRS and IGALL algorithms when Taillard's speed-up method is employed, since its overall RPD is 0.17 from the best-known solutions. However, IGRS and IGALL do not show large differences in terms of RPDs. The interval plots of the algorithms in Figure 10 show that the differences in RPDs are not statistically significant, because the confidence intervals coincide. This suggests that research on the PFSP and its variants should employ the VRF benchmark suite in order to discriminate between newly presented algorithms. Figure 10 also shows that Taillard's speed-up method is significantly effective for all three algorithms. During these runs, we were also able to find three new best-known solutions for Taillard's benchmark suite (ta054 = 3719, ta055 = 3610, ta056 = 3680), and their permutations are provided in the Supplementary Materials.

6. Conclusions

This paper presents a variable block insertion heuristic (VBIH) algorithm for solving the permutation flow shop scheduling problem (PFSP) with the makespan criterion. In addition, we introduce mixed integer programming (MIP) and constraint programming (CP) models to solve the small benchmark set and to verify the results of our proposed heuristic algorithm. By employing the time-limited CP model, we find optimal solutions for some of the small VRF instances for the first time in the literature. Furthermore, all algorithms generate better solutions than the upper bounds that currently exist in the literature. We adapted the well-known speed-up method of Taillard and applied it wherever possible when coding the heuristic algorithms. The parameters of the proposed VBIH algorithm are tuned through a design of experiments on randomly generated benchmark instances. Extensive computational results on the two new VRF benchmark suites show that the VBIH algorithm is superior to the best-performing algorithms from the literature.
The CP model found and verified optimal solutions for 108 out of 240 small VRF instances, whereas 236 out of 240 large VRF benchmark instances are further improved by the VBIH and IGALL algorithms for the first time in this paper, with the remaining solutions being equal; these are given in Appendix B (Table A5). Furthermore, three instances of Taillard's benchmark suite are also further improved for the first time in this paper since 1993.
As future research, VBIH algorithm can be easily extended to other variants of the PFSPs such as no-idle, blocking and no-wait PFSP. In addition, other performance criteria can be considered such as total flow time and total tardiness. Furthermore, different meta-heuristic algorithms or matheuristics can be proposed to solve the PFSP.

Supplementary Materials

The following are available online at https://www.mdpi.com/1999-4893/12/5/100/s1.

Author Contributions

Conceptualization, M.F.T. and D.K.; methodology, Q.-K.P.; software, D.K., M.F.T. and Q.-K.P.; validation, D.K., and M.F.T. and L.G.; writing—original draft preparation, D.K.; writing—review and editing, L.G. and M.F.T.; supervision, M.F.T., Q.-K.P.; project administration, L.G; funding acquisition, L.G.

Funding

This research is partially supported by the National Natural Science Foundation of China (Grant No. 51435009).

Acknowledgments

M. Fatih Tasgetiren, Quan-Ke Pan and Liang Gao acknowledge the HUST Project in Wuhan in China. We also thank Eva Vallada for providing the permutations for large VRF benchmark suite so that we were able to check our code.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The processing times of the Car8 instance are given in Table A1 in order to explain the speed-up method.
Table A1. Processing times of the Car8 instance.

| Job | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|
| 1 | 456 | 654 | 852 | 145 | 632 | 425 | 214 | 654 |
| 2 | 789 | 123 | 369 | 678 | 581 | 396 | 123 | 789 |
| 3 | 654 | 123 | 632 | 965 | 475 | 325 | 456 | 654 |
| 4 | 321 | 456 | 581 | 421 | 32 | 147 | 789 | 123 |
| 5 | 456 | 789 | 472 | 365 | 536 | 852 | 654 | 123 |
| 6 | 789 | 654 | 586 | 824 | 325 | 12 | 321 | 456 |
| 7 | 654 | 321 | 320 | 758 | 863 | 452 | 456 | 789 |
| 8 | 789 | 147 | 120 | 639 | 21 | 863 | 789 | 654 |
We remove job 2 from the optimal solution {7, 3, 8, 5, 2, 1, 6, 4} and calculate the completion times of the partial solution {7, 3, 8, 5, 1, 6, 4}, which are given in Table A2.
Table A2. Completion times e(j,k) of the partial permutation.

| Job | Position | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|---|
| 7 | 1 | 654 | 975 | 1295 | 2053 | 2916 | 3368 | 3824 | 4613 |
| 3 | 2 | 1308 | 1431 | 2063 | 3028 | 3503 | 3828 | 4284 | 5267 |
| 8 | 3 | 2097 | 2244 | 2364 | 3667 | 3688 | 4691 | 5480 | 6134 |
| 5 | 4 | 2553 | 3342 | 3814 | 4179 | 4715 | 5567 | 6221 | 6344 |
| 1 | 5 | 3009 | 3996 | 4848 | 4993 | 5625 | 6050 | 6435 | 7089 |
| 6 | 6 | 3798 | 4650 | 5434 | 6258 | 6583 | 6595 | 6916 | 7545 |
| 4 | 7 | 4119 | 5106 | 6015 | 6679 | 6711 | 6858 | 7705 | 7828 |
After inserting job 2 at the 5th position (j = 5), we calculate the completion times of the heads as below; they are summarized in Table A3:

f(j,0) = 0;  f(j,k) = max{f(j,k−1), e(j−1,k)} + p(π_j,k)

f(5,0) = 0
f(5,1) = max{f(5,0), e(4,1)} + p(5,1) = max{0, 2553} + 789 = 3342
f(5,2) = max{f(5,1), e(4,2)} + p(5,2) = max{3342, 3342} + 123 = 3465
f(5,3) = max{f(5,2), e(4,3)} + p(5,3) = max{3465, 3814} + 369 = 4183
f(5,4) = max{f(5,3), e(4,4)} + p(5,4) = max{4183, 4179} + 678 = 4861
f(5,5) = max{f(5,4), e(4,5)} + p(5,5) = max{4861, 4715} + 581 = 5442
f(5,6) = max{f(5,5), e(4,6)} + p(5,6) = max{5442, 5567} + 396 = 5936
f(5,7) = max{f(5,6), e(4,7)} + p(5,7) = max{5936, 6221} + 123 = 6344
f(5,8) = max{f(5,7), e(4,8)} + p(5,8) = max{6344, 6344} + 789 = 7133
Table A3. Completion times of heads for {7, 3, 8, 5, 2} with Cmax = 7133.

| Job | Position | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|---|
| 7 | 1 | 654 | 975 | 1295 | 2053 | 2916 | 3368 | 3824 | 4613 |
| 3 | 2 | 1308 | 1431 | 2063 | 3028 | 3503 | 3828 | 4284 | 5267 |
| 8 | 3 | 2097 | 2244 | 2364 | 3667 | 3688 | 4691 | 5480 | 6134 |
| 5 | 4 | 2553 | 3342 | 3814 | 4179 | 4715 | 5567 | 6221 | 6344 |
| 2 | 5 | 3342 | 3465 | 4183 | 4861 | 5442 | 5936 | 6344 | 7133 |
q(j,m+1) = 0;  q(j,k) = max{q(j,k+1), q(j+1,k)} + p(π_j,k)

q(7,9) = 0
q(7,8) = max{q(7,9), q(8,8)} + p(7,8) = max{0, 0} + 123 = 123
q(7,7) = max{q(7,8), q(8,7)} + p(7,7) = max{123, 0} + 789 = 912
q(7,6) = max{q(7,7), q(8,6)} + p(7,6) = max{912, 0} + 147 = 1059
q(7,5) = max{q(7,6), q(8,5)} + p(7,5) = max{1059, 0} + 32 = 1091
q(7,4) = max{q(7,5), q(8,4)} + p(7,4) = max{1091, 0} + 421 = 1512
q(7,3) = max{q(7,4), q(8,3)} + p(7,3) = max{1512, 0} + 581 = 2093
q(7,2) = max{q(7,3), q(8,2)} + p(7,2) = max{2093, 0} + 456 = 2549
q(7,1) = max{q(7,2), q(8,1)} + p(7,1) = max{2549, 0} + 321 = 2870
q(6,9) = 0
q(6,8) = max{q(6,9), q(7,8)} + p(6,8) = max{0, 123} + 456 = 579
q(6,7) = max{q(6,8), q(7,7)} + p(6,7) = max{579, 912} + 321 = 1233
q(6,6) = max{q(6,7), q(7,6)} + p(6,6) = max{1233, 1059} + 12 = 1245
q(6,5) = max{q(6,6), q(7,5)} + p(6,5) = max{1245, 1091} + 325 = 1570
q(6,4) = max{q(6,5), q(7,4)} + p(6,4) = max{1570, 1512} + 824 = 2394
q(6,3) = max{q(6,4), q(7,3)} + p(6,3) = max{2394, 2093} + 586 = 2980
q(6,2) = max{q(6,3), q(7,2)} + p(6,2) = max{2980, 2549} + 654 = 3634
q(6,1) = max{q(6,2), q(7,1)} + p(6,1) = max{3634, 2870} + 789 = 4423
q(5,9) = 0
q(5,8) = max{q(5,9), q(6,8)} + p(5,8) = max{0, 579} + 654 = 1233
q(5,7) = max{q(5,8), q(6,7)} + p(5,7) = max{1233, 1233} + 214 = 1447
q(5,6) = max{q(5,7), q(6,6)} + p(5,6) = max{1447, 1245} + 425 = 1872
q(5,5) = max{q(5,6), q(6,5)} + p(5,5) = max{1872, 1570} + 632 = 2504
q(5,4) = max{q(5,5), q(6,4)} + p(5,4) = max{2504, 2394} + 145 = 2649
q(5,3) = max{q(5,4), q(6,3)} + p(5,3) = max{2649, 2980} + 852 = 3832
q(5,2) = max{q(5,3), q(6,2)} + p(5,2) = max{3832, 3634} + 654 = 4486
q(5,1) = max{q(5,2), q(6,1)} + p(5,1) = max{4486, 4423} + 456 = 4942
Now, we calculate the completion times of tails as shown in Table A4.
Table A4. Completion times of tails for {2, 1, 6, 4} with Cmax = 4942.

| Job | Position | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 6 | 4942 | 4486 | 3832 | 2649 | 2504 | 1872 | 1447 | 1233 |
| 6 | 7 | 4423 | 3634 | 2980 | 2394 | 1570 | 1245 | 1233 | 579 |
| 4 | 8 | 2870 | 2549 | 2093 | 1512 | 1091 | 1059 | 912 | 123 |
Now, we calculate Cmax = max_k(f(j,k) + q(j,k)) at position j = 5 as follows:

Cmax = max{(3342 + 4942), (3465 + 4486), (4183 + 3832), (4861 + 2649), (5442 + 2504), (5936 + 1872), (6344 + 1447), (7133 + 1233)}
Cmax = max{8284, 7951, 8015, 7510, 7946, 7808, 7791, 8366} = 8366
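The whole calculation can be reproduced with a short script (Python for illustration; the implementations in the paper are in C++). The heads e, tails q, and the f values for every insertion position are computed exactly as above:

```python
# Processing times of Car8 (Table A1); index 0 is unused so jobs are 1..8.
p = [None,
     [456, 654, 852, 145, 632, 425, 214, 654],
     [789, 123, 369, 678, 581, 396, 123, 789],
     [654, 123, 632, 965, 475, 325, 456, 654],
     [321, 456, 581, 421, 32, 147, 789, 123],
     [456, 789, 472, 365, 536, 852, 654, 123],
     [789, 654, 586, 824, 325, 12, 321, 456],
     [654, 321, 320, 758, 863, 452, 456, 789],
     [789, 147, 120, 639, 21, 863, 789, 654]]

def insertion_makespans(partial, job, p):
    """Taillard's speed-up: makespans of inserting `job` at every position
    of `partial`, via heads e[j][k] and tails q[j][k]."""
    n, m = len(partial), len(p[job])
    e = [[0] * m for _ in range(n + 1)]
    q = [[0] * m for _ in range(n + 2)]
    for j in range(1, n + 1):                  # heads, forward
        for k in range(m):
            e[j][k] = max(e[j][k - 1] if k else 0, e[j - 1][k]) + p[partial[j - 1]][k]
    for j in range(n, 0, -1):                  # tails, backward
        for k in range(m - 1, -1, -1):
            up = q[j][k + 1] if k + 1 < m else 0
            q[j][k] = max(up, q[j + 1][k]) + p[partial[j - 1]][k]
    result = []
    for j in range(1, n + 2):                  # insertion positions 1..n+1
        f, best = [0] * m, 0
        for k in range(m):
            f[k] = max(f[k - 1] if k else 0, e[j - 1][k]) + p[job][k]
            best = max(best, f[k] + q[j][k])
        result.append(best)
    return result

ms = insertion_makespans([7, 3, 8, 5, 1, 6, 4], 2, p)
# ms[4], the insertion of job 2 at position 5, reproduces Cmax = 8366
```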

Appendix B

Table A5. New best solutions of our algorithms for the large VRF instances (the values in the Best column are the new best-known solutions).

| Instance | Cmax | Best | Instance | Cmax | Best | Instance | Cmax | Best |
|---|---|---|---|---|---|---|---|---|
| 100_20_1 | 6198 | 6173 | 300_60_1 | 20522 | 20483 | 600_40_1 | 33839 | 33683 |
| 100_20_2 | 6306 | 6267 | 300_60_2 | 20399 | 20249 | 600_40_2 | 33467 | 33405 |
| 100_20_3 | 6238 | 6221 | 300_60_3 | 20434 | 20328 | 600_40_3 | 33866 | 33713 |
| 100_20_4 | 6245 | 6227 | 300_60_4 | 20395 | 20293 | 600_40_4 | 33693 | 33584 |
| 100_20_5 | 6296 | 6264 | 300_60_5 | 20341 | 20200 | 600_40_5 | 33553 | 33401 |
| 100_20_6 | 6321 | 6285 | 300_60_6 | 20388 | 20280 | 600_40_6 | 33809 | 33626 |
| 100_20_7 | 6434 | 6401 | 300_60_7 | 20457 | 20358 | 600_40_7 | 33686 | 33545 |
| 100_20_8 | 6104 | 6074 | 300_60_8 | 20410 | 20319 | 600_40_8 | 33482 | 33298 |
| 100_20_9 | 6354 | 6328 | 300_60_9 | 20549 | 20405 | 600_40_9 | 33697 | 33567 |
| 100_20_10 | 6145 | 6125 | 300_60_10 | 20472 | 20385 | 600_40_10 | 33642 | 33473 |
| 100_40_1 | 7881 | 7846 | 400_20_1 | 21120 | 21042 | 600_60_1 | 36198 | 35976 |
| 100_40_2 | 8007 | 7976 | 400_20_2 | 21457 | 21411 | 600_60_2 | 36184 | 35923 |
| 100_40_3 | 7935 | 7894 | 400_20_3 | 21441 | 21428 | 600_60_3 | 36201 | 35917 |
| 100_40_4 | 7932 | 7913 | 400_20_4 | 21247 | 21237 | 600_60_4 | 36136 | 36000 |
| 100_40_5 | 8011 | 7997 | 400_20_5 | 21553 | 21528 | 600_60_5 | 36153 | 36004 |
| 100_40_6 | 8023 | 7993 | 400_20_6 | 21214 | 21188 | 600_60_6 | 36116 | 35943 |
| 100_40_7 | 8006 | 7980 | 400_20_7 | 21625 | 21599 | 600_60_7 | 36179 | 35965 |
| 100_40_8 | 7979 | 7957 | 400_20_8 | 21277 | 21264 | 600_60_8 | 36185 | 35894 |
| 100_40_9 | 7931 | 7888 | 400_20_9 | 21346 | 21293 | 600_60_9 | 36195 | 35987 |
| 100_40_10 | 7952 | 7917 | 400_20_10 | 21538 | 21526 | 600_60_10 | 36163 | 35943 |
| 100_60_1 | 9395 | 9353 | 400_40_1 | 23578 | 23393 | 700_20_1 | 36394 | 36388 |
| 100_60_2 | 9596 | 9567 | 400_40_2 | 23456 | 23380 | 700_20_2 | 36337 | 36316 |
| 100_60_3 | 9349 | 9349 | 400_40_3 | 23575 | 23467 | 700_20_3 | 36568 | 36519 |
| 100_60_4 | 9426 | 9403 | 400_40_4 | 23409 | 23269 | 700_20_4 | 36452 | 36380 |
| 100_60_5 | 9465 | 9431 | 400_40_5 | 23339 | 23213 | 700_20_5 | 36584 | 36556 |
| 100_60_6 | 9667 | 9630 | 400_40_6 | 23444 | 23298 | 700_20_6 | 36671 | 36645 |
| 100_60_7 | 9391 | 9346 | 400_40_7 | 23556 | 23415 | 700_20_7 | 36624 | 36597 |
| 100_60_8 | 9534 | 9523 | 400_40_8 | 23411 | 23290 | 700_20_8 | 36522 | 36492 |
| 100_60_9 | 9527 | 9488 | 400_40_9 | 23637 | 23424 | 700_20_9 | 36329 | 36315 |
| 100_60_10 | 9598 | 9572 | 400_40_10 | 23720 | 23606 | 700_20_10 | 36417 | 36386 |
| 200_20_1 | 11305 | 11272 | 400_60_1 | 25607 | 25395 | 700_40_1 | 38964 | 38767 |
| 200_20_2 | 11265 | 11240 | 400_60_2 | 25656 | 25549 | 700_40_2 | 38775 | 38560 |
| 200_20_3 | 11327 | 11294 | 400_60_3 | 25821 | 25707 | 700_40_3 | 38621 | 38460 |
| 200_20_4 | 11208 | 11188 | 400_60_4 | 25837 | 25638 | 700_40_4 | 38785 | 38597 |
| 200_20_5 | 11208 | 11143 | 400_60_5 | 25877 | 25669 | 700_40_5 | 38671 | 38490 |
| 200_20_6 | 11367 | 11310 | 400_60_6 | 25536 | 25407 | 700_40_6 | 38710 | 38440 |
| 200_20_7 | 11380 | 11365 | 400_60_7 | 25600 | 25415 | 700_40_7 | 38585 | 38355 |
| 200_20_8 | 11141 | 11128 | 400_60_8 | 25800 | 25603 | 700_40_8 | 39059 | 38817 |
| 200_20_9 | 11123 | 11091 | 400_60_9 | 25882 | 25673 | 700_40_9 | 38814 | 38569 |
| 200_20_10 | 11310 | 11294 | 400_60_10 | 25767 | 25658 | 700_40_10 | 38850 | 38712 |
| 200_40_1 | 13132 | 13124 | 500_20_1 | 26411 | 26374 | 700_60_1 | 41436 | 41192 |
| 200_40_2 | 13102 | 13049 | 500_20_2 | 26681 | 26641 | 700_60_2 | 41375 | 41002 |
| 200_40_3 | 13264 | 13222 | 500_20_3 | 26409 | 26359 | 700_60_3 | 41317 | 41173 |
| 200_40_4 | 13232 | 13163 | 500_20_4 | 26124 | 26080 | 700_60_4 | 41401 | 41120 |
| 200_40_5 | 13043 | 12974 | 500_20_5 | 26781 | 26759 | 700_60_5 | 41262 | 41167 |
| 200_40_6 | 13124 | 13061 | 500_20_6 | 26443 | 26411 | 700_60_6 | 41340 | 41159 |
| 200_40_7 | 13299 | 13220 | 500_20_7 | 26433 | 26409 | 700_60_7 | 40876 | 40734 |
| 200_40_8 | 13238 | 13132 | 500_20_8 | 26318 | 26305 | 700_60_8 | 41474 | 41305 |
| 200_40_9 | 13166 | 13033 | 500_20_9 | 26442 | 26430 | 700_60_9 | 41291 | 41111 |
| 200_40_10 | 13228 | 13146 | 500_20_10 | 26072 | 26034 | 700_60_10 | 41377 | 41186 |
| 200_60_1 | 14990 | 14906 | 500_40_1 | 28548 | 28402 | 800_20_1 | 41558 | 41479 |
| 200_60_2 | 14954 | 14909 | 500_40_2 | 28793 | 28613 | 800_20_2 | 41407 | 41345 |
| 200_60_3 | 15200 | 15134 | 500_40_3 | 28607 | 28526 | 800_20_3 | 41425 | 41399 |
| 200_60_4 | 15044 | 14968 | 500_40_4 | 28828 | 28615 | 800_20_4 | 41426 | 41426 |
| 200_60_5 | 15130 | 15042 | 500_40_5 | 28683 | 28579 | 800_20_5 | 41710 | 41705 |
| 200_60_6 | 15035 | 14996 | 500_40_6 | 28524 | 28432 | 800_20_6 | 42010 | 41961 |
| 200_60_7 | 15040 | 15006 | 500_40_7 | 28760 | 28553 | 800_20_7 | 41425 | 41395 |
| 200_60_8 | 14968 | 14894 | 500_40_8 | 28698 | 28488 | 800_20_8 | 41492 | 41435 |
| 200_60_9 | 15022 | 14925 | 500_40_9 | 28870 | 28640 | 800_20_9 | 41796 | 41783 |
| 200_60_10 | 15000 | 14908 | 500_40_10 | 28758 | 28644 | 800_20_10 | 41574 | 41568 |
| 300_20_1 | 16149 | 16089 | 500_60_1 | 30861 | 30682 | 800_40_1 | 43671 | 43466 |
| 300_20_2 | 16512 | 16483 | 500_60_2 | 30828 | 30664 | 800_40_2 | 43746 | 43575 |
| 300_20_3 | 16173 | 16129 | 500_60_3 | 31125 | 30852 | 800_40_3 | 43749 | 43596 |
| 300_20_4 | 16181 | 16168 | 500_60_4 | 30928 | 30793 | 800_40_4 | 43892 | 43743 |
| 300_20_5 | 16342 | 16307 | 500_60_5 | 30935 | 30763 | 800_40_5 | 43905 | 43794 |
| 300_20_6 | 16137 | 16095 | 500_60_6 | 31027 | 30788 | 800_40_6 | 43811 | 43638 |
| 300_20_7 | 16266 | 16244 | 500_60_7 | 30928 | 30826 | 800_40_7 | 43766 | 43484 |
| 300_20_8 | 16416 | 16369 | 500_60_8 | 30988 | 30837 | 800_40_8 | 43839 | 43666 |
| 300_20_9 | 16376 | 16324 | 500_60_9 | 30978 | 30805 | 800_40_9 | 43879 | 43643 |
| 300_20_10 | 16899 | 16798 | 500_60_10 | 31050 | 30866 | 800_40_10 | 43861 | 43630 |
| 300_40_1 | 18298 | 18199 | 600_20_1 | 31433 | 31372 | 800_60_1 | 46470 | 46279 |
| 300_40_2 | 18454 | 18373 | 600_20_2 | 31418 | 31397 | 800_60_2 | 46493 | 46232 |
| 300_40_3 | 18457 | 18348 | 600_20_3 | 31429 | 31429 | 800_60_3 | 46389 | 46258 |
| 300_40_4 | 18351 | 18227 | 600_20_4 | 31547 | 31487 | 800_60_4 | 46457 | 46261 |
| 300_40_5 | 18484 | 18343 | 600_20_5 | 31448 | 31407 | 800_60_5 | 46401 | 46164 |
| 300_40_6 | 18449 | 18340 | 600_20_6 | 31717 | 31696 | 800_60_6 | 46421 | 46288 |
| 300_40_7 | 18419 | 18396 | 600_20_7 | 31527 | 31527 | 800_60_7 | 46319 | 46061 |
| 300_40_8 | 18392 | 18290 | 600_20_8 | 31564 | 31523 | 800_60_8 | 46474 | 46257 |
| 300_40_9 | 18394 | 18261 | 600_20_9 | 31577 | 31532 | 800_60_9 | 46538 | 46279 |
| 300_40_10 | 18401 | 18286 | 600_20_10 | 31130 | 31107 | 800_60_10 | 46244 | 46211 |

References

  1. Fernandez-Viagas, V.; Ruiz, R.; Framinan, J.M. A new vision of approximate methods for the permutation flowshop to minimise makespan: State-of-the-art and computational evaluation. Eur. J. Oper. Res. 2017, 257, 707–721.
  2. Pinedo, M.L. Scheduling: Theory, Algorithms, and Systems; Springer: New York, NY, USA, 2008.
  3. Garey, M.R.; Johnson, D.S.; Sethi, R. The Complexity of Flowshop and Jobshop Scheduling. Math. Oper. Res. 1976, 1, 117–129.
  4. Ruiz, R.; Stützle, T. A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. Eur. J. Oper. Res. 2007, 177, 2033–2049.
  5. Dubois-Lacoste, J.; Pagnozzi, F.; Stützle, T. An iterated greedy algorithm with optimization of partial solutions for the makespan permutation flowshop problem. Comput. Oper. Res. 2017, 81, 160–166.
  6. Ruiz, R.; Stützle, T. An Iterated Greedy heuristic for the sequence dependent setup times flowshop problem with makespan and weighted tardiness objectives. Eur. J. Oper. Res. 2008, 187, 1143–1159.
  7. Fernandez-Viagas, V.; Framinan, J. On insertion tie-breaking rules in heuristics for the permutation flowshop scheduling problem. Comput. Oper. Res. 2014, 45, 60–67.
  8. Pan, Q.-K.; Tasgetiren, M.F.; Liang, Y.-C. A discrete differential evolution algorithm for the permutation flowshop scheduling problem. Comput. Ind. Eng. 2008, 55, 795–816.
  9. Vallada, E.; Ruiz, R.; Framinan, J.M. New hard benchmark for flowshop scheduling problems minimising makespan. Eur. J. Oper. Res. 2015, 240, 666–677.
  10. Tasgetiren, M.F.; Pan, Q.-K.; Kizilay, D.; Velez-Gallego, M.C. A variable block insertion heuristic for permutation flowshops with makespan criterion. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017.
  11. Shao, W.; Pi, D.; Shao, Z. Optimization of makespan for the distributed no-wait flow shop scheduling problem with iterated greedy algorithms. Knowl. Based Syst. 2017, 137, 163–181.
  12. Ding, J.-Y.; Song, S.; Gupta, J.; Zhang, R.; Chiong, R.; Wu, C. An improved iterated greedy algorithm with a Tabu-based reconstruction strategy for the no-wait flowshop scheduling problem. Appl. Soft Comput. 2015, 30, 604–613.
  13. Li, X.; Yang, Z.; Ruiz, R.; Chen, T.; Sui, S. An iterated greedy heuristic for no-wait flow shops with sequence dependent setup times, learning and forgetting effects. Inf. Sci. 2018, 453, 408–425.
  14. Ribas, I.; Companys, R.; Tort-Martorell, X. An iterated greedy algorithm for the flowshop scheduling problem with blocking. Omega 2011, 39, 293–301.
  15. Tasgetiren, M.F.; Kizilay, D.; Pan, Q.-K.; Suganthan, P.N. Iterated greedy algorithms for the blocking flowshop scheduling problem with makespan criterion. Comput. Oper. Res. 2017, 77, 111–126.
  16. Fernandez-Viagas, V.; Leisten, R.; Framinan, J. A computational evaluation of constructive and improvement heuristics for the blocking flow shop to minimise total flowtime. Expert Syst. Appl. 2016, 61, 290–301.
  17. Tasgetiren, M.F.; Pan, Q.-K.; Kizilay, D.; Suer, G. A populated local search with differential evolution for blocking flowshop scheduling problem. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015.
  18. Ying, K.-C.; Lin, S.-W.; Cheng, C.-Y.; He, C.-D. Iterated reference greedy algorithm for solving distributed no-idle permutation flowshop scheduling problems. Comput. Ind. Eng. 2017, 110, 413–423.
  19. Tasgetiren, M.F.; Pan, Q.-K.; Suganthan, P.N.; Buyukdagli, O. A variable iterated greedy algorithm with differential evolution for the no-idle permutation flowshop scheduling problem. Comput. Oper. Res. 2013, 40, 1729–1743.
  20. Pan, Q.-K.; Ruiz, R. An effective iterated greedy algorithm for the mixed no-idle permutation flowshop scheduling problem. Omega 2014, 44, 41–50.
  21. Ding, J.-Y.; Song, S.; Wu, C. Carbon-efficient scheduling of flow shops by multi-objective optimization. Eur. J. Oper. Res. 2016, 248, 758–771.
  22. Öztop, H.; Tasgetiren, M.F.; Eliiyi, D.T.; Pan, Q.-K. Green Permutation Flowshop Scheduling: A Trade-off Between Energy Consumption and Total Flow Time. In Intelligent Computing Methodologies; Springer: Cham, Switzerland, 2018; pp. 753–759.
  23. Minella, G.; Ruiz, R.; Ciavotta, M. Restarted Iterated Pareto Greedy algorithm for multi-objective flowshop scheduling problems. Comput. Oper. Res. 2011, 38, 1521–1533.
  24. Ciavotta, M.; Minella, G.; Ruiz, R. Multi-objective sequence dependent setup times permutation flowshop: A new algorithm and a comprehensive study. Eur. J. Oper. Res. 2013, 227, 301–312.
  25. Pan, Q.-K.; Wang, L. Effective heuristics for the blocking flowshop scheduling problem with makespan minimization. Omega 2012, 40, 218–229.
  26. Karabulut, K. A hybrid iterated greedy algorithm for total tardiness minimization in permutation flowshops. Comput. Ind. Eng. 2016, 98, 300–307.
  27. Fernandez-Viagas, V.; Valente, J.M.S.; Framinan, J. Iterated-greedy-based algorithms with beam search initialization for the permutation flowshop to minimise total tardiness. Expert Syst. Appl. 2018, 94, 58–69.
  28. Pan, Q.-K.; Ruiz, R. Local search methods for the flowshop scheduling problem with flowtime minimization. Eur. J. Oper. Res. 2012, 222, 31–43.
  29. Tasgetiren, M.F.; Pan, Q.; Ozturkoglu, Y.; Chen, A.H.L. A memetic algorithm with a variable block insertion heuristic for single machine total weighted tardiness problem with sequence dependent setup times. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 2911–2918.
  30. Subramanian, A.; Battarra, M.; Potts, C.N. An Iterated Local Search heuristic for the single machine total weighted tardiness scheduling problem with sequence-dependent setup times. Int. J. Prod. Res. 2014, 52, 2729–2742.
  31. Xu, H.; Lü, Z.; Cheng, T.C.E. Iterated Local Search for single-machine scheduling with sequence-dependent setup times to minimize total weighted tardiness. J. Sched. 2014, 17, 271–287.
  32. Fernández, M.Á.G.; Palacios, J.; Vela, C.; Hernández-Arauzo, A. Scatter search for minimizing weighted tardiness in a single machine scheduling with setups. J. Heuristics 2017, 23, 81–110.
  33. Tasgetiren, M.F.; Pan, Q.-K.; Kizilay, D.; Gao, K. A Variable Block Insertion Heuristic for the Blocking Flowshop Scheduling Problem with Total Flowtime Criterion. Algorithms 2016, 9, 71.
  34. Manne, A.S. On the Job-Shop Scheduling Problem. Oper. Res. 1960, 8, 219–223.
  35. Taillard, E. Some efficient heuristic methods for the flow shop sequencing problem. Eur. J. Oper. Res. 1990, 47, 65–74.
  36. Johnson, S.M. Optimal Two and Three Stage Production Schedules with Set-Up Time Included. Nav. Res. Logist. Q. 1954, 1, 61–68.
  37. Nawaz, M.; Enscore, E.E.; Ham, I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 1983, 11, 91–95.
  38. Osman, I.; Potts, C.N. Simulated Annealing for Permutation Flow-Shop Scheduling. Omega 1989, 17, 551–557.
  39. Rad, S.F.; Ruiz, R.; Boroojerdian, N. New high performing heuristics for minimizing makespan in permutation flowshops. Omega 2009, 37, 331–345.
  40. Tasgetiren, M.F.; Pan, Q.-K.; Suganthan, P.N.; Chua, T.J. A differential evolution algorithm for the no-idle flowshop scheduling problem with total tardiness criterion. Int. J. Prod. Res. 2011, 49, 5033–5050.
  41. Montgomery, D.C. Design and Analysis of Experiments, 2nd ed.; Wiley: New York, NY, USA, 1984.
  42. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285.
Figure 1. Speed-up calculation of a partial solution.
Figure 2. Speed-up calculation of a complete solution.
Figure 3. Main effects plot for parameters of VBIH.
Figure 4. Interaction plot for bMax versus pL.
Figure 5. Interaction plot for tP versus pL.
Figure 6. Interval plot for small VRF instances.
Figure 7. Interval plot at the 95% confidence level for large VRF instances.
Figure 8. Interval plot at the 95% confidence level for large VRF instances.
Figure 9. Interval plot at the 95% confidence level for large VRF instances.
Figure 10. Interval plot at the 95% confidence level for Taillard’s instances.
Table 1. Problem instance with processing times and optimal solution (Cmax = 36).

Instance | | | Optimal Solution | | |
Jobs | M1 | M2 | Jobs | Position | M1 | M2
1 | 1 | 8 | 1 | 1 | 1 | 8
2 | 2 | 9 | 2 | 2 | 2 | 9
3 | 7 | 5 | 7 | 3 | 4 | 5
4 | 5 | 3 | 3 | 4 | 7 | 5
5 | 5 | 4 | 5 | 5 | 5 | 4
6 | 7 | 1 | 4 | 6 | 5 | 3
7 | 4 | 5 | 6 | 7 | 7 | 1
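The optimal solution reported in Table 1 can be checked with the standard permutation flow shop completion-time recurrence, C(i, j) = max(C(i−1, j), C(i, j−1)) + p(i, j). The sketch below is illustrative only (the function name and data layout are our own); the job data and the sequence 1, 2, 7, 3, 5, 4, 6 are read directly off Table 1:

```python
# Processing times from Table 1: job -> (time on M1, time on M2)
p = {1: (1, 8), 2: (2, 9), 3: (7, 5), 4: (5, 3),
     5: (5, 4), 6: (7, 1), 7: (4, 5)}

def makespan(sequence, p):
    """Makespan via the recurrence C[i][j] = max(C[i-1][j], C[i][j-1]) + p."""
    m = len(next(iter(p.values())))   # number of machines (2 in Table 1)
    done = [0] * m                    # completion time of last job on each machine
    for job in sequence:
        for j in range(m):
            prev_machine = done[j - 1] if j > 0 else 0
            done[j] = max(done[j], prev_machine) + p[job][j]
    return done[-1]

# Sequence read off the position column of Table 1
print(makespan([1, 2, 7, 3, 5, 4, 6], p))  # 36, matching the reported optimum
```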
Table 2. ANOVA results for parameters of VBIH.

Source | DF | Seq SS | Adj SS | Adj MS | F | p-Value
bMax | 6 | 0.0086 | 0.0086 | 0.0014 | 33.37 | 0.000
tP | 4 | 0.0090 | 0.0090 | 0.0022 | 52.08 | 0.000
pL | 1 | 5.5441 | 5.5441 | 5.5441 | 129,096.72 | 0.000
bMax × tP | 24 | 0.0010 | 0.0010 | 0.0000 | 0.99 | 0.505
bMax × pL | 6 | 0.0025 | 0.0025 | 0.0004 | 9.83 | 0.000
tP × pL | 4 | 0.0090 | 0.0090 | 0.0022 | 52.10 | 0.000
Error | 24 | 0.0010 | 0.0010 | 0.0000 | |
Total | 69 | 5.5752 | | | |
Table 3. MIP and CP results for VRF small benchmarks with a 3600 s time limit (bold indicates the total number of optimal solutions).

n × m | CP nOpt | CP ARPD | CP CPU | CP GAP | MIP nOpt | MIP RPD | MIP CPU | MIP GAP
10 × 5 | 10 | 0 | 14.03 | 0 | 10 | 0 | 2.68 | 0
10 × 10 | 10 | 0 | 102.13 | 0 | 10 | 0 | 4.35 | 0
10 × 15 | 10 | 0 | 256.45 | 0 | 10 | 0 | 5.68 | 0
10 × 20 | 10 | 0 | 452.79 | 0 | 10 | 0 | 9.59 | 0
20 × 5 | 10 | 0 | 2.49 | 0 | 0 | 0.58 | 3600.18 | 0.37
20 × 10 | 6 | 0.11 | 2250.09 | 0.03 | 0 | 2.24 | 3600.51 | 0.32
20 × 15 | 0 | 0.53 | 3600.05 | 0.13 | 0 | 2.54 | 3600.06 | 0.29
20 × 20 | 0 | 0.48 | 3600.07 | 0.174 | 0 | 2.61 | 3600.06 | 0.25
30 × 5 | 10 | 0 | 5.82 | 0 | Na | Na | Na | Na
30 × 10 | 2 | 0.47 | 3191.89 | 0.05 | Na | Na | Na | Na
30 × 15 | 0 | 1.29 | 3600.14 | 0.11 | Na | Na | Na | Na
30 × 20 | 0 | 1.63 | 3600.13 | 0.15 | Na | Na | Na | Na
40 × 5 | 10 | 0 | 15.03 | 0 | Na | Na | Na | Na
40 × 10 | 3 | 0.22 | 3113.36 | 0.03 | Na | Na | Na | Na
40 × 15 | 0 | 2.16 | 3600.10 | 0.10 | Na | Na | Na | Na
40 × 20 | 0 | 2.11 | 3600.16 | 0.13 | Na | Na | Na | Na
50 × 5 | 10 | 0 | 11.64 | 0 | Na | Na | Na | Na
50 × 10 | 3 | 0.19 | 2939.96 | 0.02 | Na | Na | Na | Na
50 × 15 | 0 | 2.28 | 3600.22 | 0.08 | Na | Na | Na | Na
50 × 20 | 0 | 2.73 | 3600.22 | 0.12 | Na | Na | Na | Na
60 × 5 | 10 | 0 | 6.44 | 0 | Na | Na | Na | Na
60 × 10 | 4 | 0.19 | 3158.95 | 0.01 | Na | Na | Na | Na
60 × 15 | 0 | 1.98 | 3600.19 | 0.07 | Na | Na | Na | Na
60 × 20 | 0 | 2.82 | 3600.29 | 0.10 | Na | Na | Na | Na
Overall Avg. | 108 | 0.80 | 2146.78 | 0.05 | 40 | 2.61 | 3600.06 | 0.25
Table 4. Comparison of ARPD of all algorithms for small VRF instances.

Instance | CP | 15 × n × m | | | 30 × n × m | | | 45 × n × m | |
 | | IGRS | IGALL | VBIH | IGRS | IGALL | VBIH | IGRS | IGALL | VBIH
10 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
10 × 10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
10 × 15 | 0.00 | 0.00 | 0.00 | 0.02 | 0.00 | 0.00 | 0.02 | 0.00 | 0.00 | 0.02
10 × 20 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
20 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
20 × 10 | 0.11 | 0.04 | 0.00 | 0.04 | 0.03 | 0.00 | 0.04 | 0.02 | 0.00 | 0.04
20 × 15 | 0.53 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
20 × 20 | 0.48 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
30 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
30 × 10 | 0.47 | 0.06 | 0.04 | 0.05 | 0.01 | 0.03 | 0.01 | 0.01 | 0.03 | −0.01
30 × 15 | 1.29 | 0.03 | 0.02 | 0.03 | 0.02 | −0.02 | 0.02 | 0.02 | −0.02 | 0.02
30 × 20 | 1.63 | 0.02 | 0.00 | 0.03 | 0.02 | 0.00 | 0.02 | 0.02 | 0.00 | 0.02
40 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
40 × 10 | 0.22 | 0.06 | 0.02 | 0.03 | 0.02 | 0.01 | −0.01 | 0.00 | 0.00 | −0.01
40 × 15 | 2.16 | 0.09 | 0.05 | 0.04 | 0.04 | 0.02 | −0.02 | −0.01 | −0.05 | −0.05
40 × 20 | 2.11 | 0.10 | −0.08 | −0.04 | 0.04 | −0.08 | −0.05 | −0.01 | −0.08 | −0.07
50 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
50 × 10 | 0.19 | 0.16 | 0.14 | 0.04 | 0.11 | 0.11 | 0.00 | 0.08 | 0.08 | −0.03
50 × 15 | 2.28 | 0.24 | 0.18 | 0.10 | 0.15 | 0.14 | 0.05 | 0.10 | 0.09 | 0.02
50 × 20 | 2.73 | 0.17 | 0.02 | 0.00 | 0.07 | −0.08 | −0.10 | 0.04 | −0.11 | −0.10
60 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
60 × 10 | 0.19 | 0.07 | 0.11 | −0.01 | −0.04 | 0.08 | −0.03 | −0.06 | 0.05 | −0.05
60 × 15 | 1.98 | 0.21 | 0.09 | 0.10 | 0.12 | 0.06 | 0.01 | 0.08 | 0.06 | −0.04
60 × 20 | 2.81 | 0.20 | 0.01 | 0.00 | 0.03 | −0.07 | −0.12 | −0.03 | −0.08 | −0.17
Avg. | 0.80 | 0.06 | 0.02 | 0.02 | 0.03 | 0.01 | −0.01 | 0.01 | 0.00 | −0.02
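The ARPD figures in Tables 4–9 follow the usual definition in the flow shop literature: the relative percentage deviation 100·(Cmax − Cref)/Cref from a reference (best-known) makespan, averaged over runs or instances; a negative value means the reference was improved. A minimal sketch under that assumption (the exact averaging scheme used in the paper may differ; the function name is ours):

```python
def arpd(makespans, reference):
    """Average relative percentage deviation from a reference makespan."""
    return sum(100.0 * (c - reference) / reference for c in makespans) / len(makespans)

# Hypothetical example: two runs on an instance whose best-known makespan is 1000
print(round(arpd([1005, 1010], 1000), 2))  # 0.75
```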
Table 5. Comparison of ARPD and computation time (CPU) for constructive heuristic methods (bold indicates better results).

Instance | NEH ARPD | NEH CPU(s) | NEH* ARPD | NEH* CPU(s) | FRB5 ARPD | FRB5 CPU(s)
100 × 20 | 5.82 | 0.00 | 5.82 | 0.01 | 2.45 | 0.10
100 × 40 | 5.30 | 0.00 | 5.30 | 0.03 | 2.57 | 0.21
100 × 60 | 4.89 | 0.00 | 4.89 | 0.05 | 2.19 | 0.32
200 × 20 | 4.15 | 0.00 | 4.15 | 0.10 | 1.42 | 0.89
200 × 40 | 4.81 | 0.01 | 4.81 | 0.23 | 1.67 | 1.91
200 × 60 | 4.48 | 0.01 | 4.48 | 0.39 | 1.56 | 2.73
300 × 20 | 3.17 | 0.01 | 3.17 | 0.33 | 0.80 | 2.75
300 × 40 | 4.05 | 0.02 | 4.05 | 0.79 | 1.07 | 6.45
300 × 60 | 3.94 | 0.03 | 3.94 | 1.31 | 1.23 | 9.85
400 × 20 | 2.44 | 0.01 | 2.44 | 0.80 | 0.50 | 6.27
400 × 40 | 3.80 | 0.03 | 3.80 | 1.91 | 0.82 | 15.83
400 × 60 | 3.42 | 0.04 | 3.42 | 3.14 | 0.75 | 24.39
500 × 20 | 2.06 | 0.02 | 2.06 | 1.53 | 0.43 | 12.10
500 × 40 | 3.17 | 0.04 | 3.17 | 3.75 | 0.63 | 31.73
500 × 60 | 3.27 | 0.06 | 3.27 | 6.05 | 0.57 | 47.97
600 × 20 | 1.70 | 0.03 | 1.70 | 2.60 | 0.24 | 20.76
600 × 40 | 2.96 | 0.06 | 2.96 | 6.34 | 0.53 | 54.97
600 × 60 | 2.97 | 0.09 | 2.97 | 10.31 | 0.37 | 82.27
700 × 20 | 1.42 | 0.04 | 1.42 | 4.13 | 0.25 | 31.50
700 × 40 | 2.80 | 0.08 | 2.80 | 10.06 | 0.26 | 84.38
700 × 60 | 2.66 | 0.13 | 2.66 | 17.22 | 0.32 | 249.99
800 × 20 | 1.35 | 0.04 | 1.35 | 6.06 | 0.21 | 42.31
800 × 40 | 2.45 | 0.10 | 2.45 | 15.48 | 0.24 | 125.13
800 × 60 | 2.74 | 0.16 | 2.74 | 26.17 | 0.31 | 195.41
Avg | 3.33 | 0.04 | 3.33 | 4.95 | 0.89 | 43.76
Table 6. Computational results of algorithms with Tmax = 15 × n × m milliseconds (bold indicates better results).

Instance | IGRS | | | IGALL | | | VBIH | |
 | Avg. | Min | Max | Avg. | Min | Max | Avg. | Min | Max
100 × 20 | 0.45 | 0.13 | 0.74 | 0.12 | −0.07 | 0.33 | 0.00 | −0.21 | 0.23
100 × 40 | 0.56 | 0.26 | 0.90 | 0.28 | 0.04 | 0.49 | 0.13 | −0.09 | 0.37
100 × 60 | 0.50 | 0.22 | 0.78 | 0.23 | 0.02 | 0.42 | 0.27 | 0.05 | 0.54
200 × 20 | 0.42 | 0.24 | 0.61 | 0.19 | 0.04 | 0.35 | 0.03 | −0.14 | 0.17
200 × 40 | 0.47 | 0.25 | 0.68 | 0.14 | −0.01 | 0.31 | 0.01 | −0.21 | 0.24
200 × 60 | 0.46 | 0.24 | 0.65 | 0.17 | −0.01 | 0.37 | 0.05 | −0.15 | 0.22
300 × 20 | 0.22 | 0.06 | 0.35 | 0.10 | −0.03 | 0.21 | −0.03 | −0.17 | 0.11
300 × 40 | 0.35 | 0.15 | 0.56 | 0.04 | −0.16 | 0.25 | −0.18 | −0.35 | −0.02
300 × 60 | 0.36 | 0.16 | 0.56 | 0.12 | −0.06 | 0.27 | −0.03 | −0.20 | 0.15
400 × 20 | 0.20 | 0.11 | 0.33 | 0.09 | 0.01 | 0.18 | 0.03 | −0.03 | 0.10
400 × 40 | 0.31 | 0.12 | 0.50 | 0.01 | −0.11 | 0.14 | −0.17 | −0.32 | −0.03
400 × 60 | 0.27 | 0.08 | 0.46 | −0.02 | −0.17 | 0.12 | −0.16 | −0.27 | −0.05
500 × 20 | 0.15 | 0.06 | 0.26 | 0.12 | 0.07 | 0.18 | 0.03 | −0.05 | 0.12
500 × 40 | 0.29 | 0.12 | 0.45 | 0.00 | −0.10 | 0.11 | −0.19 | −0.30 | −0.07
500 × 60 | 0.33 | 0.15 | 0.51 | −0.06 | −0.20 | 0.08 | −0.19 | −0.31 | −0.06
600 × 20 | 0.11 | 0.03 | 0.18 | 0.02 | −0.03 | 0.07 | 0.01 | −0.05 | 0.06
600 × 40 | 0.38 | 0.23 | 0.54 | 0.03 | −0.07 | 0.13 | −0.05 | −0.17 | 0.06
600 × 60 | 0.30 | 0.12 | 0.50 | −0.05 | −0.18 | 0.05 | −0.13 | −0.23 | −0.04
700 × 20 | 0.11 | 0.05 | 0.18 | 0.04 | −0.01 | 0.08 | 0.03 | −0.03 | 0.08
700 × 40 | 0.24 | 0.13 | 0.37 | −0.11 | −0.20 | 0.00 | −0.21 | −0.28 | −0.12
700 × 60 | 0.26 | 0.09 | 0.46 | −0.05 | −0.15 | 0.04 | −0.13 | −0.24 | −0.03
800 × 20 | 0.07 | 0.02 | 0.14 | 0.06 | 0.02 | 0.12 | 0.01 | −0.04 | 0.05
800 × 40 | 0.22 | 0.09 | 0.36 | −0.06 | −0.14 | 0.02 | −0.25 | −0.33 | −0.17
800 × 60 | 0.40 | 0.25 | 0.57 | 0.02 | −0.04 | 0.08 | −0.19 | −0.29 | −0.10
Avg | 0.31 | 0.14 | 0.48 | 0.06 | −0.06 | 0.18 | −0.05 | −0.18 | 0.08
Table 7. Computational results of algorithms with Tmax = 30 × n × m milliseconds (bold indicates better results).

n × m | IGRS | | | IGALL | | | VBIH | |
 | Avg. | Min | Max | Avg. | Min | Max | Avg. | Min | Max
100 × 20 | 0.25 | −0.02 | 0.54 | 0.03 | −0.11 | 0.16 | −0.05 | −0.25 | 0.16
100 × 40 | 0.38 | 0.08 | 0.68 | 0.05 | −0.14 | 0.23 | 0.07 | −0.15 | 0.33
100 × 60 | 0.36 | 0.13 | 0.63 | 0.05 | −0.17 | 0.23 | 0.21 | −0.02 | 0.51
200 × 20 | 0.28 | 0.12 | 0.45 | 0.07 | −0.05 | 0.22 | 0.00 | −0.16 | 0.14
200 × 40 | 0.30 | 0.06 | 0.51 | −0.08 | −0.25 | 0.08 | −0.04 | −0.25 | 0.16
200 × 60 | 0.26 | 0.05 | 0.51 | −0.04 | −0.19 | 0.13 | 0.02 | −0.17 | 0.19
300 × 20 | 0.12 | −0.01 | 0.23 | 0.01 | −0.10 | 0.14 | −0.06 | −0.21 | 0.08
300 × 40 | 0.17 | −0.03 | 0.41 | −0.22 | −0.37 | −0.04 | −0.23 | −0.39 | −0.07
300 × 60 | 0.18 | −0.03 | 0.42 | −0.08 | −0.25 | 0.12 | −0.09 | −0.24 | 0.07
400 × 20 | 0.12 | 0.04 | 0.19 | 0.03 | −0.04 | 0.09 | 0.01 | −0.06 | 0.09
400 × 40 | 0.16 | −0.03 | 0.37 | −0.20 | −0.38 | −0.07 | −0.22 | −0.36 | −0.08
400 × 60 | 0.08 | −0.11 | 0.24 | −0.22 | −0.37 | −0.07 | −0.20 | −0.31 | −0.11
500 × 20 | 0.11 | 0.02 | 0.20 | 0.07 | 0.01 | 0.13 | 0.02 | −0.06 | 0.10
500 × 40 | 0.13 | −0.05 | 0.32 | −0.16 | −0.26 | −0.06 | −0.24 | −0.36 | −0.12
500 × 60 | 0.15 | −0.03 | 0.32 | −0.22 | −0.35 | −0.09 | −0.23 | −0.35 | −0.10
600 × 20 | 0.07 | −0.02 | 0.15 | −0.01 | −0.06 | 0.04 | −0.02 | −0.07 | 0.03
600 × 40 | 0.20 | 0.04 | 0.36 | −0.11 | −0.19 | −0.02 | −0.19 | −0.29 | −0.07
600 × 60 | 0.13 | −0.03 | 0.32 | −0.23 | −0.37 | −0.11 | −0.26 | −0.37 | −0.15
700 × 20 | 0.08 | 0.01 | 0.16 | 0.02 | −0.03 | 0.06 | −0.01 | −0.07 | 0.03
700 × 40 | 0.09 | −0.01 | 0.19 | −0.27 | −0.38 | −0.15 | −0.34 | −0.42 | −0.27
700 × 60 | 0.07 | −0.11 | 0.23 | −0.21 | −0.28 | −0.13 | −0.28 | −0.39 | −0.19
800 × 20 | 0.04 | −0.01 | 0.09 | 0.02 | −0.01 | 0.05 | 0.00 | −0.04 | 0.04
800 × 40 | 0.07 | −0.07 | 0.21 | −0.20 | −0.30 | −0.11 | −0.28 | −0.35 | −0.21
800 × 60 | 0.22 | 0.10 | 0.40 | −0.13 | −0.22 | −0.04 | −0.23 | −0.32 | −0.13
Avg | 0.17 | 0.00 | 0.34 | −0.08 | −0.20 | 0.03 | −0.11 | −0.24 | 0.02
Table 8. Computational results of algorithms with Tmax = 45 × n × m milliseconds (bold indicates better results).

n × m | IGRS | | | IGALL | | | VBIH | |
 | Avg. | Min | Max | Avg. | Min | Max | Avg. | Min | Max
100 × 20 | 0.13 | −0.14 | 0.39 | −0.04 | −0.21 | 0.10 | −0.25 | −0.44 | −0.03
100 × 40 | 0.29 | 0.02 | 0.59 | −0.05 | −0.25 | 0.13 | −0.18 | −0.35 | −0.01
100 × 60 | 0.26 | 0.03 | 0.48 | −0.03 | −0.28 | 0.17 | −0.02 | −0.17 | 0.19
200 × 20 | 0.21 | 0.05 | 0.37 | 0.00 | −0.14 | 0.12 | −0.12 | −0.27 | 0.03
200 × 40 | 0.21 | 0.01 | 0.40 | −0.20 | −0.36 | −0.03 | −0.30 | −0.53 | −0.07
200 × 60 | 0.14 | −0.07 | 0.37 | −0.14 | −0.30 | 0.02 | −0.27 | −0.43 | −0.10
300 × 20 | 0.07 | −0.06 | 0.17 | −0.04 | −0.18 | 0.10 | −0.15 | −0.26 | −0.05
300 × 40 | 0.06 | −0.13 | 0.27 | −0.33 | −0.47 | −0.17 | −0.45 | −0.56 | −0.28
300 × 60 | 0.08 | −0.14 | 0.34 | −0.24 | −0.40 | −0.04 | −0.32 | −0.47 | −0.17
400 × 20 | 0.09 | 0.00 | 0.17 | −0.03 | −0.12 | 0.02 | −0.05 | −0.12 | 0.01
400 × 40 | 0.09 | −0.09 | 0.30 | −0.44 | −0.57 | −0.30 | −0.41 | −0.52 | −0.28
400 × 60 | −0.03 | −0.23 | 0.16 | −0.48 | −0.64 | −0.31 | −0.41 | −0.52 | −0.32
500 × 20 | 0.07 | −0.02 | 0.18 | 0.02 | −0.06 | 0.08 | −0.04 | −0.11 | 0.06
500 × 40 | 0.04 | −0.16 | 0.21 | −0.41 | −0.53 | −0.29 | −0.42 | −0.50 | −0.29
500 × 60 | 0.02 | −0.14 | 0.17 | −0.44 | −0.56 | −0.30 | −0.41 | −0.54 | −0.29
600 × 20 | 0.04 | −0.04 | 0.13 | −0.04 | −0.08 | 0.01 | −0.05 | −0.08 | −0.01
600 × 40 | 0.11 | −0.05 | 0.29 | −0.32 | −0.41 | −0.21 | −0.27 | −0.39 | −0.15
600 × 60 | 0.03 | −0.12 | 0.22 | −0.45 | −0.60 | −0.33 | −0.35 | −0.44 | −0.23
700 × 20 | 0.06 | −0.02 | 0.14 | 0.00 | −0.05 | 0.05 | −0.03 | −0.08 | 0.02
700 × 40 | 0.01 | −0.11 | 0.13 | −0.36 | −0.48 | −0.24 | −0.42 | −0.50 | −0.35
700 × 60 | −0.01 | −0.20 | 0.16 | −0.30 | −0.40 | −0.22 | −0.37 | −0.48 | −0.25
800 × 20 | 0.02 | −0.04 | 0.07 | 0.01 | −0.03 | 0.04 | −0.01 | −0.06 | 0.03
800 × 40 | −0.01 | −0.15 | 0.12 | −0.27 | −0.36 | −0.17 | −0.36 | −0.43 | −0.29
800 × 60 | 0.13 | 0.00 | 0.31 | −0.21 | −0.30 | −0.14 | −0.32 | −0.40 | −0.22
Average | 0.09 | −0.07 | 0.26 | −0.20 | −0.32 | −0.08 | −0.25 | −0.36 | −0.13
Table 9. Computational results of Taillard’s instances with Tmax = 45 × n × m milliseconds (bold indicates better results).

n × m | IGRS Avg | IGRS* Avg | IGALL Avg | IGALL* Avg | VBIH Avg | VBIH* Avg
20 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
20 × 10 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01
20 × 20 | 0.01 | 0.01 | 0.00 | 0.00 | 0.00 | 0.01
50 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
50 × 10 | 0.34 | 0.43 | 0.40 | 0.43 | 0.26 | 0.31
50 × 20 | 0.57 | 0.79 | 0.53 | 0.71 | 0.33 | 0.53
100 × 5 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
100 × 10 | 0.10 | 0.19 | 0.04 | 0.11 | 0.02 | 0.09
100 × 20 | 0.82 | 1.33 | 0.89 | 1.23 | 0.54 | 0.94
200 × 10 | 0.05 | 0.14 | 0.03 | 0.05 | 0.03 | 0.05
200 × 20 | 1.04 | 1.46 | 0.82 | 1.29 | 0.55 | 1.02
500 × 20 | 0.47 | 0.92 | 0.35 | 0.75 | 0.26 | 0.64
Overall Avg. | 0.28 | 0.44 | 0.26 | 0.38 | 0.17 | 0.30

Share and Cite

Kizilay, D.; Tasgetiren, M.F.; Pan, Q.-K.; Gao, L. A Variable Block Insertion Heuristic for Solving Permutation Flow Shop Scheduling Problem with Makespan Criterion. Algorithms 2019, 12, 100. https://doi.org/10.3390/a12050100

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.