Article

A Matheuristic Approach for the No-Wait Flowshop Scheduling Problem with Makespan Criterion

1 School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
2 School of Physics and Electronic Science, Hubei Normal University, Huangshi 435002, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(5), 913; https://doi.org/10.3390/sym14050913
Submission received: 28 March 2022 / Revised: 18 April 2022 / Accepted: 26 April 2022 / Published: 29 April 2022
(This article belongs to the Section Computer)

Abstract

The No-wait Flowshop Scheduling Problem (NWFSP) has long been a research hotspot because of its importance in various industries. This paper presents a matheuristic approach that combines exact and heuristic algorithms to solve it with the objective of minimizing the makespan. Firstly, according to the symmetry characteristics of the NWFSP, a local search method is designed in which the first job and the last job in the symmetrical positions remain unchanged; a three-level neighborhood division method and a corresponding rapid evaluation method at each level are then given. The two proposed heuristic algorithms are built on them and can effectively avoid already searched areas, so that local optimal solutions are obtained quickly, and for small-scale instances the optimal solutions can even be obtained directly. Secondly, using the equivalence between this problem and the Asymmetric Traveling Salesman Problem (ATSP), an exact method for solving the NWFSP is constructed. Importing the results of the heuristics into the model improves the efficiency of the Miller-Tucker-Zemlin (MTZ) model for solving small-scale NWFSP instances. Thirdly, the matheuristic algorithm is tested on 141 instances of the Tailard and Reeves benchmarks, and each optimal solution is obtained within 134 s, which verifies the stability and effectiveness of the algorithm.

1. Introduction

Improving production efficiency is one of the final aims of the manufacturing industry. Production scheduling can meet that need, so it is used in manufacturing systems extensively. Considering the differences in factory conditions and job processing requirements, the scheduling problem has been divided into several kinds, such as flow shop [1,2], hybrid flow shop [3,4], job shop [5,6], open shop [7,8], and so on. The flow shop problem (FSP) claims that each job should be processed in the same order through every stage, and the hybrid flow shop refers to parallel machines at one or more stages. A job shop environment permits jobs to pass some steps or repeat them at a given machine route, and the processing order of the job in an open shop can be arbitrary. It can be concluded that the flow shop problem is the basic one, then the others can be seen as its variants. In other words, the FSP reflects some common characteristics of production scheduling problems.
In recent years, flow shop scheduling problems with constraints have become research hotspots owing to actual technological demands. Typically, the no-wait attribute, which stipulates that every job must be processed from the first stage to the last without any interruption, fits many production industries. For instance, in steel processing, maintaining the temperature is a necessary condition, which calls for the no-wait demand: any waiting requires additional time and energy to reheat the material to the desired temperature, or the product quality degrades. As early as 1971, Callahan [9] considered that the steel industry required dependent processing and the no-wait constraint. Tang and Song [10] modeled the mill roll annealing process as a no-wait hybrid flow shop scheduling problem. Höhn and Jacobs [11] also considered continuous casting in the multistage production process a variant of no-wait flowshop scheduling. Yuan and Jing [12] explained and affirmed the importance of the no-wait flowshop problem to steel manufacturing. Similarly, Reklaitis [13] pointed out that the no-wait constraint is also suitable for the chemical industry due to its instability. Hall [14] emphasized that a similar situation occurs in the plastic molding and silverware industries, as well as the food processing industry [15]. In addition, many other practical problems can be solved by modeling them as no-wait scheduling problems, such as controlling health care management costs [16], developing a metro train real-time control system [17], and so on.
In this paper, the NWFSP with the makespan criterion is considered because of its universality. Minimizing the maximum completion time of the jobs ($C_{\max}$) maximizes the efficiency of machine usage [18]. The problem can be described as $Fm|nwt|C_{\max}$ in the symbolic notation proposed by Graham [19]. The first field represents the production environment, which is a flow shop with $m$ stages. The second field shows the constraints, where "$nwt$" stands for the no-wait constraint. The performance criterion occupies the third field.
The research on the NWFSP began around the 1970s. Garey and Johnson [20] proved that the NWFSP is strongly NP-hard when the number of machines is no less than three. At the same time, the problem was documented as NP-hard for more than two machines by Papadimitriou and Kanellakis [21]. Due to the complexity of the NWFSP and the wide range of its applications, many heuristic algorithms have been proposed, by Bonney and Gundry [22], King and Spachis [23], Gangadharan and Rajendran [24], Laha [25], and others, which aim to bring the solution as close as possible to the optimum within an acceptable time. In addition, exact methods [26] such as branch-and-bound [27] and column generation [28] are widely used to obtain optimal solutions for small-scale problems. Metaheuristic algorithms, whose search mechanisms are inspired by phenomena in various fields of the real world, are also active in solving the NWFSP. The surveys of Hall [14] and Allahverdi [29] together detail the research progress on the NWFSP before 2016. A typical example is the discrete particle swarm optimization algorithm [30]. In recent years, various improved or emerging metaheuristic algorithms have been used to solve the NWFSP, for example, a hybrid ant colony optimization algorithm (HACO) [31], a discrete water wave optimization algorithm (DWWO) [32], a factorial-based particle swarm optimization with a population adaptation mechanism (FPAPSO) [33], a single water wave optimization (SWWO) [34], a hybrid biogeography-based optimization with variable neighborhood search (HBV) [35], a quantum-inspired cuckoo co-search (QCCS) algorithm [36], and an improved discrete migrating birds optimization algorithm (IDMBO) [37], all with good results. In 2021, Lai used the discrete wolf pack algorithm [38] to solve the NWFSP. Each algorithm has its own advantages, which make it perform better than others in certain applications. In order to give full play to the advantages of each algorithm and enhance its generality, hyper-heuristic (HH) [39] and matheuristic [40] algorithms have appeared. The emergence of a new generation of large-scale mathematical programming optimizers such as CPLEX [41] and Gurobi [42] has greatly improved the performance of matheuristic algorithms, which combine heuristic and exact algorithms. In recent years, matheuristics [43] have been widely used in scheduling problems, including the NWFSP. Since the NWFSP can be transformed into the ATSP [44], it is effective to incorporate more mathematical computation into its search mechanism.
Existing NWFSP research mostly focuses on designing algorithms from the perspective of instances rather than of the problem. The heuristic rules, or the component mechanisms and parameter selections of the metaheuristics, are determined by the quality of the test results, so as to balance local search and global search. In this way, the algorithm depends on the instances, and its performance differs across instances. This paper aims to explore heuristic rules from the perspective of the problem, so that the rules show consistent effects across instances, thereby enhancing the stability of the algorithm. Existing neighborhood structures for local search focus on insertion, swap, destruction-construction, etc., and do not separate the already searched area from the unsearched area, so the same sequence cannot avoid being tested repeatedly. Likewise, good structures are not necessarily preserved after multiple iterations. In this paper, by expanding the search domain gradually, the search process is kept in unsearched potential areas, and repeated calculation is reduced. The experimental results in Section 4.1 show that such a search method can also effectively preserve good structures. Exact algorithms are limited by the complexity and scale of the problem, so they are less applicable than heuristics and metaheuristics. Using heuristic rules to guide the search direction of an exact algorithm can improve the solving efficiency and widen the application scope of the exact algorithm. The experiments in Section 4.2 demonstrate that, by fully exploiting the advantages of both methods, better results can be obtained than with the current mainstream metaheuristic algorithms for the NWFSP.
The rest of this article is organized as follows. Section 2 describes the problem of NWFSP. Section 3 proposes two heuristic algorithms for improving the solutions of NWFSP. Section 4 discusses the performance of the proposed matheuristic algorithm on two benchmarks. Section 5 summarizes the contributions of this article and provides some suggestions for future research directions.

2. No-Wait Flow Shop Scheduling Problem

Compared with the FSP, the NWFSP permits no interruption while a job is processed from the first stage to the last. The details are expressed as follows.
Typically, the NWFSP includes $m$ stages, and $n$ jobs follow the same given route, where $m$ and $n$ are fixed. It also restricts each stage to one machine, so it contains $m$ machines overall. Furthermore, the processing time of job $j$ on machine $i$ is known and denoted $P(i, j)$; it is natural to consider each $P(i, j)$ as the element in row $i$, column $j$ of the processing time matrix $P$. A job can be processed on only one machine at a time and may be transferred to the next stage only after the current operation is finished; similarly, a machine cannot handle more than one job at any time. Setup times and transfer times are ignored here.

2.1. Problem Description

As is well known, the makespan, the completion time of the last job in a sequence, is one of the indicators that directly measure economic benefit. Let $(C_{\max})_{\min}$ denote the minimum makespan; its expression is shown in Equation (1). The meanings of the related symbols are as follows.
The job set: $J = \{1, 2, 3, \ldots, n \mid n \in \mathbb{Z}, n \ge 1\}$
The machine set: $M = \{1, 2, 3, \ldots, m \mid m \in \mathbb{Z}, m \ge 1\}$
$\pi$: any sequence
$\Pi$: the set of all possible sequences $\pi$
$\pi_k$: the $k$th job in the sequence, $k \in \mathbb{Z}$ and $k \ge 1$
$C_{ij}$: the end time of job $j$ on machine $i$
$(C_{\max})_{\min} = \min_{\pi \in \Pi} (C_{m, \pi_n})$   (1)
$DY(j_1, j_2)$: the delay of job $j_2$ when it is arranged after job $j_1$, $j_1, j_2 \in J$, $j_1 \ne j_2$
$PR_j$: the tail value of job $j$,
$PR_j = \sum_{i=2}^{m} P(i, j), \quad j \in J, \; i \in M$   (2)
$P_1$: the sum of the processing times of all the jobs on the first machine,
$P_1 = \sum_{j=1}^{n} P(1, j)$   (3)
Owing to the no-wait constraint, the delay value between two adjacent jobs is constant and can be calculated from the matrix $P$. As shown in Figure 1, $C_{\max}$ has been divided into three parts and can be calculated from Equation (4). Since $P_1$ is obviously a constant, the change of $C_{\max}$ is affected only by the other two parts. The simplified objective function is then formulated in Equation (5).
$C_{\max} = \sum_{j=1}^{n-1} DY(\pi_j, \pi_{j+1}) + PR_{\pi_n} + P_1$   (4)
$C_{\max} = \sum_{j=1}^{n-1} DY(\pi_j, \pi_{j+1}) + PR_{\pi_n}$   (5)
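For illustration, the decomposition of Equations (4) and (5) can be made concrete with a short C++ sketch. The exact way DY is derived from the matrix P is not spelled out above, so the sketch uses one common reconstruction: the classical no-wait start-to-start delay minus the machine-1 processing time of the preceding job, chosen so that Equation (4) holds. This is a simplified sketch, not the code used in the experiments; the function names and 0-based indexing are chosen only for this example.

#include <vector>
#include <algorithm>

// Processing times: P[i][j] = time of job j on machine i (0-indexed here).
using Matrix = std::vector<std::vector<int>>;

// Delay of job j2 when scheduled directly after job j1, measured beyond
// j1's first-stage processing time (a reconstruction consistent with Eq. (4)).
int DY(const Matrix& P, int j1, int j2) {
    int m = static_cast<int>(P.size());
    int delay = 0, sum1 = 0, sum2 = 0;
    for (int k = 0; k < m; ++k) {
        sum1 += P[k][j1];                  // j1 has finished stages 0..k
        if (k > 0) sum2 += P[k - 1][j2];   // j2 has finished stages 0..k-1
        delay = std::max(delay, sum1 - sum2);
    }
    return delay - P[0][j1];               // drop the constant machine-1 part (P1)
}

// Makespan of a permutation pi according to Equation (4):
// Cmax = sum of DY over adjacent pairs + PR(last job) + P1.
int makespan(const Matrix& P, const std::vector<int>& pi) {
    int m = static_cast<int>(P.size()), n = static_cast<int>(P[0].size());
    int P1 = 0;
    for (int j = 0; j < n; ++j) P1 += P[0][j];
    int pr_last = 0;
    for (int i = 1; i < m; ++i) pr_last += P[i][pi.back()];
    int sum_dy = 0;
    for (std::size_t k = 0; k + 1 < pi.size(); ++k)
        sum_dy += DY(P, pi[k], pi[k + 1]);
    return sum_dy + pr_last + P1;
}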

2.2. Equivalent Model

Considering jobs as cities and the delay value as the distance between two cities, the NWFSP can be transformed into the ATSP. After adding the virtual cities/jobs "0" and "n + 1", the model can be described as follows. For convenience, "0" and "n + 1" are counted as the same city, where "0" represents the starting city/job and "n + 1" represents the ending city/job. The city/job set is $JC = \{0, 1, 2, 3, \ldots, n+1 \mid n \in \mathbb{Z}, n \ge 1\}$.
$DY(0, j_2) = P_1, \quad j_2 \in [1, n]$   (6)
$DY(j_1, n+1) = PR_{j_1}, \quad j_1 \in [1, n]$   (7)
The objective:
$\mathrm{Minimize} \left( \sum_{j_2=1}^{n} DY(0, j_2)\, x_{0, j_2} + \sum_{j_1=1}^{n} \sum_{j_2=1, j_1 \ne j_2}^{n+1} DY(j_1, j_2)\, x_{j_1, j_2} \right)$   (8)
Subject to:
$\sum_{j_2=1}^{n} x_{0, j_2} = 1$   (9)
$\sum_{j_2=1, j_2 \ne j_1}^{n+1} x_{j_1, j_2} = 1, \quad j_1 \in [1, n]$   (10)
$\sum_{j_1=0, j_1 \ne j_2}^{n} x_{j_1, j_2} = 1, \quad j_2 \in [1, n]$   (11)
$\sum_{j_1=1}^{n} x_{j_1, n+1} = 1$   (12)
$\sum_{j_1, j_2 \in S, \, j_1 \ne j_2} x_{j_1, j_2} \le |S| - 1, \quad 2 \le |S| \le n, \; S \subset JC, \; j_1 \in [0, n], \; j_2 \in [1, n+1], \; \text{and if } j_1 = 0 \text{ then } j_2 \ne n+1$   (13)
$x_{j_1, j_2} = \begin{cases} 1 & \text{if } j_2 \text{ is the next job/city of } j_1 \\ 0 & \text{otherwise} \end{cases}$   (14)

3. The Proposed Heuristic Algorithms

Since the NWFSP can be transformed into the ATSP, the path-exchange strategy used in the Lin-Kernighan-Helsgaun (LKH) algorithm [45] is also suitable for solving the NWFSP. Based on this idea, we divide the neighborhood and construct heuristic algorithms.

3.1. Basic Neighborhood Structures

Considering the search efficiency, the search domain is initially limited to three levels, that is, the neighborhoods $N_2$, $N_3$, $N_4$. The characteristics of the three basic neighborhoods are given in Table 1. The $n$-exchange optimum means that the solution cannot be improved by exchanging $n$ areas, which are arbitrarily divided in the sequence.
The $N_2$ neighborhood is shown in Figure 2 as an example, where $L_1$ and $L_2$ are the two area lines and the three regions are $Area_0$, $Area_1$, $Area_2$.

3.2. Selected Potential Sub-Neighborhoods

In order to improve the search efficiency and advance the search process into more promising sub-domains, three rules are proposed and three sub-domains are selected, namely $Z_2$, $Z_3$, and $Z_4$. The details are shown in Table 2. The two heuristic algorithms are carried out along these three neighborhoods.
  • Skipping consecutive area rule
Considering that the current solution sequence is almost always derived from the last region-exchange operation, the combinations containing an adjacent area number pair (that is, "01", "12", "23", "34") as a part do not temporarily need to be tested again.
  • First and last job unchanged rules
From the experimental results, along the evolutionary direction, the job in the last position is the most likely to coincide with that of the optimal solution because it is affected by the $PR$ values, followed by the job in the first position. When the evolution process has advanced to a certain level, the first and last jobs can therefore be fixed.
  • Low-Level-neighborhood first searched rule
The lower the neighborhood level, the higher the search priority. The algorithm flow chart is shown in Figure 3. “US” is the abbreviation of “Update successfully” and “UF” is the abbreviation of “Update failed”. For ease of representation, the structure in the dashed box in Figure 3 is referred to as “PHM”.
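The low-level-first rule and the flow of Figure 3 can be summarized with a short C++ sketch. It is only one possible reading of the flow chart, not the code used in the experiments; it assumes that vmcmp2, vmcmp3 and vmcmp4 (described later) each return true on a successful update (US) and false on an update failure (UF), and the type names are chosen only for this example.

#include <vector>
#include <functional>

using Sequence = std::vector<int>;
using LocalSearch = std::function<bool(Sequence&)>;   // true = US, false = UF

// One reading of the PHM structure in Figure 3: the lowest level is always
// retried first, and the search stops only when all three levels fail.
void PHM(Sequence& CS, LocalSearch vmcmp2, LocalSearch vmcmp3, LocalSearch vmcmp4) {
    while (true) {
        if (vmcmp2(CS)) continue;   // US in Z2: stay at the lowest level
        if (vmcmp3(CS)) continue;   // UF in Z2 but US in Z3: go back to Z2
        if (vmcmp4(CS)) continue;   // UF in Z2 and Z3 but US in Z4: go back to Z2
        break;                      // UF at all three levels: local optimum reached
    }
}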

3.3. Two Heuristic Algorithms

Based on the mechanism in Figure 3, it is easy to propose two heuristic algorithms according to whether the $PR$-value characteristic is highlighted. The two heuristic algorithms, HG1 and HG2, differ only in the first step, that is, in the input feasible solutions. The details are shown in Figure 4. The advantage of HG1 is randomness. HG2 always takes the job with the smallest $PR$ value as the final job to start the search, which is more in line with the characteristics of the NWFSP and can speed up convergence.
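As an illustration of the difference between the two inputs, the following sketch builds the initial sequences as we read Figure 4: a random permutation for HG1 (its stated advantage is randomness), and the same permutation with the smallest-$PR$ job moved to the last position for HG2. The helper names and the random-permutation assumption are chosen only for this example.

#include <vector>
#include <algorithm>
#include <numeric>
#include <random>

// Initial feasible solution for HG1: a random permutation of the n jobs.
std::vector<int> initial_HG1(int n, std::mt19937& rng) {
    std::vector<int> seq(n);
    std::iota(seq.begin(), seq.end(), 0);
    std::shuffle(seq.begin(), seq.end(), rng);
    return seq;
}

// Initial feasible solution for HG2: same, but the job with the smallest PR
// value is placed in the last position before PHM is applied.
std::vector<int> initial_HG2(const std::vector<int>& PR, std::mt19937& rng) {
    std::vector<int> seq = initial_HG1(static_cast<int>(PR.size()), rng);
    auto it = std::min_element(seq.begin(), seq.end(),
        [&](int a, int b) { return PR[a] < PR[b]; });
    std::iter_swap(it, seq.end() - 1);   // min-PR job goes last
    return seq;
}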
In order to improve the utilization of historical data and the efficiency of the neighborhood search, corresponding search algorithms are provided according to the different characteristics of the neighborhoods $Z_2$, $Z_3$ and $Z_4$: vmcmp2, vmcmp3, and vmcmp4.

3.3.1. Local Evolutionary Algorithm in Z2

Searching only the manner {102} can improve the evolution speed. Since $PR$ does not change, the evaluation speed is further improved. Let the size of $\Delta$ characterize the degree of improvement of the new sequence; the larger the value, the better the improvement.
$\Delta = C_{\max}(CS) - C_{\max}(NS) = DY(x_1, y_1) + DY(x_2, y_2) - DY(x_2, A) - DY(x_1, y_2)$   (15)
As can be seen from Figure 2, for the same CS, when $L_1$ and $L_2$ move, $x_1$, $x_2$, $y_1$, $y_2$ change, but $A$ and $B$ remain unchanged and can be regarded as constants. Because the first job is stored in $A$, once an adjustment is made according to {102}, the $A$ value of the new sequence must change, and the value of $DY(x_2, A)$ will change with it. As can be seen from Equation (15), in order to make full use of historical data and reduce repeated calculation, $\Delta$ is regarded as the combination of two components, represented by the functions $f$ and $g$ of Equations (16) and (17), respectively.
$f(x_2, A) = DY(x_2, A)$   (16)
$g(x_1, y_2) = DY(x_1, y_1) + DY(x_2, y_2) - DY(x_1, y_2)$   (17)
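A minimal C++ sketch of the $\Delta = g - f$ evaluation of Equations (15)-(17) is given below. It assumes the delay matrix and the current sequence CS are already available; the struct and member names are chosen only for this example.

#include <vector>

// Evaluation of the {102} move (Equations (15)-(17)).
// dy[a][b] is the delay DY(a, b); CS is the current sequence, CS[0] = A.
struct Z2Eval {
    const std::vector<std::vector<int>>& dy;
    const std::vector<int>& CS;

    // Single-point attribute f (Equation (16)): depends on x2 and the stable job A.
    int f(int ix2) const { return dy[CS[ix2]][CS[0]]; }

    // Double-point attribute g (Equation (17)) for the points
    // (x1, y1) = (CS[i1], CS[i1+1]) and (x2, y2) = (CS[i2], CS[i2+1]).
    int g(int i1, int i2) const {
        int x1 = CS[i1], y1 = CS[i1 + 1];
        int x2 = CS[i2], y2 = CS[i2 + 1];
        return dy[x1][y1] + dy[x2][y2] - dy[x1][y2];
    }

    // Improvement of the new sequence over CS (Equation (15)): Delta = g - f.
    int delta(int i1, int i2) const { return g(i1, i2) - f(i2); }
};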
The two-dimensional array $G$ is constructed, and the calculation result $g(x_1, y_2)$ of Equation (17) is stored in the $x_1$th row and the $y_2$th column of $G$. Such an operation ensures that, after the sequence is updated, only part of the intermediate results in $G$ needs to be recalculated. For the convenience of description, several definitions are given below.
Definition 1
(Point). An adjacent job pair makes up a point. If $(x_1, y_1)$ is a point, $x_1$ is the predecessor job of $y_1$, and $y_1$ is the successor job of $x_1$.
Definition 2
(Single-point Attribute (SPA)). A function value calculated from one point only, or together with the information of a stable job. For example, function $f$ is the single-point attribute of the point $(x_2, y_2)$, where $A$ is the stable job.
Definition 3
(Double-point Attribute (DPA)). A function value calculated from two points, reflecting the relationship of the four related jobs. For example, function $g$ is the double-point attribute of the points $(x_1, y_1)$ and $(x_2, y_2)$.
When traversing $(L_1, L_2)$, function $f$ is associated with the attributes of job $x_2$, which belongs to the second point $(x_2, y_2)$; in essence, however, $x_2$ is still a job in CS. If the index number of the first job in the sequence is 0, then the index range of $x_2$ is $[1, n-2]$ (denoting the index of $x_2$ in the sequence as $L(x_2)$, $L(x_2) \in [1, n-2]$). Function $f$ is thus seen as the single-point attribute of the jobs whose index ranges from 1 to $n-2$. Once CS is given, function $f$ can be calculated directly. It follows from the characteristics of the double-point attribute that if the two points do not change, then their DPAs do not change. That is, as long as con1 is met, the saved values in $G$ need not be updated, and the calculation speed can be increased.
Condition con1 is described as follows.
First, the jobs $x_1$ and $y_2$ remain in the original pair order; that is, in the sequence NS, $x_1$ is still on the left side of $y_2$.
Second, when looking up the $g$ value of job $x$ and job $y$, that is, $g(x, y)$, $x$ cannot be equal to the $x_1$ or $x_2$ of the last operation, and $y$ cannot be equal to $y_1$ or $y_2$.
The local optimal algorithm (Algorithm 1) for $Z_2$ is described as follows.
Algorithm 1. vmcmp2(con1) // Search {102} only
Step 1. Compute all the SPAs in CS, that is, all the $f$ values
Step 2. Compute all the DPAs in CS, that is, all the $g$ values
Step 3. $(\Delta_{\max}, x_1, x_2) = (g - f)_{\max}$ // Save the best positions to $x_1$ and $x_2$, which provide $\Delta_{\max}$
Step 4. if ($\Delta_{\max} > 0$)
     L1 = $x_1$ and L2 = $x_2$
     CS = CS({102}) // Perform operation {102} on CS to update CS
     update($f$); // Recalculate all the $f$ values
     update($g$, con1); // Recalculate the $g$ values that do not meet con1
     jump to Step 3
   end if
   else // CS is the best of $Z_2$ now
     Stop searching in $Z_2$
   end else

3.3.2. Local Evolutionary Algorithm in Z3

Since the first job and the last job are constant, the point attributes can be used together with the historical data to optimize the algorithm. Figure 5 shows the evolution form of "0213", from which Equation (20) can be obtained.
Here, $SD(j_1, j_2)$ represents the sum of the delay values of all the jobs from job $j_1$ to job $j_2$ in the current sequence. Equation (20) is the sum of the expressions in the three brackets; these three sub-expressions, which can be seen as DPAs, are in a rotation-symmetric relationship. Brackets 1 and 2 can share the function $f_1$; bracket 3 is denoted as function $f_2$.
$C_{\max}(CS) = SD(A, x_1) + DY(x_1, y_1) + SD(y_1, x_2) + DY(x_2, y_2) + SD(y_2, x_3) + DY(x_3, y_3) + SD(y_3, B) + PR(B)$   (18)
$C_{\max}(NS) = SD(A, x_1) + DY(x_1, y_2) + SD(y_2, x_3) + DY(x_3, y_1) + SD(y_1, x_2) + DY(x_2, y_3) + SD(y_3, B) + PR(B)$   (19)
$\Delta(0213) = C_{\max}(CS) - C_{\max}(NS) = [DY(x_1, y_1) - DY(x_1, y_2)] + [DY(x_2, y_2) - DY(x_2, y_3)] + [DY(x_3, y_3) - DY(x_3, y_1)]$   (20)
$f_1$ and $f_2$ are considered as functions of the job numbers, and the function values are saved; see Equations (21) and (22) for details.
$f_1(x_a, y_b) = DY(x_a, y_a) - DY(x_a, y_b), \quad L(x_a) \in [0, n-3], \; L(y_b) \in [2, n-1], \; \text{and } L(x_a) + 1 < L(y_b)$   (21)
$f_2(x_b, y_a) = DY(x_b, y_b) - DY(x_b, y_a), \quad L(x_b) \in [2, n-2], \; L(y_a) \in [1, n-3], \; \text{and } L(x_b) > L(y_a) - 1$   (22)
Compute and save each $f_1(x_a, y_b)$ value in the $x_a$th row, $y_b$th column of the two-dimensional array $G_1$. Assume point 1 is $(x_a, y_a)$, point 2 is $(x_b, y_b)$, and point 3 is $(x_c, y_c)$; then Equation (20) can be replaced by Equation (23) to reduce double counting.
$\Delta(0213) = f_1(x_a, y_b) + f_1(x_b, y_c) + f_2(x_c, y_a), \quad L(x_a) \in [0, n-4], \; L(x_b) \in [1, n-3], \; L(x_c) \in [2, n-2], \; \text{and } L(x_a) < L(x_b) < L(x_c)$   (23)
In contrast to the search in $S_3$, there is only one combination to try, so all $\Delta(0213)$ values are computed and the biggest improvement is chosen. If all $\Delta(0213)$ values are computed but the sequence is updated at most once, the utilization rate of the historical data is too low. Multiple Iterations in One step (MIO) is therefore proposed to improve this situation.
Traverse the $Z_3$ neighborhood of CS; specifically, calculate by Equation (23) and save all the values greater than 0 in $DY0$. As shown in Figure 6, $DY0$ is a multiset that sorts the results according to their $\Delta$ values, from largest to smallest, and itRS7 is the iterator of $DY0$.
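A small C++ sketch of the $DY0$ container described above is given here, assuming integer delays; the struct and comparator names are chosen only for this example.

#include <set>

// Every improving Z3 move found with Equation (23) is stored with its Delta
// value and the indices (T1, T2, T3) of its three points, sorted by Delta
// from largest to smallest, as in Figure 6.
struct Z3Move {
    int delta;          // Delta(0213) > 0
    int T1, T2, T3;     // indices of the three points in the current sequence
};

struct ByDeltaDesc {
    bool operator()(const Z3Move& a, const Z3Move& b) const {
        return a.delta > b.delta;   // largest improvement first
    }
};

using DY0Set = std::multiset<Z3Move, ByDeltaDesc>;

// Usage sketch: after computing Delta(0213) for every admissible (T1, T2, T3),
// keep only the positive values:
//   if (delta > 0) dy0.insert({delta, T1, T2, T3});
// The iterator itRS7 then walks DY0 from the best move downwards and applies
// every move whose boundary paths still exist (condition con2).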
Suppose there is an NWFSP with $n = 12$, and $DY0$ is as shown in Figure 6. As shown in Figure 7, its current solution is $CS(0) = \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11\}$.
Take out the element pointed to by the iterator itRS7, that is, $(T_1, T_2, T_3)$; the evolution process can then be seen in Figure 8. The indices of the points (1, 2), (3, 4), (10, 11) in the sequence are recorded as $T_1$, $T_2$, $T_3$, respectively. In Figure 6, $(T_1, T_2, T_3)$ implies $T_1 < T_2 < T_3$.
Because the last job is constant, $PR$ is constant too. The essence of every evolution manner is then the change of the boundary values. Combining Figure 6 and Figure 7, we can draw the conclusions in Table 3.
It is easy to see from Table 3 that, no matter how many iterations have been completed, as long as the changed paths of the boundary values still exist, evolution in this direction will succeed.
It can be seen from Figure 9 that after two iterations the corresponding point of $R_1$ can no longer be found in $CS(2)$, so the sequence cannot be evolved in this way. Next, we derive the conditions con2 under which the paths still exist after iterations, as shown in Table 4.
Traverse $DY0$ and determine whether each element meets con2 in turn. If con2 is satisfied, the element is applicable and the iteration can be completed directly. Therefore, $DY0$ is calculated only once, and there is a chance to iterate multiple times. The algorithm (Algorithm 2) vmcmp3(con2), suited to $Z_3$, is described as follows.
Algorithm 2. vmcmp3(con2)
Step 1. Input $CS(0)$, record Loc(j, $CS(0)$) and save it to the vector k1; its jth element, k1(j), saves the location of job j in $CS(0)$
Step 2. flag = 0 // Set the initial value of the flag
Step 3. Compute and save $f_1$, $f_2$, then compute $\Delta$ and save it to the multiset DY0 when $\Delta > 0$
Step 4. if (DY0.size() != 0) // If DY0 is not empty
      for itRS7 = DY0.begin() to DY0.end() - 1 // Traverse DY0 by itRS7
        if (con2) // If con2 is met
          CS = update(itRS7, {2013}, CS); // According to the locations provided by DY0, apply a {2013} operation to CS
          flag = 1 // Updated successfully
          update(Loc(j, CS)); // Save the new correspondence of location and job number
        end if
      end for
    end if
    else
      flag = 2
    end else
    if flag == 1 // Updated successfully
      jump to Step 2
    end if
    else // Update failed
      Stop searching in $Z_3$
    end else

3.3.3. Local Evolutionary Algorithm in Z4

Experiments show that the combination {03214} has a higher hit rate. Its main feature is that the first and last jobs are not changed. Figure 10 shows the evolution form.
Again, we start with the expression of $\Delta(03214)$, shown in Equations (24)–(26).
$C_{\max}(CS) = SD(A, x_1) + DY(x_1, y_1) + SD(y_1, x_2) + DY(x_2, y_2) + SD(y_2, x_3) + DY(x_3, y_3) + SD(y_3, x_4) + DY(x_4, y_4) + SD(y_4, B) + PR(B)$   (24)
$C_{\max}(NS) = SD(A, x_1) + DY(x_1, y_3) + SD(y_3, x_4) + DY(x_4, y_2) + SD(y_2, x_3) + DY(x_3, y_1) + SD(y_1, x_2) + DY(x_2, y_4) + SD(y_4, B) + PR(B)$   (25)
$\Delta(03214) = C_{\max}(CS) - C_{\max}(NS) = DY(x_1, y_1) + DY(x_2, y_2) + DY(x_3, y_3) + DY(x_4, y_4) - DY(x_1, y_3) - DY(x_4, y_2) - DY(x_3, y_1) - DY(x_2, y_4) = [DY(x_1, y_1) + DY(x_3, y_3) - DY(x_1, y_3) - DY(x_3, y_1)] + [DY(x_2, y_2) + DY(x_4, y_4) - DY(x_4, y_2) - DY(x_2, y_4)]$   (26)
Similarly, using Equation (27) to uniformly calculate the two bracketed parts of Equation (26) simplifies the calculation. Each bracketed part of $\Delta(03214)$ is denoted $V(x_a, y_b)$ here.
$V(x_a, y_b) = DY(x_a, y_a) + DY(x_b, y_b) - DY(x_a, y_b) - DY(x_b, y_a), \quad L(x_a) \in [0, n-4], \; L(y_b) \in [3, n-1], \; \text{and } L(x_a) < L(y_b) - 1$   (27)
Compute all the results by Equation (27) and save them in the multiset $RS2$.
Use two iterators, it1 and it2, to traverse $RS2$. Then select all the elements that meet condition con3 and store them in the multiset $JSF$. See Figure 11 and Figure 12 for details.
$JSF$ collects all successful evolution paths. Its iterator is itRS9 in Figure 12 ($T_1 < T_2 < T_3 < T_4$). Suppose the variables saved by a single element in $RS2$ are DOT1 and DOT2; then the conditions for con3 to be established are shown in Table 5.
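The con3 test of Table 5 can be written compactly. The following C++ sketch assumes that each RS2 element stores its V value of Equation (27) together with the indices DOT1 and DOT2 of its two points; the struct and function names are chosen only for this example.

// con3 (Table 5): two RS2 elements can be combined into one {03214} move when
// their V values sum to a positive improvement and their point indices interleave.
struct RS2Elem {
    int V;       // value of Equation (27) for this pair of points
    int DOT1;    // index of the first point (xa, ya) in the sequence
    int DOT2;    // index of the second point (xb, yb) in the sequence
};

bool con3(const RS2Elem& e1, const RS2Elem& e2) {
    int DV = e1.V + e2.V;
    if (DV <= 0) return false;   // greater-than-zero constraint
    bool interleave12 = e1.DOT1 < e2.DOT1 && e2.DOT1 < e1.DOT2 && e1.DOT2 < e2.DOT2;
    bool interleave21 = e2.DOT1 < e1.DOT1 && e1.DOT1 < e2.DOT2 && e2.DOT2 < e1.DOT2;
    return interleave12 || interleave21;   // position constraint
}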
Similarly, the MIO strategy is used to construct an algorithm (Algorithm 3) to speed up the search process.
Algorithm 3. vmcmp4(con3, con4)
Step 1. Input $CS(0)$, record Loc(j, $CS(0)$) and save it to the vector k1; its jth element, k1(j), saves the location of job j in $CS(0)$
Step 2. flag = 0 // Set the initial value of the flag
Step 3. for xA = 0 to n - 4 // The union of the ranges of L1 and L2
      for yB = 3 to n - 1 // The union of the ranges of L3 and L4
        cmpRS2(xA, yB) // Compute the DPAs of (xA, yA, xB, yB) and save them in RS2
      end for // End the inner loop
    end for // End the outer loop
Step 4. it1begin = RS2.begin()
    it2begin = it1begin + 1 // Assign initial values to the iterators it1 and it2
    for it1 = it1begin to it1end
      for it2 = it2begin to it2end // Double loop to traverse RS2
        if (con3(it1, it2)) // If the elements pointed to by it1 and it2 satisfy con3
          JSF.insert(it1, it2) // Save the evolution path provided by the elements pointed to by it1 and it2 in JSF
        end if
      end for // End the inner loop
    end for // End the outer loop
Step 5. if (JSF.size() != 0) // If JSF is not empty, CS(0) can be updated in its neighborhood Z4
      for itJSF9 = JSF.begin() to JSF.end() - 1 // Traverse the multiset JSF, which is the neighborhood Z4 of CS(0)
        if (con4(itJSF9)) // If the element pointed to by itJSF9 satisfies con4
          CS = update(itJSF9, {03214}, CS) // Update CS as specified by itJSF9
          update(Loc(j, CS)); // Save the new correspondence of location and job number
          flag = 1; // Assign 1 to the flag bit
        end if
      end for
    end if
    else
      flag = 2
    end else
    if flag == 1 // If there was an update in this round
      jump to Step 2
    end if
    else // This round of searching CS(0) in Z4 produced no update
      Stop searching in $Z_4$
    end else
Let the condition that must be satisfied for MIO in $Z_4$ be con4. For con4 to hold in the algorithm, two conditions must be met, as shown in Table 6.

3.4. Experimental Results of the Two Heuristic Algorithms

HG1 and HG2 are used to solve the Tailard (TA) benchmark, and the calculation results are recorded in Table 7. The heuristic algorithms were coded in C++. All the experiments were executed under Windows 10 on a desktop PC with an 11th Gen Intel(R) Core(TM) i7-1165G7 (2.80 GHz) processor and 16.0 GB RAM. The calculation formulas of the RPI (Relative Percentage Increase) and $\overline{RPI}$ are shown in Equations (28) and (29), where $C(opt, a1)$ refers to the optimal $C_{\max}$ of instance a1 and $C(i1, a1)$ refers to the result of algorithm i1 on instance a1.
$RPI(i1, a1) = \frac{C(i1, a1) - C(opt, a1)}{C(opt, a1)} \times 100$   (28)
$\overline{RPI}_{b \text{ to } b+9} = \frac{1}{10} \sum_{a1=b}^{b+9} RPI(i1, a1), \quad b \in \{1, 11, 21, 31, 41, 51, 61, 71, 81, 91, 101, 111\}$   (29)
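For reference, a direct C++ transcription of Equations (28) and (29) is given below, assuming the optimal makespans are known; the function names and the 1-based instance indexing follow this example only.

#include <vector>

// Equation (28): relative percentage increase of one algorithm on one instance.
double RPI(double c_alg, double c_opt) {
    return (c_alg - c_opt) / c_opt * 100.0;
}

// Equation (29): mean RPI over instances b .. b+9 (one scale group of TA),
// where rpi[a1] holds RPI(i1, a1) for instance a1.
double meanRPI(const std::vector<double>& rpi, int b) {
    double sum = 0.0;
    for (int a1 = b; a1 <= b + 9; ++a1) sum += rpi[a1];
    return sum / 10.0;
}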
In Table 7, HG2 has a smaller $\overline{RPI}$ value than HG1 at all instance scales; its fluctuation range is from 0.8% to 2.58%. The maximum $\overline{RPI}$ value of 2.58% appears on the small-scale instances (scale 20 × 10), while its $\overline{RPI}$ value on the largest-scale instances (500 × 20) is only 1.28%. On the whole, it performs better on large-scale instances. The average $\overline{RPI}$ is 1.30%, and the fluctuation is small, indicating that the algorithm has strong local search ability and can quickly converge to a local optimal solution regardless of the scale, with stable performance.
It can be seen from Table 7 that, out of 120 instances, HG1 exceeds HG2 in only 6 instances (shown in bold in the table). Therefore, for the goal of approximating the optimal value, the HG2 algorithm constructed by introducing the $PR$ value is much more effective than HG1, and HG2 even obtains the optimum directly on some small-scale instances (shown in italics in the table). Taking TA05 as an example, its Gantt chart is shown in Figure 13. If the result of HG1 is imported into Gurobi, another optimal solution sequence can be obtained; its Gantt chart is shown in Figure 14. Therefore, using different heuristic algorithms is effective for breadth search. The job blocks that the HG1 and HG2 sequences have in common can also be regarded as high-quality structures by comparing them with the optimal solution sequence. The heuristic algorithms can thus easily construct multiple high-quality solutions: arrange the jobs according to their $PR$ values from small to large, take them as the last job in turn, and then perform the PHM operation.
Similarly, we used HG1 and HG2 to solve the Reeves (REC) benchmark. The results are recorded in Table 8.
On this benchmark, the values obtained by HG2 are also better than those obtained by HG1. Except for REC11, HG1 never obtains a better $C_{\max}$ than HG2, and HG2 even obtains the optimal solution on REC03. Since the effect of HG2 is clearly better than that of HG1, HG2 was selected for the calculations in the subsequent studies.

4. Solving the Model with Matheuristic Algorithms

4.1. Test on Small-Scale Instances with MTZ Model

When Gurobi solves the NWFSP, if all constraints are added directly at the beginning of the optimization, the solution performance is greatly reduced, for example, when the famous Miller-Tucker-Zemlin model [46] (Formulas (6)–(12), Equation (14) and Formula (30)) is used directly. However, the model has high research value [47], so we combine it with the heuristic algorithm to improve its efficiency.
$\begin{cases} \mu_{j_1} - \mu_{j_2} + n \times x_{j_1, j_2} \le n - 1 \\ \mu_j \ge 0 \end{cases}$   (30)
Using Gurobi (version 9.1.2) to solve this model directly, the results are shown in Table 9 and Table 10. Among them, TA02 takes a longer time, and TA37 did not complete the calculation within 36,000 s.
We select the instances with longer solution times (T > 7 s) and combine heuristics to improve the efficiency. NA (not available) indicates that the calculation was not completed within 36,000 s.
The method of importing only the result of HG2 (denoted S2) into the optimizer as an initial feasible solution is called MH2; with it, the solution efficiency for the instances marked in italics is improved.
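For illustration, a simplified sketch of this idea with the Gurobi C++ API is given below: it builds the MTZ formulation (degree constraints plus Formula (30)) over the virtual cities 0 and n + 1 of Section 2.2 and passes the HG2 sequence to the solver as a MIP start. It is not the code used in the experiments; the DY matrix is assumed to already contain the delays to and from the virtual cities, and the variable and function names are chosen only for this example.

#include "gurobi_c++.h"
#include <vector>

// MH2 sketch: MTZ model of the equivalent ATSP with the HG2 sequence as a MIP start.
// DY covers cities 0..n+1; hg2 is the HG2 job sequence over jobs 1..n.
void solveMH2(const std::vector<std::vector<double>>& DY,
              const std::vector<int>& hg2, int n) {
    GRBEnv env;
    GRBModel model(env);

    // x[j1][j2] = 1 if job/city j2 directly follows j1.
    std::vector<std::vector<GRBVar>> x(n + 2, std::vector<GRBVar>(n + 2));
    GRBLinExpr obj;
    for (int j1 = 0; j1 <= n; ++j1)
        for (int j2 = 1; j2 <= n + 1; ++j2)
            if (j1 != j2) {
                x[j1][j2] = model.addVar(0.0, 1.0, 0.0, GRB_BINARY);
                obj += DY[j1][j2] * x[j1][j2];
            }
    model.setObjective(obj, GRB_MINIMIZE);

    // Degree constraints: exactly one successor and one predecessor per city.
    for (int j1 = 0; j1 <= n; ++j1) {
        GRBLinExpr out;
        for (int j2 = 1; j2 <= n + 1; ++j2) if (j1 != j2) out += x[j1][j2];
        model.addConstr(out == 1);
    }
    for (int j2 = 1; j2 <= n + 1; ++j2) {
        GRBLinExpr in;
        for (int j1 = 0; j1 <= n; ++j1) if (j1 != j2) in += x[j1][j2];
        model.addConstr(in == 1);
    }

    // MTZ subtour elimination (Formula (30)) on the real jobs 1..n.
    std::vector<GRBVar> mu(n + 1);
    for (int j = 1; j <= n; ++j)
        mu[j] = model.addVar(0.0, n, 0.0, GRB_CONTINUOUS);
    for (int j1 = 1; j1 <= n; ++j1)
        for (int j2 = 1; j2 <= n; ++j2)
            if (j1 != j2)
                model.addConstr(mu[j1] - mu[j2] + n * x[j1][j2] <= n - 1);

    // MH2: feed the HG2 sequence to the solver as a MIP start.
    x[0][hg2.front()].set(GRB_DoubleAttr_Start, 1.0);
    for (std::size_t k = 0; k + 1 < hg2.size(); ++k)
        x[hg2[k]][hg2[k + 1]].set(GRB_DoubleAttr_Start, 1.0);
    x[hg2.back()][n + 1].set(GRB_DoubleAttr_Start, 1.0);

    model.optimize();
}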
Since the NWFSP is solved by transforming it into an ATSP, the solution satisfies the characteristic of a "loop". We therefore choose the last three jobs in S2 to join the first three jobs and form a test sequence, shown in Formula (31).
$\{S2[n-2], S2[n-1], S2[n], S2[1], S2[2], S2[3]\}$   (31)
Using the adjacent job pairs in the test sequence as constraints one by one, we test the running time separately (if a single test exceeds 10 s, the next job pair is tested or the test ends). Among the results satisfying RPI = 0, the best one (the shortest running time) is selected as the improvement value and stored in the IMH2 column of Table 11. It can be seen that the speed of solving the other instances in Table 11 is also improved to varying degrees (the last column in Table 11). This shows that the matheuristic method is effective.
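The pair-fixing step of IMH2 can be sketched as follows, continuing the model and the x variables of the MH2 sketch above. It is illustrative only; the 10 s limit reflects the rule stated in the text, and the function name is chosen only for this example.

#include "gurobi_c++.h"
#include <vector>

// IMH2 sketch: temporarily fix one adjacent job pair (a, b) of the test
// sequence as a hard constraint, re-solve with a short time limit, then remove
// the constraint before testing the next pair.
double testPair(GRBModel& model, std::vector<std::vector<GRBVar>>& x, int a, int b) {
    GRBConstr c = model.addConstr(x[a][b] == 1.0);
    model.set(GRB_DoubleParam_TimeLimit, 10.0);   // a single test is abandoned after 10 s
    model.optimize();
    double runtime = model.get(GRB_DoubleAttr_Runtime);
    model.remove(c);                              // undo before the next pair
    return runtime;
}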

4.2. Comparison Results of MH2 and Other Algorithms

While the matheuristic in the previous subsection improves the efficiency of the MTZ model, the strategy of dynamically adding constraints through callback functions has greater advantages when solving large-scale instances. We use this strategy to solve the proposed model (Formulas (6)–(14)), again naming the matheuristic algorithm that introduces HG2 as MH2. The comparison results of MH2, DWWO [32], SWWO [34], and IDMBO [37] are shown in Table 12 and Table 13.
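The callback strategy can be sketched with Gurobi's lazy-constraint mechanism. The following is an illustration of dynamically added subtour cuts in the style of Formula (13), not the code used in the experiments; it assumes the variable layout of the MH2 sketch and that model.set(GRB_IntParam_LazyConstraints, 1) is called before optimizing.

#include "gurobi_c++.h"
#include <vector>

class SubtourCallback : public GRBCallback {
public:
    SubtourCallback(std::vector<std::vector<GRBVar>>& x, int n) : x_(x), n_(n) {}
protected:
    void callback() override {
        if (where != GRB_CB_MIPSOL) return;          // only at integer solutions
        // Read the incumbent and record the successor of every city 0..n.
        std::vector<int> succ(n_ + 2, -1);
        for (int j1 = 0; j1 <= n_; ++j1)
            for (int j2 = 1; j2 <= n_ + 1; ++j2)
                if (j1 != j2 && getSolution(x_[j1][j2]) > 0.5) succ[j1] = j2;
        // Mark the main path from city 0 to city n+1.
        std::vector<bool> handled(n_ + 2, false);
        for (int v = 0; v != -1 && v != n_ + 1; v = succ[v]) handled[v] = true;
        // Any job not reached from city 0 lies on a subtour: cut it off.
        for (int start = 1; start <= n_; ++start) {
            if (handled[start]) continue;
            GRBLinExpr tour;
            int len = 0;
            for (int v = start; ; v = succ[v]) {     // walk the subtour once
                tour += x_[v][succ[v]];
                handled[v] = true;
                ++len;
                if (succ[v] == start) break;
            }
            addLazy(tour <= len - 1);                // forbid this subtour
        }
    }
private:
    std::vector<std::vector<GRBVar>>& x_;
    int n_;
};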
These comparison algorithms are the main intelligent algorithms for solving the NWFSP in recent years (2018–2020). We carefully reproduced the algorithms following the steps in the literature and set the parameters as required therein. For uniformity, the termination condition of each program was taken from the literature [34], that is, the upper limit of the running time is n × m × 90/2. Each algorithm was run five times independently for each instance, according to [37]. The results calculated with Equations (32)–(34) are reported in Table 12 and Table 13. All algorithms were programmed in C++ (Visual Studio 2019) and run on the same desktop PC with an 11th Gen Intel(R) Core(TM) i7-1165G7 (2.80 GHz) processor and 16.0 GB RAM.
$\Delta_{\min, i1, a1} = \frac{C(\min, i1, a1) - C(opt, a1)}{C(opt, a1)} \times 100$   (32)
$\Delta_{Avg, i1, a1} = \frac{C(Avg, i1, a1) - C(opt, a1)}{C(opt, a1)} \times 100$   (33)
$SD_{i1, a1} = \sqrt{\dfrac{\sum_{ct=1}^{5} \left( C(ct, i1, a1) - C(Avg, i1, a1) \right)^2}{5}}$   (34)
The minimum and average values of five independent runs of algorithm i1 on instance a1 are denoted $C(\min, i1, a1)$ and $C(Avg, i1, a1)$. $\overline{\Delta_{\min}}$, $\overline{\Delta_{Avg}}$, $\overline{SD}$ and $\overline{Time}$ represent the arithmetic means of $\Delta_{\min}$, $\Delta_{Avg}$, $SD$ and Time over instances of the same scale, respectively, and $C(ct, i1, a1)$ represents the result of the $ct$-th run of algorithm i1 on instance a1.
Table 12 shows the test results of the four algorithms on the Tailard benchmark, in which the first column, $\overline{opt.}$, records the averages of the optimal values for instances of the same size. The proposed matheuristic algorithm MH2 achieves the optimal solutions for all instances. Even for the largest instances (500 × 20), the average running time is 112.82 s. The running time on a single instance with a scale of at most 200 jobs is short, ranging from tens of milliseconds to a dozen seconds. Therefore, the algorithm is efficient and feasible. MH2 contains no random parameters, so its SD value is always 0 and the algorithm is stable.
It can be seen from the column $\overline{\Delta_{\min}}$ that when the job size of the Tailard benchmark is greater than or equal to 50, the three metaheuristic algorithms cannot obtain an optimal solution within n × m × 90/2, even over 5 independent runs. As the job size increases, the gap between the output value and the optimal solution also increases. For SWWO [34], which has the best overall ability among the three metaheuristics, when the scale is 200 × 20, the difference between the average output solution and the average optimum is about 200 (19,788.4 × 1.00%). The metaheuristic algorithms need more running time to reach a better solution, and their convergence is not as fast as that of MH2.
Table 13 shows the results of the four algorithms on the small-scale Reeves benchmark, where the first column records the optimal values of the instances. The MH2 algorithm is the most efficient: the optimal value of any REC instance is obtained within 1 s. The second best is the SWWO algorithm [34], which finds the optimal solution for every instance except REC39 and REC41.
In summary, when n < 30, the difference between the four algorithms is not large, and the optimal solutions can basically be found. When n = 50, the search ability of the two swarm intelligence algorithms, IDMBO [37] and DWWO [32], is not as good as that of the adaptive algorithm SWWO [34]. SWWO [34] is able to find the optimal solution of an instance of size n = 75 in 67.5 s. To a certain extent, this verifies that the NWFSP is better suited to a search mechanism that emphasizes local search over global search. Comparing with Table 7, when the scale increases to 200 × 10 and 500 × 20, the $\overline{RPI}$ of the heuristic algorithm HG2 is 0.92% and 1.28%, respectively; that is, its search performance also exceeds that of the other three intelligent algorithms. This shows that it is feasible to perform a local search with the job with the smallest $PR$ value in the job set as the final job.

4.3. Results and Discussion

  • This study determined the optimal solutions of 141 instances of the Tailard and Reeves benchmarks. Further analysis based on these optimal solutions verifies the validity of the proposed heuristic rule: the final job of an optimal solution is often a job with a smaller $PR$ value.
Step 1. Arrange the jobs according to their $PR$ values from small to large; the job with the smallest $PR$ value has rank 0.
Step 2. Record the Final Job Ranking (FJR) of each optimal sequence. See Table 14 for details.
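Steps 1 and 2 above amount to the following small C++ routine; the function name is chosen only for this example.

#include <vector>
#include <algorithm>
#include <numeric>

// FJR statistic of Section 4.3: rank the jobs by PR value (rank 0 = smallest PR)
// and report the rank of the final job of an optimal sequence.
int finalJobRanking(const std::vector<int>& PR, const std::vector<int>& optSeq) {
    std::vector<int> order(PR.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return PR[a] < PR[b]; });   // Step 1
    int lastJob = optSeq.back();                              // Step 2
    auto it = std::find(order.begin(), order.end(), lastJob);
    return static_cast<int>(it - order.begin());              // FJR
}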
It can be seen from Table 14 that the FJR values of the instances are distributed between 0 and 30, and the case "$FJR = 0$" accounts for 55%, as shown in Figure 15.
  • The heuristic algorithm can effectively guide the optimization direction of the optimizer and improve the solving rate of complex models. Making full use of the advantages of both not only finds more optimal solutions, but also expands the application range of exact algorithms and yields high-quality solutions that metaheuristic algorithms cannot obtain. Two different optimal sequences were obtained for REC05, REC09, TA01, TA03, TA05, TA21, TA32, TA33, TA34, TA35, TA41, TA42, TA44, TA45 and TA46.

5. Conclusions and Future Work

5.1. Conclusions

This paper studied the characteristics of the NWFSP and verified that the $PR$ value is an important indicator of a job: taking a job with a smaller $PR$ value as the final job gives a higher probability of obtaining the optimal solution.
Along the evolutionary directions of HG1 and HG2, different optimal values can be obtained, which shows that HG1 and HG2 provide initial solutions with good dispersion. The $PR$ rule and the $Z_2$, $Z_3$, $Z_4$ neighborhoods are also fit to be components of intelligent algorithms.
If the job with the smallest $PR$ value is placed in the last position and the neighborhood search is then expanded step by step (searching $Z_2$, $Z_3$, $Z_4$ in turn), the iterative results are often consistent with the optimal solutions in the first few and last few jobs. Bringing these into the optimizer increases the model solving rate.
Experiments show that the optimal values of the two benchmarks (REC and TA) can be obtained with the matheuristic MH2 within an acceptable time (the running time of a single instance with a scale of 500 × 20 is less than 134 s). This shows that the algorithm is feasible.

5.2. Future Work

  • Considering the computing power of personal laptops, this paper only studies the three-level neighborhood division method, and initially realizes the idea of gradually expanding the search domain of NWFSP. However, for large-scale instances, the coverage of the three-level neighborhood is not enough, so the effect has a certain dependence on the initial value. In the future, more levels of neighborhood division methods and corresponding neighborhood search methods can be further studied to improve the stability of the algorithm.
  • The scope of application of the proposed rules and neighborhood partitioning strategies can be generalized in the future, and they can be embedded in the framework of metaheuristic algorithms to solve NWFSP. In the past, heuristic algorithms were mostly used to construct high-quality solutions, but the rules proposed in this paper satisfy the conditions of participating in all stages. The interaction method between heuristic rules driven by problem features and metaheuristics with the advantage of generality can be further studied to improve the ability of metaheuristics to solve NWFSP.
  • In real production, distributed requirements are becoming more and more extensive. Since the matheuristic solves the NWFSP well, it can also be studied for the Distributed No-wait Flowshop Scheduling Problem (DNWFSP). On the one hand, it is possible to study how to build a simplified DNWFSP model based on the existing NWFSP model; on the other hand, how to use the obtained heuristic rules to guide the optimization direction of the optimizer, improve the speed of solving large-scale instances and expand the application range of exact algorithms is also a future research direction.

Author Contributions

Software, Y.G.; validation, Y.G.; writing—original draft preparation, Y.G. and Z.W.; writing—review and editing, Z.W.; supervision, L.G. and X.L.; project administration, L.G. and X.L.; funding acquisition, L.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arik, O.A. Genetic algorithm application for permutation flow shop scheduling problems. Gazi Univ. J. Sci. 2022, 35, 92–111.
  2. Red, S.F. An effective new heuristic algorithm for solving permutation flow shop scheduling problem. Trans. Comb. 2022, 11, 15–27.
  3. Li, W.M.; Han, D.; Gao, L.; Li, X.Y.; Li, Y. Integrated production and transportation scheduling method in hybrid flow shop. Chin. J. Mech. Eng. 2022, 35, 12.
  4. Jemmali, M.; Hidri, L.; Alourani, A. Two-stage hybrid flowshop scheduling problem with independent setup times. Int. J. Simul. Model. 2022, 21, 5–16.
  5. Zhang, J.; Ding, G.F.; Zou, Y.S.; Qin, S.F.; Fu, J.L. Review of job shop scheduling research and its new perspectives under industry 4.0. J. Intell. Manuf. 2019, 30, 1809–1830.
  6. Sun, Y.; Pan, J.S.; Hu, P.; Chu, S.C. Enhanced equilibrium optimizer algorithm applied in job shop scheduling problem. J. Intell. Manuf. 2022. Available online: https://www.webofscience.com/wos/alldb/full-record/WOS:000739739300001 (accessed on 1 January 2022).
  7. Aghighi, S.; Niaki, S.T.A.; Mehdizadeh, E.; Najafi, A.A. Open-shop production scheduling with reverse flows. Comput. Ind. Eng. 2021, 153, 107077.
  8. Ozolins, A. Dynamic programming approach for solving the open shop problem. Cent. Europ. J. Oper. Res. 2021, 29, 291–306.
  9. Callahan, J.R. The Nothing Hot Delay Problems in the Production of Steel. Ph.D. Dissertation, Department of Mechanical & Industrial Engineering, Toronto University, Toronto, ON, Canada, 1971.
  10. Tang, J.F.; Song, J.W. Discrete particle swarm optimisation combined with no-wait algorithm in stages for scheduling mill roller annealing process. Int. J. Comput. Integr. Manuf. 2010, 23, 979–991.
  11. Höhn, W.; Jacobs, T.; Megow, N. On Eulerian extensions and their application to no-wait flowshop scheduling. J. Sched. 2012, 15, 295–309.
  12. Yuan, H.W.; Jing, Y.W.; Huang, J.P.; Ren, T. Optimal research and numerical simulation for scheduling No-Wait Flow Shop in steel production. J. Appl. Math. 2013, 2013, 498282.
  13. Reklaitis, G.V. Review of scheduling of process operations. AIChE Symp. Ser. 1982, 78, 119–133.
  14. Hall, N.G.; Sriskandarajah, C. A survey of machine scheduling problems with blocking and no-wait in process. Oper. Res. 1996, 44, 510–525.
  15. Babor, M.; Senge, J.; Rosell, C.M.; Rodrigo, D.; Hitzmann, B. Optimization of No-Wait Flowshop Scheduling Problem in Bakery Production with Modified PSO, NEH and SA. Processes 2021, 9, 2044.
  16. Hsu, V.N.; de Matta, R.; Lee, C.Y. Scheduling patients in an ambulatory surgical center. Nav. Res. Logist. 2003, 50, 218–238.
  17. Mannino, C.; Mascis, A. Optimal real-time traffic control in metro stations. Oper. Res. 2009, 57, 1026–1039.
  18. Tomazella, C.P.; Nagano, M.S. A comprehensive review of Branch-and-Bound algorithms: Guidelines and directions for further research on the flowshop scheduling problem. Expert Syst. Appl. 2020, 158, 113556.
  19. Graham, R.; Lawler, E.; Lenstra, J.; Kan, A. Optimization and approximation in deterministic sequencing and scheduling: A survey. Ann. Math. 1979, 5, 287–326.
  20. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W.H. Freeman and Company: New York, NY, USA, 1979.
  21. Papadimitriou, C.H.; Kanellakis, P.C. Flowshop scheduling with limited temporary storage. J. ACM 1980, 27, 533–549.
  22. Bonney, M.C.; Gundry, S.W. Solutions to the constrained flowshop sequencing problem. J. Oper. Res. Soc. 1976, 27, 869–883.
  23. King, J.R.; Spachis, A.S. Heuristics for flowshop scheduling. Int. J. Prod. Res. 1980, 18, 343–357.
  24. Gangadharan, R.; Rajendran, C. Heuristic algorithms for scheduling in the no-wait flowshop. Int. J. Prod. Econ. 1993, 32, 285–290.
  25. Laha, D.; Chakraborty, U.K. A constructive heuristic for minimizing makespan in no-wait flow shop scheduling. Int. J. Adv. Manuf. Technol. 2009, 41, 97–109.
  26. Ying, K.C.; Lu, C.C.; Lin, S.W. Improved exact methods for solving no-wait flowshop scheduling problems with due date constraints. IEEE Access 2018, 6, 30702–30713.
  27. Land, A.H.; Doig, A.G. An automatic method of solving discrete programming problems. Econometrica 1960, 28, 497–520.
  28. Pei, Z.; Zhang, X.F.; Zheng, L.; Wan, M.Z. A column generation-based approach for proportionate flexible two-stage no-wait job shop scheduling. Int. J. Prod. Res. 2020, 58, 487–508.
  29. Allahverdi, A. A survey of scheduling problems with no-wait in process. Eur. J. Oper. Res. 2016, 255, 665–686.
  30. Pan, Q.K.; Wang, L.; Tasgetiren, M.F.; Zhao, B.H. A hybrid discrete particle swarm optimization algorithm for the no-wait flow shop scheduling problem with makespan criterion. Int. J. Adv. Manuf. Technol. 2008, 38, 337–347.
  31. Engin, O.; Guclu, A. A new hybrid ant colony optimization algorithm for solving the no-wait flow shop scheduling problems. Appl. Soft. Comput. 2018, 72, 166–176.
  32. Zhao, F.Q.; Liu, H.; Zhang, Y.; Ma, W.M.; Zhang, C. A discrete water wave optimization algorithm for no-wait flow shop scheduling problem. Expert Syst. Appl. 2018, 91, 347–363.
  33. Zhao, F.Q.; Qin, S.; Yang, G.Q.; Ma, W.M.; Zhang, C.; Song, H.B. A factorial based particle swarm optimization with a population adaptation mechanism for the no-wait flow shop scheduling problem with the makespan objective. Expert Syst. Appl. 2019, 126, 41–53.
  34. Zhao, F.Q.; Zhang, L.X.; Liu, H.; Zhang, Y.; Ma, W.M.; Zhang, C.; Song, H.B. An improved water wave optimization algorithm with the single wave mechanism for the no-wait flow-shop scheduling problem. Eng. Optimiz. 2019, 51, 1727–1742.
  35. Zhao, F.Q.; Qin, S.; Zhang, Y.; Ma, W.M.; Zhang, C.; Song, H.B. A hybrid biogeography-based optimization with variable neighborhood search mechanism for no-wait flow shop scheduling problem. Expert Syst. Appl. 2019, 126, 321–339.
  36. Zhu, H.H.; Qi, X.M.; Chen, F.L.; He, X.; Chen, L.F.; Zhang, Z.Y. Quantum-inspired cuckoo co-search algorithm for no-wait flow shop scheduling. Appl. Intell. 2019, 49, 791–803.
  37. Zhang, S.J.; Gu, X.S.; Zhou, F.N. An improved discrete migrating birds optimization algorithm for the no-wait flow shop scheduling problem. IEEE Access 2020, 8, 99380–99392.
  38. Lai, R.S.; Gao, B.; Lin, W.G. Solving no-wait flow shop scheduling problem based on discrete wolf pack algorithm. Sci. Program. 2021, 2021, 4731012.
  39. Burke, E.K.; Kendall, G.; Soubeiga, E. A tabu-search hyperheuristic for timetabling and rostering. J. Heuristics 2003, 9, 451–470.
  40. Della Croce, F.; Salassa, F. A variable neighborhood search based matheuristic for nurse rostering problems. Ann. Oper. Res. 2014, 218, 185–199.
  41. Kramer, R.; Subramanian, A.; Vidal, T.; Cabral, L.D.F. A matheuristic approach for the Pollution-Routing Problem. Eur. J. Oper. Res. 2015, 243, 523–539.
  42. Hong, J.; Moon, K.; Lee, K.; Lee, K.; Pinedo, M.L. An iterated greedy matheuristic for scheduling in steelmaking-continuous casting process. Int. J. Prod. Res. 2021, 60, 623–643.
  43. Lin, S.W.; Ying, K.C. Optimization of makespan for no-wait flowshop scheduling problems using efficient matheuristics. Omega-Int. J. Manag. Sci. 2016, 64, 115–125.
  44. Bagchi, T.P.; Gupta, J.N.; Sriskandarajah, C. A review of TSP based approaches for flowshop scheduling. Eur. J. Oper. Res. 2006, 169, 816–854.
  45. Helsgaun, K. An effective implementation of the Lin-Kernighan traveling salesman heuristic. Eur. J. Oper. Res. 2000, 126, 106–130.
  46. Gouveia, L.; Pires, J.M. The asymmetric travelling salesman problem and a reformulation of the Miller-Tucker-Zemlin constraints. Eur. J. Oper. Res. 1999, 112, 134–146.
  47. Campuzano, G.; Obreque, C.; Aguayo, M.M. Accelerating the Miller-Tucker-Zemlin model for the asymmetric traveling salesman problem. Expert Syst. Appl. 2020, 148, 113229.
Figure 1. The composition chart of Cmax.
Figure 2. An example diagram of the evolution {102} in the N2 neighborhood.
Figure 3. The simplified search mechanism contains three neighborhoods.
Figure 4. Schematic diagram of the HG1 and HG2.
Figure 5. The evolution graph in Z3.
Figure 6. Data storage manner of DY0.
Figure 7. Data characteristics of the 0 iteration of CS.
Figure 8. One iteration by the way of (T1, T2, T3).
Figure 9. Two iterations by the way of (T1, T2, T3) and (Z1, Z2, Z3).
Figure 10. Evolution graph in Z4.
Figure 11. Data storage diagram in RS2.
Figure 12. Data storage diagram in JSF.
Figure 13. The Gantt chart of TA05 by HG2 (makespan = 1449).
Figure 14. The Gantt chart of TA05 by Gurobi with HG1 (makespan = 1449).
Figure 15. The PPRK value distribution ratio chart.
Table 1. Basic Neighborhood Features.
Name | Total Area Lines | Total Areas | Conditions for Jumping Out of the Neighborhood
N2 | 2 | 3 | Being the three-exchange optimum
N3 | 3 | 4 | Being the four-exchange optimum
N4 | 4 | 5 | Being the five-exchange optimum
Table 2. Neighborhood classification.
No | Name | Level | Combinations to Be Tested for Each Sequence | Constant Job
1 | Z2 | A | {102} | Last job
2 | Z3 | B | {2103} | Last job
3 | Z4 | C | {03214} | First and last job
Table 3. Changed Paths of Boundary Values.
Evolution Manner | Changed Paths of Boundary Values
(T1, T2, T3) | DY(1,2) + DY(3,4) + DY(10,11) → DY(1,4) + DY(3,11) + DY(10,2)
(Z1, Z2, Z3) | DY(2,3) + DY(6,7) + DY(8,9) → DY(2,7) + DY(6,9) + DY(8,3)
(R1, R2, R3) | DY(1,2) + DY(7,8) + DY(9,10) → DY(1,8) + DY(7,10) + DY(9,2)
Table 4. Conditions for con2.
Conditions for the establishment of con2:
1. Same points constraint. The same 3 points can be found in the sequence both before and after the iteration. Suppose their corresponding relationship is (Z1, Z2, Z3) → (Z1′, Z2′, Z3′).
2. Position constraint. Z1′ < Z2′ < Z3′ or Z2′ < Z3′ < Z1′ or Z3′ < Z1′ < Z2′ (if Z1 < Z2 < Z3).
Table 5. Conditions for con3.
Conditions for the establishment of con3:
1. Greater-than-zero constraint, that is, DV > 0, where DV = it1.V + it2.V.
2. Position constraint. it1.DOT1 < it2.DOT1 < it1.DOT2 < it2.DOT2 or it2.DOT1 < it1.DOT1 < it2.DOT2 < it1.DOT2.
Table 6. Conditions for con4.
Conditions for the establishment of con4:
1. Same points constraint. The same 4 points can be found in the sequence both before and after the iteration. Suppose their corresponding relationship is (Z1, Z2, Z3, Z4) → (Z1′, Z2′, Z3′, Z4′).
2. Position constraint. Suppose (1, 2, 3, 4, DV) is taken from JSF; rearrange 1–4 from small to large according to their position numbers in NS. There are 8 orders that satisfy the position constraint, namely {4321, 4123, 3412, 3214, 2341, 2143, 1234, 1432}.
Table 7. Results of HGs on the Taillard benchmark (TA). C_HG1 and C_HG2 are the makespans obtained by HG1 and HG2; RPI_1 and RPI_2 are the average RPI (%) of HG1 and HG2 over the corresponding block of ten instances and are therefore reported only on the first row of each block.
TA | C_HG1 | RPI_1 | C_HG2 | RPI_2 | TA | C_HG1 | RPI_1 | C_HG2 | RPI_2 | TA | C_HG1 | RPI_1 | C_HG2 | RPI_2 | TA | C_HG1 | RPI_1 | C_HG2 | RPI_2
01 | 1505 | 4.10 | 1509 | 0.83 | 31 | 3220 | 2.23 | 3198 | 0.83 | 61 | 6488 | 1.92 | 6453 | 0.82 | 91 | 15523 | 2.08 | 15401 | 0.92
02 | 1590 | - | 1528 | - | 32 | 3467 | - | 3490 | - | 62 | 6333 | - | 6264 | - | 92 | 15326 | - | 15164 | -
03 | 1586 | - | 1462 | - | 33 | 3310 | - | 3247 | - | 63 | 6232 | - | 6138 | - | 93 | 15536 | - | 15352 | -
04 | 1634 | - | 1588 | - | 34 | 3424 | - | 3354 | - | 64 | 6148 | - | 6055 | - | 94 | 15303 | - | 15221 | -
05 | 1482 | - | 1449 | - | 35 | 3398 | - | 3392 | - | 65 | 6274 | - | 6240 | - | 95 | 15379 | - | 15194 | -
06 | 1522 | - | 1502 | - | 36 | 3403 | - | 3360 | - | 66 | 6203 | - | 6099 | - | 96 | 15328 | - | 15129 | -
07 | 1561 | - | 1515 | - | 37 | 3292 | - | 3251 | - | 67 | 6326 | - | 6267 | - | 97 | 15643 | - | 15418 | -
08 | 1548 | - | 1521 | - | 38 | 3336 | - | 3268 | - | 68 | 6210 | - | 6144 | - | 98 | 15467 | - | 15245 | -
09 | 1577 | - | 1470 | - | 39 | 3162 | - | 3082 | - | 69 | 6487 | - | 6412 | - | 99 | 15320 | - | 15130 | -
10 | 1405 | - | 1377 | - | 40 | 3407 | - | 3324 | - | 70 | 6445 | - | 6401 | - | 100 | 15573 | - | 15386 | -
11 | 2228 | 5.40 | 2153 | 2.58 | 41 | 4448 | 3.40 | 4283 | 1.42 | 71 | 8214 | 2.50 | 8122 | 1.10 | 101 | 19773 | 2.18 | 19840 | 1.61
12 | 2248 | - | 2230 | - | 42 | 4372 | - | 4234 | - | 72 | 8084 | - | 7959 | - | 102 | 20331 | - | 20179 | -
13 | 2051 | - | 1987 | - | 43 | 4300 | - | 4170 | - | 73 | 8267 | - | 8084 | - | 103 | 19998 | - | 19986 | -
14 | 1945 | - | 1811 | - | 44 | 4477 | - | 4513 | - | 74 | 8538 | - | 8430 | - | 104 | 20297 | - | 20122 | -
15 | 2077 | - | 2070 | - | 45 | 4340 | - | 4340 | - | 75 | 8049 | - | 8014 | - | 105 | 20233 | - | 19962 | -
16 | 1925 | - | 1955 | - | 46 | 4390 | - | 4330 | - | 76 | 7952 | - | 7875 | - | 106 | 20329 | - | 20148 | -
17 | 2031 | - | 1990 | - | 47 | 4653 | - | 4496 | - | 77 | 8123 | - | 7966 | - | 107 | 20478 | - | 20478 | -
18 | 2145 | - | 2068 | - | 48 | 4555 | - | 4361 | - | 78 | 8100 | - | 7957 | - | 108 | 20238 | - | 20117 | -
19 | 2120 | - | 1992 | - | 49 | 4270 | - | 4255 | - | 79 | 8331 | - | 8199 | - | 109 | 20319 | - | 20233 | -
20 | 2126 | - | 2088 | - | 50 | 4381 | - | 4359 | - | 80 | 8247 | - | 8179 | - | 110 | 20195 | - | 20006 | -
21 | 3102 | 5.39 | 3001 | 1.45 | 51 | 6260 | 4.07 | 6210 | 1.70 | 81 | 11092 | 3.01 | 10799 | 1.12 | 111 | 47034 | 1.71 | 46600 | 1.28
22 | 3128 | - | 2878 | - | 52 | 5990 | - | 5935 | - | 82 | 10904 | - | 10786 | - | 112 | 47561 | - | 47219 | -
23 | 3090 | - | 3027 | - | 53 | 6098 | - | 5972 | - | 83 | 10783 | - | 10631 | - | 113 | 46922 | - | 46820 | -
24 | 3275 | - | 3004 | - | 54 | 6001 | - | 5883 | - | 84 | 10938 | - | 10744 | - | 114 | 47131 | - | 47108 | -
25 | 3137 | - | 3071 | - | 55 | 6177 | - | 5924 | - | 85 | 10762 | - | 10629 | - | 115 | 46879 | - | 46728 | -
26 | 3132 | - | 3022 | - | 56 | 6205 | - | 5919 | - | 86 | 10898 | - | 10702 | - | 116 | 47276 | - | 47095 | -
27 | 3193 | - | 3116 | - | 57 | 6247 | - | 6042 | - | 87 | 11128 | - | 10880 | - | 117 | 46592 | - | 46503 | -
28 | 3036 | - | 2876 | - | 58 | 6079 | - | 6112 | - | 88 | 11147 | - | 10907 | - | 118 | 47098 | - | 46977 | -
29 | 3199 | - | 3156 | - | 59 | 6116 | - | 5949 | - | 89 | 11101 | - | 10843 | - | 119 | 47028 | - | 46870 | -
30 | 3019 | - | 3002 | - | 60 | 6194 | - | 6024 | - | 90 | 11039 | - | 10855 | - | 120 | 47243 | - | 46824 | -
Table 8. Results of HGs on REC.
REC | C_HG1 | RPI_HG1 (%) | C_HG2 | RPI_HG2 (%) | REC | C_HG1 | RPI_HG1 (%) | C_HG2 | RPI_HG2 (%) | REC | C_HG1 | RPI_HG1 (%) | C_HG2 | RPI_HG2 (%) | REC | C_HG1 | RPI_HG1 (%) | C_HG2 | RPI_HG2 (%)
01 | 1658 | 8.65 | 1534 | 0.52 | 13 | 2748 | 7.98 | 2653 | 4.24 | 25 | 3841 | 6.90 | 3669 | 2.12 | 37 | 8109 | 1.26 | 8079 | 0.89
03 | 1364 | 0.22 | 1361 | 0.00 | 15 | 2693 | 6.48 | 2597 | 2.69 | 27 | 3633 | 5.89 | 3531 | 2.91 | 39 | 8689 | 3.21 | 8569 | 1.78
05 | 1614 | 6.82 | 1545 | 2.25 | 17 | 2657 | 2.71 | 2657 | 2.71 | 29 | 3544 | 7.69 | 3380 | 2.70 | 41 | 8719 | 3.34 | 8543 | 1.26
07 | 2066 | 1.18 | 2066 | 1.18 | 19 | 3009 | 5.58 | 2906 | 1.96 | 31 | 4412 | 2.44 | 4383 | 1.76
09 | 2167 | 6.12 | 2066 | 1.18 | 21 | 2866 | 1.60 | 2835 | 0.50 | 33 | 4558 | 3.03 | 4477 | 1.20
11 | 2010 | 6.86 | 2011 | 6.91 | 23 | 2849 | 5.52 | 2719 | 0.70 | 35 | 4651 | 5.78 | 4407 | 0.23
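As a reading aid for Tables 8–13, the reported RPI values are consistent with the usual relative percentage increase over the optimal (or best-known) makespan; the formula below restates that standard definition rather than quoting the paper's own equation.

\[ \mathrm{RPI} = \frac{C_{\mathrm{alg}} - C_{\mathrm{opt}}}{C_{\mathrm{opt}}} \times 100\% \]

For example, HG2 reaches 1534 on REC01, whose optimum is 1526 (Table 13), giving 100 × (1534 - 1526)/1526 ≈ 0.52%, the value listed in Table 8.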
Table 9. Results of MTZ model on REC (tolerance is 10−4).
REC | RPI (%) | Time (s) | REC | RPI (%) | Time (s) | REC | RPI (%) | Time (s) | REC | RPI (%) | Time (s)
01 | 0 | 0.05 | 13 | 0 | 0.10 | 25 | 0 | 0.13 | 37 | 0 | 0.77
03 | 0 | 0.08 | 15 | 0 | 0.08 | 27 | 0 | 9.50 | 39 | 0 | 1.06
05 | 0 | 0.07 | 17 | 0 | 0.20 | 29 | 0 | 0.13 | 41 | 0 | 9.51
07 | 0 | 0.26 | 19 | 0 | 0.11 | 31 | 0 | 0.35
09 | 0 | 0.10 | 21 | 0 | 2.39 | 33 | 0 | 0.24
11 | 0 | 0.04 | 23 | 0 | 0.11 | 35 | 0 | 0.37
Table 10. Results of MTZ model on TA01–TA60 (tolerance is 10−4).
TA | RPI (%) | Time (s) | TA | RPI (%) | Time (s) | TA | RPI (%) | Time (s) | TA | RPI (%) | Time (s) | TA | RPI (%) | Time (s)
01 | 0 | 0.18 | 13 | 0 | 0.34 | 25 | 0 | 0.29 | 37 | NA | 36,000 | 49 | 0 | 72.10
02 | 0 | 971.08 | 14 | 0 | 0.05 | 26 | 0 | 0.30 | 38 | 0 | 0.29 | 50 | 0 | 0.72
03 | 0 | 0.05 | 15 | 0 | 0.04 | 27 | 0 | 0.15 | 39 | 0 | 0.37 | 51 | 0 | 0.98
04 | 0 | 0.05 | 16 | 0 | 0.09 | 28 | 0 | 0.07 | 40 | 0 | 0.21 | 52 | 0 | 0.84
05 | 0 | 0.10 | 17 | 0 | 0.21 | 29 | 0 | 0.08 | 41 | 0 | 0.41 | 53 | 0 | 0.27
06 | 0 | 0.11 | 18 | 0 | 0.07 | 30 | 0 | 0.10 | 42 | 0 | 0.37 | 54 | 0 | 8.01
07 | 0 | 0.05 | 19 | 0 | 0.05 | 31 | 0 | 0.36 | 43 | 0 | 0.37 | 55 | 0 | 0.68
08 | 0 | 0.06 | 20 | 0 | 59.29 | 32 | 0 | 1.17 | 44 | 0 | 0.38 | 56 | 0 | 0.66
09 | 0 | 0.05 | 21 | 0 | 0.39 | 33 | 0 | 18.69 | 45 | 0 | 0.61 | 57 | 0 | 0.56
10 | 0 | 0.07 | 22 | 0 | 0.49 | 34 | 0 | 0.47 | 46 | 0 | 0.31 | 58 | 0 | 0.42
11 | 0 | 0.23 | 23 | 0 | 0.24 | 35 | 0 | 0.35 | 47 | 0 | 0.63 | 59 | 0 | 1.28
12 | 0 | 0.26 | 24 | 0 | 0.14 | 36 | 0 | 0.24 | 48 | 0 | 154.33 | 60 | 0 | 0.47
Table 11. Comparison Results of MH2, IMH2 and MTZ only (tolerance is 10−4).
Ins | RPI_MTZ (%) | T_MTZ (s) | RPI_MH2 (%) | T_MH2 (s) | RPI_IMH2 (%) | T_IMH2 (s) | T_MTZ/T_IMH2
REC27 | 0 | 9.50 | 0 | 7.22 | 0 | 0.13 | 73.08
REC41 | 0 | 9.51 | 0 | 2.87 | 0 | 0.76 | 12.51
TA02 | 0 | 971.08 | 0 | 1726.03 | 0 | 0.05 | 19,421.60
TA20 | 0 | 59.29 | 0 | 63.50 | 0 | 0.08 | 741.13
TA33 | 0 | 18.69 | 0 | 6.29 | 0 | 5.35 | 3.49
TA37 | NA | >36,000 | NA | >36,000 | 0 | 0.51 | >70,588.24
TA48 | 0 | 154.33 | 0 | 160.55 | 0 | 0.33 | 467.67
TA49 | 0 | 72.10 | 0 | 39.87 | 0 | 7.07 | 10.20
TA54 | 0 | 8.01 | 0 | 10.41 | 0 | 5.94 | 1.35
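The speed-ups in Table 11 come from feeding a heuristic sequence into the exact model. The Python sketch below illustrates that idea with gurobipy: an MTZ-style ATSP formulation whose arcs carry a precomputed NWFSP delay matrix (how `delay` is derived from the processing times is covered in the paper's model section and is simply assumed here), with the heuristic order injected as a MIP start. It is an illustrative formulation under these assumptions, not the authors' exact code.

```python
import gurobipy as gp
from gurobipy import GRB

def solve_nwfsp_mtz(delay, heuristic_seq=None, tol=1e-4):
    """MTZ-style ATSP model of the NWFSP.
    delay[i][j]: start-to-start delay when job j directly follows job i;
    index 0 is the dummy job closing the tour. heuristic_seq: job order
    (without the dummy) used as a warm start."""
    n = len(delay)
    m = gp.Model("nwfsp_mtz")
    m.Params.MIPGap = tol                              # matches the 1e-4 tolerance

    x = m.addVars(n, n, vtype=GRB.BINARY, name="x")    # arc (i, j) used
    u = m.addVars(n, lb=0, ub=n - 1, name="u")         # MTZ ordering variables
    for i in range(n):
        x[i, i].UB = 0                                 # forbid self-loops

    m.setObjective(gp.quicksum(delay[i][j] * x[i, j]
                               for i in range(n) for j in range(n) if i != j),
                   GRB.MINIMIZE)

    m.addConstrs((x.sum(i, "*") == 1 for i in range(n)), name="leave")
    m.addConstrs((x.sum("*", j) == 1 for j in range(n)), name="enter")
    # MTZ subtour elimination on the non-dummy jobs
    m.addConstrs((u[i] - u[j] + n * x[i, j] <= n - 1
                  for i in range(1, n) for j in range(1, n) if i != j),
                 name="mtz")

    if heuristic_seq is not None:                      # warm start from HG1/HG2
        for var in x.values():
            var.Start = 0
        tour = [0] + list(heuristic_seq) + [0]
        for i, j in zip(tour, tour[1:]):
            x[i, j].Start = 1

    m.optimize()
    order = sorted(range(1, n), key=lambda i: u[i].X)  # recovered job order
    return order, m.ObjVal
```

A warm start of this kind gives the solver both an incumbent and its sequence from the outset, which is one way to obtain speed-ups of the magnitude of the T_MTZ/T_IMH2 ratios reported above.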
Table 12. Comparison Results of MH2, IDMBO, DWWO, and SWWO on TA. All entries are averages over the ten instances of each size (the overlined quantities in the original layout).
n × m | opt. | MH2 RPI (%) | MH2 Time (s) | IDMBO [37] Δmin (%) | ΔAvg (%) | SD | Time (s) | DWWO [32] Δmin (%) | ΔAvg (%) | SD | Time (s) | SWWO [34] Δmin (%) | ΔAvg (%) | SD | Time (s)
20 × 5 | 1480.3 | 0 | 0.04 | 0 | 0.05 | 0.58 | 4.50 | 0 | 0 | 0.04 | 4.50 | 0 | 0 | 0 | 4.50
20 × 10 | 1983 | 0 | 0.08 | 0.02 | 0.08 | 0.77 | 9.00 | 0 | 0.12 | 0.12 | 9.00 | 0 | 0 | 0.04 | 9.00
20 × 20 | 2971.9 | 0 | 0.11 | 0 | 0.02 | 0.39 | 18.01 | 0 | 0.01 | 0.45 | 18.01 | 0.03 | 0.03 | 0.32 | 18.00
50 × 5 | 3269.5 | 0 | 0.18 | 1.62 | 1.91 | 6.86 | 11.26 | 0.83 | 1.09 | 6.54 | 11.27 | 0.02 | 0.12 | 2.32 | 11.25
50 × 10 | 4273.6 | 0 | 0.29 | 0.38 | 0.70 | 9.58 | 22.51 | 0.20 | 0.37 | 5.64 | 22.51 | 0.03 | 0.08 | 1.85 | 22.50
50 × 20 | 5897.4 | 0 | 0.64 | 0.23 | 0.43 | 9.04 | 45.01 | 0.10 | 0.28 | 9.59 | 44.21 | 0.01 | 0.06 | 2.93 | 45.00
100 × 5 | 6196.1 | 0 | 0.99 | 3.68 | 4.17 | 20.42 | 22.53 | 3.80 | 4.27 | 20.98 | 22.51 | 0.34 | 0.52 | 8.45 | 22.50
100 × 10 | 7991 | 0 | 1.56 | 1.90 | 2.23 | 20.39 | 45.03 | 1.65 | 1.98 | 18.47 | 45.01 | 0.28 | 0.44 | 10.13 | 45.00
100 × 20 | 10,658.5 | 0 | 2.11 | 1.13 | 1.49 | 24.31 | 90.03 | 1.07 | 1.28 | 16.76 | 90.01 | 0.29 | 0.40 | 10.12 | 90.08
200 × 10 | 15,124.7 | 0 | 6.39 | 4.14 | 4.42 | 29.88 | 90.07 | 5.27 | 5.61 | 36.87 | 90.02 | 1.09 | 1.34 | 25.62 | 90.00
200 × 20 | 19,788.4 | 0 | 10.55 | 2.85 | 3.13 | 39.49 | 180.07 | 3.42 | 3.77 | 46.20 | 180.02 | 1.00 | 1.19 | 27.51 | 180.00
500 × 20 | 46,284.1 | 0 | 112.82 | 4.92 | 5.10 | 57.91 | 450.21 | 6.36 | 6.58 | 69.18 | 450.10 | 2.31 | 2.53 | 77.71 | 450.02
Table 13. Comparison Results of MH2, IDMBO, DWWO, and SWWO on REC.
REC | opt. | MH2 RPI (%) | MH2 Time (s) | IDMBO [37] Δmin (%) | ΔAvg (%) | SD | Time (s) | DWWO [32] Δmin (%) | ΔAvg (%) | SD | Time (s) | SWWO [34] Δmin (%) | ΔAvg (%) | SD | Time (s)
01 | 1526 | 0 | 0.08 | 0 | 0 | 0 | 4.50 | 0 | 0 | 0 | 4.50 | 0 | 0 | 0 | 4.50
03 | 1361 | 0 | 0.03 | 0 | 0 | 0 | 4.50 | 0 | 0 | 0 | 4.50 | 0 | 0 | 0 | 4.50
05 | 1511 | 0 | 0.06 | 0 | 0.08 | 1.47 | 4.50 | 0 | 0 | 0 | 4.50 | 0 | 0 | 0 | 4.50
07 | 2042 | 0 | 0.13 | 0 | 0 | 0 | 9.01 | 0 | 0 | 0 | 9.00 | 0 | 0 | 0 | 9.00
09 | 2042 | 0 | 0.05 | 0 | 0.04 | 0.40 | 9.00 | 0 | 0 | 0 | 9.00 | 0 | 0 | 0 | 9.00
11 | 1881 | 0 | 0.04 | 0 | 0 | 0 | 9.00 | 0 | 0 | 0 | 9.00 | 0 | 0 | 0 | 9.00
13 | 2545 | 0 | 0.07 | 0 | 0 | 0 | 13.51 | 0 | 0 | 0 | 13.51 | 0 | 0 | 0 | 13.50
15 | 2529 | 0 | 0.09 | 0 | 0 | 0 | 13.51 | 0 | 0 | 0 | 13.51 | 0 | 0 | 0 | 13.50
17 | 2587 | 0 | 0.07 | 0 | 0 | 0 | 13.50 | 0 | 0 | 0 | 13.51 | 0 | 0 | 0 | 13.50
19 | 2850 | 0 | 0.04 | 0.14 | 0.30 | 4.84 | 13.50 | 0 | 0.25 | 5.98 | 13.51 | 0 | 0 | 0 | 13.50
21 | 2821 | 0 | 0.10 | 0 | 0.13 | 3.25 | 13.51 | 0 | 0.17 | 4.12 | 13.51 | 0 | 0 | 0 | 13.50
23 | 2700 | 0 | 0.06 | 0 | 0.30 | 7.29 | 13.50 | 0 | 0 | 0 | 13.51 | 0 | 0 | 0 | 13.50
25 | 3593 | 0 | 0.13 | 0 | 0.06 | 1.83 | 20.26 | 0 | 0 | 0 | 20.26 | 0 | 0 | 0 | 20.25
27 | 3431 | 0 | 0.15 | 0 | 0.01 | 0.40 | 20.27 | 0 | 0.01 | 0.40 | 20.26 | 0 | 0.01 | 0.49 | 20.25
29 | 3291 | 0 | 0.06 | 0 | 0 | 0 | 20.25 | 0 | 0.06 | 4.00 | 20.25 | 0 | 0 | 0 | 20.25
31 | 4307 | 0 | 0.41 | 0.60 | 0.75 | 4.92 | 22.52 | 0.23 | 0.36 | 3.26 | 22.50 | 0 | 0.02 | 2.00 | 22.50
33 | 4424 | 0 | 0.14 | 0.47 | 1.26 | 22.92 | 22.51 | 0.52 | 0.77 | 11.21 | 22.50 | 0 | 0.14 | 8.58 | 22.50
35 | 4397 | 0 | 0.24 | 0.52 | 0.65 | 5.71 | 22.51 | 0.07 | 0.32 | 11.15 | 22.50 | 0 | 0 | 0 | 22.50
37 | 8008 | 0 | 0.69 | 0.74 | 1.15 | 25.67 | 67.52 | 0.71 | 0.84 | 10.09 | 67.51 | 0 | 0.26 | 12.19 | 67.50
39 | 8419 | 0 | 0.77 | 0.68 | 0.75 | 5.69 | 67.52 | 0.58 | 0.96 | 18.71 | 67.52 | 0.13 | 0.35 | 14.66 | 67.50
41 | 8437 | 0 | 0.96 | 0.91 | 1.11 | 12.28 | 67.53 | 0.33 | 0.76 | 22.69 | 67.52 | 0.08 | 0.25 | 12.70 | 67.50
Table 14. The FJR of TA.
TA | FJR | TA | FJR | TA | FJR | TA | FJR | TA | FJR | TA | FJR | TA | FJR | TA | FJR
01 | 7 | 16 | 10 | 31 | 2 | 46 | 0 | 61 | 5 | 76 | 0 | 91 | 0 | 106 | 1
02 | 0 | 17 | 0 | 32 | 0 | 47 | 0 | 62 | 0 | 77 | 2 | 92 | 0 | 107 | 1
03 | 0 | 18 | 0 | 33 | 0 | 48 | 0 | 63 | 0 | 78 | 0 | 93 | 1 | 108 | 5
04 | 0 | 19 | 1 | 34 | 1 | 49 | 1 | 64 | 6 | 79 | 0 | 94 | 3 | 109 | 14
05 | 0 | 20 | 1 | 35 | 0 | 50 | 2 | 65 | 0 | 80 | 5 | 95 | 0 | 110 | 0
06 | 0 | 21 | 3 | 36 | 0 | 51 | 2 | 66 | 0 | 81 | 1 | 96 | 0 | 111 | 0
07 | 1 | 22 | 0 | 37 | 0 | 52 | 1 | 67 | 0 | 82 | 5 | 97 | 0 | 112 | 0
08 | 2 | 23 | 2 | 38 | 0 | 53 | 4 | 68 | 3 | 83 | 0 | 98 | 0 | 113 | 1
09 | 0 | 24 | 0 | 39 | 0 | 54 | 3 | 69 | 0 | 84 | 1 | 99 | 0 | 114 | 1
10 | 0 | 25 | 5 | 40 | 0 | 55 | 0 | 70 | 0 | 85 | 0 | 100 | 0 | 115 | 0
11 | 2 | 26 | 0 | 41 | 0 | 56 | 0 | 71 | 0 | 86 | 0 | 101 | 2 | 116 | 0
12 | 1 | 27 | 4 | 42 | 0 | 57 | 1 | 72 | 7 | 87 | 0 | 102 | 15 | 117 | 2
13 | 1 | 28 | 0 | 43 | 4 | 58 | 1 | 73 | 6 | 88 | 0 | 103 | 25 | 118 | 0
14 | 0 | 29 | 5 | 44 | 1 | 59 | 0 | 74 | 1 | 89 | 1 | 104 | 1 | 119 | 0
15 | 1 | 30 | 0 | 45 | 0 | 60 | 5 | 75 | 0 | 90 | 0 | 105 | 30 | 120 | 0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
