A Chance Constrained Programming Approach for No-Wait Flow Shop Scheduling Problem under the Interval-Valued Fuzzy Processing Time

Abstract: This work studies the robust no-wait flow shop scheduling problem (R-NWFSP) under interval-valued fuzzy processing times, with the aim of minimizing the makespan within an upper bound on the total completion time. As the uncertainty of actual job processing times may cause significant differences in processing costs, an R-NWFSP model whose objective function involves interval-valued fuzzy sets (IVFSs) is proposed, and an improved simulated annealing algorithm (SAA) is designed for its efficient solution. Firstly, based on the credibility measure, chance constrained programming (CCP) is utilized to transform the constraints into deterministic form, turning the uncertain NWFSP into an equivalent deterministic linear programming model. Then, to tackle the deterministic model efficiently, an SAA is specially designed, applying a powerful neighborhood search method and a new acceptance criterion to find better solutions. Numerical computations demonstrate the high efficiency of the SAA. In addition, a sensitivity analysis convincingly shows the applicability of the proposed model and its solution strategy under interval-valued fuzzy sets.


Introduction
The no-wait flow shop scheduling problem (NWFSP) is one of the most actively studied problems in the current production scheduling field. It arises from the processing characteristics of certain products, where each job must be processed continuously from machine to machine, with no waiting time during processing [1]. The NWFSP has important applications in conventional industries such as steel rolling, pharmaceutical processing, plastic molding, and chemical processing. Beyond these conventional industries, it also has important applications in semiconductor manufacturing [2,3], printed circuit board manufacturing [4], flexible manufacturing systems [5], robotic cells [6], and just-in-time production systems [7]. Regarding the related literature, Allahverdi [8] provided a comprehensive review of the research and discussed applications in which technological requirements impose the no-wait constraint.
The NWFSP is a typical combinatorial optimization problem [9]. Abundant solution methods, including exact methods, heuristic approaches, and meta-heuristic techniques, have been proposed to solve it [10,11]. However, when the number of machines exceeds three, the NWFSP is proven to be NP-hard [12,13]. Due to this NP-hard characteristic, the problem becomes more complicated as its size increases, and the excessive computational time makes it difficult to find high-quality solutions by exact methods. Therefore, in recent years, many researchers have turned their attention to developing various heuristic and meta-heuristic techniques to achieve effective solutions for scheduling problems [14,15]. Makespan is a common performance measure, and a shorter makespan leads to lower processing costs; total completion time (TCT) reflects the inventory or holding costs in production scheduling. Therefore, in this paper, we choose the two performance measures of makespan and TCT to develop our uncertainty formulation: the problem is to minimize the makespan subject to a total completion time constraint. To consider the uncertainties of this problem, an R-NWFSP model whose objective function involves interval-valued fuzzy sets (IVFSs) is proposed, and an algorithm is designed to solve the resulting deterministic model with high efficiency. Comparative experiments with four algorithms are carried out, and a sensitivity analysis is performed to evaluate the applicability of the proposed model under interval-valued fuzzy sets.
The rest of this paper is organized as follows. In Section 2, the problem formulation of NWFSP and deterministic R-NWFSP model are stated. Section 3 presents the designed SAA. Next, in Section 4, numerical experiments are provided to show the effectiveness of our model and algorithm. Finally, in Section 5, we give some concluding remarks.

Problem Description
The NWFSP is described as follows: each of the n jobs in the set N is to be processed in the same sequence through the m machines in the set M. Every job j (j = 1, 2, ..., n) requires a predetermined processing time p_ij on each machine i (i = 1, 2, ..., m). Each machine can process at most one job at a time, and a job can be processed on only one machine at a time. The no-wait constraint requires the processing of each job on any two consecutive machines to be continuous; in other words, after a job finishes on one machine, it must immediately be processed on the next machine. To satisfy the no-wait constraint, the start of a job on the first machine must be delayed when necessary. In addition, the other assumptions of the permutation flow shop scheduling problem (PFSP) described by Gupta et al. [33] apply to this problem.
To provide a better understanding of the problem formulation introduced above, the Gantt chart of a no-wait flow shop with three machines and four jobs is presented in Figure 1. As shown in the figure, areas of the same color indicate the same job being processed on different machines. Due to the no-wait constraint, the second job cannot be processed immediately after the first job finishes on the first machine.

Mathematical Formulation
This section introduces the mathematical formulation of the aforementioned problem. The parameters, decision variables, and the model (Equations (1)-(11)) are defined accordingly, with the binary sequencing variables subject to x_jk ∈ {0, 1}, ∀ j, k ∈ N, j ≠ k (11). The objective of the problem is to minimize the makespan. Equations (2) and (3) define the makespan and the total completion time. Equation (4) imposes the total completion time constraint, where T is a value determined by the scheduler. Equation (5) represents the processing-order constraint for any two jobs, and Equation (6) ensures that the processing sequence of the jobs is not contradictory. Equation (7) gives the relationship between the start time and the completion time of a job, and Equation (8) guarantees the no-wait constraint for a job processed on two consecutive machines. Equations (9) and (10) relate the start and completion times under the ordering of different jobs.
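Under the no-wait constraint, the completion times used in the makespan and TCT definitions can be computed directly from a job permutation: each job's start on the first machine is delayed just enough so that it flows through all machines without waiting. The following is a minimal sketch of that computation; the function and variable names are illustrative, not taken from the paper.

```python
def schedule_no_wait(p, seq):
    """Completion times on the last machine for a no-wait permutation schedule.
    p[i][j] is the processing time of job j on machine i; seq is a job order."""
    m = len(p)
    prev_done = [0.0] * m            # completion time of the previous job on each machine
    completions = []
    for j in seq:
        cum = [0.0] * (m + 1)        # cumulative processing time of job j up to machine i
        for i in range(m):
            cum[i + 1] = cum[i] + p[i][j]
        # earliest start on machine 1 that keeps processing continuous on all machines
        start = max(prev_done[i] - cum[i] for i in range(m))
        prev_done = [start + cum[i + 1] for i in range(m)]
        completions.append(prev_done[-1])
    return completions

def makespan(p, seq):
    """C_max of the schedule (Equation (2))."""
    return schedule_no_wait(p, seq)[-1]

def total_completion_time(p, seq):
    """TCT of the schedule (Equation (3))."""
    return sum(schedule_no_wait(p, seq))
```

For example, with two machines and two jobs, the second job's start is delayed until both its machine-1 slot and its machine-2 slot are free, so the no-wait condition holds exactly.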

Interval-Valued Fuzzy Set
An interval-valued fuzzy set Ã in X ⊆ (−∞, +∞) is defined by a pair of membership functions (MFs), where the levels satisfy 0 ≤ ω^L, ω^U ≤ 1 and ω^U, ω^L are the heights of the trapezoidal fuzzy variables ξ^U, ξ^L, respectively. In particular, if ω^U = 1 and ω^L = 1, then ξ^U and ξ^L are normalized. The upper and lower fuzzy numbers of ξ̃ are shown in Figure 2. To consider the uncertainties of job processing times in practical applications, the processing times are described as interval-valued fuzzy numbers. According to Example 1, the uncertain processing time p̃_ij of job j on machine i is denoted as follows, where p^U_ij and p^L_ij are trapezoidal fuzzy variables defined by the upper and lower MFs, and ω^U_ij and ω^L_ij are their heights. Moreover, p^U_ij,1, p^U_ij,2, p^L_ij,1, and p^L_ij,2 are the most optimistic values, while p^U_ij,3, p^U_ij,4, p^L_ij,3, and p^L_ij,4 are the most pessimistic values. The values p^U_ij,1, ..., p^U_ij,4 and p^L_ij,1, ..., p^L_ij,4 are given as ratios to the most likely processing time p_ij, and ω^U_ij and ω^L_ij are randomly assigned values from [0, 1]. Although considering the NWFSP with uncertain processing times is more realistic, it is very complicated to handle a scheduling problem with uncertain parameters directly. Therefore, in the next section, we simplify this fuzzy programming model.
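The interval-valued fuzzy processing times described above can be represented programmatically. The sketch below uses the trapezoidal MFs and the ratio-based construction described in the experimental setup (ratios 0.7-1.3 for the upper MF and 0.8-1.2 for the lower MF); the class and function names are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Trapezoid:
    """Trapezoidal fuzzy variable (a, b, c, d; height)."""
    a: float
    b: float
    c: float
    d: float
    height: float = 1.0

    def membership(self, x: float) -> float:
        """Piecewise-linear membership function of the trapezoid."""
        if x < self.a or x > self.d:
            return 0.0
        if x < self.b:
            return self.height * (x - self.a) / (self.b - self.a)
        if x <= self.c:
            return self.height
        return self.height * (self.d - x) / (self.d - self.c)

@dataclass
class IntervalValuedFuzzyTime:
    """Interval-valued fuzzy processing time: lower and upper trapezoidal MFs."""
    lower: Trapezoid
    upper: Trapezoid

def make_ivf_time(p, ratios_u=(0.7, 0.9, 1.1, 1.3),
                  ratios_l=(0.8, 0.95, 1.05, 1.2), w_u=1.0, w_l=1.0):
    """Build an IVF processing time from the most likely time p via the paper's ratios."""
    return IntervalValuedFuzzyTime(
        lower=Trapezoid(*(r * p for r in ratios_l), height=w_l),
        upper=Trapezoid(*(r * p for r in ratios_u), height=w_u),
    )
```

Note that the lower trapezoid (ratios 0.8-1.2) lies inside the upper one (ratios 0.7-1.3), as required for an interval-valued fuzzy set.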

Robust NWFSP under Moment Uncertainty
In the following, a deterministic R-NWFSP model is derived for the case of uncertain job processing times. Several feasible approaches exist for constrained optimization problems with uncertain parameters in scheduling; among them, chance-constrained programming [34] has proven to be one of the most effective. In this approach, each fuzzy constraint must hold at a predetermined credibility level, which provides an appropriate safety margin. For the upper and lower bounds of the IVFSs that reflect the uncertain job processing times, two CCP models based on the credibility measure [11] are proposed for the R-NWFSP.
For the upper MF (p^U_ij,1, p^U_ij,2, p^U_ij,3, p^U_ij,4; ω^U_ij), the CCP model for the R-NWFSP related to the processing time can be formulated as follows. For the lower MF (p^L_ij,1, p^L_ij,2, p^L_ij,3, p^L_ij,4; ω^L_ij), the CCP model for the R-NWFSP related to the processing time can be formulated analogously. Here α^U and α^L are the predetermined credibility levels, Cr denotes the credibility measure, and P^U_ij and P^L_ij are realization values that replace C^U_ij − C^U_{i−1,j} and C^L_ij − C^L_{i−1,j}. Accordingly, after this determination, the decision maker can be sure that the actual realizations of p^U_ij and p^L_ij will not be greater than P^U_ij and P^L_ij with credibility at least α^U and α^L, respectively.
Theorem 1. Suppose ξ = (a, b, c, d; ω) is a trapezoidal fuzzy variable with a < b ≤ c < d, the level 0 < ω ≤ 1, and the predetermined credibility level 0 < α ≤ 1. Then Cr{ξ ≤ x} ≥ α is given as follows [25], where ω is called the height of ξ.
Theorem 2. Suppose ξ = (a, b, c, d; ω) is a trapezoidal fuzzy variable with a < b ≤ c < d, the level 0 < ω ≤ 1, and the predetermined credibility level 0 < α ≤ 1. Then Cr{ξ ≥ x} ≥ α is given as follows [25]. According to Theorem 1, for the interval-valued fuzzy processing times we can perform the following deterministic transformation of the constraints using credibility. The deterministic counterparts of models (14) and (15) then have the following equivalent expressions, and the R-NWFSP model obtained with the linear weighted method has the following expression. Since the uncertain NWFSP is transformed into an equivalent deterministic model, solving the problem is simplified to the greatest extent: by designing a suitable solution algorithm, effective solutions can be obtained conveniently. In the next section, we design an intuitive solution method to solve the problem efficiently.
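For the normalized case (ω = 1), the credibility of a trapezoidal fuzzy variable has a well-known piecewise-linear form in Liu's credibility theory, which yields a closed-form critical value for the chance constraint Cr{ξ ≤ x} ≥ α. The sketch below covers only this normalized case; the generalized case with height ω < 1 follows Theorems 1 and 2 and is not reproduced here.

```python
def credibility_le(x, a, b, c, d):
    """Cr{xi <= x} for a normalized trapezoidal fuzzy variable xi = (a, b, c, d)."""
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2 * (b - a))
    if x <= c:
        return 0.5
    if x < d:
        return (x + d - 2 * c) / (2 * (d - c))
    return 1.0

def critical_value(a, b, c, d, alpha):
    """Smallest x with Cr{xi <= x} >= alpha: the deterministic equivalent
    of the chance constraint Cr{p <= P} >= alpha on a processing time."""
    if alpha <= 0.5:
        return (1 - 2 * alpha) * a + 2 * alpha * b
    return (2 - 2 * alpha) * c + (2 * alpha - 1) * d
```

Replacing each fuzzy processing time by its critical value at level α is exactly the kind of deterministic transformation the CCP models rely on: a larger α yields a more conservative (larger) realization value.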

Solution Method
To solve the robust NWFSP under the proposed deterministic model, we next design an effective simulated annealing algorithm (SAA). The overall structure of the SAA for the R-NWFSP includes the following three phases: an NEH algorithm to generate the initial schedule, a neighborhood search to deeply improve the current sequence, and a new probability-based adaptive acceptance strategy together with the stopping criteria.

Initial Solution
The NEH algorithm [35] is a highly effective constructive heuristic, which has been proven to be among the most efficient heuristics for the PFSP [36]. Its main idea is also suitable for solving the no-wait flow shop scheduling problem. To efficiently find a better schedule sequence, an NEH algorithm with the same priorities is applied to generate a better initial schedule.
Step 1: The same priorities of the NEH are used to sort all jobs in the non-ascending order of the total fuzzy processing time.
Step 2: Select the first two jobs and evaluate the possible ordering. The sequence with the smaller makespan will be taken as the current sequence.
Step 3: Insert the remaining jobs one by one: for each job, try all possible positions in the current sub-sequence and place it at the position yielding the smallest makespan, until all jobs are inserted and a complete job sequence is generated.
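Steps 1-3 above can be sketched as follows. The snippet is self-contained (it includes its own no-wait makespan routine), and the function names are illustrative.

```python
def no_wait_makespan(p, seq):
    """Makespan of a no-wait permutation schedule; p[i][j] is job j's time on machine i."""
    m = len(p)
    prev = [0.0] * m
    for j in seq:
        cum = [0.0]
        for i in range(m):
            cum.append(cum[-1] + p[i][j])
        start = max(prev[i] - cum[i] for i in range(m))
        prev = [start + cum[i + 1] for i in range(m)]
    return prev[-1]

def neh(p):
    """NEH constructive heuristic (Steps 1-3)."""
    m, n = len(p), len(p[0])
    # Step 1: sort jobs in non-ascending order of total processing time
    order = sorted(range(n), key=lambda j: -sum(p[i][j] for i in range(m)))
    # Step 2: pick the better ordering of the first two jobs
    seq = min([order[:2], order[1::-1]], key=lambda s: no_wait_makespan(p, s))
    # Step 3: insert each remaining job at its best position
    for j in order[2:]:
        candidates = [seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: no_wait_makespan(p, s))
    return seq
```

In the fuzzy setting of this paper, the sort key in Step 1 would use the total fuzzy processing time (e.g., a defuzzified or credibility-based value) rather than crisp times.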

Neighborhood Search
After the initial sequence is obtained, the SAA begins to iterate to find more feasible solutions. The selection of multiple neighborhood operators is of great importance to the performance of the designed SAA. Therefore, a variable neighborhood search (VNS) method comprising swap, reversion, and insertion operators is applied to explore more candidate sequences.
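The three operators can be sketched as simple permutation moves. Picking one at random per iteration is one simple way to combine them; the paper's VNS may switch neighborhoods more systematically, so treat this as an illustrative sketch.

```python
import random

def swap(seq):
    """Exchange the jobs at two random positions."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def reversion(seq):
    """Reverse the subsequence between two random positions."""
    s = seq[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def insertion(seq):
    """Remove a random job and reinsert it at another random position."""
    s = seq[:]
    job = s.pop(random.randrange(len(s)))
    s.insert(random.randrange(len(s) + 1), job)
    return s

def variable_neighborhood_move(seq):
    """Apply one randomly chosen operator (a simple VNS-style move)."""
    return random.choice([swap, reversion, insertion])(seq)
```

Each operator returns a new permutation and leaves the input sequence unchanged, which keeps the acceptance step free of side effects.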

Acceptance and Stop Criteria
Once a neighborhood solution has been generated, it is necessary to decide whether the newly generated sequence π′ can replace the incumbent for the next iteration. The Metropolis acceptance criterion [37] is the most important part of simulated annealing: it allows accepting a slightly worse solution so as to balance global search and local optimization. Specifically, after the current sequence completes the neighborhood search on a given problem instance, the newly generated π′ is accepted with probability p as the incumbent schedule for the next iteration, where p = min{1, exp(−(C_max(π′) − C_max(π))/T)} and T is a parameter that controls the annealing temperature, gradually decreasing as the algorithm executes. For different problem instances, the setting of the temperature parameter T differs; according to the exponent (C_max(π′) − C_max(π))/T, the longer the makespans involved, the higher the corresponding temperature parameter should be. Following the suggestions of Osman and Potts [38], the temperature is set accordingly, where T_0 is a parameter to be adjusted; the temperature parameter is set at T_0 = 0.5, and the termination temperature is set at T = 1. To facilitate performance comparisons with other algorithms, the maximum computational time t_max is used as the termination condition: the recommended SAA stops when the maximum computational time is reached, and the schedule finally found is taken as the best sequence over all iterations.
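The Metropolis rule can be sketched as follows. The geometric cooling factor is an illustrative assumption of this sketch; the paper itself sets the temperature following Osman and Potts [38].

```python
import math
import random

def accept(current_cost, candidate_cost, temperature):
    """Metropolis criterion: always accept improvements; accept a worse
    candidate with probability exp(-(candidate - current) / T)."""
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

def cool(temperature, lam=0.95):
    """Geometric cooling schedule (a common choice; assumed here, not the
    paper's exact schedule)."""
    return lam * temperature
```

At high temperatures almost any candidate is accepted (global search); as T shrinks, only improvements pass (local optimization).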

Algorithm Framework
The simulated annealing algorithm is a simple and effective meta-heuristic for solving various types of scheduling problems. To adapt it to solve our problem efficiently, we first adopt the NEH algorithm to generate a better initial sequence. Then, a powerful neighborhood search method is applied to obtain better-quality schedules; the related neighborhood search strategies are discussed in Section 3.2. Finally, a new acceptance strategy is designed to determine whether to accept the new sequence as the incumbent solution. Based on these components, the SAA is designed to tackle the uncertain NWFSP efficiently. The complete framework of the SAA is presented in Algorithm 1.
Algorithm 1: Simulated Annealing Algorithm (SAA)
1 (Initial solution): Generate an initial schedule π using the NEH algorithm;
2 Set the initial temperature T = T_0;
3 Set the initial iteration counter in = 0;
4 Set the maximum number of inner iterations Inmax;
5 While the termination condition is not met do
6     While in ≤ Inmax do
7         Update the iteration counter: in ← in + 1;
8         (Neighborhood search): Apply the neighborhood search that hybridizes the swap, reversion, and insertion moves to the current sequence π, yielding a new sequence π′;
9         (Acceptance): Set π = π′ with probability p;
10        (Update): Record π as the best solution if it improves the incumbent best;
11        (Cooling): Decrease the temperature T according to the annealing schedule;
12    end While
13 end While
14 Output: The best solution found.
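A compact, self-contained sketch of Algorithm 1 follows. Two simplifications are assumed here: a random initial sequence stands in for the NEH initialization, and a single swap move stands in for the hybrid neighborhood search; parameter names and defaults are illustrative.

```python
import math
import random
import time

def no_wait_makespan(p, seq):
    """Makespan of a no-wait permutation schedule; p[i][j] is job j's time on machine i."""
    m = len(p)
    prev = [0.0] * m
    for j in seq:
        cum = [0.0]
        for i in range(m):
            cum.append(cum[-1] + p[i][j])
        start = max(prev[i] - cum[i] for i in range(m))
        prev = [start + cum[i + 1] for i in range(m)]
    return prev[-1]

def saa(p, t_max_s=0.2, T0=0.5, inmax=50, cooling=0.95):
    """Simplified SAA loop: inner iterations, Metropolis acceptance, cooling,
    and a wall-clock termination condition."""
    n = len(p[0])
    current = list(range(n))
    random.shuffle(current)                      # stand-in for the NEH initial solution
    cur_cost = no_wait_makespan(p, current)
    best, best_cost = current[:], cur_cost
    T = T0
    deadline = time.time() + t_max_s
    while time.time() < deadline:                # outer loop: termination condition
        for _ in range(inmax):                   # inner loop: in <= Inmax
            cand = current[:]                    # neighborhood move: one random swap
            i, j = random.sample(range(n), 2)
            cand[i], cand[j] = cand[j], cand[i]
            cost = no_wait_makespan(p, cand)
            # Metropolis acceptance
            if cost <= cur_cost or random.random() < math.exp(-(cost - cur_cost) / T):
                current, cur_cost = cand, cost
                if cur_cost < best_cost:         # update the incumbent best
                    best, best_cost = current[:], cur_cost
        T = max(cooling * T, 1e-6)               # cooling
    return best, best_cost
```

The deterministic R-NWFSP objective would replace `no_wait_makespan` with the credibility-transformed cost and add the TCT feasibility check.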

Experimental Setup
To evaluate the experimental performance of the SAA and the R-NWFSP model, the developed algorithm was coded in MATLAB. The simulation experiments were executed on a personal computer (PC) with an Intel(R) Core(TM) i5-4210 CPU @ 2.6 GHz and 16 GB RAM under Windows 10. It should be noted that the effectiveness of the algorithm is greatly affected by the numbers of machines and jobs, so different data sets were considered for parameter calibration and algorithm evaluation. To compare our designed algorithm with other approaches on different problem sizes, we used Taillard's [39] problem scales to randomly generate 12 combined instances with different sizes n and m, where n ∈ {20, 50, 100, 200, 500} and m ∈ {5, 10, 20}. For each scenario, we carried out 10 independent replications of the designed SAA. The same stopping criterion applies to all comparison algorithms [40], but it varies with the numbers of machines and jobs; specifically, the termination condition is the maximum running time t_max = n² ms.
The objective is to minimize the makespan under the constraint that the TCT is less than or equal to a given value. For a given problem, the upper bound T on the TCT is usually given by the scheduler. Note, however, that a very large T value means the constraint is almost inactive, while a very small T value means there is no feasible solution. Therefore, a valid T value should be chosen for the computational experiments; in this work, the total completion time of the NEH initialization is selected as the T value. Table 1 summarizes the required test parameters of the manufacturing system. Moreover, p^U_ij,1, p^U_ij,2, p^U_ij,3, p^U_ij,4 are given as the ratios 0.7, 0.9, 1.1, 1.3, and p^L_ij,1, p^L_ij,2, p^L_ij,3, p^L_ij,4 as the ratios 0.8, 0.95, 1.05, 1.2 to the most likely processing times p_ij; ω^U_ij and ω^L_ij are randomly assigned values from [0, 1]. To test the effectiveness and robustness of the designed SAA and compare the experimental results with the other three algorithms visually, the average relative percentage deviation (ARPD) is introduced to measure the average relative quality of the solutions: ARPD = (1/R) Σ_{r=1}^{R} 100 × (C_r − C*) / C*. In addition, the standard deviation (SD) is employed to measure how close the solutions are to their average: SD = sqrt((1/R) Σ_{r=1}^{R} (C_r − C̄)²), where C_r is the solution obtained by a compared algorithm in the r-th replication (r = 1, 2, ..., R) on a given problem scenario, C* is the best solution found so far, and C̄ is the mean of the C_r. Obviously, the smaller the ARPD (SD) value, the better the performance of the algorithm.
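A sketch of the two measures (assuming the SD is taken around the mean of the replication results):

```python
import math

def arpd(costs, best_known):
    """Average relative percentage deviation of R replication results
    from the best solution found so far."""
    return sum(100.0 * (c - best_known) / best_known for c in costs) / len(costs)

def sd(costs):
    """Standard deviation of the replication results around their mean."""
    mean = sum(costs) / len(costs)
    return math.sqrt(sum((c - mean) ** 2 for c in costs) / len(costs))
```

For example, two replications with makespans 110 and 120 against a best-known value of 100 give an ARPD of 15%.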

Comparison with Other Algorithms
To demonstrate the effectiveness and efficiency of the designed SAA in finding better-quality schedules, we compare the experimental results obtained by the SAA with those obtained by a genetic algorithm (GA) [41], tabu search (TS) [42], and the firefly algorithm (FA) [43]. These algorithms are common meta-heuristics for scheduling problems. The predetermined credibility levels are set to α^U = α^L = 0.3. The experimental results are summarized in Tables 2 and 3, which compare the results of the four approaches. It is worth noting that when the same instance is executed multiple times, the results may differ considerably due to the stochastic nature of the meta-heuristics, and this phenomenon becomes more pronounced as the problem size increases. Table 2 summarizes the computational results of GA, TS, FA, and SAA. It is observed from Table 2 that the ARPD of the designed SAA is low in most test cases. For the credibility levels α^U = α^L = 0.3, the average total ARPD obtained by the SAA is only 0.35, which is better than the corresponding values of 3.21, 6.36, and 2.11 obtained by GA, TS, and FA, respectively. In other words, on average, the designed SAA outperforms GA, TS, and FA in terms of solution quality. A similar analysis of the SD shows that the value obtained by the SAA is only 0.33, much smaller than the values 3.48, 6.06, and 2.13 obtained by GA, TS, and FA. As far as the robustness of the solutions is concerned, this fully illustrates the effectiveness of the designed algorithm. Table 3 shows the comparison results of the aforementioned algorithms; it reveals that the SAA is superior to the other algorithms, a result confirmed across the extensive problem instances.
Further investigation of the statistical data in the table shows that the optimal and average objective values obtained by our algorithm are also the best among these algorithms. Especially when solving large-scale instances, the optimal and average values of the SAA are much better than those of the other three algorithms. When the number of machines is small, the optimization problem is relatively easy to solve, and there is little difference among the algorithms; however, when the number of machines increases, the difficulty of the solution increases significantly. Figure 7 illustrates the block diagrams of those approaches on the medium- and large-scale instances: the computational results obtained by the SAA are the smallest compared with the three comparison approaches, which fully proves that the SAA has better performance. Since all tested algorithms are executed with the same stopping criterion in a similar computational environment, computational time consumption differs very little. Thus, we can conclude that the developed SAA performs well in terms of minimizing the makespan, from both the best and the average values of the algorithm executions. To compare the quality and diversity of the solutions under different algorithms more intuitively, the average relative percentage deviations for instances of different scales are plotted in Figure 8. The figure clearly illustrates that when the numbers of jobs and machines are small, the ARPD value is also small; as the instance size increases, the corresponding ARPD value tends to increase, and the ARPD value grows with the number of jobs.

Sensitivity Analysis
The previous analysis shows that the SAA for the R-NWFSP outperforms GA, TS, and FA. Therefore, in this subsection, the SAA is adopted as the solution strategy to evaluate the performance of the proposed model under different credibility levels. We conducted a sensitivity analysis to evaluate the performance of the R-NWFSP model for the uncertain NWFSP under interval-valued fuzzy sets. In our experiments, the credibility level is selected from the set {0.3, 0.5, 0.7, 0.9}. Obviously, according to (23), the optimal solution of the same instance varies with the credibility level. Figure 9 shows the experimental results for different credibility levels and different problem scales. These results indicate that the credibility level of the model affects the efficiency of the SAA. For the small-scale instances (a)-(c), the computational results under different credibility levels differ little when α < 0.5; however, when α > 0.5, the makespan of the optimal solution increases as α increases. For the medium- and large-scale instances (d)-(l), the best solutions of the model under different credibility levels differ significantly. It is observed from Figure 9 that when the credibility level is less than 0.5, the objective value under the optimal scheduling scheme is uncertain, indicating that the parameters have a greater impact on the model; when the credibility level is greater than 0.5, the objective value under the best solution increases with the credibility level. These results convincingly demonstrate the performance of the deterministic R-NWFSP model. Therefore, in practical applications, to minimize the impact of the parameters on the model, it is recommended to set the credibility level to 0.5 for better performance. It should be noted that these observations admit a few exceptions because of the stochastic nature of meta-heuristics.

Conclusions
In this work, we discussed the uncertain no-wait flow shop scheduling problem (NWFSP) with the objective of minimizing the makespan under the constraint that the total completion time does not exceed a given bound. Firstly, due to the uncertainty of job processing times in practical applications, the processing times were regarded as interval-valued fuzzy numbers. We proposed a deterministic counterpart to optimize the robust NWFSP under interval-valued fuzzy processing times, and an improved SAA was designed for its efficient solution. For the IVFSs, chance constrained programming (CCP) based on the credibility measure was utilized to transform the constraints into deterministic form, and the corresponding deterministic model of the robust NWFSP was established. Then, to solve the resulting deterministic linear programming model efficiently, an SAA was specially designed, involving swap, insertion, and reversion moves together with a new acceptance criterion to find more promising solutions. Finally, the designed approach was compared with GA, TS, and FA on instances of different sizes. Experimental results demonstrate that the designed approach has better search ability than the other three algorithms for the robust NWFSP. Moreover, the applicability of the proposed model and solution method under interval-valued fuzzy sets was proved through a sensitivity analysis. Therefore, this work not only designs a solution method but also provides a promising model-transformation direction for uncertain scheduling problems with IVFSs. Although our proposed model and solution strategy mainly address the robust NWFSP, the study is not limited to it; it can also be extended to many related job assignment and sequencing problems with uncertainties.
In future work, it would be meaningful to focus on the uncertainty and multi-objective attributes of the NWFSP, and to develop more effective algorithms according to the characteristics of the problem to solve realistic production scheduling problems.
Author Contributions: H.S. and A.J. performed the simulations and analyzed the data; A.J. designed the process scheme and optimization of the paper; D.G. wrote and reviewed the paper; X.Z. and F.G. checked the results of the whole manuscript. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available, as our research group is still in the process of further research in this area.

Conflicts of Interest:
The authors declare no conflict of interest.