Article

Multimodal Optimization of Permutation Flow-Shop Scheduling Problems Using a Clustering-Genetic-Algorithm-Based Approach

George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
* Authors to whom correspondence should be addressed.
Appl. Sci. 2021, 11(8), 3388; https://doi.org/10.3390/app11083388
Submission received: 22 February 2021 / Revised: 4 April 2021 / Accepted: 6 April 2021 / Published: 9 April 2021
(This article belongs to the Special Issue Planning and Scheduling of Manufacturing Systems)

Abstract
Though many techniques have been proposed for the optimization of the Permutation Flow-Shop Scheduling Problem (PFSSP), current techniques only provide a single optimal schedule. Therefore, a new algorithm, combining the k-means clustering algorithm and the Genetic Algorithm (GA), is proposed for the multimodal optimization of the PFSSP. In the proposed algorithm, the k-means clustering algorithm is first utilized to cluster the individuals of every generation into different clusters, based on machine-sequence-related features. Next, the GA operators are applied to the individuals belonging to the same cluster to find multiple global optima. Unlike in standard GA, where all individuals belong to a single population, in the proposed approach the individuals are split into multiple clusters and the crossover operator is restricted to individuals belonging to the same cluster. Doing so enables the proposed algorithm to potentially find multiple global optima, one or more in each cluster. The performance of the proposed algorithm was evaluated by applying it to the multimodal optimization of benchmark PFSSPs. The results were also compared to those obtained with other niching techniques, namely the clearing method, sharing fitness, and a hybrid of the proposed approach and sharing fitness. The results of the case studies showed that the proposed algorithm was able to consistently converge to better optimal solutions than the other three algorithms.

1. Introduction

Scheduling problems are omnipresent in real-world applications such as distribution, transportation, management, construction, engineering, and manufacturing [1]. These problems are complex and very difficult to solve using conventional techniques, largely because of the large number of constraints involved. For these reasons, ample research in the past few decades has focused on developing techniques for the optimization of such scheduling problems. The permutation flow-shop scheduling problem (PFSSP), a variation of the classical flow-shop scheduling problem (FSSP), is a problem in operations research in which n jobs, J1, J2, …, Jn, of varying processing times are assigned to m resources (usually machines). The objective is to allocate the available resources to the jobs so as to maximize (or minimize) performance indicators such as makespan, tardiness, etc. In the classical FSSP, for n jobs on m machines, there are (n!)^m different alternatives for sequencing jobs on machines, while in permutation problems the search space is reduced to n!. Like any scheduling problem, the PFSSP carries constraints and assumptions. In this paper, these constraints and assumptions are as follows.
(1) On the shop floor, all machines are independent of each other.
(2) The operations of all jobs have deterministic processing times.
(3) No pre-emption of job operations is permitted.
(4) Set-up times are not considered in these problems.
(5) No machine down-time or maintenance time is considered.
(6) The machine used for each operation of a job is known and fixed.
(7) A job cannot revisit a machine group for more than one operation.
(8) Operation constraints must always be followed.
(9) All jobs must enter the machines in the same order.
Since PFSSPs are commonly encountered in various industries, researchers have developed many methods to optimize their key performance indicators (KPIs). Liu and Liu [2] used a hybrid discrete artificial bee colony algorithm to minimize the makespan in the PFSSP. They utilized the Greedy Randomized Adaptive Search Procedure (GRASP) to create the initial population, and operators such as insert, swap, and path relinking to generate new solutions. The authors also improved the local search by further optimizing the best solution. When applied to the PFSSP, their algorithm had superior performance compared to algorithms such as particle swarm optimization (PSO) and hybrid GA (HGA). Govindan et al. [3] combined decision tree (DT) and scatter search (SS) algorithms to solve the PFSSP. The DT was used initially to convert the problem into a tree structure using the entropy function, followed by the SS to perform an extensive investigation of the solution space. Simulation results and statistical comparisons showed the advantage of the authors' proposed algorithm. Ancău [4] proposed two algorithms for solving the PFSSP: (1) a constructive greedy heuristic (CG) and (2) a stochastic greedy heuristic (SG). The CG was based on greedy selection, while the SG was a modified version of the CG with an iterative stochastic start. The authors validated the proposed algorithms on benchmark problems, and the results showed that the algorithms were able to come within 6% of the best-known solutions. Zobolas et al. [5] created a hybrid algorithm by combining a greedy heuristic, GA, and the variable neighborhood search (VNS) algorithm. The hybrid algorithm was able to take advantage of both GA and VNS, and obtained the best-known makespan for the benchmark problems in a short computational time.
Zhang et al. [6] proposed an enhanced GA to solve distributed assembly permutation flow-shop scheduling problems (DAPFSP). The authors used a greedy mating pool to select better candidates; they also updated the crossover strategy and incorporated local search strategies to improve local convergence. Andrade et al. [7] minimized the total flowtime of the PFSSP by using a Biased Random-Key Genetic Algorithm (BRKGA). Their technique used a shaking strategy to perturb individuals from the elite population while resetting the remainder of the population. The authors validated their approach by applying it to 120 benchmark problems and comparing the results to those obtained by Iterated Greedy Search and Iterated Local Search. The results showed that the proposed approach was more efficient and was able to identify new bounds as well as optimal solutions for the problems. Ruiz et al. [8] used the Iterated Greedy (IG) search algorithm to solve distributed permutation flow-shop scheduling problems. The authors improved the initialization, construction, and destruction procedures, and improved the local search, to obtain better results. A big advantage of their technique was that it required very little problem-specific knowledge to implement. Pan et al. [9] proposed three constructive heuristics (DLR, inspired by the LR heuristic; DNEH, inspired by the NEH heuristic; and a hybrid DLR-DNEH heuristic) and four metaheuristics (Discrete Artificial Bee Colony (DABC), Scatter Search (SS), Iterated Local Search (ILS), and IG) for minimizing the total flowtime criterion of the distributed permutation flow-shop scheduling problem (DPFSP). Both the heuristics and the metaheuristics were able to significantly improve upon the existing results in the literature. Their results also showed that the trajectory-based metaheuristics were considerably better than the population-based metaheuristics.
Though ample work has been done on solving the PFSSP, current techniques only provide a single optimal schedule upon execution. Furthermore, real-life operators and schedulers have preferences that cannot be modeled and added to the fitness function of optimization algorithms. The single optimal schedule found using these techniques might theoretically satisfy the objective function; however, it might not satisfy the aforementioned complexities. Although multimodal optimization provides multiple similar solutions for the same problem under consideration, it cannot handle complex situations such as machine breakdowns, supply chain delays, etc., without changes to the fitness function and constraints. Nevertheless, having multiple similar solutions for the same problem gives operators the flexibility to handle an emergency using their experience, while waiting for the underlying problem to be solved. Therefore, it is important to obtain multiple optimal schedules that satisfy the objective function, i.e., to perform multimodal optimization (MMO). Over the years, many techniques have been developed to accomplish MMO, such as niching techniques [10,11,12,13,14,15,16], differential evolutionary strategies [17,18,19,20,21,22,23,24], the use of GA [25,26,27,28], etc. However, these techniques have only been tested on benchmark mathematical problems. Furthermore, the aim of these techniques is to find both global and local optima, which increases their computational complexity. Additionally, local optima can have a much worse objective value than the global optima, which is undesirable in scheduling problems.
Therefore, in this study, a new algorithm is proposed for the MMO of the PFSSP. This is accomplished by using the k-means clustering algorithm with the GA. In each iteration of the proposed algorithm, solutions are first clustered together using the k-means clustering algorithm. Next, the GA operators are utilized within each cluster to generate solutions for the next generation. Various techniques used by Perez et al. were also modified and utilized for the MMO of the PFSSP, and the performance of the algorithms was tested on benchmark PFSSPs. The rest of the paper is organized as follows. Section 2 presents the modified GA operators used in the proposed algorithm to represent a solution and to create new solutions during the iterations of the algorithm. Section 3 provides detailed information about the proposed algorithm, the clearing method (CM), and the sharing fitness method used for the MMO of the PFSSP, as well as the features used to cluster the solutions. Section 4 presents the benchmark problems used to test the proposed algorithm and the results obtained by the various algorithms. Section 5 draws conclusions based on the results of the case studies and discusses future directions for the proposed technique.

2. Modeling of PFSSP

2.1. Encoding Method and Initial Solution Creation

In this study, the optimization objective was to find the sequence of operations on each machine that minimizes the makespan. An important task in any optimization problem is to determine the encoding method. Over the years, many different methods have been used to encode the solutions of scheduling problems, such as the direct method [29], the binary method [30], the circular method [31], and permutation with repetition [32]. Since the jobs must be processed in the same order on each machine in the PFSSP, a solution is created using permutation encoding for a single machine. The length of the chromosome (solution) is determined by the size of the problem. For example, for a 9 × 3 problem (9 jobs with 3 operations each), the chromosome length would be 9. An example permutation for the 9 × 3 problem is shown in Figure 1; according to Figure 1, the order of operations on any machine will be jobs 3, 1, 2, 9, 5, 4, 6, 8, and 7. By utilizing permutation encoding, only feasible initial solutions are created, which reduces the computational cost of the algorithm.
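As a concrete illustration, the permutation encoding and the creation of a feasible initial population can be sketched as follows (a minimal sketch; the function names are ours, not from the paper):

```python
import random

def random_solution(n_jobs):
    """Create one feasible chromosome: a random permutation of job indices 1..n."""
    chromosome = list(range(1, n_jobs + 1))
    random.shuffle(chromosome)
    return chromosome

def initial_population(pop_size, n_jobs):
    """Generate a population of feasible permutation-encoded solutions."""
    return [random_solution(n_jobs) for _ in range(pop_size)]
```

Because every chromosome is a permutation by construction, no repair step is needed before evaluation.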

2.2. Adaption of Genetic Operators

In the optimization process, the GA operators are utilized to produce the initial population, as well as the populations of subsequent generations. To create only feasible solutions during the iterations of the algorithm, appropriate crossover and mutation operators must be developed. In the crossover operator, half of the jobs of the problem under consideration are randomly selected and the corresponding genes are exchanged between the two parents. An example of the crossover process is shown in Figure 2 for a PFSSP with nine jobs. In the example, jobs 2, 3, 4, 5, 6, and 9 were randomly chosen, and the corresponding genes (3,2,4,9,5,6) in Parent A and (2,3,6,9,5,4) in Parent B were exchanged. The crossover operator was limited to solutions belonging to the same cluster, in order to preserve some features of the cluster. Similarly, in the mutation operator, half of the jobs are randomly chosen and their locations within the solution are exchanged. An example of the mutation operator is shown in Figure 3 for a PFSSP with nine jobs. As can be seen from Figure 2 and Figure 3, the crossover and mutation operators only create feasible solutions.
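One plausible reading of the operators described above can be sketched as follows. This is our interpretation of Figures 2 and 3, not the authors' code: crossover keeps the unselected jobs in place and rearranges the selected jobs into the order they take in the other parent, and mutation shuffles the selected jobs among their positions; both therefore always yield feasible permutations.

```python
import random

def crossover(parent_a, parent_b):
    """Order-based crossover: pick half the jobs at random; each child keeps
    the positions of the unselected jobs and fills the selected jobs'
    positions in the order those jobs appear in the other parent."""
    n = len(parent_a)
    selected = set(random.sample(parent_a, n // 2))
    order_in_b = [job for job in parent_b if job in selected]
    order_in_a = [job for job in parent_a if job in selected]
    child_a, child_b = parent_a[:], parent_b[:]
    it_b, it_a = iter(order_in_b), iter(order_in_a)
    for i in range(n):
        if parent_a[i] in selected:
            child_a[i] = next(it_b)
        if parent_b[i] in selected:
            child_b[i] = next(it_a)
    return child_a, child_b

def mutate(solution):
    """Pick half the jobs at random and shuffle them among their positions."""
    n = len(solution)
    positions = sorted(random.sample(range(n), n // 2))
    genes = [solution[p] for p in positions]
    random.shuffle(genes)
    mutated = solution[:]
    for p, g in zip(positions, genes):
        mutated[p] = g
    return mutated
```

Both operators return permutations whenever their inputs are permutations, which matches the feasibility claim in the text.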

3. Multimodal Optimization Methodologies

As stated earlier, an algorithm is proposed for the MMO of the PFSSP. To accomplish this, the GA is combined with the k-means clustering algorithm in order to converge to multiple optimal solutions. Additionally, the GA is also combined with niching techniques, namely sharing fitness and the CM, and the performance of the three algorithms is compared. Sharing fitness and the CM were used as they have been shown to perform well when utilized for the MMO of the JSSP [33,34]. The steps of the three algorithms are outlined below.

3.1. Clustering-Optimization Approach

In traditional GA, the elite population survives each generation, as it represents the best solutions. The majority of the offspring are generated using crossover, which has a high probability of selecting individuals with better fitness values. Though these operations ensure that the population converges towards an optimal solution, they also reduce the diversity of the population significantly. This strategy is suitable when the objective is to obtain a single optimal solution, but it is severely lacking when multiple optimal or near-optimal solutions are required.
To achieve MMO, the diversity of the elite population and of the remainder of the population must be maintained throughout the iterations of the GA. In the proposed approach, this is achieved by splitting the GA population into multiple niches, based on similarities and dissimilarities. Since clustering algorithms maximize the distance between different clusters, and assuming that different optimal solutions are highly dissimilar, a clustering algorithm can be used to split the search space into different clusters and find multiple optimal solutions, one or more in each cluster. By limiting crossover and optimization to within each niche, the high diversity of the population is maintained, as interference from other clusters is avoided. Finally, to search the solution space more effectively, mutation is performed after every iteration. The basic procedure of the proposed algorithm is given below and is also shown in Figure 4.
1. Define the fitness function of the specific case study. In this study, the maximum completion time (makespan) was used as the fitness value.
2. Define the parameters for the k-means clustering algorithm (the k value), as well as the population size, crossover rate, mutation rate, and generation limit, and initialize the best solution set.
3. Generate the initial population.
4. Evaluate the current population, calculate the feature matrix of every solution, and update the best solution set.
5. Using the k-means clustering algorithm, divide the current population into k clusters, based on the features calculated in Step 4.
6. Within each cluster, select the elite individuals that will be part of the next generation's population.
7. Within each cluster, select individuals for crossover using roulette-wheel selection to produce offspring for the next generation.
8. Randomly select individuals (based on the mutation probability) and mutate them.
9. Repeat Steps 4–8 until the generation limit is reached. Output the final population and the best solution set.
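The fitness used in the procedure above is the makespan, which for a permutation flow shop can be computed with the standard completion-time recurrence (a minimal sketch; the `proc_times` layout, with `proc_times[j][k]` as the time of job j+1 on machine k, is our assumption):

```python
def makespan(sequence, proc_times):
    """Makespan of a job permutation on m machines.
    proc_times[j][k] = processing time of job j+1 (jobs are 1-indexed
    in the sequence) on machine k."""
    n_machines = len(proc_times[0])
    # completion[k]: completion time of the previously scheduled job on machine k
    completion = [0] * n_machines
    for job in sequence:
        prev = 0  # completion time of this job on the previous machine
        for k in range(n_machines):
            completion[k] = max(completion[k], prev) + proc_times[job - 1][k]
            prev = completion[k]
    return completion[-1]
```

For example, with two jobs of processing times (2, 3) and (1, 2) on two machines, sequencing the shorter job first yields a smaller makespan, which is what the optimizer searches for.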
To cluster solutions together, a feature matrix must be defined. Typically, in MMO, the distance between two individuals is used as the feature for clustering. However, with the proposed encoding method for the PFSSP, such a distance cannot be defined directly; therefore, the feature matrix used to cluster the solutions in the proposed algorithm is defined as the sequence of jobs on a particular machine. For example, the feature matrix for Parent A shown in Figure 2 would be (1,3,2,4,9,5,8,6,7). An example of applying the proposed algorithm is shown in Figure 5. It should be noted that clustering does not improve the efficiency of the algorithm; however, efficiency is not the top priority in this study. The focus is on obtaining multiple, different solutions with the same objective value.
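Clustering on this feature matrix can be sketched with a small k-means over the job-sequence vectors. This is a naive stdlib version for illustration only; in practice a library implementation such as scikit-learn's `KMeans` could be used. Note that the centroid of several permutations is generally not itself a permutation, which is acceptable here because the centroids are only used to group solutions, never as schedules:

```python
import random

def kmeans_clusters(features, k, iters=20):
    """Naive k-means over integer feature vectors (the job sequences).
    Returns a list of k clusters, each a list of indices into `features`."""
    centroids = [list(f) for f in random.sample(features, k)]
    assignment = [0] * len(features)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, f in enumerate(features):
            assignment[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])),
            )
        # Update step: recompute each centroid as the mean of its members.
        for c in range(k):
            members = [features[i] for i in range(len(features)) if assignment[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    clusters = [[] for _ in range(k)]
    for i, c in enumerate(assignment):
        clusters[c].append(i)
    return clusters
```

The GA operators of Section 2.2 would then be applied separately to the members of each returned cluster.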

3.2. Sharing Fitness

In the sharing fitness method, the fitness of an individual decreases in accordance with the number of similar individuals in the population. Figure 6 shows the overall flow chart of the sharing fitness method.
After evaluating the individuals of the current population, their shared fitness is calculated according to Equation (1):
$$ f_i^{*} = \frac{f_i}{\sum_{j=1}^{N} Sh(d(i,j))} $$
where fi is the fitness of individual i and Sh(d(i,j)) is the sharing function, which depends on the distance d(i,j) between individuals i and j. The sharing function is calculated using Equation (2).
$$ Sh(d(i,j)) = \begin{cases} 1 - \left( \frac{d(i,j)}{\sigma_{share}} \right)^{\alpha} & \text{if } d(i,j) < \sigma_{share} \\ 0 & \text{otherwise} \end{cases} $$
where σ_share is the maximum distance of a niche and α is the slope of the sharing function. A value of 1 is most commonly used for the slope, while σ_share depends on the problem under consideration. Next, based on the new fitness f_i*, parents are selected for crossover and mutation using the roulette-wheel selection method. Once offspring are generated using the modified crossover and mutation operators, their fitness is evaluated. Next, the fitness values of the parents and offspring are updated to f_i** using Equation (3).
$$ f_i^{**} = \beta \, \frac{f_i}{\sum_{j=1}^{2N} f_j} + (1 - \beta) \, \frac{\frac{1}{2N \, SH_i}}{\frac{1}{2N} \sum_{j=1}^{2N} \frac{1}{SH_j}} $$
where
$$ SH_i = \sum_{j=1}^{2N} Sh(d(i,j)) $$
The distance in Equations (1)–(4) is problem dependent; for sharing fitness and the CM, it was defined as the number of jobs situated in different positions in the two sequences. For example, the distance between Parents A and B shown in Figure 2 would be 8 (only job 1 is located in the same place in the two solutions). This definition of distance was used for the CM and the sharing fitness method, as it was shown to provide a large number of optimal solutions when used by Perez et al. for the MMO of the JSSP.
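The distance and the sharing computation of Equations (1) and (2) can be sketched as follows (assuming a maximization-style fitness, as in the original sharing formulation; `sigma_share` and `alpha` are passed in as parameters):

```python
def distance(sol_i, sol_j):
    """Number of positions at which two job sequences place different jobs."""
    return sum(a != b for a, b in zip(sol_i, sol_j))

def sharing_function(d, sigma_share, alpha=1.0):
    """Sh(d) from Equation (2): triangular kernel, zero beyond sigma_share."""
    return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0

def shared_fitness(fitness, population, sigma_share, alpha=1.0):
    """Shared fitness f* from Equation (1): each individual's raw fitness is
    divided by its niche count (the sum of Sh over the whole population)."""
    shared = []
    for i, sol_i in enumerate(population):
        niche = sum(
            sharing_function(distance(sol_i, sol_j), sigma_share, alpha)
            for sol_j in population
        )
        shared.append(fitness[i] / niche)
    return shared
```

An individual alone in its niche keeps its raw fitness, while duplicates see their shared fitness divided by the niche size, which pushes the selection pressure towards under-explored regions.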

3.3. Clearing Method

The CM is similar to sharing fitness but is based on the concept of the limited resources of an environment. The CM is applied after all individuals are evaluated but before the selection process. In the CM, the individuals are ordered according to their fitness, from best to worst. The first individual is called the dominant (and belongs to the first niche), and it is compared with the remaining n−1 individuals to determine whether they belong to the first niche. Only k individuals from each niche move forward to the next generation as elite individuals, while the rest of the individuals belonging to that niche have their fitness reset to zero. The above procedure is repeated for individuals whose fitness is greater than zero. A pseudocode of the CM is given below, and a flowchart is given in Figure 7.
Sort P(t) by fitness (best to worst)
FOR (i = 1 to N)
  IF (fi > 0)
    NicheCapacityi = 1
    FOR (j = i + 1 to N)
      IF (fj > 0 AND d(i,j) < σ_share)
        IF (NicheCapacityi < k)
          NicheCapacityi = NicheCapacityi + 1
        ELSE
          fj = 0
        ENDIF
      ENDIF
    ENDFOR
  ENDIF
ENDFOR
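A direct Python transcription of this pseudocode might look as follows (assuming, as the pseudocode does, that larger fitness values are better and that cleared individuals are marked by a fitness of zero; the distance function is passed in as a parameter):

```python
def clearing(population, fitness, sigma_share, k, distance):
    """Clearing: keep at most k best individuals per niche; reset the
    fitness of the remaining niche members to zero. Returns the cleared
    fitness list; `distance` is any pairwise distance function."""
    # Process individuals from best to worst fitness.
    order = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
    cleared = fitness[:]
    for idx, i in enumerate(order):
        if cleared[i] <= 0:
            continue  # already cleared by a better dominant
        capacity = 1  # the dominant itself occupies one slot of the niche
        for j in order[idx + 1:]:
            if cleared[j] > 0 and distance(population[i], population[j]) < sigma_share:
                if capacity < k:
                    capacity += 1
                else:
                    cleared[j] = 0
    return cleared
```

After clearing, selection proceeds as usual on the cleared fitness values, so crowded niches contribute at most k candidates each.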

4. Experimental Study and Performance Benchmarking

To measure the performance of the algorithms outlined above, they were used for the MMO of multiple PFSSPs. Eighteen benchmark problems from various datasets were used in the experiments: the first eight problems were proposed by Carlier [35], the next nine by Reeves [36], and the last one by Heller [37]. To better assess the algorithms, 10 independent simulations were performed for each test problem. The performance of the algorithms was then measured on the basis of four indicators: (1) the best solution found in the 10 simulations; (2) the mean value and the standard deviation of the optimal solutions found in the 10 simulations; (3) the number of times the algorithm converged to the best optimal solution; and (4) the average number of best optimal solutions found in the 10 simulations. For indicator (4), only simulations in which the algorithm converged to the best solution from (1) were taken into consideration; i.e., if the algorithm only converged to the best solution in 4 out of 10 simulations, then the average number of optimal solutions found was calculated using those 4 simulations. The parameters used for the three algorithms were determined after numerous simulations and are given in Table 1. Additionally, the performance of the three algorithms was also compared to that of the Improved Efficient Genetic Algorithm (IEGA) and the Hybrid IEGA (HIEGA) proposed by Basset et al. [38], where applicable. These algorithms were selected as they showed promising results when used for the optimization of benchmark PFSSPs.

4.1. The Best Solution Found

As mentioned above, 10 independent simulations were run for each test problem using the three algorithms. The first method used to compare the performance of these algorithms was observing the best solution obtained during the 10 simulations. The results obtained by these algorithms were also compared to the best-known solution, as well as the best-known solutions obtained using different algorithms in the available literature. These results are shown in Table 2.
According to Table 2, the proposed algorithm, the sharing fitness algorithm, and HIEGA were able to find the best-known solutions for all Carlier benchmark problems. IEGA converged to the best-known solution for all Carlier problems besides Car-03, while the CM found the best-known solution for Car-01, 05, 06, 07, and 08. For the Reeves problems, HIEGA had the best performance, as it converged to the best-known solution for 6 out of the 9 problems. The proposed algorithm obtained the best-known solution for Rec-07, Rec-09, and Rec-11, obtained a better solution than HIEGA for Rec-17, and converged to within 1% of the best-known solution for the remainder of the problems. Sharing fitness, IEGA, and the CM were unable to converge to any of the best-known solutions for the Reeves problems, remaining between 4% and 15% above them. Similarly, only HIEGA converged to the best-known solution of the Heller problem; however, the proposed algorithm came within 0.7% of the best solution, while sharing fitness, IEGA, and the CM did not.

4.2. Mean Value and Standard Deviation of the Optimal Solution

Next, to evaluate the stability of the algorithms, the mean value and the standard deviation of the optimal solutions found in the 10 simulations were calculated. Since these were 10 independent simulations, the starting points were randomized in each simulation. These results are shown in Table 3.
Even though HIEGA performed better than the proposed algorithm in converging to the best-known solutions, the proposed algorithm was able to consistently converge to a better solution than HIEGA, as indicated by the mean value and standard deviation. For the Reeves problems, the proposed algorithm had a better performance than HIEGA. For 7 out of 9 problems, its average optimal value was much closer to the best-known solution than those of HIEGA, IEGA, sharing fitness, and the CM. The standard deviation varied between 0.00 and 16.54 for the proposed algorithm, while it varied between 4.64 and 35.10 for HIEGA, 11.74 and 29.53 for IEGA, 15.61 and 39.63 for sharing fitness, and 20.34 and 30.15 for the CM. The proposed algorithm had the best results for all Carlier problems, as it always converged to the best-known solution. HIEGA had a 100% convergence rate for 4 out of the 8 problems; sharing fitness had a 100% convergence rate to the best-known solutions of problems 1, 2, and 8; while the CM and IEGA had a 100% convergence rate to the best-known solutions for 2 out of 8 problems. Lastly, for the Heller problem, the proposed algorithm consistently converged to within 0.7% of the best-known solution; however, HIEGA converged to a better average. These results show the robustness of the proposed algorithm, i.e., its ability to consistently converge to a better optimal solution.

4.3. Number of Times the Algorithm Converged to the Optimal Solution

To ensure that the algorithms could find multiple solutions consistently, the number of simulations in which an algorithm converged to the best-known solution (NC) was also recorded. Since none of the algorithms converged to the majority of the best-known solutions for the Reeves problems, the best optimal solution found was used instead. For example, the best-known solution for Rec-01 was 1247, while the best optimal solution found by the algorithms was 1249; therefore, 1249 was used to calculate NC. The results for NC are given in Table 4. A complete analysis for HIEGA and IEGA could not be performed for this metric, as these results were not reported by the authors (indicated by 'NA' in Table 4). However, some inferences could be made from the reported results. For example, we could assume that both HIEGA and IEGA always converged to the best-known solution for Car-01 and Car-02, as the mean values were equal to the best-known solution and the standard deviations were zero.
Since the CM had the worst optimal solutions of the three algorithms for the Reeves problems, its NC was 0. Similarly, sharing fitness was unable to find the best optimal solution for all but problem 1, so its NC for those problems was 0. The proposed algorithm had the best performance of all algorithms (where results were available): it had an NC of 10 for problems 7 and 9; 1 for problems 5, 13, and 17; 2 for problems 1 and 15; 4 for problem 11; and 8 for problem 3.
The proposed algorithm had an NC of 10 for all Carlier problems, indicating a 100% convergence rate to the best-known solution for those problems. Sharing fitness also had an NC of 10 for problems 1, 2, and 8, and an NC of 4, 9, 3, 3, and 8 for the remaining problems. The CM had the worst performance, as it was unable to converge to the best-known solution for problems 2, 3, and 4; it had an NC of 2, 6, and 9 for problems 1, 5, and 6, respectively, and an NC of 10 for problems 7 and 8. Lastly, for the Heller problem, the proposed algorithm had an NC of 10, while the other two algorithms had an NC of 0. These results further solidified the argument that the proposed algorithm had the best performance of the three algorithms, as it was able to consistently converge to the best optimal solution.

4.4. Average Number of Best Optimal Solutions Found

Lastly, the average number of best optimal solutions (NO) found for each test problem using the different algorithms was compared. For this index, only simulations with an NC greater than 0 were used. For example, since the proposed algorithm had an NC of 2 for Rec-01, its NO was calculated using the results of those two simulations. The NO values for the different test problems using the different algorithms are given in Table 5. Since the purpose of HIEGA and IEGA was only to find a single optimal solution, they were excluded from this metric.
As with the previous three indicators, the proposed algorithm performed significantly better than sharing fitness and the CM. Though sharing fitness found 18 solutions for Rec-01, it was unable to find multiple best optimal solutions for the rest of the Reeves problems. The proposed algorithm was able to find 2–6.8 best optimal solutions on average for all Reeves problems, while the CM was unable to find multiple best optimal solutions for any of them, as it never converged to the best optimal solution.
Though both the CM and sharing fitness found more optimal solutions than the proposed algorithm for Car-01, their performance was worse for problems 2, 3, 4, and 5. The proposed algorithm found 30.40, 6.20, 13.50, and 1.90 different best-known solutions on average for those problems, while sharing fitness only obtained 9.89, 1.25, 8.67, and 1.00, and the CM obtained 0.00 for all. All algorithms were only able to find 1.00 optimal solution for problems 6–8, indicating that the global optimal solution is unique. Lastly, for the Heller problem, the proposed algorithm again had the best performance, as it obtained 6.30 different optimal solutions on average, while both sharing fitness and the CM obtained 0.00. These results indicate that the proposed algorithm was able to consistently converge to the best optimal solution for all test problems while finding multiple best optimal solutions.
An attempt was also made to combine the proposed algorithm with sharing fitness, to take advantage of both algorithms, i.e., the ability of the proposed algorithm to consistently converge to the best optimal solution and the ability of sharing fitness to find multiple best optimal solutions. This was accomplished by calculating the distance used in the sharing fitness method from the feature matrix of the proposed method. This distance is calculated using Equation (5).
$$ d(i,j) = \left\| F_i - F_j \right\| $$
where Fi is the feature matrix of individual i and Fj is the feature matrix of individual j. For the hybrid algorithm, the value of σ_share was changed to 10. The performance of the hybrid algorithm was then compared to that of sharing fitness and the proposed algorithm, and the results for the four indicators are given in Table 6, Table 7, Table 8 and Table 9.
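Under the reading of Equation (5) as the Euclidean norm of the difference between the two job-sequence feature vectors (our interpretation of the norm symbol), the hybrid distance can be sketched as:

```python
import math

def feature_distance(f_i, f_j):
    """Euclidean norm of the difference between two feature matrices
    (job-sequence vectors), as in Equation (5)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f_i, f_j)))
```

This scalar distance then replaces d(i,j) in the sharing function of Equation (2), with σ_share set to 10 as stated above.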
As can be seen from the results above, the new hybrid algorithm was able to achieve the same best optimal makespan for the Carlier problems, but not for the Reeves and Heller problems. In terms of the mean value and standard deviation of the optimal solution found, the new hybrid algorithm performed similarly to the proposed algorithm for Carlier problems 1, 4, and 8, while it had a higher mean value and standard deviation for the rest of the benchmark problems. When comparing NC, the new hybrid algorithm was able to converge to the best optimal solutions in all 10 simulations for Carlier problems 1, 4, and 8, but was unable to achieve similar results for the remaining problems. In terms of NO, the new hybrid algorithm was unable to obtain more optimal solutions for any benchmark problem. These results indicate that the new hybrid algorithm did not perform better than either sharing fitness or the proposed algorithm. The reason the proposed algorithm was able to outperform sharing fitness, the CM, and the hybrid algorithm was its use of a feature vector, rather than a single scalar value, to separate the individuals into different clusters.
In this section, the performance of the three algorithms was measured by their application to 18 benchmark PFSSP. Four indicators were used to measure the performance—(1) best optimal found, (2) mean value and standard deviation of the optimal solution found in 10 simulations, (3) number of simulations in which the algorithm converged to the best optimal solution, and (4) average number of best optimal solution found. The results indicated that the proposed algorithm had a better performance than the sharing fitness and CM, and also a hybrid algorithm created by combining the proposed algorithm and sharing fitness.

5. Conclusions

In this study, a new algorithm was proposed for the MMO of PFSSP, i.e., to provide multiple optimal schedules with the same objective value. The proposed algorithm utilizes GA for the optimization procedure, with the k-means clustering algorithm being used to cluster the individuals of each generation. Unlike standard GA, where all individuals belong to the same cluster, the individuals in the proposed approach are split into multiple clusters, and the crossover operator is restricted to individuals belonging to the same cluster. Doing so enables the proposed algorithm to potentially find multiple global optima, one or more in each cluster. Sharing fitness and CM, utilized by Pérez et al. [33,34] for the MMO of JSSP, were also modified and used for the MMO of PFSSP. The performance of the three algorithms was tested on various benchmark PFSSP, and the results indicated that the proposed algorithm had a better performance than CM and sharing fitness, in terms of its ability to find the best-known optimal solution, consistently provide a better optimal solution, and converge to multiple optimal solutions.
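The per-generation procedure summarized above can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' exact implementation: the job-position feature vector, the minimal k-means, the OX crossover, the swap mutation, and the uniform mate choice are all placeholders, and fitness-based selection is omitted; the makespan recurrence, however, is the standard one for permutation flow shops.

```python
import random
import numpy as np

def makespan(perm, p_times):
    """Makespan of a permutation flow shop; p_times[job, machine] = processing time."""
    c = np.zeros(p_times.shape[1])
    for job in perm:
        for m in range(p_times.shape[1]):
            prev = c[m - 1] if m > 0 else 0.0
            c[m] = max(c[m], prev) + p_times[job, m]
    return c[-1]

def job_position_features(perm):
    """Placeholder feature vector: the position of each job in the sequence."""
    pos = np.empty(len(perm))
    for i, job in enumerate(perm):
        pos[job] = i
    return pos

def kmeans_labels(X, k, iters=20, seed=0):
    """Minimal k-means; returns one cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return labels

def order_crossover(p1, p2, rng):
    """OX crossover: copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    segment = set(p1[a:b])
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = iter(g for g in p2 if g not in segment)
    return [g if g is not None else next(fill) for g in child]

def swap_mutation(perm, rng, p_mut=0.1):
    """Swap two random jobs with probability p_mut."""
    child = list(perm)
    if rng.random() < p_mut:
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

def next_generation(pop, k=3, rng=None):
    """One generation: cluster individuals on their features, then restrict
    crossover to parents drawn from the same cluster."""
    rng = rng or random.Random(0)
    X = np.array([job_position_features(ind) for ind in pop])
    labels = kmeans_labels(X, k)
    new_pop = []
    for c in sorted(set(labels)):
        members = [pop[i] for i in np.flatnonzero(labels == c)]
        for parent in members:
            mate = members[rng.randrange(len(members))]  # mate from the SAME cluster
            new_pop.append(swap_mutation(order_crossover(parent, mate, rng), rng))
    return new_pop
```

Because each cluster only recombines with itself, different clusters are free to converge to different schedules with the same makespan, which is what makes multiple global optima reachable in a single run.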
As the sharing fitness method had a better performance on a few problems in terms of finding more optimal solutions, an attempt was made to combine the sharing fitness method with the proposed algorithm, by using the feature matrix of the proposed algorithm in the sharing fitness method. When applied to the MMO of benchmark PFSSP, the new hybrid algorithm was unable to outperform or equal the proposed algorithm.
Though the proposed algorithm had the best performance of the algorithms tested, further research needs to be performed in the following areas:
  • Introduction of more rigorous similarity metrics for more efficient clustering.
  • Comparison of the performance of the proposed approach to algorithms specifically developed for the optimization of PFSSP.
  • Embedding the proposed approach in algorithms specifically developed for the optimization of PFSSP, to achieve the MMO of PFSSP.
  • Applying the proposed approach to real-life operational environments, testing the robustness of the proposed algorithm and its parameters, and measuring its performance under different criteria.

Author Contributions

P.Z. and M.R.: analysis of the data, development of the code, and writing of the manuscript. S.Y.L.: guidance for the proposed approach and comments on the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Datasets used in this paper are available at http://mariusz.makuchowski.staff.iiar.pwr.wroc.pl/download/courses/sterowanie.procesami.dyskretnymi/ (accessed on 22 February 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Luh, G.C.; Chueh, C.H. A multi-modal immune algorithm for the job-shop scheduling problem. Inf. Sci. 2009, 179, 1516–1532.
  2. Liu, Y.F.; Liu, S.Y. A hybrid discrete artificial bee colony algorithm for permutation flowshop scheduling problem. Appl. Soft Comput. 2013, 13, 1459–1463.
  3. Govindan, K.; Balasundaram, R.; Baskar, N.; Asokan, P. A hybrid approach for minimizing makespan in permutation flowshop scheduling. J. Syst. Sci. Syst. Eng. 2017, 26, 50–76.
  4. Ancău, M. On solving flowshop scheduling problems. Proc. Rom. Acad. Ser. A 2012, 13, 71–79.
  5. Zobolas, G.I.; Tarantilis, C.D.; Ioannou, G. Minimizing makespan in permutation flow shop scheduling problems using a hybrid metaheuristic algorithm. Comput. Oper. Res. 2009, 36, 1249–1267.
  6. Zhang, X.; Li, X.T.; Yin, M.H. An enhanced genetic algorithm for the distributed assembly permutation flowshop scheduling problem. Int. J. Bio-Inspired Comput. 2020, 15, 113–124.
  7. Andrade, C.E.; Silva, T.; Pessoa, L.S. Minimizing flowtime in a flowshop scheduling problem with a biased random-key genetic algorithm. Expert Syst. Appl. 2019, 128, 67–80.
  8. Ruiz, R.; Pan, Q.K.; Naderi, B. Iterated Greedy methods for the distributed permutation flowshop scheduling problem. Omega 2019, 83, 213–222.
  9. Pan, Q.K.; Gao, L.; Wang, L.; Liang, J.; Li, X.Y. Effective heuristics and metaheuristics to minimize total flowtime for the distributed permutation flowshop problem. Expert Syst. Appl. 2019, 124, 309–324.
  10. Qu, B.Y.; Liang, J.J.; Suganthan, P.N. Niching particle swarm optimization with local search for multi-modal optimization. Inf. Sci. 2012, 197, 131–143.
  11. Epitropakis, M.G.; Li, X.; Burke, E.K. A dynamic archive niching differential evolution algorithm for multimodal optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 79–86.
  12. Li, X. Niching without niching parameters: Particle swarm optimization using a ring topology. IEEE Trans. Evol. Comput. 2010, 14, 150–169.
  13. Bandaru, S.; Deb, K. A parameterless-niching-assisted bi-objective approach to multimodal optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 95–102.
  14. Wang, Z.J.; Zhan, Z.H.; Lin, Y.; Yu, W.J.; Wang, H.; Kwong, S.; Zhang, J. Automatic niching differential evolution with contour prediction approach for multimodal optimization problems. IEEE Trans. Evol. Comput. 2019, 24, 114–128.
  15. Liu, Q.; Du, S.; van Wyk, B.J.; Sun, Y. Niching particle swarm optimization based on Euclidean distance and hierarchical clustering for multimodal optimization. Nonlinear Dyn. 2020, 99, 2459–2477.
  16. Patra, G.R.; Mohanty, M.N. An Enhanced Local Neighborhood-Based Niching Particle Swarm Optimization Algorithm for Multimodal Fitness Surfaces. In Proceedings of the Second International Conference on Information Management and Machine Intelligence; Springer: Singapore, 2021; pp. 367–373.
  17. Qu, B.Y.; Suganthan, P.N.; Liang, J.J. Differential evolution with neighborhood mutation for multimodal optimization. IEEE Trans. Evol. Comput. 2012, 16, 601–614.
  18. Islam, S.M.; Das, S.; Ghosh, S.; Roy, S.; Suganthan, P.N. An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2012, 42, 482–500.
  19. Civicioglu, P.; Besdok, E. A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif. Intell. Rev. 2013, 39, 315–346.
  20. Basak, A.; Das, S.; Tan, K.C. Multimodal optimization using a biobjective differential evolution algorithm enhanced with mean distance-based selection. IEEE Trans. Evol. Comput. 2013, 17, 666–685.
  21. Liang, J.; Xu, W.; Yue, C.; Yu, K.; Song, H.; Crisalle, O.D.; Qu, B. Multimodal multiobjective optimization with differential evolution. Swarm Evol. Comput. 2019, 44, 1028–1059.
  22. Chen, Z.G.; Zhan, Z.H.; Wang, H.; Zhang, J. Distributed individuals for multiple peaks: A novel differential evolution for multimodal optimization problems. IEEE Trans. Evol. Comput. 2019, 24, 708–719.
  23. Zhao, H.; Zhan, Z.H.; Lin, Y.; Chen, X.; Luo, X.N.; Zhang, J.; Kwong, S.; Zhang, J. Local binary pattern-based adaptive differential evolution for multimodal optimization problems. IEEE Trans. Cybern. 2019, 50, 3343–3357.
  24. Li, Z.; Shi, L.; Yue, C.; Shang, Z.; Qu, B. Differential evolution based on reinforcement learning with fitness ranking for solving multimodal multiobjective problems. Swarm Evol. Comput. 2019, 49, 234–244.
  25. Liang, Y.; Leung, K.S. Genetic Algorithm with adaptive elitist-population strategies for multimodal function optimization. Appl. Soft Comput. 2011, 11, 2017–2034.
  26. Yao, J.; Kharma, N.; Grogono, P. Bi-objective multipopulation genetic algorithm for multimodal function optimization. IEEE Trans. Evol. Comput. 2010, 14, 80–102.
  27. Thakur, M. A new genetic algorithm for global optimization of multimodal continuous functions. J. Comput. Sci. 2014, 5, 298–311.
  28. Bian, Q.; Nener, B.; Wang, X. A quantum inspired genetic algorithm for multimodal optimization of wind disturbance alleviation flight control system. Chin. J. Aeronaut. 2019, 32, 2480–2488.
  29. Bruns, R. Direct chromosome representation and advanced genetic operators for production scheduling. In Proceedings of the 5th International Conference on Genetic Algorithms; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1993; pp. 352–359.
  30. Nakano, R.; Yamada, T. Conventional genetic algorithm for job shop problems. ICGA 1991, 91, 474–479.
  31. Fang, H.L.; Ross, P.; Corne, D. A Promising Genetic Algorithm Approach to Job-Shop Scheduling, Rescheduling, and Open-Shop Scheduling Problems; University of Edinburgh, Department of Artificial Intelligence: Edinburgh, UK, 1993; pp. 375–382.
  32. Mattfeld, D.C. Evolutionary Search and the Job Shop: Investigations on Genetic Algorithms for Production Scheduling; Springer Science & Business Media, Physica-Verlag: Heidelberg, Germany, 2013.
  33. Pérez, E.; Posada, M.; Herrera, F. Analysis of new niching genetic algorithms for finding multiple solutions in the job shop scheduling. J. Intell. Manuf. 2012, 23, 341–356.
  34. Pérez, E.; Herrera, F.; Hernández, C. Finding multiple solutions in job shop scheduling by niching genetic algorithms. J. Intell. Manuf. 2003, 14, 323–339.
  35. Carlier, J. Ordonnancements à contraintes disjonctives. RAIRO-Oper. Res. 1978, 12, 333–350.
  36. Reeves, C.R. A genetic algorithm for flow shop sequencing. Comput. Oper. Res. 1995, 22, 5–13.
  37. Heller, J. Some numerical experiments for an M × J flow shop and its decision-theoretical aspects. Oper. Res. 1960, 8, 178–184.
  38. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M.; Chakrabortty, R.K.; Ryan, M.J. A Simple and Effective Approach for Tackling the Permutation Flow Shop Scheduling Problem. Mathematics 2021, 9, 270.
Figure 1. An example of permutation encoding in a 9 × 3 problem.
Figure 2. Crossover operator adapted to permutation coding in a 9 × 3 FSSP.
Figure 3. Mutation operator adapted to permutation coding in a 9 × 3 FSSP.
Figure 4. Flow chart of the proposed algorithm.
Figure 5. Example of the proposed algorithm.
Figure 6. Sharing fitness procedure [33].
Figure 7. Clearing method procedure [33].
Table 1. Parameters used for the three algorithms.

| Algorithm | Generations | Population Size | Algorithm-Specific Parameters |
|---|---|---|---|
| Proposed Algorithm | 1000 | 200 | Number of clusters = 3 |
| Sharing Fitness | 1000 | 200 | σshare = 0, β = 1 |
| CM | 1000 | 200 | σshare = 50, maximum niche size = 1 |
Table 2. Best optimal solution obtained by the different algorithms.

| Problem | Best-Known Solution | Proposed Algorithm | HIEGA | IEGA | Sharing Fitness | CM |
|---|---|---|---|---|---|---|
| Rec-01 | 1247 | 1249 | 1247 | 1312 | 1249 | 1379 |
| Rec-03 | 1109 | 1111 | 1109 | 1111 | 1114 | 1187 |
| Rec-05 | 1242 | 1245 | 1245 | 1257 | 1245 | 1307 |
| Rec-07 | 1566 | 1566 | 1566 | 1622 | 1589 | 1733 |
| Rec-09 | 1537 | 1537 | 1537 | 1616 | 1600 | 1706 |
| Rec-11 | 1431 | 1431 | 1431 | 1565 | 1475 | 1614 |
| Rec-13 | 1930 | 1954 | 1930 | 2020 | 1980 | 2145 |
| Rec-15 | 1950 | 1962 | 1959 | 2043 | 2007 | 2156 |
| Rec-17 | 1902 | 1919 | 1935 | 2030 | 1989 | 2178 |
| Car-01 | 7038 | 7038 | 7038 | 7038 | 7038 | 7038 |
| Car-02 | 7166 | 7166 | 7166 | 7166 | 7166 | 7565 |
| Car-03 | 7312 | 7312 | 7312 | 7366 | 7312 | 7399 |
| Car-04 | 8003 | 8003 | 8003 | 8003 | 8003 | 8410 |
| Car-05 | 7720 | 7720 | 7720 | 7720 | 7720 | 7720 |
| Car-06 | 8505 | 8505 | 8505 | 8505 | 8505 | 8505 |
| Car-07 | 6590 | 6590 | 6590 | 6590 | 6590 | 6590 |
| Car-08 | 8366 | 8366 | 8366 | 8366 | 8366 | 8366 |
| Hel-02 | 136 | 137 | 136 | 140 | 139 | 154 |
Table 3. Mean value and standard deviation of the optimal solution found after 10 independent simulations using the algorithms.

| Problem | Proposed Algorithm | HIEGA | IEGA | Sharing Fitness | CM |
|---|---|---|---|---|---|
| Rec-01 | 1249.00 ± 0.00 | 1250.33 ± 4.83 | 1335.26 ± 14.53 | 1302.30 ± 30.65 | 1429.40 ± 29.14 |
| Rec-03 | 1111.00 ± 0.00 | 1111.40 ± 4.64 | 1153.93 ± 13.92 | 1136.4 ± 15.61 | 1246.50 ± 29.01 |
| Rec-05 | 1245.00 ± 0.00 | 1250.00 ± 5.48 | 1284.93 ± 11.74 | 1266.20 ± 39.63 | 1360.90 ± 25.59 |
| Rec-07 | 1579.00 ± 8.07 | 1578.00 ± 8.48 | 1655.17 ± 20.45 | 1622.20 ± 25.21 | 1771.20 ± 24.48 |
| Rec-09 | 1567.70 ± 16.54 | 1559.70 ± 21.51 | 1673.93 ± 22.15 | 1625.40 ± 12.29 | 1767.60 ± 30.15 |
| Rec-11 | 1440.80 ± 11.78 | 1451.40 ± 17.78 | 1598.47 ± 18.99 | 1505.2 ± 21.82 | 1669.00 ± 29.45 |
| Rec-13 | 1956.90 ± 2.28 | 1960.23 ± 15.68 | 2076.93 ± 29.53 | 2029.50 ± 30.87 | 2218.70 ± 38.50 |
| Rec-15 | 1974.30 ± 6.31 | 1997.36 ± 35.10 | 2084.50 ± 19.42 | 2026.80 ± 12.92 | 2189.30 ± 20.34 |
| Rec-17 | 1944.30 ± 11.62 | 1968.90 ± 25.98 | 2079.47 ± 24.35 | 2024.60 ± 20.67 | 2219.13 ± 21.71 |
| Car-01 | 7038.00 ± 0.00 | 7038.00 ± 0.00 | 7038.00 ± 0.00 | 7038.00 ± 0.00 | 7468.50 ± 276.02 |
| Car-02 | 7166.00 ± 0.00 | 7166.00 ± 0.00 | 7289.60 ± 94.77 | 7166.00 ± 0.00 | 7862.30 ± 163.70 |
| Car-03 | 7312.00 ± 0.00 | 7338.30 ± 28.71 | 7516.17 ± 85.49 | 7351.40 ± 43.06 | 7832.50 ± 266.94 |
| Car-04 | 8803.00 ± 0.00 | 8803.00 ± 0.00 | 8012.13 ± 22.40 | 8813.40 ± 32.89 | 8468.30 ± 70.84 |
| Car-05 | 7720.00 ± 0.00 | 7743.07 ± 33.40 | 7802.87 ± 49.14 | 7759.70 ± 38.77 | 7731.20 ± 16.48 |
| Car-06 | 8505.00 ± 0.00 | 8528.83 ± 31.23 | 8684.50 ± 147.22 | 8550.50 ± 31.40 | 8511.50 ± 20.55 |
| Car-07 | 6590.00 ± 0.00 | 6591.77 ± 0.00 | 6602.37 ± 22.42 | 6600.60 ± 22.37 | 6590.00 ± 0.00 |
| Car-08 | 8366.00 ± 0.00 | 8366.00 ± 0.00 | 8366.00 ± 0.00 | 8366.00 ± 0.00 | 8366.00 ± 0.00 |
| Hel-02 | 137.00 ± 0.00 | 136.87 ± 0.72 | 143.66 ± 1.77 | 143.00 ± 1.89 | 155.90 ± 1.45 |
Table 4. Number of simulations in which the algorithms were able to converge to the best optimal solution (NC).

| Problem | Proposed Algorithm | HIEGA | IEGA | Sharing Fitness | CM |
|---|---|---|---|---|---|
| Rec-01 | 2 | NA | NA | 1 | 0 |
| Rec-03 | 8 | NA | NA | 0 | 0 |
| Rec-05 | 1 | NA | NA | 0 | 0 |
| Rec-07 | 10 | NA | NA | 0 | 0 |
| Rec-09 | 10 | NA | NA | 0 | 0 |
| Rec-11 | 4 | NA | NA | 0 | 0 |
| Rec-13 | 1 | NA | NA | 0 | 0 |
| Rec-15 | 2 | NA | NA | 0 | 0 |
| Rec-17 | 1 | NA | NA | 0 | 0 |
| Car-01 | 10 | 10 | 10 | 10 | 2 |
| Car-02 | 10 | 10 | NA | 10 | 0 |
| Car-03 | 10 | NA | NA | 4 | 0 |
| Car-04 | 10 | 10 | NA | 9 | 0 |
| Car-05 | 10 | NA | NA | 3 | 6 |
| Car-06 | 10 | NA | NA | 3 | 9 |
| Car-07 | 10 | 0 | NA | 8 | 10 |
| Car-08 | 10 | 10 | 10 | 10 | 10 |
| Hel-02 | 10 | NA | NA | 0 | 0 |
Table 5. Average number of best optimal solutions found (NO) using different algorithms.

| Problem | Proposed Algorithm | Sharing Fitness | CM |
|---|---|---|---|
| Rec-01 | 6.60 | 18.00 | 0.00 |
| Rec-03 | 6.80 | 0.00 | 0.00 |
| Rec-05 | 4.20 | 0.00 | 0.00 |
| Rec-07 | 3.00 | 0.00 | 0.00 |
| Rec-09 | 4.00 | 0.00 | 0.00 |
| Rec-11 | 3.00 | 0.00 | 0.00 |
| Rec-13 | 3.00 | 0.00 | 0.00 |
| Rec-15 | 2.00 | 0.00 | 0.00 |
| Rec-17 | 3.00 | 0.00 | 0.00 |
| Car-01 | 37.90 | 40.00 | 58.50 |
| Car-02 | 30.40 | 9.89 | 0.00 |
| Car-03 | 6.20 | 1.25 | 0.00 |
| Car-04 | 13.5 | 8.67 | 0.00 |
| Car-05 | 1.90 | 1.00 | 1.00 |
| Car-06 | 1.00 | 1.00 | 1.00 |
| Car-07 | 1.00 | 1.00 | 1.00 |
| Car-08 | 1.00 | 1.00 | 1.00 |
| Hel-02 | 6.30 | 0 | 0.00 |
Table 6. Comparison of the best optimal solution found using the new hybrid algorithm and the previous three algorithms.

| Problem | Best Optimal Solution Found Using the New Hybrid Algorithm | Best Optimal Solution Found Using the Previous Three Algorithms |
|---|---|---|
| Rec-01 | 1280 | 1249 |
| Rec-03 | 1124 | 1111 |
| Rec-05 | 1249 | 1245 |
| Rec-07 | 1584 | 1566 |
| Rec-09 | 1590 | 1537 |
| Rec-11 | 1457 | 1431 |
| Rec-13 | 2003 | 1954 |
| Rec-15 | 2012 | 1962 |
| Rec-17 | 2008 | 1919 |
| Car-01 | 7038 | 7038 |
| Car-02 | 7166 | 7166 |
| Car-03 | 7312 | 7312 |
| Car-04 | 8803 | 8803 |
| Car-05 | 7720 | 7720 |
| Car-06 | 8505 | 8505 |
| Car-07 | 6590 | 6590 |
| Car-08 | 8366 | 8366 |
| Hel-02 | 139 | 137 |
Table 7. Comparison of the mean value and standard deviation of the optimal solutions found using the new hybrid algorithm and the previous three algorithms.

| Problem | Best Mean Value and Standard Deviation Using the New Hybrid Algorithm | Best Mean Value and Standard Deviation Using the Previous Three Algorithms |
|---|---|---|
| Rec-01 | 1310.60 ± 17.66 | 1249.00 ± 0.00 |
| Rec-03 | 1139.80 ± 13.10 | 1111.00 ± 0.00 |
| Rec-05 | 1269.60 ± 11.28 | 1245.00 ± 0.00 |
| Rec-07 | 1623.90 ± 29.44 | 1579.00 ± 8.07 |
| Rec-09 | 1617.70 ± 18.60 | 1567.70 ± 16.54 |
| Rec-11 | 1457.20 ± 17.76 | 1440.80 ± 11.78 |
| Rec-13 | 2042.20 ± 21.90 | 1956.90 ± 2.28 |
| Rec-15 | 2047.60 ± 30.31 | 1974.30 ± 6.31 |
| Rec-17 | 2028.00 ± 13.72 | 1944.30 ± 11.62 |
| Car-01 | 7038.00 ± 0.00 | 7038.00 ± 0.00 |
| Car-02 | 7187.00 ± 66.41 | 7166.00 ± 0.00 |
| Car-03 | 7382.10 ± 27.89 | 7312.00 ± 0.00 |
| Car-04 | 8803.00 ± 0.00 | 8803.00 ± 0.00 |
| Car-05 | 7745.70 ± 18.37 | 7720.00 ± 0.00 |
| Car-06 | 8561.20 ± 71.27 | 8505.00 ± 0.00 |
| Car-07 | 6605.90 ± 25.60 | 6590.00 ± 0.00 |
| Car-08 | 8366.00 ± 0.00 | 8366.00 ± 0.00 |
| Hel-02 | 143.10 ± 1.66 | 137.00 ± 0.00 |
Table 8. Comparison of the NC found using the new hybrid algorithm and the previous three algorithms.

| Problem | NC Obtained Using the New Hybrid Algorithm | Best NC Obtained Using the Previous Three Algorithms |
|---|---|---|
| Rec-01 | 0 | 2 |
| Rec-03 | 0 | 8 |
| Rec-05 | 0 | 1 |
| Rec-07 | 0 | 10 |
| Rec-09 | 0 | 10 |
| Rec-11 | 0 | 10 |
| Rec-13 | 0 | 1 |
| Rec-15 | 0 | 2 |
| Rec-17 | 0 | 1 |
| Car-01 | 10 | 10 |
| Car-02 | 9 | 10 |
| Car-03 | 1 | 10 |
| Car-04 | 10 | 10 |
| Car-05 | 2 | 10 |
| Car-06 | 4 | 10 |
| Car-07 | 7 | 10 |
| Car-08 | 10 | 10 |
| Hel-02 | 0 | 10 |
Table 9. Comparison of the NO using the new hybrid algorithm and the previous three algorithms.

| Problem | NO Obtained Using the New Hybrid Algorithm | Best NO Obtained Using the Previous Three Algorithms |
|---|---|---|
| Rec-01 | 0.00 | 18.00 |
| Rec-03 | 0.00 | 4.20 |
| Rec-05 | 0.00 | 3.00 |
| Rec-07 | 0.00 | 4.00 |
| Rec-09 | 0.00 | 3.00 |
| Rec-11 | 0.00 | 3.00 |
| Rec-13 | 0.00 | 2.00 |
| Rec-15 | 0.00 | 3.00 |
| Rec-17 | 0.00 | 6.80 |
| Car-01 | 33.80 | 58.50 |
| Car-02 | 17.11 | 30.40 |
| Car-03 | 1.00 | 6.20 |
| Car-04 | 12.20 | 13.5 |
| Car-05 | 1.00 | 1.90 |
| Car-06 | 1.00 | 1.00 |
| Car-07 | 1.00 | 1.00 |
| Car-08 | 1.00 | 1.00 |
| Hel-02 | 0.00 | 6.30 |
Citation: Zou, P.; Rajora, M.; Liang, S.Y. Multimodal Optimization of Permutation Flow-Shop Scheduling Problems Using a Clustering-Genetic-Algorithm-Based Approach. Appl. Sci. 2021, 11, 3388. https://doi.org/10.3390/app11083388