# An Improved Shuffled Frog-Leaping Algorithm for Flexible Job Shop Scheduling Problem


## Abstract


## 1. Introduction

## 2. Problem Description

In the flexible job shop scheduling problem (FJSP), a set of machines M = {M_1, M_2, …, M_m} and a set of independent jobs J = {J_1, J_2, …, J_n} are given. Each job J_i consists of a sequence of operations O_ij, where O_ij represents the j-th operation of job i. All of the operations of each job must be performed one after another in a given order. Each operation O_ij can be processed on any machine among a subset M_ij ⊆ M of compatible machines. Based on the different conditions of resource restriction, the flexible job shop scheduling problem can be divided into the total flexible job-shop scheduling (T-FJSP) problem and the partial flexible job-shop scheduling (P-FJSP) problem. In T-FJSP, every operation of every job can be processed on any one of the machines; in P-FJSP, some operations can be processed on any machine, while certain operations can only be processed on a subset of the machines.

p_ijk represents the processing time of operation O_ij on machine k. If the operation O_ij cannot be processed on machine k, the processing time p_ijk is assumed to be ∞. Each operation must be completed without interruption once it starts. The objective of FJSP is to assign each operation to one machine that is capable of performing it and to determine the starting time of each operation in order to minimize the makespan. To describe the FJSP, Figure 1 gives an example with three jobs and five machines. In Figure 1, (0, 0, 0) means spending zero seconds on the first machine, zero seconds on the second one and zero seconds on the third one.
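To make the makespan objective concrete, the following is a minimal list-scheduling sketch in Python (our own illustration, not the paper's implementation; the data structures `proc_time` and `assignment` are hypothetical). It decodes an operation sequence and a machine assignment into a makespan by respecting both job precedence and machine availability:

```python
def makespan(op_sequence, assignment, proc_time, n_jobs, n_machines):
    """Decode an operation sequence plus machine assignment into a makespan."""
    job_ready = [0] * n_jobs        # completion time of each job's latest operation
    mach_ready = [0] * n_machines   # time at which each machine becomes free
    next_op = [0] * n_jobs          # index of the next unscheduled operation per job
    for job in op_sequence:         # a job-number list, as in the operation-section code
        op = next_op[job]
        next_op[job] += 1
        m = assignment[(job, op)]                       # chosen compatible machine
        start = max(job_ready[job], mach_ready[m])      # wait for job AND machine
        finish = start + proc_time[(job, op)][m]
        job_ready[job] = finish
        mach_ready[m] = finish
    return max(job_ready)

# Tiny two-job, two-machine instance (made up for illustration):
proc = {(0, 0): {0: 3, 1: 2}, (0, 1): {0: 2}, (1, 0): {1: 4}}
assign = {(0, 0): 1, (0, 1): 0, (1, 0): 1}
```

With the sequence [0, 1, 0], job 1's only operation must wait for machine 1 to finish job 0's first operation, giving a makespan of 6.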

The notation is as follows:

- h_j is the total number of operations of job j (j = 1, 2, …, n);
- O_jh is the h-th operation of job j (h = 1, 2, …, h_j);
- p_ijh is the processing time of the h-th operation of job j on machine i (i = 1, 2, …, m);
- s_jh is the start processing time of the h-th operation of job j;
- c_jh is the completion time of the h-th operation of job j;
- L is a given big integer;
- C_max is the maximum completion time (makespan).

## 3. Shuffled Frog-Leaping Algorithm

In SFLA, each frog represents a candidate solution, and the position of the i-th frog in a D-dimensional space is denoted F_i = {f_i1, f_i2, …, f_iD}. The algorithm first randomly generates S frogs as the initial population and sorts them in descending order according to the fitness of each frog. Then, the entire population is divided into m subgroups (memeplexes), each containing n frogs. From the sorted population, the first frog is assigned to the first subgroup, the second frog to the second subgroup, and so on until the m-th frog is assigned to the m-th subgroup; the (m + 1)-th frog is then assigned to the first subgroup again. This process is repeated until all frogs are distributed. In each subgroup, the frog with the best fitness and the one with the worst fitness are denoted as F_b and F_w, respectively, while in the total population, the frog with the best fitness is denoted as F_g. The main work of SFLA is to update the position of the worst-performing frog through an iterative operation in each memeplex. Its position is improved by learning from the best frog of the memeplex or of the whole population. In each memeplex, the new position of the worst frog is updated according to the following equations.

Here, d_i is the moving distance of the i-th frog, and Rand() is a random number between zero and one; Formula (5) updates the position of F_w, and d_max is the maximum step allowed. If a better solution is attained, the new position replaces the worst individual. Otherwise, F_g is used instead of F_b, and Formula (4) is recalculated. If a better solution still cannot be obtained, a new solution generated randomly replaces the worst individual. This is repeated for a predetermined number of iterations, which completes one round of local search within the subgroups. Then, all of the frogs are shuffled, re-ranked, and divided into subgroups again for the next round of local search.

**Step 1.** Initialization. According to the characteristics and scale of the problem, reasonable values of m and n are determined first.

**Step 2.** Produce a virtual population with F individuals (F = m × n). For a D-dimensional optimization problem, each individual of the population is a D-dimensional variable and represents a frog's current position. The fitness function is used to evaluate the quality of each individual's position.

**Step 3.** Sort the individuals of the total population in descending order of fitness.

**Step 4.** Divide the population into m subpopulations Y_1, Y_2, …, Y_m. Each subpopulation contains n frogs. For example, if m = 3, then the first frog is put into the first subpopulation, the second frog into the second subpopulation, the third frog into the third subpopulation, the fourth frog into the first subpopulation again, and so on.

**Step 5.** In each subpopulation, the positions of individuals are improved as the subpopulation evolves. The following steps describe the memetic evolution within a subpopulation.

**Step 5.0.** Set i_m = 0, where i_m counts the subpopulations, from zero to m. Set i_n = 0, where i_n counts the evolution iterations, from zero to N (the maximum number of evolution iterations in each subpopulation).

**Step 5.1.** i_m = i_m + 1.

**Step 5.2.** i_n = i_n + 1.

**Step 5.3.** Try to adjust the position of the worst frog. The moving distance of the worst frog is d_i = Rand() × (F_b − F_w). After moving, the new position of the worst frog is F_w = F_w (the current position) + d_i.

**Step 5.4.** If Step 5.3 produces a better solution, replace the worst frog with the frog in the new position; otherwise, use F_g instead of F_b and try again.

**Step 5.5.** If a better frog still cannot be generated by the above method, randomly generate a new individual to replace the worst frog F_w.

**Step 5.6.** If i_n < N, go to Step 5.2.

**Step 5.7.** If i_m < m, go to Step 5.1.

**Step 6.** Implement the shuffling operation.

Combine Y_1, Y_2, …, Y_m into X, namely X = {Y_k, k = 1, 2, …, m}. Sort X in descending order again, and update the best frog F_g of the whole population.

**Step 7.** Check the termination conditions. If the iterative termination conditions are met, then stop. Otherwise, go to Step 4.
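The steps above can be sketched as a compact continuous-space SFLA in Python (our own sketch, with an assumed toy objective of minimizing the sum of squares; variable names, bounds and parameter defaults are ours):

```python
import random

def sfla(dim=2, m=3, n=5, n_evo=10, n_shuffle=20, d_max=1.0):
    """Minimal SFLA sketch: round-robin memeplex division, worst-frog
    updates toward F_b then F_g, random reset on failure, and shuffling."""
    fit = lambda x: sum(v * v for v in x)               # lower is better
    frogs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(m * n)]
    for _ in range(n_shuffle):
        frogs.sort(key=fit)                             # rank frogs best to worst
        best_g = frogs[0]                               # global best F_g
        memeplexes = [frogs[k::m] for k in range(m)]    # round-robin division (Step 4)
        for plex in memeplexes:
            for _ in range(n_evo):
                plex.sort(key=fit)
                f_b, f_w = plex[0], plex[-1]
                for leader in (f_b, best_g):            # try F_b first, then F_g
                    step = [max(-d_max, min(d_max, random.random() * (lb - w)))
                            for lb, w in zip(leader, f_w)]
                    cand = [w + s for w, s in zip(f_w, step)]
                    if fit(cand) < fit(f_w):
                        plex[-1] = cand                 # keep the improved frog
                        break
                else:                                   # no improvement: random reset
                    plex[-1] = [random.uniform(-5, 5) for _ in range(dim)]
        frogs = [f for plex in memeplexes for f in plex]  # shuffle memeplexes back
    return min(frogs, key=fit)
```

Because only the worst frog of each memeplex is ever replaced, the best frog found so far survives every shuffle, so the global best fitness never worsens.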

## 4. Improved SFLA for FJSP

#### 4.1 Generation of Solutions

The operation O_11 can be processed on five machines {M_1, M_2, M_3, M_4, M_5}, so five is placed in the first square. The operation O_12 can be processed on two machines {M_2, M_4}, and in the same way, the rest of the machine section is coded. In the operation section, the code is given by the time sequence of all of the operations of all jobs. For example, O_21 is the first operation; thus, two, the job number, is placed in the first square. As shown in Figure 2, the code of the operation section is 2321321, which represents the operation sequence O_21-O_31-O_22-O_11-O_32-O_23-O_12.

#### 4.2 Local Search

S_max denotes the maximum step size by which a frog individual is allowed to change. Set F_w = {1, 3, 5, 2, 4} and F_b = {2, 1, 5, 3, 4}, with the maximum allowed step size S_max = 3 and rand = 0.5. Therefore, F_g(1) = 1 + min{int[0.5 × (2 − 1)], 3} = 1; F_g(2) = 3 + min{int[0.5 × (1 − 3)], 3} = 2, and so on. Finally, the new solution F_g can be attained as {1, 2, 5, 4, 3}.
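The element-wise step of this example can be sketched as follows (our sketch, not the paper's code). Note that the raw element-wise result {1, 2, 5, 2, 4} is not a permutation, so the final solution {1, 2, 5, 4, 3} implies an additional repair step that the text does not detail:

```python
def update_position(f_w, f_b, rand, s_max):
    """One discrete SFLA move: each element of the worst frog F_w steps
    toward the best frog F_b, with the step truncated to an integer and
    clamped to the maximum allowed step size s_max."""
    new = []
    for w, b in zip(f_w, f_b):
        step = int(rand * (b - w))            # int() truncates toward zero
        step = max(-s_max, min(s_max, step))  # limit |step| to s_max
        new.append(w + step)
    return new
```

For the example above, the first two elements come out as 1 and 2, matching the hand calculation in the text.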

#### 4.3 Improvement Strategies

#### 4.3.1 Adjustment Factor

Suppose a solution is U = (U_1, U_2, …, U_d) and the operation set is (1, 2, …, d). An adjustment factor is defined as TO(i_1, i_2) (i_1, i_2 = 1, 2, …, d), under which the operation U_i1 is moved to just before U_i2. U′ = U + TO(i_1, i_2) is the new solution obtained from U by applying the adjustment factor. For example, if U = (1 3 5 2 4) and the adjustment factor is TO(4, 2), then U′ = U + TO(4, 2) = (1 2 3 5 4).

#### 4.3.2 Adjustment Sequence

An adjustment sequence is an ordered list of adjustment factors ST = (TO_1, TO_2, …, TO_N), where TO_1, TO_2, …, TO_N are adjustment factors. Let U_A and U_B be two different solutions. The adjustment sequence ST(U_B Θ U_A) transforms U_A into U_B, namely U_B = U_A + ST(U_B Θ U_A) = U_A + (TO_1, TO_2, …, TO_N) = [(U_A + TO_1) + TO_2] + … + TO_N. For example, let U_A = (1 3 5 2 4) and U_B = (3 1 4 2 5). We need to construct an adjustment sequence ST(U_B Θ U_A) such that U_A + ST(U_B Θ U_A) = U_B. Since U_B(1) = U_A(2) = 3, the first adjustment factor is TO_1(2, 1), and U_A^1 = U_A + TO_1 = (3 1 5 2 4). Since U_B(3) = U_A^1(5) = 4, the second adjustment factor is TO_2(5, 3), and U_A^2 = U_A^1 + TO_2 = (3 1 4 5 2), and so on. Finally, we get ST(U_B Θ U_A) = (TO_1(2, 1), TO_2(5, 3), TO_3(5, 4)).
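The construction in this example can be sketched as a greedy left-to-right procedure (our naming; the helper reimplements the TO move of Section 4.3.1 so the snippet is self-contained):

```python
def apply_to(u, i1, i2):
    """The TO(i1, i2) adjustment factor of Section 4.3.1 (1-indexed)."""
    u = list(u)
    moved = u.pop(i1 - 1)
    u.insert(i2 - 1 if i1 > i2 else i2 - 2, moved)
    return u

def adjustment_sequence(u_a, u_b):
    """Build ST(U_B Θ U_A): fix positions from left to right; whenever the
    current solution disagrees with U_B at a position, move the needed
    element there with one adjustment factor."""
    cur, seq = list(u_a), []
    for pos in range(len(u_b)):
        if cur[pos] != u_b[pos]:
            src = cur.index(u_b[pos]) + 1     # current position of the needed element
            seq.append((src, pos + 1))
            cur = apply_to(cur, src, pos + 1)
    return seq
```

For U_A = (1 3 5 2 4) and U_B = (3 1 4 2 5), this yields exactly the sequence (TO_1(2, 1), TO_2(5, 3), TO_3(5, 4)) from the example, and applying it step by step recovers U_B.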

#### 4.3.3 Extremal Optimization

Extremal optimization (EO) works on a single individual S = (x_1, x_2, …, x_d). Each component x_i of the current individual S is considered to be a species and is assigned a fitness value λ_i. EO has only a mutation operator, which is applied successively to the component with the worst species fitness. In this way, the individual updates its components and evolves toward the optimal solution. The process requires a suitable representation that allows the solution components to be assigned a quality measure (i.e., fitness). This approach differs from holistic ones, such as evolutionary algorithms, which assign a single fitness to an entire solution based on its collective evaluation against an objective function. The EO algorithm is as follows [27].

**Step 1.** Randomly generate an individual S = (x_1, x_2, …, x_d). Set the optimal solution S_best = S.

**Step 2.** For the current S:

- Evaluate the fitness λ_i of each component x_i, i = 1, 2, …, d, where d is the dimension of the solution space;
- Find j satisfying λ_j ≤ λ_i for all i, x_j being the worst species;
- Choose S′ ∈ N(S), such that x_j must change its state by the mutation operator, where N(S) is the neighborhood of S;
- Accept S = S′ unconditionally;
- If the current cost function value is less than the minimum cost function value, i.e., C(S) < C(S_best), then set S_best = S.

**Step 3.** Repeat Step 2 as long as desired.

**Step 4.** Return S_best and C(S_best).
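The loop above can be sketched generically in Python (our skeleton; the toy cost, component fitness and mutation below are ours, chosen only to make the example runnable):

```python
def extremal_optimization(cost, component_fitness, mutate, s0, iters=50):
    """EO skeleton: evaluate lambda_i per component, mutate the worst
    component, accept the mutant unconditionally, and track the best S."""
    s = list(s0)
    best, best_cost = list(s), cost(s)
    for _ in range(iters):
        lam = [component_fitness(s, j) for j in range(len(s))]
        worst = min(range(len(s)), key=lam.__getitem__)  # j with lambda_j <= lambda_i
        s = mutate(s, worst)                             # accept S = S' unconditionally
        if cost(s) < best_cost:                          # C(S) < C(S_best)
            best, best_cost = list(s), cost(s)
    return best, best_cost

# Toy instance: minimize the number of ones in a bit vector.
best, c = extremal_optimization(
    cost=sum,
    component_fitness=lambda s, j: 1 - s[j],             # a set bit is a "bad" species
    mutate=lambda s, j: s[:j] + [1 - s[j]] + s[j + 1:],  # flip the worst bit
    s0=[1, 0, 1, 1],
)
# best == [0, 0, 0, 0], c == 0: the worst bit is flipped until none remain,
# and S_best retains the all-zero solution even as later mutations worsen S.
```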

#### 4.4 Update Strategy of the Frog Individual

Len(ST(U_B Θ U_W)) denotes the number of adjustment factors in the adjustment sequence ST(U_B Θ U_W); l denotes the number of adjustment factors chosen from ST(U_B Θ U_W) to update U_W; and s denotes the truncated adjustment sequence that is actually applied to U_W. Figure 3 shows an example of the update strategy of the frog individual from U_W toward U_B. From Figure 3, it can be seen that Len(ST(U_B Θ U_W)) = 5, l = 3 and s = (TO_1, TO_2, TO_3).

## 5. Numerical Experiments and Discussion

_c = 0.4. In order to conduct the experiments, we implemented the algorithm in C++ and ran it on a PC with a 2.0 GHz Pentium processor and 512 MB of RAM. The results of the ISFLA on the instances are shown in Table 1 and compared with the genetic algorithm (GA) [28], tabu search (TS) [29], the artificial immune algorithm (AIA) [20] and ant colony optimization (ACO) [2]. The numbers in bold are the best among the five approaches. We used 10 standard cases (MK01–MK10) proposed in [27] to test our model.

## 6. Conclusions

## References

- Garey, M.R.; Johnson, D.S.; Sethi, R. The complexity of flowshop and jobshop scheduling. Math. Oper. Res.
**1976**, 1, 117–129. [Google Scholar] - Yao, B.Z.; Hu, P.; Lu, X.H.; Gao, J.J.; Zhang, M.H. Transit network design based on travel time reliability. Transp. Res. Part C
**2014a**, 43, 233–248. [Google Scholar] - Yao, B.Z.; Yao, J.B.; Zhang, M.H.; Yu, L. Improved support vector machine regression in multi-step-ahead prediction for rock displacement surrounding a tunnel. Scientia Iranica.
**2014b**, 21, 1309–1316. [Google Scholar] - Yao, B.Z.; Hu, P.; Zhang, M.H.; Jin, M.Q. A Support Vector Machine with the Tabu Search Algorithm For Freeway Incident Detection. Int. J. Appl. Math. Comput. Sci.
**2014c**, 24, 397–404. [Google Scholar] - Yu, B.; Yang, Z.Z.; Yao, B.Z. An Improved Ant Colony Optimization for Vehicle Routing Problem. Eur. J. Oper. Res.
**2009**, 196, 171–176. [Google Scholar] - Yu, B.; Yang, Z.Z. An ant colony optimization model: The period vehicle routing problem with time windows. Transp. Res. Part E
**2011**, 47, 166–181. [Google Scholar] - Yu, B.; Yang, Z.Z.; Li, S. Real-Time Partway Deadheading Strategy Based on Transit Service Reliability Assessment. Transp. Res. Part A
**2012a**, 46, 1265–1279. [Google Scholar] - Yu, B.; Yang, Z.Z.; Jin, P.H.; Wu, S.H.; Yao, B.Z. Transit route network design-maximizing direct and transfer demand density. Transp. Res. Part C
**2012b**, 22, 58–75. [Google Scholar] - Yao, B.Z.; Yang, C.Y.; Yao, J.B.; Hu, J.J.; Sun, J. An Improved Ant Colony Optimization for Flexible Job Shop Scheduling Problems. Adv. Sci. Lett.
**2011**, 4, 2127–2131. [Google Scholar] - Bak, P.; Sneppen, K. Punctuated equilibrium and criticality in a simple model of evolution. Phys. Rev. Lett.
**1993**, 71, 4083–4086. [Google Scholar] - Fattahi, P.; Mehrabad, M.S.; Jolai, F. Mathematical Modeling and Heuristic Approaches to Flexible Job Shop Scheduling Problems. J. Intell. Manuf.
**2007**, 18, 331–342. [Google Scholar] - Gao, L.; Sun, Y.; Gen, M. A Hybrid Genetic and Variable Neighborhood Descent Algorithm for Flexible Job Shop Scheduling Problems. Comput. Oper. Res.
**2008**, 35, 2892–2907. [Google Scholar] - Eusuff, M.; Lansey, K. Optimization of Water Distribution Network Design Using the Shuffled Frog Leaping Algorithm. J. Water Resour. Plan. Manag.
**2003**, 129, 10–25. [Google Scholar] - Kennedy, J.; Eberhart, R. Particle Swarm Optimization, Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia; 1995; pp. 1942–1948.
- Alireza, R.V.; Mostafa, D.; Hamed, R.; Ehsan, S. A novel hybrid multi-objective shuffled frog-leaping algorithm for a bi-criteria permutation flow shop scheduling problem. Int. J. Adv. Manuf. Technol.
**2009**, 41, 1227–1239. [Google Scholar] - Bhaduri, A. Color image segmentation using clonal selection-based shuffled frog leaping algorithm, Proceedings of the ARTCom 2009—International Conference on Advances in Recent Technologies in Communication and Computing, Kottayam, India, 27–28 October 2009; pp. 517–520.
- Babak, A.; Mohammad, F.; Ali, M. Application of shuffled frog-leaping algorithm on clustering. Int. J. Adv. Manuf. Technol.
**2009**, 45, 199–209. [Google Scholar] - Li, X.; Luo, J.P.; Chen, M.R.; Wang, N. An improved shuffled frog-leaping algorithm with extremal optimization for continuous optimization. Inf. Sci.
**2012**, 192, 143–151. [Google Scholar] - Luo, J.P.; Li, X.; Chen, M.R. Improved Shuffled Frog Leaping Algorithm for Solving CVRP. J. Electr. Inf. Technol.
**2011**, 33, 429–434. [Google Scholar] - Boettcher, S.; Percus, A.G. Extremal Optimization: Methods derived from Co-Evolution, Proceedings of the Genetic and Evolutionary Computation Conference, New York, NY, USA, 13 April 1999; pp. 101–106.
- Bagheri, A.; Zandieh, M.; Mahdavi, I.; Yazdani, M. An Artificial Immune Algorithm for The Flexible Job-Shop Scheduling Problem. Future Gener. Comput. Syst.
**2010**, 26, 533–541. [Google Scholar] - Boettcher, S. Extremal Optimization for the Sherrington-Kirkpatrick Spin Glass. Eur. Phys. J. B
**2005**, 46, 501–505. [Google Scholar] - Chen, M.R.; Lu, Y.Z. A novel elitist multiobjective optimization algorithm: Multiobjective extremal optimization. Eur. J. Oper. Res.
**2008**, 188, 637–651. [Google Scholar] - Alireza, R.V.A.; Ali, H.M. A hybrid multi-object shuffled frog leaping algorithm for a mixed-model assembly line sequencing problem. Comput. Ind. Eng.
**2007**, 53, 642–666. [Google Scholar] - Luo, X.H.; Yang, Y.; Li, X. Solving TSP with shuffled frog-leaping algorithm, Proceedings of the Eighth International Conference on Intelligent Systems Design and Applications, Kaohsiung, Taiwan, 26–28 November 2008; pp. 228–232.
- Wang, C.R.; Zhang, J.W.; Yang, J.; Hu, C. A modified particle swarm optimization algorithm and its application for solving traveling salesman problem, Proceedings of the ICNN&B ′05. International Conference on Neural Networks and Brain, Beijing, China, 13–15 October 2005; pp. 689–694.
- Brandimarte, P. Routing and Scheduling in a Flexible Job Shop by Taboo Search. Ann. Oper. Res.
**1993**, 41, 157–183. [Google Scholar] - Pezzella, F.; Morganti, G.; Ciaschetti, G. A Genetic Algorithm for the Flexible Job-Shop Scheduling Problem. Comput. Oper. Res.
**2008**, 35, 3202–3212. [Google Scholar] - Mastrolilli, M.; Gambardella, L.M. Effective Neighbourhood Functions for the Flexible Job Shop Problem. J. Sched.
**2000**, 3, 3–20. [Google Scholar]

**Table 1.**Performance comparison between improved shuffled frog-leaping algorithm (ISFLA) and other algorithms. The numbers in bold are the best among the five approaches. TS, tabu search; AIA, artificial immune algorithm.

| Problem | n × m | f | (LB, UB) | GA | TS | AIA | ACO | ISFLA |
|---|---|---|---|---|---|---|---|---|
| MK01 | 10 × 6 | 2.09 | (36, 42) | **40** | **40** | **40** | **40** | **40** |
| MK02 | 10 × 6 | 4.1 | (24, 32) | **26** | **26** | **26** | **26** | **26** |
| MK03 | 15 × 8 | 3.01 | (204, 211) | **204** | **204** | **204** | **204** | **204** |
| MK04 | 15 × 8 | 1.91 | (48, 81) | **60** | **60** | **60** | **60** | **60** |
| MK05 | 15 × 4 | 1.71 | (168, 186) | **173** | **173** | **173** | **173** | **173** |
| MK06 | 10 × 15 | 3.27 | (33, 86) | 63 | **58** | 63 | **58** | **58** |
| MK07 | 20 × 5 | 2.83 | (133, 157) | **139** | 144 | 140 | 140 | **139** |
| MK08 | 20 × 10 | 1.43 | 523 | **523** | **523** | **523** | **523** | **523** |
| MK09 | 20 × 10 | 2.53 | (299, 369) | 311 | **307** | 312 | **307** | **307** |
| MK10 | 20 × 10 | 2.98 | (165, 296) | 212 | **198** | 214 | **198** | **198** |

**Table 2.**Computational results of the MK09 problem of several methods. AF, adjustment factor; AO, adjustment order; EO, extremal optimization.

| Type | Average value | Optimal value | Relative error/% | Average computing time |
|---|---|---|---|---|
| SFLA | 332.98 | 325 | 9.64 | 51 |
| SFLA + AF | 317.17 | 312 | 2.77 | 27 |
| SFLA + AO | 316.42 | 312 | 2.69 | 29 |
| SFLA + EO | 313.83 | 307 | 2.09 | 38 |
| ISFLA | 310.54 | 307 | 0.42 | 32 |

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Lu, K.; Ting, L.; Keming, W.; Hanbing, Z.; Makoto, T.; Bin, Y.
An Improved Shuffled Frog-Leaping Algorithm for Flexible Job Shop Scheduling Problem. *Algorithms* **2015**, *8*, 19-31.
https://doi.org/10.3390/a8010019
