Article

Improved Fruit Fly Algorithm to Solve No-Idle Permutation Flow Shop Scheduling Problem

School of Mechanical and Transportation Engineering, Hunan University, Changsha 410082, China
* Author to whom correspondence should be addressed.
Processes 2025, 13(2), 476; https://doi.org/10.3390/pr13020476
Submission received: 29 December 2024 / Revised: 16 January 2025 / Accepted: 19 January 2025 / Published: 10 February 2025
(This article belongs to the Section AI-Enabled Process Engineering)

Abstract

The no-idle permutation flow shop scheduling problem (NIPFSP) is a topic of current interest that arises widely in practical production settings in industries such as aviation and electronics. However, existing methods may suffer from excessive computational time or insufficient solution quality when solving large-scale NIPFSP instances. In this paper, a discrete fruit fly optimization algorithm (DFFO) is proposed for solving the NIPFSP. DFFO consists of three phases, i.e., a smell search phase based on variable neighborhoods, a visual search phase based on a probabilistic model, and a local search phase. In the smell search phase, multiple perturbation operators are constructed to further expand the search range of the solution; in the visual search phase, a probabilistic model built from a set of elite individuals generates a series of position sequences, and the concept of a shared (common) sequence is adopted to generate new individuals from the position sequences and the shared sequence. In the local search phase, the best individual is refined with the help of an iterative greedy algorithm, directing the fruit flies toward more promising regions. Finally, the test results show that DFFO performs at least 28.1% better than the compared algorithms, which verifies that DFFO is an efficient method for solving the NIPFSP.

1. Introduction

Production scheduling plays a crucial role in achieving advanced manufacturing, and the modeling and optimization of production scheduling are important topics in the field of systems engineering. This paper investigates the no-idle permutation flow shop scheduling problem (NIPFSP), which is an extension of the classical permutation flow shop scheduling problem. In NIPFSP, each machine must operate continuously from the beginning until the completion of the final job without any interruption [1]. The application areas of NIPFSP include casting production and fiberglass processing, among others [2,3].
Implementing no-idle shop scheduling can effectively reduce idle energy consumption of machine tools, making it an effective approach for energy conservation and emission reduction in the machining industry. Given the engineering and academic significance of NIPFSP, researchers at home and abroad have conducted a series of studies on this topic. Yin et al. [4] addressed NIPFSP with the objective of minimizing makespan by employing the fruit fly optimization algorithm as the main framework and incorporating the immune algorithm’s stimulation mechanism to enhance the algorithm’s search capabilities. Yan et al. [5], inspired by the concepts of variable neighborhood search and iterative greedy algorithms, proposed a new hybrid bird swarm algorithm to minimize the maximum completion time for NIPFSP. Experimental tests demonstrated the effectiveness of the algorithm. Zhao et al. [6] investigated NIPFSP with the objective of minimizing total tardiness under release time constraints. They designed an iterative greedy algorithm to solve the problem, and extensive experiments validated the improved algorithm’s effectiveness. Zhao et al. [7] studied a multi-objective hybrid NIPFSP with the objectives of minimizing maximum completion time and minimizing maximum tardiness. They proposed a multi-objective discrete sine optimization algorithm. Pei et al. [8], targeting NIPFSP with the goal of minimizing total tardiness, proposed a hybrid evolutionary algorithm based on the critical block structure.
The fruit fly optimization algorithm (FFO), proposed by Pan [9] based on the foraging behavior of fruit flies, is an evolutionary algorithm. Wang et al. [10] compared FFO with more than ten metaheuristic algorithms, including artificial bee colony, ant colony optimization, and fish swarm algorithms, on benchmark instances. The results verified the strong optimization capability of FFO, which has attracted widespread attention. FFO has been widely applied across multiple domains, including electric load prediction and parameter estimation. Ibrahim et al. [11] implemented a wind-driven FFO for identifying parameters in photovoltaic cell models. Similarly, Saminathan and Thangavel [12] integrated the whale optimization algorithm with FFO to tackle energy-saving challenges involving delays. Hu et al. [13] applied a combination of an improved FFO and support vector regression to forecast energy consumption on datasets from three long-distance pipelines in China. In recent years, the straightforward design of FFO and its parallel search framework have facilitated its integration with various problem-specific search mechanisms, driving its redevelopment and achieving significant results in production scheduling problems. Zhu et al. [14] proposed a discrete knowledge-guided learning FFO to address no-wait distributed flow shop scheduling problems. Guo et al. [15] designed an FFO based on a differential flight strategy to solve distributed flow shop scheduling problems with sequence-dependent setup times. Furthermore, Guo et al. [16] introduced an FFO to minimize the total flow time in distributed flow shop scheduling problems. This algorithm employed an initialization method that considered both population quality and diversity, and its effectiveness was validated through experimental testing.
In summary, metaheuristic algorithms possess general applicability, making them suitable for various optimization problems. However, this generality also introduces certain limitations, as they often overlook the unique structures and characteristics of specific problems. For flow shop scheduling problems, such algorithms frequently fail to design operators tailored to problem-specific features, such as job processing sequences. Consequently, their search efficiency and solution quality may be suboptimal. This paper proposes a discrete fruit fly optimization algorithm (DFFO) to solve the NIPFSP with the objective of minimizing makespan. The main contributions of this paper are as follows.
(1)
Using the characteristics of specific problems, multiple perturbation operators are designed to enhance the global search ability of the algorithm.
(2)
A probabilistic model based on elite subsets is constructed, and the concept of common sequence is introduced. The evolution of fruit flies is achieved through location sequences and common sequences.
(3)
The iterative greedy algorithm is used to conduct local searches for the best individuals and guide the fruit fly population to move to a more promising area.
(4)
Finally, the experiment verifies that DFFO is an effective method to solve NIPFSP.
The rest of this article is organized as follows. The objectives and constraints of the NIPFSP are set out in Section 2. Section 3 describes the specific steps that DFFO takes to address NIPFSP. Section 4 presents numerical analysis and comparison. Section 5 summarizes this paper and provides prospects for further research.

2. Problem Description

The NIPFSP can be described as follows [5]: A set of n jobs J = {J1, J2, ..., Jn} needs to be processed on m machines M = {M1, M2, ..., Mm} following the same routing sequence. Each job consists of m operations, and the i-th operation can only be performed on machine Mi. Notably, there must be no idle time between two consecutive jobs on the same machine. Once an operation starts, it cannot be interrupted, and each machine can process at most one job at a time. This study aims to minimize the maximum completion time (makespan) C_max. The meanings of the symbols used in this paper are shown in Table 1.
The maximum completion time C max is calculated as follows:
G(π_q(1), k, k+1) = P(π(1), k+1),  k = 1, 2, ..., m − 1        (1)
G(π_q(j), k, k+1) = max{G(π_q(j−1), k, k+1) − P(π(j), k), 0} + P(π(j), k+1),  j = 2, 3, ..., n, k = 1, 2, ..., m − 1        (2)
C_max = Σ_{k=1}^{m−1} G(π_q(n), k, k+1) + Σ_{j=1}^{n} P(π(j), 1)        (3)
Equation (1) gives, for the first job, the difference between its completion times on machines k and k + 1. Equation (2) gives the difference between the completion times of job π_q(j) on machines k and k + 1. Equation (3) gives the makespan of the NIPFSP.
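For concreteness, the recursion in Equations (1)–(3) can be implemented directly; the following Python sketch is illustrative (the function name and the processing-time layout P[job][machine] are assumptions, not notation from the paper).

```python
def nipfsp_makespan(perm, P):
    """Makespan of a no-idle permutation flow shop schedule.

    perm: job sequence (0-indexed job ids); P: processing times, P[job][machine].
    G[k] is the gap between completion times on machines k and k+1
    after scheduling the current partial sequence (Equations (1)-(2)).
    """
    m = len(P[perm[0]])
    # Equation (1): gaps induced by the first job
    G = [P[perm[0]][k + 1] for k in range(m - 1)]
    # Equation (2): update each gap job by job
    for job in perm[1:]:
        for k in range(m - 1):
            G[k] = max(G[k] - P[job][k], 0) + P[job][k + 1]
    # Equation (3): sum of final gaps plus total processing time on machine 1
    return sum(G) + sum(P[job][0] for job in perm)
```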

3. DFFO Algorithm

This section provides a detailed description of the DFFO, which comprises four main phases: population initialization, smell search based on variable neighborhood search, visual search using a probability model, and a local search phase to refine the current best solution. Figure 1 presents the flowchart of the DFFO algorithm.

3.1. Population Initialization

This paper adopts a job sequence-based encoding method, where each fruit fly represents a complete job sequence, and a feasible solution π = { π ( 1 ) , π ( 2 ) , , π ( n ) } corresponds to a scheduling plan. A high-quality initial population can facilitate rapid optimization in subsequent sub-populations, thereby improving the algorithm’s performance. The NEH (Nawaz–Enscore–Ham) heuristic [17] is a widely used method for obtaining good solutions quickly. Therefore, this paper combines two initialization strategies: 50% of the individuals are generated using the NEH heuristic, and the other 50% are generated using a random initialization approach.
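The sketch below illustrates this hybrid initialization under stated assumptions: since a single NEH run is deterministic, the NEH half of the population is taken here as the NEH solution plus lightly perturbed copies, which is an assumption rather than a detail given in the paper; eval_fn is a makespan evaluation function such as the one sketched in Section 2.

```python
import random

def neh(P, eval_fn):
    """Classical NEH heuristic [17]: order jobs by non-increasing total
    processing time, then insert each job at its best position."""
    jobs = sorted(range(len(P)), key=lambda j: -sum(P[j]))
    seq = [jobs[0]]
    for job in jobs[1:]:
        candidates = [seq[:p] + [job] + seq[p:] for p in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: eval_fn(s, P))
    return seq

def init_population(P, size, eval_fn):
    """Half NEH-based individuals (the NEH solution plus lightly perturbed
    copies, an assumption since the paper does not detail this), half
    random permutations."""
    n = len(P)
    base, pop = neh(P, eval_fn), []
    for i in range(size // 2):
        ind = base[:]
        if i > 0:                      # keep one pure NEH individual
            a, b = random.sample(range(n), 2)
            ind[a], ind[b] = ind[b], ind[a]
        pop.append(ind)
    while len(pop) < size:
        ind = list(range(n))
        random.shuffle(ind)
        pop.append(ind)
    return pop
```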

3.2. Smell Search Phase Based on Variable Neighborhoods

The smell search phase represents the evolutionary phase of fruit flies. According to the literature [18], no single neighborhood structure is universally applicable to solve all problems. In the DFFO algorithm, a set of neighborhood structures is specifically designed based on the unique characteristics of NIPFSP. This neighborhood structure set is used to develop various local search operators, including swap, double swap, bundle insertion, and reverse-block insertion. The neighborhood structures are described as follows:
S1 (swap): randomly swaps the positions of two different jobs.
S2 (double swap): performs two consecutive swap operations.
S3 (bundle insertion): randomly removes two jobs from the solution and inserts them as a bundle into all possible positions in the remaining sequence.
S4 (reverse-block insertion): randomly selects multiple jobs from the solution, bundles them, shuffles their order, and inserts the block into all possible positions in the remaining sequence.
The DFFO algorithm employs a variable neighborhood descent (VND) strategy [5] to explore potential search regions. Each individual sequentially applies S1 through S4. If a neighborhood solution improves upon the previous solution, it replaces the latter. To illustrate the process clearly, a detailed example is presented in Figure 2.
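A compact sketch of the four neighborhood operators and the sequential VND pass is given below; the block size used in S4 and the choice to keep the best reinsertion position in S3 and S4 are illustrative assumptions.

```python
import random

def s1_swap(seq):
    """S1: swap two randomly chosen jobs."""
    a, b = random.sample(range(len(seq)), 2)
    seq = seq[:]
    seq[a], seq[b] = seq[b], seq[a]
    return seq

def s2_double_swap(seq):
    """S2: two consecutive swap operations."""
    return s1_swap(s1_swap(seq))

def s3_bundle_insert(seq, P, eval_fn):
    """S3: remove two random jobs and reinsert them as a bundle, trying
    all positions of the remaining sequence and keeping the best one."""
    a, b = random.sample(range(len(seq)), 2)
    bundle = [seq[a], seq[b]]
    rest = [x for i, x in enumerate(seq) if i not in (a, b)]
    cands = [rest[:p] + bundle + rest[p:] for p in range(len(rest) + 1)]
    return min(cands, key=lambda s: eval_fn(s, P))

def s4_reverse_block_insert(seq, P, eval_fn, block=3):
    """S4: remove several random jobs, shuffle them as a block, and try
    all reinsertion positions (the block size of 3 is an assumption)."""
    idx = sorted(random.sample(range(len(seq)), block))
    blk = [seq[i] for i in idx]
    random.shuffle(blk)
    rest = [x for i, x in enumerate(seq) if i not in idx]
    cands = [rest[:p] + blk + rest[p:] for p in range(len(rest) + 1)]
    return min(cands, key=lambda s: eval_fn(s, P))

def smell_search_vnd(seq, P, eval_fn):
    """Sequentially apply S1-S4 to an individual, keeping a neighbour only
    when it improves the incumbent (the VND pass described above)."""
    neighbourhoods = [s1_swap, s2_double_swap,
                      lambda s: s3_bundle_insert(s, P, eval_fn),
                      lambda s: s4_reverse_block_insert(s, P, eval_fn)]
    best, best_c = seq, eval_fn(seq, P)
    for move in neighbourhoods:
        cand = move(best)
        cand_c = eval_fn(cand, P)
        if cand_c < best_c:
            best, best_c = cand, cand_c
    return best
```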

3.3. Visual Search Phase Based on a Probability Model

In traditional FFO algorithms, there is only a single central fruit fly, and all fruit flies fly towards it during the visual search phase. However, this approach provides limited information and may lead to suboptimal algorithm performance [16]. To address this, this study extends the single central fruit fly to multiple elite individuals. By collecting feature information from these elite fruit flies, a probability model is established. Fruit flies evolve by continuously learning from the probability model. In summary, this paper proposes a visual search phase based on a probability model, as described by Equation (4).
π_new = π_old ⊗ {R(p_πcenter) ⊕ π_com}        (4)
where π_new and π_old represent the solutions after and before the update, respectively. π_com is the common sequence shared by the elite individuals. ⊗ denotes the crossover operation. R(p_πcenter) is the set of positions for all jobs generated using the probability model. R(p_πcenter) ⊕ π_com denotes the new sequence obtained by inserting the jobs of π_com at the positions generated by R(p_πcenter). The crossover method employs partially matched crossover (PMX); a generic PMX sketch is given after the worked example below. The detailed steps of the visual search phase based on the probability model are as follows:
(1) Construction of the probability model. The current population is sorted based on fitness values, and the top λ × PS individuals are selected to form the elite population, where PS is the population size and λ ∈ (0, 1) controls the size of the elite population. Based on the elite population, a probability model is constructed. Let P(t) denote the probability matrix for the t-th generation, and p_{i,j} the probability of job i appearing at position j. The calculation of p_{i,j} is as follows:
p_{i,j}(t) = (1 − α)[(1 − β) p_{i,j}(t − 1) + (β / (j × PS × λ)) Σ_{k=1}^{PS×λ} I_{i,j}^k] + (α / j) I_{i,j}^{best}
I_{i,j}^k = 1 if the i-th job in the k-th individual appears at or before position j, and 0 otherwise.
I_{i,j}^{best} = 1 if the i-th job in the best individual appears at or before position j, and 0 otherwise.
where p_{i,j}(0) = 1/n, and I_{i,j}^k and I_{i,j}^{best} are the indicator functions for the k-th individual and the best individual in the fruit fly population, respectively. α, β ∈ (0, 1) are parameters, set here to α = β = 0.1. The best individual is the one with the smallest makespan in the population.
(2) Calculation of the common sequence. The common sequence reflects the frequency with which a job appears in a specific position. Based on this concept, the DFFO algorithm introduces the idea of a common sequence, facilitating the evolution of fruit flies through interactions between position sequences and the common sequence. In this phase, the Borda method is employed to calculate the total score (votes) for each job appearing in each position within the common sequence. Jobs are then sequentially assigned to positions based on their scores. If multiple jobs have the same score, one is randomly selected from those with equal scores to be placed in that position.
(3) Model sampling. Reference [19] proposed the following sampling method: Use the roulette wheel method to select a job i and place it in position j in the sequence. Normalize the probability matrix P. Repeat the above process until a complete sequence is generated. However, during the normalization of the probability matrix P, the entire matrix needs to be traversed continuously, resulting in high time complexity. To address this, this study adopts a job-centric approach. The roulette wheel method is applied to select the position of each job in the common sequence. Then, based on the generated position information, all jobs are inserted into a new sequence.
First, for each job in π_com, generate a random number r between 0 and 1, and assign to the current job the position with the smallest cumulative probability greater than r. Let R_p denote the generated position sequence and π the newly generated sequence. For each job π_com(i) in π_com, if position R_p(i) of π is vacant, set π(R_p(i)) = π_com(i). Otherwise, find the nearest vacant position after R_p(i) and insert π_com(i) there; if no vacant position exists after R_p(i), insert it into the nearest vacant position before R_p(i).
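Before the worked example below, the following sketch assembles the three steps: updating the probability matrix, building the common sequence, and sampling and inserting positions. The Borda scoring shown (average job index per position over the whole population, with jobs assigned to positions in ascending order of score) is one reading that is consistent with the worked example; the function names and data layout are illustrative assumptions.

```python
import random

def update_prob_matrix(prev, elites, best, alpha=0.1, beta=0.1):
    """Update p[i][j] (job i at or before position j) with the probability
    model formula above, then normalise each job's row so it can be sampled
    as a distribution over positions. Positions are 1-indexed in the
    formula, hence the (j + 1) terms for 0-indexed arrays."""
    n = len(best)
    E = len(elites)                       # elite count, PS * lambda
    p = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            elite_hits = sum(1 for e in elites if e.index(i) <= j)
            best_hit = 1 if best.index(i) <= j else 0
            p[i][j] = ((1 - alpha) * ((1 - beta) * prev[i][j]
                       + beta / ((j + 1) * E) * elite_hits)
                       + alpha / (j + 1) * best_hit)
        row_sum = sum(p[i])
        p[i] = [v / row_sum for v in p[i]]
    return p

def borda_common_sequence(population):
    """Common sequence pi_com. One reading consistent with the worked
    example: score each position by the average job index observed there
    across the population, then assign jobs 0, 1, ... to positions in
    ascending order of score (ties broken at random)."""
    n = len(population[0])
    theta = [sum(ind[pos] for ind in population) / len(population)
             for pos in range(n)]
    ranked = sorted(range(n), key=lambda pos: (theta[pos], random.random()))
    pi_com = [0] * n
    for job, pos in enumerate(ranked):
        pi_com[pos] = job
    return pi_com

def sample_and_insert(pi_com, prob):
    """Sample a target position for every job of pi_com from its row of
    the probability matrix, then insert jobs with the collision rule:
    if the target slot is taken, use the nearest vacant slot after it,
    otherwise the nearest vacant slot before it."""
    n = len(pi_com)
    new_seq = [None] * n
    for job in pi_com:
        order = sorted(range(n), key=lambda q: prob[job][q])
        r, acc, target = random.random(), 0.0, order[-1]
        for pos in order:                 # roulette wheel over positions
            acc += prob[job][pos]
            if r < acc:
                target = pos
                break
        slot = next((q for q in range(target, n) if new_seq[q] is None), None)
        if slot is None:
            slot = next(q for q in range(target - 1, -1, -1)
                        if new_seq[q] is None)
        new_seq[slot] = job
    return new_seq
```

In DFFO, the sequence returned by sample_and_insert would then be recombined with the old solution through PMX, as in Equation (4).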
To provide a detailed example illustrating the above process, assume the population consists of five fruit flies: [4, 3, 1, 0, 2], [0, 3, 2, 4, 1], [1, 2, 4, 3, 0], [1, 0, 4, 3, 2], and [4, 0, 2, 1, 3], where [4, 3, 1, 0, 2], [0, 3, 2, 4, 1], and [1, 2, 4, 3, 0] are the elite individuals and [4, 3, 1, 0, 2] is the best individual.
First, calculate the probability model P.
① Initialize P and calculate I and Ibest.
P(0) =
[0.2 0.2 0.2 0.2 0.2]
[0.2 0.2 0.2 0.2 0.2]
[0.2 0.2 0.2 0.2 0.2]
[0.2 0.2 0.2 0.2 0.2]
[0.2 0.2 0.2 0.2 0.2]
I =
[1 1 1 2 3]
[1 1 2 2 3]
[0 1 2 2 3]
[0 2 2 3 3]
[1 1 2 3 3]
I_best =
[0 0 0 1 1]
[0 0 1 1 1]
[0 0 0 0 1]
[0 1 1 1 1]
[1 1 1 1 1]
(rows are indexed by job i = 0, ..., 4 and columns by position j = 0, ..., 4)
② Calculate P
P =
[0.192 0.177 0.172 0.202 0.2]
[0.192 0.177 0.215 0.202 0.2]
[0.162 0.177 0.182 0.177 0.2]
[0.162 0.242 0.215 0.210 0.2]
[0.162 0.227 0.215 0.210 0.2]
Normalization process (each job's row is normalized to sum to 1):
P =
[0.203 0.187 0.182 0.214 0.212]
[0.195 0.179 0.218 0.205 0.203]
[0.180 0.197 0.203 0.197 0.223]
[0.157 0.235 0.209 0.204 0.194]
[0.255 0.198 0.188 0.183 0.175]
Next, calculate the common sequence π c o m .
Population: [4, 3, 1, 0, 2], [0, 3, 2, 4, 1], [1, 2, 4, 3, 0], [1, 0, 4, 3, 2], [4, 0, 2, 1, 3]; Borda scores θ = [2.0, 1.6, 2.6, 2.2, 1.6].
π_com(1) = 0, π_com(4) = 1, π_com(0) = 2, π_com(3) = 3, π_com(2) = 4, i.e., π_com = [2, 0, 4, 3, 1].
Afterward, calculate R_p based on P and π_com.
(1) π_com(0) = 2: p_pos[0] = 0.18, p_pos[1] = 0.377, p_pos[3] = 0.574, p_pos[2] = 0.774, p_pos[4] = 1; if r = 0.35, then r < p_pos[1], so R_p(2) = 1.
(2) π_com(1) = 0: p_pos[2] = 0.182, p_pos[1] = 0.369, p_pos[0] = 0.572, p_pos[4] = 0.784, p_pos[3] = 1; if r = 0.60, then r < p_pos[4], so R_p(0) = 4.
(3) π_com(2) = 4: p_pos[4] = 0.175, p_pos[3] = 0.358, p_pos[2] = 0.546, p_pos[1] = 0.744, p_pos[0] = 1; if r = 0.54, then r < p_pos[2], so R_p(4) = 2.
(4) π_com(3) = 3: p_pos[0] = 0.157, p_pos[4] = 0.351, p_pos[3] = 0.555, p_pos[2] = 0.764, p_pos[1] = 1; if r = 0.25, then r < p_pos[4], so R_p(3) = 4.
(5) π_com(4) = 1: p_pos[1] = 0.179, p_pos[0] = 0.374, p_pos[4] = 0.577, p_pos[3] = 0.782, p_pos[2] = 1; if r = 0.04, then r < p_pos[1], so R_p(1) = 1.
Finally, insert the jobs of π_com into π according to the position sequence R_p (a dot denotes a vacant position):
Insert π_com(0) = 2 at its target position 1, which is vacant: π = [·, 2, ·, ·, ·].
Insert π_com(1) = 0 at its target position 4, which is vacant: π = [·, 2, ·, ·, 0].
Insert π_com(2) = 4 at its target position 2, which is vacant: π = [·, 2, 4, ·, 0].
Insert π_com(3) = 3 at its target position 4; it is occupied and no vacant position follows, so it is placed at the nearest vacant position before it, position 3: π = [·, 2, 4, 3, 0].
Insert π_com(4) = 1 at its target position 1; it is occupied and no vacant position follows, so it is placed at the nearest vacant position before it, position 0: π = [1, 2, 4, 3, 0].
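The crossover operation ⊗ in Equation (4) is stated to be PMX; the sketch below is a generic partially matched crossover for job permutations (random segment selection and the standard mapping rule), not necessarily the exact variant used in DFFO.

```python
import random

def pmx(parent1, parent2):
    """Partially matched crossover for job permutations: copy a random
    segment from parent1, then fill the remaining slots with parent2's
    jobs, following the segment mapping whenever a direct copy would
    duplicate a job already present in the child."""
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]
    for idx in list(range(a)) + list(range(b + 1, n)):
        job = parent2[idx]
        while job in child[a:b + 1]:
            # map through the copied segment until no duplicate remains
            job = parent2[parent1.index(job)]
        child[idx] = job
    return child
```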

3.4. Local Search

The iterative greedy algorithm (IG) [20] is a highly effective local search method, as verified by Ruiz and Stützle. In this study, the IG algorithm is applied to perform a local search on the best individual in the population. First, identify the best individual in the population, e.g., π = [4, 2, 5, 3, 1]. Next, randomly remove d jobs from it, dividing the job sequence into two parts: π_d and π_r. Then, sequentially insert the d jobs of π_d into their best positions in π_r, thereby obtaining the locally optimized best individual π_new. Figure 3 illustrates an example of the local search process.
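A minimal sketch of this destruction and reconstruction step is given below, using d = 7 as calibrated in Section 4.2 and a makespan evaluation function as assumed earlier; it is illustrative rather than the authors' implementation.

```python
import random

def ig_local_search(best_seq, P, eval_fn, d=7):
    """Iterative greedy refinement of the best individual: remove d random
    jobs (destruction), then greedily reinsert each removed job at the
    position that minimises the makespan (reconstruction)."""
    removed = random.sample(best_seq, d)
    partial = [job for job in best_seq if job not in removed]
    for job in removed:
        cands = [partial[:p] + [job] + partial[p:]
                 for p in range(len(partial) + 1)]
        partial = min(cands, key=lambda s: eval_fn(s, P))
    return partial
```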

3.5. Complexity Analysis

The DFFO algorithm consists of three search phases: the smell search phase based on variable neighborhoods, the visual search phase based on a probability model, and the local search phase. Having described these phases in detail, this section analyzes the time complexity of the DFFO algorithm. Smell search phase: a variable neighborhood search is employed, with time complexity O(PS × n). Visual search phase: the complexity of constructing the probability model depends on the population size and the size of the position matrix, giving a time complexity of O(PS × n²); the time complexity of sampling and updating the positions of individuals is O(n²). Local search phase: the IG algorithm is used to optimize the best individual, with time complexity O(PS × d × n). In conclusion, the total time complexity of the DFFO algorithm is O(PS × n) + O(PS × n²) + O(n²) + O(PS × d × n).

4. Numerical Results and Comparison

4.1. Experimental Setup

To test the performance of the DFFO algorithm, the standard NIPFSP test set proposed in reference [21] was used for validation. The configuration of these instances is as follows: n ∈ {50, 100, 150, 200, 250, 300}, m ∈ {10, 20, 30}, with processing times randomly generated in [1, 99]. Each instance was run independently 10 times, with a termination condition of T_max = 30 × n × (m/2) milliseconds. The evaluation metrics are the average relative percentage deviation (ARPD) and standard deviation (SD), calculated as follows:
ARPD = (1/R) Σ_{i=1}^{R} ((C_i − C*)/C*) × 100%
SD = sqrt( (1/R) Σ_{i=1}^{R} ((C_i − C*)/C* × 100 − ARPD)² )
where Ci represents the solution obtained by a specific algorithm during the i-th run on a given test instance. C* denotes the best-known solution currently discovered. R is the number of runs the algorithm performs on a single test instance.
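In code, the two metrics reduce to the mean and standard deviation of the per-run percentage deviations from the best-known solution; a small illustrative sketch (hypothetical function name) follows.

```python
def arpd_and_sd(makespans, best_known):
    """Average relative percentage deviation and its standard deviation
    over R independent runs on one instance."""
    devs = [(c - best_known) / best_known * 100 for c in makespans]
    arpd = sum(devs) / len(devs)
    sd = (sum((dev - arpd) ** 2 for dev in devs) / len(devs)) ** 0.5
    return arpd, sd
```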

4.2. Parameter Configuration

The parameters of the DFFO algorithm include the population size PS, the elite population ratio λ, and the number of removed jobs d. The candidate values are as follows: PS ∈ {40, 60, 80, 100}, λ ∈ {0.2, 0.3, 0.4, 0.5}, and d ∈ {3, 5, 7, 9}. These three parameters yield a total of 4 × 4 × 4 = 64 combinations. Because calibrating an algorithm on the same instances used for evaluation can lead to overfitting, the Taillard benchmark set [22] is used in this section to calibrate the DFFO parameters. This dataset includes 12 groups of instances of varying sizes, and each test instance is run independently 10 times. A multi-factor analysis of variance (ANOVA) is employed to analyze the results. Table 2 presents the ANOVA results, and Figure 4 shows the main effects plot for the parameters.
From Table 2, it can be observed that the p-values for the parameters PS, λ, and d are all below the 0.05 significance level, indicating that these three parameters significantly influence the algorithm's performance. Based on the ANOVA results and the F-ratios in Table 2, λ has the greatest impact on the performance of the DFFO algorithm. This is because the proportion of the elite population contributes to enhancing the algorithm's performance. The next most influential parameters are d and PS. In combination with the main effects plot in Figure 4, the DFFO parameters are set as follows: PS = 80, λ = 0.3, and d = 7.

4.3. Performance Analysis of DFFO Components

This section validates the effectiveness of each component of the DFFO algorithm through experimental design. The DFFO algorithm mainly consists of the initialization strategy, the olfactory search phase based on variable neighborhoods, the visual search phase based on a probability model, and the local search strategy. To evaluate these components, the following variants are designed:
(1)
DFFO1: random initialization of the fruit fly population’s central positions.
(2)
DFFO2: replacing variable neighborhood search with single neighborhood search.
(3)
DFFO3: removing the visual search phase based on the probability model and using the original update mechanism of the algorithm.
(4)
DFFO4: removing the local search strategy.
The experimental setup remains the same as described earlier. Table 3 presents the average relative percentage deviation (ARPD) and standard deviation (SD) values obtained by the DFFO algorithm and its variants. To verify whether the conclusions are statistically significant, an analysis of variance (ANOVA) was conducted on the experimental results in Table 3. Figure 5 shows the 95% confidence interval plot for each variant, and Figure 6 illustrates the variance interval plot for each variant.
From Table 3, it can be observed that the ARPD of the DFFO algorithm is −0.230, which is better than those of the other variants: DFFO1 (0.044), DFFO2 (0.048), DFFO3 (0.140), and DFFO4 (2.481). This demonstrates that each strategy contributes positively to the algorithm's performance. Additionally, the DFFO algorithm has the smallest standard deviation, indicating that the combination of these strategies enhances the robustness of the algorithm. Furthermore, as shown in Figure 5, the intervals of DFFO and its variants do not overlap, and in Figure 6, the variance interval of DFFO shows the smallest fluctuation among the variants. These observations validate the above conclusions.

4.4. Comparative Analysis with Related Algorithms

In this section, DFFO is compared with state-of-the-art algorithms for flow shop scheduling problems: the novel hybrid bird swarm algorithm (NHBSA) [5], the forward-moving iterative greedy algorithm (FMIGA) [6], the improved fruit fly optimization algorithm (IFFO) [4], and the variable internal iterative algorithm (VIIA) [23]. To ensure a fair comparison, the parameters of the compared algorithms were calibrated in the same way as those of the DFFO algorithm; the values are listed in Table 4. The experimental setup remains the same as described previously. Table 5 presents the ARPD values of DFFO and the other comparative algorithms.
To verify the statistical significance of these conclusions, ANOVA was conducted on the experimental results in Table 5. Figure 7 shows the 95% confidence interval plot for each algorithm, and Figure 8 shows the corresponding variance interval plot.
From Table 5, it can be observed that the ARPD of the DFFO algorithm is −0.230, which outperforms NHBSA (0.288), FMIGA (0.285), IFFO (0.188), and VIIA (0.171). This indicates that the DFFO algorithm is superior to the compared algorithms.
As shown in Figure 7, the intervals of DFFO and the comparative algorithms do not overlap, demonstrating that the DFFO algorithm exhibits statistically significant differences from the other algorithms. Additionally, from Figure 8, the variance interval of DFFO shows the smallest fluctuation among the compared algorithms, further indicating that the DFFO algorithm has better robustness than the other algorithms.
Additionally, Figure 9 presents the convergence curves of DFFO, NHBSA, FMIGA, VIIA, and IFFO algorithms on large-scale test instances. From Figure 9, it can be observed that the overall performance of the DFFO algorithm is significantly better than that of NHBSA, FMIGA, VIIA, and IFFO on the two large-scale test instances. Moreover, as the problem scale increases, the advantages of DFFO become increasingly pronounced. Based on the experimental results mentioned above, the DFFO algorithm demonstrates greater competitiveness compared with other peer algorithms.

4.5. Experimental Summary

This section summarizes the above experiments. To demonstrate the effectiveness of the DFFO algorithm, its parameters were first calibrated (Section 4.2), the contribution of each DFFO component was then analyzed (Section 4.3), and finally DFFO was compared with state-of-the-art algorithms in terms of accuracy and stability (Section 4.4). These experiments show that the DFFO algorithm is superior to the NHBSA, FMIGA, VIIA, and IFFO algorithms in convergence speed, precision, and robustness. In particular, with all algorithms run under the same stopping criterion, the experimental results show that the performance of the DFFO algorithm is at least 28.1% better than that of the other algorithms, verifying the effectiveness of the DFFO algorithm.
Based on the above experimental results, the effectiveness of the DFFO algorithm is verified. On the one hand, DFFO guides the fruit flies toward the true optimal solution by using problem-specific operators and effective strategies. On the other hand, the evolutionary mechanism of DFFO helps maintain the distribution of solutions as much as possible. Overall, the experimental results show that the DFFO algorithm is more competitive than the other algorithms.

5. Conclusions

This paper proposes a DFFO algorithm to solve the NIPFSP. DFFO consists of three phases, i.e., the smell search phase based on the variable neighborhood, the visual search phase based on the probabilistic model, and the local search phase. In the smell search phase, multiple perturbation operators are constructed to further expand the search range of the solution; in the visual search phase, a probabilistic model is constructed to generate a series of positional sequences using some elite groups, and the concept of shared sequences is adopted to generate new individuals based on the positional sequences and shared sequences. In the local search stage, the optimal individuals are refined with the help of an iterative greedy algorithm, so that the fruit flies are directed to more promising regions. Finally, the test results show that DFFO’s performance is at least 28.1% better than other algorithms, which verifies that DFFO is an efficient method to solve the NIPFSP.
Since the DFFO algorithm’s evolutionary operators were designed based on the specific characteristics of the problem, it has certain limitations and may not be directly applicable to other scheduling problems. However, it provides valuable insights for solving other scheduling challenges.
For future research, we aim to integrate different strategies, such as combining deep reinforcement learning, into the fruit fly optimization framework to address multi-objective scheduling problems, path planning problems, and integrated scheduling issues. This is one of the directions we are actively pursuing.

Author Contributions

Conceptualization, F.Z.; methodology, F.Z.; software, F.Z.; validation, F.Z.; formal analysis, F.Z.; investigation, F.Z.; resources, F.Z.; data curation, F.Z.; writing—original draft preparation, F.Z.; writing—review and editing, F.Z.; visualization, F.Z.; supervision, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to the need for further analysis in future studies. The researchers plan to conduct additional analyses, and, thus, the data are temporarily withheld to preserve the novelty of subsequent research findings.

Conflicts of Interest

All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

  1. Fernandez-Viagas, V.; Molina-Pariente, J.M.; Framinan, J.M. Generalised accelerations for insertion-based heuristics in permutation flowshop scheduling. Eur. J. Oper. Res. 2020, 282, 858–872. [Google Scholar] [CrossRef]
  2. Bagheri Rad, N.; Behnamian, J. Recent trends in distributed production network scheduling problem. Artif. Intell. Rev. 2022, 55, 2945–2995. [Google Scholar] [CrossRef]
  3. Huang, J.P.; Pan, Q.K.; Gao, L. An effective iterated greedy method for the distributed permutation flowshop scheduling problem with sequence-dependent setup times. Swarm Evol. Comput. 2020, 59, 100742. [Google Scholar] [CrossRef]
  4. Yin, R.X.; Feng, X.Q.; Wu, T. Improved fruit fly algorithm for no idle flow shop scheduling problem. Modul. Mach. Tool Autom. Manuf. Technol. 2022, 32, 142–150. [Google Scholar]
  5. Yan, H.C.; Tang, W.; Yao, B. New hybrid bird swarm Algorithm for no idle flow shop scheduling Problem. Microelectron. Comput. 2022, 39, 98–106. [Google Scholar]
  6. Zhao, Z.M.; Wang, J.H.; Zhu, K. Minimum iterative greedy algorithm for total delay in no-idle permutation flow shop. Modul. Mach. Tool Autom. Process. Technol. 2023, 3, 177–182. [Google Scholar]
  7. Zhao, R.; Lang, J.; Gu, X.S. Hybrid no-idle permutation flow shop scheduling based on multi-objective discrete sine optimization algorithm. J. East China Univ. Sci. Technol. (Nat. Sci. Ed.) 2022, 48, 76–86. [Google Scholar]
  8. Pei, X.B.; Li, Y.Z. Application of improved hybrid evolutionary Algorithm to no-idle permutation flow shop scheduling problem. Oper. Res. Manag. 2019, 29, 204–212. [Google Scholar]
  9. Pan, W.T. A new fruit fly optimization algorithm: Taking the financial distress model as an example. Knowl.-Based Syst. 2012, 26, 69–74. [Google Scholar] [CrossRef]
  10. Wang, L.; Xiong, Y.N.; Li, S.; Zeng, Y.R. New fruit fly optimization algorithm with joint search strategies for function optimization problems. Knowl.-Based Syst. 2021, 176, 77–96. [Google Scholar] [CrossRef]
  11. Ibrahim, I.A.; Hossain, M.J.; Duck, B.C. A hybrid wind driven-based fruit fly optimization algorithm for identifying the parameters of a double-diode photovoltaic cell model considering degradation effects. Sustain. Energy Technol. Assess. 2022, 50, 101685. [Google Scholar] [CrossRef]
  12. Saminathan, K.; Thangavel, R. Energy efficient and delay aware clustering in mobile adhoc network: A hybrid fruit fly optimization algorithm and whale optimization algorithm approach. Concurr. Comput. Pract. Exp. 2022, 34, 6867. [Google Scholar] [CrossRef]
  13. Hu, G.; Xu, Z.; Wang, G.; Zeng, B.; Liu, Y.; Lei, Y. Forecasting energy consumption of long-distance oil products pipeline based on improved fruit fly optimization algorithm and support vector regression. Energy 2021, 224, 120153. [Google Scholar] [CrossRef]
  14. Zhu, N.; Zhao, F.; Wang, L.; Ding, R.; Xu, T. A discrete learning fruit fly algorithm based on knowledge for the distributed no-wait flow shop scheduling with due windows. Expert Syst. Appl. 2022, 198, 116921. [Google Scholar] [CrossRef]
  15. Guo, H.W.; Sang, H.Y.; Zhang, B.; Meng, L.L.; Liu, L.L. An effective metaheuristic with a differential flight strategy for the distributed permutation flowshop scheduling problem with sequence-dependent setup times. Knowl.-Based Syst. 2022, 242, 108328. [Google Scholar] [CrossRef]
  16. Guo, H.W.; Sang, H.Y.; Zhang, X.J.; Duan, P.; Li, J.Q.; Han, Y.Y. An effective fruit fly optimization algorithm for the distributed permutation flowshop scheduling problem with total flowtime. Eng. Appl. Artif. Intell. 2023, 123, 106347. [Google Scholar] [CrossRef]
  17. Nawaz, M.; Enscore, E.; Ham, I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 1983, 11, 91–95. [Google Scholar]
  18. Ceberio, J.; Irurozki, E.; Mendiburu, A.; Lozano, J.A. A distance-based ranking model estimation of distribution algorithm for the flowshop scheduling problem. IEEE Trans. Evol. Comput. 2014, 18, 286–300. [Google Scholar] [CrossRef]
  19. Wang, S.Y.; Wang, L.; Liu, M.; Xu, Y. An order-based estimation of distribution algorithm for stochastic hybrid flow-shop scheduling problem. Int. J. Comput. Integr. Manuf. 2014, 28, 307–320. [Google Scholar] [CrossRef]
  20. Ruiz, R.; Stützle, T. A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. Eur. J. Oper. Res. 2007, 177, 2033–2049. [Google Scholar] [CrossRef]
  21. Ruiz, R.; Vallada, E.; Fernandez-Martinez, C. Scheduling in flowshops with no-idle machines. In Computational Intelligence in Flow Shop and Job Shop Scheduling; Springer: Berlin/Heidelberg, Germany, 2009; pp. 21–51. [Google Scholar]
  22. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285. [Google Scholar] [CrossRef]
  23. Li, J.; Li, Y.W. Variable block internal iteration algorithm for no idle flow shop problem. Appl. Res. Comput. 2022, 39, 3667–3672. [Google Scholar]
Figure 1. DFFO flow chart.
Figure 2. Four kinds of neighborhood perturbation operators.
Figure 3. Local search process.
Figure 4. Main effect diagram of DFFO parameters.
Figure 5. The 95% confidence intervals for DFFO and variants.
Figure 6. Variogram of DFFO and each variation.
Figure 7. The 95% confidence interval diagram of DFFO and comparison algorithm.
Figure 8. Variance interval diagram of DFFO and comparison algorithm.
Figure 9. Convergence diagram on a DFFO large-scale test instance.
Table 1. Meaning of symbols.
Symbol | Implication
i | Machine index, i ∈ {1, 2, ..., m}
j | Job index, j ∈ {1, 2, ..., n}
π = [π(1), π(2), ..., π(n)] | A complete job ordering
π_q(j) | A partial sequence of π
G(π_q(j), k, k+1) | The difference in completion times between machines k and k + 1 after completing sequence π_q(j)
P(π(j), k) | The processing time of job π(j) on machine k
C_max | The makespan
Table 2. Results of the multi-factor analysis of variance for the parameters.
Source | Sum of Squares | Degrees of Freedom | Mean Square | F-Ratio | p-Value
PS | 0.39 | 3 | 0.189 | 41.32 | 0.0000
λ | 181.26 | 3 | 35.265 | 2266.32 | 0.0000
d | 5.32 | 3 | 0.836 | 56.01 | 0.0000
PS × λ | 2.03 | 9 | 0.2156 | 0.48 | 0.5489
PS × d | 2.89 | 9 | 0.4856 | 0.52 | 0.6629
λ × d | 4.16 | 9 | 0.2769 | 0.09 | 0.8413
Table 3. ARPD and SD values of DFFO and the variant algorithms.
n | m | DFFO1 ARPD | DFFO1 SD | DFFO2 ARPD | DFFO2 SD | DFFO3 ARPD | DFFO3 SD | DFFO4 ARPD | DFFO4 SD | DFFO ARPD | DFFO SD
50 | 10 | 0.184 | 0.091 | 0.214 | 0.221 | 0.417 | 0.218 | 3.446 | 0.965 | 0.123 | 0.095
50 | 20 | 0.080 | 0.130 | 0.008 | 0.107 | 0.452 | 0.273 | 4.241 | 1.235 | 0.036 | 0.128
50 | 30 | 0.021 | 0.372 | 0.025 | 0.205 | 0.521 | 0.518 | 1.354 | 0.025 | −0.070 | 0.190
100 | 10 | 0.102 | 0.060 | 0.123 | 0.072 | 0.150 | 0.045 | 2.154 | 0.681 | 0.062 | 0.070
100 | 20 | 0.060 | 0.168 | −0.018 | 0.231 | 0.156 | 0.184 | 3.050 | 1.201 | −0.085 | 0.130
100 | 30 | −0.154 | 0.331 | −0.133 | 0.296 | 0.276 | 0.405 | 4.903 | 1.201 | −0.310 | 0.210
150 | 10 | 0.004 | 0.002 | 0.008 | 0.003 | 0.008 | 0.002 | 0.717 | 0.356 | 0.003 | 0.002
150 | 20 | 0.093 | 0.150 | 0.185 | 0.093 | 0.232 | 0.167 | 3.265 | 0.686 | 0.001 | 0.088
150 | 30 | 0.006 | 0.258 | 0.132 | 0.221 | 0.049 | 0.232 | 3.425 | 0.814 | −0.178 | 0.165
200 | 10 | 0.004 | 0.014 | −0.001 | 0.010 | 0.022 | 0.055 | 0.472 | 0.252 | −0.005 | 0.002
200 | 20 | 0.143 | 0.146 | 0.060 | 0.138 | 0.080 | 0.103 | 2.204 | 0.594 | −0.001 | 0.108
200 | 30 | −0.096 | 0.235 | −0.025 | 0.145 | −0.011 | 0.239 | 3.394 | 0.701 | −3.478 | 0.156
250 | 10 | 0.004 | 0.014 | −0.003 | 0.007 | −0.005 | 0.004 | 0.478 | 0.256 | −0.005 | 0.004
250 | 20 | 0.088 | 0.124 | 0.079 | 0.079 | 0.088 | 0.085 | 2.023 | 0.591 | −0.010 | 0.005
250 | 30 | −0.020 | 0.165 | −0.021 | 0.121 | −0.059 | 0.126 | 3.381 | 1.083 | −0.158 | 0.129
300 | 10 | 0.001 | 0.001 | 0.003 | 0.004 | 0.006 | 0.012 | 0.628 | 0.254 | 0.000 | 0.002
300 | 20 | 0.098 | 0.095 | 0.112 | 0.056 | 0.046 | 0.012 | 2.364 | 0.487 | −0.016 | 0.040
300 | 30 | 0.132 | 0.195 | 0.116 | 0.108 | 0.099 | 0.140 | 3.175 | 0.745 | −0.046 | 0.072
Mean | | 0.044 | 0.142 | 0.048 | 0.118 | 0.140 | 0.157 | 2.481 | 0.699 | −0.230 | 0.088
Table 4. Parameter values of DFFO and the comparison algorithms.
Algorithm | Parameter Values
NHBSA | PS = 80, learning factor c1 = 0.5, learning factor c2 = 0.5
FMIGA | PS = 80, d = 7
IFFO | PS = 80, local search times LS = 10
VIIA | PS = 80, d = 7
DFFO | PS = 80, λ = 0.3, d = 7
Table 5. ARPD and SD values of DFFO and the comparison algorithms.
n | m | NHBSA ARPD | NHBSA SD | FMIGA ARPD | FMIGA SD | IFFO ARPD | IFFO SD | VIIA ARPD | VIIA SD | DFFO ARPD | DFFO SD
50 | 10 | 0.479 | 0.415 | 0.442 | 0.299 | 0.353 | 0.289 | 0.542 | 0.709 | 0.123 | 0.095
50 | 20 | 1.586 | 0.284 | 0.387 | 0.186 | 0.321 | 0.183 | 1.158 | 0.673 | 0.036 | 0.128
50 | 30 | 0.583 | 0.344 | 0.574 | 0.180 | 0.792 | 0.188 | 0.216 | 0.801 | −0.070 | 0.190
100 | 10 | 0.154 | 0.435 | 0.243 | 0.195 | 0.187 | 0.169 | 0.090 | 0.855 | 0.062 | 0.070
100 | 20 | 0.319 | 0.302 | 0.299 | 0.192 | 0.177 | 0.157 | 0.158 | 0.563 | −0.085 | 0.130
100 | 30 | 0.508 | 0.343 | 0.631 | 0.202 | 0.210 | 0.210 | 0.243 | 0.805 | −0.310 | 0.210
150 | 10 | 0.005 | 0.346 | 0.034 | 0.229 | 0.009 | 0.559 | 0.020 | 0.753 | 0.003 | 0.002
150 | 20 | 0.394 | 0.479 | 0.385 | 0.247 | −0.044 | 0.283 | −0.127 | 0.919 | 0.001 | 0.088
150 | 30 | 0.257 | 0.349 | 0.287 | 0.577 | 0.184 | 0.195 | 0.194 | 0.819 | −0.178 | 0.165
200 | 10 | 0.006 | 0.463 | 0.058 | 0.283 | 0.010 | 0.337 | 0.000 | 0.907 | −0.005 | 0.002
200 | 20 | 0.103 | 0.455 | 0.274 | 0.178 | 0.192 | 0.221 | 0.124 | 0.931 | −0.001 | 0.108
200 | 30 | 0.119 | 0.319 | 0.250 | 0.194 | 0.091 | 0.164 | 0.001 | 0.699 | −3.478 | 0.156
250 | 10 | 0.006 | 0.406 | 0.051 | 0.195 | 0.040 | 0.188 | 0.035 | 0.941 | −0.005 | 0.004
250 | 20 | 0.135 | 0.316 | 0.181 | 0.190 | 0.089 | 0.233 | 0.067 | 0.597 | −0.010 | 0.005
250 | 30 | 0.124 | 0.434 | 0.350 | 0.330 | 0.070 | 0.300 | −0.012 | 0.816 | −0.158 | 0.129
300 | 10 | 0.001 | 0.486 | 0.044 | 0.226 | 0.414 | 0.199 | 0.103 | 0.803 | 0.000 | 0.002
300 | 20 | 0.162 | 0.326 | 0.223 | 0.154 | 0.129 | 0.162 | 0.087 | 0.754 | −0.016 | 0.040
300 | 30 | 0.239 | 0.348 | 0.420 | 0.156 | 0.155 | 0.147 | 0.181 | 0.741 | −0.046 | 0.072
Mean | | 0.288 | 0.142 | 0.285 | 0.118 | 0.188 | 0.157 | 0.171 | 0.699 | −0.230 | 0.088
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
