Article

Graph Knowledge-Enhanced Iterated Greedy Algorithm for Hybrid Flowshop Scheduling Problem

1 School of Management, Wuhan University of Science and Technology, Wuhan 430081, China
2 School of Computer Science, Liaocheng University, Liaocheng 252000, China
3 Hubei Digital Manufacturing Key Laboratory, School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430062, China
4 Key Laboratory of Metallurgical Equipment and Control Technology, Wuhan University of Science and Technology, Wuhan 430081, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2401; https://doi.org/10.3390/math13152401
Submission received: 24 June 2025 / Revised: 22 July 2025 / Accepted: 24 July 2025 / Published: 25 July 2025

Abstract

This study presents a graph knowledge-enhanced iterated greedy (IG) algorithm for the hybrid flowshop scheduling problem (HFSP) that incorporates dual directional decoding strategies, disjunctive graphs, neighborhood structures, and a rapid evaluation method. The proposed algorithm addresses the trade-off between the finite solution space induced by the solution representation and the search space containing the optimal solution, and it constructs a decision mechanism that determines which search operator should be used in different search stages, thereby reducing futile searches and the low computational efficiency caused by unordered neighborhood searches. The algorithm employs dual decoding with a novel disturbance operation to generate initial solutions and expand the search space. The derivation of the critical path and the design of neighborhood structures based on it provide a clear direction for identifying and prioritizing operations that have a significant impact on the objective. The use of a disjunctive graph clearly depicts the detailed changes in the job sequence both before and after the neighborhood searches, providing a comprehensive view of the operational sequence transformations. By integrating the rapid evaluation technique, it becomes feasible to identify promising regions within a constrained timeframe. Numerical evaluation on well-known benchmarks verifies that the graph knowledge-enhanced algorithm outperforms prior algorithms and finds new best solutions for 183 hard instances.

1. Introduction

Shop scheduling involves creating a rational problem model and developing scheduling optimization methods aimed at enhancing manufacturing efficiency and reducing production costs, which are crucial for modern manufacturing systems. The hybrid flowshop scheduling problem (HFSP) is a prevalent issue within the realm of shop scheduling. Initially introduced by Salvador in the context of synthetic fiber processing [1], the HFSP has since been a subject of interest. In a hybrid flow shop, there are multiple machines available for at least one stage of the process. This setup integrates the two fundamental models of flow shop [2] and parallel machine problems to effectively boost productivity. However, this complex shop model introduces a higher level of complexity and difficulty in decision-making compared to the simpler models. The HFSP continues to be a focus of research due to its extensive practical applications within industrial production systems [3], as shown in Figure 1. Industrial processes such as automotive welding are typical examples of hybrid flowshop processes.
In an HFSP, there is a set J of n jobs and a set M of m stages, where each stage i is composed of $m_i$ identical parallel machines, $i \in M$. All jobs are required to complete the same series of processes across the m stages. Figure 2 illustrates a hybrid flowshop layout. The HFSP then consists of finding a schedule (i.e., a set of starting times for each job on each machine) that minimizes the makespan, or maximum completion time among the jobs, denoted $C_{max}$. Minimizing $C_{max}$ means higher machine utilization, lower work-in-process inventory, and higher customer satisfaction. This study aims to minimize $C_{max}$ by determining the processing sequence of jobs and the machine allocation at each stage.
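To make the notation concrete, the following minimal C++ sketch shows one possible way to store an HFSP instance and read off $C_{max}$ from a decoded schedule; the struct, field, and function names are illustrative assumptions for this sketch, not the data structures used in this paper.

#include <algorithm>
#include <iostream>
#include <vector>

// Illustrative data layout for an HFSP instance (names are assumptions, not from the paper).
struct HFSPInstance {
    int n;                                   // number of jobs
    int m;                                   // number of stages
    std::vector<int> machinesPerStage;       // m_i identical parallel machines at stage i
    std::vector<std::vector<int>> procTime;  // procTime[i][j]: time of job j at stage i
};

// C_max of a decoded schedule: the largest completion time at the last stage.
int makespan(const std::vector<std::vector<int>>& completion) {
    return *std::max_element(completion.back().begin(), completion.back().end());
}

int main() {
    // A toy 3-job, 2-stage instance with 2 and 1 machines per stage (arbitrary numbers).
    HFSPInstance inst{3, 2, {2, 1}, {{4, 2, 3}, {5, 1, 2}}};
    std::cout << "jobs=" << inst.n << ", stages=" << inst.m << "\n";
    return 0;
}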
The HFSP is recognized as a strongly NP-hard problem due to its intricate nature [4]; traditional exact methods are only capable of yielding optimal solutions for a limited number of small-scale instances and prove ineffective when scaled up [5]. Approximate algorithms are therefore commonly used to address this complex optimization problem, although they too struggle to find optimal solutions for certain hard instances, often due to a lack of knowledge-driven learning strategies [3,4,5]. On the one hand, the representation of solutions, one of the cores of approximate algorithms, has its limitations. The commonly used single-stage encoding based on job sequences (which provides the job sequence for the first stage, with the jobs in subsequent stages being sorted according to scheduling variables based on certain rules) and complete encoding (which determines the machine assignment and the processing order between all operations to represent all solutions) suffer, respectively, from narrowing the algorithm's search space and from low search efficiency on large-scale instances [6]. Therefore, this study applies an initial solution generation method using dual permutation-based encoding and a novel perturbation strategy in the decoding process to expand the search space.
On the other hand, these algorithms often employ random neighborhood search operators, such as multi-point random insertion and two-point random swapping or mutation. Random and unstructured movements fail to achieve an efficient neighborhood search [7]. It is worth noting that single-stage encoding cannot intuitively represent the operation movement of a neighborhood search, because changing the job sequence in the initial stage will alter the sequences in other stages; this encoding method therefore cannot fully depict the sequence after the move. Consequently, this study combines the complete encoding method of the disjunctive graph model and designs a graph knowledge-enhanced strategy to mine domain knowledge of the HFSP, which in turn guides the design of efficient neighborhood search operators.
This study introduces a graph knowledge-enhanced iterated greedy (IG) algorithm tailored for the HFSP. The IG algorithm, a prominent metaheuristic, has demonstrated robust performance in addressing a variety of scheduling challenges, including the HFSP [8], the flowshop scheduling problem [9], and the distributed scheduling problem [10]. The IG algorithm presented in this paper incorporates a local search that leverages a knowledge-driven neighborhood structure to produce new feasible solutions.
To assess the quality of a neighborhood solution, it is necessary to re-decode the new scheduling plan to obtain the fitness value of the neighborhood solution. However, a neighborhood move only adjusts a small portion of the operations on certain machines, and the processing order on most machines remains unchanged; decoding the entire schedule multiple times would therefore consume a significant amount of computational time. Since there is a dearth of research on rapid evaluation methods for neighborhood solutions, this study proposes a novel rapid evaluation method designed to locate promising areas within a limited timeframe. Given that current research on the aforementioned key issues is very limited, this study aims to design a novel scheduling method capable of tackling certain hard instances present in existing benchmarks. The main contributions of this study can be summarized as follows:
  • An initial solution generation method that utilizes dual permutation-based encoding and a decoding technique with a novel perturbation strategy to broaden the search space.
  • A graph knowledge-enhanced IG algorithm with a knowledge-based neighborhood structure and rapid evaluation method for solving the HFSP.
  • The proposed graph knowledge-enhanced IG algorithm finds new best solutions for 183 hard instances.
The structure of this paper is organized as follows. Section 2 summarizes recent literature about the HFSP. Section 3 describes the details of the enhanced IG algorithm. Section 4 investigates the effectiveness of the dual directional decoding strategy and knowledge-based neighborhood search strategy, and verifies the performance of IG. Finally, Section 5 gives the conclusion and future work.

2. Related Work

In the HFSP and its variants, many scheduling methods have been developed, which can be broadly categorized into exact methods, scheduling rules, metaheuristic algorithms, and learning-based algorithms. Initially, research efforts were concentrated on exact methods for tackling small-scale HFSPs using traditional mathematical programming techniques and their algorithmic derivations. For instance, He et al. [11] proposed a polynomial-time algorithm for the two-stage HFSP with machine constraints of batch processing. Sun et al. [12] proposed a branch-and-bound method and mathematical programming for minimizing the makespan. As the scale of the problem increases, its complexity increases exponentially, resulting in a dramatic increase in the search space. This has prompted researchers to develop scheduling rules that assign jobs to machines in a rational manner at each stage, yielding feasible solutions. Examples of such rules include the Johnson rule, the Nawaz-Enscore-Ham method, the shortest processing time rule, and the first available machine approach. Paternina et al. [13] designed machine selection rules based on the identification of bottleneck phases.
Distinct from the previous two methods, metaheuristic algorithms integrate stochastic factors and use them to perform multiple guided searches over the problem solution, allowing better solutions to be produced across iterations. Metaheuristic algorithms are praised for their simplicity and efficacy, prompting extensive research on their design and enhancement. Scholars have dedicated efforts to this field, with applications spanning a wide array of practical uses in various industries, as well as serving as benchmarks for solution updates [8]. Lin et al. [14] proposed an improved simulated annealing algorithm with a chaos-enhanced annealing mechanism to solve the Jose benchmark instances and successfully updated 137 of 240 problems. Fan et al. [5] combined forward and backward decoding methods to expand the search space and applied a disjunctive graph to represent each solution, which clearly shows its critical path. Huang et al. [15] applied discrete differential evolution with two-stage stochastic programming to solve the HFSP. Safari et al. [16] designed leader and follower agents to decompose the HFSP, obtained the optimal solution of each sub-problem with a co-evolutionary genetic algorithm, and validated the method on a case from the tire manufacturing industry. For the HFSP with reentrant constraints and limited buffer, Lin et al. [17] proposed a hybrid harmony search and GA to minimize the makespan and flowtime. Ozsoydan and Sagir [18] solved an HFSP with machine setup times using a learning-enhanced IG algorithm. Luan et al. [19] proposed a low-carbon scheduling model and a discrete whale optimization algorithm to minimize completion time cost and energy consumption cost. Metaheuristic algorithms for solving the distributed HFSP include IG [20,21], GA [22], the artificial bee colony algorithm [23,24], the nested variable neighborhood descent algorithm [25], and so on.
The efficacy of metaheuristic algorithms hinges on solution representation and neighborhood search. Early encoding methods, such as permutation-based encoding in the first stage, confined search within the limited solution space, potentially excluding optimal solutions. With AI advancements, learning-based algorithms have been applied to the HFSP [26], integrating agent systems and machine learning. Li et al. [27] proposed a bilevel scheduler with a constrained Markov decision process for initial global sequence identification and partial sequence refinements through Deep Q and graph pointer networks.
In recent years, HFSP research has increasingly integrated graph structures and learning algorithms, showing promising prospects. Wang et al. [28] proposed a mandatory operation acceleration mechanism in graph space, which reduces insertion positions by analyzing critical paths in the graph, thereby significantly lowering the computational cost of neighborhood search. This provides a theoretical foundation for designing efficient graph-guided search strategies. In the field of reinforcement learning, Liu et al. [29] proposed a PPO-GAT algorithm, combining graph attention networks with a difference reward mechanism to effectively handle delayed optimization in dynamic HFSP scenarios. Furthermore, Wan et al. [30] and Huang et al. [31], respectively, introduced a multi-agent graph reinforcement learning model and a multiexpert graph neural network approach, both enhancing the representational capacity and generalization ability in complex scheduling states.
Although these studies have made considerable progress in intelligent decision-making and scheduling modeling, they often suffer from uncontrollable structural operations and weak interpretability in the search process. In contrast, this study proposes a graph-augmented iterative greedy algorithm that leverages graph-based neighborhood perturbation and fast evaluation strategies, aiming to improve both the interpretability and efficiency of the search process, thereby addressing key limitations in existing literature.
The IG algorithm, with its flexibility in accommodating various scheduling problem constraints, has been further developed. Pan et al. [32] considered the due windows constraint and proposed a hybrid IG and greedy local search algorithm through an optimal idle time insertion operator to balance the earliness and tardiness. Öztop et al. [33] proposed four different IG algorithms to optimize the total flow time by combining the variable batch insertion heuristic. To clarify the research progress in this field, this paper provides a comparative summary of relevant literature in Table 1.
From this evolution of algorithms, this paper concludes that the IG algorithm demonstrates significant advantages in solving the HFSP, yet its design still has shortcomings that require attention: (1) it lacks domain-specific knowledge of the problem, leading to a heavy reliance on specialized local searches; (2) there is no memory structure within the IG, so a vast number of historical solutions are discarded during the evolution of the solution; and (3) the number of heuristic methods is relatively limited, which leads to a lack of diversity in the solutions. Therefore, this paper seeks to address these deficiencies in current IG algorithm design.

3. The Enhanced IG Algorithm for the HFSP

This section encompasses nine distinct parts, delving into the operational design under the IG algorithm framework. It highlights the representation of problem solutions, the derivation of critical paths for the HFSP, and the crafting of neighborhood structures predicated on these critical paths.

3.1. The Encoding and Decoding Disruption Mechanism

The enhanced IG algorithm employs a permutation-based encoding method. The encoding method is one of the core aspects of algorithm design: an excellent encoding scheme effectively covers the solution space, ensuring the search efficiency of the algorithm. Encoding based on job sequences has proven to be the most effective. In this section, the encoded vector, composed of real numbers, signifies the job sequence at the initial stage; the job sequencing and machine scheduling of subsequent stages are determined based on this encoded vector. The job sequence is governed by the First Come First Served (FCFS) rule, while the First Available Machine (FAM) rule dictates the machine assignment. The rigidity of the FCFS and FAM decoding rules confines the search within a restricted space.
To transcend this limitation and enhance diversity in scheduling schemes, this paper introduces a decoding perturbation mechanism that probabilistically adjusts the job sequence of the current stage for refinement. The fine-tuning operation consists of two steps: The initial step involves arranging a group of jobs in non-descending order according to their completion times from the previous stage. In the second step, the completion times of two consecutive jobs are compared to determine whether their positions need to be swapped.
Furthermore, to fully improve the quality of the solution, this section also applies a hybrid approach combining forward and backward decoding techniques. This technique employs two distinct decoding approaches. In forward decoding, jobs are scheduled for the earliest possible start times, with the encoding vector dictating the initial job sequence. Subsequent stages adhere to the FCFS rule, scheduling new job sequences based on the release order from the previous stage. Machine allocation follows the FAM rule. Conversely, backward decoding schedules jobs for the latest possible start times, with the encoding vector representing the final stage job sequence. Remaining stages schedule jobs based on the start times of the following phase, with machine allocation also adhering to the FAM rule. In the proposed enhanced IG algorithm, an initial solution is randomly generated to optimize the utilization of the designed local search. This solution is decoded using a hybrid forward and backward decoding technique, further enhanced by a decoding perturbation mechanism.
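The following C++ sketch illustrates the forward decoding just described, with the FCFS rule for stage-to-stage job ordering, the FAM rule for machine assignment, and a simplified probabilistic adjacent-swap standing in for the decoding perturbation; all function and variable names are assumptions for illustration, and the swap criterion is deliberately reduced from the two-step comparison of Section 3.1.

#include <algorithm>
#include <random>
#include <vector>

// Forward decoding sketch (illustrative; not the paper's exact implementation).
// perm: encoded job permutation for stage 1; p[i][j]: processing time of job j at
// stage i; mPerStage[i]: number of identical machines m_i at stage i.
int forwardDecode(const std::vector<int>& perm,
                  const std::vector<std::vector<int>>& p,
                  const std::vector<int>& mPerStage,
                  double perturbProb, std::mt19937& rng) {
    const int S = static_cast<int>(p.size());
    const int n = static_cast<int>(perm.size());
    std::vector<int> ready(n, 0);            // completion time at the previous stage
    std::vector<int> order = perm;           // processing order at the current stage
    std::uniform_real_distribution<double> coin(0.0, 1.0);

    for (int i = 0; i < S; ++i) {
        if (i > 0) {
            // FCFS: jobs are released in non-descending order of previous-stage completion.
            std::sort(order.begin(), order.end(),
                      [&](int a, int b) { return ready[a] < ready[b]; });
            // Decoding perturbation (simplified): occasionally swap adjacent jobs.
            for (int k = 0; k + 1 < n; ++k)
                if (coin(rng) < perturbProb) std::swap(order[k], order[k + 1]);
        }
        std::vector<int> machineFree(mPerStage[i], 0);
        std::vector<int> finish(n, 0);
        for (int job : order) {
            // FAM: assign the job to the machine that becomes available first.
            const int m = static_cast<int>(
                std::min_element(machineFree.begin(), machineFree.end()) - machineFree.begin());
            const int start = std::max(machineFree[m], ready[job]);
            finish[job] = start + p[i][job];
            machineFree[m] = finish[job];
        }
        ready = finish;                      // release times for the next stage
    }
    return *std::max_element(ready.begin(), ready.end());   // makespan C_max
}

Backward decoding can be sketched symmetrically by scheduling from the last stage toward the first with latest possible start times.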

3.2. Scheduling Scheme of HFSP Represented by a Disjunctive Graph

The permutation-based encoding method is unable to express neighborhood moves intuitively, as alterations to the initial job sequence will result in changes to the subsequent stages; consequently, the encoding method cannot fully represent the sequences after the moves have occurred. This paper therefore proposes the use of a disjunctive graph to represent the scheduling plans of the HFSP, as this allows such plans to be depicted in a manner that is both concise and accurate.
A disjunctive graph consists of a set of nodes and a set of arcs. A node represents an operation, and an arc indicates the relationship between two adjacent operations within a given job, or two consecutive operations on the same machine. This work introduces a pair of dummy nodes, defined as follows: Obegin and Oend represent the beginning and end of the schedule, respectively. If an operation is the first operation for a job or machine, its predecessor is the dummy node Obegin; the node Oend is connected to any operation without a subsequent job or machine operation. In Figure 3, the Gantt chart on the left (a) and the disjunctive graph on the right (b) represent two different ways of presenting a solution for the 5 × 3 scale instance. Within the Gantt chart, the red solid line signifies a critical path. In the disjunctive graph of a solution, connecting arcs link two consecutive operations of the same job, and separating arcs match two consecutive operations processed on the same machine: the red solid arcs are the connecting arcs linking the adjacent operations of each job, and the green dashed arcs are the separating arcs between two consecutive operations on the same machine, with the arrow indicating the processing order on that machine.
In the disjunctive graph, the longest distance between operations $O_{i,j}$ and $O_{h,g}$ is denoted as $L(O_{i,j}, O_{h,g})$, and hence the makespan can be expressed as $L(O_{begin}, O_{end})$. To clearly delineate the proposed neighborhood structure based on the disjunctive graph, the head length and tail length of operation $O_{i,j}$ are defined as $R_{O_{i,j}}$ and $Q_{O_{i,j}}$, respectively. The recursive computation of these lengths is detailed in Equations (1) and (2), where $JP_{O_{i,j}}$ and $MP_{O_{i,j}}$ denote the job predecessor and machine predecessor of operation $O_{i,j}$, and $JS_{O_{i,j}}$ and $MS_{O_{i,j}}$ denote the job successor and machine successor of operation $O_{i,j}$, respectively.
$R_{O_{begin}} = Q_{O_{end}} = 0$
$R_{O_{i,j}} = \max\{ R_{JP_{O_{i,j}}} + p_{JP_{O_{i,j}}},\ R_{MP_{O_{i,j}}} + p_{MP_{O_{i,j}}} \}$
$Q_{O_{i,j}} = \max\{ Q_{JS_{O_{i,j}}} + p_{JS_{O_{i,j}}},\ Q_{MS_{O_{i,j}}} + p_{MS_{O_{i,j}}} \}$
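A minimal C++ sketch of how the recursions in Equations (1) and (2) can be evaluated on a disjunctive-graph representation, assuming each operation stores the indices of its job and machine predecessors and successors; the node layout and function names are illustrative assumptions, not the paper's implementation.

#include <algorithm>
#include <vector>

// Illustrative disjunctive-graph node: indices of the job/machine predecessor and
// successor operations (-1 when the neighbor is a dummy node).
struct OpNode {
    int p = 0;             // processing time
    int jp = -1, mp = -1;  // job predecessor, machine predecessor
    int js = -1, ms = -1;  // job successor, machine successor
};

// Head length R: longest path from O_begin to the start of each operation,
// computed in a topological order of the graph (the recursion of Equation (1)).
std::vector<int> headLengths(const std::vector<OpNode>& ops,
                             const std::vector<int>& topoOrder) {
    std::vector<int> R(ops.size(), 0);
    for (int v : topoOrder) {
        int r = 0;
        if (ops[v].jp >= 0) r = std::max(r, R[ops[v].jp] + ops[ops[v].jp].p);
        if (ops[v].mp >= 0) r = std::max(r, R[ops[v].mp] + ops[ops[v].mp].p);
        R[v] = r;
    }
    return R;
}

// Tail length Q: longest path from the completion of each operation to O_end,
// computed in reverse topological order (the recursion of Equation (2)).
std::vector<int> tailLengths(const std::vector<OpNode>& ops,
                             const std::vector<int>& topoOrder) {
    std::vector<int> Q(ops.size(), 0);
    for (auto it = topoOrder.rbegin(); it != topoOrder.rend(); ++it) {
        int v = *it, q = 0;
        if (ops[v].js >= 0) q = std::max(q, Q[ops[v].js] + ops[ops[v].js].p);
        if (ops[v].ms >= 0) q = std::max(q, Q[ops[v].ms] + ops[ops[v].ms].p);
        Q[v] = q;
    }
    return Q;
}
// By Theorem 1 below, R[v] + p[v] + Q[v] equals the makespan for every critical operation v.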

3.3. The Critical Path of HFSP and Its Neighborhood Structure

The critical path is the longest path in the disjunctive graph from Obegin to Oend, and the operations included in the critical path are defined as critical operations. This paper establishes two theorems, whose proofs are given as follows:
Theorem 1.
The sum of the head length, tail length, and processing time of any operation on the critical path is equal to the length of the longest path, i.e., $R_{O_{i,j}} + p_{O_{i,j}} + Q_{O_{i,j}} = L(O_{begin}, O_{end})$, where $O_{i,j}$ is a critical operation and $p_{O_{i,j}}$ is the processing time of $O_{i,j}$.
Proof. 
Assume $O_{i,j}$ is any operation on the critical path. The critical path is the longest path in the disjunctive graph from the head node $O_{begin}$ to the tail node $O_{end}$, and its length is $C_{max} = \max\{C_{O_{i,j}}\}$. Since $O_{i,j}$ lies on the critical path, the length of the longest path from the head node to the tail node passing through $O_{i,j}$ is also $C_{max}$. By definition, $R_{O_{i,j}}$ is the longest distance from the head node to $O_{i,j}$ and $Q_{O_{i,j}}$ is the longest distance from $O_{i,j}$ to the tail node, so $R_{O_{i,j}} + p_{O_{i,j}} + Q_{O_{i,j}}$ is the length of the longest path from the head node to the tail node passing through $O_{i,j}$, i.e., $C_{max}$. □
Theorem 2.
From the perspective of disjunctive graphs, if the disjunctive graph length from a non-first operation in any critical path block to the end of the entire sequence is greater than or equal to the disjunctive graph length from the last operation in that critical path block to the end of the entire sequence, then inserting this operation after the last operation for processing will not improve the solution.
Proof. 
As shown in Figure 4, let operations u and v be within a certain critical path block, with u being the first operation in the block and v being the last operation. Let w be an operation between u and v; js(w) is the successor of operation w on the same job; js(v) is the successor of operation v on the same job; mp(w) is the predecessor of operation w on the same machine; and ms(w) is the successor of operation w on the same machine. Let L[u,v] denote the disjunctive graph length from operation u to operation v, and let s and e denote the start and end nodes of the graph. If the processing order of operations is not changed, the total completion time is C = L[s,mp(w)] + L[w,v] + L[js(v),e]. When operation w is inserted after operation v for processing, the total completion time is C* = max{L[s,mp(w)] + L[ms(w),w] + L[js(w),e], L[s,mp(w)] + L[ms(w),v] + L[js(v),e]}. It is easy to see that the disjunctive graph length L[w,v] before the move is equal to the disjunctive graph length L[ms(w),w] after the move, i.e., L[w,v] = L[ms(w),w]. Suppose the disjunctive graph length from the successor of a non-first operation in the critical path block to the end of the entire sequence is greater than or equal to the disjunctive graph length from the successor of the last operation in that block to the end of the entire sequence, i.e., L[js(w),e] ≥ L[js(v),e].
Therefore, C = L[s,mp(w)] + L[w,v] + L[js(v),e] ≤ L[s,mp(w)] + L[ms(w),w] + L[js(w),e], and C* ≥ L[s,mp(w)] + L[ms(w),w] + L[js(w),e]; therefore C* ≥ C. Theorem 2 is thus proven. □

3.4. The Local Search Based on the Critical Path

This section introduces two neighborhood structures, N7 [34] and k-insertion [35], which were proposed to address the flexible job shop scheduling problem (FJSP). These structures are categorized under insertion operations and have demonstrated significant efficacy across various algorithmic approaches. Based on N7 and k-insertion, three critical-path-based neighborhood structures (N7, k-insertion, and k-swap) are applied in a randomized manner. The selected neighborhood structure performs a neighborhood search on the current solution, and the best solution among the neighborhood solutions is selected; if it is better than the current solution, the search proceeds to the next stage and continues. The pseudocode is shown in Algorithm 1.
Algorithm 1. The proposed local search
Input: Current solution π
Output: Improved solution π*
1: π* = π; // Initialize the improved solution with the current solution
2: do {
3:        getCriticalPath(π*); // Get critical path of the current solution
4:        switch (rand(1, 3)) {
5:              case (1): // Randomly select one neighborhood operation
6:                    choose one of {N7, k-insertion, k-swap};
7:                    update π*; // Apply the selected neighborhood operation to update the solution
8:                    break;
9:              case (2): // Randomly select two neighborhood operations
10:                  choose two of {N7, k-insertion, k-swap};
11:                  update π*; // Apply the two neighborhood operations sequentially
12:                  break;
13:            case (3): // Randomly select three neighborhood operations
14:                  choose three of {N7, k-insertion, k-swap};
15:                  update π*; // Apply all three neighborhood operations sequentially
16:                  break;
17:        }
18: } while (!terminationCondition); // Repeat until the termination condition is met
19: return π*;

3.5. The Two-Step Insertion Neighborhood Operation

Both N7 and k-insertion apply a one-step insertion operation to the current solution to produce a neighboring solution, and then select the best solution for the next insertion step. However, due to the predefined decoding rules, this one-step insertion operation restricts the potential for a suboptimal solution to evolve into a superior one after the next insertion. To overcome this limitation, this paper extends the one-step operation to a two-step operation, which accepts initially inferior neighborhood solutions that may improve in the second step. A novel neighborhood structure incorporating this two-step neighborhood operation is thus designed.
As depicted in Figure 5, the job swap between machines within the same stage exemplifies the two-step neighborhood operation. It is crucial to emphasize that the stage decoding rule of the HFSP ensures that movement operations do not result in infeasible solutions. The newly proposed neighborhood structure maintains the feasibility of movement operations for the HFSP.
A 5 × 3 instance is given to demonstrate the efficacy of the proposed neighborhood structure. Table 2 gives the processing information of the 5 × 3 instance. A feasible solution to this instance is optimized by the two-step neighborhood operation. The initial job sequence for the first stage is denoted as {J3,1, J5,1, J2,1, J4,1, J1,1}. The Gantt chart is generated by the decoding rules, as shown in the left diagram of Figure 6. A new solution is obtained by exchanging J4,1 on machine M4 with J5,1 on machine M3, and the resulting Gantt chart is depicted in the right diagram of Figure 6. A comparison shows that the objective value has improved from 26 to 22, demonstrating the optimization achieved through the two-step neighborhood operation.
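Building on the example above, the following sketch captures the two-step acceptance idea: a first move is explored even if it temporarily worsens the makespan, as long as the best follow-up move yields an overall improvement. The Solution and Move placeholder types and the applyMove/evaluate callables are assumptions standing in for the paper's disjunctive-graph moves and its rapid evaluation.

#include <functional>
#include <utility>
#include <vector>

// Illustrative two-step move acceptance. Solution and Move are placeholder types
// (a permutation and an index pair here), not the paper's structures.
using Solution = std::vector<int>;
using Move = std::pair<int, int>;

Solution twoStepSearch(const Solution& current,
                       const std::function<std::vector<Move>(const Solution&)>& candidateMoves,
                       const std::function<Solution(const Solution&, const Move&)>& applyMove,
                       const std::function<int(const Solution&)>& evaluate) {
    Solution best = current;
    int bestObj = evaluate(current);
    for (const Move& m1 : candidateMoves(current)) {
        Solution s1 = applyMove(current, m1);      // first step: may temporarily worsen C_max
        for (const Move& m2 : candidateMoves(s1)) {
            Solution s2 = applyMove(s1, m2);       // second step: try to recover an improvement
            int obj = evaluate(s2);
            if (obj < bestObj) { bestObj = obj; best = s2; }
        }
    }
    return best;                                   // best two-step neighbor, or the original solution
}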

3.6. The Destruction and Construction

The process of destruction can be categorized into two primary types: single-point and multi-point destruction. Single-point destruction involves the removal of a randomly selected job sequence position within the encoding vector. In contrast, multi-point destruction entails the random selection of multiple, non-repeating positions in the encoding vector for deletion. Following the application of either destruction method, a destroyed job sequence and a list of all deleted jobs are created. Subsequently, a construction operation is executed through greedy insertion, where the deleted jobs are individually inserted into the optimal position within the current destroyed job sequence.
Single-point destruction and construction speeds up the search, but its limited perturbation reduces the probability of finding the optimal solution, while multi-point destruction increases the construction search time when the number of deleted jobs is too high. Therefore, the proposed enhanced IG algorithm selects the number of deleted jobs dynamically within a certain range. The operation is shown in Algorithm 2.
Algorithm 2. Destruction and construction
Input: Current solution π, destruction size x
Output: Reconstructed solution π*
1: for i = 0 to x do // Destruction phase: remove x jobs
2:        place = rand(0, length of Seq); // Randomly select removal position
3:        Del_Seq -> insert(Seq_place); // Add the selected job to the deletion list
4:        Seq -> erase(Seq_place); // Remove the job from the current sequence
5: end for
6: π*->Seq = Seq; // Update the current sequence to the post-destruction sequence
7: for i = 0 to x do // Construction phase: reinsert deleted jobs
8:        job = Del_Seq_i; // Get the i-th deleted job
9:        π*1->Seq = insert job at the first position of π*->Seq; // Insertion at the beginning as baseline
10:      for j = 1 to (length of π*->Seq) do // Try all possible insertion positions
11:             π*2->Seq = insert job at the j-th position of π*->Seq; // Insert at the j-th position
12:             if (makespan(π*2) < makespan(π*1)) // Compare makespan values
13:                   π*1 = π*2; // Keep the better solution
14:             end if
15:      end for
16:      π* = π*1; // Confirm the best insertion position for the current job
17: end for
18: π = π*; // Update the final solution
19: return π; // Return the reconstructed solution

3.7. The Update Mechanism of Poor-Quality Solutions

Since the IG algorithm is a greedy local search algorithm, the proposed enhanced IG algorithm introduces a simulated annealing strategy to prevent the search from falling into a local optimum: some poor-quality solutions are accepted with a certain probability. The acceptance rule depends on the probability value g(x), which is calculated as follows:
$g(x) = a \cdot \exp\left(\frac{f_1 - f_2}{T}\right)$
The fitness values of the current solution and the neighborhood solution are denoted by $f_1$ and $f_2$, respectively, and T is the temperature parameter; the fitness value is calculated by Equation (1). If a random number drawn uniformly from [0, 1] is less than $g(x)$, then the current solution is replaced with the neighborhood solution; otherwise, it is not replaced.
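A minimal C++ sketch of this acceptance rule, assuming f1 and f2 are the makespans of the current and neighborhood solutions and a and T are the coefficient and temperature from the formula above; the function name is illustrative.

#include <cmath>
#include <random>

// Acceptance sketch for the update mechanism above (names are assumptions).
// f1, f2: makespans of the current and neighborhood solutions; T: temperature;
// a: the coefficient appearing in g(x).
bool acceptNeighbor(double f1, double f2, double T, double a, std::mt19937& rng) {
    if (f2 <= f1) return true;                     // improving (or equal) moves are always kept
    const double g = a * std::exp((f1 - f2) / T);  // g(x) from the formula above; < 1 when f2 > f1
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < g;                             // accept a worse neighbor with probability g(x)
}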

3.8. A Novel Rapid Evaluation Method

The quality of a neighborhood solution is ascertained through re-decoding the new scheduling scheme. However, a neighborhood search typically modifies only the local job sequence on a few machines, leaving the majority of job sequences unchanged. Conducting numerous neighborhood searches with full decoding can be computationally expensive. While there is a fast approximate evaluation method for the FJSP that can effectively assess the quality of neighborhood solutions, it does not guarantee 100% accuracy and is not directly applicable to the proposed two-step neighborhood structure of the critical path. Consequently, this paper introduces a rapid evaluation method for neighborhood solutions, leveraging the stage-based decoding rule of the HFSP.
The rapid evaluation method divides the scheduling into three parts. If the current neighborhood operation is applied at stage a, then the stages before a form the first part, stage a itself is the second part, and the stages after a form the third part. Before the neighborhood operation, the start time and the tail time (the longest remaining time from the completion of an operation to the end of the schedule) of each operation O are denoted by s and t, respectively; after the neighborhood operation, they are denoted by s′ and t′. The machine predecessor and machine successor of an operation are expressed as MP and MS, and $P_{O_{i,j}}$ denotes the processing time of operation $O_{i,j}$ (job j at stage i). After performing a neighborhood move, only the job sequence on some machines in the second part is adjusted, while the job sequences in the first and third parts remain unchanged. Consequently, the first part always satisfies s′ = s, and the third part always satisfies t′ = t.
There is always a critical path through operation $O_{a,j}$ from the beginning to the end of the schedule, so
$makespan = \max_{j \in J} \{ s'_{O_{a,j}} + P_{O_{a,j}} + t'_{O_{a,j}} \}$
where
$s'_{O_{a,j}} = \max\{ s_{O_{a-1,j}} + P_{O_{a-1,j}},\ s'_{MP(O_{a,j})} + P_{MP(O_{a,j})} \}$
$t'_{O_{a,j}} = \max\{ t_{O_{a+1,j}} + P_{O_{a+1,j}},\ t'_{MS(O_{a,j})} + P_{MS(O_{a,j})} \}$
with MP and MS taken after the move. And then,
$makespan = \max_{j \in J} \big\{ P_{O_{a,j}} + \max\{ s_{O_{a-1,j}} + P_{O_{a-1,j}},\ s'_{MP(O_{a,j})} + P_{MP(O_{a,j})} \} + \max\{ t_{O_{a+1,j}} + P_{O_{a+1,j}},\ t'_{MS(O_{a,j})} + P_{MS(O_{a,j})} \} \big\}$
When stage a is the first stage, the job-predecessor term is taken as zero (s = 0); when stage a is the last stage S, the job-successor term is taken as zero (t = 0). This evaluation obtains the longest path from the beginning to the end of processing; in other words, the proposed method yields exactly the objective value that full decoding would produce. The conventional evaluation method requires recalculating all jobs across S stages, resulting in a time complexity of O(n×S). In contrast, the proposed rapid evaluation method divides the scheduling process into three parts: the first part remains unchanged and requires no recalculation; the second part involves only the re-evaluation of jobs affected by the neighborhood operation at stage a; and the third part is updated recursively based on the results of the second part. This approach reduces the overall computational complexity to O(2n), and it can be further reduced when applied to single-machine job sequences. The proposed method maintains 100% accuracy by preserving the complete temporal dependencies in the schedule while avoiding redundant computations. By adopting this stage-wise local update strategy, the computational efficiency is significantly improved. As the number of stages increases, the acceleration ratio can reach approximately S/2, enabling the evaluation of a larger number of neighborhood solutions within a limited time and facilitating more effective exploration of promising search regions. The detailed procedure of this method is presented in Algorithm 3.
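As a back-of-the-envelope check of the stated acceleration ratio (an illustrative estimate, not an analysis from the paper), dividing the two complexity estimates gives
$\frac{O(n \times S)}{O(2n)} \approx \frac{nS}{2n} = \frac{S}{2},$
so for an instance with S = 20 stages, one rapid evaluation is roughly ten times cheaper than one full decoding.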
Algorithm 3. Rapid Evaluation Method
Input: Solution π, neighborhood move, changed stage a
Output: Updated makespan
1: affected_ops = GetAffectedOperations(move, stage_a); // Identify operations in stage a
2: s_new = π->start_times; // Copy current start times
3: t_new = π->completion_times; // Copy current completion times
4: // Part 1: stages 0 to (a-1) remain unchanged (s = s)
5: // Part 3: stages (a+1) to S remain unchanged (t = t)
6: for each operation O[i,a] in affected_ops do // Part 2: update stage a only
7:        job_pred_time = t_new[i][a − 1] + P[i][a − 1]; // Job predecessor constraint
8:        machine_pred = GetMachinePredecessor(O[i,a], move); // Get machine predecessor
9:        if (machine_pred ≠ NULL)
10:           machine_pred_time = t_new[machine_pred] + P[machine_pred];
11:       else machine_pred_time = 0;
12:       end if
13:       s_new[i][a] = max(job_pred_time, machine_pred_time); // Equation (5)
14:       t_new[i][a] = s_new[i][a] + P[i][a]; // Update completion time (6)
15: end for
16: new_makespan = 0; // Initialize makespan calculation
17: for each job j ∈ J do // Apply Equation (7)
18:        path_length = s_new[j][a] + P[j][a] + max(t_new[j][a+1] + P[j][a+1], t_new[machine_successor] + P[machine_successor]);
19:        new_makespan = max(new_makespan, path_length); // Update maximum
20: end for
21: return new_makespan;

3.9. Procedure of the Graph Knowledge-Enhanced IG Algorithm

The essence of the enhanced IG algorithm lies in conducting a methodical and sustained exploration of the neighborhood solutions within the solution space. The proposed IG algorithm is structured around four distinct phases: initialization, destruction, construction, and local search. The process begins with the generation of an initial solution, which then undergoes a series of searches through the destruction and construction phases, steering towards more promising regions within the solution space. To balance exploration and exploitation during the search process, the total runtime is divided into two phases by introducing a strategy-switching threshold at 30% of the total StopTime. In the first phase (0–30% of runtime), the algorithm emphasizes global exploration using destruction and reconstruction operations to enhance solution diversity. After this point, the search intensifies through critical-path-based neighborhood structures, focusing on local exploitation and fine-tuning of promising areas. The choice of 0.3 × StopTime follows a widely used empirical rule in metaheuristic design, where a 30/70 runtime allocation is commonly adopted to achieve a practical trade-off between diversification and intensification. The destruction and construction process introduces a disturbance to the current solution. As the algorithm approaches convergence, local search is initiated during the iterations to broaden the search space and facilitate further enhancements. These phases are iteratively executed until the termination criterion is satisfied. The workflow of the enhanced IG algorithm is depicted in Figure 7.
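The following C++ sketch outlines this two-phase IG loop; the Solution type and the five callables (initial solution generation, destruction/construction, critical-path local search, makespan evaluation, and acceptance) are placeholders for the operators of Sections 3.1 to 3.8, and only the 0.3 × StopTime switching logic is taken from the text above.

#include <chrono>
#include <functional>

// Two-phase iterated greedy loop sketch (illustrative, not the paper's implementation).
template <typename Solution>
Solution runEnhancedIG(double stopTimeSeconds,
                       std::function<Solution()> initialSolution,
                       std::function<Solution(const Solution&)> destructConstruct,
                       std::function<Solution(const Solution&)> localSearch,
                       std::function<int(const Solution&)> makespan,
                       std::function<bool(int, int)> accept /* accept(f_current, f_candidate) */) {
    const auto start = std::chrono::steady_clock::now();
    auto elapsed = [&]() {
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
    };
    Solution current = initialSolution();   // dual-direction decoding with perturbation (Section 3.1)
    Solution best = current;
    while (elapsed() < stopTimeSeconds) {
        // Phase 1 (first ~30% of runtime): destruction/construction for exploration.
        // Phase 2 (remaining runtime): critical-path-based local search for exploitation.
        Solution candidate = (elapsed() < 0.3 * stopTimeSeconds)
                                 ? destructConstruct(current)
                                 : localSearch(current);
        if (makespan(candidate) < makespan(best)) best = candidate;              // keep the global best
        if (accept(makespan(current), makespan(candidate))) current = candidate; // SA-style acceptance (Section 3.7)
    }
    return best;
}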

4. Experimental Study

To test the proposed enhanced IG algorithm, three groups of experiments are presented, covering the Carlier 77 instances [36], the Liao 10 instances [37], and 240 small-scale instances [8]. These benchmarks and results were adopted from other published papers. Before the three experiments, a parameter experiment for the enhanced IG algorithm was carried out. The proposed enhanced IG algorithm is implemented in C++ (Visual Studio 2019) on a computer with an Intel(R) Core i7 processor, 3.5 GHz, and 32 GB RAM, and tested on the three types of HFSP benchmarks. To evaluate the quality of the solutions, the relative percentage deviation (DEV) is adopted for each instance of the HFSP.
$DEV = \frac{makespan - LB}{LB} \times 100\%$
where LB denotes the lower bound of each instance.
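As a purely illustrative example with hypothetical numbers (not taken from the benchmarks), a solution with makespan 102 on an instance whose lower bound is LB = 100 gives
$DEV = \frac{102 - 100}{100} \times 100\% = 2\%,$
and DEV = 0% whenever the lower bound is attained.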

4.1. Evaluation of the Rapid Evaluation Methods

To test the effectiveness of the rapid evaluation method, comparative experiments compare the rapid evaluation method with the full decoding method (i.e., the enhanced IG algorithm without rapid evaluation) on 24 small-scale and 24 large-scale instances. A total of 48 different instances are generated with n ∈ {10, 20, 30, 40, 60, 80, 100, 120} and s ∈ {5, 10, 15, 20, 30, 40}. The number of machines per stage and the processing times are uniformly distributed over [1, 3] and [1, 30], respectively. Each instance is run five times for forward decoding and backward decoding independently. The proposed evaluation strategy and the full decoding method are run 20 times for each instance, and each run performs 100,000 evaluations. The full decoding method is compared against the most time-consuming case of the rapid evaluation, in which all calculations fall in the second part. Figure 8a,b show the average of 10 run times for rapid evaluation and full decoding, where the horizontal and vertical axes indicate the number of stages and the time required for one decoding, respectively.
The experimental results show that the rapid evaluation is computationally faster than the full decoding method. When the number of jobs is constant, the speed of the rapid evaluation is not affected by changes in the number of stages, while the running time of the full decoding method increases as the number of stages increases. In Figure 8c, the horizontal axis represents the number of processing stages, and the vertical axis represents the ratio of the full decoding time to the rapid evaluation time. As the number of stages increases, the computational time for rapid evaluation becomes much smaller than that for full decoding, and the proposed method becomes increasingly effective.

4.2. Experiment 1—Carlier Benchmark

Experiment 1 contains 77 instances adopted from the well-known Carlier benchmark, in which the problem size and number of each instance can be identified from its name; for example, "j10c5a2" means 10 jobs, 5 stages, and the second instance. The proposed enhanced IG algorithm is compared with algorithms that have achieved good results in the literature: the branch-and-bound (B&B) algorithm [38], the artificial immune system (AIS) algorithm [39], the particle swarm optimization (PSO) algorithm [37], and LABC [40]. These algorithms used 1600 s as the StopTime, whereas the StopTime of the enhanced IG algorithm is reduced to 60 s to limit computational cost, and the optimal solution is recorded over 20 independent runs. In Table 3, the experimental results show that the enhanced IG algorithm obtains optimal solutions for the given 77 instances, such as the good results of "j10c10c" (114), "j10c10a4" (119), and "j10c10e6" (105). In terms of the deviation of the obtained values, the deviation between the proposed enhanced IG algorithm and the lower bound is merely 1.49%, whereas the lowest deviation among the four compared algorithms is 1.54%. In terms of the running time to obtain the optimal solution, the average time of the proposed algorithm is only 0.08 for all 77 instances, while the lowest average time of the other compared algorithms is 0.33. The experimental results show that the proposed enhanced IG algorithm has better search speed and efficiency.

4.3. Experiment 2—Liao Benchmark

For the well-known Liao benchmark, the termination condition is 1600 s, and each instance is run 10 times to record the optimal solution and the time taken to obtain it. Table 4 compares the experimental results of the proposed enhanced IG algorithm with the current superior algorithms: PSO [37], LABC [40], IBBO [41], and IGT [33]. "MINC" and "MINT" in the table indicate the minimum makespan over all results and the corresponding run time. Among the 10 instances, the proposed enhanced IG algorithm and IGT obtain the best solution for 9 instances, LABC for 2 instances, while PSO and IBBO obtain only 1 instance each. The proposed enhanced IG algorithm obtains a new optimal solution (571) for "j30c5e10". Regarding the deviation of the values, the deviation between the proposed enhanced IG algorithm and the lower bound is merely 0.02%, whereas the lowest deviation among the four compared algorithms is 0.04%. This verifies that the introduced critical path can improve the quality of the algorithm.

4.4. Experiment 3—Jose Benchmark

The 240 instances are adopted from the Jose benchmark [8], where n ∈ {10, 15, 20, 25, 30, 35} and s ∈ {5, 10, 15, 20}. The name of each instance indicates the scale of the problem; notably, "10 × 5 × 1" means the first problem with 10 jobs and 5 stages. Fernandez and Framinan [8] used an IG algorithm to solve these 240 instances and set upper bounds with StopTime = 50 × (number of jobs^3) × (number of stages). In this paper, the termination time is reduced by a factor of 25 (StopTime = 2 × (number of jobs^3) × (number of stages)). The optimal solution and its running time are recorded over 10 independent runs. The results are compared with the given upper bounds, and the related comparison results are shown in Table 5. The proposed enhanced IG algorithm obtains 182 new upper bounds and matches the given upper bounds for another 42 instances. The computational results show that the proposed strategies and IG algorithm can effectively improve the solution quality compared to the given IG algorithm [8].

5. Conclusions and Future Work

This paper presents a graph knowledge-enhanced IG algorithm that integrates critical path analysis and a rapid evaluation technique to address the hybrid flowshop scheduling problem (HFSP). An initial solution generation method utilizing dual permutation-based encoding and a decoding technique with a novel perturbation strategy is used to broaden the search space. The integration of critical path analysis provides the algorithm with a strategic advantage in identifying and prioritizing operations that have the most significant impact on the overall schedule. Coupled with the rapid evaluation technique, the algorithm can quickly discard infeasible or suboptimal solutions, thus accelerating the convergence towards an optimal or near-optimal schedule. Experimental evaluation demonstrates that the proposed algorithm surpasses the performance of existing algorithms when applied to the HFSP, and the proposed graph knowledge-enhanced IG algorithm finds new best solutions for 183 hard instances.
This study still has several limitations. The proposed algorithm involves multiple parameters that require manual tuning and lacks adaptive control mechanisms. As the problem scale increases, memory consumption grows significantly, and its effectiveness on ultra-large-scale instances remains underexplored. Moreover, the method is specifically designed for the hybrid flowshop scheduling problem, and its generalizability to other domains has not yet been examined. The objective function focuses solely on makespan minimization, without considering multi-objective scenarios such as energy consumption or tardiness penalties. Additionally, the model assumes a static scheduling environment, ignoring dynamic job arrivals, machine breakdowns, and other real-world uncertainties. Integration with manufacturing execution systems (MES) also presents certain practical challenges.
Future research could be extended in several directions, including automated algorithmic framework construction, incorporation of self-learning mechanisms, and the development of hybrid neighborhood structures driven by both knowledge and data. Furthermore, the proposed method has the potential for adaptation to other complex scheduling and optimization problems, such as flexible job shop scheduling, cloud task scheduling, and logistics distribution optimization.

Author Contributions

Conceptualization, Y.W.; Methodology, L.Z.; Software, Z.Z.; Formal analysis, Y.W.; Writing—original draft, Y.L.; Writing—review & editing, B.Z. and K.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Hubei Provincial Natural Science Foundation Youth Project (Grant No. 2023AFB088); the Education Department's Social Science Research Youth Project (Grant No. 24Q121); the National Science Foundation of China (Grant No. 62303204); the National Natural Science Foundation of China (Grant Nos. 52475524 and 62303358); and the Natural Science Foundation of Shandong Province (Grant Nos. ZR2024MF054 and ZR2021QF036).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Michaels, S. A Solution to a Special Class of Flow Shop Scheduling Problems. In Proceedings of the International Conference on Flow Shop Scheduling; Springer: Berlin/Heidelberg, Germany, 1973; pp. 83–91. [Google Scholar]
  2. Xu, J.Y.; Lin, W.C.; Chang, Y.W.; Chung, Y.H.; Chen, J.H.; Wu, C.C. A Two-Machine Learning Date Flow-Shop Scheduling Problem with Heuristics and Population-Based GA to Minimize the Makespan. Mathematics 2023, 11, 4060. [Google Scholar] [CrossRef]
  3. Zhang, B.; Meng, L.L.; Lu, C.; Li, J.Q. Real-time data-driven automatic design of multi-objective evolutionary algorithm: A case study on production scheduling. Appl. Soft Comput. 2023, 138, 110187. [Google Scholar] [CrossRef]
  4. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W. H. Freeman and Company: New York, NY, USA, 1979. [Google Scholar]
  5. Fan, J.; Li, Y.L.; Xie, J.; Zhang, C.J.; Shen, W.M.; Gao, L. A hybrid evolutionary algorithm using two solution representations for hybrid flow-shop scheduling problem. IEEE Trans. Cybern. 2023, 53, 1752–1764. [Google Scholar] [CrossRef] [PubMed]
  6. Fernández-Viagas, V.; Pérez-González, P.; Framinan, J.M. Efficiency of the Solution Representations for the Hybrid Flow Shop Scheduling Problem with Makespan Objective. Comput. Oper. Res. 2019, 109, 77–88. [Google Scholar] [CrossRef]
  7. Ruiz, R.; Vázquez-Rodríguez, J.A. The Hybrid Flow Shop Scheduling Problem. Eur. J. Oper. Res. 2010, 205, 1–18. [Google Scholar] [CrossRef]
  8. Victor, F.V.; Framinan, J.M. Design of a Testbed for Hybrid Flow Shop Scheduling with Identical Machines. Comput. Ind. Eng. 2020, 141, 106288. [Google Scholar] [CrossRef]
  9. Rubén, R.; Stützle, T. A Simple and Effective Iterated Greedy Algorithm for the Permutation Flowshop Scheduling Problem. Eur. J. Oper. Res. 2007, 177, 2033–2049. [Google Scholar] [CrossRef]
  10. Shao, W.S.; Shao, Z.S.; Pi, D.C. Modeling and Multi-Neighborhood Iterated Greedy Algorithm for Distributed Hybrid Flow Shop Scheduling Problem. Knowl.-Based Syst. 2020, 194, 105527. [Google Scholar] [CrossRef]
  11. He, L.M.; Sun, S.J.; Luo, R.Z. A Hybrid Two-Stage Flowshop Scheduling Problem. Asia-Pac. J. Oper. Res. 2007, 24, 45–56. [Google Scholar] [CrossRef]
  12. Sun, Z.W.; Lv, D.Y.; Wei, C.M.; Wang, J.B. Flow Shop Scheduling with Shortening Jobs for Makespan Minimization. Mathematics 2025, 13, 363. [Google Scholar] [CrossRef]
  13. Paternina, A.; Carlos, D.; Jairo, R.M.; Milton, J.; Maria, C.H. Scheduling Jobs on a K-Stage Flexible Flow-Shop. Ann. Oper. Res. 2007, 164, 29–40. [Google Scholar] [CrossRef]
  14. Lin, S.W.; Cheng, C.Y.; Pourya, P.; Ying, K.C.; Lee, C.H. New Benchmark Algorithm for Hybrid Flowshop Scheduling with Identical Machines. Expert Syst. Appl. 2021, 183, 115422. [Google Scholar] [CrossRef]
  15. Huang, Y.P.; Deng, L.B.; Wang, J.L.; Qiu, W.W.; Liu, J.F. Modelling and Solution for Hybrid Flow-Shop Scheduling Problem by Two-Stage Stochastic Programming. Expert Syst. Appl. 2023, 233, 120846. [Google Scholar] [CrossRef]
  16. Safari, G.; Ashkan, H.; Hiva, M.; Mohammad, K. Competitive Scheduling in a Hybrid Flow Shop Problem Using Multi-Leader-Multi-Follower Game: A Case Study from Iran. Expert Syst. Appl. 2022, 195, 116584. [Google Scholar] [CrossRef]
  17. Lin, C.C.; Liu, W.Y.; Chen, Y.H. Considering Stockers in Reentrant Hybrid Flow Shop Scheduling with Limited Buffer Capacity. Comput. Ind. Eng. 2020, 139, 106154. [Google Scholar] [CrossRef]
  18. Ozsoydan, F.B.; Müjgan, S. Iterated Greedy Algorithms Enhanced by Hyper-Heuristic Based Learning for Hybrid Flexible Flowshop Scheduling Problem with Sequence Dependent Setup Times: A Case Study at a Manufacturing Plant. Comput. Oper. Res. 2021, 125, 105044. [Google Scholar] [CrossRef]
  19. Luan, F.; Cai, Z.Y.; Wu, S.Q.; Liu, S.Q.; He, Y.X. Optimizing the Low-Carbon Flexible Job Shop Scheduling Problem with Discrete Whale Optimization Algorithm. Mathematics 2019, 7, 688. [Google Scholar] [CrossRef]
  20. Ying, K.C.; Lin, S.W. Minimizing Makespan for the Distributed Hybrid Flowshop Scheduling Problem with Multiprocessor Task. Expert Syst. Appl. 2018, 92, 132–141. [Google Scholar] [CrossRef]
  21. Wang, J.J.; Wang, L. An Iterated Greedy Algorithm for Distributed Hybrid Flowshop Scheduling Problem with Total Tardiness Minimization. In Proceedings of the 2019 IEEE 15th International Conference on Automation Science and Engineering, Vancouver, BC, Canada, 22–26 August 2019; pp. 350–355. [Google Scholar]
  22. Cui, H.H.; Li, X.Y.; Gao, L. An Improved Multi-Population Genetic Algorithm with a Greedy Job Insertion Inter-Factory Neighborhood Structure for Distributed Heterogeneous Hybrid Flow Shop Scheduling Problem. Expert Syst. Appl. 2023, 222, 119805. [Google Scholar] [CrossRef]
  23. Li, Y.L.; Li, X.Y.; Gao, L.; Zhang, B.; Pan, Q.K.; Tasgetiren, M.F. A Discrete Artificial Bee Colony Algorithm for Distributed Hybrid Flowshop Scheduling Problem with Sequence-Dependent Setup Times. Int. J. Prod. Res. 2020, 59, 3880–3899. [Google Scholar] [CrossRef]
  24. Li, M.; Wang, G.G.; Yu, H.L. Sorting-Based Discrete Artificial Bee Colony Algorithm for Solving Fuzzy Hybrid Flow Shop Green Scheduling Problem. Mathematics 2021, 9, 2250. [Google Scholar] [CrossRef]
  25. Zhang, B.; Lu, C.; Meng, L.L.; Han, Y.Y.; Sang, H.Y.; Jiang, X.C. Reconfigurable distributed flowshop group scheduling with a nested variable neighborhood descent algorithm. Expert Syst. Appl. 2023, 217, 119548. [Google Scholar] [CrossRef]
  26. Jia, Y.H.; Qi, Y.; Wang, H.F. Q-Learning Driven Multi-Population Memetic Algorithm for Distributed Three-Stage Assembly Hybrid Flow Shop Scheduling with Flexible Preventive Maintenance. Expert Syst. Appl. 2023, 232, 120837. [Google Scholar] [CrossRef]
  27. Li, L.K.; Fu, X.J.; Zhen, H.L.; Yuan, M.X.; Wang, J.; Lu, J.W.; Tong, X.L.; Zeng, J.; Schnieders, D. Bilevel Learning for Large-Scale Flexible Flow Shop Scheduling. Comput. Ind. Eng. 2022, 168, 108140. [Google Scholar] [CrossRef]
  28. Wang, Y.; Han, Y.; Li, H.; Li, J.; Gao, K.; Liu, Y. Theoretical Analysis and Implementation of Mandatory Operations-Based Accelerated Search in Graph Space for Hybrid Flow Shop Scheduling. Expert Syst. Appl. 2024, 257, 125026. [Google Scholar] [CrossRef]
  29. Liu, Y.; Fan, J.; Shen, W. A Deep Reinforcement Learning Approach with Graph Attention Network and Multi-Signal Differential Reward for Dynamic Hybrid Flow Shop Scheduling Problem. J. Manuf. Syst. 2025, 80, 643–661. [Google Scholar] [CrossRef]
  30. Wan, L.; Fu, L.; Li, C.; Li, K. An Effective Multi-Agent-Based Graph Reinforcement Learning Method for Solving Flexible Job Shop Scheduling Problem. Eng. Appl. Artif. Intell. 2025, 139, 109557. [Google Scholar] [CrossRef]
  31. Huang, D.; Zhao, H.; Tian, W.; Chen, K. A Deep Reinforcement Learning Method Based on a Multiexpert Graph Neural Network for Flexible Job Shop Scheduling. Comput. Ind. Eng. 2025, 200, 110768. [Google Scholar] [CrossRef]
  32. Pan, Q.K.; Ruiz, R.; Pedro, A.F. Iterated Search Methods for Earliness and Tardiness Minimization in Hybrid Flowshops with Due Windows. Comput. Oper. Res. 2017, 80, 50–60. [Google Scholar] [CrossRef]
  33. Öztop, H.; Tasgetiren, M.F.; Deniz, T.E.; Pan, Q.K. Metaheuristic Algorithms for the Hybrid Flowshop Scheduling Problem. Comput. Oper. Res. 2019, 111, 177–196. [Google Scholar] [CrossRef]
  34. Zhang, C.Y.; Li, P.G.; Guan, Z.L.; Rao, Y.Q. A Tabu Search Algorithm with a New Neighborhood Structure for the Job Shop Scheduling Problem. Comput. Oper. Res. 2007, 34, 3229–3242. [Google Scholar] [CrossRef]
  35. Mastrolilli, M.; Luca, M.G. Effective Neighbourhood Functions for the Flexible Job Shop Problem. J. Sched. 2000, 3, 3–20. [Google Scholar] [CrossRef]
  36. Carlier, J.; Néron, E. An Exact Method for Solving the Multi-Processor Flow-Shop. RAIRO-Oper. Res. 2000, 34, 1–25. [Google Scholar] [CrossRef]
  37. Liao, C.J.; Evi, T.; Chung, T.P. An Approach Using Particle Swarm Optimization and Bottleneck Heuristic to Solve Hybrid Flow Shop Scheduling Problem. Appl. Soft Comput. 2012, 12, 1755–1764. [Google Scholar] [CrossRef]
  38. Neron, E.; Baptiste, P.; Gupta, J.N.D. Solving hybrid flow shop problem using energetic reasoning and global operations. Omega-Int. J. Manag. Sci. 2001, 29, 501–511. [Google Scholar] [CrossRef]
  39. Engin, O.; Doyen, A. A new approach to solve hybrid flow shop scheduling problems by artificial immune system. Future Gener. Comput. Syst. 2004, 20, 1083–1095. [Google Scholar] [CrossRef]
  40. Li, J.Q.; Han, Y.Q. A hybrid multi-objective artificial bee colony algorithm for flexible task scheduling problems in cloud computing system. Clust. Comput.-J. Netw. Softw. Tools Appl. 2019, 23, 2483–2499. [Google Scholar] [CrossRef]
  41. Li, Z.P.; Gu, X.S. Improved Biogeography-based Optimization Algorithm used in Solving Hybrid Flow Shop Scheduling Problem. CIESC J. 2016, 67, 751–757. [Google Scholar]
Figure 1. Schematic diagram of the welding process flow.
Figure 2. A schematic diagram of a hybrid flow shop with N jobs and M stages.
Figure 3. Two ways of presenting a solution for the 5 × 3 scale instance. (a) The Gantt chart; (b) The disjunctive graph.
Figure 4. Diagram illustrating the critical path involving key blocks.
Figure 5. The proposed two-step neighborhood operation.
Figure 6. Makespan with two-step insertion neighborhood operation.
Figure 7. Flowchart of the graph knowledge-enhanced IG algorithm.
Figure 8. (a) The results of the rapid evaluation; (b) The results of the full decoding; (c) The ratio of the full decoding to rapid evaluation.
Table 1. Literature Summary.
Reference | Problem Type | Domain Knowledge | Memory Structure | Fast Evaluation | Main Algorithm
Sun et al. [12] | Two-machine flow shop | Dominance rules | None | None | Branch-and-bound
Lin et al. [14] | HFSP | Chaos annealing | None | None | Improved simulated annealing
Huang et al. [15] | HFSP-TSP | Scenario tree | None | None | Pointer-based discrete DE
Safari et al. [16] | Competitive HFSP | Game theory | None | None | Co-evolutionary genetic algorithm
Lin et al. [17] | Reentrant HFSP | None | Centralized buffer | None | Hybrid harmony search GA
Luan et al. [19] | Low-carbon FJSP | None | None | None | Discrete whale optimization
Ying et al. [20] | Distributed HFSP | Cocktail decoding | None | None | Self-tuning iterated greedy
Cui et al. [22] | Distributed heterogeneous HFSP | Greedy insertion neighborhood | Multi-population | None | Improved multi-population GA
Li et al. [23] | Distributed HFSP | Two-level encoding | None | None | Discrete artificial bee colony
Jia et al. [26] | Distributed assembly HFSP | None | Q-learning memory | None | Q-learning multi-population memetic
Li et al. [27] | Large-scale FFSP | Sliding window sampling | Bilevel learning | None | Double DQN + Graph pointer network
Wang et al. [28] | HFSP | Critical path | None | Yes | Mandatory operations accelerated IG
Liu et al. [29] | Dynamic HFSP | Graph attention | Graph memory | None | PPO-GAT algorithm
Wan et al. [30] | FJSP | Graph attention | Multi-agent memory | None | Multi-agent graph RL
Huang et al. [31] | FJSP | Mixture of experts | Multiexpert structure | None | Multiexpert graph neural network
Pan et al. [32] | HFSP | Idle time insertion | None | None | Iterated greedy + local search
Öztop et al. [33] | HFSP | NEH heuristic | None | None | Iterated greedy variants
This paper | HFSP | Critical path, disjunctive graph, dual decoding | Graph knowledge | Yes | Graph knowledge-enhanced IG
Table 2. Processing time of the 5 × 3 instance.
Stage | Machines | Job1 | Job2 | Job3 | Job4 | Job5
1 | 3 | 2 | 4 | 5 | 1 | 6
2 | 2 | 7 | 8 | 3 | 6 | 3
3 | 3 | 5 | 9 | 4 | 7 | 2
Table 3. Computational results of the Carlier benchmark.
Instance | LB | B&B (MINC, MINT) | AIS (MINC, MINT) | PSO (MINC, MINT) | LABC (MINC, MINT) | Enhanced IG (MINC, MINT) | %Deviation (B&B, AIS, PSO, LABC, EIG)
j10c5a2888813881880.0028808800.000.000.000.000.00
j10c5a3117117711711170.0021170.0211700.000.000.000.000.00
j10c5a4121121612111210.003121012100.000.000.000.000.00
j10c5a51221221112211220.0131220.0212200.000.000.000.000.00
j10c5a6110110611041100.1741100.121100.0010.000.000.000.000.00
j10c5b11301301313011300.003130013000.000.000.000.000.00
j10c5b2107107610711070.003107010700.000.000.000.000.00
j10c5b3109109910911090.012109010900.000.000.000.000.00
j10c5b4122122612221220.025122012200.000.000.000.000.00
j10c5b5153153615311530.001153015300.000.000.000.000.00
j10c5b61151151111511150.001115011500.000.000.000.000.00
j10c5c16868286832680.332680.08680.0310.000.000.000.000.00
j10c5c2747419744740.535742.8740.0450.000.000.000.000.00
j10c5c3717124072-7136.99772-710.1820.001.410.001.410.00
j10c5c466661017663660.215660.15660.0220.000.000.000.000.00
j10c5c57878427814780.122780.08780.010.000.000.000.000.00
j10c5c6696948656912690.405690.37690.0160.000.000.000.000.00
j10c5d166666490665660.185660.1660.0010.000.000.000.000.00
j10c5d2737326177331731.158730.13730.0170.000.000.000.000.00
j10c5d364644816415640.098640.02640.0330.000.000.000.000.00
j10c5d47070493705700.337700.1700.0050.000.000.000.000.00
j10c5d56666393661446660.515660.44660.8670.000.000.000.000.00
j10c5d662621627628620.383620.07620.010.000.000.000.000.00
j10c10a1139139686113911390.0551390.021390.0070.000.000.000.000.00
j10c10a215815841158181580.871580.221580.0090.000.000.000.000.00
j10c10a31481482114811480.0171480.051480.0040.000.000.000.000.00
j10c10a41491495814921490.085149014900.000.000.000.000.00
j10c10a51481482114811480.10214801480.0040.000.000.000.000.00
j10c10a61461463614641460.2391460.031460.010.000.000.000.000.00
j10c10b11631632016311630.013163016300.000.000.000.000.00
j10c10b21571573615711570.22115701570.0040.000.000.000.000.00
j10c10b31691696616911690.014169016900.000.000.000.000.00
j10c10b41591591915911590.021159015900.000.000.000.000.00
j10c10b51651652016511650.037165016500.000.000.000.000.00
j10c10b61651653316511650.056165016500.000.000.000.000.00
j10c10c111312734115-115-115-114-12.391.771.771.770.88
j10c10c2116116-119-117-119-1160.9970.002.590.862.590.00
j10c10c3981331100116-116-116-116-35.7118.3718.3718.3718.37
j10c10c4103135-120-120-120-119-31.0716.5016.5016.5015.53
j10c10c5121145-126-125-125-125-19.834.133.313.313.31
j10c10c697112-106-106-106-105-15.469.289.289.288.25
j15c5a11781781817811780.06178017800.000.000.000.000.00
j15c5a21651653516511650.0051650.0316500.000.000.000.000.00
j15c5a31301303413011300.0061300.0213000.000.000.000.000.00
j15c5a41561562115621560.0131560.021560.0020.000.000.000.000.00
j15c5a51641643416411640.004164016400.000.000.000.000.00
j15c5a61781783817811780.00617801780.0020.000.000.000.000.00
j15c5b11701701617011700.003170017000.000.000.000.000.00
j15c5b21521522515211520.005152015200.000.000.000.000.00
j15c5b31571571515711570.03157015700.000.000.000.000.00
j15c5b4147147371471147014701470.0010.000.000.000.000.00
j15c5b51661662016621660.08616601660.0020.000.000.000.000.00
j15c5b61751752317511750.016175017500.000.000.000.000.00
j15c5c18585213185774854.205854.47852.2640.000.000.000.000.00
j15c5c2909018491-901198903.24900.3080.001.11 0.000.000.00
j15c5c387872028716872.398871.16870.220.000.000.000.000.00
j15c5c48990-89317892.208896.85890.1331.120.000.000.000.00
j15c5c57384-74-74-74-74-15.071.371.371.371.37
j15c5c69191579119910.191910.16910.0150.000.000.000.000.00
j15c5d11671672416711670167016700.000.000.000.000.00
j15c5d28285-84-84-84-84-3.662.442.442.442.44
j15c5d37796-83-82-82-82-24.687.796.496.496.49
j15c5d461101-84-84-84-84-65.5737.7037.7037.7037.70
j15c5d56797-80-79-79-79-44.7819.4017.9117.9117.91
j15c5d67987-81-81-81-81-10.132.532.532.532.53
j15c10a12362364023612360.018236023600.000.000.000.000.00
j15c10a2200200154200302000.21420002000.0080.000.000.000.000.00
j15c10a31981984519841980.171198019800.000.000.000.000.00
j15c10a422522578225122250.07222502250.0040.000.000.000.000.00
j15c10a5182183-18221820.509182018200.55 0.000.000.000.00
j15c10a62002004420022000.468200020000.000.000.000.000.00
j15c10b12222227022232220.01722202220.0010.000.000.000.000.00
j15c10b21871878018711870.0121870.091870.0030.000.000.000.000.00
j15c10b32222228022212220.007222022200.000.000.000.000.00
j15c10b42212218422112210.007221022100.000.000.000.000.00
j15c10b52002008420012000.1352000.12000.0020.000.000.000.000.00
j15c10b62192196721912190.00621902190.0050.000.000.000.000.00
mean value 469.42 44.81 19.26 0.33 0.083.641.641.541.581.49
LB: lower bounds; MINC: the minimum makespan of all results; MINT: the time to get the minimum makespan.
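The footnote does not define the %Deviation columns. A hedged reconstruction, consistent with the values that can be read from the flattened rows above, is that each algorithm's deviation is measured against the lower bound LB:
\[
\%\mathrm{Deviation} = 100 \times \frac{\mathrm{MINC} - \mathrm{LB}}{\mathrm{LB}}
\]
As a check, reading instance j10c10c1 as LB = 113 with a B&B makespan of 127 gives 100 × (127 − 113)/113 ≈ 12.39, which matches the first deviation entry of that row; this reading is an inference from the tabulated values rather than a stated definition.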
Table 4. Computational results of the Liao benchmark.
Instance | PSO (MINC, MINT) | IBBO (MINC, MINT) | LABC (MINC, MINT) | IGT (MINC, MINT) | Enhanced IG (MINC, MINT) | %Deviation (PSO, IBBO, LABC, IGT, EIG)
j30c5e147196.16474152.346758.5446214.94462252.581.952.601.080.000.00
j30c5e261655.28616146.861619.666160.216162.510.000.000.000.000.00
j30c5e360264.56610170.159664.0459322.69593370.651.522.870.510.000.00
j30c5e457586.98577149.457154.7556319.4564100.622.132.491.420.000.18
j30c5e560579.84609168.360342.9360022.6600121.380.831.500.500.000.00
j30c5e66050.996615144.260747.3860012.45600126.650.832.501.170.000.00
j30c5e762987.18629150.662637.176261.586263.180.480.480.000.000.00
j30c5e867897.67685186.967888.026747.076745.050.591.630.590.000.00
j30c5e965183.8654177.36468264215.964288.441.401.870.620.000.00
j30c5e1059477.46596189.558088.5757323.92571113.814.034.381.580.350.00
mean value602.6072.99606.50163.54599.0058.31594.9014.08594.80118.491.382.030.750.040.02
MINC: the minimum makespan of all results; MINT: the time to get the minimum makespan.
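Table 4 reports no lower bounds, and the deviation columns appear to be computed relative to the best makespan found by any of the five compared algorithms on each instance; this is a hedged reading of the flattened rows rather than a stated definition:
\[
\%\mathrm{Deviation}_{A} = 100 \times \frac{\mathrm{MINC}_{A} - \min_{A'}\mathrm{MINC}_{A'}}{\min_{A'}\mathrm{MINC}_{A'}}
\]
For example, reading instance j30c5e4 as PSO = 575 with a best makespan of 563 (obtained by IGT) gives 100 × (575 − 563)/563 ≈ 2.13, consistent with the tabulated PSO deviation.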
Table 5. Computational results of the Jose benchmark.
Instance | UB | Enhanced IG (MINC, MINT); this column group is repeated three times across the width of the table.
10 × 5 × 14104080.0220 × 5 × 16606601.79630 × 5 × 164964949.432
10 × 5 × 23943840.5120 × 5 × 25875843.62630 × 5 × 27897890.108
10 × 5 × 34534530.01220 × 5 × 355955811.77430 × 5 × 3793792114.571
10 × 5 × 44524520.01520 × 5 × 455255018.52530 × 5 × 4680680131.298
10 × 5 × 53893890.0220 × 5 × 55265261.55930 × 5 × 569869540.53
10 × 5 × 63603570.02220 × 5 × 651351176.45830 × 5 × 6661660105.725
10 × 5 × 74434370.28320 × 5 × 767867644.78630 × 5 × 7575572117.615
10 × 5 × 84364330.08120 × 5 × 851751563.98530 × 5 × 8805803135.638
10 × 5 × 94094060.01720 × 5 × 968167913.47230 × 5 × 982182083.351
10 × 5 × 103733730.00420 × 5 × 1052552437.49430 × 5 × 1076275978.509
10 × 10 × 18067990.4720 × 10 × 179779415.75330 × 10 × 11057105799.627
10 × 10 × 27857850.01120 × 10 × 2844839118.23630 × 10 × 293093150.322
10 × 10 × 37557550.05820 × 10 × 385885281.62130 × 10 × 311261126146.122
10 × 10 × 49229170.02120 × 10 × 41015101350.05430 × 10 × 410951090433.476
10 × 10 × 59699570.02520 × 10 × 5973973144.3530 × 10 × 5944942267.769
10 × 10 × 6100110010.01420 × 10 × 6796793118.73930 × 10 × 6972976233.728
10 × 10 × 79479420.05820 × 10 × 77717688.95330 × 10 × 7977980150.051
10 × 10 × 85455430.13920 × 10 × 895094849.65830 × 10 × 810101006320.229
10 × 10 × 95165117.53420 × 10 × 99539519.71530 × 10 × 990990283.447
10 × 10 × 106846840.02620 × 10 × 1086686553.69730 × 10 × 101098109943.633
10 × 15 × 19599590.00820 × 15 × 11067106440.74630 × 15 × 112051202296.344
10 × 15 × 2129012900.08520 × 15 × 213331324112.89430 × 15 × 212711277291.569
10 × 15 × 3109110910.02920 × 15 × 31295129331.65430 × 15 × 312091208503.331
10 × 15 × 48758660.57720 × 15 × 41031102662.14430 × 15 × 415301526449.351
10 × 15 × 588387912.38420 × 15 × 51015101581.66230 × 15 × 511381142778.377
10 × 15 × 68438360.01120 × 15 × 6127712778.43530 × 15 × 614361437397.775
10 × 15 × 79129046.4520 × 15 × 712741272118.41430 × 15 × 714551449750.599
10 × 15 × 87707650.04120 × 15 × 81261125950.64130 × 15 × 814381430542.952
10 × 15 × 97647511.16720 × 15 × 91748173234.74630 × 15 × 92019201946.02
10 × 15 × 108668495.53720 × 15 × 1096796650.64430 × 15 × 1012581262796.534
10 × 20 × 1135313450.39320 × 20 × 11332132941.89830 × 20 × 114881489910.785
10 × 20 × 2115611550.06420 × 20 × 21325132376.60430 × 20 × 215981602135.08
10 × 20 × 3150315030.00720 × 20 × 313241316281.62830 × 20 × 315721563624.561
10 × 20 × 4148314590.29920 × 20 × 415801576260.77330 × 20 × 415441545539.408
10 × 20 × 51505149421.59520 × 20 × 513201317301.93530 × 20 × 516261626419.98
10 × 20 × 6130913025.34720 × 20 × 61284128217.33530 × 20 × 614991499528.826
10 × 20 × 7142014120.86720 × 20 × 71632163079.07930 × 20 × 7153115301009.76
10 × 20 × 81522151714.29820 × 20 × 818471846312.69730 × 20 × 818231838177.356
10 × 20 × 99028842.33620 × 20 × 913021300162.10330 × 20 × 923622367423.745
10 × 20 × 10109910888.64920 × 20 × 101202120073.63230 × 20 × 1023912402987.849
15 × 5 × 14864822.00725 × 5 × 17747740.06835 × 5 × 1108010800.24
15 × 5 × 24234222.85225 × 5 × 26966960.13235 × 5 × 2107410740.088
15 × 5 × 35045030.27425 × 5 × 370770652.72335 × 5 × 38888887.724
15 × 5 × 444043624.74525 × 5 × 479079056.86335 × 5 × 486386296.401
15 × 5 × 54204181.74925 × 5 × 560359963.69335 × 5 × 5887884180.531
15 × 5 × 64144110.20425 × 5 × 6704703122.76635 × 5 × 6895889249.473
15 × 5 × 74844840.66525 × 5 × 769269119.28635 × 5 × 7669666133.477
15 × 5 × 85255202.39125 × 5 × 869469275.44835 × 5 × 8716714241.015
15 × 5 × 95575544.27825 × 5 × 966866851.30335 × 5 × 9845843130.4
15 × 5 × 104434420.28725 × 5 × 106466460.23935 × 5 × 10948946164.397
15 × 10 × 175775214.89625 × 10 × 1862861183.7235 × 10 × 1108510850.624
15 × 10 × 27046997.31325 × 10 × 29749740.47535 × 10 × 2124712471.735
15 × 10 × 38538530.07125 × 10 × 3911911128.31435 × 10 × 3106110610.774
15 × 10 × 48868842.19725 × 10 × 49949943.07835 × 10 × 4109210929.347
15 × 10 × 5108710840.1225 × 10 × 594294063.49435 × 10 × 512681270676.615
15 × 10 × 6104210385.43125 × 10 × 6995995158.13235 × 10 × 611651163810.119
15 × 10 × 7102010200.01725 × 10 × 787987573.17335 × 10 × 710151013602.434
15 × 10 × 81011101127.01925 × 10 × 883683779.19335 × 10 × 8991995327.622
15 × 10 × 965964656.82425 × 10 × 91061106049.37635 × 10 × 911431147611.077
15 × 10 × 107367349.9125 × 10 × 10919922195.62935 × 10 × 102115210516.263
15 × 15 × 1102910279.01825 × 15 × 112051213209.35835 × 15 × 113721378155.04
15 × 15 × 21059105847.47325 × 15 × 21143114134.11635 × 15 × 213521350825.419
15 × 15 × 31151114465.99825 × 15 × 312221222203.43435 × 15 × 315991605509.558
15 × 15 × 41173116610.88625 × 15 × 413541354279.45735 × 15 × 4153915371153.27
15 × 15 × 51190118643.11225 × 15 × 513501353121.54335 × 15 × 512471249537.357
15 × 15 × 6116811659.20725 × 15 × 611531156186.79535 × 15 × 613371337389.042
15 × 15 × 71570155612.31525 × 15 × 710591062235.40635 × 15 × 715321534789.701
15 × 15 × 89439405.88525 × 15 × 811101113352.43235 × 15 × 814941494830.902
15 × 15 × 990989742.9625 × 15 × 911481152206.26935 × 15 × 9223622360.927
15 × 15 × 1087687544.79225 × 15 × 1011651164170.25935 × 15 × 1013121314367.28
15 × 20 × 11264126223.55825 × 20 × 114491448153.37335 × 20 × 1156915671085.53
15 × 20 × 2156415640.11425 × 20 × 213531353208.02735 × 20 × 217011699675.721
15 × 20 × 3121312069.08525 × 20 × 31430143399.77935 × 20 × 315721569682.232
15 × 20 × 4155715570.10125 × 20 × 414101410441.39935 × 20 × 4191219141667.38
15 × 20 × 51558153940.2225 × 20 × 514071405277.59235 × 20 × 5185618571610.45
15 × 20 × 61692168638.84925 × 20 × 613711372553.7935 × 20 × 6141514141123.51
15 × 20 × 71731170048.6625 × 20 × 716801681544.10735 × 20 × 7159215921070.21
15 × 20 × 81712169913.14725 × 20 × 813101320192.06335 × 20 × 819381930657.221
15 × 20 × 910039956.06525 × 20 × 913381343274.52335 × 20 × 919811982561.683
15 × 20 × 101098108924.8425 × 20 × 1012281231517.14535 × 20 × 1018341845168.102
UB: upper bounds; MINC: the minimum makespan of all results; MINT: the time to get the minimum makespan.