Article

Optimization of Continuous Flow-Shop Scheduling Considering Due Dates

1 Glorious Sun School of Business and Management, Donghua University, Shanghai 200051, China
2 School of Economics & Management, Tongji University, Shanghai 200092, China
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(12), 788; https://doi.org/10.3390/a18120788
Submission received: 30 October 2025 / Revised: 5 December 2025 / Accepted: 10 December 2025 / Published: 12 December 2025

Abstract

For a no-wait flow shop with continuous-flow characteristics, this study simultaneously considers machine setup times and rated processing speed constraints, aiming to minimize the sum of the maximum completion time and the maximum tardiness. First, lower bounds for the maximum completion time, the maximum tardiness, and the total objective function are developed. Second, a mixed-integer programming (MIP) model is formulated for the problem, and nonlinear elements are subsequently linearized via time discretization. Due to the computational complexity of the problem, two algorithms are proposed: a heuristic algorithm with fixed machine links and greedy rules (HAFG) and a genetic algorithm based on altering machine combinations (GAAM) for solving large-scale instances. The Earliest Due Date (EDD) rule is used as a baseline for algorithmic comparison. To better understand the behaviors of the two algorithms, we observe the two components of the objective function separately. The results show that, compared with the EDD rule and GAAM, the HAFG algorithm tends to focus more on optimizing the maximum completion time. The performance of both algorithms is evaluated using their relative deviations from the developed lower bounds and is compared against the EDD rule. Numerical experiments demonstrate that both HAFG and GAAM significantly outperform the EDD rule. In large-scale instances, the HAFG algorithm achieves a gap of about 4%, while GAAM reaches a gap of about 3%, which is very close to the lower bound. In contrast, the EDD rule shows a deviation of about 10%. Combined with a sensitivity analysis on the number of machines, the proposed framework provides meaningful managerial insights for continuous-flow production environments.

1. Introduction

The flow shop, as one of the most common production modes in modern manufacturing, has been extensively studied in the academic field [1]. Classical flow shop models, such as hybrid flow shops, flexible flow shops, and permutation flow shops, mostly focus on discrete job processing and rarely explore processing modes with continuous-flow characteristics [2,3]. In many modern production systems, however, the material moves continuously through several processing stages and is subject to strict no-wait constraints between them. A representative example of such a production system is the biscuit baking and packaging line. After the molding stage, the biscuits move at a constant speed through an oven of a certain length, completing the baking process during transit, and are then packed and boxed in two subsequent packaging stages. The machines at each stage can be regarded as a set of parallel machines and can process the biscuits under different processing specifications according to order requirements.
This study investigates a three-stage production line that can be modeled as a no-wait flow shop with continuous-flow characteristics. The first stage consists of a single upstream processing unit, referred to as the upstream processor, which continuously outputs semi-finished products at a constant rate. The second and third stages are two downstream processing stages, each equipped with a group of identical parallel machines called the primary processing machines and the secondary processing machines. The processing mission is a set of orders characterized by its required quantity, specifications at the two downstream stages, and the due date.
The first important characteristic of this production system is that each order consists of a very large number of small units, such that treating individual units as separate jobs is neither realistic nor computationally tractable. Instead, it is more appropriate to describe the system in terms of processing rates and machine processing specifications. Under this perspective, each order exhibits continuous-flow behavior along the production line, and different portions of the same order may be processed simultaneously at different stages. Viewing each order as a job, a single job can undergo overlapping processing on two consecutive stages.
The second important characteristic of this production system is that the downstream parallel machines must instantly process the items output from the upstream stage. If, at any point in time, the aggregate processing capacity of a downstream stage cannot match the upstream output rate, the entire line must be halted. This induces strict no-wait constraints between consecutive stages: once the processing of an order is initiated at the upstream stage, the semi-finished products immediately enter downstream processing without any intermediate waiting.
The third important characteristic of this production system is that a setup operation may be required when a machine switches from processing one order to another. During a setup operation, the corresponding machine cannot operate, which reduces the processing capacity and may even force the entire line to be temporarily halted. As a result, sequence-dependent setup times and rated processing speeds play a critical role in determining the overall performance of the system.
Based on these three fundamental characteristics, the production system considered in this study can be modeled as a three-stage no-wait flow shop with constant-speed continuous processing. In this system, the downstream stages are equipped with parallel machines and the setup times are sequence-dependent. Each order is characterized by its required quantity, specifications at the two downstream stages, and due date. Machines operating under different specifications have different rated processing speeds, representing the maximum achievable processing rate under a given specification. The optimization objective is to minimize the sum of the maximum completion time and the maximum tardiness. The above production process is not only applicable to manufacturing scenarios such as biscuit production, where individual units have very small volume but the overall quantity is large, but also to liquid-processing operations in metal and chemical industries, and thus has very broad applicability in practice.

2. Literature Review

As one of the most commonly used processing modes in modern mass production, the flow shop has been extensively studied in the academic field with respect to its various characteristics. With the continuous advancement of manufacturing automation, the complex processing conditions in flow-shop scheduling—such as multiple operations, multiple machines, and resource constraints—pose significant challenges to production duration control and delivery performance. Current studies mainly focus on issues related to flow shop processing characteristics, setup times, and delivery deadlines [4,5]. These studies aim to improve equipment utilization, enhance production flexibility, and reduce delivery delays by optimizing job scheduling and minimizing interruptions during the production process. The related literature can be reviewed from the following three aspects: (1) no-wait flow-shop scheduling optimization; (2) flow-shop optimization with machine setup operations; and (3) flow-shop scheduling optimization with order due dates.

2.1. No-Wait Flow-Shop Scheduling Optimization

The No-Wait flow-shop Scheduling Problem (NWFSP) is an important class of production scheduling problems characterized by the prohibition of waiting time between consecutive operations. Such problems are widely encountered in continuous production industries such as chemical engineering, metallurgy, printing, and food processing, and have therefore attracted extensive attention from researchers. In recent years, various exact and heuristic/metaheuristic algorithms have been proposed for different variants of the NWFSP to improve solution efficiency and quality. Koulamas [6] addressed the no-wait flow-shop scheduling problem with job rejection options, aiming to minimize the combined maximum completion time and rejection cost through dynamic programming algorithms. Cheng et al. [7] addressed the no-wait flow shop group scheduling problem with sequence-dependent setup times, aiming to minimize maximum completion time while reducing non-value-added activities such as setup and waiting times, through metaheuristic algorithms including simulated annealing variants. Cheng et al. [8] addressed the mixed no-wait flow-shop scheduling problem with sequence-dependent setup times, aiming to minimize maximum completion time through a metaheuristic algorithm for medium- and large-scale instances. Dong et al. [9] addressed the multitask flexible no-wait two-stage flow-shop scheduling problem where first-stage machines can process second-stage operations, aiming to minimize maximum completion time through a combinatorial algorithm with theoretical approximation guarantees. Yüksel et al. [10] established a mixed-integer linear programming model for the bi-objective no-wait flow-shop scheduling problem, aiming to simultaneously minimize total flow time and maximum completion time through Q-learning-based metaheuristic algorithms.
To address the complexities of modern multi-factory production environments, several scholars have investigated distributed no-wait flow-shop scheduling problems. Avci et al. [11] proposed an exact solution method based on a branch-and-cut algorithm for the Distributed No-Wait flow-shop Scheduling Problem (DNWFSP), considering multi-factory job allocation and the no-wait characteristics between consecutive operations, while combining heuristic methods to improve the quality of the upper bound solutions. Experimental results show that the algorithm achieves excellent performance on multiple benchmark instances and successfully solves parts of large-scale problems, demonstrating strong practical potential. Avci et al. [12] proposed an efficient Iterative Local Search (ILS) algorithm for the DNWFSP. The algorithm integrates two local search strategies based on Variable Neighborhood Descent (VND) and adaptively adjusts the perturbation strength according to the search space structure, thereby improving search efficiency and solution quality. Validation shows that the algorithm can obtain high-quality solutions within a relatively short computational time, demonstrating its potential for broader application in distributed no-wait flow-shop scheduling. Li et al. [13] addressed the distributed heterogeneous no-wait flow-shop scheduling problem, considering factory heterogeneity in machine numbers, equipment technologies, and raw material supply, aiming to minimize maximum completion time. The study proposed a discrete artificial bee colony algorithm with neighborhood search operators and variable neighborhood descent strategy. Pan et al. [14] proposed a novel evolutionary algorithm for distributed no-wait permutation flow-shop scheduling, utilizing a two-dimensional solution representation and incorporating a jigsaw puzzle-inspired initialization strategy combined with relative local search to minimize maximum completion time. Experimental validation on instances of varying scales demonstrates the computational competitiveness of the proposed approach. Zhao et al. [15] proposed a population-based iterated greedy algorithm (PBIGA) for the distributed assembly no-wait flow-shop scheduling problem, incorporating an accelerated job assignment mechanism and differentiated local search strategies for product and job sequences. Comprehensive experiments on large-scale benchmark instances demonstrate that PBIGA outperforms state-of-the-art algorithms in minimizing total flowtime.
With the promotion of green manufacturing concepts, some scholars have incorporated energy efficiency into the optimization objectives of no-wait flow-shop scheduling problems. Zhao et al. [16] proposed a policy-based meta-heuristic algorithm (MHA-PG) for the energy-aware distributed no-wait flow-shop scheduling problem in heterogeneous factory systems, incorporating optimal allocation rules and energy-saving strategies to minimize total energy consumption and tardiness. Experimental comparisons demonstrate that MHA-PG achieves superior performance over state-of-the-art algorithms on the addressed problem. Zhao et al. [17] proposed a reinforcement learning-driven brain storm optimisation algorithm (RLBSO) for the multi-objective energy-efficient distributed assembly no-wait flow-shop scheduling problem, integrating Q-learning to guide operation selection and incorporating energy-saving strategies specific to no-wait characteristics. Experimental results on large-scale instances demonstrate that RLBSO outperforms competing algorithms in balancing maximum completion time, energy consumption, and resource allocation.

2.2. Flow-Shop Optimization with Machine Setup Operations

Setup time is a common and critical factor to consider in flow-shop scheduling problems, especially in Sequence-Dependent Setup Time (SDST) problems, where the required setup time varies depending on the processing sequence of consecutive jobs. In recent years, researchers have proposed various exact, heuristic, and metaheuristic methods for different practical production scenarios considering machine setup times, yielding abundant research results.
Considering the critical impact of setup times on production efficiency, researchers have addressed distributed flow-shop scheduling problems with sequence-dependent setup time constraints across various problem settings. Zhang et al. [18] formulated a mixed-integer linear programming model for the distributed no-wait flow shop group scheduling problem with sequence-dependent setup times, aiming to minimize maximum completion time through an improved estimation of distribution algorithm incorporating problem-specific knowledge and multiple local search operators. Yu et al. [19] addressed the distributed permutation flow-shop scheduling problem with sequence-dependent setup times, aiming to minimize total flow time through a discrete artificial bee colony algorithm with variable neighborhood structures. Song et al. [20] addressed the distributed assembly permutation flow-shop scheduling problem with sequence-dependent setup times, aiming to minimize maximum completion time through a two-stage heuristic algorithm based on the equivalence between total setup time plus idle time and maximum completion time minimization. Zhao et al. [21] addressed the distributed heterogeneous assembly permutation flow-shop scheduling problem with batch delivery and sequence-dependent setup times, aiming to minimize maximum completion time through a knowledge-based two-population optimization algorithm combining cooperative evolution and reinforcement learning. Wang et al. [22] formulated a mixed-integer linear programming model for the distributed assembly flow-shop scheduling problem with factory eligibility, transportation capacity, and setup time constraints, aiming to minimize maximum completion time and total tardiness through a Q-learning-based artificial bee colony algorithm. Rifai et al. [23] addressed the distributed reentrant permutation flow-shop scheduling problem with sequence-dependent setup times, aiming to minimize maximum completion time, production cost, and tardiness through an improved multi-objective adaptive large neighborhood search algorithm.
Several scholars have addressed hybrid and flexible flow-shop scheduling problems incorporating setup time constraints and various operational requirements. Ozsoydan et al. [24] addressed the hybrid flexible flow-shop scheduling problem with sequence-dependent setup times and release times, aiming to minimize maximum completion time. The study proposed an iterative greedy search algorithm combining Q-learning and cloud computing to adaptively optimize scheduling strategies. Missaoui et al. [25] addressed the hybrid flow-shop scheduling problem with sequence-dependent setup times and delivery time windows, aiming to minimize maximum completion time. The study proposed a parameter-free iterative greedy algorithm employing innovative local search strategies to enhance computational efficiency. Qiao et al. [26] proposed an adaptive genetic algorithm for two-stage hybrid flow-shop scheduling with sequence-independent setup time and no-interruption constraints, incorporating dynamic adjustment of crossover and mutation probabilities based on population diversity and a problem-specific local search method. Experimental results demonstrate that the algorithm achieves solutions within 1% of the lower bound within three minutes, confirming its efficiency and effectiveness. Cai et al. [27] constructed a mathematical model for the distributed assembly hybrid flow-shop scheduling problem, aiming to minimize maximum completion time. The study proposed a shuffled frog-leaping algorithm integrated with Q-learning to dynamically select search strategies during evolution. He et al. [28] formulated a mixed-integer linear programming model for the multi-objective flow shop group scheduling problem with sequence-dependent setup times and partial due dates, aiming to minimize completion time and total tardiness. The study proposed an iterative greedy-based algorithm incorporating local search operators and cone-weighted scalarization method to balance conflicting objectives.
Recent studies have addressed energy-efficient distributed flow-shop scheduling problems that simultaneously optimize production performance and energy consumption. Zhong et al. [29] addressed the energy-efficient distributed permutation flow-shop scheduling problem with sequence-dependent setup times, aiming to minimize total flow time and total energy consumption. The study proposed a multi-objective heuristic algorithm and a neighborhood iterative greedy algorithm incorporating exchange and energy-saving operators.

2.3. Flow-Shop Scheduling Optimization with Order Due Dates

Order due dates directly affect customer satisfaction; therefore, due dates and tardiness have become important considerations in flow-shop scheduling. To address the combined optimization problem balancing job tardiness and production scheduling efficiency, researchers have proposed various models and algorithms to optimize objectives such as unified due date settings and minimization of maximum tardiness. The following provides a review of recent studies on flow-shop scheduling with consideration of order due dates.
Chen et al. [30] studied a two-machine flow-shop scheduling problem, aiming to maximize the total early-completed work under a common due date constraint. The study focused on the unweighted model and proposed a dynamic programming algorithm with time complexity $O(n^2 d^2)$, significantly improving computational efficiency compared to the $O(n^2 d^4)$ algorithm for the weighted model reported in prior literature. Additionally, the authors noted that the classical Johnson algorithm performs poorly for this problem and cannot guarantee high-quality solutions, and they designed a fully polynomial-time approximation scheme (FPTAS) to enhance practical applicability. Schaller et al. [31] studied a no-wait flow-shop scheduling problem with the objective of minimizing total earliness and tardiness. They proposed a two-stage solution procedure based on scheduling heuristics and designed multiple insertion improvement algorithms to enhance solution quality. Compared with traditional heuristics and simple rules, the two-stage solution procedure outperforms conventional scheduling rules across instances of various sizes and due date tightness. Li et al. [32] addressed a bi-objective hybrid flow-shop scheduling problem with a common due date, formulating an objective function based on total waiting time and earliness/tardiness. Given the NP-hardness of the problem, an improved genetic algorithm inspired by the NSGA-II framework was proposed to enhance the solution efficiency for large-scale instances. By synchronizing the production of multiple components of the same product, the algorithm effectively reduces waiting and earliness/tardiness, and numerical experiments indicate superior performance compared with conventional PSO and genetic algorithms. Nasrollahi et al. [33] investigated single-agent and two-agent scheduling problems in a two-machine flow shop environment with common due date assignment, proposing a polynomial-time optimal algorithm for the single-agent case and a branch-and-bound algorithm with efficient lower bounds and dominance rules for the NP-hard two-agent case to minimize the weighted sum of maximum earliness and tardiness. Computational results demonstrate that the algorithm can optimally solve large-size instances.
Considering the complexities of due date management in practice, several studies have examined flow-shop scheduling problems with various due date configurations such as stochastic, multiple, and assignable due dates. Koulamas et al. [34] investigated a flow-shop scheduling problem with two different job due dates, proving that the proportional flow shop problem with variable machine speeds remains NP-hard. They showed that in no-wait flow shops, if jobs are sequenced and the last machine has the largest processing time, the problem can be solved in $O(n^2)$ time. The problem of minimizing the number of tardy jobs can be solved in $O(n^4)$ time. For a hierarchical bi-objective proportional flow shop problem with two different due dates, where the primary objective is to minimize the number of tardy jobs and the secondary objective is to minimize total or maximum tardiness, the problem can be solved in $O(n^3)$ time. Geng et al. [35] investigated the proportionate flow-shop scheduling problem with job rejection and common due date assignment, proposing dynamic programming algorithms to minimize earliness, tardiness, due date cost, and rejection cost under total rejection and semi-rejection scenarios. The approach leverages the structural relation between proportionate flow shops and single-machine problems to develop efficient solution methods with established algorithm complexity. Liu et al. [36] studied a batch flow-shop scheduling problem with stochastic due dates, focusing on minimizing expected tardiness. The study derived closed-form expressions for expected tardiness under various due date distributions, formulated corresponding mathematical models, and proposed linearization methods to handle model nonlinearity. A logic-based Benders decomposition combined with branch-and-bound was used to construct an efficient solution framework, with new tight lower bounds and acceleration strategies to improve computational performance. Numerical results confirmed the importance of considering stochastic due dates and validated the effectiveness of the proposed algorithm, providing strong theoretical and methodological support for flow-shop scheduling in stochastic environments. Xiong et al. [37] developed a mixed-integer linear programming (MILP) model for the distributed concrete precast flow-shop scheduling problem and proposed hybrid algorithms including a hybrid iterated greedy (HIG) and hybrid tabu search with iterated greedy (HTS-IG), integrating due-date-related heuristics and problem-specific knowledge to minimize total weighted earliness and tardiness. Experimental results demonstrate the effectiveness of the MILP model and the proposed metaheuristics, with HTS-IG achieving superior performance among all proposed approaches. Geng et al. [38] investigated proportionate flow-shop scheduling with job rejection, determining which jobs to accept or reject and their processing sequence to minimize total late work and rejection costs. They analyzed both predetermined and assignable due date scenarios, established NP-hardness results, and developed pseudo-polynomial dynamic programming algorithms for both cases (Table 1).

2.4. Research Gaps

A comparison of the existing literature shows that while numerous studies have separately examined no-wait flow shops, machine setup times, and due dates, relatively few have addressed all three factors simultaneously. In addition, only a limited number of studies have investigated continuous-flow production lines, and the rated processing-speed constraint of machines is likewise rarely incorporated in the literature. It is also worth noting that in traditional no-wait flow-shop problems, “no-wait” typically refers to a single job entering the next operation immediately after completing the current one. In this study, if an order is treated as a job, its processing can occur simultaneously across multiple operations; if a single unit of product is treated as a job, each product completes the sequence of operations according to the processing order without any waiting time between operations. This characteristic distinguishes the no-wait feature of the present problem from that of traditional no-wait flow shops. In conventional studies, sequence-dependent setup times refer to the machine setup time required when consecutively processing different jobs, which depends on the job sequence. In this study, when machines consecutively process two orders with different specifications, at least one primary or secondary processing machine requires setup, reflecting the unique nature of machine setup times in this problem. Furthermore, in traditional problems, each machine processes only one job at a time, considering only the processing time of a single operation, without a concept of processing speed. In the present study, an order has a production quantity and can be processed simultaneously across multiple operations, with machines having rated processing speeds based on the order specifications. This gives the flow shop a continuous-flow characteristic, in contrast to previous studies, which can be considered as discrete flow shops. Continuous-flow production lines are common in practice, suitable for processes where products are infinitely divisible (e.g., liquids) or where single-unit volumes are small and order quantities are large (e.g., biscuits, dried fruits). Research on continuous no-wait flow shops thus addresses a gap in the existing literature.

3. Problem Description and Lower Bound Analysis

3.1. Problem Description

Based on the biscuit processing background, we consider a three-stage no-wait flow shop processing system with continuous material flow. In the first stage, a single upstream processor continuously outputs semi-finished products and conveys them to the downstream stage at a constant speed $v_0$. The second and third stages are two processing stages, each consisting of a set of homogeneous parallel machines that operate on the semi-finished products in sequence. The machines in these two stages are referred to as primary processing machines and secondary processing machines, respectively. Let the set of primary processing machines be I, indexed as $\{1, 2, \ldots, n_1\}$, and the set of secondary processing machines be J, indexed as $\{1, 2, \ldots, n_2\}$. Let $n_1$ and $n_2$ denote the number of primary and secondary processing machines, respectively. We assume that $n_1 \ge n_2$. At any time, a primary processing machine can transmit its processed products to only one secondary processing machine. However, a secondary processing machine can simultaneously receive products from multiple upstream primary processing machines, provided that these products share the same primary processing specification.
During production, both primary and secondary processing machines can switch between different processing specifications, which is referred to as a machine setup operation. Let the set of primary processing specifications be A and the set of secondary processing specifications be B. The setup times for primary and secondary processing machines are denoted by $t_1$ and $t_2$, respectively; machines cannot perform processing during a setup operation. Furthermore, machines processing orders with different specifications have different rated processing speeds, depending on the order’s primary and secondary processing specifications. Figure 1 illustrates a simple example of production line stoppages caused by machine setup operations. This example considers a production system with only one primary processing machine and one secondary processing machine, in which only one processing specification can be handled at a time. In such a system, any setup operation on either machine will cause the entire production line to halt. The system processes three orders, $O_1$, $O_2$, and $O_3$, sequentially. When switching the processing specification from $O_1$ to $O_2$, the primary processing machine must undergo a setup operation, resulting in a production interruption of duration $t_1$. Similarly, when switching the processing specification from $O_2$ to $O_3$, the secondary processing machine requires a setup operation, causing a production interruption of duration $t_2$.
Let O denote the set of orders, where each order $o \in O$ is represented as $\{a, b, Q_o, D_o\}$. Here, a and b denote the primary and secondary processing specifications of order o, $Q_o$ represents the required quantity of products, and $D_o$ denotes the due date. Let $v^1_{i,o,t}$ represent the primary rated processing speed of order $o \in O$, and $v^2_{j,o,t}$ represent the secondary rated processing speed of order $o \in O$.
Figure 2 illustrates the processing status at a given time with three primary processing machines and two secondary processing machines. As indicated by the arrows, two primary processing machines are producing the order with the $\{a_1, b_1\}$ specification requirement and feeding into one secondary processing machine, while the remaining primary and secondary processing machines are processing an order with the $\{a_2, b_2\}$ specification requirement.
Since the upstream processor outputs semi-finished products at a constant speed and the machines perform setup operations during production, the processing capacity of the flow shop fluctuates due to these setup activities and may drop below the upstream processor’s processing speed. When this happens, the entire production line, including the upstream processor, will be temporarily halted. Such interruptions can prolong the overall maximum completion time. Furthermore, since each order has a due date but its processing rate is limited, an order may fail to meet its due date. The optimization objective of this problem is to minimize the sum of the maximum completion time and the maximum tardiness. Since both the maximum completion time and the maximum tardiness are defined at the order level, their units are consistent. The maximum completion time refers to the completion time of the last order, while the maximum tardiness represents the tardiness of the order with the largest delay. Therefore, we use a simple additive form as the objective function without introducing any weighting factors.
To facilitate subsequent descriptions, the following two concepts are introduced.
Definition 1.
At any given production moment, each secondary processing machine is connected to one or more primary processing machines. A machine combination is defined as a secondary processing machine together with all the primary processing machines connected to it.
Definition 2.
Even distribution: Among the secondary processing machines, $(n_1 - \lfloor n_1/n_2 \rfloor \cdot n_2)$ machines are each combined with $(\lfloor n_1/n_2 \rfloor + 1)$ primary processing machines, while the remaining $(n_2 - n_1 + \lfloor n_1/n_2 \rfloor \cdot n_2)$ secondary processing machines are each combined with $\lfloor n_1/n_2 \rfloor$ primary processing machines. This allocation assigns all primary processing machines to all secondary machines evenly, forming $n_2$ machine combinations. This connection scheme between machines is referred to as the even distribution method.
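To make Definition 2 concrete, the following Python sketch computes the even distribution for given $n_1$ and $n_2$; the function name and 0-based indexing are our own choices, not taken from the paper.

```python
def even_distribution(n1, n2):
    """Assign n1 primary machines to n2 secondary machines as evenly as possible.

    Returns a list with n2 entries; entry j holds the (0-based) indices of the
    primary machines connected to secondary machine j.
    """
    assert n1 >= n2 > 0
    base = n1 // n2              # floor(n1 / n2)
    larger = n1 - base * n2      # number of combinations receiving base + 1 primaries
    combos, nxt = [], 0
    for j in range(n2):
        size = base + 1 if j < larger else base
        combos.append(list(range(nxt, nxt + size)))
        nxt += size
    return combos

# Small-scale setting of Section 6 (7 primary, 3 secondary machines):
# one combination of 3 primary machines and two combinations of 2.
print(even_distribution(7, 3))   # [[0, 1, 2], [3, 4], [5, 6]]
```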

3.2. Assumptions

This study is based on the following assumptions:
  • Multiple orders are allowed to be processed simultaneously on different primary and secondary processing machines, and a single order can also be processed concurrently on multiple primary and secondary processing machines.
  • Since the upstream processor operates at a constant output speed, the processing capacity of the flow shop for any order is not lower than the upstream processor’s processing speed under the even distribution method. Therefore, each order can be processed independently on the flow shop.
  • The processing time of any order processed independently is not shorter than the setup time of the primary or secondary processing machines.
  • Multiple orders may share the same processing specifications but have different due dates, or conversely, have identical due dates but different processing specifications.
  • If setup operations on certain primary or secondary processing machines cause the overall processing speed of the remaining machines to fall below the upstream processor’s processing speed, the flow shop will stop temporarily. Semi-finished products that remain on the processing line will be fully processed after the interruption.
  • The number of secondary processing machines is no more than that of primary processing machines, and the processing speed of each secondary processing machine is no less than that of a primary processing machine.
It should be noted that individual processing refers to the situation in which all machines on the production line are dedicated to producing a single order. Assumption 2 ensures that orders can be processed sequentially and that no order becomes impossible to produce because of its processing specification and limited processing rate. Assumption 3 defines the minimum production quantity required for each order. In practical production, the processing time of any order is always longer than the setup time of the machines, which is a fundamental requirement for maintaining production efficiency.

3.3. Lower Bound Analysis

First, we analyze the minimum possible value of the maximum completion time in an optimal production schedule and obtain the following result:
Lemma 1.
The lower bound of $C_{max}$ for the three-stage no-wait flow-shop scheduling problem is $\sum_{o \in O} Q_o / v_0$.
Proof. 
Since the processing speed of any order cannot exceed the constant output rate of the upstream processor, the processing time for each order $o \in O$ is no less than $Q_o / v_0$. Therefore, the completion time of the last order cannot be earlier than the total processing time of all orders, i.e., $\sum_{o \in O} Q_o / v_0$. The lemma is thus proved.    □
Next, we analyze the minimum possible value of the maximum tardiness in an optimal schedule and obtain the following result:
Property 1.
When setup times are ignored, to minimize the maximum tardiness, no order needs to be split for production in the three-stage no-wait flow-shop scheduling problem.
Proof. 
Let $S = \{C_1, \ldots, C_j, \ldots, C_{|O|}\}$ denote a production sequence, where $C_j$ represents the completion time of the j-th finished order, satisfying $C_1 \le \cdots \le C_j \le \cdots \le C_{|O|}$, and the corresponding order indices are $O_1, \ldots, O_j, \ldots, O_{|O|}$. For any production sequence S, suppose we split an order $O_j$ and move part of it to be produced before several preceding orders. This adjustment does not change the relative completion order of S. Since setup times are ignored, it also does not affect the completion times or tardiness of $O_j$ and subsequent orders. However, the tardiness of preceding orders may increase. Therefore, for any given production sequence, splitting orders does not reduce the maximum tardiness. Hence, to minimize the maximum tardiness, there is no need to split any order. The property is thus proved.    □
Lemma 2.
When setup times are ignored, producing orders according to the Earliest Due Date (EDD) rule yields the minimum possible maximum tardiness, which is the lower bound of the maximum tardiness for the three-stage no-wait flow-shop scheduling problem.
Proof. 
According to Property 1, when setup times are ignored, minimizing the maximum tardiness in the three-stage no-wait flow-shop scheduling problem does not require splitting orders. Therefore, the problem degenerates into the single-machine scheduling problem $1||L_{max}$. In the $1||L_{max}$ problem, the EDD rule provides the optimal schedule for minimizing maximum tardiness [31]. Since the original problem additionally considers setup times, the minimum maximum tardiness obtained under the $1||L_{max}$ problem serves as a lower bound for the original problem. The lemma is thus proved.    □
Combining the above lemmas, we derive the lower bound for the overall objective as follows:
Theorem 1.
The lower bound of the objective function for this problem is the sum of the lower bounds of the maximum completion time and the maximum tardiness.
Proof. 
Since the objective function of this problem is defined as the sum of the maximum completion time and the maximum tardiness, its lower bound can be expressed as the sum of the lower bounds of these two components, i.e., $\sum_{o \in O} Q_o / v_0 + \max_{o \in O} \{C_o - D_o\}$, where $C_o$ represents the completion time of order o when orders are processed according to the EDD rule, ignoring setup times. The theorem is thus proved.    □
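Theorem 1 makes the lower bound easy to evaluate: sum the minimum processing times of all orders and run a single EDD pass that ignores setup times. The following Python sketch illustrates the computation; the data layout (a list of dicts with quantities and due dates) is our own assumption.

```python
def objective_lower_bound(orders, v0):
    """Lower bound of C_max + TT according to Theorem 1.

    orders: list of dicts with keys 'Q' (required quantity) and 'D' (due date).
    v0: constant output speed of the upstream processor.
    """
    # Lemma 1: no order can be processed faster than v0, so C_max >= sum(Q_o) / v0.
    cmax_lb = sum(o['Q'] for o in orders) / v0

    # Lemma 2: with setup times ignored, the EDD sequence minimizes maximum tardiness.
    t, worst = 0.0, 0.0
    for o in sorted(orders, key=lambda o: o['D']):
        t += o['Q'] / v0                      # completion time of o under EDD, no setups
        worst = max(worst, t - o['D'])
    tmax_lb = max(0.0, worst)                 # tardiness is non-negative

    return cmax_lb + tmax_lb

# Example with three orders and v0 = 15:
orders = [{'Q': 900, 'D': 100}, {'Q': 1200, 'D': 150}, {'Q': 800, 'D': 120}]
print(objective_lower_bound(orders, 15))
```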

4. Mathematical Model

4.1. A Mixed-Integer Nonlinear Programming Model with Integral and Limit Forms

Since this problem involves discrete decision factors such as processing specifications and machine setup operations, integer variables are introduced to characterize the setup process. Meanwhile, the production system exhibits both continuous and instantaneous features: the actual processing speed of each machine can vary continuously within the range below its rated speed, while the start and end of a setup operation, as well as the initiation and completion of any order, occur instantaneously. Accordingly, this study employs a mixed-integer nonlinear programming model with integral and limit forms to mathematically describe the problem. Specifically, the production quantity of an order is represented as the integral of its processing speed over the entire time horizon. By using the integral formulation, it is only necessary to ensure that the cumulative production quantity of each order meets the demand, without the need to continuously update the remaining unprocessed quantity of each order. Moreover, the instantaneous nature of setup operations makes their start and end time points more appropriately expressed in limit form, thereby accurately capturing the dynamic characteristics of the production system (Table 2).

4.1.1. Objective

The objective function is the sum of the maximum completion time and maximum tardiness.
$\min \; C_{max} + TT$  (1)

4.1.2. Constraints

$\sum_{j \in J} x_{i,j,t} \le 1, \quad \forall i \in I,\ t \in [0, \infty)$  (2)
$v^1_{i,o,t} \le v_{a_o} \sum_{j \in J} x_{i,j,t}, \quad \forall i \in I,\ o \in O,\ t \in [0, \infty)$  (3)
$\sum_{o \in O} s^1_{i,o,t} = 1, \quad \forall i \in I,\ t \in [0, \infty)$  (4)
$\sum_{o \in O} s^2_{j,o,t} = 1, \quad \forall j \in J,\ t \in [0, \infty)$  (5)
$\sum_{i \in I} x_{i,j,t}\, s^1_{i,o,t} \le |I|\, s^2_{j,o,t}, \quad \forall j \in J,\ o \in O,\ t \in [0, \infty)$  (6)
$v^1_{i,o,t} \le v_{a_o}\, s^1_{i,o,t}, \quad \forall i \in I,\ o \in O,\ t \in [0, \infty)$  (7)
$v^2_{j,o,t} \le v_{b_o}\, s^2_{j,o,t}, \quad \forall j \in J,\ o \in O,\ t \in [0, \infty)$  (8)
$\int_{t}^{t+t_1} \alpha^1_{i,t'}\, dt' \le 1, \quad \forall i \in I,\ t \in [0, \infty)$  (9)
$v^1_{i,o,t} \le v_{a_o} \left( 1 - \int_{t}^{t+t_1} \alpha^1_{i,t'}\, dt' \right), \quad \forall i \in I,\ o \in O,\ t \in [0, \infty)$  (10)
$\sum_{o \in O} \left| \lim_{t' \to t^+} s^1_{i,o,t'} - \lim_{t' \to t^-} s^1_{i,o,t'} \right| = 2\, \alpha^1_{i,t}, \quad \forall i \in I,\ t \in [0, \infty)$  (11)
$\int_{t}^{t+t_2} \alpha^2_{j,t'}\, dt' \le 1, \quad \forall j \in J,\ t \in [0, \infty)$  (12)
$v^2_{j,o,t} \le v_{b_o} \left( 1 - \int_{t}^{t+t_2} \alpha^2_{j,t'}\, dt' \right), \quad \forall j \in J,\ o \in O,\ t \in [0, \infty)$  (13)
$\sum_{o \in O} \left| \lim_{t' \to t^+} s^2_{j,o,t'} - \lim_{t' \to t^-} s^2_{j,o,t'} \right| = 2\, \alpha^2_{j,t}, \quad \forall j \in J,\ t \in [0, \infty)$  (14)
$v_t = v_0\, e_t, \quad \forall t \in [0, \infty)$  (15)
$v_t = \sum_{i \in I} \sum_{o \in O} v^1_{i,o,t}, \quad \forall t \in [0, \infty)$  (16)
$\sum_{i \in I} \sum_{o \in O} x_{i,j,t}\, v^1_{i,o,t} = \sum_{o \in O} v^2_{j,o,t}, \quad \forall j \in J,\ t \in [0, \infty)$  (17)
$\sum_{j \in J} \int_{0}^{C_o} v^2_{j,o,t}\, s^2_{j,o,t}\, dt = Q_o, \quad \forall o \in O,\ C_o \in [0, \infty)$  (18)
$C_{max} \ge e_t\, t, \quad \forall t \in [0, \infty)$  (19)
$TT \ge \max\{0,\, C_o - D_o\}, \quad \forall o \in O$  (20)
Constraint (2) indicates that each primary processing machine can be connected to only one secondary processing machine at any given time. Constraint (3) specifies that the processing speed of any primary processing machine not connected to a secondary processing machine must be zero. Constraint (4) ensures that each primary processing machine can process only one order at a time. Constraint (5) ensures that each secondary processing machine can process only one order at a time. Constraint (6) states that connected primary and secondary processing machines must process the same order simultaneously. Constraint (7) limits the real-time processing speed of each primary processing machine to not exceed its rated speed for the current product specification. Constraint (8) limits the real-time processing speed of each secondary processing machine to not exceed its rated speed for the current product specification. Constraints (9) and (10) ensure the continuity of the setup process for primary processing machines and specify that machines stop production during setup. Constraint (11) indicates that a primary processing machine switches to a new specification after completing setup. Constraints (12) and (13) ensure the continuity of the setup process for secondary processing machines and specify that machines stop production during setup. Constraint (14) indicates that a secondary processing machine switches to a new specification after completing setup. Constraints (15), (16), and (17) enforce the flow balance between upstream and downstream processing speeds along the production line. Specifically, the model ensures that the real-time output rate of the upstream processor equals the total real-time processing speed of all primary processing machines, and that the real-time processing speed of each secondary processing machine equals the total real-time processing speed of the primary processing machines connected to it. When a mismatch in processing capacity occurs, a line stoppage will inevitably happen. In this case, $e_t$ takes the value 0, and the speeds of the upstream processor, the primary processing machines, and the secondary processing machines all become 0.
Constraint (18) ensures that every order is fully completed. Constraints (19) and (20) define the auxiliary variables of the objective function.

4.2. Model Linearization

The nonlinear expressions with integral and limit forms in the above model can be linearized by discretizing time. The linearized model is presented below.

4.2.1. Objective

$\min \; C_{max} + TT$  (21)

4.2.2. Constraints

$\sum_{j \in J} x_{i,j,t} \le 1, \quad \forall i \in I,\ t \in [0, T]$  (22)
$v^1_{i,o,t} \le v_{a_o} \sum_{j \in J} x_{i,j,t}, \quad \forall i \in I,\ o \in O,\ t \in [0, T]$  (23)
$\sum_{o \in O} s^1_{i,o,t} = 1, \quad \forall i \in I,\ t \in [0, T]$  (24)
$\sum_{o \in O} s^2_{j,o,t} = 1, \quad \forall j \in J,\ t \in [0, T]$  (25)
$\sum_{i \in I} x_{i,j,t}\, s^1_{i,o,t} \le |I|\, s^2_{j,o,t}, \quad \forall j \in J,\ o \in O,\ t \in [0, T]$  (26)
$v^1_{i,o,t} \le v_{a_o}\, s^1_{i,o,t}, \quad \forall i \in I,\ o \in O,\ t \in [0, T]$  (27)
$v^2_{j,o,t} \le v_{b_o}\, s^2_{j,o,t}, \quad \forall j \in J,\ o \in O,\ t \in [0, T]$  (28)
$\sum_{t'=t}^{t+t_1-1} \alpha^1_{i,t'} \le 1, \quad \forall i \in I,\ t \in [0, T-t_1+1]$  (29)
$v^1_{i,o,t} \le v_{a_o} \left( 1 - \sum_{t'=t}^{t+t_1-1} \alpha^1_{i,t'} \right), \quad \forall i \in I,\ o \in O,\ t \in [0, T-t_1+1]$  (30)
$\sum_{o \in O} \left| s^1_{i,o,t} - s^1_{i,o,t-1} \right| = 2\, \alpha^1_{i,t}, \quad \forall i \in I,\ t \in [0, T]$  (31)
$\sum_{t'=t}^{t+t_2-1} \alpha^2_{j,t'} \le 1, \quad \forall j \in J,\ t \in [0, T-t_2+1]$  (32)
$v^2_{j,o,t} \le v_{b_o} \left( 1 - \sum_{t'=t}^{t+t_2-1} \alpha^2_{j,t'} \right), \quad \forall j \in J,\ o \in O,\ t \in [0, T-t_2+1]$  (33)
$\sum_{o \in O} \left| s^2_{j,o,t} - s^2_{j,o,t-1} \right| = 2\, \alpha^2_{j,t}, \quad \forall j \in J,\ t \in [0, T]$  (34)
$v_t = v_0\, e_t, \quad \forall t \in [0, T]$  (35)
$v_t = \sum_{i \in I} \sum_{o \in O} v^1_{i,o,t}, \quad \forall t \in [0, T]$  (36)
$\sum_{i \in I} \sum_{o \in O} x_{i,j,t}\, v^1_{i,o,t} = \sum_{o \in O} v^2_{j,o,t}, \quad \forall j \in J,\ t \in [0, T]$  (37)
$\sum_{j \in J} \sum_{t=0}^{T} v^2_{j,o,t}\, s^2_{j,o,t} = Q_o, \quad \forall o \in O$  (38)
$C_{max} \ge e_t\, t, \quad \forall t \in [0, T]$  (39)
$TT \ge \max\{0,\, C_o - D_o\}, \quad \forall o \in O$  (40)
Constraints (22)–(40) correspond, respectively, to Constraints (2)–(20) before linearization. We discretize the time horizon into a sequence of sufficiently small time slots and assume that the start and end of any machine setup operation occur within a single time slot. We consider a slot length of one minute to be appropriate. Moreover, since Assumption 3 states that the setup time is no greater than the processing time of any order processed independently, it is sufficient to set the horizon length T to twice the lower bound of the maximum completion time. Accordingly, after linearization, the limit operators in the original model can be removed, and the integral expressions can be represented by summations over discrete time slots. In particular, Constraints (9) and (12) correspond to Constraints (29) and (32), which describe the setup processes. For each primary processing machine, at most one setup operation can occur within any interval of $t_1$ consecutive time slots; similarly, for each secondary processing machine, at most one setup operation can occur within any interval of $t_2$ consecutive time slots. Moreover, Constraints (11) and (14) correspond to Constraints (31) and (34), ensuring that once a setup operation is completed, the processing specification of the machine is indeed switched.
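As an illustration of how the discretized constraints can be stated with an off-the-shelf MIP modeler, the Python sketch below builds Constraints (29) and (31) for the primary processing machines using PuLP. The dimensions and variable names are illustrative assumptions, and only the forcing direction of Constraint (31) (a specification change requires a setup indicator) is shown; the remaining constraints and the objective would be added in the same way before solving.

```python
import pulp

# Illustrative sizes (not from the paper): 2 primary machines, 3 orders,
# a horizon of 40 one-minute slots, and a primary setup time of 5 slots.
I, orders, T, t1 = range(2), range(3), 40, 5

m = pulp.LpProblem("setup_window_demo", pulp.LpMinimize)

# alpha[i][t] = 1 if primary machine i finishes a setup in slot t;
# s[i][o][t] = 1 if primary machine i is configured for order o in slot t.
alpha = pulp.LpVariable.dicts("alpha", (I, range(T)), cat="Binary")
s = pulp.LpVariable.dicts("s", (I, orders, range(T)), cat="Binary")

for i in I:
    for t in range(T):
        # Constraint (24): each primary machine is configured for exactly one order per slot.
        m += pulp.lpSum(s[i][o][t] for o in orders) == 1
    for t in range(T - t1 + 1):
        # Constraint (29): at most one setup within any window of t1 consecutive slots.
        m += pulp.lpSum(alpha[i][tt] for tt in range(t, t + t1)) <= 1
    for t in range(1, T):
        for o in orders:
            # Constraint (31), forcing direction: the configuration can only change
            # in slot t if a setup is recorded there.
            m += s[i][o][t] - s[i][o][t - 1] <= alpha[i][t]
            m += s[i][o][t - 1] - s[i][o][t] <= alpha[i][t]
```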

5. Solution Method

Since the above model is difficult to solve using commercial solvers, we develop dedicated solution algorithms based on the problem’s intrinsic characteristics. First, based on the lower bound and the problem structure, we develop a heuristic algorithm that can quickly generate high-quality feasible solutions for large-scale production scenarios. Such rapid solution generation is crucial for improving factory productivity and reducing computational time. In addition, due to the flexibility and diversity of the setup operations on each machine, we employ a genetic algorithm (GA) to explore a broader range of possible production schedules. Compared with the heuristic method, the GA offers stronger global search capabilities over the solution space and has been widely applied in various scheduling and optimization domains. In summary, two algorithms are proposed: a heuristic algorithm with fixed machine links and greedy rules (HAFG), and a genetic algorithm based on altering machine combinations (GAAM). These algorithms are designed to efficiently obtain near-optimal solutions for large-scale real-world production scheduling problems.

5.1. Heuristic Algorithm with Fixed Machine Links and Greedy Rules

The flexibility of machine connection configurations and the resulting variations in production capacity make this problem difficult to analyze. Meanwhile, considering that frequently changing machine connections is impractical and difficult to manage in real production environments, a heuristic algorithm is designed based on a connection scheme that adopts even distribution method. The optimization objective of this problem is to minimize the sum of the maximum completion time and the maximum tardiness, both of which are affected by machine setup operations. A machine setup operation is a non-productive activity during which the machines cannot participate in processing operations. The interruptions caused by such adjustments can increase both objectives. Employing a greedy strategy, wherein a maximal number of orders are processed simultaneously, can significantly mitigate the likelihood of interruptions typically caused by setup operations. Once an order is completed, all machines involved in processing that order begin setup operations, resulting in a temporary reduction in system capacity. The more machine combinations simultaneously entering this state, the higher the likelihood of production interruptions. Therefore, when multiple orders are processed concurrently, each individual order occupies fewer machines and a smaller proportion of the upstream processor’s output rate. As a result, when one order finishes processing, the speed gap it leaves can be more easily compensated by other ongoing machine combinations, thereby reducing the risk of production interruptions.
In the order assignment strategy for each machine combination, orders with earlier due dates are prioritized for processing. It is important to note that determining the processing rates of different active machine combinations is equivalent to designing an allocation scheme for the upstream processor’s output rate. Before allocating speeds, it must be ensured that the total processing capacity of the system is sufficient to absorb the upstream processor’s output rate in real time. Meanwhile, the maximum feasible speed is allocated to the order with the smaller remaining workload, enabling its earlier possible completion in the production sequence. This strategy maintains as many machine combinations as possible working continuously, thereby reducing frequent setup adjustments. The ultimate goal is to ensure the processing system can match the upstream processor’s output rate, minimizing production interruptions caused by machine adjustments.
In summary, a heuristic algorithm based on fixed machine connections and greedy rules (HAFG) is developed. The HAFG algorithm operates according to the following four rules:
  • The pairing of the primary processing and secondary processing machines follows the principle of even distribution method;
  • Each machine combination is assigned to process a different order as much as possible, maximizing the number of different orders being processed simultaneously;
  • Orders enter production sequentially in ascending order of their due dates;
  • When determining the processing speeds of different machine combinations, the maximum feasible speed is allocated to the order with smaller remaining workload.
The pseudocode of the HAFG algorithm is shown in Appendix A.
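Purely as a rough illustration of Rules 2–4, the Python sketch below runs a strongly simplified dispatch loop in which setup times and line stoppages are ignored and each machine combination is summarized by a single rated speed; both simplifications are ours and are not part of the HAFG algorithm given in Appendix A.

```python
def hafg_sketch(orders, v0, combo_speeds):
    """Strongly simplified HAFG-style dispatch loop (setups and stoppages omitted).

    orders: list of dicts with 'Q' (quantity) and 'D' (due date).
    combo_speeds: one rated speed per machine combination (a simplification).
    Returns a dict mapping order index to completion time.
    """
    remaining = {k: o['Q'] for k, o in enumerate(orders)}
    queue = sorted(remaining, key=lambda k: orders[k]['D'])   # Rule 3: EDD release
    active, completion, t = {}, {}, 0.0
    while remaining:
        # Rule 2: hand every idle combination a different pending order.
        for c in range(len(combo_speeds)):
            if c not in active and queue:
                active[c] = queue.pop(0)
        # Rule 4: orders with less remaining work are served first, each capped by
        # its combination's rated speed and by the upstream output rate v0.
        speed, budget = {}, v0
        for c, k in sorted(active.items(), key=lambda ck: remaining[ck[1]]):
            speed[c] = min(combo_speeds[c], budget)
            budget -= speed[c]
        # Advance time to the next order completion at the current speeds.
        dt = min(remaining[k] / speed[c] for c, k in active.items() if speed[c] > 0)
        t += dt
        for c, k in list(active.items()):
            remaining[k] -= speed[c] * dt
            if remaining[k] <= 1e-9:
                completion[k] = t
                del remaining[k], active[c]
    return completion

# Objective of the resulting schedule: maximum completion time + maximum tardiness.
orders = [{'Q': 900, 'D': 100}, {'Q': 1200, 'D': 150}, {'Q': 800, 'D': 120}]
comp = hafg_sketch(orders, v0=15, combo_speeds=[6, 6, 6])
obj = max(comp.values()) + max(0.0, max(comp[k] - orders[k]['D'] for k in comp))
```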

5.2. Genetic Algorithm Based on Altering Machine Combinations

As a classical intelligent optimization method, the genetic algorithm (GA) has been widely applied to various combinatorial optimization problems such as flow-shop scheduling and logistics distribution. Considering the specific structure of the problem, a genetic algorithm based on altering machine combinations (GAAM) is proposed, which integrates the randomness of machine connections with the diversity of order priorities. In this algorithm, key information such as order sequencing and machine connection patterns is encoded into the chromosome structure. During the decoding phase, machine processing rates and machine specification switching decisions are determined to guide the search effectively toward near-optimal solutions.

5.2.1. Chromosome Encoding and Initialization

In the GAAM algorithm, the chromosome is composed of two parts. The first part contains $n_1$ sub-segments, each consisting of a permutation of secondary processing machine indices, representing the priority of connecting different secondary processing machines for each primary processing machine. The second part contains $n_2$ sub-segments, each consisting of a permutation of order indices, representing the processing priority of different orders for each secondary processing machine. The total chromosome length is $n_1 \cdot n_2 + n_2 \cdot |O|$. The chromosomes are initialized using a random generation strategy.
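A minimal Python sketch of this encoding (names are ours): a chromosome is stored as $n_1$ permutations of secondary-machine indices followed by $n_2$ permutations of order indices, which matches the stated length $n_1 \cdot n_2 + n_2 \cdot |O|$.

```python
import random

def random_chromosome(n1, n2, num_orders):
    """Randomly initialized GAAM chromosome.

    First part: for each of the n1 primary machines, a permutation of the n2
    secondary-machine indices (connection priority). Second part: for each of
    the n2 secondary machines, a permutation of the order indices (order priority).
    """
    link_priority = [random.sample(range(n2), n2) for _ in range(n1)]
    order_priority = [random.sample(range(num_orders), num_orders) for _ in range(n2)]
    return link_priority, order_priority

# Small-scale setting: 7 primary machines, 3 secondary machines, 16 orders.
chromosome = random_chromosome(7, 3, 16)
```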

5.2.2. Fitness Function

The fitness function serves as the core of the genetic algorithm. It simultaneously evaluates the dynamic connections between primary processing machines and secondary processing machines, the order selection decisions, and the processing speed assignments for each machine combination. The pseudocode of the corresponding scheduling process is presented in Appendix B.
In Appendix B, machine connections and order assignments are determined based on the priority sequences encoded in the chromosomes, following a two-step process: First, each primary processing machine selects its connected secondary processing machine according to its encoded priority sequence, thereby establishing the machine combinations. Second, each secondary processing machine determines the processing sequence of orders based on its priority sequence. Each primary processing machine connects to exactly one secondary processing machine, while multiple secondary processing machines can process the same order.
Before the speeds of the machine combinations are calculated, an operation to extract redundant machines is performed. When multiple machine combinations operate simultaneously, the total processing capacity may exceed or equal the upstream processor’s production rate. In such cases, some machines may be redundant and unnecessary for continued operation. To identify and manage these redundant machines, two successive extraction checks are performed iteratively. The first extraction check identifies and removes redundant machine combinations. If the flow shop continues to operate normally without a particular machine combination, those machines are temporarily excluded from production and designated as a standby machine combination. The second extraction check then examines each secondary processing machine still involved in production after the first check. If the total processing capacity remains sufficient (no less than the upstream processor’s rate) without a particular secondary processing machine, that machine is temporarily excluded as a standby secondary processing machine. When an order completes, the involved machines enter setup operations and cannot immediately process new orders. However, standby machines can be deployed immediately when needed, thereby substantially reducing production interruptions.
The bottleneck processing speed of each machine combination can be calculated based on the assigned orders. This bottleneck speed represents the maximum feasible processing rate that the machine combination can achieve for the current order. However, the real-time production speed of different machine combinations is difficult to encode directly within the chromosome structure. Therefore, Appendix B allocates and computes the processing speeds of different machine combinations according to the following two principles:
  • The more machine combinations simultaneously engaged in processing the same order, the lower the processing speed assigned to each pair;
  • The greater the remaining unprocessed quantity of an order, the lower the processing speed assigned to it.
The priority of Principle 1 is higher than that of Principle 2.
Compared with HAFG, the GAAM algorithm provides greater flexibility in machine connection configurations and order sequencing. The algorithm proceeds through an iterative cycle: first, it assigns orders to idle machine combinations not undergoing setup operations; second, it eliminates redundant machines while retaining normal production; third, it calculates processing speeds for active machine combinations; fourth, it calculates the completion time of the current phase (either when an order finishes or when machines complete setup operations); finally, it reassigns new orders to the idle machines. This cycle repeats iteratively until all orders are processed.

5.2.3. Crossover and Mutation Operations

In the GAAM algorithm, each chromosome consists of two main parts: one represents the priority sequence of secondary processing machines selected by each primary processing machine, and the other represents the priority sequence of orders processed by each secondary processing machine. Both parts are subject to crossover and mutation operations simultaneously.
During the crossover operation, chromosomes in the population are first randomly paired. For each pair, two-point crossover is performed on both chromosome segments. Specifically, two crossover points are randomly selected within each segment, and the sequence between these two points is exchanged between the paired chromosomes. Since such exchanges may introduce duplicate elements in the order priority sequences, the resulting sequences are re-sorted, and the new positional indices replace the original values as updated priority sequences, thereby ensuring no duplicate elements exist in any order processing priority sequence of a secondary machine.
The mutation operation adopts a two-point mutation mechanism. For the first part of each chromosome, corresponding to the priority sequence of secondary machines connected to each primary machine, two random positions within the sequence are selected and their elements are swapped. Similarly, for the second part of each chromosome, corresponding to the order processing priority sequence of each secondary machine, two random positions are chosen and their elements are exchanged. This operation enables local adjustments within each priority sequence, enriching the diversity of chromosomes in the population and exploring a broader range of production configurations. Since the mutation only swaps the positions of two elements within a sequence, it achieves local perturbation of processing priorities while maintaining the validity of the priority sequences, and no duplicate elements are introduced.
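The following Python sketch shows one possible implementation of the two operators on a single priority sub-segment; the re-sorting repair after crossover and the duplicate-free swap mutation follow the descriptions above, while the function names and the tie-breaking rule are our own.

```python
import random

def two_point_crossover(seq_a, seq_b):
    """Two-point crossover on one priority sub-segment, followed by a re-sorting
    repair so that each child is again a duplicate-free priority sequence."""
    n = len(seq_a)
    p1, p2 = sorted(random.sample(range(n), 2))
    raw_a = seq_a[:p1] + seq_b[p1:p2] + seq_a[p2:]
    raw_b = seq_b[:p1] + seq_a[p1:p2] + seq_b[p2:]

    def repair(seq):
        # Rank positions by (value, position); the ranks become the new priorities.
        order = sorted(range(n), key=lambda k: (seq[k], k))
        ranks = [0] * n
        for rank, k in enumerate(order):
            ranks[k] = rank
        return ranks

    return repair(raw_a), repair(raw_b)

def two_point_mutation(seq):
    """Swap two randomly chosen positions; the result remains a permutation."""
    p1, p2 = random.sample(range(len(seq)), 2)
    seq = list(seq)
    seq[p1], seq[p2] = seq[p2], seq[p1]
    return seq

# Example on one order-priority segment of a secondary machine with 5 orders:
child1, child2 = two_point_crossover([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
mutant = two_point_mutation(child1)
```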

6. Numerical Experiments

6.1. Parameter Settings

To evaluate the performance of the two proposed algorithms on practical-scale problems, five groups of test instances are designed for both small-scale and large-scale production scenarios, as shown in Table 3. The four numbers in the production scale denote the number of primary processing machines, the number of secondary processing machines, the number of primary processing specifications, and the number of secondary processing specifications, respectively. In the small-scale instances, there are 7 primary processing machines and 3 secondary processing machines, the upstream processor speed is set to 15, and the number of orders is 16. In the large-scale instances, there are 20 primary processing machines and 7 secondary processing machines, the upstream processor speed is set to 30, and the number of orders is 64. In all instances, the quantity of each order is randomly generated within the range [800, 1500], and the delivery due dates are randomly generated within the range [100, 300].
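A minimal sketch of how such instances could be generated is given below; uniform integer sampling is assumed, and the generator itself is an illustration rather than part of the paper’s experimental code.

    import random

    def generate_instance(n_orders, qty_range=(800, 1500), due_range=(100, 300), seed=None):
        """Draw order quantities and due dates uniformly from the stated ranges."""
        rng = random.Random(seed)
        quantities = [rng.randint(*qty_range) for _ in range(n_orders)]
        due_dates = [rng.randint(*due_range) for _ in range(n_orders)]
        return quantities, due_dates

    # Small-scale setting: 16 orders; large-scale setting: 64 orders.
    small_Q, small_D = generate_instance(16, seed=1)
    large_Q, large_D = generate_instance(64, seed=2)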

6.2. Numerical Results and Analysis

Both small-scale and large-scale production instances are used in the numerical experiments to test the performance of the two proposed algorithms. We use the lower bound of the problem as a benchmark for solution quality. In addition, since the EDD rule is widely used in practical production settings, its objective value is also included for comparison. The objective value of sequential production under the EDD rule can be expressed as:
$\max_{o \in O}\{C_o - D_o\} + \sum_{o \in O} \frac{Q_o}{v_0} + \sum_{o \in O} t_{r_o},$
where $C_o$ denotes the completion time of order $o$ under the EDD rule, taking setup times into account, and $t_{r_o}$ represents the setup time before order $o$ starts production; for the first order, the setup time is 0. In summary, our numerical experiments report four comparative indicators: the lower bound, EDD, HAFG, and GAAM. All experiments are conducted on a personal computer equipped with an Intel Core i7 2.60 GHz CPU and 16 GB of RAM, running Windows 10. The algorithms are implemented and executed in MATLAB R2018a.
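A worked sketch of this EDD baseline is shown below. Orders are processed sequentially in due-date order at the upstream speed $v_0$, setup times are inserted between consecutive orders, and the maximum lateness is added to the final completion time. The dictionary-based order representation is an assumption made for illustration.

    def edd_objective(orders, v0):
        """orders: list of dicts with keys 'Q' (quantity), 'D' (due date), and
        'setup' (setup time incurred before the order; ignored for the first one)."""
        seq = sorted(orders, key=lambda o: o['D'])      # EDD sequence
        t = 0.0
        lateness = []
        for k, o in enumerate(seq):
            t += (0.0 if k == 0 else o['setup']) + o['Q'] / v0
            lateness.append(t - o['D'])                 # C_o - D_o
        # t now equals sum(Q_o / v0) + sum(t_ro), i.e. the last completion time
        return max(lateness) + t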
We first separate the two components of the objective function, the maximum completion time and the maximum tardiness, to identify algorithm-specific trade-offs and differences. The decomposition results for small-scale and large-scale instances are reported in Table 4 and Table 5, respectively. To further illustrate the behavior of the algorithms, scatter plots are generated for both instance scales, where each color represents a test instance and each marker shape denotes an algorithm. The horizontal axis represents the maximum completion time of each instance, while the vertical axis represents the maximum tardiness. The corresponding visualization is provided in Figure 3. The results show that HAFG tends to favor reducing the maximum completion time, whereas EDD and GAAM exhibit more balanced performance on the two components, with proportions similar to those of the lower bound. This occurs because, compared with GAAM, HAFG adopts a less flexible machine-connection mechanism and relies on a setup strategy that prioritizes reducing the number of interruptions, thereby contributing more to the reduction of the maximum completion time.
We further conduct numerical experiments on both small-scale and large-scale instances using the total objective function. The comparisons include four indicators: the lower bound, EDD, HAFG, and GAAM. For the genetic algorithm, the population size is set to 80, the crossover rate to 0.8, the mutation rate to 0.6, and the maximum number of iterations to 100.
The numerical results are shown in Table 6 and Table 7. In these tables, $LB$ represents the lower bound of the current instance; $obj_{EDD}$, $obj_{HAFG}$, and $obj_{GAAM}$ denote the objective values obtained by sequential production under the EDD rule, by the HAFG algorithm, and by the GAAM algorithm, respectively. The gaps between the objective values of each algorithm and the lower bound are calculated as $GAP_{EDD} = (obj_{EDD} - LB)/LB \times 100\%$, $GAP_{HAFG} = (obj_{HAFG} - LB)/LB \times 100\%$, and $GAP_{GAAM} = (obj_{GAAM} - LB)/LB \times 100\%$. $t_{HAFG}$ and $t_{GAAM}$ represent the computational times (in seconds) of the HAFG and GAAM algorithms, respectively, for the current instance.
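As a quick arithmetic check of these gap definitions, the sketch below reproduces one of the reported values; the helper function is purely illustrative.

    def gap(obj, lb):
        """Relative deviation of an objective value from the lower bound, in percent."""
        return (obj - lb) / lb * 100.0

    # First small-scale instance in Table 6: obj_EDD = 2445.2, LB = 2325.2
    print(round(gap(2445.2, 2325.2), 1))   # about 5.2, i.e. the roughly 5% gap reported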
The numerical results show that, for small-scale instances, the relative gap between the solutions obtained by the HAFG algorithm and the lower bound ranges from 5% to 9%, making it the least effective of the three compared methods. This is because, in small-scale settings with fewer machines, the setup strategy of HAFG cannot effectively prevent interruptions. The sequential production method based on the EDD rule achieves comparable results, with relative gaps fluctuating between 5% and 6%. The GAAM algorithm performs best, achieving optimal results with a relative gap of 0% in all five small-scale instances. This demonstrates that the genetic algorithm’s capability to explore diverse machine combination configurations and production sequences enables GAAM to identify superior solutions.
For large-scale instances, the sequential production method based on the EDD rule exhibits an average relative gap of around 10% compared with the lower bound, which is worse than the performance of both developed algorithms. The GAAM algorithm performs best, with a relative gap generally between 3% and 4%. The HAFG algorithm performs slightly worse than GAAM, with a relative gap ranging from 4% to 5%. It is worth noting that, due to their structural simplicity, both the EDD algorithm and the heuristic algorithm based on greedy rules exhibit a clear advantage in computation time. In contrast, the GAAM algorithm consumes significantly more time, requiring an average of approximately 327 s when solving large-scale instances.
In addition, we conduct a stability analysis based on the four comparative indicators. Specifically, for both small-scale and large-scale instances, we compute the variance of each indicator, reported in the last rows of Table 6 and Table 7. The results show that, in the small-scale setting, the GAAM algorithm exhibits the smallest variance and thus the highest stability, whereas the HAFG algorithm displays the poorest stability among the methods. In the large-scale setting, GAAM again demonstrates high stability, with a variance even lower than that of the lower bound, while HAFG remains less stable than GAAM. These observations suggest that the GAAM algorithm possesses strong search capabilities, which enable it to avoid highly suboptimal solutions even when it does not reach the optimum. In contrast, the heuristic HAFG algorithm is more sensitive to instance-specific randomness because its setup operations and decision rules are relatively rigid. As a result, it performs well in some instances but yields inferior results in others, leading to a higher overall variance.
A sensitivity analysis with respect to the number of machines is conducted on the large-scale instances. To provide more intuitive numerical insights, the results are presented as a bar chart in Figure 4. The horizontal axis represents the five large-scale instances used in the previous experiments, and the vertical axis shows the objective function values obtained under six different machine scales, so each instance displays six bars. The four numbers characterizing a scale represent the number of primary processing machines, the number of secondary processing machines, the number of primary processing specifications, and the number of secondary processing specifications.
Since the genetic algorithm outperforms the heuristic algorithm on large-scale instances, the objective values shown here are based on the results of the genetic algorithm, with each result being the best value from five repeated runs. From the experiments for machine scales (20,7,8,8), (20,8,8,8), and (20,9,8,8), it can be observed that adding a single secondary processing machine significantly reduces the total objective. In contrast, the results for machine scales (20,7,8,8), (21,7,8,8), and (22,7,8,8) show that adding a primary processing machine produces inconsistent effects, with only modest improvements in processing time. Moreover, the results for machine scales (20,7,8,8), (22,7,8,8), and (20,8,8,8) indicate that adding two primary processing machines simultaneously yields less improvement in the total objective value than adding a single secondary processing machine. Therefore, for manufacturers operating such continuous-flow production lines, adding secondary processing machines is the wiser choice. It should be noted that when the number of machines is already sufficient, the objective function value approaches the problem’s lower bound, and further machine additions then provide minimal benefit.

7. Conclusions

This study addresses a no-wait flow-shop scheduling problem with continuous material flow, considering both machine setup times and rated processing speed limits. The optimization objective is to minimize the sum of the maximum completion time and the maximum tardiness. We analyze the fundamental properties of the problem and derive a lower bound for the objective function. A mixed-integer nonlinear programming model is established and then linearized through a discrete-time formulation, converting the continuous-flow problem into a mixed-integer linear programming model. Two efficient solution algorithms are proposed: a heuristic algorithm with fixed machine links and greedy rules (HAFG) and a genetic algorithm based on altering machine combinations (GAAM). The numerical results indicate that both algorithms significantly outperform the traditional EDD sequencing strategy and provide better scheduling solutions. The GAAM algorithm obtains solutions closest to the lower bound, while the HAFG algorithm demonstrates stronger performance in optimizing the maximum completion time. Moreover, the sensitivity analysis reveals that increasing the number of secondary processing machines improves the overall scheduling performance more effectively than increasing the number of primary processing machines, offering targeted managerial insights for practical production.
While this study provides valuable insights into the three-stage no-wait flow-shop scheduling problem with continuous flow, several avenues warrant further investigation. First, the mathematical properties of the problem deserve deeper study. Unlike traditional job-based scheduling, this problem exhibits two distinctive features: orders can be split, and a single order can be processed simultaneously across two consecutive stages. These characteristics substantially increase the problem’s computational complexity. Future research should concentrate on proving the NP-hardness of the problem and determining whether polynomial-time algorithms exist for special cases. Second, the integration of machine heterogeneity and workforce scheduling presents an important research direction; considering these practical constraints would enhance the applicability of the scheduling models to real-world continuous-flow manufacturing operations. Third, the scheduling algorithms can be further improved by incorporating machine learning and reinforcement learning techniques to raise computational efficiency and solution quality. Such AI-based approaches may discover hidden patterns in production data and identify superior scheduling strategies, ultimately delivering better production outcomes.

Author Contributions

Conceptualization, F.Z.; Methodology, C.Z.; Writing—original draft preparation, C.Z.; Writing—review and editing, C.Z. and M.L.; Supervision, F.Z. and M.L.; Funding acquisition, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 72271051) and the Fundamental Research Funds for the Central Universities (Grant No. 2232018H-07).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study were partly generated through randomized simulation, while the remaining data are fully described within the paper. No separate publicly archived dataset was created.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

    The following abbreviations are used in this manuscript:
BC          Branch and Cut
BC-BIHQL    Bi-Criteria Block Insertion Heuristic Algorithm with Q-Learning
BC-IGQL     Bi-Criteria Iterated Greedy Algorithm with Q-Learning
DABC        Discrete Artificial Bee Colony Algorithm
DNWFSP      Distributed No-Wait Flow-Shop Scheduling Problem
DP          Dynamic Programming
EDD Rule    Scheduling rule that sequences orders by earliest due dates
FPTAS       Fully Polynomial-Time Approximation Scheme
GA          Genetic Algorithm
GAAM        Genetic Algorithm Based on Altering Machine Combinations
HAFG        Heuristic Algorithm with Fixed Machine Links and Greedy Rules
HG          Heuristic Algorithm Based on Greedy Rules
IG          Iterated Greedy
LB          Lower Bound; theoretical minimum value that the objective function cannot go below
MIP         Mixed-Integer Programming
NSGA-II     Non-dominated Sorting Genetic Algorithm II
NWFSP       No-Wait Flow-Shop Scheduling Problem
PIG         Pairwise Iterated Greedy
PLIG        Parameter-Less Iterated Greedy
PPDP        Pseudo-Polynomial Dynamic Programming
QL-IGS      Iterated Greedy Search Enhanced by Q-Learning Method
SDST        Sequence-Dependent Setup Time
TPH         Two-Phased Heuristics
TSHA        Two-Stage Heuristic Algorithm
VND         Variable Neighborhood Descent

Appendix A. Pseudocode of the HAFG Algorithm

Algorithm A1 Dynamic Scheduling with Flexible Machine Loading
Require: Order parameters, machine parameters, production time t = 0, maximum tardiness TT = 0
1: Sort pending orders by due date D_o in ascending order to form priority queue q
2: Initialize machine combination status matrix MO using average allocation
3: while number of pending orders n ≥ 2 do
4:     Identify idle machine combinations not in setup state
5:     Assign orders from q to available combinations, remove them from q, update MO
6:     Calculate the bottleneck speed of each combination
7:     Compute actual speeds and identify order o with earliest completion time Δt
8:     t ← t + Δt, TT ← max{TT, max{t − D_o, 0}}, update MO
9:     if the remaining combinations can match the upstream processor speed then
10:         Determine the next order from q and calculate its setup time t_tr
11:         Update actual speeds and identify order j with earliest completion time Δt
12:         if Δt ≥ t_tr then
13:             t ← t + t_tr, update production quantities at time t and MO
14:         else
15:             Update production quantities at time t + Δt
16:             TT ← max{TT, max{t + Δt − D_j, 0}}, t ← t + t_tr, update MO
17:         end if
18:     else
19:         Determine the next order from q and calculate its setup time t_tr
20:         t ← t + t_tr, update MO
21:     end if
22: end while
23: while at least one order remains incomplete do
24:     Identify idle machine combinations not in setup state
25:     Assign the order with the largest remaining quantity to an available combination
26:     Calculate bottleneck speeds and actual speeds
27:     Identify order j with earliest completion time Δt
28:     t ← t + Δt, TT ← max{TT, max{t − D_j, 0}}
29:     if the remaining combinations can match the upstream processor speed then
30:         Calculate the setup time t_tr for the order with the largest remaining quantity
31:         Update actual speeds and identify order j with earliest completion time Δt
32:         if Δt ≥ t_tr then
33:             t ← t + t_tr, update production quantities at time t and MO
34:         else
35:             Update production quantities at time t + Δt
36:             TT ← max{TT, max{t + Δt − D_j, 0}}, t ← t + t_tr, update MO
37:         end if
38:     else
39:         Calculate t_tr for the order with the largest remaining quantity, t ← t + t_tr, update MO
40:     end if
41: end while
Ensure: Total production time t, maximum tardiness TT

Appendix B. Pseudocode of the GAAM Algorithm Decoding Method

Algorithm A2 Genetic Algorithm Decoding
Require: A chromosome
1: Initialize production time t, maximum tardiness TT, and remaining quantity q_i for each order i
2: while at least one order remains incomplete do
3:     Identify idle machines not in setup state and the incomplete orders
4:     Form machine combinations based on the priority preference of primary processing machines for secondary processing machines
5:     Assign orders to each machine combination based on the priority preference of secondary processing machines for orders
6:     Calculate the bottleneck speed of each combination based on order quantities and the number of assigned machine combinations
7:     if the current allocation cannot match the upstream processor output rate then
8:         Perform setup and reallocate machines using the average allocation method
9:         Calculate the processing speed of each machine combination
10:         Identify order i with the earliest completion and its processing time Δt
11:         if a machine completes setup within time Δt then
12:             t ← time of the earliest setup completion
13:             Update order quantities and the status of machines that completed setup
14:             break
15:         else
16:             t ← t + Δt, q_i ← 0, TT ← max{TT, t − D_i}
17:         end if
18:     else
19:         Remove redundant machines and calculate the actual speed of the active combinations
20:         Identify order i with the earliest completion and its completion time Δt
21:         if a machine completes setup within time Δt then
22:             t ← time of the earliest setup completion
23:             Update order quantities and the status of machines that completed setup
24:             break
25:         else
26:             t ← t + Δt, q_i ← 0, TT ← max{TT, t − D_i}
27:         end if
28:     end if
29: end while
Ensure: Total production time t, maximum tardiness TT

References

  1. İnce, N. A comprehensive literature review of the flow shop group scheduling problems: Systematic and bibliometric reviews. Int. J. Prod. Res. 2024, 62, 4565–4594.
  2. Perez-Gonzalez, P. A review and classification on distributed permutation flow shop scheduling problems. Eur. J. Oper. Res. 2024, 312, 1–21.
  3. Tang, H. Integrated scheduling of multi-objective lot-streaming hybrid flow shop with AGV based on deep reinforcement learning. Int. J. Prod. Res. 2025, 63, 1275–1303.
  4. Zhao, F. A cooperative scatter search with reinforcement learning mechanism for the distributed permutation flow shop scheduling problem with sequence-dependent setup times. IEEE Trans. Syst. 2023, 53, 4899–4911.
  5. Li, Q. Self-adaptive population-based iterated greedy algorithm for distributed permutation flow shop scheduling problem with part of jobs subject to a common deadline constraint. Expert Syst. Appl. 2024, 248, 123278.
  6. Koulamas, C. The no-wait flow shop with rejection. Int. J. Prod. Res. 2021, 59, 1852–1859.
  7. Cheng, C.Y. Minimizing maximum completion time in mixed no-wait flow shops with sequence-dependent setup times. Comput. Ind. Eng. 2019, 130, 338–347.
  8. Cheng, C.Y. New benchmark algorithms for no-wait flow shop group scheduling problem with sequence-dependent setup times. Appl. Soft Comput. 2021, 111, 107705.
  9. Dong, J. No-wait two-stage flow shop problem with multi-task flexibility of the first machine. Inf. Sci. 2021, 544, 25–38.
  10. Yüksel, D. Q-learning guided algorithms for bi-criteria minimization of total flow time and maximum completion time in no-wait permutation flow shops. Swarm Evol. Comput. 2024, 89, 101617.
  11. Avci, M. A branch-and-cut approach for the distributed no-wait flow shop scheduling problem. Comput. Oper. Res. 2022, 148, 106009.
  12. Avci, M. An effective iterated local search algorithm for the distributed no-wait flow shop scheduling problem. Eng. Appl. Artif. Intell. 2023, 120, 105921.
  13. Li, H. A discrete artificial bee colony algorithm for the distributed heterogeneous no-wait flow shop scheduling problem. Appl. Soft Comput. 2021, 100, 106946.
  14. Pan, Y. A novel evolutionary algorithm for scheduling distributed no-wait flow shop problems. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 3694–3704.
  15. Zhao, F. A population-based iterated greedy algorithm for distributed assembly no-wait flow-shop scheduling problem. IEEE Trans. Ind. Inform. 2022, 19, 6692–6705.
  16. Zhao, F. A policy-based meta-heuristic algorithm for energy-aware distributed no-wait flow-shop scheduling in heterogeneous factory systems. IEEE Trans. Syst. Man Cybern. Syst. 2024, 55, 620–634.
  17. Zhao, F. A reinforcement learning-driven brain storm optimisation algorithm for multi-objective energy-efficient distributed assembly no-wait flow shop scheduling problem. Int. J. Prod. Res. 2023, 61, 2854–2872.
  18. Zhang, Z.Q. An enhanced estimation of distribution algorithm with problem-specific knowledge for distributed no-wait flow shop group scheduling problems. Swarm Evol. Comput. 2024, 87, 101559.
  19. Yu, Y. A discrete artificial bee colony method based on variable neighborhood structures for the distributed permutation flow shop problem with sequence-dependent setup times. Swarm Evol. Comput. 2022, 75, 101179.
  20. Song, H.B. An effective two-stage heuristic for scheduling the distributed assembly flow shops with sequence dependent setup times. Comput. Oper. Res. 2025, 173, 106850.
  21. Zhao, C. A knowledge-based two-population optimization algorithm for distributed heterogeneous assembly permutation flow shop scheduling with batch delivery and setup times. Swarm Evol. Comput. 2025, 97, 102035.
  22. Wang, J. A Q-learning artificial bee colony for distributed assembly flow shop scheduling with factory eligibility, transportation capacity and setup time. Eng. Appl. Artif. Intell. 2021, 123, 106230.
  23. Rifai, A.P. Multi-objective distributed reentrant permutation flow shop scheduling with sequence-dependent setup time. Expert Syst. Appl. 2021, 183, 115339.
  24. Ozsoydan, F.B. A trajectory-based algorithm enhanced by Q-learning and cloud integration for hybrid flexible flow shop scheduling problem with sequence-dependent setup times: A case study. Comput. Oper. Res. 2025, 181, 107079.
  25. Missaoui, A. A parameter-less iterated greedy method for the hybrid flow shop scheduling problem with setup times and due date windows. Eur. J. Oper. Res. 2022, 303, 99–113.
  26. Qiao, Y. Adaptive genetic algorithm for two-stage hybrid flow-shop scheduling with sequence-independent setup time and no-interruption requirement. Expert Syst. Appl. 2022, 208, 118068.
  27. Cai, J. A novel shuffled frog-leaping algorithm with reinforcement learning for distributed assembly hybrid flow shop scheduling. Int. J. Prod. Res. 2023, 61, 1233–1251.
  28. He, X. Minimising makespan and total tardiness for the flow shop group scheduling problem with sequence dependent setup times. Eur. J. Oper. Res. 2025, 324, 436–453.
  29. Zhong, Q. An iterative greedy algorithm based on neighborhood search for energy-efficient scheduling of distributed permutation flow shop with sequence-dependent setup time. Expert Syst. Appl. 2026, 296, 129012.
  30. Chen, X. Two-machine flow shop scheduling with a common due date to maximize total early work. Eur. J. Oper. Res. 2022, 300, 504–511.
  31. Schaller, J. Scheduling in a no-wait flow shop to minimise total earliness and tardiness with additional idle time allowed. Int. J. Prod. Res. 2022, 60, 5488–5504.
  32. Li, Z. Bi-objective hybrid flow shop scheduling with common due date. Oper. Res. 2021, 21, 1153–1178.
  33. Nasrollahi, V. Minimizing the weighted sum of maximum earliness and maximum tardiness in a single-agent and two-agent form of a two-machine flow shop scheduling problem. Oper. Res. 2022, 22, 1403–1442.
  34. Koulamas, C. Flow shop scheduling with two distinct job due dates. Comput. Ind. Eng. 2022, 163, 107835.
  35. Geng, X.N. Scheduling on proportionate flow shop with job rejection and common due date assignment. Comput. Ind. Eng. 2023, 181, 109317.
  36. Liu, R. Lot-streaming flow shop scheduling under stochastic due dates. Int. J. Prod. Res. 2025, 63, 7039–7060.
  37. Xiong, F. Just-in-time scheduling for a distributed concrete precast flow shop system. Comput. Oper. Res. 2021, 129, 105204.
  38. Geng, X.N. Scheduling on proportionate flow shop with total late work and job rejection. Oper. Res. 2025, 25, 82.
Figure 1. Illustration of Flow Shop Halt and Resume Operations.
Figure 2. Schematic Diagram of Three-Stage Flow Shop Production.
Figure 3. Scatter plot of the decomposed objective function results in small-scale and large-scale experiments.
Figure 4. Sensitivity Analysis of Target Value and Machine Quantity.
Table 1. Comparison of Studies on Flow-Shop Scheduling Optimization.
Literature | No-Wait Flow Shop | Setup Time | Due Date | Flow Type | Optimization Method
Avci et al. [11] | | | | Discrete | BC & VND
Yüksel et al. [10] | | | | Discrete | BC-IGQL, BC-BIHQL
Li et al. [13] | | | | Discrete | DABC
Cheng et al. [7] | | | | Discrete | PIG
Ozsoydan et al. [24] | | | | Discrete | QL-IGS
He et al. [28] | | | | Discrete | IG
Song et al. [20] | | | | Discrete | TSHA
Chen et al. [30] | | | | Discrete | FPTAS
Li et al. [32] | | | | Discrete | NSGA-II
Missaoui et al. [25] | | | | Discrete | PLIG
Geng et al. [35] | | | | Discrete | DP
Schaller et al. [31] | | | | Discrete | TPH
Geng et al. [38] | | | | Discrete | PPDP
This research | | | | Continuous | HG & GA
Table 2. Parameters and variables.
Sets
$I$: Set of primary processing machines, indexed by $i$.
$J$: Set of secondary processing machines, indexed by $j$.
$O$: Set of orders, indexed by $o$.
$A$: Set of primary processing specifications, indexed by $a$.
$B$: Set of secondary processing specifications, indexed by $b$.
Parameters
$a_o$: The primary processing specification of order $o \in O$.
$b_o$: The secondary processing specification of order $o \in O$.
$t_1$: The setup time of primary processing machines to change processing specifications.
$t_2$: The setup time of secondary processing machines to change processing specifications.
$v_a$: The nominal processing speed of a primary processing machine under specification $a$.
$v_b$: The nominal processing speed of a secondary processing machine under specification $b$.
$v_0$: Constant processing speed of the upstream processor.
$Q_o$: Quantity required by order $o \in O$.
$D_o$: Due date required by order $o \in O$.
Variables
$x_{i,j,t}$: Binary variable, 1 if the product processed by primary processing machine $i$ at time $t$ is sent to secondary processing machine $j$; otherwise, 0.
$e_t$: Binary variable, 1 if the upstream processor is operating normally at time $t$; otherwise, 0.
$s^1_{i,o,t}$: Binary variable, 1 if primary processing machine $i$ is processing order $o$ at time $t$; otherwise, 0.
$s^2_{j,o,t}$: Binary variable, 1 if secondary processing machine $j$ is processing order $o$ at time $t$; otherwise, 0.
$\alpha^1_{i,t}$: Binary variable, 1 if primary processing machine $i$ starts a setup operation at time $t$; otherwise, 0.
$\alpha^2_{j,t}$: Binary variable, 1 if secondary processing machine $j$ starts a setup operation at time $t$; otherwise, 0.
$C_o$: Continuous variable, completion time of order $o$.
$C_{max}$: Continuous variable, maximum completion time.
$TT$: Continuous variable, maximum tardiness.
$v_t \ge 0$: Continuous variable, processing speed of the upstream processor at time $t$.
$v^1_{i,o,t} \ge 0$: Continuous variable, processing speed of primary processing machine $i$ for order $o$ at time $t$.
$v^2_{j,o,t} \ge 0$: Continuous variable, processing speed of secondary processing machine $j$ for order $o$ at time $t$.
Table 3. Parameter settings for small-scale and large-scale test instances.
Instance Group | Production Scale | Primary Processing Bottleneck | Secondary Processing Bottleneck | Upstream Processor Speed | Number of Orders
Small-scale instances | (7,3,8,8) | [3,4,3,4,3,4,3,4] | [5,6,5,6,5,6,5,6] | 15 | 16
Large-scale instances | (20,7,8,8) | [3,4,3,4,3,4,3,4] | [5,6,5,6,5,6,5,6] | 30 | 64
Table 4. Decomposition results of the two components of the objective function in small-scale experiments.
Instance | LB C_max | LB TT | EDD C_max | EDD TT | HAFG C_max | HAFG TT | GAAM C_max | GAAM TT
1 | 1308.6 | 1016.6 | 1368.6 | 1076.6 | 1312.6 | 1164.6 | 1308.6 | 1016.6
2 | 1162.1 | 874.1 | 1220.1 | 932.1 | 1162.1 | 975.1 | 1162.1 | 874.1
3 | 1151.3 | 854.3 | 1210.3 | 913.3 | 1155.2 | 1022.3 | 1151.3 | 854.3
4 | 1223.2 | 939.2 | 1283.2 | 999.2 | 1227.2 | 1033.2 | 1223.2 | 939.2
5 | 1274.9 | 976.9 | 1332.8 | 1034.9 | 1274.8 | 1119.8 | 1274.9 | 976.9
Table 5. Decomposition results of the two components of the objective function in large-scale experiments.
Instance | LB C_max | LB TT | EDD C_max | EDD TT | HAFG C_max | HAFG TT | GAAM C_max | GAAM TT
1 | 2605.0 | 2313.0 | 2847.0 | 2555.0 | 2618.4 | 2492.4 | 2641.9 | 2415.0
2 | 2372.9 | 2076.9 | 2598.9 | 2302.9 | 2381.1 | 2242.1 | 2431.9 | 2166.7
3 | 2354.9 | 2057.9 | 2572.9 | 2275.9 | 2368.3 | 2247.3 | 2441.8 | 2156.8
4 | 2396.7 | 2112.7 | 2625.7 | 2341.7 | 2409.2 | 2300.6 | 2434.9 | 2155.0
5 | 2441.5 | 2143.5 | 2684.5 | 2386.5 | 2454.2 | 2343.2 | 2458.2 | 2178.9
Table 6. Numerical Results for Small-Scale Instances: 7 primary processing machines, 3 secondary processing machines, and 8 processing specifications for both stages.
Instance | LB | obj_EDD | GAP_EDD | obj_HAFG | GAP_HAFG | t_HAFG (s) | obj_GAAM | GAP_GAAM | t_GAAM (s)
1 | 2325.2 | 2445.2 | 0.05 | 2477.2 | 0.07 | 1 | 2325.2 | 0.00 | 147
2 | 2036.1 | 2152.1 | 0.06 | 2137.1 | 0.05 | 1 | 2036.1 | 0.00 | 143
3 | 2005.5 | 2123.5 | 0.06 | 2177.5 | 0.09 | 1 | 2005.5 | 0.00 | 133
4 | 2162.4 | 2282.4 | 0.06 | 2260.4 | 0.05 | 1 | 2162.4 | 0.00 | 162
5 | 2251.7 | 2367.7 | 0.05 | 2394.7 | 0.06 | 1 | 2251.7 | 0.00 | 151
Average | 2156.2 | 2274.2 | 0.05 | 2289.4 | 0.06 | 1 | 2156.2 | 0.00 | 147
Variance | 14,971 | 15,133 | - | 16,852 | - | - | 14,971 | - | -
Table 7. Numerical Results for Large-Scale Instances: 20 primary processing machines, 7 secondary processing machines, and 8 processing specifications for both stages.
Instance | LB | obj_EDD | GAP_EDD | obj_HAFG | GAP_HAFG | t_HAFG (s) | obj_GAAM | GAP_GAAM | t_GAAM (s)
1 | 4918.0 | 5402.0 | 0.10 | 5110.9 | 0.04 | 1 | 5056.9 | 0.03 | 323
2 | 4449.9 | 4901.9 | 0.10 | 4623.3 | 0.04 | 1 | 4598.5 | 0.03 | 344
3 | 4412.8 | 4848.8 | 0.10 | 4615.7 | 0.05 | 1 | 4589.9 | 0.04 | 326
4 | 4509.5 | 4967.5 | 0.10 | 4709.9 | 0.04 | 1 | 4637.0 | 0.03 | 297
5 | 4585.1 | 5071.1 | 0.10 | 4797.5 | 0.05 | 1 | 4720.7 | 0.03 | 315
Average | 4575.1 | 5038.3 | 0.10 | 4771.5 | 0.04 | 1 | 4720.6 | 0.03 | 327
Variance | 32,800 | 38,576 | - | 33,180 | - | - | 30,416 | - | -