Article

LLM-Assisted Non-Dominated Sorting Genetic Algorithm for Solving Distributed Heterogeneous No-Wait Permutation Flowshop Scheduling

1
School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
2
Guangzhou Institute of Technology, Xidian University, Guangzhou 510555, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 10131; https://doi.org/10.3390/app151810131
Submission received: 11 August 2025 / Revised: 8 September 2025 / Accepted: 12 September 2025 / Published: 17 September 2025

Abstract

In distributed manufacturing systems, minimizing completion time and improving resource utilization are critical for enhancing operational efficiency. Conventional scheduling models for centralized flowshops struggle to capture the complexity of distributed heterogeneous systems, while existing studies often overlook the combined challenges of heterogeneous factories, no-wait constraints, and sequence-dependent setup times (SDST). This study focuses on the distributed heterogeneous no-wait permutation flowshop scheduling problem with SDST (DHNPFSP-SDST), which is proven NP-hard via polynomial reduction from the classic permutation flowshop scheduling problem (PFSP). We first establish a bi-objective optimization model that simultaneously minimizes makespan and total machine non-working time, serving as a standard experimental foundation. The core innovation is a large language model-assisted non-dominated sorting genetic algorithm (LLM-NSGAII). Through a structured prompt framework, LLM-NSGAII leverages the LLM's zero-shot in-context learning to dynamically orchestrate selection, crossover, and mutation operations, replacing the fixed operators of traditional NSGAII. Experiments on extended benchmarks show that, compared with mainstream multi-objective algorithms, LLM-NSGAII is competitive across most instances. The results provide a proof of concept for integrating LLMs with evolutionary algorithms, opening new avenues for algorithmic optimization.

1. Introduction

The global shift toward digital transformation in manufacturing has necessitated the transition from single-factory production to distributed heterogeneous factory networks, driven by demands for resilience, flexibility, and sustainability [1]. In this paradigm, production tasks are allocated across geographically dispersed facilities with varying capabilities, introducing complexities in synchronization and resource optimization.
The traditional factory scheduling model takes single-factory, single-objective optimization as its core, focusing on solving the flowshop scheduling problem (FSP) and job shop scheduling problem (JSP) through mathematical programming and heuristic algorithms. Since Johnson's rule first provided an optimal schedule for the two-machine flowshop, a research paradigm centered on minimizing makespan has gradually formed in this field [2]. The traditional FSP is inherently constrained to idealized scenarios with homogeneous machines and fixed process routes, rendering it inadequate for addressing the complexities of multi-stage parallel processing. This limitation has spurred research into the flexible flowshop scheduling problem (FFSP), whose core innovation lies in permitting parallel machines at each production stage, enabling dynamic machine selection for workpieces within stages. The hybrid flowshop scheduling problem (HFSP) allows independent optimization of the workpiece processing sequence in different stages while placing parallel machines in some stages, promoting the development of meta-heuristic algorithms [3,4]. As an important variant of FSP, the permutation flowshop scheduling problem (PFSP) requires that the processing sequence of workpieces on all machines is identical, transforming the problem into a single sequence optimization through the sequence-consistency assumption [5]. Algorithm improvements for PFSP focus on operator design: Wang et al. [6] hybridize a genetic algorithm and variable neighborhood search to provide better exploration and exploitation of the search space. Li et al. [7] designed a single-insert-based local search, a multiple local search strategy, and a doubling perturbation mechanism to generate new individuals. Li et al. [8] proposed an improved artificial bee colony algorithm with Q-learning for solving PFSP by minimizing the maximum completion time.
However, traditional scheduling models designed for centralized flowshops cannot address the challenges of distributed heterogeneous environments, which must account for differences in productivity levels and machine efficiency across factories [9]. Zhang et al. [10] study the scheduling problem of a multi-stage fine-manufacturing system, which includes distributed fabrication of jobs, assembly of jobs into products, and further differentiation of products to meet customized requirements. Shao et al. [11] study a distributed heterogeneous hybrid flowshop scheduling problem under nonidentical electricity prices, which applies to manufacturing enterprises that operate several heterogeneous factories. Li et al. [12], inspired by a real-world problem encountered in blanking workshop systems within the manufacturing of large engineering equipment, developed a double deep Q-network-based co-evolution algorithm to tackle the distributed heterogeneous hybrid flowshop scheduling problem with multiple job priorities. At the same time, sequence-dependent setup times (SDST) and the no-wait constraint cannot be ignored in practical just-in-time production systems such as chemistry and pharmacy. Zhao et al. [13] investigate an energy-efficient distributed no-wait flowshop scheduling problem with SDST, for which a mixed-integer linear programming model is constructed.
A recent comprehensive review of the literature on distributed scheduling highlights a significant opportunity to enhance the representation of operational constraints in optimization models [14]. First, although heterogeneous factory configurations and machine-dependent processing parameters are common in industrial settings, many existing models either assume homogeneous machine capabilities or do not fully address the variability in setup times between consecutive jobs. Integrating SDST into optimization frameworks is a complex challenge, as it introduces computational considerations that affect algorithmic efficiency. Furthermore, the no-wait constraint, under which jobs must proceed without interruption between machines, is an important requirement in industries such as steel casting and food processing [15], yet it has been less frequently incorporated into multi-objective distributed scheduling models. In addition, minimizing the maximum completion time is the optimization goal of most studies. However, empirical data show that idle machines in distributed systems cause energy waste in high-throughput manufacturing environments. To some extent, the makespan criterion attends only to the critical machines in critical factories; the utilization of machines in non-critical factories should also be considered to save resources. Table 1 presents an overview of key studies in scheduling research.
In recent years, the rapid development of large language models (LLMs) has brought transformative impacts across various research domains and unlocked new possibilities for tackling optimization problems. Endowed with robust natural language understanding and generation capabilities, LLMs can adeptly handle complex problem descriptions and constraints, emerging as a promising tool in the optimization domain [16]. Huang et al. [17] propose to enhance optimization performance using a multi-modal LLM capable of processing both textual and visual prompts for deeper insight into the optimization problem at hand. In the context of evolutionary algorithms, the integration of LLMs represents a cutting-edge frontier [18]. Brahmachary et al. [19] introduce a novel population-based method for numerical optimization using LLMs; their hypothesis is supported through numerical examples spanning benchmark and industrial engineering problems. Chiquier et al. [20] introduce an evolutionary search algorithm that uses a large language model and its in-context learning abilities to iteratively mutate a concept bottleneck of attributes for classification. Leung et al. [21] propose three reusable, specific, and customizable prompts, using prompt engineering to assist in designing course syllabi, lesson materials, and assessment questions, aiming to improve course content and save educators' time. In the recruitment field, LLMs are integrated into job portals: they extract key information from resumes and job descriptions through fine-tuned prompt engineering and, combined with retrieval-augmented generation techniques, achieve more accurate candidate-job matching, streamlining the traditionally time-consuming and manual recruitment process [22]. Moreover, the performance of LLMs themselves has become a research hotspot, and genetic algorithms have been customized to generate and optimize prompts for LLMs.
By using LLM-defined crossover and mutation operators on textual individuals, this approach enables the automated discovery of high-performing prompt configurations, achieving an average accuracy 24% higher than that of manually crafted prompts across various tasks [23]. Li et al. [24] construct prompts using triplets from a knowledge graph together with task descriptions, allowing LLMs to generate text that effectively embeds secret information while maintaining coherence and contextual consistency. Although the cooperation between LLMs and evolutionary algorithms (EAs) is still in its early stages, it holds enormous potential for future development [25].
Aiming at the above gap, this paper proposes the distributed heterogeneous no-wait permutation flowshop scheduling problem with sequence-dependent setup times (DHNPFSP-SDST) and establishes a multi-objective mathematical model aiming to minimize the maximum completion time (Cmax) and the total non-working time of machines (Ttot). To solve this problem, a novel LLM-assisted non-dominated sorting genetic algorithm (NSGAII) [26], named LLM-NSGAII, is proposed, which combines an LLM with evolutionary computation for adaptive operator selection. This interdisciplinary method uses the reasoning ability of the LLM to supplement traditional evolutionary methods, potentially achieving faster convergence and higher-quality Pareto solutions. The contributions of this work are threefold:
(1)
A multi-objective optimization framework is developed for the distributed heterogeneous no-wait permutation flowshop scheduling problem with sequence-dependent setup times (DHNPFSP-SDST), explicitly minimizing both Cmax and Ttot. This extends the single-objective optimization paradigm of prior work and addresses practical production scenarios where decision-makers must balance operational efficiency against resource utilization.
(2)
We propose the LLM-NSGAII that integrates LLM into the traditional NSGAII framework to solve DHNPFSP-SDST. By designing a structured prompt system to guide evolutionary operations via in-context learning, it introduces natural language-based intelligence into multi-objective optimization. This innovation demonstrates LLMs’ potential to enhance evolutionary algorithms, offering a new paradigm for algorithm design in complex combinatorial optimization.
(3)
Experiments on extended benchmark instances show that the LLM-assisted method achieves results competitive with the most advanced algorithms across different problem scales, verifying the feasibility of integrating LLMs into evolutionary computation for complex scheduling optimization.
The rest of this paper is organized as follows. Section 2 describes the proposed DHNPFSP-SDST and its mathematical model. Section 3 details the concrete algorithm flow and strategies of the LLM-NSGAII. Section 4 introduces the generation of instances, and comparison experiments are conducted. Finally, the conclusion and subsequent directions for research are given in Section 5.

2. Problem Formulation of the DHNPFSP-SDST

2.1. Problem Definition

The DHNPFSP-SDST can be defined as follows. There are n jobs J = {J1, J2, …, Jn} to be processed by a set of f factories F = {F1, F2, …, Ff}. All factories have the same workflow, each with a flowshop production system consisting of the same set of m machines {M1, M2, …, Mm} in a fixed permutation. Each factory can process all jobs, each machine can process at most one operation at a time, and each job can only be processed by one factory and handled by at most one machine at a time. Each job passes through the machines in the fixed order M1 → M2 → … → Mm. Here pl,i,j is the processing time of Jj on Mi in Fl, and the SDST of processing Jv immediately after Jj on Mi in Fl is denoted by sdstl,i,j,v. On each machine, the SDST conversion operation is performed after the prior job is completed and before the next job starts processing; it depends on the job currently being processed and the job scheduled to be processed after it. The DHNPFSP-SDST requires reasonably assigning jobs to the distributed factories and obtaining the optimal processing sequence of jobs in each factory so as to minimize Cmax and Ttot. In addition, the problem under study satisfies the following basic assumptions:
  • A job can only be processed completely in a shop and cannot be transferred to another shop during processing.
  • There are no considerations on transportation in a factory and between factories, and transportation times are included in processing time.
  • Machines can process jobs continuously and do not experience facility malfunctions or maintenance issues.
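To make this notation concrete, the processing-time and setup-time data of an instance can be held in two arrays. The following is a minimal sketch; the array names, instance size, and uniform time ranges are illustrative assumptions, not values taken from this section:

```python
import numpy as np

rng = np.random.default_rng(0)

f, m, n = 2, 3, 4  # factories, machines per factory, jobs

# p[l, i, j]: processing time of job J_(j+1) on machine M_(i+1) in factory
# F_(l+1); heterogeneous factories each draw their own times, here from [10, 50)
p = rng.integers(10, 50, size=(f, m, n))

# sdst[l, i, j, v]: setup time on machine M_(i+1) in factory F_(l+1) when job
# J_(v+1) is processed immediately after job J_(j+1), here from [10, 30)
sdst = rng.integers(10, 30, size=(f, m, n, n))

assert p.shape == (f, m, n)
assert sdst.shape == (f, m, n, n)
```

The indexing mirrors the paper's subscripts pl,i,j and sdstl,i,j,v, shifted to 0-based array positions.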

2.2. Mathematical Model

In this section, a mixed-integer linear programming (MILP) model adapted to the DHNPFSP-SDST is presented. This model is developed based on the MILP model for the distributed heterogeneous flowshop scheduling problem with no-wait and SDST constraints (DHNWFSP-SDST) proposed by Zhao et al. [27,28]. It is further extended and tailored to align with the "permutation flowshop" characteristics and "multi-objective optimization" requirements specific to this study. The key modifications and extensions are twofold. First, in terms of optimization objectives, the original single-objective framework of Zhao et al.'s model, focused solely on minimizing the makespan, is expanded to a bi-objective formulation. A second objective, minimizing the total non-working time of machines across all factories, is incorporated to address the practical production trade-off between operational efficiency and resource utilization. Second, regarding constraint logic, modifications are made to accommodate the defining features of permutation flowshops, namely a consistent number of machines across all factories and a fixed processing sequence of jobs through these machines. Specifically, differential parameters in the original model that catered to variable machine quantities across heterogeneous factories are removed to enforce a unified machine sequence for all factories. Notably, the core constraint governing SDST is retained to reflect real-world production preparation requirements. The relevant notations and their meanings are defined in Abbreviations. The DHNPFSP-SDST MILP model is described in detail as follows.
Objective:
$$\mathrm{Minimize}\quad C_{max} = \max\{\, C_{l,m,k} \,\}, \quad k \in \{1, 2, \ldots, n\},\ l \in \{1, 2, \ldots, f\} \tag{1}$$
$$\mathrm{Minimize}\quad T_{tot} = \sum_{l=1}^{f} \sum_{i=1}^{m} \sum_{k=2}^{n} \left( C_{l,i,k} - C_{l,i,k-1} \right) \tag{2}$$
Subject to the following:
$$\sum_{l=1}^{f} \sum_{k=1}^{n} X_{l,k,j} = 1, \quad j \in \{1, 2, \ldots, n\} \tag{3}$$
$$\sum_{l=1}^{f} \sum_{j=1}^{n} X_{l,k,j} = 1, \quad k \in \{1, 2, \ldots, n\} \tag{4}$$
$$C_{l,1,1} = \sum_{j=1}^{n} X_{l,1,j} \cdot p_{l,1,j}, \quad l \in \{1, 2, \ldots, f\} \tag{5}$$
$$C_{l,i,1} = C_{l,i-1,1} + \sum_{j=1}^{n} X_{l,1,j} \cdot p_{l,i,j}, \quad l \in \{1, 2, \ldots, f\},\ i \in \{2, 3, \ldots, m\} \tag{6}$$
$$C_{l,i,k} \geq C_{l,i,k-1} + \sum_{j=1}^{n} X_{l,k,j} \cdot X_{l,k-1,v} \cdot sdst_{l,i,j,v} + \sum_{j=1}^{n} X_{l,k,j} \cdot p_{l,i,j}, \quad l \in \{1, 2, \ldots, f\},\ i \in \{2, \ldots, m\},\ k \in \{2, \ldots, n\} \tag{7}$$
$$C_{l,i,k} \geq C_{l,i-1,k} + \sum_{j=1}^{n} X_{l,k,j} \cdot X_{l,k-1,v} \cdot sdst_{l,i,j,v} + \sum_{j=1}^{n} X_{l,k,j} \cdot p_{l,i,j}, \quad l \in \{1, 2, \ldots, f\},\ i \in \{2, \ldots, m\},\ k \in \{2, \ldots, n\} \tag{8}$$
$$C_{l,i,k} \geq 0, \quad l \in \{1, 2, \ldots, f\},\ i \in \{1, 2, \ldots, m\},\ k \in \{1, 2, \ldots, n\} \tag{9}$$
$$X_{l,k,j} \in \{0, 1\}, \quad l \in \{1, 2, \ldots, f\},\ k \in \{1, 2, \ldots, n\},\ j \in \{1, 2, \ldots, n\} \tag{10}$$
The objective function (1) minimizes the makespan, and the objective function (2) minimizes the total non-working time of machines, defined for each machine as the interval between the end of the previous job and the start of the next job, including waiting time and preparation time. Constraints (3) and (4) specify that Jj occurs exactly once in the scheduling sequence of its assigned factory. Constraints (5) and (6) give the completion-time constraints on each machine for the job processed at the first position in a factory. Constraint (7) represents the completion-time constraint of two adjacent jobs on the same machine in Fl. Constraint (8) ensures that a job must be completed on the previous machine before it can start processing on the next machine. Constraint (9) restricts the completion time of a job to be greater than or equal to 0. Constraint (10) restricts the decision variable to binary values, 0 or 1.

2.3. Illustrative Example

This sub-section briefly introduces an example to describe the considered problem clearly. Seven jobs are assigned to three factories and processed on four machines: f = 3, m = 4, n = 7, with π1 = {1, 2}, π2 = {3, 4, 5}, and π3 = {6, 7}. Gantt charts for the three factories are presented in Figure 1, where a colored rectangle represents the processing time of an operation of a job, a gray rectangle represents the SDST, and Δt indicates the non-working time of the machine.
Note that rectangles of the same color indicate operations of the same job; for example, the four operations of J1 (J1-1, J1-2, J1-3, and J1-4) are marked in purple. As shown in Figure 2, the corresponding job sequence in the three factories is π = {(1, 2), (3, 4, 5), (6, 7)}.
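For readers who want to reproduce the timing logic of this example, the sketch below (our own helper with hypothetical names, not the authors' implementation) computes a no-wait schedule for a single factory: each job's start on the first machine is delayed just enough that every machine is free, and set up, by the time the job reaches it, matching the rule that setup happens after the prior job completes and before the next job starts.

```python
def schedule_factory(seq, p, sdst):
    """No-wait schedule for one factory (illustrative sketch).

    seq  : job sequence, e.g. [3, 4, 5]
    p    : p[i][j]       processing time of job j on machine i
    sdst : sdst[i][j][v] setup on machine i when job v follows job j
    Returns (completion, makespan) where completion[i][k] is the completion
    time of the k-th job of seq on machine i.
    """
    m = len(p)
    avail = [0] * m  # machine-ready times
    completion = [[0] * len(seq) for _ in range(m)]
    prev = None
    for k, j in enumerate(seq):
        # prefix[i]: offset of the job's start on machine i from its start
        # on machine 0 (no-wait: operations follow back to back)
        prefix = [0] * m
        for i in range(1, m):
            prefix[i] = prefix[i - 1] + p[i - 1][j]
        # earliest start on machine 0 that clears every machine's readiness
        start = 0
        for i in range(m):
            ready = avail[i] + (sdst[i][prev][j] if prev is not None else 0)
            start = max(start, ready - prefix[i])
        for i in range(m):
            completion[i][k] = start + prefix[i] + p[i][j]
            avail[i] = completion[i][k]
        prev = j
    return completion, completion[m - 1][-1]

# Tiny check: 2 machines, 2 jobs, zero setups.
comp, mk = schedule_factory([0, 1],
                            [[3, 2], [4, 1]],
                            [[[0, 0], [0, 0]], [[0, 0], [0, 0]]])
assert mk == 8  # job 1 is delayed to start at t = 5 so it never waits
```

Summing the per-factory makespans' maximum and the idle gaps of each machine then yields the two objectives of the model.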

2.4. Problem Complexity Analysis

To establish the theoretical foundation for algorithm selection and validate the rationality of using heuristic methods to solve DHNPFSP-SDST, this section formally proves that DHNPFSP-SDST is an NP-hard problem via polynomial time reduction.
The PFSP is a classic combinatorial optimization problem with proven NP-hard complexity [5]. Its core definition is as follows: Given n jobs and m machines arranged in a fixed order, each job must be processed on the machines in the sequence M1 →M2 →… →Mm, and the processing order of jobs on all machines is consistent. The objective of PFSP is to determine the optimal job permutation to minimize the maximum completion time across all jobs.
We demonstrate the NP-hardness of DHNPFSP-SDST by showing that the classic PFSP can be obtained from it in polynomial time. The key is to prove that DHNPFSP-SDST degenerates into a standard PFSP under specific conditions, and that this degeneration process is computationally feasible within polynomial time. The degeneration conditions are defined as follows:
Single-factory constraint: Set the number of distributed factories f = 1. This eliminates the job assignment step across multiple factories, reducing the problem to a single-factory scheduling scenario.
Zero setup time constraint: Set all sequence-dependent setup times sdstl,i,j,v = 0. This removes the sequence-dependent preparation time between adjacent jobs on the same machine, aligning with the basic PFSP assumption that ignores setup times.
Relaxed no-wait constraint: Lift the no-wait requirement, including allowing waiting time between successive operations of a job. This adjustment ensures consistency with PFSP, which does not impose mandatory no-wait constraints.
When the above three conditions are satisfied, DHNPFSP-SDST is equivalent to PFSP: the single factory's m machines follow a fixed processing sequence, jobs are processed in a consistent permutation across all machines, and the optimization objective is identical to that of PFSP. Notably, the degeneration process only involves adjusting three sets of parameters (f, sdstl,i,j,v, and the no-wait constraint) without altering the core structure of the problem. The time complexity of this adjustment is O(f + n²mf).
According to the transitivity property of NP-hard problems, if problem A can be reduced to problem B in polynomial time, and A is NP-hard, then B is also NP-hard. In this study, we have shown that the NP-hard PFSP can be obtained by degenerating DHNPFSP-SDST in polynomial time. Thus, DHNPFSP-SDST is proven to be NP-hard. This conclusion justifies the use of heuristic algorithms for solving DHNPFSP-SDST, as exact algorithms are computationally infeasible for NP-hard problems when the problem scale increases.
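The three degeneration conditions translate directly into code. The sketch below (our own illustrative helper) builds the data of the degenerate PFSP instance by keeping a single factory and zeroing all setup times; relaxing the no-wait rule is a solver-side change rather than a change to the instance data:

```python
def degenerate_to_pfsp(p, sdst):
    """Apply the degeneration conditions to a DHNPFSP-SDST instance.

    p    : p[l][i][j]       processing times
    sdst : sdst[l][i][j][v] setup times (ignored: all setups become zero)
    Returns (p_pfsp, sdst_pfsp) for the resulting single-factory PFSP.
    """
    p_pfsp = p[0]  # single-factory constraint: keep factory 0 only
    m, n = len(p_pfsp), len(p_pfsp[0])
    # zero setup time constraint: sdst[i][j][v] = 0 for all i, j, v
    sdst_pfsp = [[[0] * n for _ in range(n)] for _ in range(m)]
    return p_pfsp, sdst_pfsp
```

Touching each of the f·m·n² setup entries once is what gives the O(f + n²mf) bound stated above.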

3. Proposed LLM-NSGAII for the DHNPFSP-SDST

3.1. Framework of Proposed LLM-NSGAII

The proposed LLM-NSGAII uses the advanced knowledge understanding and reasoning abilities of LLMs to explore a new evolutionary paradigm for NSGAII. LLM-NSGAII uses LLMs to completely replace the selection, crossover, and mutation processes, thus establishing a new intelligent evolution mechanism for solving the proposed DHNPFSP-SDST.
As shown in Figure 1, similar to traditional NSGAII, the overall framework starts by generating an initial population of candidate solutions. In the subsequent evolution process, however, the LLM comes into play. After population initialization, the information related to these solutions, including job assignment, job sequence, and the corresponding objective values, is encoded into a natural-language-like format suitable for LLM processing and input to the model. Equipped with pre-trained knowledge and reasoning capabilities, the LLM then undertakes the tasks originally performed by the genetic operations of NSGAII. For the selection process, the LLM analyzes the encoded solution information, evaluates the suitability and potential of each candidate solution against the multi-objective criteria of the DHNPFSP-SDST, and selects the most promising solutions for the next generation. In the crossover operation, the LLM generates new combinations of solution elements by understanding the structural and functional relationships within solutions, mimicking the concept of genetic crossover but using reasoning-driven recombination. Similarly, for the mutation process, the LLM introduces strategic changes into selected solutions to explore unknown regions of the solution space while maintaining solution quality.
After the LLM-driven evolutionary operations, the obtained solutions undergo non-dominated sorting, which is essential for handling the multi-objective nature of the problem and maintaining population diversity. This LLM-based evolutionary iteration repeats until the predefined termination conditions are met. Finally, the Pareto front of near-optimal solutions is output, providing a set of trade-off solutions of the DHNPFSP-SDST for decision-makers.
The time complexity of LLM-NSGAII can be decomposed into two primary components: the core evolutionary algorithm operations inherited from traditional NSGAII, and the additional overhead introduced by LLM-mediated genetic operators. The total time complexity is expressed as follows: O(G × N × (L + Tinf) + G × N2), where O(G × N × (L + Tinf)) accounts for the LLM-specific overhead—each generation requires O(N) calls, each incurring costs proportional to prompt length (L) and inference latency (Tinf), and O(G × N2) retains the core complexity of traditional NSGAII, primarily dominated by non-dominated sorting operations.

3.2. Solution Representation and Initialization

The individual encoding in the proposed LLM-NSGAII is designed as a one-dimensional integer array. This encoding strategy combines n job elements and (f − 1) delimiter elements to represent the processing sequences of jobs across multiple factories in a compact and computationally friendly format. Specifically, each job (Jj, j = 1, 2, …, n) is represented by a unique integer identifier. The delimiter, set as “−1”, separates the job sequences corresponding to different factories. An individual X can be expressed as X = [x1, −1, x2, −1, …, −1, xf], where xl denotes the job processing sequence in factory Fl.
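Under this encoding, splitting an individual into per-factory sequences and re-joining them are simple operations. A minimal sketch (helper names are ours):

```python
def decode(individual):
    """Split a delimiter-encoded individual into per-factory job lists."""
    factories, current = [], []
    for gene in individual:
        if gene == -1:          # delimiter: close the current factory
            factories.append(current)
            current = []
        else:
            current.append(gene)
    factories.append(current)   # last factory has no trailing delimiter
    return factories

def encode(factories):
    """Re-join per-factory job lists with -1 delimiters."""
    individual = []
    for l, seq in enumerate(factories):
        if l > 0:
            individual.append(-1)
        individual.extend(seq)
    return individual
```

For the illustrative example of Section 2.3, decode([1, 2, -1, 3, 4, 5, -1, 6, 7]) yields the three factory sequences {1, 2}, {3, 4, 5}, and {6, 7}, and encode is its inverse.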
The initialization of the population in LLM-NSGAII draws inspiration from the NEH algorithm [29] and introduces two multi-individual-applicable greedy algorithms, the Cmax-Greedy Algorithm and the SDST-Greedy Algorithm, each applied with a probability of 0.5; both consist of two main steps, Job Sequence Shuffling and Sequential Job Allocation.
  • Cmax-Greedy Initialization
Sequence Shuffling: First, randomly permute the sequence of all n jobs, generating a new job order Xshuffled.
Sequential Allocation: Initialize the current completion time of each factory, Clcurrent = 0. Then, for each job Jj in Xshuffled, find the factory l* = arg minl Clcurrent and append Jj to xl*.
  • SDST-Greedy Initialization
Sequence Shuffling: As in the Cmax-Greedy Algorithm, randomly permute the sequence of all n jobs, generating a new job order Xshuffled.
Sequential Allocation: For each job Jj in Xshuffled, assign Jj to the factory that requires the least SDST to process it.
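As an illustration, the Cmax-Greedy variant can be sketched as follows, with the simplifying assumption that Clcurrent is approximated by the accumulated processing time of the jobs already assigned to factory l; the SDST-Greedy variant is analogous, using setup times instead:

```python
import random

def cmax_greedy_init(n, f, job_load):
    """Cmax-Greedy initialization (sketch, not the authors' code).

    job_load[l][j] approximates the load job j adds to factory l
    (here: its total processing time in that factory).
    Returns per-factory job sequences.
    """
    order = list(range(n))
    random.shuffle(order)                 # Job Sequence Shuffling
    current = [0] * f                     # C_l^current, all start at 0
    factories = [[] for _ in range(f)]
    for j in order:                       # Sequential Job Allocation
        l_star = min(range(f), key=lambda l: current[l])
        factories[l_star].append(j)
        current[l_star] += job_load[l_star][j]
    return factories
```

With equal loads this alternates assignments between factories, producing the balanced starting individuals the greedy rule aims for.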

3.3. LLM-Assisted Evolution Operation

In the LLM-NSGAII, the evolution operations, including selection, crossover, and mutation, are orchestrated by the LLM in a zero-shot manner. The combination of prompt engineering techniques and problem-tailored operators harnesses the advanced language understanding and reasoning capabilities of the LLM, endowing the algorithm with enhanced adaptability and search efficiency in tackling the DHNPFSP-SDST. Specifically, parent selection and the genetic variations, encompassing crossover and mutation, are executed through the in-context learning mechanism of the LLM, enhanced by meticulously crafted prompts. The prompt consists of four parts:
(1)
Problem definition and solution representation: This part outlines the DHNPFSP-SDST, including the number of factories and jobs. It also specifies the input format of solutions, encoded as a one-dimensional integer array with delimiters separating factory-specific job sequences; each solution carries corresponding attributes to support decision-making. The optimization objectives are clearly defined.
(2)
Input solutions: A set of solutions from the current population is provided, each formatted as an integer array and accompanied by attribute values reflecting its performance on the defined objectives. These examples serve as in-context learning material for the LLM, illustrating the relationship between solution structures and their optimization outcomes.
(3)
Evolutionary operation instructions: Explicit instructions guide the LLM to perform parent selection, crossover, and mutation. Selection involves randomly choosing diverse parent solutions. Crossover operators (CrO1, CrO2, CrO3) and mutation operators (M1, M2, M3) are described, with parameters such as the number of crossover points or mutation sites specified. The LLM is tasked with generating new solutions by applying these operations in a zero-shot manner.
(4)
Output legalization: The output is required to adhere to the specified encoding format, contain no duplicate jobs, and satisfy the problem constraints. The LLM must return a predefined number of valid solutions, formatted as integer arrays, suitable for direct integration into the next generation of the NSGAII population.
Figure 2 illustrates an example of the constructed prompt when using LLM to replace the evolution process. The problem definition and solution representation explain the background of the problem, the input solutions provide actual data for solutions, evolutionary operation instructions guide the criteria for generating new legal codes, and the output legalization standardizes the output format.
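As an illustration of how such a four-part prompt could be assembled programmatically, the sketch below concatenates the parts described above; the wording is ours, not the exact prompt shown in Figure 2:

```python
def build_prompt(n, f, solutions, num_offspring):
    """Assemble the four-part prompt (illustrative sketch).

    solutions: list of (code, cmax, ttot) tuples from the population.
    """
    # Part 1: problem definition and solution representation
    part1 = (
        f"Problem: DHNPFSP-SDST with {f} factories and {n} jobs. "
        "A solution is a one-dimensional integer array in which -1 "
        "separates the job sequences of different factories. "
        "Objectives: minimize Cmax and total non-working time Ttot."
    )
    # Part 2: input solutions with their objective values
    part2 = "Current solutions:\n" + "\n".join(
        f"code={code}, Cmax={cmax}, Ttot={ttot}"
        for code, cmax, ttot in solutions
    )
    # Part 3: evolutionary operation instructions
    part3 = (
        "Select diverse parents, then apply the crossover operators "
        "(CrO1, CrO2, CrO3) and mutation operators (M1, M2, M3) "
        "described above to create offspring."
    )
    # Part 4: output legalization
    part4 = (
        f"Return exactly {num_offspring} valid solutions as integer "
        "arrays: no duplicate jobs, same format as the input."
    )
    return "\n\n".join([part1, part2, part3, part4])
```

The returned string would then be sent to the LLM as a single user message, and the reply parsed back into integer arrays.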
It is important to underscore that, unlike the traditional approach of implementing evolutionary operations via explicit coding, the LLM-NSGAII only outlines the general processes of parent selection, crossover, and mutation for the LLM. Instead of detailed procedural instructions, it relies on natural-language prompts to communicate high-level directives to the LLM. To enhance the LLM's comprehension of domain-specific knowledge, three types of crossover and mutation methods tailored to the individual encoding format are provided as part of the prompt and supplied to the LLM.
CrO1: Obtain the list of solution.factories_completion_time attributes for two parent solutions. Identify the factory index corresponding to the maximum value in this list. Then, swap the job sequences representing this factory in the solution.code of the two parent solutions.
CrO2: Obtain the list of solution.factories_total_sdst attributes of two parent solutions. Locate the factory index corresponding to the maximum value in the list. After that, exchange the job sequences representing this factory in the solution.code of the two parent solutions.
CrO3: Perform multi-point crossover on the code of two parent solutions. That is, randomly select multiple crossover points in the code sequences of the two parents, and exchange the segments between these points to generate new offspring codes.
M1: Randomly select any two elements in the code and swap their positions.
M2: Choose an arbitrary interval of elements in the code and reverse the order of the elements within this interval.
M3: Select the job sequences of any two factories presented in the code and swap these two factory job sequences.
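The sequence-based operators admit a direct implementation. The sketch below (our own code, shown for the generic operators CrO3, M1, and M2; CrO1 and CrO2 additionally require the per-factory attribute lists) may produce offspring with duplicate jobs after crossover, which is exactly what the output-legalization step is meant to repair:

```python
import random

def cro3(parent1, parent2, points=2):
    """CrO3: multi-point crossover, swapping segments between cut points.
    May create duplicate jobs; output legalization repairs such offspring."""
    cuts = sorted(random.sample(range(1, len(parent1)), points))
    c1, c2 = list(parent1), list(parent2)
    # pair cut points as (start, end) swap intervals
    for a, b in zip(cuts[::2], cuts[1::2] + [len(parent1)]):
        c1[a:b], c2[a:b] = c2[a:b], c1[a:b]
    return c1, c2

def m1(code):
    """M1: swap two random positions in the code."""
    c = list(code)
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def m2(code):
    """M2: reverse the elements within a random interval."""
    c = list(code)
    i, j = sorted(random.sample(range(len(c)), 2))
    c[i:j + 1] = reversed(c[i:j + 1])
    return c
```

Note that M1 and M2 operate on raw positions, so a delimiter "−1" may move, which relocates jobs between factories, a legal outcome under the encoding.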
This approach significantly reduces the barrier to entry and boosts the algorithm’s adaptability across diverse problem scenarios. Moreover, by incorporating these problem-specific operators into the prompt design, the LLM-NSGAII ensures that the generated solutions are not only relevant but also directly applicable to the DHNPFSP-SDST, streamlining the integration of the LLM’s output into the overall evolutionary framework.

4. Experiment Results and Analysis

4.1. Dataset and Metrics

The experimental datasets employed in this study are developed by extending the datasets proposed by Huang et al. [30]. The original datasets can be downloaded from https://people.idsia.ch/~monaldo/fjsp.html (accessed on 4 May 2023) and have been widely utilized in research on the DHPFSP, providing a solid foundation for our investigation. To address the specific requirements of the DHNPFSP-SDST, we introduce an additional layer of complexity by randomly generating the SDST data using a uniform distribution U[10, 30], consistent with the setup in industrial scheduling benchmarks. The total number of jobs ranges over n ∈ {20, 50, 100, 200}, the number of factories over f ∈ {2, 3}, the number of machines in a factory over m ∈ {5, 10, 20}, and the sdst values are randomly generated in [0, 50]. Finally, 22 instances are named DHNPFSP-SDST_n_m_f; for instance, DHNPFSP-SDST_50_8_3 denotes a problem instance with 50 jobs, eight machines per factory, and three distributed heterogeneous factories. The stopping criterion is MaxFE = 20 × n; this setting follows standard practice in evolutionary algorithm evaluations for scheduling problems, ensuring that the computational effort scales proportionally with problem size. To ensure fairness, all compared algorithms adhere to the same MaxFE limit, eliminating bias from unequal computational resources.
Hypervolume (HV) [31] is used to evaluate the comprehensive performance of all algorithms. HV measures the volume enclosed between the normalized Pareto front obtained by an algorithm and a reference point. The reference point is determined by the worst values of the two objectives (makespan and total machine non-working time) among all non-dominated solutions obtained by all algorithms, and is normalized to (0.5, 0.5), which is chosen based on the normalized range of our multi-objective optimization targets. This reference point is applied consistently across all experiments to guarantee the comparability of HV results. The larger the HV of a solution set, the better its diversity and convergence.
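For two minimization objectives, HV reduces to a simple sweep over the front. The sketch below is a generic 2-objective HV routine with the reference point as a parameter; it is not the paper's implementation, and objective values are assumed to be pre-normalized.

```python
def hypervolume_2d(points, ref=(1.0, 1.0)):
    """2-objective hypervolume (minimization): the area dominated by
    the non-dominated points and bounded above by `ref`. Points at or
    beyond the reference point contribute nothing."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:            # sweep in increasing f1
        if f2 < prev_f2:          # point is on the non-dominated front
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For example, the two-point front {(0.2, 0.5), (0.4, 0.3)} with reference (1, 1) covers the union of two boxes of area 0.40 and 0.12, so its HV is 0.52.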

4.2. Comparison Experiment

On the one hand, to assess the effectiveness of the LLM in LLM-NSGAII, the standard NSGAII is employed to solve the DHNPFSP-SDST directly, without the assistance of LLMs, establishing a baseline for comparison. On the other hand, to evaluate the performance of the proposed algorithm in solving the DHNPFSP-SDST, it is compared with three mainstream multi-objective optimization algorithms: MOEAD [32], IBEA [33], and GREA [34].
All algorithms are executed under the same stopping criterion, and the population size is set to 50. Other parameters of the comparison algorithms are configured with their recommended settings to ensure a fair comparison. For LLM-NSGAII, we use the deepseek-reasoner model (which points to DeepSeek-R1-0528) as the LLM. Due to the complexity of the DHNPFSP-SDST, all algorithms are run independently 20 times on every instance. After the experiments, the obtained Pareto fronts and the values of the evaluation metrics for each algorithm are carefully analyzed. Table 2 shows the statistical results (mean and standard deviation) of HV for all comparison algorithms on the 22 instances. The symbols "−/=/+" indicate that a result is significantly inferior, equal, or superior to LLM-NSGAII, and the best value in each row is marked in bold.

4.3. Results and Analysis

4.3.1. Comparison Between LLM-NSGAII and NSGAII

As observed from Table 2, across most instances, the mean HV values of LLM-NSGAII generally outperform or are comparable to those of the traditional NSGAII. For example, in the “20_5_2” instance, the mean HV of LLM-NSGAII (0.539) is slightly lower than NSGAII’s 0.541, yet the difference is marginal. In more representative cases like “50_5_2”, LLM-NSGAII achieves a mean HV of 0.531, whereas NSGAII only reaches 0.428, indicating a notable performance gap. Such results tentatively suggest that integrating LLM into the evolutionary process may enhance the algorithm’s ability to explore the solution space. By leveraging in-context learning and prompt-driven operations, LLM-NSGAII introduces adaptivity into selection, crossover, and mutation phases—capabilities that traditional NSGAII, reliant on predefined operators, lacks. While not definitive, these outcomes hint at the potential of LLMs to augment evolutionary algorithms, offering a new avenue for algorithmic development. Pareto front visualizations in Figure 3 further support this trend.
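The prompt-driven variation step mentioned above can be sketched as follows. This is a hypothetical outline, not the paper's prompt framework: the prompt wording is illustrative, and `call_llm` stands in for the actual deepseek-reasoner API call.

```python
import json

def llm_variation(population, call_llm):
    """Sketch of LLM-driven variation: serialize the population to
    JSON, ask the LLM to act as selection/crossover/mutation operator,
    and parse its JSON reply back into candidate solutions.
    `call_llm` is a placeholder for the real LLM service call."""
    prompt = (
        "You are an evolutionary operator for a distributed no-wait "
        "flowshop scheduling problem. Given these solutions and their "
        "objective values (makespan, idle time), select parents and "
        "apply crossover and mutation. Reply with a JSON list of "
        "offspring job permutations only.\n" + json.dumps(population)
    )
    reply = call_llm(prompt)
    try:
        offspring = json.loads(reply)
    except json.JSONDecodeError:
        return population  # fall back to the parents on a malformed reply
    return offspring
```

The fallback branch matters in practice: zero-shot replies are not guaranteed to be valid JSON, so a robust framework must keep the population intact when parsing fails.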

4.3.2. Comparison Between LLM-NSGAII and Other Multi-Objective Algorithms

When benchmarked against mainstream multi-objective algorithms (MOEAD, IBEA, GREA), LLM-NSGAII demonstrates competitive performance across most instances but shows limitations at larger scales. For example, in “50_5_2”, LLM-NSGAII’s mean HV (0.531) surpasses MOEAD (0.414), IBEA (0.433), and GREA (0.444). However, in instances with a larger number of jobs n, IBEA occasionally exhibits stronger scalability, achieving higher HV in select cases. These findings suggest LLM-NSGAII holds promise for solving complex, multi-constraint combinatorial problems like the DHNPFSP-SDST. By combining LLMs’ natural language reasoning with NSGAII’s evolutionary framework, the algorithm introduces flexibility in navigating solution spaces, an advantage that traditional algorithms, bound by fixed operator designs, struggle to match. Still, its performance relative to mature methods like MOEAD underscores room for refinement, particularly in handling large-scale problems. Even so, LLM-NSGAII’s performance across most scenarios offers a valuable proof of concept: LLMs can meaningfully contribute to algorithmic optimization. While not yet surpassing all traditional methods, it validates the potential of merging LLM intelligence with evolutionary frameworks, opening exploratory paths for both fields to address complex real-world challenges.

4.3.3. Pareto Front Analysis

The Pareto fronts visualized in Figure 3 and Figure 4 offer valuable insights into the multi-objective optimization performance of LLM-NSGAII when juxtaposed against contrast algorithms.
The Pareto fronts show that LLM-NSGAII generally exhibits a wider spread and broader coverage of the Pareto front than NSGAII in most instances. Take the “04DHNPFSP-SDST_50_5_2” instance as an illustration: LLM-NSGAII’s front spans a larger range of makespan and total non-working time values. This indicates a more extensive exploration of trade-off solutions, which can be attributed to the LLM’s capacity to dynamically adjust crossover and mutation strategies according to the problem context. However, this advantage is not absolute across all problem scales and configurations; in some cases the improvement is marginal or less pronounced.
When compared to GREA, IBEA, and MOEAD, a Pareto front obtained by LLM-NSGAII tends to lie closer to the better values in both objectives, especially when it comes to minimizing Cmax while balancing the Ttot. The degree of proximity can vary, and in some cases, the difference from other algorithms might not be statistically significant, depending on the problem instance and evaluation metrics. Overall, when pitted against state-of-the-art methods, LLM-NSGAII shows promise in terms of diversity maintenance and adaptability to complex constraints. Although it may not uniformly and decisively outperform these algorithms in all aspects of convergence and superiority, its ability to adapt to the problem and exhibit efficiency in exploring trade-offs is evident.
In conclusion, even if LLM-NSGAII does not achieve a clear-cut and overwhelming superiority over all comparison algorithms in every aspect, its adaptability to different problem characteristics and its efficiency in exploring diverse solutions make it a valuable candidate for further investigation. The Pareto front analysis reveals that LLM-NSGAII, while not a perfect solution that outshines all others in every metric, exhibits notable adaptability and efficiency in handling multi-objective scheduling problems. It contributes to the growing body of research exploring the integration of large language models with evolutionary algorithms, offering new perspectives on how such hybrid approaches can be tailored to complex real-world optimization tasks.

4.3.4. Statistical Significance Testing

To further evaluate the statistical significance of performance differences among the algorithms, the Wilcoxon test and the Friedman test are conducted. The results of the Wilcoxon test are reported in Table 3, and the average ranks of the algorithms are listed in Table 4.
The Wilcoxon signed-rank test is utilized to compare the significance of the difference between two algorithms. In this test, “+” denotes the total number of instances where LLM-NSGAII is superior to the comparison algorithm, “−” the number where it is inferior, and “≈” the number where the two perform equivalently. The normalized Wilcoxon statistic is denoted by Z, and the asymptotic significance between LLM-NSGAII and the comparison algorithm is given by the p-value; the labels “significant” and “not significant” summarize the outcome at α = 0.05. In the comparisons between LLM-NSGAII and NSGAII, MOEAD, or GREA, the p-values are all less than 0.05, and the number of instances where LLM-NSGAII outperforms these algorithms is substantially larger than the number where it underperforms. This indicates that, from a statistical perspective, LLM-NSGAII significantly outperforms these three traditional evolutionary algorithms. For the comparison between LLM-NSGAII and IBEA, the p-value is 0.2076, which is greater than 0.05, and the disparity between the number of winning and losing instances is small (13 vs. 9). This is consistent with the observation in Table 2 that IBEA performs marginally better in large-scale instances (n = 200 and 500), while LLM-NSGAII excels in small- and medium-scale instances.
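The paired Wilcoxon comparison described above can be sketched with the standard normal approximation. This is a generic textbook implementation, not the exact procedure or software used in the paper; zero differences are dropped and tied absolute differences receive average ranks.

```python
from math import sqrt

def wilcoxon_z(hv_a, hv_b):
    """Normal-approximation Wilcoxon signed-rank statistic for paired
    per-instance HV values. Returns (wins, losses, ties, Z), where
    wins counts instances with hv_a > hv_b."""
    diffs = [a - b for a, b in zip(hv_a, hv_b)]
    wins = sum(d > 0 for d in diffs)
    losses = sum(d < 0 for d in diffs)
    ties = sum(d == 0 for d in diffs)
    d = sorted((abs(x), x) for x in diffs if x != 0)
    n = len(d)
    if n == 0:
        return wins, losses, ties, 0.0   # all pairs tied
    # assign average ranks to tied absolute differences
    ranks, i = [0.0] * n, 0
    while i < n:
        j = i
        while j < n and d[j][0] == d[i][0]:
            j += 1
        avg = (i + 1 + j) / 2            # mean of 1-based ranks i+1..j
        for t in range(i, j):
            ranks[t] = avg
        i = j
    w_plus = sum(r for r, (_, x) in zip(ranks, d) if x > 0)
    mu = n * (n + 1) / 4
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return wins, losses, ties, (w_plus - mu) / sigma
```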
The Friedman test, in conjunction with Holm correction, is used to assess whether there are significant differences in the overall performance rankings of the five algorithms (LLM-NSGAII, NSGAII, MOEAD, IBEA, GREA) across all 22 instances. The critical difference (CD) in the Friedman test is calculated as CD = q_α · √(k(k + 1)/(6 · ins)), where k is the number of algorithms, ins is the total number of experimental instances, and q_α is the critical value from the Studentized range distribution; for α = 0.05 and five algorithms, q_0.05 = 2.807, giving a CD of 1.337. Table 4 presents the total rank sums and average ranks of the five algorithms across all 22 instances. LLM-NSGAII achieves the lowest average rank, followed by IBEA, GREA, NSGAII, and MOEAD. The global Friedman test yields a statistic of χ²_F = 32.67 with a p-value < 0.001, indicating significant overall performance differences among the algorithms. After applying Holm correction for multiple comparisons, the results show that LLM-NSGAII’s average rank is significantly lower (better) than those of GREA, NSGAII, and MOEAD (the differences exceed the CD of 1.337). However, the difference between LLM-NSGAII and IBEA is within the CD range, confirming no significant gap between these two algorithms, consistent with the Wilcoxon test findings. These results collectively validate that LLM-NSGAII exhibits statistically significant superiority over three traditional evolutionary algorithms (NSGAII, GREA, MOEAD) and maintains competitive performance comparable to IBEA, reinforcing the effectiveness of integrating the LLM to orchestrate evolutionary operations.
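The CD formula above is easy to check numerically; with k = 5 and ins = 22 it reproduces the 1.337 threshold used in the comparisons.

```python
from math import sqrt

def critical_difference(k, ins, q_alpha=2.807):
    """Critical difference for the Friedman post hoc comparison:
    CD = q_alpha * sqrt(k * (k + 1) / (6 * ins)), where k is the
    number of algorithms and ins the number of instances."""
    return q_alpha * sqrt(k * (k + 1) / (6 * ins))
```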

5. Conclusions and Future Work

This study delves into the application of LLM-NSGAII to the proposed DHNPFSP-SDST. Through a structured prompt-based framework, we leverage the LLM’s language understanding and reasoning capabilities to drive evolutionary operations. In the comparative experiments, when pitted against traditional NSGAII, LLM-NSGAII demonstrates comparable or better performance in most instances. The HV metric results and Pareto front analyses show that integrating the LLM enhances the evolutionary algorithm’s ability to explore the solution space flexibly. This demonstrates the potential of LLMs to advance evolutionary algorithms, suggesting that the combination of these two fields could be a fruitful avenue for future research.
It is important to contextualize LLM-NSGAII’s performance relative to the baseline algorithms (e.g., NSGAII, IBEA, GREA) implemented in the MATLAB-based PlatEMO (v3.6) platform. A direct comparison of computation times between LLM-NSGAII and these baselines is methodologically challenging: whereas the baselines rely solely on in-platform algorithmic operations (selection, crossover, mutation), LLM-NSGAII integrates calls to external LLM services (cloud-based or locally deployed). The latency of these LLM calls is influenced by factors unrelated to the core algorithmic design, such as cloud service load, GPU configuration, or API response variability, making runtime comparisons unrepresentative of the framework’s inherent efficiency. Thus, the value of LLM-NSGAII lies not in incremental gains in runtime or solution quality, but in its innovative paradigm for bridging LLMs and evolutionary computation: it demonstrates how language models can autonomously guide evolutionary operations without manual parameter tuning, reducing the “human effort” traditionally required to adapt algorithms to complex problems. Notably, recent research has delved into the performance implications of LLM inference; for instance, some studies focus on understanding how to optimize LLM inference on CPUs [35,36].
Meanwhile, our exploration also reveals other limitations: while LLM-NSGAII shows promise, there is still room for improvement, especially in handling extremely complex and large-scale combinatorial optimization problems. In such cases, the overhead of LLM calls can become non-trivial, and the current prompt design may not fully capture the nuances of high-dimensional solution spaces. For future work, we will explore remedies such as lightweight prompt engineering strategies and hybridization of LLM-guided operators with traditional coded operators to address these scalability challenges. Another limitation of this work is the use of a single extended benchmark for evaluation, which leaves room for validating generalizability in more varied real-world contexts.
In conclusion, this study takes a preliminary but meaningful step in combining LLMs and evolutionary algorithms for solving complex scheduling problems. While challenges remain, the potential benefits of this integration—including more adaptive, less human-dependent optimization—are substantial. We anticipate that future research in this area will focus on optimizing LLM integration, for example, lightweight model fine-tuning and prompt compression, to mitigate runtime overhead, as well as expanding the framework to more diverse optimization scenarios. Such advances could yield more efficient and intelligent optimization solutions for a wide range of real-world problems.

Author Contributions

Z.Z.: Writing—original draft preparation, methodology; H.Z.: Writing—review and editing, supervision; W.Z.: Investigation, formal analysis, resources; X.B.: Conceptualization; X.Y.: Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key Research and Development Program (2023YFB3307400).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

Notations of DHNPFSP MILP:
Indices:
  i: machine index, i ∈ {1, 2, …, m}
  j, v: job index, j, v ∈ {1, 2, …, n}
  l: factory index, l ∈ {1, 2, …, f}
  k: position index, k ∈ {1, 2, …, n}
Parameters:
  f: the total number of factories
  n: the total number of jobs
  m: the total number of machines in a factory
  F: {F1, F2, …, Ff}, factory set
  N: {J1, J2, …, Jn}, job set
  M: {M1, M2, …, Mm}, machine set in a factory
  p_{l,i,j}: processing time of Jj on Mi in Fl
  sdst_{l,i,j,v}: setup time of processing Jv after Jj on Mi in Fl
Decision variables:
  X_{l,k,j}: binary variable set to 1 when Jj is processed at position k and assigned to Fl, and 0 otherwise
  C_{l,i,k}: completion time of the job at position k in Fl on Mi
  Cmax: the makespan
  Ttot: the total non-working time of machines in all factories

References

  1. Soori, M.; Arezoo, B.; Dastres, R. Digital twin for smart manufacturing. A review. Sustain. Manuf. Serv. Econ. 2023, 62, 100017. [Google Scholar] [CrossRef]
  2. Xu, Y.; Wang, L. Differential evolution algorithm for hybrid flow-shop scheduling problems. Syst. Eng. Electron. 2011, 22, 794–798. [Google Scholar] [CrossRef]
  3. Jiang, E.; Wang, L.; Wang, J. Decomposition-based multi-objective optimization for energy-aware distributed hybrid flowshop scheduling with multiprocessor tasks. Tsinghua Sci. Technol. 2021, 26, 646–663. [Google Scholar] [CrossRef]
  4. Yuan, Y.; Xu, H. Multiobjective flexible job shop scheduling using memetic algorithms. IEEE Trans. Autom. Sci. Eng. 2015, 12, 336–353. [Google Scholar] [CrossRef]
  5. Men, T.; Pan, Q.K. A distributed heterogeneous permutation flowshop scheduling problem with lot-streaming and carryover sequence-dependent setup time. Swarm Evol. Comput. 2021, 60, 100804. [Google Scholar]
  6. Wang, K.; Luo, H.; Liu, F.; Yue, X. Permutation flowshop scheduling with batch delivery to multiple customers in supply chains. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1826–1837. [Google Scholar] [CrossRef]
  7. Li, X.; Li, M. Multiobjective local search algorithm-based decomposition for multiobjective permutation flowshop scheduling problem. IEEE Trans. Eng. Manag. 2015, 62, 544–557. [Google Scholar] [CrossRef]
  8. Li, H.; Gao, K.; Duan, P.Y.; Li, J.Q.; Zhang, L. An improved artificial bee colony algorithm with q-learning for solving permutation flow-shop scheduling problems. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 2684–2693. [Google Scholar] [CrossRef]
  9. Chen, S.Y.; Wang, X.W.; Wang, Y.; Gu, X.S. A modified adaptive switching-based many-objective evolutionary algorithm for distributed heterogeneous flowshop scheduling with lot-streaming. Swarm Evol. Comput. 2023, 81, 101353. [Google Scholar] [CrossRef]
  10. Zhang, G.; Liu, B.; Wang, L.; Xing, K. Distributed heterogeneous co-evolutionary algorithm for scheduling a multistage fine-manufacturing system with setup constraints. IEEE Trans. Cybern. 2024, 54, 1497–1510. [Google Scholar] [CrossRef]
  11. Shao, W.; Shao, Z.; Pi, D. An ant colony optimization behavior-based moea/d for distributed heterogeneous hybrid flowshop scheduling problem under nonidentical time-of-use electricity tariffs. IEEE Trans. Autom. Sci. Eng. 2022, 19, 3379–3394. [Google Scholar] [CrossRef]
  12. Li, R.; Gong, W.; Wang, L.; Lu, C.; Pan, Z.; Zhuang, X. Double DQN-based coevolution for green distributed heterogeneous hybrid flowshop scheduling with multiple priorities of jobs. IEEE Trans. Autom. Sci. Eng. 2024, 21, 6550–6562. [Google Scholar] [CrossRef]
  13. Zhao, F.; Jiang, T.; Wang, L. A reinforcement learning driven cooperative meta-heuristic algorithm for energy-efficient distributed no-wait flow-shop scheduling with sequence-dependent setup time. IEEE Trans. Ind. Inform. 2023, 19, 8427–8440. [Google Scholar] [CrossRef]
  14. Fu, Y.; Hou, Y.; Wang, Z.; Wu, X.; Gao, K.; Wang, L. Distributed scheduling problems in intelligent manufacturing systems. Tsinghua Sci. Technol. 2021, 26, 625–645. [Google Scholar] [CrossRef]
  15. Li, H.R.; Li, X.Y.; Gao, L. A discrete artificial bee colony algorithm for the distributed heterogeneous no-wait flowshop scheduling problem. Appl. Soft Comput. 2021, 100, 106946. [Google Scholar] [CrossRef]
  16. Liu, S.; Chen, C.; Qu, X.; Tang, K.; Ong, Y.S. Large language models as evolutionary optimizers. In Proceedings of the IEEE Congress on Evolutionary Computation, Yokohama, Japan, 30 June–5 July 2024; pp. 1–8. [Google Scholar]
  17. Huang, Y.; Zhang, W.; Feng, L.; Wu, X.; Tan, K.C. How multimodal integration boost the performance of LLM for optimization: Case study on capacitated vehicle routing problems. arXiv 2024, arXiv:2403.01757. [Google Scholar] [CrossRef]
  18. Wu, X.; Wu, S.H.; Wu, J.; Feng, L.; Tan, K.C. Evolutionary computation in the era of large language model: Survey and roadmap. IEEE Trans. Evol. Comput. 2025, 29, 534–554. [Google Scholar] [CrossRef]
  19. Brahmachary, S.; Joshi, S.M.; Panda, A.; Koneripalli, K.; Sagotra, A.K.; Patel, H.; Sharma, A.; Jagtap, A.D.; Kalyanaraman, K. Large language model-based evolutionary optimizer: Reasoning with elitism. arXiv 2024, arXiv:2403.02054. [Google Scholar] [CrossRef]
  20. Chiquier, M.; Mall, U.; Vondrick, C. Evolving interpretable visual classifiers with large language models. arXiv 2024, arXiv:2404.09941. [Google Scholar] [CrossRef]
  21. Leung, J.; Shen, Z. Prompt Engineering for Curriculum Design. In Proceedings of the International Conference on Educational Technology, Wuhan, China, 13–15 September 2024; pp. 97–101. [Google Scholar]
  22. Haneef, F.; Varalakshmi, M.; P U, P.M. Leveraging RAG for Effective Prompt Engineering in Job Portals. In Proceedings of the International Conference on Computational Intelligence, Okayama, Japan, 21–23 November 2025; pp. 717–721. [Google Scholar]
  23. Loss, L.A.; Dhuvad, P. From Manual to Automated Prompt Engineering: Evolving LLM Prompts with Genetic Algorithms. In Proceedings of the IEEE Congress on Evolutionary Computation, Hangzhou, China, 8–12 June 2025; pp. 1–8. [Google Scholar]
  24. Li, Y.; Zhang, R.; Liu, J.; Lei, Q. A Semantic Controllable Long Text Steganography Framework Based on LLM Prompt Engineering and Knowledge Graph. IEEE Signal Process. Lett. 2024, 31, 2610–2614. [Google Scholar] [CrossRef]
  25. Petke, J.; Haraldsson, S.O.; Harman, M.; Langdon, W.B.; White, D.R.; Woodward, J.R. Genetic improvement of software: A comprehensive survey. IEEE Trans. Evol. Comput. 2017, 22, 415–432. [Google Scholar] [CrossRef]
  26. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  27. Zhao, F.; Hu, X.; Wang, L.; Zhao, J.; Tang, J. A reinforcement learning brain storm optimization algorithm (BSO) with learning mechanism. Knowl.-Based Syst. 2022, 235, 107645. [Google Scholar] [CrossRef]
  28. Zhao, F.; Wang, Z.; Wang, L. A reinforcement learning driven artificial bee colony algorithm for distributed heterogeneous no-wait flowshop scheduling problem with sequence-dependent setup times. IEEE Trans. Autom. Sci. Eng. 2023, 20, 2305–2320. [Google Scholar] [CrossRef]
  29. Ruiz, R.; Pan, Q.K.; Naderi, B. Iterated greedy methods for the distributed permutation flowshop scheduling problem. Omega 2019, 83, 213–222. [Google Scholar] [CrossRef]
  30. Huang, K.H.; Li, R.; Gong, W.Y.; Wang, R.; Wei, H. BRCE: Bi-roles co-evolution for energy-efficient distributed heterogeneous permutation flowshop scheduling with flexible machine speed. Complex Intell. Syst. 2023, 9, 4805–4816. [Google Scholar] [CrossRef]
  31. Shang, K.; Ishibuchi, H.; He, L.; Pang, L.M. A survey on the hypervolume indicator in evolutionary multi-objective optimization. IEEE Trans. Evol. Comput. 2021, 25, 1–20. [Google Scholar] [CrossRef]
  32. Zhang, Q.F.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  33. Zitzler, E.; Künzli, S. Indicator-based selection in multi-objective search. In Proceedings of the Parallel Problem Solving from Nature, Birmingham, UK, 18–22 September 2004; pp. 832–842. [Google Scholar]
  34. Yang, S.; Li, M.; Liu, X.; Zheng, J. A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2013, 17, 721–736. [Google Scholar] [CrossRef]
  35. Na, S.; Jeong, G.; Ahn, B.H.; Young, J.; Krishna, T.; Kim, H. Understanding Performance Implications of LLM Inference on CPUs. In Proceedings of the IEEE International Symposium on Workload Characterization, Vancouver, BC, Canada, 15–17 September 2024; pp. 169–180. [Google Scholar]
  36. Shahandashti, K.K.; Sivakumar, M.; Mohajer, M.M.; Belle, A.B.; Wang, S.; Lethbridge, T. Assessing the Impact of GPT-4 Turbo in Generating Defeaters for Assurance Cases. In Proceedings of the IEEE/ACM First International Conference on AI Foundation Models and Software Engineering, Lisbon, Portugal, 14 April 2024; pp. 52–56. [Google Scholar]
Figure 1. Example of the Gantt charts of f1, f2 and f3.
Figure 2. Flowchart of LLM-NSGAII (“...” means that the set consisting of all solutions is represented in JSON format. With the exception of the specific attributes of one solution that has been presented, the ellipsis denotes other solutions that share identical attributes to the displayed one.).
Figure 3. Pareto fronts of some instances obtained by different algorithms.
Figure 4. Comparison of algorithms across different problem scales based on HV.
Table 1. Key literature review on scheduling optimization.

| Reference | Problem Type | Key Constraints | Optimization Objective |
|---|---|---|---|
| Johnson [2] | FSP | Idealized scenario, homogeneous machines, fixed process routes | makespan |
| Wang et al. [6], Li et al. [7], Li et al. [8] | PFSP | Sequence consistency of jobs across machines | makespan |
| Zhang et al. [10] | DPFSP | Distributed fabrication–assembly–differentiation process, customized requirements | makespan |
| Shao et al. [11], Li et al. [12] | DHHFSP | Heterogeneous factories, nonidentical electricity costs | makespan |
| Zhao et al. [13] | DNW-FSP | Distributed factories, no-wait constraint, sequence-dependent setup time (SDST), energy efficiency | makespan, energy consumption |
| This study | DHNWPFSP-SDST | Distributed heterogeneous factories, no-wait constraint, SDST | makespan, idle time |
Table 2. The comparison of LLM-NSGAII and other algorithms based on HV. Values are mean (std) over 20 runs.

| Instance (n_m_f) | LLM-NSGAII | NSGAII | MOEAD | IBEA | GREA |
|---|---|---|---|---|---|
| 20_5_2 | 0.539 (0.006) | 0.541 (0.003) | 0.491 (0.003) | 0.484 (0.004) | 0.515 (0.002) |
| 20_10_2 | 0.525 (0.012) | 0.525 (0.004) | 0.514 (0.003) | 0.521 (0.002) | 0.523 (0.005) |
| 20_20_2 | 0.527 (0.008) | 0.531 (0.003) | 0.522 (0.003) | 0.532 (0.005) | 0.526 (0.003) |
| 50_5_2 | 0.431 (0.008) | 0.428 (0.004) | 0.414 (0.004) | 0.443 (0.002) | 0.444 (0.003) |
| 50_10_2 | 0.547 (0.007) | 0.530 (0.004) | 0.544 (0.002) | 0.545 (0.004) | 0.541 (0.004) |
| 50_20_2 | 0.523 (0.008) | 0.507 (0.004) | 0.525 (0.004) | 0.527 (0.003) | 0.514 (0.003) |
| 100_5_2 | 0.416 (0.005) | 0.395 (0.002) | 0.399 (0.004) | 0.415 (0.004) | 0.406 (0.004) |
| 100_10_2 | 0.533 (0.005) | 0.516 (0.003) | 0.531 (0.002) | 0.540 (0.002) | 0.527 (0.004) |
| 100_20_2 | 0.521 (0.006) | 0.524 (0.006) | 0.513 (0.005) | 0.517 (0.004) | 0.519 (0.004) |
| 200_10_2 | 0.406 (0.007) | 0.403 (0.005) | 0.396 (0.004) | 0.394 (0.005) | 0.393 (0.003) |
| 200_20_2 | 0.525 (0.006) | 0.516 (0.006) | 0.504 (0.006) | 0.507 (0.006) | 0.511 (0.005) |
| 500_20_2 | 0.510 (0.009) | 0.502 (0.005) | 0.510 (0.004) | 0.527 (0.005) | 0.506 (0.005) |
| 20_5_3 | 0.568 (0.010) | 0.564 (0.003) | 0.565 (0.005) | 0.562 (0.004) | 0.556 (0.003) |
| 20_10_3 | 0.573 (0.009) | 0.565 (0.002) | 0.565 (0.002) | 0.560 (0.003) | 0.564 (0.005) |
| 20_20_3 | 0.576 (0.007) | 0.578 (0.003) | 0.574 (0.003) | 0.540 (0.002) | 0.563 (0.004) |
| 50_5_3 | 0.465 (0.009) | 0.458 (0.003) | 0.452 (0.002) | 0.455 (0.004) | 0.462 (0.005) |
| 50_10_3 | 0.592 (0.008) | 0.597 (0.006) | 0.592 (0.005) | 0.601 (0.003) | 0.596 (0.006) |
| 50_20_3 | 0.574 (0.008) | 0.557 (0.005) | 0.572 (0.006) | 0.547 (0.003) | 0.566 (0.005) |
| 100_5_3 | 0.497 (0.008) | 0.481 (0.007) | 0.504 (0.003) | 0.517 (0.005) | 0.493 (0.006) |
| 100_10_3 | 0.551 (0.012) | 0.542 (0.004) | 0.550 (0.005) | 0.583 (0.006) | 0.552 (0.008) |
| 100_20_3 | 0.578 (0.015) | 0.575 (0.005) | 0.571 (0.007) | 0.562 (0.005) | 0.581 (0.008) |
| 200_10_3 | 0.466 (0.014) | 0.465 (0.007) | 0.469 (0.009) | 0.487 (0.008) | 0.468 (0.009) |
| −/=/+ | | 16/1/5 | 17/0/5 | 13/0/9 | 17/0/5 |
Table 3. Wilcoxon signed-rank test result for LLM-NSGAII and other algorithms based on HV.

| LLM-NSGAII vs. | + | − | ≈ | Z | p-value | α = 0.05 |
|---|---|---|---|---|---|---|
| NSGAII | 16 | 5 | 1 | −3.872 | 0.0001 | significant |
| MOEAD | 17 | 5 | 0 | −3.215 | 0.0013 | significant |
| IBEA | 13 | 9 | 0 | −1.264 | 0.2076 | not significant |
| GREA | 17 | 5 | 0 | −2.941 | 0.0034 | significant |
Table 4. The average rank of LLM-NSGAII and other algorithms based on HV.

| Algorithm | Total Rank Sum | Average Rank |
|---|---|---|
| LLM-NSGAII | 38.8 | 1.72 |
| IBEA | 47.30 | 2.15 |
| GREA | 67.76 | 3.08 |
| NSGAII | 78.32 | 3.56 |
| MOEAD | 98.78 | 4.49 |
Zhang, Z.; Zhao, H.; Zhao, W.; Bian, X.; Yun, X. LLM-Assisted Non-Dominated Sorting Genetic Algorithm for Solving Distributed Heterogeneous No-Wait Permutation Flowshop Scheduling. Appl. Sci. 2025, 15, 10131. https://doi.org/10.3390/app151810131