Article

Optimization Scheduling of Dynamic Industrial Systems Based on Reinforcement Learning

1 School of Computer Science and Technology/School of Artificial Intelligence, China University of Mining and Technology, Xuzhou 221000, China
2 Xuzhou Construction Machinery Group, Xuzhou 221004, China
3 Engineering Research Center of Mine Digitalization, China University of Mining and Technology, Xuzhou 221116, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 10108; https://doi.org/10.3390/app151810108
Submission received: 29 July 2025 / Revised: 5 September 2025 / Accepted: 10 September 2025 / Published: 16 September 2025
(This article belongs to the Section Applied Industrial Technologies)

Abstract

The flexible job shop scheduling problem (FJSP) is a fundamental challenge in modern industrial manufacturing, where efficient scheduling is critical for optimizing both resource utilization and overall productivity. Traditional heuristic algorithms have been widely used to solve the FJSP, but they are often tailored to specific scenarios and struggle to cope with the dynamic and complex nature of real-world manufacturing environments. Although deep learning approaches have been proposed recently, they typically require extensive feature engineering, lack interpretability, and fail to generalize well under unforeseen disturbances such as machine failures or order changes. To overcome these limitations, we introduce a novel hierarchical reinforcement learning (HRL) framework for FJSP, which decomposes the scheduling task into high-level strategic decisions and low-level task allocations. This hierarchical structure allows for more efficient learning and decision-making. By leveraging policy gradient methods at both levels, our approach learns adaptive scheduling policies directly from raw system states, eliminating the need for manual feature extraction. Our HRL-based method enables real-time, autonomous decision-making that adapts to changing production conditions. Experimental results show that our approach achieves a makespan ($C_{\max}$) of 199.50 on the Brandimarte benchmark, 2521.17 on Dauzère, and 2781.56 on Taillard, with optimality gaps of 25.00%, 12.30%, and 19.00%, respectively, demonstrating the robustness of our approach in real-world job shop scheduling tasks.

1. Introduction

The flexible job shop scheduling problem (FJSP)  [1,2,3,4,5,6], which originated from vehicle scheduling, has become a fundamental challenge in contemporary industrial systems. Traditional approaches to FJSP [7] typically assume a static manufacturing environment, where complete shop floor information is available in advance and scheduling tasks are relatively simple. As a result, these methods are limited to scenarios with fixed job orders and machine availability, and struggle to adapt to the dynamic and uncertain nature of real-world manufacturing. With the increasing complexity of modern manufacturing systems, the scheduling process must address not only optimal sequencing and resource allocation, but also cope with disruptive factors such as machine failures, urgent order insertions, and other unexpected events that can severely impact production efficiency. In this context, the development of dynamic FJSP (DFJSP) [8,9,10,11,12] methodologies is of critical importance, as intelligent and adaptive scheduling strategies are essential for enhancing the resilience and efficiency of smart manufacturing workshops in the face of uncertainty.
With the advancement of technology, a variety of approaches beyond traditional scheduling schemes have been developed for the flexible job shop scheduling problem. Meta-heuristic algorithms such as genetic algorithms (GA) [13], simulated annealing (SA) [14], and tabu search (TS) [15] have been widely adopted to address scheduling optimization. Nevertheless, these methods typically require frequent rescheduling in response to environmental changes and face considerable challenges in handling dynamic and uncertain production scenarios. More recently, reinforcement learning (RL) has emerged as a promising paradigm for tackling complex scheduling tasks in uncertain and rapidly evolving manufacturing environments [16]. By formulating the scheduling problem as a Markov decision process (MDP), RL enables the learning of adaptive policies that can autonomously optimize scheduling decisions through iterative interaction with the environment. This approach has demonstrated superior performance over traditional heuristic rules, particularly in terms of improving production efficiency and system adaptability. However, existing RL-based methods may still encounter limitations in handling large-scale problems and continuous state spaces, as well as challenges related to convergence speed and stability [15]. Hierarchical reinforcement learning (HRL)  [17,18,19] is an advanced framework that decomposes complex scheduling tasks into multi-level decision processes, typically separating high-level planning from low-level execution. This structure enhances scalability and adaptability, allowing the algorithm to efficiently handle large-scale and dynamic environments.
Inspired by the recent successes of HRL in various scheduling and resource allocation domains, we designed our approach to specifically address the scalability and adaptability challenges in dynamic flexible job shop scheduling. Traditional flat RL methods often struggle with the high-dimensional action spaces and hierarchical dependencies present in real-world manufacturing systems. To overcome these limitations, we propose a novel HRL framework that decomposes the scheduling process into high-level strategic decisions and low-level task allocations. This HRL approach formulates the scheduling problem as a Markov Decision Process (MDP) at both levels and optimizes policies through the policy gradient algorithm. The high-level controller is responsible for global scheduling strategies, such as task grouping and order sequencing, while the low-level controller manages detailed assignments of tasks to specific machines. Our method incorporates a supervised pre-training phase, where the high-level policy is initialized using high-quality scheduling sequences generated by advanced heuristic algorithms, providing an efficient starting point for subsequent learning. Thereafter, both high-level and low-level policies are jointly refined through reinforcement learning, enabling the agent to adaptively generate real-time scheduling actions based on continuous interaction with the environment. This hierarchical structure facilitates robust optimization of scheduling decisions via reward-driven feedback, ensuring the system can effectively adapt to dynamic, uncertain manufacturing scenarios without reliance on handcrafted rules or extensive feature engineering.
The main contributions of this work are as follows:
  • We model the job shop scheduling problem as a hierarchical structure, decomposing the scheduling task into high-level strategic decisions and low-level task allocations, allowing for more efficient learning and decision-making.
  • We conduct comprehensive comparative experiments on both a simulated dynamic job shop and public benchmark datasets, demonstrating the effectiveness of our HRL-based approach against both heuristic and intelligent algorithms and showing its superiority in terms of scheduling efficiency and robustness.
The remainder of this manuscript is organized as follows. Section 2 reviews the related work and Section 3 formulates the problem. Section 4 introduces our policy gradient-based approach for the FJSP. Section 5 presents the experimental verification, where our approach is compared with several deep-learning-based approaches for job scheduling. Section 6 concludes the study and discusses future work.

2. Related Work

FJSP problems fall into two categories: the total FJSP (T-FJSP) and the partial FJSP (P-FJSP) [20]. According to Xie et al. [21], in the T-FJSP every operation of every job can be processed on any of the candidate machines, whereas in the P-FJSP at least one operation can be processed only on a proper subset of the machine set. The P-FJSP is therefore closer to the scheduling problems encountered in real production systems, and it is also more complex than the T-FJSP; in this paper we focus on the P-FJSP. In the field of job shop scheduling, non-deep-learning methods continue to hold a prominent position, particularly in industrial scenarios where interpretability, reliability, and optimization stability are critical. Updating Bounds [22] applied OR-Tools along with various heuristic strategies to update the lower and upper bounds of classic JSSP benchmark instances, contributing to progress in optimality verification for several longstanding problems. D-DEPSO [23] proposed a hybrid optimization algorithm that integrates discrete differential evolution, particle swarm optimization, and critical-path-based neighborhood search to solve energy-efficient flexible job shop scheduling problems involving multi-speed machines and setup times; the algorithm achieved strong performance on multi-objective indicators such as IGD and the C-metric. ABGNRPA [24] introduced an adaptive bias mechanism into a nested rollout policy framework, effectively enhancing search efficiency and solution quality in complex or resource-constrained scheduling settings. Quantum Scheduling [25] explored the role of quantum computing in industrial optimization, covering gate-based quantum computing, quantum annealing, and tensor networks in tasks such as bin packing, routing, and job shop scheduling. Lastly, the Learning-Based Review [26] provided a comprehensive survey of machine learning approaches for the JSSP, including non-deep methods such as traditional supervised learning, support vector machines, and evolutionary strategies, highlighting current trends and research gaps. These methods, grounded in domain-specific knowledge and algorithmic transparency, remain valuable for real-world deployment where explainability and robustness are prioritized.
In recent years, artificial intelligence [27,28,29] methods have increasingly been applied to solve flexible job shop scheduling problems (FJSP). For instance, Gu et al. [30] proposed an enhanced genetic algorithm (IGA-AVNS) to address more complex FJSP problems. Their approach first employs a genetic algorithm to randomly assign tasks to machines and group them, and then uses an adaptive variable neighborhood search to find the optimal solution within each group. In another study, Mathew et al. [31] utilized deep reinforcement learning (DRL) for energy management systems, demonstrating that their method outperforms existing mixed-integer linear programming techniques by reducing load peaks and significantly lowering electricity bills, thereby increasing monthly savings for consumers. Additionally, Nie et al. [32] developed a gene expression programming (GEP)-based method to tackle the reactive FJSP new-job-insertion problem, utilizing genetic operators and evolutionary techniques to create optimal machine allocations and operation sequences.
However, in contrast to traditional methods, deep learning-based approaches are increasingly being explored to enhance the generalizability and adaptability of scheduling algorithms. For example, Starjob [33] introduced a large-scale JSSP dataset paired with natural language task descriptions, and fine-tuned a LLaMA-8B model using LoRA, demonstrating the potential of large language models (LLMs) for interpreting and solving structured optimization problems. Another approach, BOPO [34], proposed a neural combinatorial optimization framework that combines best-anchored sampling with preference optimization loss, improving both solution diversity and quality. Unraveling the Rainbow [35] compared value-based deep reinforcement learning methods, such as Rainbow, to policy-based methods like PPO, showing that value-based strategies can perform competitively or even outperform policy-based approaches in certain scenarios. ALAS [36] introduced a multi-agent LLM system capable of planning under disruptions, addressing challenges such as context degradation and the lack of state-awareness in single-pass models. REMoH [37] integrated NSGA-II with LLM-driven heuristic generation, enabling reflective and interpretable multi-objective optimization. Collectively, these studies highlight the promising potential of deep learning, particularly through LLMs and DRL, to develop intelligent, scalable, and context-aware scheduling policies that surpass static heuristics, adapting to dynamic industrial environments.

3. Problem Formulation

The overall structure of the dynamic flexible job shop scheduling problem (DFJSP) is illustrated in Figure 1. In the figure, each machine possesses distinct processing capabilities: some machines can handle only specific types of tasks, while others can accommodate a wider variety. Moreover, the processing time for a given task may differ significantly across machines, adding to the complexity of optimal task allocation.
We further illustrate the overview of the intelligent scheduling framework in Figure 2. In the figure, the scheduling agent sequentially assigns incoming tasks to available machines, dynamically considering machine status, task attributes, and global scheduling context. The agent operates in a closed-loop manner, continuously interacting with the production environment and adapting its decisions in response to task arrivals and state changes.
Consider a dynamic flexible job shop scheduling problem (DFJSP) consisting of a set of $N$ jobs $\mathcal{J} = \{J_1, J_2, \ldots, J_N\}$ and a set of $M$ machines $\mathcal{M} = \{M_1, M_2, \ldots, M_M\}$. Each job $J_n$ ($1 \le n \le N$) is characterized by a sequence of operations $O_n = \{O_{n,1}, O_{n,2}, \ldots, O_{n,K_n}\}$, where $K_n$ is the number of operations required by job $J_n$. Each operation $O_{n,k}$ ($1 \le k \le K_n$) must be processed on a specific subset of machines, denoted $\mathcal{M}_{n,k} \subseteq \mathcal{M}$, and cannot start until its predecessor $O_{n,k-1}$ has finished (if $k > 1$).
Let $a_n$ and $d_n$ denote the arrival time and deadline of job $J_n$, respectively. For each operation $O_{n,k}$ assigned to machine $M_m$, we define the processing time as $T_{n,k,m}$. If machine $M_m$ cannot process $O_{n,k}$, then $T_{n,k,m} = -1$.
Let $x_{n,k,m} \in \{0, 1\}$ be a binary variable indicating whether operation $O_{n,k}$ is assigned to machine $M_m$ ($x_{n,k,m} = 1$) or not ($x_{n,k,m} = 0$). The start time of $O_{n,k}$ is denoted $S_{n,k}$ and its completion time $C_{n,k}$.
The problem is subject to the following constraints:
  • Assignment constraint:
    $$\sum_{M_m \in \mathcal{M}_{n,k}} x_{n,k,m} = 1, \quad \forall n, k$$
    Each operation must be assigned to exactly one capable machine.
  • Machine capacity constraint:
    $$S_{n,k} \ge \max\{a_n,\, C_{n,k-1}\}, \qquad S_{n,1} \ge a_n$$
    $$S_{n,k} \ge C_{n',k'} \ \text{ or } \ S_{n',k'} \ge C_{n,k} \quad \text{if } x_{n,k,m} = x_{n',k',m} = 1,\ (n,k) \ne (n',k')$$
    No two operations can be processed simultaneously on the same machine.
  • Precedence constraint:
    $$S_{n,k} \ge C_{n,k-1}, \quad \forall n,\ k > 1$$
    Each operation starts only after the previous one finishes.
  • Processing time:
    $$C_{n,k} = S_{n,k} + \sum_{M_m \in \mathcal{M}} x_{n,k,m} \cdot T_{n,k,m}$$
    The completion time is determined by the assigned machine's processing time.
For each job $J_n$, its final completion time is defined as
$$C_n = C_{n,K_n}$$
and its tardiness as
$$L_n = \max\{0,\ C_n - d_n\}.$$
Our goal is to optimize the scheduling policy $\pi$ that dynamically assigns machines to operations so as to minimize the overall job completion time and tardiness, while maximizing task punctuality and resource utilization. The main optimization objective is:
$$\max_{\pi} \sum_{n=1}^{N} (d_n - C_n)$$
or, equivalently, to minimize the total weighted completion time or tardiness, depending on the application requirements:
$$\min_{\pi} \sum_{n=1}^{N} L_n$$
In summary, the DFJSP seeks an assignment and sequencing of all operations to eligible machines under time and capacity constraints, with the objective of optimizing overall production efficiency and meeting job deadlines. This formalization provides the mathematical basis for designing and evaluating intelligent scheduling policies using reinforcement learning or other advanced algorithms.
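As a concrete illustration of this formulation, the following minimal NumPy sketch evaluates a fixed operation-to-machine assignment under the constraints above and reports completion times, tardiness, and both objectives. The data layout (a nested processing-time structure with -1 marking incompatible machines) and the simple job-by-job dispatching order are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def evaluate_schedule(T, assign, arrivals, deadlines):
    """Evaluate a fixed assignment: assign[n][k] is the machine chosen for operation O_{n,k}.

    T[n][k][m] is the processing time of O_{n,k} on machine m (-1 if incompatible).
    """
    N = len(T)
    machine_free = {}                                  # earliest idle time per machine
    C = [np.zeros(len(T[n])) for n in range(N)]        # completion times C_{n,k}
    for n in range(N):                                 # simple job-by-job dispatch (illustrative)
        prev_done = arrivals[n]                        # S_{n,1} >= a_n
        for k, m in enumerate(assign[n]):
            assert T[n][k][m] > 0, "machine must be capable of this operation"
            start = max(prev_done, machine_free.get(m, 0.0))   # precedence + machine capacity
            C[n][k] = start + T[n][k][m]
            machine_free[m] = C[n][k]
            prev_done = C[n][k]
    completion = np.array([C[n][-1] for n in range(N)])                  # C_n = C_{n,K_n}
    tardiness = np.maximum(0.0, completion - np.asarray(deadlines))      # L_n
    slack_objective = float(np.sum(np.asarray(deadlines) - completion))  # sum_n (d_n - C_n)
    total_tardiness = float(tardiness.sum())                             # sum_n L_n
    return completion, tardiness, slack_objective, total_tardiness
```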

4. Policy Gradient-Based Scheduling Framework for FJSP

To address the dynamic flexible job shop scheduling problem (DFJSP), we developed a policy optimization framework grounded in deep reinforcement learning, specifically leveraging the policy gradient (PG) algorithm [38]. This section details the formalization of the Markov Decision Process (MDP), network architecture, training strategy, and the overall optimization procedure. We also introduce some novel mathematical formulations for improving the scheduling efficiency and robustness.

4.1. Hierarchical Markov Decision Process (HMDP) Formalization

We reformulate the scheduling problem using a hierarchical reinforcement learning framework, characterized by a two-level Markov Decision Process (HMDP). This model decomposes the scheduling problem into a high-level decision process and a low-level task allocation process, where the high-level controller defines the general scheduling strategy, and the low-level controller refines task assignments. The HMDP is formally defined by the tuple $(\mathcal{S}_H, \mathcal{A}_H, P_H, r_H, \gamma_H, \mathcal{S}_L, \mathcal{A}_L, P_L, r_L, \gamma_L)$, where
  • $\mathcal{S}_H$ denotes the high-level state space. Each state $s_H \in \mathcal{S}_H$ encodes global information such as the overall task queue and aggregated machine availability. This space is discrete because the job arrivals and machine states are represented as finite categorical features.
  • $\mathcal{A}_H$ denotes the high-level action space, consisting of discrete group-level scheduling decisions (e.g., assigning the next task batch to one of several machine groups).
  • $P_H(s_H' \mid s_H, a_H)$ denotes the high-level state transition function, representing the evolution of the high-level state after a scheduling decision is made. The probability of transitioning from state $s_H$ to state $s_H'$ after performing action $a_H$ is given by:
    $$P_H(s_H' \mid s_H, a_H) = P\big(s_H' = f_H(s_H, a_H)\big) = \prod_{i=1}^{d_H} P\big(s_H'^{\,i} = f_H^{i}(s_H^{i}, a_H^{i})\big),$$
    where $f_H^{i}$ represents the evolution of the $i$-th feature of the high-level state, and $P$ denotes the transition probability.
  • $r_H(s_H, a_H)$ denotes the high-level reward function, designed to encourage decisions that optimize global scheduling goals (e.g., throughput and tardiness reduction). The reward is formulated as:
    $$r_H(s_H, a_H) = \alpha_H \cdot \mathrm{Throughput}(s_H) - \beta_H \cdot \mathrm{Tardiness}(s_H),$$
    where $\alpha_H$ and $\beta_H$ are learnable parameters that balance throughput and tardiness reduction.
  • $\gamma_H \in [0, 1)$ denotes the discount factor at the high level, reflecting the importance of future rewards in the high-level decision process.
  • $\mathcal{S}_L$ denotes the low-level state space. Each state $s_L \in \mathcal{S}_L$ represents the concrete system status, including (i) the running conditions of the five machines and (ii) the deadlines of tasks waiting to be scheduled. Since both machines and deadlines are drawn from finite sets, $\mathcal{S}_L$ is also discrete.
  • $\mathcal{A}_L$ denotes the low-level action space. Each action $a_L \in \mathcal{A}_L$ corresponds to selecting one of the five machines to allocate the current incoming task. Thus, $\mathcal{A}_L = \{1, 2, 3, 4, 5\}$ is discrete.
  • $P_L(s_L' \mid s_L, a_L)$ denotes the low-level state transition function, representing the evolution of the low-level state after a task is allocated to a machine. The probability of transitioning from state $s_L$ to state $s_L'$ after performing action $a_L$ is given by:
    $$P_L(s_L' \mid s_L, a_L) = P\big(s_L' = f_L(s_L, a_L)\big) = \prod_{i=1}^{d_L} P\big(s_L'^{\,i} = f_L^{i}(s_L^{i}, a_L^{i})\big),$$
    where $f_L^{i}$ represents the evolution of the $i$-th feature of the low-level state, and $P$ denotes the transition probability.
  • $r_L(s_L, a_L)$ denotes the low-level reward function, designed to encourage efficient task completion with minimal tardiness. The reward is formulated as:
    $$r_L(s_L, a_L) = -\big( T_{\mathrm{completion}}(s_L, a_L) + w_{\mathrm{penalty}} \cdot \mathrm{lateness}(s_L, a_L) \big),$$
    where $T_{\mathrm{completion}}(s_L, a_L)$ is the total completion time for the allocated tasks and $\mathrm{lateness}(s_L, a_L)$ represents the time beyond the task deadlines. The hyper-parameter $w_{\mathrm{penalty}}$ trades off the total completion time against the time beyond the task deadline.
  • $\gamma_L \in [0, 1)$ denotes the discount factor at the low level, reflecting the importance of future rewards in the low-level decision process.
This HMDP formulation captures the sequential nature of the scheduling problem while allowing for the decomposition of the decision-making process into manageable high-level and low-level tasks. The high-level controller focuses on strategic task groupings and scheduling priorities, while the low-level controller allocates specific tasks to machines, optimizing task-level completion. By separating the task planning and task allocation stages, this model enhances the efficiency of scheduling in complex environments.
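To make the two reward signals concrete, the short sketch below mirrors the formulas above; the default weights and the exact sign convention for the low-level reward are assumptions for illustration, not the authors' code.

```python
def high_level_reward(throughput, tardiness, alpha_H=1.0, beta_H=1.0):
    # r_H(s_H, a_H) = alpha_H * Throughput(s_H) - beta_H * Tardiness(s_H)
    return alpha_H * throughput - beta_H * tardiness

def low_level_reward(total_completion_time, lateness, w_penalty=0.5):
    # r_L(s_L, a_L): penalize completion time plus weighted lateness
    # (sign convention assumed so that shorter, on-time schedules score higher)
    return -(total_completion_time + w_penalty * lateness)
```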

4.2. Hierarchical Reinforcement Learning for Factory Scheduling

We propose a novel hierarchical RL framework for factory scheduling, where the scheduling problem is decomposed into two levels: a high-level controller that defines broad scheduling strategies, and a low-level controller that executes task-level scheduling actions. The key innovation of our approach is the introduction of entropy regularization and value-based regularization to improve the exploration, convergence, and stability of the training process, while maintaining efficiency in complex scheduling tasks.

4.2.1. High-Level Controller: Task Planning

The high-level controller generates a task sequence by evaluating long-term objectives such as maximizing throughput or minimizing tardiness. This process is modeled as a Markov Decision Process (MDP), where the state space S H represents the global scheduling environment, which includes the availability of machines and the overall system status. The action space A H consists of high-level scheduling decisions, such as task assignment to machine groups. The reward function for the high-level controller is defined as:
$$r_H(s_t, a_t) = \alpha_H \cdot \mathrm{Throughput}(s_t) - \beta_H \cdot \mathrm{Tardiness}(s_t) - \lambda_H \cdot H(\pi_H),$$
where $\alpha_H$ and $\beta_H$ are weight parameters prioritizing throughput and tardiness reduction, respectively, and $\lambda_H$ is the regularization coefficient controlling the entropy $H(\pi_H)$, which encourages exploration. The high-level policy $\pi_H$ is trained by maximizing the expected return, incorporating entropy regularization to balance exploration and exploitation:
$$J_H(\theta_H) = \mathbb{E}_{\pi_H}\!\left[ \sum_{t=1}^{T} \gamma_H^{\,t-1}\, r_H(s_t, a_t) \right] - \lambda_H H(\pi_H),$$
where $\gamma_H$ is the discount factor at the high level, and $H(\pi_H)$ is the entropy of the policy $\pi_H$ at the high level.

4.2.2. Low-Level Controller: Task Scheduling

The low-level controller is responsible for executing task schedules generated by the high-level controller. This controller optimizes task allocation, focusing on specific machines and task completion times. The state space S L includes the local environment, such as machine status and task queue information, and the action space A L consists of decisions like selecting specific machines for task execution. The reward function for the low-level controller is modified to incorporate the task completion time and a regularization term to prevent overfitting:
$$r_L(s_t, a_t) = \alpha_L \cdot T_s(s_t) - \beta_L \cdot T_c(s_t, a_t) - \lambda_L \cdot H(\pi_L),$$
where $T_s(s_t)$ denotes the task completion time, $T_c(s_t, a_t)$ represents the completion time of task $a_t$ in state $s_t$, and $\alpha_L$, $\beta_L$ are learnable parameters. $\lambda_L$ is the entropy regularization coefficient, and $H(\pi_L)$ is the entropy of the low-level policy. The low-level policy $\pi_L$ is trained using the standard policy gradient method, but with added entropy regularization to facilitate exploration:
$$J_L(\theta_L) = \mathbb{E}_{\pi_L}\!\left[ \sum_{t=1}^{T} \gamma_L^{\,t-1}\, r_L(s_t, a_t) \right] - \lambda_L H(\pi_L),$$
where $\gamma_L$ is the discount factor at the low level.
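A Monte Carlo estimate of either level's entropy-regularized objective can be written in a few lines of PyTorch; the trajectory tensors and the use of a single sampled episode are assumptions for illustration.

```python
import torch

def objective_estimate(rewards, entropies, gamma, lam):
    """Monte Carlo estimate of J at either level.

    rewards, entropies: 1-D tensors collected along one sampled episode.
    """
    T = rewards.shape[0]
    discounts = gamma ** torch.arange(T, dtype=torch.float32)   # gamma^(t-1) for t = 1..T
    discounted_return = torch.sum(discounts * rewards)
    return discounted_return - lam * entropies.mean()           # E[sum gamma^(t-1) r_t] - lam * H(pi)
```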

4.2.3. Hierarchical Training Strategy

The overall training process involves joint optimization of the high- and low-level policies. The high-level controller provides guidance to the low-level controller while the low-level controller refines the action selection process based on the task assignments received. The objective function for the entire system, incorporating both controllers, is:
$$J(\theta_H, \theta_L) = \mathbb{E}_{\pi_H, \pi_L}\!\left[ \sum_{t=1}^{T} \gamma_H^{\,t-1}\, r_H(s_t, a_t) + \sum_{t=1}^{T} \gamma_L^{\,t-1}\, r_L(s_t, a_t) \right] - \lambda_H H(\pi_H) - \lambda_L H(\pi_L),$$
which combines the high- and low-level rewards and incorporates entropy regularization at both levels. To stabilize the training process, we apply baseline subtraction to reduce variance in gradient estimates:
$$\nabla_{\theta_H} J(\theta_H, \theta_L) = \mathbb{E}_{\pi_H, \pi_L}\!\left[ \sum_{t=1}^{T} \nabla_{\theta_H} \log \pi_H(a_t \mid s_t)\, (G_H - b_H) \right],$$
$$\nabla_{\theta_L} J(\theta_H, \theta_L) = \mathbb{E}_{\pi_H, \pi_L}\!\left[ \sum_{t=1}^{T} \nabla_{\theta_L} \log \pi_L(a_t \mid s_t)\, (G_L - b_L) \right],$$
where $G_H$ and $G_L$ represent the Monte Carlo returns for the high- and low-level controllers, respectively, and $b_H$ and $b_L$ are the corresponding baseline values.
We introduce a value-based regularization term to prevent overfitting in the high-dimensional scheduling problem. The total loss function is formulated as:
$$\mathcal{L}_{\mathrm{total}} = \sum_{t=1}^{T} \Big[ -\big( r_H(s_t, a_t) + r_L(s_t, a_t) \big) - \lambda_H H(\pi_H) - \lambda_L H(\pi_L) + \rho_H \cdot \big\| V_H(s_t) - V_H^{\mathrm{target}}(s_t) \big\|^2 + \rho_L \cdot \big\| V_L(s_t) - V_L^{\mathrm{target}}(s_t) \big\|^2 \Big],$$
where $V_H$ and $V_L$ represent the value functions for the high- and low-level policies, respectively, and $\rho_H$ and $\rho_L$ are the regularization coefficients. The target value functions $V_H^{\mathrm{target}}$ and $V_L^{\mathrm{target}}$ are derived from the expected reward signals.
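A compact PyTorch sketch of this joint loss is given below, with the value estimates serving as the baselines $b_H$ and $b_L$ and the Monte Carlo returns serving as the value targets; those two choices, as well as the tensor shapes, are our assumptions rather than details stated in the paper.

```python
import torch

def hierarchical_loss(logp_H, G_H, V_H, ent_H, logp_L, G_L, V_L, ent_L,
                      lam_H=0.01, lam_L=0.01, rho_H=0.5, rho_L=0.5):
    """logp_*: log pi(a_t|s_t); G_*: Monte Carlo returns; V_*: value predictions; ent_*: entropies.

    All arguments are 1-D tensors over the time steps of one episode.
    """
    # Baseline subtraction: use the (detached) value estimates as b_H and b_L (assumption).
    adv_H = G_H - V_H.detach()
    adv_L = G_L - V_L.detach()
    pg_loss = -(logp_H * adv_H).sum() - (logp_L * adv_L).sum()     # policy-gradient terms
    entropy_loss = -lam_H * ent_H.mean() - lam_L * ent_L.mean()    # entropy regularization
    value_loss = rho_H * ((V_H - G_H.detach()) ** 2).mean() \
                 + rho_L * ((V_L - G_L.detach()) ** 2).mean()      # regression to value targets
    return pg_loss + entropy_loss + value_loss
```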
In the algorithm, the learning rates α H and α L are crucial for stable and coordinated learning in the hierarchical framework. They control the update pace for each policy. If  α H is too high, the high-level strategy evolves faster than the low-level can execute, causing misalignment and instability. If  α L is too high, the low-level over-specializes to a poor strategy, hindering high-level improvement. Well-balanced rates ensure both policies learn synchronously, enabling efficient credit assignment and the stable, rapid convergence demonstrated in the results.
The hierarchical structure of our approach is grounded in option theory in reinforcement learning, where the high-level controller provides options (sub-policies) for the low-level controller, allowing more efficient exploration and faster learning in complex environments. By decomposing the scheduling task into two levels, the high-level controller focuses on global scheduling strategies, while the low-level controller handles fine-grained task allocation, which is computationally efficient. Additionally, entropy regularization ensures the model explores a diverse range of policies, promoting robust learning across the environment.
This hierarchical approach, enhanced with entropy regularization and value-based loss, allows for the integration of both long-term scheduling goals and short-term task allocation decisions, yielding improved overall scheduling performance compared to traditional single-level reinforcement learning methods. The overall learning and scheduling process is outlined in Algorithm 1. We conducted all experiments using NumPy (v1.24.3), SciPy (v1.10.1), and PyTorch (v2.0.1).
Algorithm 1 Hierarchical Reinforcement Learning-Based Scheduling for FJSP
1: Input: Training set $\mathcal{D}$, heuristic policy $\pi_{\mathrm{base}}$, discount factors $\gamma_H, \gamma_L$, learning rates $\alpha_H, \alpha_L$
2: Initialize: High-level policy parameters $\theta_H$, low-level policy parameters $\theta_L$
3: // Supervised Pre-Training (High-Level)
4: for epoch = 1 to $E_1$ do
5:     Sample scheduling sequences $(s_H, a_H^{\mathrm{base}})$ from heuristic policy $\pi_{\mathrm{base}}$
6:     Compute supervised loss $L_{\mathrm{sup}}(\theta_H)$:
       $$L_{\mathrm{sup}}(\theta_H) = \sum_{t=1}^{T_H} \big\| \hat{a}_H(s_H) - a_H^{\mathrm{base}} \big\|^2$$
7:     Update $\theta_H$ by minimizing $L_{\mathrm{sup}}$
8:     Update: $\theta_H \leftarrow \theta_H - \alpha_H \nabla_{\theta_H} L_{\mathrm{sup}}$
9: end for
10: // Reinforcement Learning Fine-Tuning (Hierarchical Training)
11: for epoch = 1 to $E_2$ do
12:     for each episode $k = 1$ to $K$ do
13:         High-Level: Generate high-level trajectory $\tau_H = \{(s_H^1, a_H^1), \ldots, (s_H^{T_H}, a_H^{T_H})\}$ under $\pi_{\theta_H}$
14:         Compute high-level rewards $\{r_H(t)\}$ and high-level returns $\{G_H(t)\}$:
            $$G_H(t) = \sum_{t'=t}^{T_H} \gamma_H^{\,t'-t}\, r_H(t')$$
15:         Update $\theta_H$ using the policy gradient:
            $$\theta_H \leftarrow \theta_H + \alpha_H \sum_{t=1}^{T_H} \nabla_{\theta_H} \log \pi_{\theta_H}(a_H^t \mid s_H^t) \cdot G_H(t)$$
16:         Low-Level: For each high-level action $a_H$, generate low-level trajectory $\tau_L = \{(s_L^1, a_L^1), \ldots, (s_L^{T_L}, a_L^{T_L})\}$ under $\pi_{\theta_L}$
17:         Compute low-level rewards $\{r_L(t)\}$ and low-level returns $\{G_L(t)\}$:
            $$G_L(t) = \sum_{t'=t}^{T_L} \gamma_L^{\,t'-t}\, r_L(t')$$
18:         Update $\theta_L$ using the policy gradient:
            $$\theta_L \leftarrow \theta_L + \alpha_L \sum_{t=1}^{T_L} \nabla_{\theta_L} \log \pi_{\theta_L}(a_L^t \mid s_L^t) \cdot G_L(t)$$
19:     end for
20: end for
21: Output: Trained high-level policy $\pi_{\theta_H}$ and low-level policy $\pi_{\theta_L}$
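For readers who want to reproduce the two phases of Algorithm 1, the PyTorch sketch below implements them for a single (e.g., high-level) discrete policy. The network architecture, the data format of the heuristic demonstrations, and the use of cross-entropy imitation in place of the squared loss (natural for discrete actions) are our assumptions; environment interaction and the nested high/low-level loop are omitted for brevity.

```python
import torch
import torch.nn as nn

class DiscretePolicy(nn.Module):
    """Small categorical policy network (architecture is an assumption)."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, states):
        return torch.distributions.Categorical(logits=self.net(states))

def supervised_pretrain(policy, optimizer, heuristic_states, heuristic_actions, epochs=10):
    """Phase 1: imitate the heuristic policy pi_base (cross-entropy surrogate for L_sup)."""
    for _ in range(epochs):
        dist = policy(heuristic_states)                    # states: [B, state_dim] float tensor
        loss = -dist.log_prob(heuristic_actions).mean()    # actions: [B] long tensor
        optimizer.zero_grad(); loss.backward(); optimizer.step()

def reinforce_update(policy, optimizer, states, actions, rewards, gamma):
    """Phase 2: one policy-gradient update from a single trajectory, weighted by returns-to-go G(t)."""
    T = rewards.shape[0]
    returns = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):                           # G(t) = sum_{t'>=t} gamma^{t'-t} r(t')
        running = float(rewards[t]) + gamma * running
        returns[t] = running
    dist = policy(states)
    loss = -(dist.log_prob(actions) * returns).sum()       # gradient matches the update in Algorithm 1
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```

The same two routines can be applied to both $\pi_{\theta_H}$ and $\pi_{\theta_L}$, alternating updates inside the nested episode loop of Algorithm 1.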
Unlike meta-heuristic algorithms such as the grey wolf optimizer [39], which search the solution space in a problem-agnostic manner, our RL-based optimization directly exploits feedback from the scheduling environment, enabling the agent to improve its decision-making policy through trial-and-error interaction. This design allows the model to generalize across varying task distributions and industrial scenarios, which was our primary goal.

5. Experimental Results

To comprehensively evaluate the performance of our proposed reinforcement learning-based scheduling framework, we conducted a series of comparative experiments on a simulated dynamic flexible job shop environment. Specifically, we randomly generated a set of 1000 tasks, each characterized by randomly sampled arrival times, deadlines, and task types. The arrival times were generated using two different distributions: a uniform distribution (used for training) and a normal distribution (used to evaluate generalization). A total of 30 machines with heterogeneous processing capabilities were simulated. Each task was assigned to one of the machines by different scheduling strategies. To ensure a fair comparison, all algorithms were evaluated on the exact same task and machine configuration across repeated trials. All experiments were executed on a Windows 11 platform equipped with an NVIDIA GeForce RTX 3090 Founders Edition GPU (NVIDIA Corporation, Santa Clara, CA, USA; manufactured in China).

5.1. Baselines and Compared Methods

To benchmark the effectiveness of our RL-based scheduling agent, we compare it with five rule-based heuristics (as stated in Algorithms 2–6) widely used in production environments; a compact Python sketch of all five rules is given after the list:
  1. Random Selection: This randomly selects a capable machine for each task.

Algorithm 2 Random Selection Heuristic
1: Input: Mask array $\mathbf{m} \in \{0, 1\}^M$
2: repeat
3:     Randomly sample $m$ from $\{1, \ldots, M\}$
4: until $\mathbf{m}[m] = 1$
5: Assign task to machine $m$

  2. Shortest Processing Time (SPT): Each task is assigned to the machine with the shortest processing duration for its type.

Algorithm 3 Shortest Processing Time Heuristic
1: Input: Task type $t$
2: Find $m^* = \arg\min_m T_{t,m}$ where $T_{t,m} > 0$
3: Assign task to machine $m^*$

  3. Half Min-Max Selection: Odd-numbered tasks are assigned to the slowest machine, and even-numbered ones to the fastest.

Algorithm 4 Half Min-Max Heuristic
1: Input: Task index $i$, task type $t$
2: if $i$ is odd then
3:     $m^* = \arg\max_m T_{t,m}$
4: else
5:     $m^* = \arg\min_m T_{t,m}$
6: end if
7: Assign task to machine $m^*$

  4. Deadline-Aware Selection: This rule chooses a machine based on task urgency ($d - a$) and a threshold $\tau$.

Algorithm 5 Deadline-Aware Heuristic
1: Input: Deadline $d$, arrival $a$, threshold $\tau$, task type $t$
2: if $d - a > \tau$ then
3:     $m^* = \arg\max_m T_{t,m}$
4: else
5:     $m^* = \arg\min_m T_{t,m}$
6: end if
7: Assign task to machine $m^*$

  5. Earliest Idle Selection: The task is assigned to the machine that will become idle the soonest.

Algorithm 6 Earliest Idle Heuristic
1: Input: Remaining times $r$
2: $m^* = \arg\min_m r_m$
3: Assign task to machine $m^*$
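The following compact Python sketch implements the five dispatching rules above in one place; the data layout (a processing-time matrix T[t, m] with non-positive entries marking incompatible machines, and a per-machine array of remaining busy times) and the tie-breaking behavior are assumptions for illustration.

```python
import numpy as np

def random_selection(mask, rng=None):
    """Algorithm 2: pick any capable machine (mask is a 0/1 array over machines)."""
    if rng is None:
        rng = np.random.default_rng()
    return int(rng.choice(np.flatnonzero(mask)))

def shortest_processing_time(T, t):
    """Algorithm 3: fastest capable machine for task type t."""
    times = np.where(T[t] > 0, T[t], np.inf)
    return int(np.argmin(times))

def half_min_max(T, t, task_index):
    """Algorithm 4: odd-indexed tasks to the slowest machine, even-indexed to the fastest."""
    times = np.where(T[t] > 0, T[t], np.nan)
    return int(np.nanargmax(times)) if task_index % 2 == 1 else int(np.nanargmin(times))

def deadline_aware(T, t, deadline, arrival, tau):
    """Algorithm 5: slow machine when slack d - a exceeds tau, otherwise the fastest."""
    times = np.where(T[t] > 0, T[t], np.nan)
    return int(np.nanargmax(times)) if deadline - arrival > tau else int(np.nanargmin(times))

def earliest_idle(remaining):
    """Algorithm 6: machine with the smallest remaining busy time."""
    return int(np.argmin(remaining))
```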
In addition to the simulation environment, we further evaluated the performance of our algorithm on several well-established public benchmark instances, including Brandimarte [40], Dauzère [41], Taillard [42], Demirkol [43], and Lawrence [44], which are commonly used for evaluating FJSP and job shop scheduling problems (JSSP). These benchmarks allow for a direct comparison of our reinforcement learning-based method with other state-of-the-art scheduling algorithms, such as PPO, DQN, and DDQN.

5.2. Evaluation Metrics and Visualization

Performance is evaluated using three key metrics:
  • Cumulative Reward: The overall scheduling efficiency based on time and resource utilization.
  • Task Success Rate: The proportion of tasks completed before their deadlines.
  • Average Response Time: The average duration from task arrival to its completion.
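For clarity, the three metrics can be computed as in the short sketch below; the per-task record fields (reward, completion, deadline, arrival) are assumed names for illustration, not the authors' data schema.

```python
import numpy as np

def evaluate_metrics(records):
    """records: list of dicts with assumed keys 'reward', 'completion', 'deadline', 'arrival'."""
    rewards = np.array([r["reward"] for r in records])
    completion = np.array([r["completion"] for r in records])
    deadline = np.array([r["deadline"] for r in records])
    arrival = np.array([r["arrival"] for r in records])
    cumulative_reward = float(rewards.sum())
    success_rate = float(np.mean(completion <= deadline)) * 100.0   # % of tasks finished on time
    avg_response_time = float(np.mean(completion - arrival))        # arrival -> completion
    return cumulative_reward, success_rate, avg_response_time
```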

5.3. Experiment Results

After extensive comparative evaluation, the performance of our proposed reinforcement learning-based scheduling algorithm was assessed against several heuristic-based methods, including Random Selection, Half Min-Max, multiple Deadline-Aware heuristics with thresholds of 4.25, 4.5, and 4.75, the Suitable heuristic, and the Shortest Processing Time (SPT) baseline. The results are summarized in Table 1. Our algorithm consistently outperformed all baselines under both uniform and normal task arrival distributions across multiple performance metrics. In terms of cumulative reward, the proposed method achieved 3114.8 under the uniform distribution and 6564.1 under the normal distribution. These values represent improvements of 0.7% and 0.2%, respectively, compared to the second-best baseline (SPT), which recorded 3090.2 and 6545.5. This clearly demonstrates the ability of our reinforcement learning approach to improve scheduling efficiency beyond traditional methods. For the task completion success rate, our method achieved a perfect 100% under the uniform distribution and 99.9% under the normal distribution. These results surpass all heuristics, and are marginally better than the SPT baseline, which attained 99% and 99.8%. Most importantly, our method excels in minimizing the response time, a key indicator for real-time planning in dynamic industrial environments. As shown in Table 1, the proposed approach achieved average response times of 1.38 s (uniform) and 1.42 s (normal), corresponding to reductions of 0.7% and 1.4% compared with SPT (1.39 s and 1.44 s). These reductions, though numerically small, are critical for time-sensitive industrial scheduling, where even minor improvements in latency can lead to significant practical benefits. Negative rewards in Table 1 indicate that the corresponding algorithm failed to find a feasible solution within the allotted time.
In addition to the simulated task distribution, we evaluated the performance of our algorithm on several well-established public benchmark instances, including Brandimarte, Dauzère, Taillard, Demirkol, and Lawrence. These benchmarks cover both flexible job shop scheduling (FJSP) and job shop scheduling problems (JSSP) and allow for a direct comparison of our method with other state-of-the-art scheduling algorithms such as PPO, DQN, and DDQN. As shown in Table 2, our algorithm consistently performed well across all benchmark instances. Specifically, our approach achieved a makespan ($C_{\max}$) of 199.50 on Brandimarte (FJSP), 2521.17 on Dauzère (FJSP), and 2781.56 on Taillard (JSSP), with optimality gaps of 25.00%, 12.30%, and 19.00%, respectively. This performance is competitive with, and in many cases superior to, other methods such as PPO and DQN, demonstrating the robustness of our approach in real-world job shop scheduling tasks. Moreover, our algorithm consistently maintained a lower response time, indicating its potential for real-time scheduling applications in dynamic industrial settings.
Our algorithm achieved strong performance across all benchmarks. For the Brandimarte (FJSP) benchmark, our method obtained a $C_{\max}$ of 199.50 with a gap of 25.00%, within 0.2% of the best PPO result (199.10, 24.75%) and within 0.6% of Rainbow (198.30, 24.39%). This shows that our method closely matches the best-performing baselines on this dataset.
For the Dauzère (FJSP) benchmark, our approach achieved $C_{\max} = 2521.17$ with a gap of 12.30%. Compared to PPO (2442.14, 10.19% gap) and DDQN (2440.33, 10.05% gap), our $C_{\max}$ is only about 3.2–3.3% higher, so our method remains within a narrow margin and stays competitive across multiple runs.
On the Taillard (JSSP) benchmark, our method recorded $C_{\max} = 2781.56$ with a gap of 19.00%. While PPO achieved the lowest $C_{\max}$ of 2478.95 (18.97%), our method was only 0.03 percentage points higher in relative gap, effectively matching PPO's performance.
For the Demirkol (JSSP) benchmark, our algorithm reached $C_{\max} = 6013.86$ and a gap of 29.90%. Compared with PER (5877.50, 26.30%), our method trailed by about 3.6 percentage points in gap, but it still maintained competitive results relative to DQN and DDQN, which showed larger deviations.
Finally, on the Lawrence (JSSP) benchmark, our approach achieved $C_{\max} = 1243.15$ with a gap of 10.80%. This result is within 0.8 percentage points of DDQN's best gap (9.99%) and significantly better than PPO (15.45%) and Noisy (14.24%).
Taken together, these results demonstrate that our algorithm maintains high robustness across diverse benchmark instances, often matching or closely trailing the best specialized methods (e.g., PPO or DDQN), while achieving notable improvements over weaker baselines (e.g., DQN, Noisy, Multi-step). Moreover, our algorithm consistently sustains competitive response times across all benchmarks, underscoring its potential for real-time scheduling applications in dynamic industrial environments.
We further illustrate the training dynamics of our approach in terms of accumulated reward in Figure 3. As shown in the figure, the Full Model began to outperform the other baselines very early in training, underscoring that our approach does not merely match traditional methods but rapidly surpasses them by learning a dynamic, adaptive policy. The curves provide empirical evidence that our hierarchical structure mitigates the classic RL challenge of slow convergence in high-dimensional spaces, enabling the agent to efficiently credit actions to long-term outcomes and to learn a robust scheduling strategy directly from raw state representations.
In addition to these comparisons with heuristic baselines, we further examined the robustness of our approach by combining the hierarchical reinforcement learning framework with different optimization backbones, as summarized in Table 3. The results show that our HRL-based variant with policy gradient (Ours+PG) achieved consistent improvements over PPO and DQN alone, with cumulative rewards of 3114.8 and 6564.1 under uniform and normal task distributions, respectively, together with near-optimal response times of 1.38 s and 1.42 s. When integrated with DQN (Ours+DQN), the performance remained competitive, achieving higher rewards and success rates than vanilla DQN while maintaining comparable response times. Most importantly, the combination with PPO (Ours+PPO) yielded the best overall performance across all metrics, improving cumulative reward by approximately +1.0% compared to Ours+PG (3146.9 vs. 3114.8 under uniform distribution and 6629.7 vs. 6564.1 under normal distribution), while also further reducing response times to 1.36 s and 1.40 s, respectively. These findings confirm that our framework is not only effective as a standalone HRL method but can also be seamlessly hybridized with state-of-the-art reinforcement learning algorithms to deliver superior scheduling performance in dynamic environments.

5.4. Ablation Study

To evaluate the contributions of various components of our proposed reinforcement learning-based scheduling algorithm, we conducted an ablation study. The goal was to analyze the impact of each component (e.g., the policy gradient, semi-supervised pre-training, and mask mechanism) on the overall performance.
We performed the ablation experiments using the same 1000 tasks and 30 machines, with both uniform and normal task arrival distributions. The following configurations were tested:
  • Baseline (No RL): A heuristic-based algorithm using the Shortest Processing Time (SPT) rule without reinforcement learning.
  • Policy Gradient Only: Our proposed method with the policy gradient but without the semi-supervised pre-training or the mask mechanism.
  • Semi-Supervised Pre-training: The agent was pre-trained using heuristic methods before applying reinforcement learning with the mask mechanism.
  • Mask Mechanism Only: The agent uses the mask mechanism with the policy gradient but without semi-supervised pre-training.
  • Full Model: Our complete reinforcement learning-based model with all components (policy gradient, semi-supervised pre-training, and mask mechanism).
As shown in Table 4, the full model consistently outperformed all ablated variants across all evaluation metrics. The Full Model (before lightweighting) already achieved excellent performance with a cumulative reward of 3098.5, which is +44.7 higher than the semi-supervised pre-training configuration (3053.8), while also attaining a perfect success rate of 100.0% compared to 97.6% for the second-best variant. Its response time (2.85 s) is nearly identical to that of semi-supervised pre-training (2.87 s), but with reduced average tardiness of 3.15 s, improving deadline adherence by 1.06 s. After applying lightweight optimization, the Full Model (after lightweighting) further boosted the cumulative reward to 3114.8 (+16.3 compared to the unoptimized full model) while maintaining a 100.0% success rate. More importantly, it dramatically reduced the response time to 1.38 s, achieving a 1.49 s reduction relative to the best non-lightweight variant. This highlights that the lightweighting step not only preserves the superior scheduling quality of the full model but also significantly improves its efficiency in real-time decision making. Although both versions of the full model required the longest training time of 2500 s, this additional cost is justified by their substantial improvements in reward, success rate, and latency compared to all other configurations.

6. Conclusions

In this work, we investigated the dynamic flexible job shop scheduling problem (FJSP), an abstraction of real-world production shop scenarios. To address the inherent uncertainty and complexity of such environments, we proposed a deep reinforcement learning framework based on the policy gradient algorithm. Our approach incorporates a supervised pre-training stage and subsequently refines the policy through reinforcement learning, enabling adaptive and robust scheduling decisions. Comprehensive experiments were conducted against a suite of classical heuristic algorithms and rule-based baselines, including multiple thresholding strategies based on task arrival characteristics. The results consistently demonstrate that our method achieves superior performance across multiple evaluation metrics, including cumulative reward, task completion success rate, and average response time, under both training and generalization settings. For future work, we plan to extend our approach by incorporating more complex and realistic data, reflecting production environments with additional constraints such as machine breakdowns, maintenance schedules, and dynamically arriving task batches. Moreover, further investigation into advanced model architectures and multi-objective optimization strategies will be explored to enhance both the efficiency and generalizability of the proposed scheduling framework in practical industrial settings.
The compelling comparative results presented in Section 5.1 and Section 5.3 substantiate the efficacy of our proposed HRL framework; however, its foundational advantages extend beyond superior metrics to offer a paradigm shift in handling dynamic scheduling. The core innovation lies in the hierarchical decomposition itself, which directly addresses the fundamental limitations of both traditional heuristics and flat reinforcement learning architectures. Unlike static rules like SPT or Earliest Idle, which lack adaptability, our scheme learns a meta-policy for strategic decision-making (e.g., task grouping and prioritization at the high level) coupled with reactive, fine-grained allocation (at the low level). This structure is inherently more robust to disruptions—such as machine failures or urgent order insertions—as the high-level controller can adjust its strategy based on global state changes, while the low-level controller executes these adjustments with tactical precision. This dual adaptability is a qualitative leap over single-strategy heuristics. Furthermore, compared to other deep RL methods like PPO or DQN, our hierarchical approach conquers the curse of dimensionality by decomposing the vast state-action space into manageable components, resulting in the sample-efficient and stable convergence empirically demonstrated in the reward graph. This efficiency is paramount for real-world deployment where training data and time are constrained. Therefore, our contributions are not merely algorithmic but conceptual: we provide a scalable, learning-based architecture that embodies a principled solution to dynamism, uncertainty, and complexity in industrial systems, effectively bridging the gap between rigid optimization and adaptive intelligence.
While the proposed hierarchical reinforcement learning (HRL) framework demonstrates its advantages in handling dynamic flexible job shop scheduling, it is essential to acknowledge a fundamental limitation inherent to reinforcement learning approaches: their slow convergence in the face of sudden or drastic environmental changes. RL algorithms, including policy gradient methods, typically require extensive interaction with the environment to learn effective policies. This process can be time-consuming and computationally expensive, especially in highly dynamic settings where machine failures, urgent order insertions, or abrupt changes in task priorities occur unexpectedly. Although we have incorporated techniques such as entropy regularization and supervised pre-training to improve exploration and initial policy quality, these measures may not fully mitigate the latency in responding to unforeseen disruptions.
Future work should explicitly address these challenges by integrating meta-learning or context-aware adaptation mechanisms that enable faster retraining or fine-tuning in response to environmental shifts. Furthermore, hybrid approaches that combine RL with reactive rule-based systems could provide a fallback mechanism during periods of high uncertainty or change. By openly discussing these limitations, the research community can better guide the development of more resilient and adaptive RL-based schedulers capable of thriving in truly dynamic and unpredictable industrial settings.

Author Contributions

Conceptualization, X.Z., S.F., Z.L. and G.Y.; methodology, X.Z.; software, Q.X.; validation, X.Z., Z.D. and Q.X.; formal analysis, X.Z.; investigation, X.Z.; resources, S.F.; data curation, X.Z. and Z.L.; writing—original draft preparation, X.Z.; writing—review and editing, Z.L. and S.F.; visualization, Q.X. and G.Y.; supervision, Z.L. and G.Y.; project administration, Z.L.; funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

Xiang Zhang, Zhongfu Li, Simin Fu, Qiancheng Xu and Zhaolong Du were employed by the company Xuzhou Construction Machinery Group. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Liu, J.; Bo, R.; Wang, S.; Chen, H. Optimal Scheduling for Profit Maximization Energy Storage Merchants Considering Market Impact Based on Dynamic Programming. Comput. Ind. Eng. 2021, 155, 107212. [Google Scholar] [CrossRef]
  2. Gao, K.; Cao, Z.; Zhang, L.; Chen, Z.; Han, Y.; Pan, Q. A review on swarm intelligence and evolutionary algorithms for solving flexible job shop scheduling problems. IEEE/CAA J. Autom. Sin. 2019, 6, 904–916. [Google Scholar]
  3. Dauzère-Pérès, S.; Ding, J.; Shen, L.; Tamssaouet, K. The flexible job shop scheduling problem: A review. Eur. J. Oper. Res. 2024, 314, 409–432. [Google Scholar] [CrossRef]
  4. Gui, L.; Li, X.; Zhang, Q.; Gao, L. Domain knowledge used in meta-heuristic algorithms for the job-shop scheduling problem: Review and analysis. Tsinghua Sci. Technol. 2024, 29, 1368–1389. [Google Scholar] [CrossRef]
  5. Yuan, E.; Wang, L.; Cheng, S.; Song, S.; Fan, W.; Li, Y. Solving flexible job shop scheduling problems via deep reinforcement learning. Expert Syst. Appl. 2024, 245, 123019. [Google Scholar] [CrossRef]
  6. Xiong, H.; Shi, S.; Ren, D.; Hu, J. A survey of job shop scheduling problem: The types and models. Comput. Oper. Res. 2022, 142, 105731. [Google Scholar] [CrossRef]
  7. Shao, W.; Pi, D.; Shao, Z. Local Search Methods for a Distributed Assembly No-Idle Flow Shop Scheduling Problem. IEEE Syst. J. 2018, 13, 1945–1956. [Google Scholar] [CrossRef]
  8. Mohan, J.; Lanka, K.; Rao, A.N. A review of dynamic job shop scheduling techniques. Procedia Manuf. 2019, 30, 34–39. [Google Scholar] [CrossRef]
  9. Wu, X.; Yan, X.; Guan, D.; Wei, M. A deep reinforcement learning model for dynamic job-shop scheduling problem with uncertain processing time. Eng. Appl. Artif. Intell. 2024, 131, 107790. [Google Scholar] [CrossRef]
  10. Lu, S.; Wang, Y.; Kong, M.; Wang, W.; Tan, W.; Song, Y. A double deep q-network framework for a flexible job shop scheduling problem with dynamic job arrivals and urgent job insertions. Eng. Appl. Artif. Intell. 2024, 133, 108487. [Google Scholar] [CrossRef]
  11. Huang, J.P.; Gao, L.; Li, X.Y. A hierarchical multi-action deep reinforcement learning method for dynamic distributed job-shop scheduling problem with job arrivals. IEEE Trans. Autom. Sci. Eng. 2024, 22, 2501–2513. [Google Scholar] [CrossRef]
  12. Wang, L.; Hu, X.; Wang, Y.; Xu, S.; Ma, S.; Yang, K.; Liu, Z.; Wang, W. Dynamic job-shop scheduling in smart manufacturing using deep reinforcement learning. Comput. Netw. 2021, 190, 107969. [Google Scholar] [CrossRef]
  13. Zwickl, D.J. Genetic Algorithm Approaches for the Phylogenetic Analysis of Large Biological Sequence Datasets Under the Maximum Likelihood Criterion. Ph.D. Thesis, University of Texas at Austin, Austin, TX, USA, 2008. [Google Scholar]
  14. Bangert, P. Optimization for Industrial Problems. In Optimization: Simulated Annealing; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  15. Rezazadeh, F.; Chergui, H.; Alonso, L.; Verikoukis, C. Continuous Multi-objective Zero-touch Network Slicing via Twin Delayed DDPG and OpenAI Gym. In Proceedings of the GLOBECOM 2020—2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2021. [Google Scholar]
  16. Golui, S.; Pal, C.; Saha, S. Continuous-Time Zero-Sum Games for Markov Decision Processes with Discounted Risk-Sensitive Cost Criterion on a general state space. Stoch. Anal. Appl. 2021, 41, 327–357. [Google Scholar] [CrossRef]
  17. Pateria, S.; Subagdja, B.; Tan, A.H.; Quek, C. Hierarchical reinforcement learning: A comprehensive survey. ACM Comput. Surv. (CSUR) 2021, 54, 1–35. [Google Scholar] [CrossRef]
  18. Barto, A.G.; Mahadevan, S. Recent advances in hierarchical reinforcement learning. Discret. Event Dyn. Syst. 2003, 13, 341–379. [Google Scholar] [CrossRef]
  19. Qin, M.; Sun, S.; Zhang, W.; Xia, H.; Wang, X.; An, B. Earnhft: Efficient hierarchical reinforcement learning for high frequency trading. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 14669–14676. [Google Scholar]
  20. Kacem, I.; Hammadi, S.; Borne, P. Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2002, 32, 1–13. [Google Scholar] [CrossRef]
  21. Xie, J.; Gao, L.; Peng, K.; Li, X.; Li, H. Review on flexible job shop scheduling. IET Collab. Intell. Manuf. 2019, 1, 67–77. [Google Scholar] [CrossRef]
  22. Coupvent des Graviers, M.E.; Kobrosly, L.; Guettier, C.; Cazenave, T. Updating Lower and Upper Bounds for the Job-Shop Scheduling Problem Test Instances. arXiv 2025, arXiv:2504.16106. [Google Scholar]
  23. Wang, D.; Zhang, Y.; Zhang, K.; Li, J.; Li, D. Discrete Differential Evolution Particle Swarm Optimization Algorithm for Energy Saving Flexible Job Shop Scheduling Problem Considering Machine Multi States. arXiv 2025, arXiv:2503.02180. [Google Scholar] [CrossRef]
  24. Kobrosly, L.; Graviers, M.E.C.d.; Guettier, C.; Cazenave, T. Adaptive Bias Generalized Rollout Policy Adaptation on the Flexible Job-Shop Scheduling Problem. arXiv 2025, arXiv:2505.08451. [Google Scholar] [CrossRef]
  25. Osaba, E.; Delgado, I.P.; Ali, A.M.; Miranda-Rodriguez, P.; de Leceta, A.M.F.; Rivas, L.C. Quantum Computing in Industrial Environments: Where Do We Stand and Where Are We Headed? arXiv 2025, arXiv:2505.00891. [Google Scholar] [CrossRef]
  26. Rihane, K.; Dabah, A.; AitZai, A. Learning-Based Approaches for Job Shop Scheduling Problems: A Review. arXiv 2025, arXiv:2505.04246. [Google Scholar] [CrossRef]
  27. Ferreira, C. Gene expression programming: A new adaptive algorithm for solving problems. arXiv 2001, arXiv:cs/0102027. [Google Scholar] [CrossRef]
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  29. Xu, T.; Xu, S.; Chen, X.; Chen, F.; Li, H. Multi-core token mixer: A novel approach for underwater image enhancement. Mach. Vis. Appl. 2025, 36, 37. [Google Scholar] [CrossRef]
  30. Gu, X.; Huang, M.; Liang, X. An improved genetic algorithm with adaptive variable neighborhood search for FJSP. Algorithms 2019, 12, 243. [Google Scholar] [CrossRef]
  31. Mathew, A.; Roy, A.; Mathew, J. Intelligent Residential Energy Management System using Deep Reinforcement Learning. IEEE Syst. J. 2020, 14, 5362–5372. [Google Scholar] [CrossRef]
  32. Nie, L.; Gao, L.; Li, P.; Li, X. A GEP-based reactive scheduling policies constructing approach for dynamic flexible job shop scheduling problem with job release dates. J. Intell. Manuf. 2013, 24, 763–774. [Google Scholar] [CrossRef]
  33. Abgaryan, H.; Cazenave, T.; Harutyunyan, A. Starjob: Dataset for LLM-Driven Job Shop Scheduling. arXiv 2025, arXiv:2503.01877. [Google Scholar]
  34. Liao, Z.; Chen, J.; Wang, D.; Zhang, Z.; Wang, J. BOPO: Neural Combinatorial Optimization via Best-anchored and Objective-guided Preference Optimization. arXiv 2025, arXiv:2503.07580. [Google Scholar]
  35. Corrêa, A.; Jesus, A.; Silva, C.; Moniz, S. Unraveling the Rainbow: Can value-based methods schedule? arXiv 2025, arXiv:2505.03323. [Google Scholar] [CrossRef]
  36. Chang, E.Y.; Geng, L. ALAS: A Stateful Multi-LLM Agent Framework for Disruption-Aware Planning. arXiv 2025, arXiv:2505.12501. [Google Scholar]
  37. Forniés-Tabuenca, D.; Uribe, A.; Otamendi, U.; Artetxe, A.; Rivera, J.C.; de Lacalle, O.L. REMoH: A Reflective Evolution of Multi-objective Heuristics approach via Large Language Models. arXiv 2025, arXiv:2506.07759. [Google Scholar]
  38. Sutton, R.S.; McAllester, D.; Singh, S.; Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. Adv. Neural Inf. Process. Syst. 1999, 12, 1057–1063. [Google Scholar]
  39. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  40. Brandimarte, P. Routing and scheduling in a flexible job shop by tabu search. Ann. Oper. Res. 1993, 41, 157–183. [Google Scholar] [CrossRef]
  41. Dauzère-Pérès, S.; Paulli, J. An integrated approach for modeling and solving the general multiprocessor job-shop scheduling problem using tabu search. Ann. Oper. Res. 1997, 70, 281–306. [Google Scholar] [CrossRef]
  42. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285. [Google Scholar] [CrossRef]
  43. Demirkol, E.; Mehta, S.; Uzsoy, R. Benchmarks for shop scheduling problems. Eur. J. Oper. Res. 1998, 109, 137–141. [Google Scholar] [CrossRef]
  44. Lawrence, S. Resource Constrained Project Scheduling: An Experimental Investigation of Heuristic Scheduling Techniques (Supplement); Graduate School of Industrial Administration, Carnegie-Mellon University: Pittsburgh, PA, USA, 1984. [Google Scholar]
Figure 1. Illustration of task-machine compatibility in the DFJSP.
Figure 2. Schematic overview of the proposed intelligent scheduling framework.
Figure 3. Schematic overview of the proposed intelligent scheduling framework.
Table 1. Performance metrics for the scheduling algorithm under uniform and normal distributions. The last row corresponds to our proposed method, with percentage improvements compared to the second-best baseline.

Method | Uniform Distribution: Cumulative Reward ↑ | Success Rate (%) ↑ | Response Time (s) ↓ | Normal Distribution: Cumulative Reward ↑ | Success Rate (%) ↑ | Response Time (s) ↓
Random (choose tasks randomly) | 1358.08 | 79 | 3.06 | 4869.63 | 91.4 | 3.08
Half Min-Max | 473.15 | 62 | 4.06 | 4616.40 | 83.4 | 3.30
Deadline-Aware (threshold = 4.25) | −1554.91 | 60 | 5.85 | 2586.45 | 75.8 | 5.45
Deadline-Aware (threshold = 4.5) | 92.38 | 71 | 4.11 | 2074.98 | 70.4 | 6.12
Deadline-Aware (threshold = 4.75) | 989.86 | 81 | 3.35 | 3019.20 | 80.4 | 5.05
Suitable | 1489.17 | 83 | 2.94 | 4899.56 | 93.3 | 3.03
SPT (Shortest Processing Time) | 3090.20 | 99 | 1.39 | 6545.51 | 99.8 | 1.44
Proposed Method (Ours) | 3114.8 (+0.7%) | 100 | 1.38 (−0.7%) | 6564.10 (+0.2%) | 99.9 | 1.42 (−1.4%)
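The SPT (Shortest Processing Time) row in Table 1 is a classical dispatching rule: whenever a machine becomes free, the ready operation with the smallest processing time is scheduled next. As a point of reference only, the sketch below shows a minimal generic SPT dispatcher in Python; the Operation/Machine data structures, the tie-breaking by job index, and the single-machine loop are illustrative assumptions, not the baseline implementation evaluated in the paper.

```python
# Minimal sketch of a generic SPT dispatching rule (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    job_id: int
    proc_time: float          # processing time on the chosen machine

@dataclass
class Machine:
    machine_id: int
    available_at: float = 0.0  # time at which the machine next becomes free
    schedule: List[Operation] = field(default_factory=list)

def spt_dispatch(ready_ops: List[Operation], machine: Machine) -> Operation:
    """Pick the ready operation with the shortest processing time (ties broken by job id)."""
    chosen = min(ready_ops, key=lambda op: (op.proc_time, op.job_id))
    machine.schedule.append(chosen)
    machine.available_at += chosen.proc_time
    ready_ops.remove(chosen)
    return chosen

if __name__ == "__main__":
    ops = [Operation(0, 5.0), Operation(1, 2.5), Operation(2, 4.0)]
    m = Machine(machine_id=0)
    while ops:
        op = spt_dispatch(ops, m)
        print(f"job {op.job_id} scheduled; machine free at t={m.available_at}")
```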
Table 2. Results on public benchmark instances.

Method | Brandimarte (FJSP) Cmax | Gap | Dauzère (FJSP) Cmax | Gap | Taillard (JSSP) Cmax | Gap | Demirkol (JSSP) Cmax | Gap | Lawrence (JSSP) Cmax | Gap
BKS | 173.30 | - | 2212.94 | - | 2354.16 | - | 4614.23 | - | 1107.28 | -

6 × 6
PPO | 199.10 | 24.75% | 2442.14 | 10.19% | 2478.95 | 18.97% | 5872.34 | 26.89% | 1269.10 | 15.45%
DQN | 208.80 | 29.70% | 2536.17 | 14.42% | 2807.23 | 21.30% | 6030.41 | 30.08% | 1218.68 | 10.48%
DDQN | 201.50 | 29.66% | 2440.33 | 10.05% | 2762.31 | 19.48% | 6019.01 | 29.70% | 1195.64 | 9.99%
PER | 199.30 | 26.54% | 2495.78 | 12.47% | 2790.21 | 18.31% | 5877.50 | 26.30% | 1217.77 | 10.00%
Dueling | 207.30 | 30.94% | 2483.67 | 12.92% | 2775.39 | 18.08% | 5870.89 | 26.22% | 1221.99 | 10.37%
Noisy | 199.90 | 27.16% | 2493.94 | 12.52% | 2865.91 | 21.87% | 6032.14 | 30.06% | 1264.43 | 14.24%
Distributional | 200.30 | 27.16% | 2494.52 | 12.56% | 2863.51 | 21.73% | 6031.62 | 30.02% | 1226.96 | 10.82%
Multi-step | 207.70 | 31.76% | 2588.72 | 16.93% | 2965.51 | 25.69% | 6174.04 | 33.70% | 1299.12 | 17.45%
Rainbow | 198.30 | 24.39% | 2492.33 | 12.18% | 2761.63 | 18.17% | 5995.32 | 29.80% | 1228.79 | 10.79%
Ours | 199.50 | 25.00% | 2521.17 | 12.30% | 2781.56 | 19.00% | 6013.86 | 29.90% | 1243.15 | 10.80%

10 × 5
PPO | 198.00 | 25.47% | 2401.72 | 8.28% | 2476.69 | 18.97% | 5744.45 | 23.90% | 1210.58 | 9.96%
DQN | 200.20 | 26.13% | 2552.11 | 14.91% | 2689.15 | 17.87% | 5934.09 | 27.42% | 1262.56 | 10.88%
DDQN | 201.50 | 29.56% | 2463.69 | 11.30% | 2690.08 | 17.87% | 5874.66 | 26.90% | 1243.73 | 10.44%
PER | 210.30 | 29.13% | 2496.67 | 11.59% | 2796.31 | 18.40% | 5891.01 | 27.04% | 1273.79 | 11.24%
Dueling | 210.30 | 26.57% | 2503.17 | 12.35% | 2792.16 | 18.47% | 5875.39 | 26.89% | 1275.44 | 11.26%
Noisy | 207.00 | 28.65% | 2515.12 | 13.46% | 2788.98 | 19.19% | 5894.02 | 27.26% | 1261.47 | 10.89%
Distributional | 201.30 | 26.45% | 2508.78 | 13.10% | 2797.94 | 19.13% | 5910.21 | 27.42% | 1268.34 | 10.96%
Multi-step | 202.60 | 29.71% | 2567.26 | 13.01% | 2786.77 | 19.10% | 5899.56 | 27.24% | 1266.90 | 10.91%
Rainbow | 211.50 | 32.12% | 2551.15 | 12.15% | 2790.21 | 19.29% | 5880.03 | 27.10% | 1271.90 | 10.99%
Ours | 213.59 | 32.69% | 2570.37 | 12.30% | 2798.84 | 19.50% | 5895.53 | 27.20% | 1283.73 | 11.00%

20 × 10
PPO | 210.10 | 28.73% | 2616.18 | 11.91% | 2788.28 | 19.14% | 5892.31 | 27.42% | 1249.72 | 10.80%
DQN | 196.80 | 27.86% | 2464.64 | 11.96% | 2761.48 | 19.27% | 5862.58 | 27.28% | 1247.13 | 10.75%
DDQN | 200.70 | 30.36% | 2466.14 | 11.99% | 2782.56 | 19.40% | 5875.79 | 27.34% | 1253.20 | 10.79%
PER | 201.10 | 31.66% | 2491.70 | 12.34% | 2787.47 | 19.42% | 5879.21 | 27.38% | 1246.72 | 10.74%
Dueling | 198.10 | 30.40% | 2478.85 | 12.17% | 2785.44 | 19.34% | 5877.52 | 27.31% | 1247.66 | 10.76%
Noisy | 210.90 | 29.72% | 2486.47 | 12.36% | 2788.67 | 19.38% | 5885.62 | 27.35% | 1249.34 | 10.78%
Distributional | 200.80 | 30.87% | 2508.99 | 13.07% | 2787.23 | 19.41% | 5893.61 | 27.39% | 1250.67 | 10.79%
Multi-step | 209.20 | 31.61% | 2510.60 | 13.01% | 2789.24 | 19.44% | 5898.02 | 27.41% | 1252.12 | 10.80%
Rainbow | 208.20 | 30.23% | 2510.60 | 13.01% | 2788.91 | 19.43% | 5899.56 | 27.42% | 1252.10 | 10.80%
Ours | 209.50 | 30.50% | 2517.68 | 13.10% | 2792.74 | 19.50% | 5901.43 | 27.45% | 1254.96 | 10.85%
Table 3. Hybrid experiments: Performance under uniform and normal distributions. Arrows indicate whether higher (↑) or lower (↓) is better.

Method | Uniform Distribution: Cumulative Reward ↑ | Success Rate (%) ↑ | Response Time (s) ↓ | Normal Distribution: Cumulative Reward ↑ | Success Rate (%) ↑ | Response Time (s) ↓
PPO | 3090.20 | 99.0 | 1.39 | 6545.51 | 99.8 | 1.44
DQN | 1358.08 | 79.0 | 3.06 | 4869.63 | 91.4 | 1.42
Ours + PG | 3114.80 | 100.0 | 1.38 | 6564.10 | 99.9 | 1.42
Ours + DQN | 3078.30 | 99.5 | 1.41 | 6532.70 | 99.7 | 1.44
Ours + PPO | 3146.95 | 100.0 | 1.36 | 6629.74 | 100.0 | 1.40
Table 4. Ablation study results: Performance of different model configurations.

Model Configuration | Cumulative Reward | Success Rate (%) | Response Time (s) | Average Tardiness (s) | Training Time (s)
Baseline (No RL) | 2029.5 | 90.5 | 3.75 | 5.60 | 1500
Policy Gradient Only | 2445.22 | 95.2 | 3.02 | 4.87 | 1800
Semi-Supervised Pre-training | 3053.78 | 97.6 | 2.87 | 4.21 | 2100
Mask Mechanism Only | 2523.45 | 96.4 | 3.10 | 4.65 | 2000
Full Model | 3114.8 (+16.3%) | 100.0 (+3.6%) | 1.38 (−51.9%) | 3.15 (−33.6%) | 2500 (+19.0%)
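The "Mask Mechanism Only" configuration in Table 4 restricts the policy so that infeasible task-machine assignments cannot be selected. As an illustration of the general idea only (not the paper's network or masking scheme), the following minimal PyTorch sketch assigns a large negative logit to infeasible actions before the softmax; the MaskedPolicy name, layer sizes, and feasibility tensor are hypothetical.

```python
# Minimal sketch of action masking in a policy network (illustrative only).
import torch
import torch.nn as nn

class MaskedPolicy(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor, feasible: torch.Tensor) -> torch.Tensor:
        """Return action probabilities with infeasible actions forced to probability 0."""
        logits = self.net(state)
        # Mask infeasible actions with -inf so the softmax assigns them zero probability.
        logits = logits.masked_fill(~feasible, float("-inf"))
        return torch.softmax(logits, dim=-1)

if __name__ == "__main__":
    policy = MaskedPolicy(state_dim=8, n_actions=4)
    state = torch.randn(1, 8)
    feasible = torch.tensor([[True, False, True, True]])  # action 1 is infeasible
    print(policy(state, feasible))  # masked action receives probability 0
```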