Article

An Adaptive Search Algorithm for Multiplicity Dynamic Flexible Job Shop Scheduling with New Order Arrivals

1 School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
2 College of Mechanical and Electrical Engineering, Wenzhou University, Wenzhou 325060, China
* Authors to whom correspondence should be addressed.
Symmetry 2024, 16(6), 641; https://doi.org/10.3390/sym16060641
Submission received: 9 April 2024 / Revised: 14 May 2024 / Accepted: 17 May 2024 / Published: 22 May 2024
(This article belongs to the Special Issue Symmetry in Computing Algorithms and Applications)

Abstract
In today’s customer-centric economy, the demand for personalized products has compelled corporations to develop manufacturing processes that are more flexible, efficient, and cost-effective. Flexible job shops offer organizations the agility and cost-efficiency that traditional manufacturing processes lack. However, the dynamics of modern manufacturing, including machine breakdown and new order arrivals, introduce unpredictability and complexity. This study investigates the multiplicity dynamic flexible job shop scheduling problem (MDFJSP) with new order arrivals. To address this problem, we incorporate the fluid model to propose a fluid randomized adaptive search (FRAS) algorithm, comprising a construction phase and a local search phase. Firstly, in the construction phase, a fluid construction heuristic with an online fluid dynamic tracking policy generates high-quality initial solutions. Secondly, in the local search phase, we employ an improved tabu search procedure to enhance search efficiency in the solution space, incorporating symmetry considerations. The results of the numerical experiments demonstrate the superior effectiveness of the FRAS algorithm in solving the MDFJSP when compared to other algorithms. Specifically, the proposed algorithm demonstrates a superior quality of solution relative to existing algorithms, with an average improvement of 29.90%; and exhibits an acceleration in solution speed, with an average increase of 1.95%.

1. Introduction

The individualized requirements in the market are increasing, which is placing higher demands on manufacturing systems [1,2]. To remain competitive, companies are increasing the flexibility of their manufacturing systems [3,4]. However, flexibility in the manufacturing environment can lead to increased complexity, posing challenges for production managers when making decisions [5,6]. Industry 4.0 has led to the emergence and evolution of various technologies that support decision-making in manufacturing systems [7,8]. This enables continuous iteration and upgrading of both hardware and software in manufacturing systems. Benefiting from the support and maturity of these technologies, dynamic scheduling has become a trend of general interest for enterprises and researchers. Compared to traditional scheduling, dynamic scheduling is more adaptable [9,10,11]. Specifically, the dynamic flexible job shop scheduling problem (DFJSP) is considered more valuable than the traditional job shop scheduling problem (JSP) [12,13]. This is because it encapsulates most of the scheduling challenges of high-mix, low-volume production in modern discrete manufacturing systems. Moreover, it deals with uncertainties inherent in the production environment, such as fluctuating demand, machine downtime, and unpredictable lead times [14].
Among the various iterations of the standard DFJSP, the multiplicity dynamic flexible job shop scheduling problem (MDFJSP) stands out as particularly applicable to manufacturing systems characterized by a significant volume of identical or similar jobs that can be grouped in the production process. Industries such as semiconductor manufacturing and motorcycle component production serve as prime examples of such systems [15,16,17]. Addressing the MDFJSP can assist production managers in increasing throughput, decreasing lead times, minimizing machine downtime, and maximizing resource utilization. Enterprises can enhance their market competitiveness by effectively managing the scheduling of multiple job instances, which can increase productivity and enable prompt response to customer requirements. The literature contains extensive research on the DFJSP, but few studies have addressed the MDFJSP [18,19,20]. Ding et al. (2022) [21] proposed a fluid master-apprentice evolutionary method to address the multiplicity flexible job shop scheduling problem, and Ding et al. (2024) [22] presented a multi-policy deep reinforcement learning method for the multi-objective multiplicity flexible job shop scheduling problem. However, these two studies address only static scheduling problems and are not suitable for dynamic scheduling challenges.
To fill this research gap, we investigate the MDFJSP with new order arrivals. The inclusion of numerous new jobs in new orders is more indicative of actual production scenarios [23]. However, new order arrivals frequently generate numerous new positions, leading to an extensive solution space that makes high-quality solutions difficult to achieve. The current methods for the DFJSP, such as meta-heuristics and dispatching rules, have been observed to have shortcomings in terms of time efficiency and long-term solution quality [24,25,26]. The current literature indicates that fluid models can rapidly identify near-optimal solutions in the context of multiplicity job shop scheduling. Therefore, we propose a new algorithm, the fluid randomized adaptive search (FRAS) algorithm, for the MDFJSP by incorporating a fluid model. The FRAS algorithm integrates a fluid model-based approach named the fluid construction heuristic (FCH) with the greedy randomized adaptive search (GRAS) framework. The main function of the FCH in this algorithm is to optimize the initial solution. By incorporating the FCH, which captures the system’s continuous dynamics, the FRAS algorithm offers a more robust and adaptive solution approach for the MDFJSP. The results of numerical experiments show that this novel algorithm can efficiently generate high-quality scheduling schemes for the MDFJSP with varying problem scales in a reasonable computation time. This study makes the following key contributions.
(1)
We propose a fluid randomized adaptive search algorithm for addressing the MDFJSP, effectively bridging the existing research gap in this area.
(2)
A dynamic fluid model is developed for the first time to address the MDFJSP with new order arrivals.
(3)
The proposed FRAS algorithm surpasses existing algorithms due to two design features. Firstly, we developed a fluid construction heuristic that incorporates an online fluid dynamic tracking policy to optimize the initial solution. Secondly, we propose a tabu search procedure based on the public critical operation to enhance the efficiency of the solution space search.
The following sections of this article are structured as follows: Section 2 reviews the related papers. Section 3 describes the MDFJSP and introduces a mathematical model along with an illustrative example. Section 4 delineates the FRAS algorithm. Section 5 provides the results and analyses. Finally, we summarize this article in Section 6 and outline future directions and recommendations.

2. Literature Review

This section provides a review of the relevant work, which consists of two main parts. Due to the scarcity of research on the MDFJSP, the first part focuses on methodologies used to solve the FJSP. The second part reviews the current state of research on fluid modeling in scheduling.

2.1. Methodologies for the FJSP

To solve FJSPs, researchers have proposed various optimization strategies, including dispatching rule-based and meta-heuristic approaches. Even though exact solution methods can produce the optimal solution to a problem, their complexity makes it difficult to obtain viable solutions in a reasonable timeframe [27]. Dispatching rules are widely used to address FJSPs in practical production systems [28]. However, choosing a specific dispatching rule for a particular problem is complex. Therefore, numerous methods to extract and select dispatching rules for FJSPs have been presented by researchers [29]. Furthermore, many researchers have extracted dispatching rules using genetic programming-based methods to solve the DFJSP with particular constraints [30,31]. Although dispatching rules are easy to implement, they often exhibit myopic behavior, which means they cannot guarantee even locally optimal solutions in the long run [32].
In addition to dispatching rule-based methods, many meta-heuristics have been proposed to generate feasible and promising schedule schemes [33,34]. Meta-heuristics can achieve new near-optimal solutions by utilizing global information at each rescheduling point [35,36]. However, these methods often encounter challenges in terms of time efficiency, primarily attributed to their intricate evolutionary processes and search methods in an expansive search space [37,38,39]. This limitation hinders their application in manufacturing systems, where new disturbances may happen unexpectedly, even before a new rescheduling scheme has been established. For more information about meta-heuristic-based methods, refer to the overview literature by Xie et al. (2019) [40].

2.2. Fluid Model

The fluid model was first proposed by Chen (1993) [41] to address the dynamic scheduling challenges associated with multiple classes of fluid traffic. The fluid model method offers a robust mathematical framework for analyzing and solving job shop scheduling problems with multiplicity characteristics [42,43,44,45]. By representing the discrete scheduling problem as a continuous fluid model, researchers can gain valuable insights into system behavior by analyzing performance measures and developing optimization techniques. Fluid models are deterministic, continuous approximations of stochastic, discrete systems. Several researchers have effectively employed fluid models in both static and dynamic JSPs [45,46]. However, most of the existing literature has primarily focused on addressing the JSP. When it comes to the FJSP, applying the fluid model approach becomes more challenging due to the additional complexities involved in decision-making and representation.
Because of the limited literature on fluid models for FJSPs, we can only conduct a comprehensive analysis of the existing literature on fluid modeling in JSPs. Table 1 analyzes the application of fluid models in JSP research from three key perspectives: problem type, problem scale, and objective function. Our findings indicate that fluid models are primarily used in JSP research to obtain approximate solutions in a relatively short time frame. Consequently, they are often combined with other heuristic algorithms to enhance the efficiency of the algorithmic solution. The existing research primarily concerns deterministic problems [42,44,47,48,49], with only very few papers addressing stochastic problems [45,50]. Fluid modeling can rapidly resolve large-scale problems in certain specific contexts. However, its application is contingent on certain parameters, which limits it to a restricted set of practical scenarios. Furthermore, a review of the existing literature reveals that most studies on fluid models are conducted in the context of JSPs with high multiplicity [43,44]. The application of the fluid model in these papers remains constrained, and the scenarios in which the fluid model is applicable are not sufficiently discussed. This represents an avenue for further exploration.
Despite the extensive research on DFJSPs, there is a lack of studies specifically addressing the multiplicity characteristic of DFJSPs. This research gap highlights the need to develop an optimization model and propose an efficient solution approach for addressing the complexities introduced by multiplicity in DFJSPs. By considering the unique challenges posed by multiplicity, future research efforts can contribute to the advancement of scheduling methodologies and provide valuable insights for optimizing the performance of DFJSPs in practical manufacturing environments.

3. Problem Description

This article studies a special variant of the DFJSP called the multiplicity dynamic flexible job shop scheduling problem (MDFJSP). In the MDFJSP, multiple jobs are permitted for any job type. The problem consists of M machines, S orders with random arrival times, and R job types. Each job type r comprises J_r operations, and all jobs are classified into a total of R unique types. Order s arrives randomly and contains N_sr jobs of job type r. The job quantity belonging to a specific job type r in order s is called its multiplicity (N_sr ≥ 1). In an actual manufacturing environment, the arrival of new orders containing multiple jobs of each job type follows a Poisson process [51], which means the time interval between new order arrivals obeys an exponential distribution [52]. Rescheduling occurs when a new order s arrives at time A_s, which serves as the rescheduling point. All operations are divided into four sets: completed, being processed, unprocessed, and newly arrived operations. To handle complexity, the following assumptions are made:
(1)
Machines are assumed to be ready for operation at the beginning of the time horizon.
(2)
The same type of jobs can be processed on a specified series of machines. However, it is not permitted for any given machine to perform more than one operation simultaneously.
(3)
Operations must be completed without interruption.
(4)
Operations must be conducted in a specified order, with no operation starting before the previous one has finished.
(5)
Setup times between operations are not considered.
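For concreteness, one possible in-memory representation of an MDFJSP instance is sketched below in Python (the language used for the experiments in Section 5). The class and field names are illustrative assumptions rather than part of the original formulation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class JobType:
    """A job type r with an ordered list of operations.

    proc_times[j] maps each eligible machine m to the processing time t_rjm
    of the (j+1)-th operation of this job type.
    """
    r: int
    proc_times: List[Dict[int, float]]

@dataclass
class Order:
    """An order s arriving at time A_s with N_sr jobs of each type r."""
    s: int
    arrival_time: float              # A_s, with A_1 = 0 for the first order
    multiplicity: Dict[int, int]     # job type r -> N_sr (>= 1)

@dataclass
class MDFJSPInstance:
    machines: List[int]              # machine ids 1..M
    job_types: Dict[int, JobType]    # r -> JobType
    orders: List[Order]              # sorted by arrival time A_s

# A toy instance: 2 job types, 3 machines, 2 orders (the second arrives at t = 50).
inst = MDFJSPInstance(
    machines=[1, 2, 3],
    job_types={
        1: JobType(r=1, proc_times=[{1: 5, 2: 7}, {3: 4}]),
        2: JobType(r=2, proc_times=[{2: 6}, {1: 3, 3: 5}]),
    },
    orders=[
        Order(s=1, arrival_time=0.0, multiplicity={1: 3, 2: 3}),
        Order(s=2, arrival_time=50.0, multiplicity={1: 2, 2: 2}),
    ],
)
print(sum(sum(o.multiplicity.values()) for o in inst.orders), "jobs in total")
```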
The mathematical model uses the following notations:
Notation: Description
Indexes:
s: Orders, s = 1, 2, …, S.
m: Machines, m = 1, 2, …, M.
r, r′: Job types, r, r′ = 1, 2, …, R.
n, n′: Jobs belonging to job types r and r′, n = 1, 2, …, N_r, n′ = 1, 2, …, N_r′.
j, j′: Operations of job types r and r′, j = 1, 2, …, J_r, j′ = 1, 2, …, J_r′.
Parameters:
R: The number of job types.
M: The number of machines.
J_r: The operation quantity of job type r.
J_rn: The nth job of type r.
o_rnj: The jth operation of the nth job of job type r.
o_rj: The jth operation of job type r.
t_rjm: The processing time of operation o_rj on machine m.
Sets:
M_m: The complete set of machines, M_m = {1, 2, …, M}.
M_rj: The set of machines eligible to perform operation o_rj.
R_r: The set of all job types, R_r = {1, 2, …, R}.
I_r: The set of all jobs of type r, I_r = {1, 2, …, N_r}.
O_r: The set of all operations of job type r, O_r = {1, 2, …, J_r}.
O^RJ: The complete set of operations of all job types, O^RJ = {o_rj | r ∈ R_r, j ∈ O_r}.
O_m^RJ: The set of operations that machine m can perform, O_m^RJ = {o_rj | r ∈ R_r, j ∈ O_r, m ∈ M_rj}.
Variables:
S: The number of new orders.
A_s: The arrival time of order s; order s = 1 is the first order, and A_1 = 0.
D_s: The due date of order s.
d_rn: The due date of job J_rn; d_rn = D_s if z_rns = 1.
N_sr: The total job quantity of job type r in order s.
N_s: The total job quantity in order s, N_s = Σ_{r=1}^{R} N_sr.
N_r: The total job quantity of job type r over all orders, N_r = Σ_{s=1}^{S} N_sr.
N: The number of jobs in all orders, N = Σ_{s=1}^{S} N_s.
ct_rnj: The completion time of operation o_rnj.
Decision variables:
x_rnjm: x_rnjm = 1 if o_rnj is assigned to machine m; 0 otherwise.
y_rnjr′n′j′: y_rnjr′n′j′ = 1 if o_rnj is a predecessor of o_r′n′j′; −1 if o_rnj is a successor of o_r′n′j′.
z_rns: z_rns = 1 if job J_rn belongs to order s; 0 otherwise.
We describe the mathematical model as follows:
$\min\ C_{\max} = \min\big(\max\{\, ct_{rnj} \mid r \in R_r,\ n \in I_r,\ j \in O_r \,\}\big)$  (1)
Subject to:
$ct_{rn0} = 0,\quad ct_{rnj} > 0, \quad \forall r, n, j$  (2)
$\sum_{m \in M_{rj}} x_{rnjm} = 1, \quad \forall r, n, j$  (3)
$\sum_{s=1}^{S} z_{rns} = 1, \quad \forall r, n$  (4)
$\big(ct_{rnj} - t_{rjm} - ct_{r,n,j-1}\big)\, x_{rnjm} \ge 0, \quad \forall r, n, j, m$  (5)
$\big(ct_{rnj} - t_{rjm} - A_s\big)\, z_{rns}\, x_{rnjm} \ge 0, \quad \forall r, n, j, m, s$  (6)
$\big(ct_{r'n'j'} - t_{r'j'm} - ct_{rnj}\big)\, x_{rnjm}\, x_{r'n'j'm}\, \big(y_{rnjr'n'j'} + 1\big) + \big(ct_{rnj} - t_{rjm} - ct_{r'n'j'}\big)\, x_{rnjm}\, x_{r'n'j'm}\, \big(1 - y_{rnjr'n'j'}\big) \ge 0, \quad \forall r, r', n, n', j, j', m$  (7)
Equation (1) represents the objective of minimizing the maximum completion time in the mathematical model, where C_max denotes the maximum completion time, i.e., the makespan. Equation (2) implies that the completion time of each operation is non-negative. Equation (3) indicates that each operation can be assigned to only one available machine. Equation (4) indicates that each job belongs to only one order. Equation (5) states the precedence constraint of operations for each job, while Equation (6) ensures jobs are processed only upon arrival. Equation (7) ensures that the scheduling schemes are feasible for the capacity of machines.
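As a concrete reading of constraints (2) and (5)-(7), the short sketch below checks whether a candidate schedule respects release times, operation precedence, and machine capacity, and evaluates the makespan objective of Equation (1). The schedule encoding and helper names are assumptions made for illustration only, not the paper's implementation.

```python
from typing import Dict, Tuple

# A candidate schedule maps each operation (r, n, j) to (machine, start time, completion time).
Schedule = Dict[Tuple[int, int, int], Tuple[int, float, float]]

def is_feasible(schedule: Schedule, arrival: Dict[Tuple[int, int], float]) -> bool:
    """Check a schedule against constraints (2) and (5)-(7); arrival[(r, n)] is the
    release time A_s of the order that job (r, n) belongs to."""
    for (r, n, j), (m, start, ct) in schedule.items():
        if start < 0 or ct < start:
            return False                                 # completion times are consistent (Eq. 2)
        if start < arrival[(r, n)]:
            return False                                 # jobs start only after their order arrives (Eq. 6)
        if j > 1 and start < schedule[(r, n, j - 1)][2]:
            return False                                 # operation j waits for operation j-1 (Eq. 5)
    by_machine: Dict[int, list] = {}
    for m, start, ct in schedule.values():
        by_machine.setdefault(m, []).append((start, ct))
    for intervals in by_machine.values():                # no overlap on any machine (Eq. 7)
        intervals.sort()
        if any(s2 < c1 for (_, c1), (s2, _) in zip(intervals, intervals[1:])):
            return False
    return True

def makespan(schedule: Schedule) -> float:
    """Objective (1): the latest completion time over all operations."""
    return max(ct for (_, _, ct) in schedule.values())

# Two jobs of type 1 (two operations each) on machines 1 and 2, both released at t = 0.
sched = {(1, 1, 1): (1, 0, 5), (1, 1, 2): (2, 5, 9),
         (1, 2, 1): (1, 5, 10), (1, 2, 2): (2, 10, 14)}
print(is_feasible(sched, {(1, 1): 0.0, (1, 2): 0.0}), makespan(sched))   # True 14
```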
To facilitate comprehension for the reader, Table 2 provides an illustrative example of MDFJSP with two new orders and three machines. Initially (in order 1), each job type is allocated a specific number of available positions. The dynamic scheduling system then implements the schedule until the next order arrives. The process of dynamic scheduling is repeated until all jobs have been completed.
Figure 1 presents the details of the MDFJSP example. Figure 1a illustrates that two job types, r = 1, 2, are processed on three machines, m = 1, 2, 3. Six jobs (11, 12, 13, 21, 22, and 23) in order one must be planned, and order two includes four new jobs (14, 15, 24, and 25), which arrive at time 50. First, the scheduling scheme is established based on the first order, as depicted in Figure 1a. At time 50, the ongoing operations 222, 122, and 113 are not halted; rather, the rescheduling procedure is initiated when order two arrives. All unprocessed operations are rescheduled with the proposed FRAS algorithm once a rescheduling process commences. Consequently, 19 operations must be scheduled at time 50 (the rescheduling point). Figure 1b depicts the new schedule scheme obtained following the arrival of the second order.

4. Fluid Randomized Adaptive Search Algorithm

In the construction phase of the fluid randomized adaptive search (FRAS) algorithm, the fluid construction heuristic (FCH) method is used to construct a solution step by step, considering symmetry. For each decision time, one operation and one corresponding available machine are selected according to the online fluid dynamic tracking policy. D solutions are constructed with the FCH. Then, the proposed tabu search procedure improves the best solution of the D solutions. Figure 2 presents the flow diagram of dynamic scheduling systems and the basic flow of the FRAS algorithm.

4.1. Construction Phase

4.1.1. Dynamic Fluid Model and Control Problem

The proposed dynamic fluid model has the advantage of being a linear programming model that can be solved easily using CPLEX. The solution of the fluid model is called the fluid solution. In the dynamic fluid model, operation o_rj is represented as fluid k_rj, so the set of all operations of all job types can be represented as the fluid set F = {k_rj | r ∈ R_r, j ∈ O_r}. The dynamic fluid model provides a fluid solution, which is a set of time proportions {u_mk_rj ∈ ℝ | 0 ≤ u_mk_rj ≤ 1, m ∈ M_m, k_rj ∈ F} that each machine devotes to each fluid. We illustrate this with a simple example. Suppose machine m is equipped to perform the set of operations {o_11, o_12, o_21} and, after solving the fluid model, the deduced optimal time fractions that machine m spends on each operation are {u_mk11 = 0.3, u_mk12 = 0.4, u_mk21 = 0.2}. The interpretation of these fractions is that, in each unit of time, 30%, 40%, and 20% of that duration is allocated by machine m to performing operations o_11, o_12, and o_21, respectively. The details of the dynamic fluid model are as follows:
Notation: Description
Indexes:
k_rj: The fluid that represents operation o_rj in the dynamic fluid model.
Parameters:
e_mk_rj: The processing rate of fluid k_rj on machine m.
e_k_rj: The total processing rate of fluid k_rj, e_k_rj = Σ_{m=1}^{M} e_mk_rj · u_mk_rj.
Sets:
F: The set of fluids, F = {k_rj | r ∈ R_r, j ∈ O_r}.
F_m: The subset of fluids that machine m can process, F_m = {k_rj | r ∈ R_r, j ∈ O_r, o_rj ∈ O_m^RJ}.
M_k_rj: The subset of machines that can process fluid k_rj, M_k_rj = {m | m ∈ M_m, m ∈ M_rj}.
Variables:
u_mk_rj: The proportion of processing time that machine m devotes to fluid k_rj.
c_k_rj: The completion time of fluid k_rj.
q_k_rj(t): The amount of fluid k_rj at time t.
Q_rj(t): The job quantity in operation stage o_rj at time t.
q_k_rj⁺(t): The total amount of fluid k_rj not yet processed at time t.
Q_rj⁺(t): The total job quantity in operation o_rj not yet processed at time t.
q_mk_rj⁺(t): The total amount of fluid k_rj not yet processed by machine m at time t.
Q_mrj⁺(t): The total job quantity in operation o_rj not yet processed by machine m at time t.
After the arrival of order s and before the arrival of order s + 1, the dynamic fluid model satisfies the following Equations (8)–(16); clearly, the time t ∈ [A_s, A_{s+1}). Equation (8) gives the amount of fluid k_rj at time t. The amount of fluid k_rj that has not yet been processed is calculated with Equation (9). The amount of fluid k_rj that has not yet been processed by machine m is calculated with Equations (10) and (11). The completion time of each fluid k_rj is given by Equation (12). Equation (13) indicates that the utilization rate must not exceed 100% for each machine. Equation (14) states that if q_k_rj(A_s) = 0, then the total processing rate of fluid k_rj is no more than its arrival rate; this keeps q_k_rj(t) ≥ 0, which ensures that the fluid solution is feasible. Equation (15) defines the range of the decision variables.
$q_{k_{rj}}(t) = \begin{cases} q_{k_{rj}}(A_s) + (t - A_s)\big(e_{k_{r,j-1}} - e_{k_{rj}}\big), & k_{rj} \in F,\ j \ge 2 \\ q_{k_{rj}}(A_s) - (t - A_s)\, e_{k_{rj}}, & k_{rj} \in F,\ j = 1 \end{cases}$  (8)
$q_{k_{rj}}^{+}(t) = q_{k_{rj}}^{+}(A_s) - (t - A_s)\, e_{k_{rj}}, \quad k_{rj} \in F$  (9)
$q_{m k_{rj}}^{+}(A_s) = q_{k_{rj}}^{+}(A_s)\, \frac{e_{m k_{rj}}\, u_{m k_{rj}}}{e_{k_{rj}}}, \quad k_{rj} \in F,\ m \in M_{k_{rj}}$  (10)
$q_{m k_{rj}}^{+}(t) = q_{m k_{rj}}^{+}(A_s) - (t - A_s)\, e_{m k_{rj}}\, u_{m k_{rj}}, \quad k_{rj} \in F,\ m \in M_{k_{rj}}$  (11)
$c_{k_{rj}} = \frac{q_{k_{rj}}^{+}(A_s)}{e_{k_{rj}}}, \quad k_{rj} \in F$  (12)
$\sum_{k_{rj} \in F_m} u_{m k_{rj}} \le 1, \quad m \in M_m$  (13)
$e_{k_{rj}} \le e_{k_{r,j-1}}, \quad k_{rj} \in F,\ j \ge 2,\ q_{k_{rj}}(A_s) = 0$  (14)
$u_{m k_{rj}} \in [0, 1], \quad k_{rj} \in F,\ m \in M_{k_{rj}}$  (15)
The dynamic fluid model aims to minimize the maximum completion time at each rescheduling point A_s, as shown in Equation (16).
$C_f^{s} = \min\ \max\{\, c_{k_{rj}} \mid k_{rj} \in F \,\}$  (16)
Note that we must solve the dynamic fluid model based on both the unprocessed operations and the new operations when order s arrives at the time A s . Therefore, the fluid solution changes dynamically with the arrival of new orders.
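The dynamic fluid model is solved as a linear program (the authors use CPLEX). As an illustration of how such a fluid relaxation can be set up with off-the-shelf tools, the sketch below builds a simplified makespan-oriented fluid LP with scipy.optimize.linprog. It works with absolute machine-time variables instead of the u fractions and omits the precedence-rate constraint (14), so it is a minimal sketch of the idea under those simplifying assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import linprog

def fluid_lp(q, rates):
    """Minimal fluid relaxation of the machine-allocation subproblem.

    q[k]        : remaining amount of fluid k at the rescheduling point A_s.
    rates[m][k] : processing rate e_mk of machine m on fluid k (0 if ineligible).

    Decision variables are tau[m][k], the absolute time machine m spends on
    fluid k, plus the horizon T.  Dividing tau by T afterwards recovers time
    fractions comparable to the u_mk variables of the fluid model.
    """
    M, K = len(rates), len(q)
    n = M * K
    idx = lambda m, k: m * K + k

    c = np.zeros(n + 1)
    c[-1] = 1.0                                    # minimise the horizon T

    # Equality: each fluid is fully processed, sum_m e_mk * tau_mk = q_k.
    A_eq = np.zeros((K, n + 1))
    for k in range(K):
        for m in range(M):
            A_eq[k, idx(m, k)] = rates[m][k]
    b_eq = np.array(q, dtype=float)

    # Inequality: each machine works at most T time units, sum_k tau_mk - T <= 0.
    A_ub = np.zeros((M, n + 1))
    for m in range(M):
        for k in range(K):
            A_ub[m, idx(m, k)] = 1.0
        A_ub[m, -1] = -1.0
    b_ub = np.zeros(M)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    T = res.x[-1]
    u = [[res.x[idx(m, k)] / T if T > 0 else 0.0 for k in range(K)] for m in range(M)]
    return T, u

# Example: 2 machines, 3 fluids.
T, u = fluid_lp(q=[10, 8, 6], rates=[[2.0, 1.0, 0.0], [0.0, 1.5, 2.0]])
print(round(T, 3), [[round(f, 2) for f in row] for row in u])
```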

4.1.2. Online Fluid Dynamic Tracking Policy

The dynamic fluid model above offers a fluid solution at each rescheduling point A_s. The fluid solution only specifies the fraction of time devoted to a particular operation o_rj on machine m; it does not specify the job sequence for each machine. However, dynamic flexible job shop scheduling is a process in a discrete manufacturing environment, and job sequencing must be considered along with machine assignment. Consequently, to construct high-quality discrete solutions, the fluid solution needs to be transformed into a policy that guides the selection of an operation and its assignment to an available machine. To generate this policy, two metrics are defined to measure the similarity between the constructed discrete solution and the fluid solution.
  • Fluid-Gap: At time t, the gap between the total job quantity of operation o_rj that has not been processed and the amount of fluid k_rj that has not been processed is calculated as follows:
$\mathrm{gap}_{rj}(t) = \frac{Q_{rj}^{+}(t) - q_{k_{rj}}^{+}(t)}{q_{k_{rj}}^{+}(A_s)}, \quad k_{rj} \in F$  (17)
  • Fluid–Machine-Gap: At time t, the gap between the total job quantity of operation o_rj not processed by machine m and the amount of fluid k_rj not processed by machine m is calculated as follows:
$\mathrm{gap}_{mrj}(t) = \frac{Q_{mrj}^{+}(t) - q_{m k_{rj}}^{+}(t)}{q_{m k_{rj}}^{+}(A_s)}, \quad k_{rj} \in F,\ m \in M_{k_{rj}}$  (18)
In Equation (17), Q_rj⁺(t) is related to the construction state of the discrete solution at time t; gap_rj(t) therefore measures the gap between the constructed discrete solution and the fluid solution. In Equation (18), Q_mrj⁺(t) = q_mk_rj⁺(A_s) − Q_mrj(t), where Q_mrj(t) is the number of operations o_rj that have been processed by machine m at time t; hence Q_mrj(t) is also related to the construction state of the discrete solution at time t. The gap_mrj(t) measures the gap between the processing state of operation o_rj on machine m in the constructed discrete solution and that in the fluid solution. Based on these two metrics, an online fluid dynamic tracking policy is designed, consisting of two steps: operation selection and machine selection.
  • Operation selection step: At decision point t, one operation o_rnj is selected according to Equations (19) and (20).
$o_{rj} \leftarrow \mathrm{random}\big\{\, o_{rj} \mid o_{rj} \in O^{RJ}(t),\ \mathrm{gap}_{rj}(t) \ge \mathrm{gap}^{*}(t) \,\big\}$  (19)
$o_{rnj} \leftarrow \underset{n \in I_{rj}(t)}{\arg\min}\ ct_{r,n,j-1}$  (20)
In Equation (19), O^RJ(t) is the set of operations that can be processed by the idle machines: O^RJ(t) = {o_rj | m ∈ M_idle(t), o_rj ∈ O_m^RJ, Q_rj(t) > 0}, where M_idle(t) is the set of idle machines and Q_rj(t) > 0 means there are jobs waiting in operation stage o_rj, so operation o_rj is selectable. In this step, the operation o_rj is selected randomly according to Equation (19), in which the threshold value is gap*(t) = gap⁻(t) + b(gap⁺(t) − gap⁻(t)). The parameters gap⁻(t) and gap⁺(t) are the minimum and maximum gap_rj(t) values over all operations belonging to O^RJ(t), and b is applied to balance stochasticity and greediness. However, owing to the multiplicity of our research problem, there may be multiple jobs in operation stage o_rj. In Equation (20), I_rj(t) is the set of jobs in operation stage o_rj. Therefore, the operation o_rnj with the minimum ct_{r,n,j−1} value is selected according to Equation (20); that is, the job n that arrived first at operation stage o_rj is selected.
  • Machine selection step: The machine m is selected to process the operation o_rnj chosen in the operation selection step.
$m \leftarrow \underset{m \in M_{rj} \cap M_{\mathrm{idle}}(t)}{\arg\max}\ \mathrm{gap}_{mrj}(t)$  (21)
Based on Equation (21), the machine m with the maximum value of gap_mrj(t) is chosen to perform operation o_rnj.
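A compact Python rendering of the two selection steps in Equations (19)-(21) might look as follows; the dictionaries of gap values, eligible machines, and idle machines are illustrative assumptions standing in for the scheduler's bookkeeping, not the paper's implementation.

```python
import random

def select_operation(candidates, gap, b):
    """Operation selection (Eqs. 19-20).

    candidates : dict mapping an operation stage (r, j) to the list of
                 (job n, ct_{r,n,j-1}) pairs currently waiting at that stage,
                 restricted to stages an idle machine can process.
    gap        : dict mapping (r, j) to its fluid gap gap_rj(t).
    b          : parameter balancing stochasticity and greediness.
    """
    g_min, g_max = min(gap[o] for o in candidates), max(gap[o] for o in candidates)
    threshold = g_min + b * (g_max - g_min)             # gap*(t) in Eq. (19)
    rcl = [o for o in candidates if gap[o] >= threshold]
    r, j = random.choice(rcl)                           # random pick from the candidate list
    n, _ = min(candidates[(r, j)], key=lambda p: p[1])  # earliest-arriving job (Eq. 20)
    return r, n, j

def select_machine(r, j, eligible, idle, machine_gap):
    """Machine selection (Eq. 21): idle eligible machine with the largest fluid-machine gap."""
    options = [m for m in eligible[(r, j)] if m in idle]
    return max(options, key=lambda m: machine_gap[(m, r, j)])

# Tiny usage example with made-up gap values.
cands = {(1, 1): [(1, 0.0), (2, 3.0)], (2, 1): [(1, 0.0)]}
gaps = {(1, 1): 0.6, (2, 1): 0.2}
r, n, j = select_operation(cands, gaps, b=0.3)
m = select_machine(r, j, eligible={(1, 1): [1, 2], (2, 1): [2]},
                   idle={1, 2}, machine_gap={(1, 1, 1): 0.5, (2, 1, 1): 0.4, (2, 2, 1): 0.1})
print(r, n, j, "->", m)
```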

4.1.3. The Fluid Construction Heuristic Method

Once a new order s arrives at time A_s, the dynamic fluid model is solved to obtain the fluid solution. Then, the FCH is used to construct D high-quality initial solutions at the rescheduling point A_s. Algorithm 1 presents the detailed steps of the FCH in pseudo-code. O_s^RNJ is the set of operations containing the unprocessed operations and the newly arrived operations at the rescheduling point A_s. O^RNJ(t) is the set of unprocessed operations of all jobs at time t. D_FCH = {d_1, …, d_D} is the set of initial solutions constructed by the FCH, in which d_i is the ith solution.

4.2. Local Search

Tabu search has been used effectively in diverse scheduling problems since it was proposed by Glover (1990) [53]. In the local search phase of the FRAS algorithm, an improved tabu search (ITS) procedure is presented to improve the feasible solutions obtained with the FCH. In the ITS, neighborhood solutions are formed by relocating a public critical operation to a different machine and by repositioning a public critical operation on the same machine. In the disjunctive graph representation of the current solution, the longest path is the critical path, and each operation on a critical path is a critical operation. However, the current solution may have more than one critical path. An operation is a public critical operation if it belongs to all the critical paths [54]. For example, in Figure 3, the three public critical operations 111, 112, and 211 belong to both critical paths.
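In code, the public critical operations can be obtained by enumerating all longest (critical) paths of the disjunctive graph and intersecting their node sets. The sketch below assumes the graph is given as an adjacency list whose edge weights are processing times; the representation and helper names are illustrative, not the paper's implementation.

```python
from typing import Dict, Hashable, List, Set, Tuple

def critical_paths(succ: Dict[Hashable, List[Tuple[Hashable, float]]],
                   source: Hashable, sink: Hashable) -> List[List[Hashable]]:
    """Return every longest source-to-sink path of a DAG (the critical paths)."""
    dist: Dict[Hashable, float] = {}            # longest distance from a node to the sink
    def longest(v):
        if v == sink:
            return 0.0
        if v not in dist:
            dist[v] = max((w + longest(u) for u, w in succ.get(v, [])), default=float("-inf"))
        return dist[v]

    paths: List[List[Hashable]] = []
    def extend(v, path):
        if v == sink:
            paths.append(path)
            return
        for u, w in succ.get(v, []):
            if abs(w + longest(u) - longest(v)) < 1e-9:  # edge lies on a longest path
                extend(u, path + [u])
    extend(source, [source])
    return paths

def public_critical_operations(succ, source, sink) -> Set[Hashable]:
    """Operations that belong to *all* critical paths."""
    paths = critical_paths(succ, source, sink)
    common = set(paths[0])
    for p in paths[1:]:
        common &= set(p)
    return common - {source, sink}

# Toy disjunctive graph: two critical paths of length 9 sharing operation 'a'.
g = {"S": [("a", 3)], "a": [("b", 6), ("c", 6)], "b": [("T", 0)], "c": [("T", 0)]}
print(public_critical_operations(g, "S", "T"))   # {'a'}
```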
Algorithm 1 The fluid construction heuristic method.
Require: D_FCH
Ensure: D_FCH
1: Initialization:
2: Solve the dynamic fluid model and obtain the fluid solution.
3: The set of initial solutions: D_FCH ← ∅.
4: for i = 1 to D do
5:    The current time: t ← A_s.
6:    Next idle time of all machines: {time_idle_m | m ∈ M_m}.
7:    O^RNJ(t) ← O_s^RNJ. // Construct a solution d_i with the online fluid dynamic tracking policy.
8:    while O^RNJ(t) ≠ ∅ do // While there are operations that have not been assigned.
9:       if O^RJ(t) = ∅ then // No operation can be selected at the current decision point t.
10:         // Move the time to the next decision point.
11:         t ← min{time_idle_m | m ∈ M_m}.
12:      else
13:         // Select the operation o_rnj according to the operation selection step.
14:         o_rj ← random{o_rj | o_rj ∈ O^RJ(t), gap_rj(t) ≥ gap*(t)}.
15:         o_rnj ← argmin_{n ∈ I_rj(t)} ct_{r,n,j−1}.
16:         // Select machine m to perform operation o_rnj.
17:         m ← argmax_{m ∈ M_rj ∩ M_idle(t)} gap_mrj(t).
18:         // Update the unprocessed operation set O^RNJ(t).
19:         Remove the operation o_rnj from O^RNJ(t).
20:         // Update the next idle time of machine m.
21:         time_idle_m ← time_idle_m + t_rjm.
22:      end if
23:   end while
24:   Add the constructed solution d_i to D_FCH.
25: end for
26: return D_FCH = {d_1, …, d_D}.
In the ITS, a public critical operation is repositioned on the same machine based on the neighborhood structure N_π [55], or moved to another machine and inserted into a feasible position according to the neighborhood structure N_1(α_ρ) [56]. Moreover, the makespan estimation approach of [56] is adopted to evaluate the neighborhood quality and accelerate the local search. The procedure of the ITS is outlined in Algorithm 2.
Algorithm 2 The improved tabu search algorithm.
Require: The best solution d* in the set D_FCH.
Ensure: The best solution d_best.
1: Initialize the maximum iteration number iter_stop.
2: Initialize the count of iterations without improvement of the optimal solution: iter_number ← 0.
3: while iter_number < iter_stop do
4:    The neighborhood structures N_π and N_1(α_ρ) are selected with equal probability to generate neighborhood solutions.
5:    Estimate the makespan of all neighborhood solutions.
6:    if the aspiration criterion is fulfilled then
7:       iter_number ← 0.
8:       Update both the current solution d* and the best solution d_best.
9:    else
10:      iter_number ← iter_number + 1.
11:      Choose the best non-tabu solution from all neighborhood solutions as the current solution.
12:   end if
13:   Update the tabu list.
14: end while
15: return the best solution d_best.
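Algorithm 2 relies on a tabu list and an aspiration criterion; the generic skeleton below shows that control flow on a toy minimisation problem. It is a hedged illustration of the mechanism only: the real ITS works on the two job-shop neighborhood structures and the makespan estimate, which are not reproduced here.

```python
from collections import deque

def tabu_search(init, neighbors, cost, iter_stop=40, tabu_len=10):
    """Generic tabu-search skeleton mirroring the control flow of Algorithm 2.

    neighbors(sol) returns a list of (move, candidate) pairs; cost() evaluates a
    candidate (in the ITS this would be the estimated makespan).  A tabu move is
    still accepted if it improves the best-known solution (aspiration criterion).
    """
    best, current, best_cost = init, init, cost(init)
    tabu, no_improve = deque(maxlen=tabu_len), 0
    while no_improve < iter_stop:
        candidates = sorted(((cost(sol), move, sol) for move, sol in neighbors(current)),
                            key=lambda x: x[0])
        if not candidates:
            break
        # Best candidate that is either non-tabu or satisfies the aspiration criterion;
        # if every move is tabu and non-improving, fall back to the best candidate.
        chosen = next(((c, move, sol) for c, move, sol in candidates
                       if c < best_cost or move not in tabu), candidates[0])
        c, move, current = chosen
        tabu.append(move)
        if c < best_cost:
            best, best_cost, no_improve = current, c, 0
        else:
            no_improve += 1
    return best, best_cost

# Toy usage: minimise (x - 3)^2 over integers with +1 / -1 moves.
sol, val = tabu_search(init=10,
                       neighbors=lambda x: [("inc", x + 1), ("dec", x - 1)],
                       cost=lambda x: (x - 3) ** 2)
print(sol, val)   # 3 0
```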

4.3. Implementation

Algorithm 3 details the procedure of the proposed FRAS algorithm in pseudo-code. The FRAS algorithm obtains the best solution d_global^s at each rescheduling point A_s.
Algorithm 3 The FRAS algorithm.
1: for s = 1 to S do
2:    // A new order arrives.
3:    New order s arrives at time A_s and triggers the rescheduling.
4:    Update the set O_s^RNJ.
5:    while the termination condition is not fulfilled do
6:       // Generate D initial feasible solutions for order s with Algorithm 1.
7:       {d_1, d_2, …, d_D} ← FCH().
8:       // Select the best solution from D_FCH.
9:       d* ← argmin_{d ∈ {d_1, …, d_D}} makespan(d).
10:      // Improve the solution d* with Algorithm 2.
11:      d_best ← ITS(d*).
12:      // Update the best solution d_global^s.
13:      d_global^s ← argmin_{d ∈ {d_global^s, d_best}} makespan(d).
14:   end while
15:   Execute the solution d_global^s in the dynamic scheduling system.
16: end for

5. Experiments and Results

In this section, various experiments are performed to compare the efficiency and effectiveness of the FRAS algorithm. First, dynamic cases are designed to verify the performance of the FRAS algorithm. Second, the FRAS algorithm is applied to a real-world manufacturing plant case study. In practical dynamic environments, achieving satisfactory solutions quickly is crucial. Therefore, the termination condition is set as a CPU time limit of n_s × M × t milliseconds, where n_s represents the total number of jobs pending rescheduling at point s, M represents the number of machines, and t is a pre-defined constant (set to 200 in the experiments). All algorithms undergo 30 independent executions on each case, using a PC equipped with an Intel(R) Core(TM) i5-9300 CPU @ 2.40 GHz and 8 GB RAM, and are implemented in Python 3.6. Furthermore, the MDFJSP instances (MD01–MD12) were generated based on real-world problem scenarios. Table 3 presents the generation parameters for the instances; in Table 3, ‘randi’ denotes a uniform distribution over integers.
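Because order arrivals follow a Poisson process (Section 3), their inter-arrival times can be sampled from an exponential distribution, and the ‘randi’ entries of Table 3 correspond to uniformly drawn integers. The generator below sketches this construction; all numeric ranges are placeholder assumptions, not the exact settings of Table 3.

```python
import random

def generate_instance(n_machines=10, n_job_types=5, n_orders=3,
                      mean_interarrival=500.0, seed=0):
    """Sample a random MDFJSP instance in the spirit of Table 3.

    Operation counts, eligible-machine counts, processing times and job
    multiplicities are drawn with randint ('randi'); order inter-arrival
    times are exponential, so arrivals follow a Poisson process.
    """
    rng = random.Random(seed)
    job_types = {}
    for r in range(1, n_job_types + 1):
        ops = []
        for _ in range(rng.randint(5, 10)):                      # operations per job type
            eligible = rng.sample(range(1, n_machines + 1), rng.randint(1, 3))
            ops.append({m: rng.randint(10, 50) for m in eligible})  # t_rjm
        job_types[r] = ops

    orders, t = [], 0.0
    for s in range(1, n_orders + 1):
        if s > 1:
            t += rng.expovariate(1.0 / mean_interarrival)        # exponential inter-arrival
        orders.append({
            "s": s, "arrival": round(t, 1),
            "multiplicity": {r: rng.randint(10, 30) for r in job_types},
        })
    return {"machines": list(range(1, n_machines + 1)),
            "job_types": job_types, "orders": orders}

inst = generate_instance()
print([o["arrival"] for o in inst["orders"]])
```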

5.1. Parameter Settings

Parameter selection plays a crucial role in algorithm performance. Three parameters affect the efficiency of the proposed FRAS algorithm. In this section, we calibrate the algorithm parameters with the Taguchi method for the design of experiments (DOE) [57]. The DOE is an effective and efficient investigative method widely used to evaluate the effect of multiple factors on different algorithms. The three parameters {D, iter_stop, b} define the FRAS: D is the number of initial feasible solutions generated with the FCH method in each iteration of the FRAS algorithm, iter_stop is the permitted maximum number of iterations without improvement in the ITS, and b is the threshold parameter used to balance the stochasticity and greediness of the proposed algorithm. Ten randomly generated instances are selected to identify the effect of the three parameters. The parameter D has four possible values, {10, 20, 40, 80}; the parameter iter_stop can be {10, 20, 30, 40}; and the parameter b can be {0.1, 0.2, 0.3, 0.4}. As shown in Table 4, the parameter combinations are represented with the orthogonal array L16. The average makespans C_avg of the 10 instances, obtained through 30 independent runs of the FRAS, are used as the response values in our analysis. Figure 4 illustrates the trend in factor levels and shows that the FRAS algorithm performs better with the control parameters iter_stop = 40, D = 40, and b = 0.3.
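The Taguchi analysis reduces to computing, for each factor level, the mean response over the runs in which that level appears. The sketch below builds a 16-row orthogonal array for the three four-level factors and computes these main effects; the response values are synthetic placeholders, not the measured C_avg values behind Figure 4.

```python
import random
from statistics import mean

# Factor levels from Section 5.1; the responses below are synthetic placeholders.
levels = {"D": [10, 20, 40, 80], "iter_stop": [10, 20, 30, 40], "b": [0.1, 0.2, 0.3, 0.4]}

# A 16-row orthogonal array of strength 2 for three 4-level factors:
# rows (i, j, (i + j) mod 4) cover every pair of levels exactly once.
design = [{"D": levels["D"][i], "iter_stop": levels["iter_stop"][j], "b": levels["b"][(i + j) % 4]}
          for i in range(4) for j in range(4)]

rng = random.Random(1)
responses = [rng.uniform(900, 1100) for _ in design]   # placeholder C_avg values

def main_effects(design, responses):
    """Mean response per factor level, as visualised in the factor-level trend plot."""
    return {f: {lvl: mean(r for row, r in zip(design, responses) if row[f] == lvl)
                for lvl in levels[f]}
            for f in levels}

for factor, table in main_effects(design, responses).items():
    print(factor, {lvl: round(v, 1) for lvl, v in table.items()})
```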

5.2. Validation of Proposed Methods

We conduct a set of experiments to assess the effectiveness of the proposed FCH and ITS methods in the FRAS framework for solving instances of the MDFJSP. Table 5 summarizes the experimental results. Two variants of the FRAS algorithm are considered: FRAS_RANDOM, a variant obtained by replacing the FCH with a random initialization method in the FRAS framework, and FRAS_TS, a variant obtained by replacing the ITS with a classical tabu search (TS) procedure in the FRAS algorithm. The performance of these variants is compared and analyzed to assess the impact of the FCH and ITS methods in the FRAS framework.
Table 5 presents data on the best and average makespan for each algorithm in the “Best” and “Avg” columns, respectively. The rows labeled “#best”, “#even”, and “#worse” indicate the number of cases where the FRAS algorithm outperforms, equals, or lags behind the reference methods. The “RPD” column indicates the average relative percentage deviation, calculated using Equation (22). Smaller RPD values signify superior algorithm performance.
$RPD = \frac{C_{avg} - C_{best}}{C_{best}} \times 100\%$  (22)
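For reference, Equation (22) amounts to the following one-line helper; the numbers in the usage line are arbitrary.

```python
def rpd(c_avg: float, c_best: float) -> float:
    """Relative percentage deviation of Equation (22)."""
    return (c_avg - c_best) / c_best * 100.0

print(round(rpd(1025.0, 1000.0), 2))   # 2.5
```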
Table 5 demonstrates that the FRAS algorithm achieves the best and average makespan among all 12 instances. It outperforms FRAS_TS in 11 out of 12 instances, with FRAS_TS only achieving the best makespan for MD07. Additionally, the FRAS algorithm outperforms FRAS_RANDOM in all 12 instances. These results prove the effectiveness of the proposed FCH and ITS methods. Comparing FRAS_TS and FRAS_RANDOM, it is observed that FRAS_TS generally outperforms FRAS_RANDOM in most instances. This suggests that the FCH method has a more significant impact on improving the FRAS’s performance than the ITS method. Moreover, the FRAS algorithm exhibits a smaller RPD value and displays less fluctuation in the results, indicating that the proposed methods enhance the stability and robustness of the FRAS algorithm.

5.3. Comparison with Existing Algorithms

In this section, the FRAS algorithm is compared with the self-learning genetic algorithm (SLGA) [58], the two-stage genetic algorithm (2SGA) [59], and the greedy randomized adaptive search (GRAS) algorithm [38,60]. These three algorithms are utilized for solving combinatorial optimization problems. The SLGA incorporates Q-learning reinforcement learning methods, leading to improved performance compared to the classical genetic algorithm. The GRAS includes a successful greedy initialization method and effectively addresses FJSPs in both static and dynamic environments. The 2SGA combines fast neighborhood estimation, tabu search, and path relinking operators to successfully address FJSPs. These algorithmic approaches have demonstrated their effectiveness and efficiency in tackling complex optimization challenges in various domains. These algorithms are activated to generate a new schedule when rescheduling is triggered. The parameter settings for all comparative algorithms are selected from their corresponding literature, where their values were established through experimental determination. Table 6 lists the results.
Furthermore, the total computation time (TCPU) of each algorithm in every run is monitored. The TCPU represents the cumulative time expended by the algorithms across all rescheduling points. It encompasses the time taken to produce the optimal new scheduling scheme at each rescheduling point, known as the response time. Additionally, the total response time (TR) for all orders in each run is documented. Table 7 summarizes the mean TCPU and the mean TR times.
Table 7 demonstrates that the FRAS algorithm consistently achieves the best solution in all 12 instances with minimal time consumption. In Figure 5b, the FRAS algorithm demonstrates the smallest RPD value and fluctuations, indicating enhanced stability and robustness. The mean TCPU of all algorithms is depicted in Figure 5c, revealing that the four algorithms incur comparable TCPU times. Nevertheless, as shown in Figure 5d, the mean TR of the FRAS algorithm significantly outperforms the reference algorithms, making the FRAS algorithm more suitable for practical production applications, particularly in scenarios with frequent dynamic events. Moreover, the computational efficiency of the FRAS algorithm is pivotal, particularly when scaling to larger, more complex problem instances involving an increase in machines (M), job types (R), and orders (S). This growth is accompanied by an escalation in computational demand, as measured by TCPU and TR, due to the expanding and increasingly intricate solution space. The algorithm’s search through this space necessitates more computational resources and time to secure optimal or near-optimal solutions. The influx of new orders intensifies the scheduling complexity, prompting more frequent rescheduling and a higher number of jobs to be reallocated at each rescheduling point. This dynamic facet of the problem amplifies the computational burden, necessitating continuous adaptation and reevaluation by the algorithm in response to evolving conditions.
It is important to note that response time is essential when dealing with scheduling problems in actual manufacturing, especially when dynamic events are involved. Figure 6 presents a mean response time comparison for the four algorithms across all rescheduling points in four representative cases. It illustrates that the FRAS algorithm consistently exhibits shorter response times than the reference algorithms for the majority of rescheduling points. In addition, the mean response time of the FRAS algorithm shows smaller fluctuations across all orders. This is owing to the addition of the FCH method to the FRAS algorithm: the FCH constructs D solutions and chooses the best one as the initial solution in each iteration of the FRAS algorithm, which ensures both high quality and diversity in the initial solutions. Furthermore, the FRAS algorithm maintains a well-balanced approach between exploitation and exploration capabilities through the selection of appropriate parameters iter_stop and b. To statistically assess FRAS’s performance against the reference algorithms, the Wilcoxon test [61] is employed. Furthermore, we perform a sensitivity analysis to assess the impact of the time interval between new order arrivals on the algorithm’s performance. Due to space limitations, a more detailed description is provided in Appendix C and Appendix D.

5.4. Industrial Application

To further validate the effectiveness of the FRAS algorithm, we conducted a series of practical case studies from a real-world motorcycle part manufacturing factory, using process information obtained from that factory (details in Appendix E). The manufacturing system comprises 20 machine tools, including various CNC machines, used to process three different types of motorcycle parts with flexible process routes, as shown in Figure 7. The factory’s production data revealed the processing of hundreds of jobs daily, with orders containing multiple jobs of each part type arriving randomly. This scenario exhibits large-scale, multiplicity, and dynamic production characteristics. Five manufacturing cases (Data01–Data05) are selected based on enterprise orders, with additional information provided in Appendix E.
For instance, Data01’s initial schedule is created for order 1-1, consisting of 51 new parts across three types, arriving at time 0 (stage 1). Subsequently, order 1-2, comprising 65 new parts and arriving at time 537, triggers the transition to the next stage (stage 2). The FRAS algorithm then generates a new schedule after order 1-2 arrives. Figure 8 illustrates both the initial and rescheduled schemes for Data01.
Table 8 demonstrates that the FRAS algorithm consistently achieves the best optimal and average makespan results across all cases. More importantly, the FRAS algorithm exhibits significantly reduced mean total response times (TR), ranging from 40% to 90% lower than those of the GRAS and more than 70% lower than those of the other reference algorithms. These results emphasize the strong suitability of the FRAS algorithm for tackling the MDFJSP. Additionally, the FRAS algorithm consistently outperforms the reference algorithms regarding mean TR time and completion time. This indicates that the FRAS algorithm processes all parts more efficiently and can swiftly generate new rescheduling schemes in response to dynamic events. The FRAS algorithm shows significant promise for effectively addressing practical production challenges characterized by multiplicity and dynamic flexibility in shop scheduling.
Furthermore, the FRAS algorithm’s practical utility shines through its capacity to deliver significant advantages across a spectrum of manufacturing contexts. In high-mix, low-volume sectors, such as aerospace and automotive parts production, where customization and product diversity are paramount, the algorithm’s proficiency in orchestrating various job types enhances the scheduling efficiency, shortens lead times, and elevates customer satisfaction. In just-in-time production environments, like electronics and consumer goods, marked by elevated inventory costs and variable market needs, the FRAS algorithm’s dynamic rescheduling capabilities ensure the seamless synchronization with real-time orders, minimizing stockpiles and waste. Its rescheduling agility is equally crucial in emergency response scenarios, where it facilitates rapid realignment in the face of unexpected disruptions, such as production disturbances, aiding in the swift restoration of operations and the mitigation of losses. Furthermore, for contract manufacturing enterprises where agility and rapid response are vital, the FRAS algorithm offers a competitive edge by enabling nimble scheduling adjustments to accommodate fluctuating client requirements. In each of these settings, the FRAS algorithm’s ability to navigate complexity and adapt to changing conditions not only substantiates its theoretical underpinnings but also yields tangible benefits, making it a compelling tool for elevating operational efficiency and resilience in the manufacturing landscape.

6. Conclusions and Perspective

This article investigates the multiplicity dynamic flexible job shop scheduling problem (MDFJSP) with new order arrivals. A novel fluid randomized adaptive search (FRAS) algorithm is proposed to minimize the makespan. The proposed FRAS algorithm comprises two phases. In the first phase, a fluid construction heuristic (FCH) incorporating an online fluid dynamic tracking policy is presented to generate high-quality initial solutions. In the second phase, we employ an improved tabu search to enhance the search efficiency in the solution space. The efficiency of the FRAS algorithm is validated by a comparison with its variants FRAS_TS and FRAS_RANDOM. Further, the performance of the FRAS algorithm is compared with the self-learning genetic algorithm, the two-stage genetic algorithm, and the greedy randomized adaptive search algorithm. The results indicate that the FRAS algorithm outperforms the other algorithms in terms of effectiveness and efficiency for solving the MDFJSP. Statistical analysis using the Wilcoxon test further validates FRAS’s outperformance of the reference algorithms. However, the FRAS algorithm exhibits a prolonged computational time when addressing large-scale problems, rendering it unsuitable for the efficient response requirements of large-scale optimization problems.
In the future, researchers could explore multi-objective approaches for MDFJSPs. Furthermore, expanding the problem scope to incorporate machine breakdowns and changes in due dates would provide valuable insights for addressing real-world complexities.

Author Contributions

Conceptualization, L.D.; methodology, L.D., D.L. and W.F.; software, L.D.; validation, L.D. and W.F.; formal analysis, L.D., D.L. and M.R.; investigation, D.L.; writing—original draft, D.L.; writing—review and editing, Z.G., D.L., M.R. and W.F.; supervision, Z.G.; project administration, Z.G.; funding acquisition, M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 52275489), and the National Program on Key Basic Research Project (Grant No. 2021YFB3301902).

Data Availability Statement

The data that support the findings of this study are openly available at https://doi.org/10.5281/zenodo.8369652 (accessed on 18 May 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

List of Abbreviations

Abbreviation: Definition
JSP: Job shop scheduling problem
FJSP: Flexible job shop scheduling problem
DFJSP: Dynamic flexible job shop scheduling problem
MDFJSP: Multiplicity dynamic flexible job shop scheduling problem
FRAS: Fluid randomized adaptive search
FCH: Fluid construction heuristic
ITS: Improved tabu search
RPD: Average relative percentage deviation
2SGA: Two-stage genetic algorithm
GRAS: Greedy randomized adaptive search
SLGA: Self-learning genetic algorithm
TCPU: Total computation time
TR: Total response time
TS: Tabu search

Appendix A. Comparison with Optimal Value

This section tests the FRAS algorithm on multiplicity instances (data01–data08) and compares its results with the optimal values, which are obtained with the KNITRO 9.0.1 solver. Table A1 shows the optimal values obtained and the corresponding computation times. The KNITRO 9.0.1 solver can only solve instances with N ≤ 9 and M ≤ 10 to optimality within the limit of two hours of computation time. The best solutions obtained by the FRAS algorithm on data06 and data07 outperform those of KNITRO, which terminates with a large optimality gap. However, the exact method is not effective for solving problems with a larger number of jobs and machines.
Table A1. Computation results on generated instances.
Instance | N × M | KNITRO 9.0.1 C_max | KNITRO t (s) | Optimality Gap (%) | FRAS Best (Avg) | FRAS t (s)
data01 | 3 × 3 | 303 | 0.34 | 0 | 303 (303) | 0.70
data02 | 6 × 3 | 551 | 0.48 | 0 | 551 (553) | 1.05
data03 | 6 × 5 | 370 | 0.59 | 0 | 370 (371) | 0.85
data04 | 9 × 5 | 555 | 0.72 | 0 | 555 (557) | 1.19
data05 | 9 × 10 | 272 | 46.76 | 0 | 272 (272.8) | 20.06
data06 | 15 × 10 | 583 | 7200 | 19.68 | 394 (396.2) | 12.26
data07 | 20 × 10 | 1574 | 7200 | 99.23 | 770 (772.5) | 22.88
data08 | 40 × 10 | - | - | - | 1457 (1462.7) | 40.10

Appendix B. Experiments on Static Datasets

In this section, experiments on multiplicity static FJSP instances are conducted to give an idea of the general behavior of the proposed FRAS algorithm. Because no benchmark instances exist for the multiplicity static or dynamic FJSP so far, this article designs 10 multiplicity static FJSP instances (LSM-Mk01–LSM-Mk10) based on the static FJSP benchmark datasets (Mk01–Mk10) [62]. The standard benchmarks contain a single job per job type; therefore, each job is duplicated several times (N_r) to create large-scale multiplicity instances (M × N ≥ 1000). To further verify the efficiency of the proposed FRAS algorithm, we compare it with the tabu search algorithm (TS) [63], the genetic algorithm (GA) [12], and two state-of-the-art algorithms, the two-stage genetic algorithm (2SGA) [59] and the greedy randomized adaptive search algorithm (GRAS) [38]. All algorithms are run 20 times independently. The results are shown in Table A2.
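The expansion from a standard Mk benchmark to a large-scale multiplicity instance simply duplicates every job N_r times. A minimal sketch of that step, assuming the benchmark has already been parsed into per-job operation lists, is shown below.

```python
from copy import deepcopy

def expand_multiplicity(jobs, n_r):
    """Duplicate each benchmark job n_r times to create a multiplicity instance.

    `jobs` maps a job-type id r to its list of operations, where each operation
    is a dict {machine: processing_time}.  The result enumerates the individual
    job instances (r, n) that the multiplicity instance contains.
    """
    return {(r, n): deepcopy(ops) for r, ops in jobs.items() for n in range(1, n_r + 1)}

# Toy benchmark with 2 job types, duplicated 20 times each -> 40 jobs.
mk_like = {1: [{1: 5, 2: 7}, {3: 4}], 2: [{2: 6}, {1: 3}]}
expanded = expand_multiplicity(mk_like, n_r=20)
print(len(expanded))   # 40
```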
In Table A2, the Best and Avg columns give each algorithm’s best and average makespan, and the best solutions obtained by each algorithm are shown in bold. The rows #better, #even, and #worse denote the number of cases in which the FRAS algorithm performs better than, equal to, or worse than the reference algorithms. The column RPD is the average relative percentage deviation of the objective over 20 runs; Equation (A1) defines its calculation, where C_avg and C_best are the average and minimum makespan, respectively. A smaller RPD indicates better algorithm performance.
$RPD = \frac{C_{avg} - C_{best}}{C_{best}} \times 100\%$  (A1)
The results in Table A2 show that the FRAS algorithm obtains the best solutions in 9 out of the 10 instances, while 2SGA obtains the best solution for LSM-Mk03 and LSM-Mk04. Therefore, the FRAS algorithm performs better than the reference algorithms with respect to the best solution.
Table A2. Comparison of the FRAS and reference algorithms on generated static instances.
Instance | R | N_r | N × M | TS Best (Avg) | TS RPD | GA Best (Avg) | GA RPD | 2SGA Best (Avg) | 2SGA RPD | GRAS Best (Avg) | GRAS RPD | FRAS Best (Avg) | FRAS RPD
LSM-Mk01 | 10 | 20 | 200 × 6 | 1212 (1241) | 69.54 | 1024 (1025) | 40.03 | 782 (784) | 7.10 | 941 (942) | 28.69 | 732 (732) | 0.00
LSM-Mk02 | 10 | 20 | 200 × 6 | 973 (999) | 95.12 | 840 (851) | 66.21 | 567 (570) | 11.33 | 836 (847) | 65.43 | 512 (512) | 0.00
LSM-Mk03 | 15 | 10 | 150 × 8 | 2780 (2984) | 46.27 | 2555 (2595) | 27.21 | 2040 (2040) | 0.00 | 2545 (2625) | 28.68 | 2040 (2047) | 0.34
LSM-Mk04 | 15 | 10 | 150 × 8 | 889 (929) | 52.80 | 780 (781) | 28.45 | 608 (608) | 0.00 | 687 (692) | 13.82 | 617 (622) | 2.30
LSM-Mk05 | 15 | 20 | 300 × 4 | 4687 (4782) | 39.21 | 4371 (4395) | 27.95 | 3602 (3607) | 5.01 | 3628 (3655) | 6.40 | 3435 (3435) | 0.00
LSM-Mk06 | 10 | 10 | 100 × 15 | 1221 (1262) | 139.02 | 1139 (1140) | 115.91 | 584 (585) | 10.80 | 617 (623) | 17.99 | 528 (529) | 0.19
LSM-Mk07 | 20 | 10 | 200 × 5 | 2483 (2578) | 85.33 | 2170 (2192) | 57.58 | 1559 (1565) | 12.51 | 1898 (1911) | 37.38 | 1391 (1391) | 0.00
LSM-Mk08 | 20 | 10 | 200 × 10 | 6226 (6263) | 19.75 | 6056 (6075) | 16.16 | 5430 (5442) | 4.05 | 5489 (5505) | 5.26 | 5230 (5230) | 0.00
LSM-Mk09 | 20 | 10 | 200 × 10 | 5274 (5294) | 74.66 | 4723 (4755) | 56.88 | 3337 (3344) | 10.33 | 3495 (3545) | 16.96 | 3031 (3036) | 0.16
LSM-Mk10 | 20 | 10 | 200 × 15 | 4102 (4364) | 127.53 | 3649 (3671) | 91.40 | 2212 (2217) | 15.59 | 3217 (3228) | 68.30 | 1918 (1921) | 0.16
#better | | | | 10 | | 10 | | 8 | | 10 | | |
#even | | | | 0 | | 0 | | 1 | | 0 | | |
#worse | | | | 0 | | 0 | | 1 | | 0 | | |
Figure A1a shows that, among the compared algorithms, 2SGA is the closest to the FRAS algorithm, but the FRAS algorithm is superior to 2SGA in 8 out of the 10 instances (all except LSM-Mk03 and LSM-Mk04). Figure A1b shows the interval plot for the RPD of the FRAS algorithm and the reference algorithms (CI: 95%), which is used to check whether the FRAS algorithm statistically outperforms the reference algorithms. The FRAS algorithm obtains a better RPD value and shows smaller fluctuations in the results, indicating better stability and robustness.
Figure A1. The average makespan of different static cases and interval plot for RPD of the five algorithms.
In Figure A2, it is easy to see that the convergence curve of the FRAS algorithm stays below the convergence curves of the other algorithms in all instances except LSM-Mk03 and LSM-Mk04. The FRAS algorithm has the lowest initial iteration point in 9 out of the 10 instances, and in 8 instances the final solutions of all reference algorithms are not even better than the initial solution of the FRAS algorithm. This is because the proposed FCH significantly improves the performance of the FRAS algorithm, giving it a competitive head start toward the optimum. Moreover, the FRAS algorithm converges to its best solution faster than the reference algorithms. In general, we can conclude that the FRAS algorithm clearly outperforms the other algorithms when solving large-scale multiplicity static FJSPs.
Figure A2. The mean convergence curves of LSM-Mk01–LSM-Mk10.

Appendix C. Statistical Analysis

In this section, the Wilcoxon test is used to compare the FRAS algorithm with the reference algorithms and check whether the proposed algorithm statistically outperforms all of them. The Wilcoxon test is a non-parametric method that is suitable for multiple-problem analysis of continuous optimization problems [61]. Table A3 summarizes the results of applying the Wilcoxon test to the average makespan; it displays the sum of rankings obtained in each comparison and the associated p-value. The FRAS algorithm outperforms the reference algorithms in independent pairwise comparisons because the p-values are below α = 0.05. However, Wilcoxon’s test performs individual comparisons between two algorithms; if we include them within multiple comparisons, the p-value obtained is p = 0.0155. It can then be deduced that the FRAS algorithm outperforms the three reference algorithms in terms of solution quality at a significance level of α = 0.05.
Table A3. Wilcoxon test considering average makespan.
FRAS vs. | R+ | R− | p-Value
SLGA | 36 | 0 | 0.00390625
2SGA | 36 | 0 | 0.00390625
GRAS | 36 | 0 | 0.00390625
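The pairwise comparison can be reproduced with scipy.stats.wilcoxon applied to the paired per-instance average makespans; the arrays below are placeholder values used only to show the call, not the paper’s measurements.

```python
from scipy.stats import wilcoxon

# Paired per-instance average makespans of two algorithms (placeholder values).
alg_a = [732, 515, 2040, 620, 3440, 530, 1395, 5230, 3040]   # e.g. the proposed algorithm
alg_b = [780, 570, 2100, 660, 3610, 585, 1560, 5440, 3340]   # e.g. a reference algorithm

stat, p = wilcoxon(alg_a, alg_b, alternative="two-sided")
print(stat, p)   # a p-value below 0.05 indicates a statistically significant difference
```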

Appendix D. Sensitivity Analysis

To further assess the scope of our proposed algorithm, an instance with M = 10, S = 5, and R = 20 is used to perform a sensitivity analysis of the impact of the time interval between new order arrivals on the objective and the mean total response time (TR). Figure A3 shows the results of the sensitivity analysis.
Figure A3. The impact of the time interval between new orders’ arrival on the objective and the mean total response time (TR).
Figure A3 shows that increasing the time interval between new order arrivals leads to a larger makespan and a smaller mean total response time. The decrease in response time can be explained by the fact that fewer jobs need to be rescheduled at each rescheduling point, which reduces the complexity of each rescheduling problem and allows the FRAS algorithm to converge to a final solution more quickly. At the same time, a larger interval between arrivals makes the scheduling system more myopic, since each rescheduling decision sees fewer future orders, which limits the potential for finding better overall schedules and increases the makespan. These findings indicate that the FRAS algorithm effectively captures the impact of the arrival interval on system performance and maintains good stability and robustness across the different interval levels.
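The sketch below outlines how such a sensitivity sweep over the mean inter-arrival time could be organized; the exponential sampling of arrival times and the run_scheduler callback are illustrative assumptions, and the dummy scheduler only mimics the qualitative trend observed in Figure A3, not the actual FRAS runs.

```python
import random

def simulate_arrivals(num_orders, mean_interval, seed=0):
    """Sample new-order arrival times with exponentially distributed inter-arrival times."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(num_orders):
        t += rng.expovariate(1.0 / mean_interval)
        arrivals.append(t)
    return arrivals

def sensitivity_sweep(mean_intervals, num_orders, run_scheduler):
    """Record (interval, makespan, mean TR) for each tested mean inter-arrival time."""
    results = []
    for mean_interval in mean_intervals:
        arrivals = simulate_arrivals(num_orders, mean_interval)
        makespan, mean_tr = run_scheduler(arrivals)  # hypothetical scheduling run
        results.append((mean_interval, makespan, mean_tr))
    return results

# Dummy scheduler used only to illustrate the trend: larger intervals push the last
# arrival (and hence the makespan) later, while each rescheduling problem gets smaller.
dummy = lambda arrivals: (arrivals[-1] + 2000.0, 500.0 / (arrivals[1] - arrivals[0]))
for row in sensitivity_sweep([200, 400, 600, 800], num_orders=5, run_scheduler=dummy):
    print(row)
```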

Appendix E. Process Information and Production Data

Table A4. The processing information of the three types of parts.
Part Type (r) | Operation (j) | Processing Time (r, j, m, t_rjm)
1 | 1 | (1, 1, 1, 18)
1 | 2 | (1, 2, 3, 10)
1 | 3 | (1, 3, 1, 21)
1 | 4 | (1, 4, 6, 19)
1 | 5 | (1, 5, 7, 28) (1, 5, 8, 28) (1, 5, 9, 28)
1 | 6 | (1, 6, 11, 10)
1 | 7 | (1, 7, 13, 20) (1, 7, 14, 20)
1 | 8 | (1, 8, 15, 32) (1, 8, 16, 32) (1, 8, 17, 32)
1 | 9 | (1, 9, 18, 19) (1, 9, 19, 19) (1, 9, 20, 19)
1 | 10 | (1, 10, 10, 8)
2 | 1 | (2, 1, 2, 18)
2 | 2 | (2, 2, 3, 10)
2 | 3 | (2, 3, 4, 21)
2 | 4 | (2, 4, 5, 19)
2 | 5 | (2, 5, 7, 28) (2, 5, 8, 28) (2, 5, 9, 28)
2 | 6 | (2, 6, 11, 10)
2 | 7 | (2, 7, 13, 20) (2, 7, 14, 20)
2 | 8 | (2, 8, 15, 32) (2, 8, 16, 32) (2, 8, 17, 32)
2 | 9 | (2, 9, 18, 19) (2, 9, 19, 19) (2, 9, 20, 19)
2 | 10 | (2, 10, 10, 8)
3 | 1 | (3, 1, 2, 23)
3 | 2 | (3, 2, 3, 14)
3 | 3 | (3, 3, 4, 25)
3 | 4 | (3, 4, 5, 26) (3, 4, 6, 26)
3 | 5 | (3, 5, 7, 42) (3, 5, 8, 42) (3, 5, 9, 42) (3, 5, 10, 42)
3 | 6 | (3, 6, 11, 15)
3 | 7 | (3, 7, 12, 25)
3 | 8 | (3, 8, 13, 31) (3, 8, 14, 31)
3 | 9 | (3, 9, 15, 48) (3, 9, 16, 48) (3, 9, 17, 48)
3 | 10 | (3, 10, 18, 28) (3, 10, 19, 28)
3 | 11 | (3, 11, 7, 10) (3, 11, 8, 10) (3, 11, 9, 10) (3, 11, 10, 10)
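To make the (r, j, m, t_rjm) notation of Table A4 concrete, the snippet below shows one possible in-memory representation of a small fragment of the table; the nested-dictionary layout and the helper function are our own illustrative choices, not a structure prescribed by the paper.

```python
# processing[r][j] -> list of (machine m, processing time t_rjm) alternatives from Table A4.
processing = {
    1: {
        1: [(1, 18)],                    # operation 1 of part type 1 runs only on machine 1
        5: [(7, 28), (8, 28), (9, 28)],  # operation 5 may run on machine 7, 8 or 9
    },
    3: {
        5: [(7, 42), (8, 42), (9, 42), (10, 42)],
        11: [(7, 10), (8, 10), (9, 10), (10, 10)],
    },
}

def eligible_machines(r, j):
    """Machines able to process operation j of part type r."""
    return [m for m, _ in processing[r][j]]

print(eligible_machines(3, 5))  # -> [7, 8, 9, 10]
```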
Table A5. The detailed information of the five cases.
Cases | Orders (s) | Arrival Time (A_s) | Number of Parts (r = 1) | Number of Parts (r = 2) | Number of Parts (r = 3)
Data01 | 1-1 | 0 | 22 | 13 | 16
Data01 | 1-2 | 537 | 19 | 17 | 29
Data02 | 2-1 | 0 | 11 | 14 | 15
Data02 | 2-2 | 641 | 22 | 20 | 29
Data02 | 2-3 | 2537 | 12 | 16 | 11
Data03 | 3-1 | 0 | 11 | 15 | 26
Data03 | 3-2 | 1075 | 19 | 18 | 15
Data03 | 3-3 | 1652 | 15 | 21 | 18
Data03 | 3-4 | 4247 | 24 | 12 | 25
Data04 | 4-1 | 0 | 18 | 16 | 28
Data04 | 4-2 | 2253 | 29 | 16 | 15
Data04 | 4-3 | 2885 | 19 | 19 | 28
Data04 | 4-4 | 4726 | 25 | 24 | 20
Data04 | 4-5 | 4937 | 11 | 12 | 11
Data05 | 5-1 | 0 | 24 | 13 | 12
Data05 | 5-2 | 1161 | 29 | 24 | 11
Data05 | 5-3 | 3214 | 22 | 13 | 24
Data05 | 5-4 | 3327 | 20 | 26 | 23
Data05 | 5-5 | 3437 | 29 | 20 | 28

References

  1. Gu, X.; Koren, Y. Mass-Individualisation–the twenty first century manufacturing paradigm. Int. J. Prod. Res. 2022, 60, 7572–7587. [Google Scholar] [CrossRef]
  2. Leng, J.; Zhu, X.; Huang, Z.; Li, X.; Zheng, P.; Zhou, X.; Mourtzis, D.; Wang, B.; Qi, Q.; Shao, H.; et al. Unlocking the power of industrial artificial intelligence towards Industry 5.0: Insights, pathways, and challenges. J. Manuf. Syst. 2024, 73, 349–363. [Google Scholar] [CrossRef]
  3. Basantes Montero, D.T.; Rea Minango, S.N.; Barzallo Núñez, D.I.; Eibar Bejarano, C.G.; Proaño López, P.D. Flexible Manufacturing System Oriented to Industry 4.0. In Proceedings of the Innovation and Research: A Driving Force for Socio-Econo-Technological Development 1st, Sangolquí, Ecuador, 1–3 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 234–245. [Google Scholar]
  4. Margherita, E.G.; Braccini, A.M. Industry 4.0 technologies in flexible manufacturing for sustainable organizational value: Reflections from a multiple case study of Italian manufacturers. Inf. Syst. Front. 2023, 25, 995–1016. [Google Scholar] [CrossRef]
  5. Leng, J.; Chen, Z.; Sha, W.; Lin, Z.; Lin, J.; Liu, Q. Digital twins-based flexible operating of open architecture production line for individualized manufacturing. Adv. Eng. Inform. 2022, 53, 101676. [Google Scholar] [CrossRef]
  6. Zhang, W.; Xiao, J.; Liu, W.; Sui, Y.; Li, Y.; Zhang, S. Individualized requirement-driven multi-task scheduling in cloud manufacturing using an extended multifactorial evolutionary algorithm. Comput. Ind. Eng. 2023, 179, 109178. [Google Scholar] [CrossRef]
  7. Ghaleb, M.; Zolfagharinia, H.; Taghipour, S. Real-time production scheduling in the Industry-4.0 context: Addressing uncertainties in job arrivals and machine breakdowns. Comput. Oper. Res. 2020, 123, 105031. [Google Scholar] [CrossRef]
  8. Luo, D.; Thevenin, S.; Dolgui, A. A state-of-the-art on production planning in Industry 4.0. Int. J. Prod. Res. 2023, 61, 6602–6632. [Google Scholar] [CrossRef]
  9. Sangaiah, A.K.; Suraki, M.Y.; Sadeghilalimi, M.; Bozorgi, S.M.; Hosseinabadi, A.A.R.; Wang, J. A new meta-heuristic algorithm for solving the flexible dynamic job-shop problem with parallel machines. Symmetry 2019, 11, 165. [Google Scholar] [CrossRef]
  10. Zhou, J.; Zheng, L.; Fan, W. Multirobot collaborative task dynamic scheduling based on multiagent reinforcement learning with heuristic graph convolution considering robot service performance. J. Manuf. Syst. 2024, 72, 122–141. [Google Scholar] [CrossRef]
  11. Han, X.; Cheng, W.; Meng, L.; Zhang, B.; Gao, K.; Zhang, C.; Duan, P. A dual population collaborative genetic algorithm for solving flexible job shop scheduling problem with AGV. Swarm. Evol. Comput. 2024, 86, 101538. [Google Scholar] [CrossRef]
  12. Pezzella, F.; Morganti, G.; Ciaschetti, G. A genetic algorithm for the Flexible Job-shop Scheduling Problem. Comput. Oper. Res. 2008, 35, 3202–3212. [Google Scholar] [CrossRef]
  13. Gui, Y.; Tang, D.; Zhu, H.; Zhang, Y.; Zhang, Z. Dynamic scheduling for flexible job shop using a deep reinforcement learning approach. Comput. Ind. Eng. 2023, 180, 109255. [Google Scholar] [CrossRef]
  14. Tang, D.B.; Dai, M.; Salido, M.A.; Giret, A. Energy-efficient dynamic scheduling for a flexible flow shop using an improved particle swarm optimization. Comput. Ind. 2016, 81, 82–95. [Google Scholar] [CrossRef]
  15. Monch, L.; Fowler, J.W.; Dauzere-Peres, S.; Mason, S.J.; Rose, O. A survey of problems, solution techniques, and future challenges in scheduling semiconductor manufacturing operations. J. Sched. 2011, 14, 583–599. [Google Scholar] [CrossRef]
  16. Purba, H.H.; Aisyah, S.; Dewarani, R. Production capacity planning in motorcycle assembly line using CRP method at PT XYZ. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Chennai, India, 16–17 September 2020; IOP Publishing: Bristol, UK, 2020; Volume 885, p. 012029. [Google Scholar]
  17. Han, J.H.; Lee, J.Y. Scheduling for a flow shop with waiting time constraints and missing operations in semiconductor manufacturing. Eng. Optim. 2023, 55, 1742–1759. [Google Scholar] [CrossRef]
  18. Zhang, F.; Mei, Y.; Nguyen, S.; Zhang, M. Evolving scheduling heuristics via genetic programming with feature selection in dynamic flexible job-shop scheduling. IEEE Trans. Cybern. 2020, 51, 1797–1811. [Google Scholar] [CrossRef] [PubMed]
  19. Tang, H.; Xiao, Y.; Zhang, W.; Lei, D.; Wang, J.; Xu, T. A DQL-NSGA-III algorithm for solving the flexible job shop dynamic scheduling problem. Expert Syst. Appl. 2024, 237, 121723. [Google Scholar] [CrossRef]
  20. Jing, X.; Yao, X.; Liu, M.; Zhou, J. Multi-agent reinforcement learning based on graph convolutional network for flexible job shop scheduling. J. Intell. Manuf. 2024, 35, 75–93. [Google Scholar] [CrossRef]
  21. Ding, L.; Guan, Z.; Zhang, Z.; Fang, W.; Chen, Z.; Yue, L. A hybrid fluid master–apprentice evolutionary algorithm for large-scale multiplicity flexible job-shop scheduling with sequence-dependent set-up time. Eng. Optim. 2024, 56, 54–75. [Google Scholar] [CrossRef]
  22. Ding, L.; Guan, Z.; Rauf, M.; Yue, L. Multi-policy deep reinforcement learning for multi-objective multiplicity flexible job shop scheduling. Swarm. Evol. Comput. 2024, 87, 101550. [Google Scholar] [CrossRef]
  23. Guo, C.T.; Jiang, Z.B.; Zhang, H.; Li, N. Decomposition-based classified ant colony optimization algorithm for scheduling semiconductor wafer fabrication system. Comput. Ind. Eng. 2012, 62, 141–151. [Google Scholar] [CrossRef]
  24. Kundakci, N.; Kulak, O. Hybrid genetic algorithms for minimizing makespan in dynamic job shop scheduling problem. Comput. Ind. Eng. 2016, 96, 31–51. [Google Scholar] [CrossRef]
  25. Lei, D.; Zhang, J.; Liu, H. An Adaptive Two-Class Teaching-Learning-Based Optimization for Energy-Efficient Hybrid Flow Shop Scheduling Problems with Additional Resources. Symmetry 2024, 16, 203. [Google Scholar] [CrossRef]
  26. Liu, J.; Sun, B.; Li, G.; Chen, Y. Multi-objective adaptive large neighbourhood search algorithm for dynamic flexible job shop schedule problem with transportation resource. Eng. Appl. Artif. Intell. 2024, 132, 107917. [Google Scholar] [CrossRef]
  27. Pei, Z.; Zhang, X.F.; Zheng, L.; Wan, M.Z. A column generation-based approach for proportionate flexible two-stage no-wait job shop scheduling. Int. J. Prod. Res. 2020, 58, 487–508. [Google Scholar] [CrossRef]
  28. Subramaniam, V.; Ramesh, T.; Lee, G.K.; Wong, Y.S.; Hong, G.S. Job shop scheduling with dynamic fuzzy selection of dispatching rules. Int. J. Adv. Manuf. Technol. 2000, 16, 759–764. [Google Scholar] [CrossRef]
  29. Zhang, H.; Roy, U. A semantics-based dispatching rule selection approach for job shop scheduling. J. Intell. Manuf. 2019, 30, 2759–2779. [Google Scholar] [CrossRef]
  30. Zhang, F.F.; Mei, Y.; Nguyen, S.; Zhang, M.J. Correlation Coefficient-Based Recombinative Guidance for Genetic Programming Hyperheuristics in Dynamic Flexible Job Shop Scheduling. IEEE Trans. Evol. Comput. 2021, 25, 552–566. [Google Scholar] [CrossRef]
  31. Đurasević, M.; Jakobović, D. Selection of dispatching rules evolved by genetic programming in dynamic unrelated machines scheduling based on problem characteristics. J. Comput. Sci. 2022, 61, 101649. [Google Scholar] [CrossRef]
  32. Holthaus, O.; Rajendran, C. Efficient jobshop dispatching rules: Further developments. Prod. Plan. Control 2010, 11, 171–178. [Google Scholar] [CrossRef]
  33. Ren, W.; Yan, Y.; Hu, Y.; Guan, Y. Joint optimisation for dynamic flexible job-shop scheduling problem with transportation time and resource constraints. Int. J. Prod. Res. 2021, 60, 5675–5696. [Google Scholar] [CrossRef]
  34. Fuladi, S.K.; Kim, C.S. Dynamic Events in the Flexible Job-Shop Scheduling Problem: Rescheduling with a Hybrid Metaheuristic Algorithm. Algorithms 2024, 17, 142. [Google Scholar] [CrossRef]
  35. Thi, L.M.; Mai Anh, T.T.; Van Hop, N. An improved hybrid metaheuristics and rule-based approach for flexible job-shop scheduling subject to machine breakdowns. Eng. Optim. 2023, 55, 1535–1555. [Google Scholar] [CrossRef]
  36. Guo, H.; Liu, J.; Wang, Y.; Zhuang, C. An improved genetic programming hyper-heuristic for the dynamic flexible job shop scheduling problem with reconfigurable manufacturing cells. J. Manuf. Syst. 2024, 74, 252–263. [Google Scholar] [CrossRef]
  37. Mohan, J.; Lanka, K.; Rao, A.N. A Review of Dynamic Job Shop Scheduling Techniques. Procedia Manuf. 2019, 30, 34–39. [Google Scholar] [CrossRef]
  38. Baykasoğlu, A.; Madenoğlu, F.S.; Hamzadayı, A. Greedy randomized adaptive search for dynamic flexible job-shop scheduling. J. Manuf. Syst. 2020, 56, 425–451. [Google Scholar] [CrossRef]
  39. Zhang, H.; Buchmeister, B.; Li, X.; Ojstersek, R. An Efficient Metaheuristic Algorithm for Job Shop Scheduling in a Dynamic Environment. Mathematics 2023, 11, 2336. [Google Scholar] [CrossRef]
  40. Xie, J.; Gao, L.; Peng, K.; Li, X.; Li, H. Review on flexible job shop scheduling. IET Collab. Intell. Manuf. 2019, 1, 67–77. [Google Scholar] [CrossRef]
  41. Chen, H.; Yao, D.D. Dynamic Scheduling of a Multiclass Fluid Network. Oper. Res. 1993, 41, 1104–1115. [Google Scholar] [CrossRef]
  42. Dai, J.G.; Weiss, G. A fluid heuristic for minimizing makespan in job shops. Oper. Res. 2002, 50, 692–707. [Google Scholar] [CrossRef]
  43. Bertsimas, D.; Gamarnik, D.; Sethuraman, J. From fluid relaxations to practical algorithms for high-multiplicity job-shop scheduling: The holding cost objective. Oper. Res. 2003, 51, 798–813. [Google Scholar] [CrossRef]
  44. Masin, M.; Raviv, T. Linear programming-based algorithms for the minimum makespan high multiplicity jobshop problem. J. Sched. 2014, 17, 321–338. [Google Scholar] [CrossRef]
  45. Gu, J.W.; Gu, M.Z.; Lu, X.W.; Zhang, Y. Asymptotically optimal policy for stochastic job shop scheduling problem to minimize makespan. J. Comb. Optim. 2018, 36, 142–161. [Google Scholar] [CrossRef]
  46. Bertsimas, D.; Nasrabadi, E.; Paschalidis, I.C. Robust Fluid Processing Networks. IEEE Trans. Autom. Control 2015, 60, 715–728. [Google Scholar] [CrossRef]
  47. Bertsimas, D.; Gamarnik, D. Asymptotically optimal algorithms for job shop scheduling and packet routing. J. Algorithms 1999, 33, 296–318. [Google Scholar] [CrossRef]
  48. Bertsimas, D.; Sethuraman, J. From fluid relaxations to practical algorithms for job shop scheduling: The makespan objective. Math. Program. 2002, 92, 61–102. [Google Scholar] [CrossRef]
  49. Nazarathy, Y.; Weiss, G. A fluid approach to large volume job shop scheduling. J. Sched. 2010, 13, 509–529. [Google Scholar] [CrossRef]
  50. Boudoukh, T.; Penn, M.; Weiss, G. Scheduling jobshops with some identical or similar jobs. J. Sched. 2001, 4, 177–199. [Google Scholar] [CrossRef]
  51. Shen, X.N.; Yao, X. Mathematical modeling and multi-objective evolutionary algorithms applied to dynamic flexible job shop scheduling problems. Inf. Sci. 2015, 298, 198–224. [Google Scholar] [CrossRef]
  52. Vinod, V.; Sridharan, R. Dynamic job-shop scheduling with sequence-dependent setup times: Simulation modeling and analysis. Int. J. Adv. Manuf. Technol. 2008, 36, 355–372. [Google Scholar] [CrossRef]
  53. Glover, F. Tabu search-part II. ORSA J. Comput. 1990, 2, 4–32. [Google Scholar] [CrossRef]
  54. Li, J.Q.; Pan, Q.K.; Liang, Y.C. An effective hybrid tabu search algorithm for multi-objective flexible job-shop scheduling problems. Comput. Ind. Eng. 2010, 59, 647–662. [Google Scholar] [CrossRef]
  55. González, M.A.; Vela, C.R.; Varela, R. Scatter search with path relinking for the flexible job shop scheduling problem. Eur. J. Oper. Res. 2015, 245, 35–45. [Google Scholar] [CrossRef]
  56. Kemmoé-Tchomté, S.; Lamy, D.; Tchernev, N. An effective multi-start multi-level evolutionary local search for the flexible job-shop problem. Eng. Appl. Artif. Intell. 2017, 62, 80–95. [Google Scholar] [CrossRef]
  57. Mahdavi, S.; Shiri, M.E.; Rahnamayan, S. Metaheuristics in large-scale global continues optimization: A survey. Inf. Sci. 2015, 295, 407–428. [Google Scholar] [CrossRef]
  58. Chen, R.H.; Yang, B.; Li, S.; Wang, S.L. A self-learning genetic algorithm based on reinforcement learning for flexible job-shop scheduling problem. Comput. Ind. Eng. 2020, 149. [Google Scholar] [CrossRef]
  59. Defersha, F.M.; Rooyani, D. An efficient two-stage genetic algorithm for a flexible job-shop scheduling problem with sequence dependent attached/detached setup, machine release date and lag-time. Comput. Ind. Eng. 2020, 147. [Google Scholar] [CrossRef]
  60. Feo, T.A.; Resende, M.G.C. Greedy randomized adaptive search procedures. J. Glob. Optim. 1995, 6, 109–133. [Google Scholar]
  61. García, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 special session on real parameter optimization. J. Heuristics 2009, 15, 617–644. [Google Scholar] [CrossRef]
  62. Brandimarte, P. Routing and scheduling in a flexible job shop by tabu search. Ann. Oper. Res. 1993, 41, 157–183. [Google Scholar] [CrossRef]
  63. Mastrolilli, M.; Gambardella, L.M. Effective neighbourhood functions for the flexible job shop problem. J. Sched. 2000, 3, 3–20. [Google Scholar] [CrossRef]
Figure 1. An example of the MDFJSP with new order arrivals.
Figure 2. (a) A flow diagram of the dynamic scheduling system. (b) A basic flow of the FRAS algorithm.
Figure 3. An example of public critical operations.
Figure 4. The trend of the factor level.
Figure 5. (a) The average makespan of different cases. (b) The interval plot for RPD of the five algorithms. (c) The total computation time. (d) The total response time.
Figure 6. The mean response time of the four algorithms at each rescheduling point on four representative instances.
Figure 7. The production process of the motorcycle parts manufacturing factory.
Figure 8. The initial scheduling scheme and rescheduling scheme of Data01 obtained by the FRAS algorithm.
Table 1. Literature review of fluid models in JSPs.
Paper | Number of Multiplicity
Bertsimas and Gamarnik (1999) [47] | Up to 2500
Boudoukh et al. (2001) [50] | Up to 1000
Dai and Weiss (2002) [42] | Up to 100,000
Bertsimas and Sethuraman (2002) [48] | Up to 1000
Bertsimas et al. (2003) [43] | Up to 10,000
Nazarathy and Weiss (2010) [49] | Up to 2^14
Masin and Raviv (2014) [44] | Up to 10
Gu et al. (2018) [45] | Up to 100
Table 2. An example for the MDFJSP with 2 new orders and 3 machines.
N 1 r N 2 r rj M 1 M 2 M 3
1291128
3212141524
3182525
1121625
3222221514
3262620
Table 3. Parameter settings for instances.
Parameter | Value
Total number of machines (M) | randi [10, 20]
Total number of job types (R) | randi [5, 10]
Total number of operations of job type r (J_r) | randi [5, 10]
Number of available machines per operation | randi [1, M]
Processing time of an operation (t_rjm) | randi [1, 20]
Total number of new orders (S) | randi [1, 5]
Total number of jobs of type r in new order s (N_sr) | randi [10, 20]
Mean interarrival time of new jobs (λ) | randi [200, 800]
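The following sketch shows one way instances matching the parameter ranges of Table 3 could be generated; the use of Python's random module, the nested dictionary layout, and the exponential inter-arrival sampling are assumptions made for illustration, not the authors' generator.

```python
import random

def randi(low, high, rng):
    """Uniform random integer in [low, high], mirroring the randi [.,.] notation of Table 3."""
    return rng.randint(low, high)

def generate_instance(seed=0):
    rng = random.Random(seed)
    M = randi(10, 20, rng)                      # machines
    R = randi(5, 10, rng)                       # job types
    S = randi(1, 5, rng)                        # new orders
    jobs = {}
    for r in range(1, R + 1):
        J_r = randi(5, 10, rng)                 # operations of job type r
        ops = {}
        for j in range(1, J_r + 1):
            machines = rng.sample(range(1, M + 1), randi(1, M, rng))
            ops[j] = {m: randi(1, 20, rng) for m in machines}   # t_rjm per eligible machine
        jobs[r] = ops
    orders = []
    arrival = 0.0
    lam = randi(200, 800, rng)                  # mean inter-arrival time of new orders
    for s in range(1, S + 1):
        arrival += rng.expovariate(1.0 / lam)
        demand = {r: randi(10, 20, rng) for r in range(1, R + 1)}   # N_sr
        orders.append({"order": s, "arrival": arrival, "demand": demand})
    return {"machines": M, "jobs": jobs, "orders": orders}

instance = generate_instance()
print(len(instance["orders"]), "new orders,", instance["machines"], "machines")
```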
Table 4. Orthogonal array and average response values.
Combination | iter_stop | D | b | C_avg
1 | 10 | 10 | 0.1 | 2010
2 | 10 | 20 | 0.2 | 2030.7
3 | 10 | 40 | 0.3 | 2017.7
4 | 10 | 80 | 0.4 | 2014
5 | 20 | 10 | 0.2 | 2025
6 | 20 | 20 | 0.1 | 2034
7 | 20 | 40 | 0.4 | 2016.7
8 | 20 | 80 | 0.3 | 2025.3
9 | 30 | 10 | 0.3 | 2032.3
10 | 30 | 20 | 0.4 | 2035.3
11 | 30 | 40 | 0.1 | 2021
12 | 30 | 80 | 0.2 | 2017.3
13 | 40 | 10 | 0.4 | 2024.7
14 | 40 | 20 | 0.3 | 2000.7
15 | 40 | 40 | 0.2 | 2006.3
16 | 40 | 80 | 0.1 | 2024.3
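Figure 4 plots the trend of each factor level; the sketch below shows how those level means can be recovered from Table 4, assuming the column-to-parameter mapping given above (iter_stop, D, b) and averaging C_avg over the four combinations that share each level.

```python
from collections import defaultdict

# (iter_stop, D, b, C_avg) rows of the orthogonal array in Table 4.
rows = [
    (10, 10, 0.1, 2010.0), (10, 20, 0.2, 2030.7), (10, 40, 0.3, 2017.7), (10, 80, 0.4, 2014.0),
    (20, 10, 0.2, 2025.0), (20, 20, 0.1, 2034.0), (20, 40, 0.4, 2016.7), (20, 80, 0.3, 2025.3),
    (30, 10, 0.3, 2032.3), (30, 20, 0.4, 2035.3), (30, 40, 0.1, 2021.0), (30, 80, 0.2, 2017.3),
    (40, 10, 0.4, 2024.7), (40, 20, 0.3, 2000.7), (40, 40, 0.2, 2006.3), (40, 80, 0.1, 2024.3),
]

def level_means(rows, factor_index):
    """Average response C_avg for each level of one factor."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row[factor_index]].append(row[3])
    return {level: sum(v) / len(v) for level, v in sorted(grouped.items())}

for name, idx in (("iter_stop", 0), ("D", 1), ("b", 2)):
    print(name, level_means(rows, idx))
```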
Table 5. Comparison of the FRAS algorithm and its variant algorithms on generated instances.
Instance | M | R | S | FRAS_RANDOM Best / Avg / RPD | FRAS_TS Best / Avg / RPD | FRAS Best / Avg / RPD
MD01 | 10 | 5 | 1 | 639 / 732.6 / 0.86 | 398 / 403.6 / 0.02 | 394 / 400.4 / 0.02
MD02 | 10 | 5 | 3 | 2814 / 2951.6 / 1.05 | 1457 / 1468.4 / 0.02 | 1440 / 1457.2 / 0.01
MD03 | 10 | 5 | 5 | 3489 / 3683.8 / 0.76 | 2101 / 2113.6 / 0.01 | 2089 / 2092.8 / 0.00
MD04 | 10 | 10 | 1 | 1180 / 1277 / 1.30 | 570 / 575 / 0.04 | 555 / 556.4 / 0.00
MD05 | 10 | 10 | 3 | 4721 / 5103 / 1.87 | 1789 / 1806 / 0.02 | 1778 / 1791.8 / 0.01
MD06 | 10 | 10 | 5 | 9370 / 9569.6 / 2.07 | 3127 / 3137.2 / 0.00 | 3122 / 3148.2 / 0.01
MD07 | 20 | 5 | 1 | 452 / 492.8 / 0.30 | 380 / 380 / 0.00 | 380 / 380 / 0.00
MD08 | 20 | 5 | 3 | 3794 / 3841.2 / 0.09 | 3545 / 3548.4 / 0.00 | 3540 / 3541.6 / 0.00
MD09 | 20 | 5 | 5 | 2347 / 2419.2 / 0.17 | 2078 / 2078 / 0.00 | 2075 / 2075 / 0.00
MD10 | 20 | 10 | 1 | 775 / 929.4 / 1.53 | 395 / 401.4 / 0.09 | 367 / 373.6 / 0.02
MD11 | 20 | 10 | 3 | 2577 / 2727.8 / 0.37 | 1992 / 1993.6 / 0.00 | 1986 / 1988.8 / 0.00
MD12 | 20 | 10 | 5 | 3539 / 3667.6 / 0.34 | 2768 / 2773.2 / 0.01 | 2737 / 2737 / 0.00
#better |  |  |  | 12 | 11 |  
#even |  |  |  | 0 | 1 |  
#worse |  |  |  | 0 | 0 |  
Table 6. Comparison of the FRAS and reference algorithms.
Instance | SLGA Best / Avg / RPD | 2SGA Best / Avg / RPD | GRAS Best / Avg / RPD | FRAS Best / Avg / RPD
MD01 | 648 / 660.4 / 0.68 | 414 / 419.2 / 0.06 | 479 / 482.4 / 0.22 | 394 / 400.4 / 0.02
MD02 | 2230 / 2282.8 / 0.59 | 1503 / 1511.4 / 0.05 | 1739 / 1754.2 / 0.22 | 1440 / 1457.2 / 0.01
MD03 | 2928 / 2989.8 / 0.43 | 2131 / 2147.2 / 0.03 | 2255 / 2281 / 0.09 | 2089 / 2092.8 / 0.00
MD04 | 1051 / 1086 / 0.96 | 619 / 625.4 / 0.13 | 674 / 684.6 / 0.23 | 555 / 556.4 / 0.00
MD05 | 4461 / 4517.2 / 1.54 | 2151 / 2163.8 / 0.22 | 2328 / 2344.2 / 0.32 | 1778 / 1791.8 / 0.01
MD06 | 9189 / 9333.2 / 1.99 | 3916 / 3935.4 / 0.26 | 4360 / 4392.2 / 0.41 | 3122 / 3148.2 / 0.01
MD07 | 420 / 436 / 0.15 | 380 / 380 / 0.00 | 393 / 396.2 / 0.04 | 380 / 380 / 0.00
MD08 | 3795 / 3815 / 0.08 | 3543 / 3543.8 / 0.00 | 3560 / 3565.8 / 0.01 | 3540 / 3541.6 / 0.00
MD09 | 2155 / 2168.2 / 0.04 | 2075 / 2075.6 / 0.00 | 2231 / 2255 / 0.09 | 2075 / 2075 / 0.00
MD10 | 756 / 792.4 / 1.16 | 383 / 387 / 0.05 | 531 / 541.4 / 0.48 | 367 / 373.6 / 0.02
MD11 | 2405 / 2419 / 0.22 | 2002 / 2011.2 / 0.01 | 2098 / 2106.4 / 0.06 | 1986 / 1988.8 / 0.00
MD12 | 3135 / 3219.4 / 0.18 | 2754 / 2758 / 0.01 | 2929 / 2936.2 / 0.07 | 2737 / 2737 / 0.00
#better | 12 | 10 | 12 |  
#even | 0 | 2 | 0 |  
#worse | 0 | 0 | 0 |  
Table 7. Comparison of the mean TCPU time and the mean TR time for the FRAS and reference algorithms.
Instance | SLGA TCPU (s) / TR (s) | 2SGA TCPU (s) / TR (s) | GRAS TCPU (s) / TR (s) | FRAS TCPU (s) / TR (s)
MD01 | 810.91 / 158.34 | 817.53 / 128.01 | 801.36 / 104.01 | 811.41 / 77.13
MD02 | 2692.95 / 515.80 | 2627.61 / 405.75 | 2522.59 / 336.10 | 2593.53 / 204.81
MD03 | 4617.25 / 873.66 | 4210.52 / 638.91 | 4048.66 / 516.81 | 4175.16 / 332.65
MD04 | 1486.13 / 286.94 | 1479.06 / 246.47 | 1462.64 / 128.97 | 1472.91 / 218.72
MD05 | 7037.06 / 1312.37 | 6870.16 / 1208.66 | 6454.06 / 589.96 | 6615.44 / 584.89
MD06 | 17,062.52 / 2928.87 | 15,696.34 / 2623.23 | 13,776.69 / 2038.57 | 14,340.09 / 898.58
MD07 | 1690.17 / 280.26 | 1688.4 / 210.19 | 1680.81 / 224.95 | 1685.2 / 83.22
MD08 | 4471.08 / 843.80 | 4469.41 / 554.09 | 4425.31 / 374.18 | 4436.13 / 260.98
MD09 | 8368.03 / 1497.52 | 8192.6 / 765.39 | 8194.89 / 1016.40 | 8128.3 / 421.76
MD10 | 3156.47 / 614.29 | 3154.64 / 291.18 | 3121.53 / 400.18 | 3132.09 / 344.46
MD11 | 10,460.31 / 2011.12 | 9734.64 / 1338.01 | 9601.25 / 1256.72 | 9600.47 / 852.91
MD12 | 15,184.58 / 2880.93 | 14,310.4 / 2905.46 | 14,043.11 / 1906.02 | 14,065.84 / 1379.80
Table 8. Comparison of the FRAS algorithm and reference algorithms on manufacturing cases.
Example | SLGA Best (Avg) / TR (s) | 2SGA Best (Avg) / TR (s) | GRAS Best (Avg) / TR (s) | FRAS Best (Avg) / TR (s)
Data01 | 3501 (3529) / 558.6 | 2851 (2960) / 548.5 | 2108 (2124) / 250.7 | 1933 (1959) / 39.5
Data02 | 3828 (3856) / 718.6 | 3483 (3524) / 598.6 | 3416 (3422) / 412.2 | 3327 (3328) / 136.9
Data03 | 6107 (6168) / 1072.8 | 5785 (5839) / 1002.5 | 5493 (5499) / 570.7 | 5376 (5378) / 199.4
Data04 | 8358 (8443) / 1651.5 | 7566 (7572) / 1533.6 | 6632 (6641) / 736.5 | 6431 (6433) / 428.7
Data05 | 10,698 (10,785) / 1803.5 | 9369 (9532) / 1748.2 | 6801 (6806) / 647.4 | 6502 (6504) / 273.2
