Article

Advanced Optimization of Flowshop Scheduling with Maintenance, Learning and Deteriorating Effects Leveraging Surrogate Modeling Approaches

1 Ecole Nationale Supérieure d’Informatique (ESI), Laboratoire des Méthodes de Conception de Systèmes (LMCS), Oued Smar, Algiers BP 68M-16270, Algeria
2 Railenium Research and Technology Institute, 59540 Valenciennes, France
3 Division of Science, New York University Abu Dhabi, Abu Dhabi P.O. Box 129188, United Arab Emirates
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2381; https://doi.org/10.3390/math13152381
Submission received: 18 June 2025 / Revised: 18 July 2025 / Accepted: 22 July 2025 / Published: 24 July 2025

Abstract

Metaheuristics are powerful optimization techniques that are well-suited for addressing complex combinatorial problems across diverse scientific and industrial domains. However, their application to computationally expensive problems remains challenging due to the high cost and significant number of fitness evaluations required during the search process. Surrogate modeling has recently emerged as an effective solution to reduce these computational demands by approximating the true, time-intensive fitness function. While surrogate-assisted metaheuristics have gained attention in recent years, their application to complex scheduling problems such as the Permutation Flowshop Scheduling Problem (PFSP) under learning, deterioration, and maintenance effects remains largely unexplored. To the best of our knowledge, this study is the first to investigate the integration of surrogate modeling within the artificial bee colony (ABC) framework specifically tailored to this problem context. We develop and evaluate two distinct strategies for integrating surrogate modeling into the optimization process, leveraging the ABC algorithm. The first strategy uses a Kriging model to dynamically guide the selection of the most effective search operator at each stage of the employed bee phase. The second strategy introduces three variants, each incorporating a Q-learning-based operator in the selection mechanism and a different evolution control mechanism, where the Kriging model is employed to approximate the fitness of generated offspring. Through extensive computational experiments and performance analysis, using Taillard’s well-known standard benchmarks, we assess solution quality, convergence, and the number of exact fitness evaluations, demonstrating that these approaches achieve competitive results.

1. Introduction

Metaheuristics are a class of approximate optimization techniques designed to tackle complex optimization problems by finding (near-)optimal solutions within a reasonable computational time [1]; among them, evolutionary and nature-inspired approaches have emerged as a promising framework for addressing real-world combinatorial challenges in science and industry [2]. In general, metaheuristics involve an iterative search process that can escape local optima and conduct a robust exploration of the search space. Throughout this process, numerous solutions are generated and evaluated until the algorithm converges on an optimal or near-optimal solution. Consequently, the cost of optimizing expensive problems is dominated by the number of fitness function evaluations required for algorithm convergence [3], resulting in a substantial computational overhead. In many real-world applications, however, the computational budget is often severely constrained [4]. To overcome this shortcoming and reduce the number of costly fitness function evaluations, surrogate or meta-models [5,6] have been introduced in the literature to provide an efficient approximation of the true fitness function.
Surrogate-assisted evolutionary algorithms (SAEAs) have recently proven to be an efficient optimization tool for addressing computationally expensive problems [7]. By employing surrogate models to approximate the fitness function, SAEAs enhance the ability to identify robust solutions while significantly reducing computational time. These surrogate models, built using machine learning (ML) techniques such as the Gaussian Process (Kriging) [8], Radial Basis Function (RBF) [9], Random Forest (RF) [10], and Support Vector Regression (SVR) [11], act as efficient substitutes for the true, time-consuming fitness function during the search process. The design of an SAEA typically involves two main phases. (1) First, the surrogate model can be constructed using an offline approach [1], using data from past runs on similar problems; alternatively, in an online approach [1], the model is continuously updated and refined during the optimization process. (2) Then, the interaction with the metaheuristic, referred to as model management or evolution control [12], involves strategies to determine when to rely on the surrogate model instead of performing real fitness function evaluations. To mitigate the risk of approximation errors from the surrogate model leading to convergence toward false optima [13,14], two main management mechanisms are employed in the literature [15]: (1) individual-based evolution control, where the surrogate model evaluates a certain number of individuals within each generation; and (2) generation-based evolution control, where a portion of the total generations is evaluated using the surrogate model. These mechanisms strike a balance between computational efficiency and optimization accuracy, effectively guiding the search toward promising regions and ultimately toward the true optimal solution(s).
Scheduling problems within the permutation flow shop environment (PFSP) represent a critical class of problems commonly encountered in manufacturing and large-scale production, with significant social and economic implications [16]. The PFSP is an NP-hard combinatorial optimization problem [17] aiming to determine the optimal sequence for processing a set of n jobs in the same order on a set of m machines. Many scholars have recently concentrated on studying the PFSP in real-world settings to bridge the gap between theoretical models and industrial applications [18,19]. In this regard, this paper examines a PFSP that reflects real-world applications by considering unavailability periods due to predictive maintenance [20], as well as learning [21] and deterioration [22] effects. The learning effect reduces maintenance durations as they are scheduled later in the sequence. This reflects the improved efficiency of maintenance teams over time, as confirmed by [21]. Ignoring this effect may lead to overestimating maintenance durations and inefficient scheduling. In contrast, the deterioration effect increases the processing times of production jobs when they are started later in the sequence. This reflects real-world scenarios where delays in processing can lead to resource degradation or material spoilage, as highlighted by [22]. Failing to account for this effect may result in underestimated processing times, thus increasing the risk of production delays. The objective of this work is to determine the sequence of production jobs and maintenance operations that minimizes the makespan criterion, considering the learning and deterioration effects. The integration of these effects is critical for generating accurate and practical schedules [23]. Given that this problem is also NP-hard, metaheuristics are the most effective optimization methods to solve it, and a variety of these approaches have been proposed in the literature [24,25].
The artificial bee colony (ABC) algorithm [26] is an efficient evolutionary approach inspired by the intelligent foraging behavior of honeybees. It mimics the food-search process of three types of foraging bees—employed, onlooker, and scout—maintaining a balance between exploration and exploitation, as scout bees promote diversity by exploring new areas, while employed and onlooker bees concentrate on refining promising regions. Initially proposed for continuous optimization problems, the ABC algorithm was later extended to tackle discrete problems as well [27], and has shown strong performance in addressing a range of combinatorial problems, notably the PFSP [28].
Although the ABC algorithm primarily relies on exploitation mechanisms through its employed and onlooker bees, its exploitation capability may still be insufficient to ensure rapid convergence, particularly when dealing with complex, multimodal, or large-scale search spaces [4,29,30]. The use of multiple search operators to effectively exploit promising regions of the search space has been widely adopted in the ABC algorithm for enhancing its performance [31,32,33,34]. However, selecting the most suitable operator remains a key challenge [29]. Notably, refs. [31,32] showed that the random selection of search operators during the employed bee phase often led to premature convergence, primarily due to the insufficient exploitation of promising solutions. To address this limitation, ref. [35] proposed an approach based on the ABC algorithm, incorporating a Q-learning-based operator selection mechanism. This reinforcement-learning-driven strategy demonstrated superior performance regarding solution quality and convergence properties, highlighting the importance of selecting the best search operator. While the standard ABC algorithm relies on systematically evaluating a full set of solutions, resulting in a high number of costly fitness evaluations, the key contribution of this paper is the proposal of the Surrogate-Assisted ABC (SABC) algorithm, which reduces computational effort while enhancing both the exploitation and the exploration capabilities. Additionally, we propose three variants of surrogate modeling optimization approaches, each incorporating a different evolution control mechanism. The first, Individual-based Surrogate Modeling Optimization (ISQABC), improves exploitation by using a surrogate model to perform a more intensive local search in the neighborhood of a subset of the current population during the employed bee phase. In contrast, the second approach, Generation-based Surrogate Modeling Optimization (GSQABC), enhances the exploration capability of the algorithm by evaluating newly generated populations with the surrogate model across the employed, onlooker, and scout bee phases. Finally, the third approach, Combined Surrogate Modeling Optimization (CSQABC), aims to balance exploitation and exploration by integrating individual- and generation-based surrogate modeling techniques. Together, these three approaches offer novel strategies for optimizing the PFSP through surrogate modeling techniques.
This paper is organized as follows: Section 2 reviews recent advancements in surrogate modeling and its integration into metaheuristic optimization. Section 3 presents the formulation of the PFSP with maintenance, learning, and deterioration effects. Section 4 describes the proposed surrogate-based approaches for solving the problem. Section 5 evaluates their performance through computational experiments and discusses the results. Finally, Section 6 concludes the paper by summarizing the key findings and outlining directions for future research.

2. Review of Recent Advancements in Surrogate Modeling for Metaheuristic Optimization

The use of surrogate models within metaheuristic algorithms to reduce computational costs has received significant attention in recent years. This trend is particularly evident in the development of Surrogate-Assisted Evolutionary Algorithms (SAEAs) [12], which have been applied to both single-objective [36] and multi-objective optimization problems [37]. According to [36], SAEAs are classified into two main types based on their modeling approaches: regression-based models and similarity-based models. Regression-based models use approximation techniques to map a solution vector to its predicted fitness value and have been the most extensively studied in the literature. The primary challenge lies in selecting suitable regression models for two distinct tasks: local modeling, where the surrogate predicts fitness values within a specific region of the search space, and global modeling, where the surrogate predicts fitness values across the entire search space. In a systematic comparison study, ref. [38] demonstrated that Kriging models are particularly well-suited for global modeling due to their ability to capture complex patterns and provide uncertainty estimates. On the other hand, similarity-based models rely on past evaluations to infer fitness values for new solutions based on their proximity to previously evaluated points. Although less common than regression-based models, these approaches are useful for certain optimization problems [39]. A significant advancement in SAEAs is the use of multiple surrogate models rather than reliance on a single model [40]. As discussed in [12], incorporating diverse surrogates with different characteristics and modeling capabilities can reduce prediction errors and improve optimization performance. However, this approach introduces the additional challenge of effectively selecting suitable models for the problem at hand and balancing their contributions. Moreover, training multiple surrogate models simultaneously can significantly increase computational time.
Over the past few years, various SAEAs have been proposed by combining different surrogate models and metaheuristics. For instance, ref. [41] combined a Radial Basis Function (RBF) model with a hybrid approach integrating teaching–learning-based optimization and differential evolution. Similarly, ref. [42] paired an RBF model with a multi-population particle swarm optimization algorithm to improve search efficiency. RF models have also been widely used in hybrid frameworks, with ref. [43] combining RFs with particle swarm optimization and ref. [44] integrating them with genetic algorithms. Meanwhile, Kriging models, known for their adaptability in global modeling tasks, were effectively integrated with genetic algorithms [45].
The ABC algorithm is a well-established metaheuristic that has demonstrated effectiveness in solving a wide range of combinatorial optimization problems [31,32,35,46,47,48]. These studies have successfully adapted the ABC algorithm to discrete and combinatorial domains by modifying the solution representation and defining suitable neighborhood optimization over permutations. The integration of surrogates with the ABC algorithm has been explored in several studies, which leverage surrogate models to estimate the fitness landscape, thereby reducing the number of exact evaluations and improving the overall efficiency of the algorithm [4,29]. For instance, ref. [4] applied RBF and Kriging models to assist the employed and onlooker bee stages, respectively. Moreover, ref. [29] employed an RBF model to evaluate the offspring generated by the search operator pool during the employed bee phase.
Despite the growing research applying SAEAs to general, computationally expensive optimization problems, limited work has been carried out on real-world applications [7]. Production scheduling, a fundamental and widely studied combinatorial optimization problem, has notably received limited attention in the context of surrogate-assisted approaches. Only a few research efforts have addressed this gap: for instance, ref. [49] employed surrogate-assisted Ant Colony Optimization to solve a practical job shop scheduling problem, while ref. [50] proposed a surrogate-assisted differential evolution approach for a practical parallel machine scheduling problem. To the best of the authors’ knowledge, studies applying SAEAs to the flowshop scheduling problem remain very limited, with the work of [51] standing as the most significant study to date. Moreover, no prior studies have addressed the PFSP with maintenance, learning, and deterioration effects, which is the focus of this paper, highlighting a clear gap and underscoring the need for further investigation into the effectiveness of SAEAs in this context.

3. Formulation of the Multi-Effect PFSP with Maintenance, Learning, and Deterioration Constraints

We consider an extended version of the permutation flowshop scheduling problem (PFSP), integrating predictive maintenance constraints along with learning and deteriorating effects. In this environment, a set J = { J 1 , J 2 , , J n } of n production jobs is processed through a series M = { M 1 , M 2 , , M m } of m machines, in the same predetermined order from one machine to another; i.e., only permutation schedules are allowed. Moreover, to reflect realistic operational conditions, each machine is continuously monitored by a Prognostics and Health Management (PHM) module, which estimates the Remaining Useful Life (RUL) and the degradation associated with each job. When the accumulated degradation reaches a critical threshold, a maintenance task must be scheduled to prevent system failure.
The complete schedule for each machine $M_i$ consists of a sequence $\pi_i$ combining jobs and planned maintenance activities. This sequence is composed of $k_i \geq 1$ maintenance operations interleaved with blocks of jobs and can be formally represented as
$\pi_i = \{B_{i1}, M_{i1}, B_{i2}, M_{i2}, \ldots, B_{ik_i}, M_{ik_i}, B_{i(k_i+1)}\}$, where $\bigcup_{l=1}^{k_i+1} B_{il} = J$.
The following assumptions are considered in the problem formulation:
  • All jobs are available at the beginning of the schedule (time zero), and preemption is not allowed;
  • A machine can perform either a job or a maintenance task at any given time;
  • Each job $j$ has a base processing time $p_{ij}$ on machine $M_i$, and each maintenance operation $c$ has a base duration $pm_{ic}$;
  • Degradation after processing job $j$ on machine $i$ is calculated as $\sigma_{ij} = p_{ij} / RUL_{ij}$, where $0 < \sigma_{ij} < 1$ (see the short sketch after this list);
  • The degradation threshold $\Delta$ is set to 1 for all machines;
  • At least one maintenance operation is scheduled per machine, and no maintenance is allowed after the last job;
  • After each maintenance, the machine is fully restored to its original state (“as good as new”).
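For illustration, the degradation bookkeeping implied by these assumptions can be sketched as follows; this is a minimal sketch, and the helper names (needs_maintenance, maintenance_positions) are hypothetical rather than the authors' implementation.

# Minimal sketch of the degradation bookkeeping implied by the assumptions above;
# names and structure are illustrative, not the authors' code.
def needs_maintenance(degradation, sigma_next, threshold=1.0):
    # True if processing the next job would push accumulated degradation past the threshold
    return degradation + sigma_next >= threshold

def maintenance_positions(jobs, p, rul, threshold=1.0):
    """Walk one machine's job sequence and return the positions (after which job)
    a maintenance operation must be inserted.
    jobs: job indices in processing order; p[j], rul[j]: base processing time and RUL of job j."""
    degradation = 0.0
    positions = []
    for pos, j in enumerate(jobs):
        sigma = p[j] / rul[j]                      # degradation caused by job j (0 < sigma < 1)
        if needs_maintenance(degradation, sigma, threshold):
            positions.append(pos - 1)              # maintain after the previous job
            degradation = 0.0                      # machine restored "as good as new"
        degradation += sigma
    return positions

print(maintenance_positions([0, 1, 2], p=[4, 5, 3], rul=[10, 8, 9]))  # -> [0]

In this toy example, the degradation accumulated by the first two jobs would exceed the threshold, so a maintenance operation is inserted after the first job (position 0).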
To make the model more reflective of practical production systems, two dynamic effects are introduced:
  • Learning effect: Maintenance durations tend to decrease when scheduled later in the sequence due to improved worker efficiency;
  • Deteriorating effect: Job processing times increase over time as machines wear out or experience delays.
These effects are modeled using the following expressions:
$pm_{ic}(c) = pm_{ic} \times c^{\alpha}, \quad -1 < \alpha < 0 \qquad (1)$
$p_{ij}(t) = p_{ij} + \beta \times t, \quad \beta > 0 \qquad (2)$
where
  • $pm_{ic}(c)$ indicates the actual duration of the $c$th maintenance operation scheduled on the $i$th machine.
  • $pm_{ic}$ indicates the basic duration of the $c$th maintenance operation on the $i$th machine.
  • $\alpha$ indicates the learning rate.
  • $p_{ij}(t)$ indicates the actual processing time of the $j$th job scheduled on the $i$th machine at time $t$.
  • $p_{ij}$ indicates the basic processing time of job $j$ on machine $i$.
  • $\beta$ indicates the deterioration rate.
The objective is to determine the best sequencing of jobs and maintenance operations for each machine to minimize the total schedule completion time, C max , while accounting for dynamic variations in job processing times and maintenance durations. Ignoring these effects would lead to inaccurate estimations of C max , resulting in suboptimal schedules that either overestimate or underestimate the total completion time. By explicitly considering both learning and deteriorating effects, the proposed model ensures a more accurate and realistic estimation of C max , which is critical for generating practical and efficient schedules in industrial settings, considering that
  • The learning effect reduces maintenance durations over time, as maintenance teams become more efficient with experience. This reduction directly decreases the $C_{max}$ when maintenance operations are scheduled later in the sequence;
  • The deteriorating effect increases the processing times of production jobs if they are started later in the sequence. This increase directly impacts the $C_{max}$ by extending the overall schedule duration.
Given that $C_{max}$ primarily depends on the sum of job processing times and maintenance durations (as shown in Equation (3)), the expression for $C_{max}$ considering both effects simultaneously is provided in Equation (4):
$C_{max} = IT + \sum_{j=1}^{n} p_{mj} + \sum_{c=1}^{k_m} pm_{mc} \qquad (3)$
$C_{max} = IT + \sum_{j=1}^{n} p_{mj}(t) + \sum_{c=1}^{k_m} pm_{mc}(c) = IT + \sum_{j=1}^{n} \left(p_{mj} + \beta \times t\right) + \sum_{c=1}^{k_m} \left(pm_{mc} \times c^{\alpha}\right) \qquad (4)$
Here, $IT$ represents the total idle time of the last machine $m$ while waiting for job arrivals, and $k_m$ denotes the number of maintenance activities scheduled on the last machine $m$.
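To make Equations (1)–(4) concrete, the following minimal numeric sketch evaluates both effects and the simplified makespan expression; variable names are illustrative, and the idle time IT is taken as a given input rather than computed from a full schedule.

def actual_maintenance_duration(pm_base, c, alpha):
    # Equation (1): learning effect on the c-th maintenance operation, with -1 < alpha < 0
    return pm_base * (c ** alpha)

def actual_processing_time(p_base, t, beta):
    # Equation (2): deterioration effect on a job started at time t, with beta > 0
    return p_base + beta * t

def makespan_last_machine(idle_time, base_jobs, base_maint, start_times, alpha, beta):
    """Equation (4): C_max on the last machine, given its idle time IT, the base
    processing times, the base maintenance durations, and each job's start time."""
    total_jobs = sum(actual_processing_time(p, t, beta)
                     for p, t in zip(base_jobs, start_times))
    total_maint = sum(actual_maintenance_duration(pm, c, alpha)
                      for c, pm in enumerate(base_maint, start=1))
    return idle_time + total_jobs + total_maint

# Toy example: two jobs and one maintenance operation on the last machine.
print(makespan_last_machine(idle_time=2.0, base_jobs=[4.0, 5.0], base_maint=[3.0],
                            start_times=[2.0, 6.0], alpha=-0.3, beta=0.1))  # -> 14.8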

4. Proposed Solving Strategies Leveraging Surrogate-Modeling-Based Approaches

In this paper, four distinct ABC-based optimization approaches are proposed. By integrating surrogate modeling, these approaches approximate the fitness landscape, enabling rapid estimations of the fitness function and supporting extensive investigation of the solution space while significantly reducing reliance on exact evaluations. The foundation of our proposed approaches relies primarily on our previous studies, namely the ABC algorithm [31] and its enhanced variant, the Integrated Q-Learning-based ABC (IQABC) algorithm [35], illustrated in Figure 1. The ABC algorithm is an iterative search process that begins with an initialized population of solutions. It then proceeds through three main phases: (i) the employed bee phase, where the neighborhood of current solutions is exploited using randomly selected perturbation operators; (ii) the onlooker bee phase, where solutions are evaluated based on a fitness-based selection process to exploit promising areas of the search space; and (iii) the scout bee phase, which maintains diversity by replacing poor solutions with randomly generated ones. In the ABC framework, the employed and onlooker bee phases are the main drivers of exploitation, focusing on intensifying the search around high-quality solutions through neighborhood exploitation and selection mechanisms. In contrast, the scout bee phase promotes exploration by introducing new random solutions into the population, helping to escape local optima and increase search diversity [52,53]. To ensure more adaptive exploitation of the search space, the IQABC algorithm [35] enhanced our standard ABC by incorporating a Q-learning strategy to guide the exploitation of the employed bees’ neighborhood, replacing its original random mechanism.
The first proposed surrogate-based approach, namely the Surrogate-Assisted ABC (SABC) algorithm, represents an advanced variant of the ABC algorithm designed to optimize search operator selection. It introduces systematic exploitation of the neighborhood by comprehensively covering the pool of search operators. The surrogate model is leveraged to approximate fitness evaluations of offspring, enabling efficient operator selection and improved exploitation of candidate solutions. On the other hand, the Individual-based Surrogate-Assisted Q-learning ABC (ISQABC), Generation-based Surrogate-Assisted Q-learning ABC (GSQABC), and Combined Surrogate-Assisted Q-learning ABC (CSQABC) algorithms extend the IQABC algorithm to address both local exploitation and global exploration challenges. The global architecture and hierarchical organization of the proposed surrogate-based approaches are depicted in Figure 2.
These four approaches reflect a deliberate investigation into how surrogate modeling can be integrated within the ABC framework to address different optimization challenges. Specifically,
  • SABC focuses on improving exploitation efficiency through systematic operator selection;
  • ISQABC enhances local exploitation by systematically refining solutions using surrogate-assisted evaluations of the employed bees’ neighborhood;
  • GSQABC prioritizes exploration at the population level, leveraging surrogate modeling to identify and probe promising regions of the search space;
  • CSQABC integrates both strategies, simultaneously balancing local exploitation and global exploration to achieve a more comprehensive search.
This structured design enables a systematic investigation of the trade-offs and challenges associated with surrogate-assisted metaheuristics in the context of complex scheduling problems, an area that remains largely underexplored.
Finally, all four proposed algorithms (SABC, ISQABC, GSQABC, and CSQABC) share a common foundation in the ABC framework and its enhancement through surrogate modeling. For clarity and to avoid redundancy, we present here the key parameters that govern the behavior of these algorithms:
  • Population size (Pop_Size) refers to the number of candidate solutions (food sources) in the population. Larger populations increase the diversity of the search space, supporting better local exploitation and robustness against premature convergence.
  • Maximum number of iterations (Max_iteration) determines the global search horizon. A higher number allows for broader exploration of the solution space.
  • Onlooker bee percentage (Onlook%). In our discrete ABC adaptation, we explicitly control the fraction of the population participating in the onlooker phase, which performs neighborhood-based refinements (using local search). Setting this value allows us to intensify exploitation without overriding the search diversity preserved by the rest of the population.
  • Scout bee limit (limit) defines the number of consecutive unsuccessful trials after which a food source (solution) is considered stagnant and is replaced. This parameter controls diversification and helps escape local optima.
  • Controlled individuals (in ISQABC and CSQABC) determines the number of individuals within the population that are evaluated exactly rather than via surrogate approximation. These individuals are typically those with the best estimated fitness values, ensuring reliable exploitation of promising areas.
  • Controlled generations (in GSQABC and CSQABC) indicate the number of iterations (generations) in which all individuals are evaluated exactly. These generations serve to refine solutions periodically and improve the surrogate model’s learning.
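For concreteness, these shared parameters can be grouped into a single configuration object. The sketch below is purely illustrative; the names and default values are placeholders, not the tuned settings reported in the experiments of Section 5.

from dataclasses import dataclass

@dataclass
class SurrogateABCConfig:
    pop_size: int = 50               # Pop_Size: number of food sources (candidate solutions)
    max_iteration: int = 500         # Max_iteration: global search horizon
    onlook_pct: float = 0.2          # Onlook%: fraction of the population refined in the onlooker phase
    limit: int = 10                  # scout bee limit: consecutive failed trials before replacement
    controlled_individuals: int = 10 # ISQABC/CSQABC: individuals evaluated exactly per generation
    controlled_generations: int = 20 # GSQABC/CSQABC: generations evaluated exactly

config = SurrogateABCConfig(pop_size=100, controlled_generations=30)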
In what follows, we first present the construction of our surrogate model, which is crucial to the success of our proposed methods. We then describe how this model is used in each of the four optimization approaches.

4.1. Surrogate Model Design

Our surrogate model is carefully designed to accurately capture the nuances of the makespan function landscape, enhancing estimation precision through the following steps:
1.
Determining the input feature set.
This is a critical process in constructing the surrogate model [50]. Drawing on our domain knowledge and detailed analysis of the fitness function, we have selected a feature set that precisely and uniquely describes solution details. For our specified PFSP problem with n jobs and m machines, each candidate solution is represented through a structured encoding scheme. The production sequence is captured in a job-order vector, while a maintenance scheduling matrix indicates where maintenance operations are inserted along the job timeline for each machine. The matrix is binary-coded, with entries signifying whether a maintenance action occurs after a given job on a specific machine.
In our representation, these two global pieces of information are encoded into n + m features. The first n attributes indicate the order of each job in the sequence, while the last m attributes refer to the decimal encoding of each binary line of the maintenance matrix.
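A minimal sketch of this n + m encoding is given below, assuming that the first n features store each job's position in the sequence and that each binary maintenance row is read as a single integer; the function name is illustrative.

def encode_solution(sequence, maintenance):
    """sequence: permutation of job indices 0..n-1.
    maintenance: m x n binary matrix; maintenance[i][k] = 1 if a maintenance operation
    follows the k-th job of the sequence on machine i. Returns n + m features."""
    n = len(sequence)
    job_features = [0] * n
    for position, job in enumerate(sequence):
        job_features[job] = position                          # order of each job in the sequence
    machine_features = [int("".join(map(str, row)), 2)        # decimal code of each binary row
                        for row in maintenance]
    return job_features + machine_features

# Example with n = 3 jobs and m = 2 machines.
print(encode_solution([2, 0, 1], [[0, 1, 0], [1, 0, 0]]))  # -> [1, 2, 0, 2, 4]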
2.
Selection of Surrogate Model Algorithm.
After determining the feature set, the choice of a suitable surrogate model is critical for the success of surrogate-assisted optimization [12], yet it is inherently problem-dependent [54]. As emphasized by [55], “one surrogate model might give good results for a particular problem while it might perform very poorly when applied to another problem”. This observation underlines the importance of empirical evaluation when selecting a surrogate model for a specific problem context. Consequently, we compared several common methods: Random Forest (RF), Support Vector Regression (SVR), Radial Basis Function (RBF), and Kriging. Each model was tested on instances of varying sizes ($n \in \{20, 50, 100, 200, 500\}$ and $m \in \{5, 10, 20\}$). The training set was constructed using solutions generated during an initial ABC algorithm run. The Kriging model demonstrated superior performance in terms of mean squared error (MSE) and computational efficiency.
Furthermore, Kriging has several theoretical advantages that support its integration into surrogate-assisted metaheuristics: it provides exact interpolation of known data points, it offers uncertainty quantification, enabling informed decisions about whether to trust approximate fitness or trigger exact evaluations, and it is flexible and adaptable to high-dimensional or noisy landscapes, making it well-suited to complex discrete optimization problems such as PFSP [56]. These advantages, both empirical and theoretical, justify the adoption of Kriging in the proposed surrogate-assisted framework.
3.
Training of the Surrogate Model.
Training the surrogate model is a critical step in its construction, directly affecting its accuracy and effectiveness in approximating the fitness landscape. We used an online training approach to continuously update the model throughout the optimization process, ensuring its accuracy as new solutions are generated. Training begins in the initialization phase, where accurate evaluations of solutions generated using a combination of NEH-based methods and random generation are used to build the initial model. As optimization progresses, the surrogate model is incrementally retrained with new exact evaluations until its accuracy plateaus. This plateau is defined as no significant improvement in accuracy over six consecutive iterations; at this point, retraining ceases to minimize the computational overhead [14]. The incremental training approach allows the surrogate model to adapt to the evolving fitness landscape, achieving a balance between computational efficiency and prediction accuracy.
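The paper does not tie the Kriging model to a specific library; as one possible realization, the sketch below uses scikit-learn's GaussianProcessRegressor (a common Kriging implementation) and stops retraining after six consecutive iterations without accuracy improvement, mirroring the plateau rule described above. The class and method names outside scikit-learn are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

class KrigingSurrogate:
    def __init__(self, stagnation_limit=6):
        kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
        self.model = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        self.X, self.y = [], []                   # archive of exactly evaluated solutions
        self.best_mse = float("inf")
        self.stagnation = 0
        self.stagnation_limit = stagnation_limit  # six flat iterations => stop retraining

    def add_samples(self, features, fitness_values):
        self.X.extend(features)
        self.y.extend(fitness_values)

    def retrain(self, X_val, y_val):
        """Refit on all archived exact evaluations; return False once accuracy has plateaued."""
        if self.stagnation >= self.stagnation_limit:
            return False
        self.model.fit(np.asarray(self.X), np.asarray(self.y))
        mse = float(np.mean((self.model.predict(np.asarray(X_val)) - np.asarray(y_val)) ** 2))
        if mse < self.best_mse:
            self.best_mse, self.stagnation = mse, 0
        else:
            self.stagnation += 1
        return True

    def predict(self, features):
        """Approximate fitness; return_std also exposes Kriging's uncertainty estimate."""
        mean, std = self.model.predict(np.atleast_2d(features), return_std=True)
        return mean[0], std[0]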

4.2. Surrogate-Assisted Artificial Bee Colony Algorithm

The Surrogate-Assisted ABC (SABC) algorithm enhances our previous ABC variant [32] by replacing the random selection of a single neighborhood operator with a systematic evaluation of multiple operators, guided by surrogate-assisted fitness estimation. This improvement aims to produce higher-quality offspring while reducing the number of evaluations required during the exploitation of the neighborhood at each employed bee stage. The search operator pool, as introduced in [32,35], comprises six strategies designed to enhance solution exploitation as follows:
  • Swap Move on Production Jobs randomly swaps the positions of two production jobs within the sequence.
  • Double Swap Move executes two consecutive swap operations on production jobs.
  • Insert Move on Production Jobs removes a randomly selected job from its current position and reinserts it into a different position.
  • Double Insert Move performs two consecutive insertion operations on production jobs.
  • Right Shift on Maintenance Activities moves a maintenance activity to the right, placing it immediately after the next job in the sequence.
  • Left Shift on Maintenance Activities moves a maintenance activity to the left, placing it immediately before the preceding job in the sequence.
These operators enable flexible adjustments to both job sequences and maintenance schedules, enhancing the diversity and efficiency of the search process.
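As an illustration of how such a pool can be realized on the (job sequence, maintenance matrix) representation described in Section 4.1, a minimal sketch follows; the function names are hypothetical and boundary handling is kept deliberately simple.

import copy
import random

def swap_jobs(seq):
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)      # Swap Move on Production Jobs
    s[i], s[j] = s[j], s[i]
    return s

def insert_job(seq):
    s = seq[:]
    job = s.pop(random.randrange(len(s)))       # Insert Move on Production Jobs
    s.insert(random.randrange(len(s) + 1), job)
    return s

def shift_maintenance(maint, direction):
    """Move one randomly chosen maintenance flag one position right (+1) or left (-1)."""
    m = copy.deepcopy(maint)
    machine = random.randrange(len(m))
    flagged = [k for k, flag in enumerate(m[machine]) if flag == 1]
    if flagged:
        k = random.choice(flagged)
        new_k = k + direction
        if 0 <= new_k < len(m[machine]) and m[machine][new_k] == 0:
            m[machine][k], m[machine][new_k] = 0, 1
    return m

OPERATOR_POOL = [
    lambda seq, maint: (swap_jobs(seq), maint),                 # swap
    lambda seq, maint: (swap_jobs(swap_jobs(seq)), maint),      # double swap
    lambda seq, maint: (insert_job(seq), maint),                # insert
    lambda seq, maint: (insert_job(insert_job(seq)), maint),    # double insert
    lambda seq, maint: (seq, shift_maintenance(maint, +1)),     # right shift of a maintenance activity
    lambda seq, maint: (seq, shift_maintenance(maint, -1)),     # left shift of a maintenance activity
]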
The following steps detail the execution of the SABC algorithm, from initialization to the generation of optimized solutions, highlighting its mechanisms for balancing exploration and exploitation within the search space as follows:
  • Initialization. The initial population is generated using the integrated NEH (INEH) algorithm and a modified version of the INEH algorithm [31], along with random generation to enhance diversity and quality. Each solution is decoded and evaluated to construct the initial training sample for the Kriging model.
  • Surrogate-assisted Employed Bee Phase. Initially, a search operator from the pool is randomly selected to generate offspring for each solution in the population. The offspring are then evaluated using the exact fitness function. Subsequently, each employed bee applies all six search operators, and the Kriging model evaluates the resulting offspring. Greedy selection (Algorithm 1) is then used to determine whether to replace the current solution with the best offspring generated. The goal is to enhance the solution quality efficiently, minimizing the number of exact evaluations.
  • Onlooker Bee Phase. Onlooker bees select candidates from employed bees using the roulette wheel method. The Iterated Local Search (ILS) algorithm [57] is then applied to improve the selected solutions (Algorithm 2). This local search operates in two phases: (i) a destruction phase, where a subset of d jobs is randomly removed from the current solution, and (ii) a reconstruction phase, where the removed jobs are reinserted using the NEH heuristic. To accelerate the search, we implement two stopping strategies: a first-improvement criterion for all food sources and a complete-insertion strategy applied only to the best solution.
  • Scout Bee Phase. Solutions not improved beyond a certain limit are abandoned and replaced with new random ones to maintain diversity and avoid stagnation. Specifically, each solution is associated with a trial counter that tracks the number of consecutive iterations without improvement. When this counter exceeds a predefined threshold, the solution is considered stagnant and is replaced by a newly generated random permutation, ensuring the exploration of new regions in the search space.
The employed, onlooker, and scout bee phases are iterated until a specified maximum number of iterations is reached. Throughout these iterations, the surrogate model is incrementally trained using accurately evaluated solutions from all phases, continuously refining its accuracy until reaching a stagnation point, defined as no improvement in the Kriging model’s accuracy for six consecutive iterations. Figure 3 illustrates the flowchart of the SABC algorithm, and its pseudo code is described in Algorithm 3.
Algorithm 1 Greedy Selection for Updating Solution
Require: X: current solution; f(X): real fitness value of X; X_i: neighboring solution; f̃(X_i): approximate fitness value of X_i given by the surrogate model
  if f̃(X_i) < f(X) then
      Compute f(X_i) (exact fitness value of X_i)
      if f(X_i) < f(X) then
          X ← X_i
          f(X) ← f(X_i)
      end if
  else
      Keep X as is
  end if
  return X, f(X)
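For readers who prefer code, the following is a direct Python transcription of Algorithm 1, assuming exact_fitness and surrogate_fitness are callables supplied by the optimizer (minimization); the extra Boolean return value records whether an exact-evaluation update occurred, which is the test used later in Algorithms 3 and 4.

def greedy_selection(x, f_x, x_new, exact_fitness, surrogate_fitness):
    """Replace x only if the surrogate is optimistic AND an exact check confirms it."""
    if surrogate_fitness(x_new) < f_x:       # surrogate suggests an improvement
        f_new = exact_fitness(x_new)         # confirm with a single exact evaluation
        if f_new < f_x:
            return x_new, f_new, True        # accept the neighboring solution
    return x, f_x, False                     # otherwise keep x as is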
Algorithm 2 Iterated Local Search (ILS) algorithm
Require: S: current solution; d: number of jobs to remove; type: stopping criterion (first-improve or full)
Ensure: improved solution S*
  S* ← S
  Randomly remove d distinct jobs from S to form job set R
  S_partial ← S without the jobs in R
  if type = first-improve then
      for all jobs j in R do
          for all insertion positions p in S_partial do
              Insert j at position p to obtain S_new
              if f(S_new) < f(S*) then
                  S* ← S_new
                  break inner loop
              end if
          end for
      end for
  else
      for all jobs j in R do
          Evaluate all possible insertions of j in S_partial
          Select the best insertion to update S_partial
      end for
      if f(S_partial) < f(S*) then
          S* ← S_partial
      end if
  end if
  return S*

4.3. Individual-Based Surrogate-Assisted Q-Learning ABC

The Individual-based Surrogate-Assisted Q-learning ABC (ISQABC) algorithm is designed to significantly boost the exploitation capabilities of the baseline IQABC algorithm [35] by leveraging a larger population size and integrating a Kriging model for efficient fitness approximation. One of the key challenges in surrogate-assisted optimization is avoiding premature convergence to false optima while maintaining computational efficiency [13,14]. To address this, ISQABC incorporates individual-based evolution control, wherein a carefully selected portion of the population undergoes exact fitness evaluations. Deciding which individuals to control is a critical open problem in the field [3], as improper selection may lead to suboptimal solutions. In ISQABC, we employ a strategy that prioritizes exact evaluations for the best-performing individuals—which are most likely to lead to optimal solutions—while approximating less promising individuals using the surrogate model. This approach strikes a balance between enhancing solution quality and reducing the computational burden, thereby improving the overall efficiency of the algorithm.
Algorithm 3 Surrogate-Assisted ABC (SABC) Algorithm
Require: Max_Iter, Pop_Size, limit, Onlook%
  Initialize population using INEH, modified INEH, and random solutions
  Evaluate initial population exactly and construct the initial training set
  Train initial Kriging model
  stagnation_counter ← 0
  for t = 1 to Max_Iter do
      if Kriging model accuracy not yet stagnated then
          // Employed Bee Phase with Exact Evaluation
          for each solution X_i do
              Select one search operator at random
              Generate offspring X_i′ and evaluate it exactly
              if f(X_i′) < f(X_i) then
                  X_i ← X_i′, f(X_i) ← f(X_i′)
                  trial_i ← 0
              else
                  trial_i ← trial_i + 1
              end if
              Add (X_i, f(X_i)) to training set
          end for
      else
          // Surrogate-Assisted Employed Bee Phase
          for each solution X_i do
              Generate six offspring using all search operators
              Evaluate offspring with surrogate model
              Apply Greedy Selection (Algorithm 1)
              if solution X_i updated by exact evaluation then
                  trial_i ← 0
              else
                  trial_i ← trial_i + 1
              end if
          end for
      end if
      // Onlooker Bee Phase
      Select Onlook% × Pop_Size solutions via roulette wheel
      for each selected solution do
          Apply ILS (Algorithm 2)
          if solution improved then
              trial_i ← 0
          else
              trial_i ← trial_i + 1
          end if
      end for
      // Scout Bee Phase
      for each X_i do
          if trial_i > limit then
              Replace X_i with a new random solution
              trial_i ← 0
          end if
      end for
      Update Kriging model with new exact evaluations
      if model accuracy improved then
          stagnation_counter ← 0
      else
          stagnation_counter ← stagnation_counter + 1
      end if
  end for
  return Best solution found
The steps outlined below describe the execution of the ISQABC algorithm, from initialization to the generation of optimized solutions, emphasizing its exploitation within the search space as follows:
  • Initialization. The initial population is generated using the INEH, modified INEH, and random generation methods. The population is then sorted based on fitness values to identify individuals best suited for exact fitness evaluations and those suitable for approximate assessment.
  • Employed bee phase. During this phase, two swarms are created: the first swarm targets the top P 1 % of the best individuals, using the Q-learning algorithm [35] with exact fitness evaluations to generate offspring and update the Q-table, guiding the selection of search operators. The second swarm, composed of the remaining individuals, uses the Q-table to generate offspring whose fitness is evaluated using the Kriging model. This population, seeded with controlled individuals evaluated using the exact fitness function and approximated individuals assessed with the Kriging model, helps mitigate the risk of false optima by maintaining a diverse exploitation of the search space while ensuring accurate evaluation of the most promising solutions.
  • Onlooker bee phase. In this phase, the ILS algorithm (Algorithm 2) is applied to a subset of solutions generated by the first swarm and the best-updated individual from the second swarm.
  • Scout bee phase. Finally, the scout bee phase replaces individuals that have not been updated after a defined number of trials with new random individuals to sustain the diversity in the population.
As in the SABC approach, the employed, onlooker, and scout bee phases are iterated until a specified maximum number of iterations is reached. During these iterations, the surrogate model is incrementally trained using accurately evaluated solutions from all phases—employed, onlooker, and scout bees—to improve its accuracy. Figure 4 illustrates the flowchart of the ISQABC algorithm, while Algorithm 4 describes its pseudo code.
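A minimal sketch of this individual-based evolution control is shown below, reusing the greedy_selection sketch given after Algorithm 1; generate_offspring, exact_fitness, and surrogate are assumed callables, and the Q-learning details of [35] are not reproduced here.

def employed_bee_phase_isqabc(population, fitnesses, p_controlled, generate_offspring,
                              exact_fitness, surrogate, greedy_selection):
    """Top p_controlled share of the population: exact evaluations; the rest: surrogate
    screening with Greedy Selection deciding promotions. Returns the new exact samples
    used to retrain the Kriging model."""
    order = sorted(range(len(population)), key=lambda i: fitnesses[i])
    controlled = set(order[:max(1, int(p_controlled * len(population)))])
    new_samples = []
    for i, x in enumerate(population):
        offspring = generate_offspring(x)                    # operator chosen via the Q-table
        if i in controlled:                                  # controlled swarm: exact evaluation
            f_off = exact_fitness(offspring)
            new_samples.append((offspring, f_off))
            if f_off < fitnesses[i]:
                population[i], fitnesses[i] = offspring, f_off
        else:                                                # approximate swarm: surrogate screening
            population[i], fitnesses[i], _ = greedy_selection(
                x, fitnesses[i], offspring, exact_fitness, surrogate)
    return population, fitnesses, new_samples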
Algorithm 4 Individual-based Surrogate-Assisted Q-Learning ABC (ISQABC)
Require: Max_Iter, Pop_Size, Onlook%, limit, P: percentage of controlled individuals
  Initialize population using INEH, modified INEH, and random methods
  Evaluate all solutions exactly and sort by fitness
  Select top P% individuals as controlled swarm, rest as approximate swarm
  Initialize Q-learning table
  for t = 1 to Max_Iter do
      // Employed Bee Phase
      for each controlled individual X_i do
          Select operator using Q-learning
          Generate X_i′ and evaluate exactly
          Update Q-table based on improvement
          if f(X_i′) < f(X_i) then
              X_i ← X_i′, trial_i ← 0
          else
              trial_i ← trial_i + 1
          end if
          Add (X_i, f(X_i)) to training set
      end for
      Train Kriging model
      for each approximate individual X_j do
          Select operator using Q-table
          Generate X_j′ and evaluate f̃(X_j′) with Kriging model
          Apply Greedy Selection (Algorithm 1)
          if solution X_j updated by exact evaluation then
              trial_j ← 0
          else
              trial_j ← trial_j + 1
          end if
      end for
      Update training set with new exact evaluations
      // Onlooker Bee Phase
      Select a subset of individuals from the controlled swarm and the best updated individual from the approximate swarm
      for each selected individual do
          Apply ILS (Algorithm 2)
          if solution improved then
              trial_i ← 0
          else
              trial_i ← trial_i + 1
          end if
      end for
      // Scout Bee Phase
      for each X_i do
          if trial_i > limit then
              Replace X_i with a new random solution
              trial_i ← 0
          end if
      end for
  end for
  return Best solution found

4.4. Generation-Based Surrogate-Assisted Q-Learning ABC

The Generation-based Surrogate-Assisted Q-learning ABC (GSQABC) algorithm is engineered to enhance the exploration capabilities of the baseline IQABC algorithm [35] by leveraging generation-based evolution control and surrogate modeling. A Kriging model is integrated to estimate the fitness of individuals during the key phases (employed, onlooker, and scout bees) across selected generations, expediting the search process by reducing the number of exact fitness evaluations required in early iterations. To prevent premature convergence to false optima, GSQABC implements a controlled evolution approach during the initial generations. During these critical early stages, exact evaluations guide the algorithm’s exploration, ensuring accurate training of the surrogate model and preventing misleading fitness approximations. Once the initial control phase ends, the surrogate model is used to efficiently explore the broader search space. To further refine solution quality, GSQABC applies a greedy selection mechanism (Algorithm 1) at the end of each uncontrolled generation, ensuring that the best-found solutions are continuously updated. This combination of controlled early evolution and surrogate-guided exploration significantly improves the algorithm’s ability to explore the search space while still exploiting promising regions, enhancing both the quality of solutions and the computational efficiency of the optimization process.
The steps outlined below describe the execution of the GSQABC algorithm, from its initialization to the generation of optimized solutions, emphasizing its exploration within the search space, as follows:
  • Initialization. Generate the initial population using the INEH, modified INEH, and random generation methods.
  • Controlled Early Generations. During the employed, onlooker, and scout bee phases, the fitness of all individuals is evaluated exactly. After these generations, the surrogate model is trained using the individuals that underwent exact evaluation during these phases.
  • Uncontrolled Generations. During the uncontrolled generations, the fitness of new individuals in the employed, onlooker, and scout bee phases is approximated using the surrogate model. At the end of each generation, a greedy selection process is applied to update the best solutions based on exact evaluations, ensuring the accuracy and quality of the best solutions found.
The employed, onlooker, and scout bee phases are iterated until a specified maximum number of iterations is reached. During these iterations, the surrogate model is incrementally trained using accurately evaluated solutions from all phases—employed, onlooker, and scout bees—during controlled generations to improve its accuracy. Figure 5 illustrates the flowchart of the GSQABC algorithm, and Algorithm 5 describes its pseudo code.
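A minimal sketch of the generation-based control described above is given below; the helper names are illustrative, and the switch between exact and surrogate evaluation simply follows the controlled-generation counter G_c.

def evaluate_generation(generation, g_controlled, candidates, exact_fitness, surrogate_model):
    """Controlled generations (generation <= g_controlled): exact evaluations, which also
    feed the Kriging training set. Uncontrolled generations: surrogate approximations only."""
    if generation <= g_controlled:
        fitness = [exact_fitness(c) for c in candidates]
        training_samples = list(zip(candidates, fitness))   # exact data retrains the surrogate
    else:
        fitness = [surrogate_model(c) for c in candidates]
        training_samples = []
    return fitness, training_samples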
Algorithm 5 Generation-based Surrogate-Assisted Q-Learning ABC (GSQABC)
Require: Max_Iter, Pop_Size, limit, Onlook%, G_c: number of controlled generations
  Initialize population using INEH, modified INEH, and random generation
  Evaluate all individuals using exact fitness function
  Initialize Q-learning table
  for t = 1 to Max_Iter do
      if t ≤ G_c then    ▹ — Controlled Generations —
          // Employed Bee Phase
          for each individual X_i do
              Select operator using Q-learning
              Generate X_i′, evaluate exact f(X_i′)
              if f(X_i′) < f(X_i) then
                  X_i ← X_i′, trial_i ← 0
              else
                  trial_i ← trial_i + 1
              end if
              Add (X_i, f(X_i)) to training set
          end for
          // Onlooker Bee Phase
          Select individuals using roulette wheel
          for each selected X_j do
              Apply ILS (Algorithm 2)
              if solution improved then
                  trial_j ← 0
              else
                  trial_j ← trial_j + 1
              end if
          end for
          // Scout Bee Phase
          for each X_k do
              if trial_k > limit then
                  Replace X_k with a new random solution
                  trial_k ← 0
              end if
          end for
          Train Kriging model using exact evaluations
      else    ▹ — Uncontrolled Generations —
          // Employed Bee Phase
          for each individual X_i do
              Select operator using Q-learning
              Generate X_i′ and evaluate f̃(X_i′) with Kriging model
              Apply Greedy Selection (Algorithm 1), using exact f(X_i′) only if f̃(X_i′) < f(X_i)
          end for
          // Onlooker Bee Phase
          Select individuals and apply ILS using surrogate-based evaluations
          // Scout Bee Phase
          for each X_k do
              if trial_k > limit then
                  Replace X_k with a new random solution
                  trial_k ← 0
              end if
          end for
      end if
  end for
  return Best solution found

4.5. Combined Surrogate-Assisted Q-Learning ABC

The CSQABC algorithm is designed to harness the complementary strengths of ISQABC and GSQABC, merging individual and generation-based control mechanisms for more robust optimization. In CSQABC, the individual-based evolution control from ISQABC is used to fine-tune the exploitation phase of the algorithm by ensuring that only the most promising individuals—based on exact fitness evaluations—are refined with exact fitness values. This mechanism minimizes computational overhead while improving the accuracy of high-performing solutions, thus boosting the exploitation capabilities of the algorithm. Simultaneously, the generation-based control from GSQABC enhances the exploration phase. This enables broader exploration of the solution space through surrogate-guided fitness evaluations, reducing the risk of premature convergence and enabling more efficient navigation of the search space. During the critical early stages, exact fitness evaluations are employed to train the surrogate model effectively, ensuring accuracy in the subsequent exploration phase.
The following steps outline the execution of the CSQABC algorithm, from initialization to the generation of optimized solutions. We focus on how it effectively balances exploration and exploitation within the search space, as detailed below.
  • Initialization. Generate an initial population of solutions using the INEH, the modified INEH, and random generation. This diverse initialization ensures a broad exploration of the solution space from the start.
  • Controlled Generations
    • Employed Bee Phase. In the early generation, the population is divided into two swarms. The first swarm focuses on the top P 1 % of the best individuals, applying the Q-learning algorithm with exact fitness evaluations. These evaluations guide the generation of new offspring and update the Q-table, which informs the selection of search operators to ensure effective optimization. The second swarm consists of the remaining individuals and relies on the surrogate Kriging model for fitness evaluations. These individuals use the Q-table learned from the exact evaluations of the first swarm to produce new offspring, but their fitness is approximated using the surrogate model. This partitioning helps balance the precise exploration of promising solutions with computational efficiency, mitigating the risk of converging on false optima.
    • Onlooker Bee Phase. In this phase, the ILS algorithm (Algorithm 2) is applied to refine a subset of solutions from the first swarm and the best-updated individual from the second swarm. This improves solution quality by focusing on regions of the search space where promising solutions have been found.
    • Scout Bee Phase. The scout bee phase ensures population diversity by replacing individuals that have not improved after a predefined limit of trials with randomly generated ones, keeping the algorithm from stagnating.
    At the end of these controlled generations, the surrogate model is trained using the precisely evaluated individuals from this phase, enhancing its accuracy for future generations.
  • Uncontrolled Generations
    After the initial control phase, the surrogate model is employed extensively to approximate the fitness of new individuals across the employed, onlooker, and scout bee phases. Moreover, at the end of each generation, a greedy selection process is applied to maintain solution quality. This process uses exact fitness evaluations for the best-found solutions, refining and updating them to maintain the accuracy of the optimization results.
Figure 6 illustrates the flowchart of the CSQABC algorithm.

4.6. Study of Algorithmic Complexity

The computational complexity of metaheuristic algorithms is a critical aspect of their design and analysis, as it directly impacts their scalability and applicability to large-scale optimization problems. In this section, the complexity of the proposed algorithms is analyzed in terms of their major components: initialization, fitness evaluation, surrogate model training, and search operator application. The following assumptions are made:
  • n: Number of jobs.
  • m: Number of machines.
  • P: Population size.
  • T: Total number of iterations.
  • $T_c$: Number of controlled generations (where the exact fitness evaluation is used).
  • $T_u$: Number of uncontrolled generations (where the surrogate model is used for fitness estimation).
  • k: Number of training samples for the Kriging model.
  • E: Total number of exact fitness evaluations.
  • S: Number of states in the Q-learning table.
  • A: Number of actions in the Q-learning table.

Assessing SABC Algorithmic Complexity

  • Initialization:
    • The INEH and modified INEH algorithms have a complexity of $O(n^3 \times m)$ [58].
    • The random generation has a complexity of $O(n \times m)$.
    • Exact fitness evaluation (makespan calculation) has a complexity of $O(n \times m)$ [59].
    • The total complexity is $O(P \times n^3 \times m)$.
  • Surrogate-assisted Employed Bee Phase:
    • Generating offspring using search operators: $O(T \times P \times n \times m)$.
    • Exact fitness evaluations (during controlled generations): $O(T_c \times P \times n \times m)$.
    • Fitness estimation using the Kriging model costs $O(k^2)$ [60] during the uncontrolled generations: $O(T_u \times P \times k^2)$.
    • Greedy selection applied on $B$ solutions during the uncontrolled generations: $O(T_u \times B \times n \times m)$.
    • Total complexity: $O(T \times P \times n \times m) + O(T_u \times P \times k^2)$.
  • Onlooker Bee Phase:
    • The roulette wheel selection costs $O(P)$ [61]; the total complexity is then $O(T \times P)$.
    • Applying Iterated Local Search (ILS): ILS has a complexity of $O(n^2 \times m)$ per solution [57].
    • Total complexity: $O(T \times P \times n^2 \times m)$.
  • Scout Bee Phase:
    • Replacing abandoned solutions: $O(T \times P \times n \times m)$.
    • Total complexity: $O(T \times P \times n \times m)$.
  • Surrogate model training:
    • Training the Kriging model has a complexity of $O(k^3)$ [60]; the total complexity is then $O(T_c \times k^3)$.
Overall time complexity of the SABC is $O(P \times n^3 \times m) + O(T \times P \times n^2 \times m) + O(T_u \times P \times k^2) + O(T_c \times k^3)$.

Assessing ISQABC Algorithmic Complexity

  • Initialization:
    • Same as SABC: $O(P \times n^3 \times m)$.
  • Employed Bee Phase:
    • The first swarm ($P_{1\%} \cdot P$ individuals): Q-table selection and updates ($O(S \times A)$), offspring generation ($O(n \times m)$), and exact fitness evaluation ($O(n \times m)$). The total complexity is then $O(T \times P_{1\%} \cdot P \times (n \times m + S \times A))$. A minimal sketch of the Q-table-based operator selection is given after this subsection.
    • The second swarm: Q-table selection ($O(A)$), offspring generation ($O(n \times m)$), and fitness estimation by Kriging ($O(k^2)$). The total complexity is then $O(T \times (1 - P_{1\%}) \cdot P \times (n \times m + A + k^2))$.
    • Total complexity: $O(T \times P \times (n \times m + P_{1\%} \times S \times A + (1 - P_{1\%}) \times (A + k^2)))$.
  • Onlooker Bee Phase:
    • Same as SABC: $O(T \times P \times n^2 \times m)$.
  • Scout Bee Phase:
    • Same as SABC: $O(T \times P \times n \times m)$.
  • Surrogate Model Training:
    • The Kriging model is updated throughout the run, giving a total complexity of $O(T \times k^3)$.
The overall time complexity of ISQABC is $O(T \times P \times (n^2 \times m + P_{1\%} \times S \times A + (1 - P_{1\%}) \times (A + k^2)) + T \times k^3)$.
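The $O(S \times A)$ and $O(A)$ terms above correspond to updating and reading rows of the Q-table. The sketch below shows a generic epsilon-greedy operator selection and temporal-difference update; the state definition, reward, and hyperparameter values are illustrative assumptions rather than the exact design used in ISQABC.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A = 4, 6                      # number of states and of actions (search operators)
Q = np.zeros((S, A))             # the Q-table requires O(S x A) memory
lr, gamma, eps = 0.1, 0.8, 0.1   # learning rate, discount factor, exploration rate

def select_operator(state):
    # Epsilon-greedy lookup over one row of the Q-table: O(A) per call.
    if rng.random() < eps:
        return int(rng.integers(A))
    return int(np.argmax(Q[state]))

def update_q(state, action, reward, next_state):
    # Temporal-difference update; reading the next-state row is O(A),
    # and refreshing the full table is O(S x A).
    best_next = np.max(Q[next_state])
    Q[state, action] += lr * (reward + gamma * best_next - Q[state, action])

# Usage sketch: reward the chosen operator with the relative makespan improvement.
op = select_operator(state=0)
update_q(state=0, action=op, reward=0.05, next_state=1)
```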

Assessing GSQABC Algorithmic Complexity

  • Initialization: same as SABC: O ( P × n 3 × m ) .
  • Controlled generations:  O ( T c × P × ( n × m + S × A ) ) .
  • Surrogate model training:  O ( k 3 ) .
  • Uncontrolled generations:  O ( T u × P × ( n × m + A + k 2 ) ) ,
Overall Time Complexity of GSQABC:  O ( T c × P × ( n × m + S × A ) + T u × P × ( n × m + A + k 2 ) + k 3 ) .

Assessing CSQABC Algorithmic Complexity

  • Initialization: same as SABC: O ( P × n 3 × m ) .
  • Controlled generations:  O ( T c × P × ( n 2 × m + P 1 % × S × A + ( 1 P 1 % ) × ( A + k 2 ) ) + T c × k 3 ) .
  • Surrogate model training:  O ( k 3 ) .
  • Uncontrolled generations:   O ( T u × P × ( n × m + A + k 2 ) )
Overall Time Complexity of CSQABC is  O ( T c × P × ( n 2 × m + P 1 % × S × A + ( 1 P 1 % ) × ( A + k 2 ) ) + T c × k 3 + T u × P × ( n × m + A + k 2 ) ) .
While the theoretical time complexity of each proposed algorithm has been detailed individually, a comparative interpretation is necessary to better highlight their computational demands.
  • The SABC algorithm has a complexity of $O(P \times n^3 \times m) + O(T \times P \times n^2 \times m) + O(T_u \times P \times k^2) + O(T_c \times k^3)$, where the computational overhead mainly stems from Kriging retraining and surrogate-assisted offspring evaluation during the employed bee phase.
  • The ISQABC algorithm involves more computational effort due to a larger population and the selective use of exact evaluations based on Q-learning feedback. Its complexity is $O(T \times P \times (n^2 \times m + P_{1\%} \times S \times A + (1 - P_{1\%}) \times (A + k^2)) + T \times k^3)$. The key contributor to its higher cost is the combination of online surrogate usage and controlled evaluation of selected individuals across the full iteration range.
  • The GSQABC algorithm, while emphasizing exploration, limits exact evaluations to a subset of generations $T_c$. Its complexity is $O(T_c \times P \times (n \times m + S \times A) + T_u \times P \times (n \times m + A + k^2) + k^3)$. This structure significantly reduces the number of expensive exact evaluations compared to ISQABC.
  • The CSQABC algorithm combines both individual- and generation-based evolution control. It inherits the overhead of both strategies, leading to the highest overall complexity: $O(T_c \times P \times (n^2 \times m + P_{1\%} \times S \times A + (1 - P_{1\%}) \times (A + k^2)) + T_c \times k^3 + T_u \times P \times (n \times m + A + k^2))$. The primary computational overhead in CSQABC arises from maintaining two control mechanisms (at both the individual and generation levels) and frequent Kriging retraining, especially during the controlled phase; a minimal sketch illustrating these Kriging training and prediction costs follows the summary below.
In summary, although CSQABC offers a balanced exploration–exploitation strategy, it is also the most computationally expensive due to its hybrid evolution control and dual training cycles. This explains the observed increase in CPU time compared to other variants.
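The dominant surrogate-related costs quoted above come from fitting and querying the Kriging model. The following minimal sketch illustrates where they arise, assuming a Gaussian correlation kernel with a fixed length-scale and a plain numeric encoding of schedules; the actual model configuration and solution encoding used in this work may differ.

```python
import numpy as np

class SimpleKriging:
    """Minimal ordinary-Kriging surrogate (illustrative assumptions only)."""

    def __init__(self, theta=0.05, nugget=1e-8):
        self.theta = theta      # length-scale of the Gaussian correlation kernel
        self.nugget = nugget    # small diagonal term for numerical stability

    def _corr(self, A, B):
        # Gaussian correlation between every row of A and every row of B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-self.theta * d2)

    def fit(self, X, y):
        # Training: factorizing the k x k correlation matrix costs O(k^3).
        self.X = np.asarray(X, dtype=float)
        y = np.asarray(y, dtype=float)
        k = len(self.X)
        R = self._corr(self.X, self.X) + self.nugget * np.eye(k)
        L = np.linalg.cholesky(R)
        solve = lambda b: np.linalg.solve(L.T, np.linalg.solve(L, b))
        ones = np.ones(k)
        self.mu = ones @ solve(y) / (ones @ solve(ones))   # ordinary-Kriging mean
        self.alpha = solve(y - self.mu)                    # precomputed weights
        return self

    def predict(self, x):
        # Each query builds a correlation vector against the k training points
        # and takes a dot product with the precomputed weights; computing the
        # Kriging variance (omitted here) would add the O(k^2) solve assumed
        # in the complexity analysis.
        r = self._corr(np.asarray([x], dtype=float), self.X).ravel()
        return float(self.mu + r @ self.alpha)

# Usage sketch: 30 sampled schedules of 10 jobs, with dummy exact makespans.
rng = np.random.default_rng(0)
X_train = rng.permuted(np.tile(np.arange(10), (30, 1)), axis=1)
y_train = rng.uniform(900.0, 1200.0, size=30)
surrogate = SimpleKriging().fit(X_train, y_train)
print(surrogate.predict(X_train[0]))   # surrogate estimate for a known schedule
```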

5. Computational Results and Discussion

In this section, we conduct a thorough experimental analysis to evaluate the performance of the proposed surrogate-modeling-based optimization approaches. All algorithms and tests were implemented in Python 3.9.5 and executed on a personal computer running the Windows 10 Enterprise operating system, with an Intel i5 CPU at 2.10 GHz and 8 GB of RAM.
Below, we present the key components of our experiment, datasets, metrics, and algorithm parameters, which form the foundation of our research methodology. We compare our surrogate-modeling-based optimization approaches to IQABC [31], a competitive baseline previously shown to outperform both ABC [31] and the Variable Neighborhood Search (VNS) algorithm [62] under consistent conditions adapted to PFSP with learning, maintenance, and deterioration effects.

5.1. Datasets and Evaluation Metrics

We conducted experiments using 11 test beds from the Taillard benchmark, covering a range of PFSP problem sizes with $n \in \{20, 50, 100, 200\}$ jobs and $m \in \{5, 10, 20\}$ machines. Each test bed consists of 10 instances to ensure robust evaluation. To these test beds, which contain only production data (job processing times), we added maintenance and learning/deterioration effects.
For the maintenance data, job degradation values—crucial for scheduling maintenance—were generated based on job processing times. Two maintenance scenarios were considered to simulate different levels of maintenance complexity:
  • Mode 1: Frequent maintenance operations with medium durations, generated from a uniform distribution U [ 50 , 100 ] .
  • Mode 2: Complex maintenance operations with longer durations, generated from a uniform distribution U [ 100 , 150 ] .
Additionally, for learning and deteriorating effects, random indices (rates) were generated from a uniform distribution U [ 0 , 1 ] to simulate variable learning and deterioration over time.
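For illustration, the following minimal sketch shows one way such augmentation data could be generated with NumPy. Drawing one maintenance duration per machine and one learning/deterioration rate per job is an assumption made for brevity; the exact generation protocol used for our instances may attach these quantities differently.

```python
import numpy as np

def augment_instance(n_jobs, n_machines, mode, seed=0):
    """Generate illustrative maintenance durations and learning/deterioration rates."""
    rng = np.random.default_rng(seed)
    if mode == 1:
        maint = rng.uniform(50, 100, size=n_machines)    # frequent, medium durations
    else:
        maint = rng.uniform(100, 150, size=n_machines)   # complex, longer durations
    learning = rng.uniform(0.0, 1.0, size=n_jobs)        # learning rate per job
    deterioration = rng.uniform(0.0, 1.0, size=n_jobs)   # deterioration rate per job
    return maint, learning, deterioration

# Usage sketch: augment a 20-job, 5-machine Taillard instance under Mode 1.
maint, learn, deter = augment_instance(n_jobs=20, n_machines=5, mode=1)
```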
The performance of the surrogate-modeling-based approaches was assessed based on three key metrics:
  • Solution Quality: Each instance was executed five times per solution approach, and the average makespan value was retained. The Average Relative Percentage Deviation (RPD) was then calculated relative to the best-known solution of the Taillard instance without maintenance, learning, or deteriorating effects (Equation (5)); a short computation sketch is given after this list.
    $$ RPD = \frac{1}{R} \sum_{i=1}^{R} \frac{C_{max}^{Sol_i} - C_{max}^{best}}{C_{max}^{best}} \times 100, $$
    where $C_{max}^{best}$ is Taillard's best-known makespan, $C_{max}^{Sol_i}$ is the makespan returned in run $i$, and $R$ is the number of runs over similarly scaled instances.
  • Computational efficiency: To evaluate the computational efficiency of the proposed algorithms, we employ the following two complementary metrics:
    CPU Time: This metric captures the actual time (in seconds) consumed by each algorithm during its execution. It accounts for both the metaheuristic search and the overhead introduced by surrogate model training and prediction.
    gain_FE: This metric estimates the percentage reduction in the number of exact fitness evaluations (FEs) performed by a surrogate-assisted algorithm relative to an equivalent fully exact (IQABC-style) run with the same population size and iteration budget. It is defined as
    $$ gain\_FE = \left( 1 - \frac{\text{Controlled individuals} \times \text{Controlled generations}}{\text{Exact fitness evaluations of the equivalent IQABC run}} \right) \times 100. $$
    This is a conservative estimate, considering only the fixed evaluations in the employed bee phase. Additional exact evaluations (e.g., during the onlooker or scout phases) are not included, as they are non-deterministic and depend on the dynamic behavior of the algorithm. Nonetheless, gain_FE provides a lower-bound indicator of evaluation efficiency; a short computation sketch is given after this list.
  • Convergence Abilities: The convergence speed and behavior of each algorithm were analyzed to assess how quickly each method approaches optimal or near-optimal solutions.
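As referenced above, the two quality/efficiency metrics reduce to a few lines of code. The sketch below assumes that gain_FE is measured against the number of exact evaluations an equivalent fully exact (IQABC-style) run with the same population size and iteration budget would perform, which is consistent with the values reported in Table 6.

```python
def rpd(makespans, best_known):
    # Average Relative Percentage Deviation over R runs/instances.
    return 100.0 * sum((c - best_known) / best_known for c in makespans) / len(makespans)

def gain_fe(controlled_individuals, controlled_generations,
            population_size, total_iterations):
    # Percentage reduction in exact fitness evaluations versus a fully exact run.
    exact_fes = controlled_individuals * controlled_generations
    baseline_fes = population_size * total_iterations
    return 100.0 * (1.0 - exact_fes / baseline_fes)

# Usage sketch reproducing the GSQABC entry of Table 6: 70 x 200 controlled
# evaluations against a 70 x 270 fully exact budget gives roughly 25.9%.
print(gain_fe(70, 200, 70, 270))
print(rpd([1320, 1335, 1310], best_known=1278))
```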

5.2. Parameters Setting

The choice of parameters for the four proposed algorithms—SABC, ISQABC, GSQABC, and CSQABC—was made to balance exploration and exploitation while ensuring a fair comparison in terms of the number of exact function evaluations. Below, we detail and justify the parameter settings for each algorithm:
  • The parameters for the SABC algorithm were chosen to mirror those of the IQABC algorithm, which has proven to be effective in addressing the PFSP. Specifically, we retained a population size of 70, a maximum of 200 iterations, an onlooker bee percentage of 40 % , and a scout limit threshold of 5. These parameters ensure that the SABC algorithm retains a similar balance between exploration and exploitation, allowing a direct comparison between SABC and IQABC while incorporating surrogate-assisted evaluations to reduce the number of exact fitness evaluations.
  • In the ISQABC algorithm, the population size, representing the number of employed bees, was increased to 120. This larger population is designed to enhance the exploitation capabilities during the employed bee phase by providing more potential solutions for refinement. However, we kept the number of controlled individuals, i.e., the number of employed bees performing exact evaluations, at 70, identical to the IQABC algorithm. This decision ensures that the total number of exact fitness evaluations remains comparable across algorithms. The remaining parameters, maximum iterations set at 200, onlooker bee percentage set at 40%, and scout limit threshold set at 5, were retained from the IQABC algorithm, allowing us to isolate the impact of increased exploitation through a larger population.
  • In the GSQABC algorithm, the number of generations (iterations) was increased to 270 to enhance its exploration capabilities. We contend that a longer runtime enables the algorithm to explore a larger portion of the search space, potentially improving convergence for complex problem instances. However, we maintained the number of controlled generations at 200, consistent with the IQABC algorithm, to ensure the comparability of the number of exact fitness evaluations. The remaining parameters, population size set at 70, onlooker bee percentage at 40%, and scout limit threshold at 5, were also kept identical to those of the IQABC algorithm to ensure that the effect of extended exploration was the primary distinguishing factor.
  • For the CSQABC algorithm, we adopted a hybrid strategy by combining elements from the ISQABC and GSQABC algorithms. The population size was set to 120, of which 70 are controlled individuals performing exact evaluations. Similarly, the maximum number of iterations was set to 270, with 200 controlled iterations, allowing the CSQABC algorithm to balance exploration (through increased iterations) and exploitation (through a larger population), while maintaining a comparable number of exact evaluations as the IQABC algorithm. The onlooker bee percentage ( 40 % ) and scout limit threshold (5) were retained to ensure that all algorithms share a consistent foundation for the bee colony behavior.
To ensure transparency and facilitate reproducibility, Table 1 summarizes the key configuration parameters used for each of the four algorithms.
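For readers who prefer a machine-readable form, the settings of Table 1 can be captured as a plain dictionary; the key names below are illustrative and are not taken from our code base.

```python
# Configuration summary of Table 1 (illustrative key names).
CONFIGS = {
    "SABC":   {"pop_size": 70,  "max_iter": 200, "onlooker_pct": 0.4, "limit": 5},
    "ISQABC": {"pop_size": 120, "max_iter": 200, "onlooker_pct": 0.4, "limit": 5,
               "controlled_individuals": 70},
    "GSQABC": {"pop_size": 70,  "max_iter": 270, "onlooker_pct": 0.4, "limit": 5,
               "controlled_iterations": 200},
    "CSQABC": {"pop_size": 120, "max_iter": 270, "onlooker_pct": 0.4, "limit": 5,
               "controlled_individuals": 70, "controlled_iterations": 200},
}
```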
It is important to note that these surrogate-assisted algorithms may trigger additional exact evaluations beyond those counted in standard algorithmic iterations. These are selectively used when the surrogate model predicts an improved solution, either to replace a current one or to update the best-found solution within an iteration. Such validations are essential to prevent false optima and maintain the integrity of the search. These additional evaluations do not aim to enhance the search but rather aim to ensure correctness and prevent the premature rejection of promising solutions due to surrogate approximation errors. This behavior reflects a common and necessary trade-off in surrogate-assisted optimization, where maintaining reliability sometimes requires limited, targeted exact evaluations in addition to those driving the search dynamics.
In the following section, we evaluate the performance of the proposed algorithms in comparison to the baseline IQABC algorithm. The results are based on 1100 independent runs across a comprehensive benchmark. Detailed insights are provided through multiple metrics, including the frequency with which each algorithm achieves the best solutions, along with statistical validation. The performance assessment is organized into two parts:
  • Ablation Analysis, assessing the individual and combined effects of exploitation and exploration.
  • Overall Comparison, evaluating algorithms across various problem scales and metrics, including solution quality, computational efficiency, and convergence behavior.

5.3. SABC, ISQABC, GSQABC, and CSQABC Ablation Analysis

The ablation analysis evaluates the individual contributions of the surrogate-assisted components in the proposed algorithms. The goal of this analysis is to isolate the effects of exploitation, exploration, and their combination, providing insights into the strengths and limitations of each approach.

5.3.1. SABC Algorithm

The SABC algorithm serves as a bridge between the basic ABC [32] and the more advanced IQABC [35]. It replaces the random operator selection of the ABC algorithm with a surrogate-based systematic operator selection mechanism, which improves the efficiency and effectiveness of the search process. The results from Table 2, Table 3 and Table 4 show that SABC achieves performance equivalent to IQABC, outperforming the basic ABC algorithm. This demonstrates the effectiveness of surrogate-assisted approaches in enhancing the performance of metaheuristic algorithms.

5.3.2. Exploitation-Focused Algorithm (ISQABC)

The ISQABC algorithm enhances the exploitation of IQABC by increasing the number of employed bees through a larger population size, thereby intensifying local refinement while leveraging surrogate-assisted fitness estimation during the employed bee phase. It performs well on large-scale problems (Table 4) due to its ability to refine high-quality solutions through focused exploitation, which is particularly effective in high-dimensional search spaces. However, on small-scale problems (Table 2), its strong exploitation tendency can lead to premature convergence and stagnation in local optima, reducing its performance. This is because small-scale problems often require a better balance between exploration and exploitation, which ISQABC's heavy focus on exploitation may not adequately provide.

5.3.3. Exploration-Focused Algorithm (GSQABC)

The GSQABC algorithm enhances the exploratory capacity of IQABC by increasing the number of iterations; when combined with low-cost surrogate-assisted evaluations, this broadens the likelihood of identifying promising regions over time, even in the presence of partial stagnation. This approach performs well on specific large-scale instances (Table 4), particularly under maintenance mode 2, e.g., 100 × 5 (M2), 200 × 10 (M2), where its exploration capabilities prove beneficial. However, its performance suffers on small-scale problems (Table 2), as excessive exploration leads to inefficiency.

5.3.4. Combined Approach (CSQABC)

The CSQABC algorithm combines the strengths of ISQABC and GSQABC to balance exploitation and exploration. This hybrid approach achieves competitive performance across all problem scales, highlighting the versatility and robustness of CSQABC in a wide range of scenarios. This demonstrates the effectiveness of combining exploitation and exploration in surrogate-assisted optimization.

5.3.5. Summary

These findings highlight that while no single algorithm universally outperforms the others, their combined capabilities demonstrate a high degree of adaptability and efficiency. Each algorithm’s unique characteristics make it particularly suited to specific problem scenarios, proving the overall effectiveness of the proposed approaches in addressing the complexities of PFSPs under varying conditions. This equivalence further validates the utility of these algorithms as reliable solutions for scheduling problems with maintenance, learning, and deteriorating effects.

5.4. Comparative Assessment of SABC, ISQABC, GSQABC, and CSQABC

The overall comparison evaluates the algorithms across different problem scales (small, medium, and large) using solution quality (Table 2, Table 3 and Table 4), computational efficiency (Table 5 and Table 6) and convergence behavior, analyzed through convergence curves (Figure 7).

5.4.1. Solution Quality Assessment

The RPD results, presented in Table 2, Table 3 and Table 4, reveal distinct performance trends across problem scales. For small-scale problems, CSQABC and ISQABC consistently outperform the other algorithms, demonstrating their ability to handle smaller problem scales effectively. In medium-scale problems, the performance of all algorithms converges, with minor differences in RPD values. For large-scale problems, the performance differences narrow further, with ISQABC and CSQABC occasionally excelling in specific instances. GSQABC shows weaker results than its counterparts, except in specific large-scale instances under maintenance mode 2.

5.4.2. Computational Efficiency Assessment

The CPU time results, presented in Table 5, reveal that the surrogate-assisted algorithms require more execution time than the baseline IQABC. This increase is mainly due to the overhead introduced by the surrogate model, particularly the online training and evaluation steps. However, the increase remains moderate, and the approaches remain practical even for large-scale instances.
To complement this analysis, we calculated the gain_FE metric as a conservative estimate of the savings in exact fitness evaluations (Table 6). For example, the GSQABC algorithm performs exact evaluations for 70 controlled individuals over 200 controlled generations, i.e., 14,000 exact fitness evaluations, whereas an IQABC run covering the same 270 iterations would require 70 × 270 = 18,900; the resulting gain_FE of about 25.9% shows a clear reduction in costly evaluations while maintaining competitive performance. In the case of the SABC algorithm, which does not use explicitly defined controlled individuals or generations, estimating the number of exact fitness evaluations is more complex. During each iteration, each employed bee applies six search operators and generates candidate solutions, all evaluated via the surrogate model. A single exact fitness evaluation is performed only if the best surrogate-evaluated solution outperforms the current solution. Thus, at most one exact evaluation per bee per iteration is performed, but this number can be lower in practice. Given this stochastic behavior, we do not report a specific gain_FE value for SABC. However, the CPU time results show that SABC maintains computational efficiency comparable to IQABC, supporting the assumption that exact evaluations are effectively reduced through surrogate usage, even without explicit control mechanisms.
These results highlight that surrogate modeling enables the algorithm to explore deeper or broader search spaces (more individuals, more iterations) without proportionally increasing exact evaluations, demonstrating a good compromise between computational efficiency and solution quality. Nonetheless, we recognize that
  • Extra exact evaluations are still needed (e.g., in greedy selection) to correct potential surrogate inaccuracy;
  • Online model training may consume time, especially in early iterations or with complex landscapes.
Despite these challenges, the proposed surrogate-assisted algorithms show effective scalability and efficiency for solving complex PFSP instances, paving the way for future research on improving surrogate reliability and integration policies.

5.4.3. Convergence Behavior

The convergence curves, in Figure 7, illustrate how quickly each algorithm approaches optimal or near-optimal solutions. CSQABC consistently demonstrates superior convergence performance, with GSQABC following closely behind. This indicates that CSQABC effectively balances exploitation and exploration throughout the optimization process, while GSQABC benefits from its exploration-focused design. In contrast, IQABC, SABC, and ISQABC show slower convergence, reflecting their limitations in balancing exploration and exploitation. The superior performance of the CSQABC algorithm can be attributed to the combined individual- and generation-based control mechanisms, which dynamically adapt the search behavior. Specifically, the individual-based control enhances the algorithm’s exploitation capabilities by focusing on promising regions of the solution space, allowing the precise refinement of high-quality solutions. Meanwhile, the generation-based control mechanism promotes exploration by periodically diversifying the search, thus mitigating premature convergence to local optima.

5.4.4. Frequency of Best Solutions

The frequency (W) at which each algorithm identified the best solution provides additional insights into their comparative behavior. While ISQABC frequently obtains the best solutions across a range of instances, including both small-scale (20 × 20) and large-scale problems, e.g., 100 × 20, 200 × 10 (M1), 200 × 20 (M1), other algorithms also excel in specific scenarios. For example, CSQABC performs well in both small- and large-scale problems, including 20 × 10, 50 × 20, and 200 × 20 (M2). GSQABC shows strength in large-scale problems involving complex maintenance conditions, e.g., 100 × 5 (M2), 200 × 10 (M2), while SABC achieves strong results on medium-scale problems (50 × 5). This variation reflects the specialized strengths of each strategy, aligning with their respective focus on exploitation, exploration, or balance. The overall performance parity among the algorithms, confirmed by statistical tests, underscores the complexity of designing a universally superior surrogate-assisted approach for PFSP and highlights the necessity of tailoring algorithmic strategies to instance characteristics.
These findings highlight the efficiency of surrogate-assisted approaches, as they enable broader exploration of the search space within a fixed computational budget, thereby increasing the likelihood of discovering high-quality solutions. This demonstrates the potential of SM-based algorithms to enhance optimization performance across various problem domains.

5.4.5. Statistical Validation

To confirm the significance of the observed differences, we performed a Friedman test at the 5% significance level over the 220 problem instances. The test confirmed significant differences among the five algorithms (see Table 7) with the reported average ranks in Table 8.
Following the Friedman test, we applied the Nemenyi post hoc test to calculate the Critical Difference (CD). For k = 5 algorithms and N = 220 problem instances, the CD at α = 0.05 is approximately 0.412. Since no pair of algorithms exhibits a rank difference greater than the CD, no statistically significant difference was found using this criterion.
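For completeness, the critical difference follows from the standard Nemenyi formula, assuming the usual Studentized range value $q_{0.05} \approx 2.728$ for five algorithms:
$$ CD = q_{0.05} \sqrt{\frac{k(k+1)}{6N}} = 2.728 \sqrt{\frac{5 \times 6}{6 \times 220}} \approx 0.412. $$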
In addition, we conducted Wilcoxon signed-rank tests for all 10 pairwise comparisons and applied the Holm–Bonferroni correction (Table 9). Although some unadjusted p-values were below 0.05 (e.g., GSQABC vs. CSQABC = 0.013), none remained significant after correction, reinforcing the conclusion that observed differences are not statistically significant under rigorous control.
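The Holm–Bonferroni step-down procedure behind the adjusted thresholds in Table 9 can be reproduced with a few lines of Python; the p-values below are taken directly from the table, and the thresholds follow the standard $\alpha/(m - i + 1)$ rule for the $i$-th smallest of $m$ p-values.

```python
def holm(p_values, alpha=0.05):
    # Holm-Bonferroni step-down: test p-values in ascending order against
    # increasingly permissive thresholds and stop at the first non-rejection.
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    decisions = [False] * len(p_values)
    for rank, i in enumerate(order):
        threshold = alpha / (len(p_values) - rank)
        if p_values[i] > threshold:
            break
        decisions[i] = True
    return decisions

# p-values of the 10 pairwise comparisons in Table 9.
pvals = [0.115, 0.567, 0.041, 0.585, 0.085, 0.559, 0.067, 0.021, 0.900, 0.013]
print(holm(pvals))   # all False: no comparison remains significant after correction
```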
These results confirm that while GSQABC often yields lower-quality solutions, no algorithm statistically outperforms the others across all test instances at the 5% level.

6. Conclusions and Future Research Directions

In this research paper, we investigate the effectiveness of surrogate-modeling-based optimization for solving the flowshop scheduling problem, a classical combinatorial problem that becomes increasingly challenging when incorporating real-world complexities such as maintenance, learning, and deteriorating effects. By combining the robust exploration and exploitation capabilities of the ABC algorithm with the predictive power of surrogate models, we aimed to significantly reduce the computational cost associated with evaluating a high number of exact fitness functions. To this end, we proposed four surrogate-assisted approaches, each tailored to address specific aspects of the optimization process as follows:
  • The Surrogate-Assisted ABC (SABC) focused on enhancing the search operator selection mechanism, enabling a systematic application of the operator pool. Surrogate models were employed to efficiently evaluate the offspring, helping guide the search toward promising areas of the solution space.
  • The Individual-based Surrogate-Assisted Q-learning ABC (ISQABC) prioritized local exploitation by refining solutions systematically within the employed bees’ neighborhood. This approach used individual-based evolution control, ensuring the surrogate model accurately captured the local fitness landscape for improved refinement.
  • The Generation-based Surrogate-Assisted Q-learning ABC (GSQABC) shifted focus toward global exploration by leveraging generation-based evolution control. This approach used surrogate modeling to identify and probe promising regions of the search space, enhancing population-level diversity and exploration.
  • The hybrid approach, Combined Surrogate-Assisted Q-learning ABC (CSQABC), effectively balanced the strengths of ISQABC and GSQABC, delivering a comprehensive search that harmonizes local exploitation with global exploration.
A critical challenge in surrogate-modeling-based optimization is ensuring the accuracy of the surrogate model to avoid convergence toward false optima. To address this, we adopted an incremental online learning strategy, where the surrogate model was continually updated with accurately evaluated solutions during the controlled evolution process. This dynamic learning approach enhanced the surrogate model’s reliability and supported consistent optimization performance.
While the proposed algorithms did not consistently outperform the baseline IQABC across all instances, the findings offer several important insights: First, we observed that no single strategy is universally superior; rather, performance varies depending on the instance scale and maintenance scenario, which highlights the importance of adaptive surrogate integration. Second, the study reveals that model accuracy is a crucial bottleneck. A better understanding of the factors influencing surrogate precision in combinatorial spaces is needed to unlock the full potential of this approach. Third, the false optima issue remains a major challenge. Future work should explore alternative mechanisms (e.g., confidence thresholds or ensemble surrogates) to reduce reliance on exact fitness evaluations for model correction. Fourth, training time for online surrogate updates, while sometimes offset by reduced evaluations, must be optimized to ensure that surrogate-assisted algorithms remain competitive in practice. Fifth, while this study focused primarily on performance indicators such as solution quality, convergence, and fitness evaluation cost, it did not include direct or indirect measurement of the exploration–exploitation dynamics (e.g., via population diversity or attraction basin analysis). Future work should incorporate such measures to better characterize how surrogate-assisted control mechanisms influence the search behavior over time. Finally, although this study adopted tailored control parameters to ensure balanced computational effort, future investigations could further enhance comparability and generalizability by adopting termination criteria based on equal CPU time or consumed fitness evaluations.
In conclusion, this study opens a new and promising research avenue by demonstrating the feasibility, complexity, and future potential of using surrogate models to solve challenging scheduling problems. Promising directions include investigating the integration of advanced machine learning techniques for improved surrogate modeling, exploring hybrid metaheuristic frameworks, and extending these approaches to other complex scheduling environments. The insights gained from this work can guide researchers and practitioners in selecting and tailoring surrogate-assisted methods for their specific optimization problems, ultimately driving advancements in this important field.

Author Contributions

Conceptualization, N.T. and F.B.-S.T.; Methodology, F.B.-S.T.; Software, N.T.; Validation, F.B.-S.T., A.L. and R.B.; Writing—original draft, N.T.; Writing—review & editing, F.B.-S.T. and A.L.; Supervision, F.B.-S.T., A.L. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Karimi-Mamaghan, M.; Mohammadi, M.; Meyer, P.; Karimi-Mamaghan, A.M.; Talbi, E.G. Machine learning at the service of meta-heuristics for solving combinatorial optimization problems: A state-of-the-art. Eur. J. Oper. Res. 2022, 296, 393–422. [Google Scholar] [CrossRef]
  2. Rajwar, K.; Deep, K.; Das, S. An exhaustive review of the metaheuristic algorithms for search and optimization: Taxonomy, applications, and open challenges. Artif. Intell. Rev. 2023, 56, 13187–13257. [Google Scholar] [CrossRef] [PubMed]
  3. Oliveira, J.A.; Almeida, M.S.; Santos, R.Y.; de Gusmão, R.P.; Britto, A. New surrogate approaches applied to meta-heuristic algorithms. In Proceedings of the Artificial Intelligence and Soft Computing: 19th International Conference, ICAISC 2020, Zakopane, Poland, 12–14 October 2020; pp. 400–411. [Google Scholar]
  4. Zeng, T.; Wang, H.; Ye, T.; Wang, W.; Zhang, H. A Multi-Surrogate-Assisted Artificial Bee Colony Algorithm for Computationally Expensive Problems. In Proceedings of the International Conference on Neural Computing for Advanced Applications, Jinan, China, 8–10 July 2022; pp. 394–405. [Google Scholar]
  5. Ratle, A. Accelerating the convergence of evolutionary algorithms by fitness landscape approximation. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Berlin, Heidelberg, 27–30 September 1998; pp. 87–96. [Google Scholar]
  6. Emmerich, M.; Giotis, A.; Özdemir, M.; Bäck, T.; Giannakoglou, K. Metamodel—Assisted evolution strategies. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Granada, Spain, 7–11 September 2002; pp. 361–370. [Google Scholar]
  7. Liu, S.; Wang, H.; Peng, W.; Yao, W. Surrogate-assisted evolutionary algorithms for expensive combinatorial optimization: A survey. Complex Intell. Syst. 2024, 10, 5933–5949. [Google Scholar] [CrossRef]
  8. Dong, H.; Wang, P.; Fu, C.; Song, B. Kriging-assisted teaching-learning-based optimization (KTLBO) to solve computationally expensive constrained problems. Inf. Sci. 2021, 556, 404–435. [Google Scholar] [CrossRef]
  9. Song, X.; Lv, L.; Sun, W.; Zhang, J. A radial basis function-based multi-fidelity surrogate model: Exploring correlation between high-fidelity and low-fidelity models. Struct. Multidiscip. Optim. 2019, 60, 965–981. [Google Scholar] [CrossRef]
  10. Zheng, Y.; Fu, X.; Xuan, Y. Data-driven optimization based on random forest surrogate. In Proceedings of the 2019 6th International Conference on Systems and Informatics (ICSAI), Shanghai, China, 2–4 November 2019; pp. 487–491. [Google Scholar]
  11. Shi, M.; Lv, L.; Sun, W.; Song, X. A multi-fidelity surrogate model based on support vector regression. Struct. Multidiscip. Optim. 2020, 61, 2363–2375. [Google Scholar] [CrossRef]
  12. Khaldi, M.I.E.; Draa, A. Surrogate-assisted evolutionary optimisation: A novel blueprint and a state of the art survey. Evol. Intell. 2024, 17, 2213–2243. [Google Scholar] [CrossRef]
  13. Jin, Y.; Olhofer, M.; Sendhoff, B. On Evolutionary Optimization with Approximate Fitness Functions. In Proceedings of the Gecco, Las Vegas, NV, USA, 10–12 July 2000; pp. 786–793. [Google Scholar]
  14. Jin, Y.; Olhofer, M.; Sendhoff, B. A framework for evolutionary optimization with approximate fitness functions. IEEE Trans. Evol. Comput. 2002, 6, 481–494. [Google Scholar] [CrossRef]
  15. Ampatzis, C.; Izzo, D. Machine learning techniques for approximation of objective functions in trajectory optimisation. In Proceedings of the Ijcai-09 Workshop on Artificial Intelligence in Space, Noordwijk, The Netherlands, 17–18 July 2009; pp. 1–6. [Google Scholar]
  16. Pan, Z.; Wang, L.; Wang, J.; Lu, J. Deep Reinforcement Learning Based Optimization Algorithm for Permutation Flow-Shop Scheduling. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 7, 983–994. [Google Scholar] [CrossRef]
  17. Garey, M.R.; Johnson, D.S.; Sethi, R. The complexity of flowshop and jobshop scheduling. Math. Oper. Res. 1976, 1, 117–129. [Google Scholar] [CrossRef]
  18. Gawiejnowicz, S. Models and Algorithms of Time-Dependent Scheduling; Springer: Berlin/Heidelberg, Germany, 2020; Volume 2. [Google Scholar]
  19. Benkalai, I.; Rebaine, D.; Baptiste, P. Scheduling flow shops with operators. Int. J. Prod. Res. 2019, 57, 338–356. [Google Scholar] [CrossRef]
  20. Ladj, A.; Varnier, C.; Tayeb, F.S. IPro-GA: An integrated prognostic based GA for scheduling jobs and predictive maintenance in a single multifunctional machine. IFAC-PapersOnLine 2016, 49, 1821–1826. [Google Scholar] [CrossRef]
  21. Biskup, D. Single-machine scheduling with learning considerations. Eur. J. Oper. Res. 1999, 115, 173–178. [Google Scholar] [CrossRef]
  22. Gupta, J.N.; Gupta, S.K. Single facility scheduling with nonlinear processing times. Comput. Ind. Eng. 1988, 14, 387–393. [Google Scholar] [CrossRef]
  23. Cheng, M.; Xiao, S.; Luo, R.; Lian, Z. Single-machine scheduling problems with a batch-dependent aging effect and variable maintenance activities. Int. J. Prod. Res. 2018, 56, 7051–7063. [Google Scholar] [CrossRef]
  24. Neufeld, J.S.; Schulz, S.; Buscher, U. A systematic review of multi-objective hybrid flow shop scheduling. Eur. J. Oper. Res. 2023, 309, 1–23. [Google Scholar] [CrossRef]
  25. Zaied, A.N.H.; Ismail, M.M.; Mohamed, S.S. Permutation flow shop scheduling problem with makespan criterion: Literature review. J. Theor. Appl. Inf. Technol. 2021, 99, 830–848. [Google Scholar]
  26. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report TR06; Erciyes University, Engineering Faculty, Computer Engineering Department: Kayseri, Turkey, 2005. [Google Scholar]
  27. Li, L.; Cheng, Y.; Tan, L.; Niu, B. A discrete artificial bee colony algorithm for TSP problem. In Proceedings of the International Conference on Intelligent Computing, Zhengzhou, China, 11–14 August 2011; Springer: Berlin/Heidelberg, Germany; pp. 566–573. [Google Scholar]
  28. Kaya, E.; Gorkemli, B.; Akay, B.; Karaboga, D. A review on the studies employing artificial bee colony algorithm to solve combinatorial optimization problems. Eng. Appl. Artif. Intell. 2022, 115, 105311. [Google Scholar] [CrossRef]
  29. Zeng, T.; Wang, H.; Wang, W.; Ye, T.; Zhang, L. Surrogate-assisted artificial bee colony algorithm. In Proceedings of the International Conference on Bio-Inspired Computing: Theories and Applications, Taiyuan, China, 17–19 December 2021; pp. 262–271. [Google Scholar]
  30. Sun, L.; Sun, W.; Liang, X.; He, M.; Chen, H. A modified surrogate-assisted multi-swarm artificial bee colony for complex numerical optimization problems. Microprocess. Microsyst. 2020, 76, 103050. [Google Scholar] [CrossRef]
  31. Touafek, N.; Benbouzid-Si Tayeb, F.; Ladj, A.; Dahamni, A.; Baghdadi, R. An Integrated Artificial Bee Colony Algorithm for Scheduling Jobs and Flexible Maintenance with Learning and Deteriorating Effects. In Proceedings of the Conference on Computational Collective Intelligence Technologies and Applications, Hammamet, Tunisia, 28–30 September 2022; pp. 647–659. [Google Scholar]
  32. Touafek, N.; Ladj, A.; Tayeb, F.B.S.; Dahamni, A.; Baghdadi, R. Permutation flowshop scheduling problem considering learning, deteriorating effects and flexible maintenance. Procedia Comput. Sci. 2022, 207, 2518–2525. [Google Scholar] [CrossRef]
  33. Kiran, M.S.; Hakli, H.; Gunduz, M.; Uguz, H. Artificial bee colony algorithm with variable search strategy for continuous optimization. Inf. Sci. 2015, 300, 140–157. [Google Scholar] [CrossRef]
  34. Wang, H.; Wu, Z.; Rahnamayan, S.; Sun, H.; Liu, Y.; Pan, J.s. Multi-strategy ensemble artificial bee colony algorithm. Inf. Sci. 2014, 279, 587–603. [Google Scholar] [CrossRef]
  35. Touafek, N.; Benbouzid-Si Tayeb, F.; Ladj, A. A Reinforcing-Learning-Driven Artificial Bee Colony Algorithm for Scheduling Jobs and Flexible Maintenance under Learning and Deteriorating Effects. Algorithms 2023, 16, 397. [Google Scholar] [CrossRef]
  36. Tong, H.; Huang, C.; Minku, L.L.; Yao, X. Surrogate models in evolutionary single-objective optimization: A new taxonomy and experimental study. Inf. Sci. 2021, 562, 414–437. [Google Scholar] [CrossRef]
  37. Díaz-Manríquez, A.; Toscano, G.; Barron-Zambrano, J.H.; Tello-Leal, E. A review of surrogate assisted multiobjective evolutionary algorithms. Comput. Intell. Neurosci. 2016, 2016, 9420460. [Google Scholar] [CrossRef]
  38. Díaz-Manríquez, A.; Toscano, G.; Coello Coello, C.A. Comparison of metamodeling techniques in evolutionary algorithms. Soft Comput. 2017, 21, 5647–5663. [Google Scholar] [CrossRef]
  39. Fonseca, L.; Barbosa, H.; Lemonge, A. On similarity-based surrogate models for expensive single-and multi-objective evolutionary optimization. In Computational Intelligence in Expensive Optimization Problems; Springer: Berlin/Heidelberg, Germany, 2010; pp. 219–248. [Google Scholar]
  40. Jin, Y. Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm Evol. Comput. 2011, 1, 61–70. [Google Scholar] [CrossRef]
  41. Pan, J.S.; Liu, N.; Chu, S.C.; Lai, T. An efficient surrogate-assisted hybrid optimization algorithm for expensive optimization problems. Inf. Sci. 2021, 561, 304–325. [Google Scholar] [CrossRef]
  42. Liu, Y.; Liu, J.; Jin, Y. Surrogate-assisted multipopulation particle swarm optimizer for high-dimensional expensive optimization. IEEE Trans. Syst. Man, Cybern. Syst. 2021, 52, 4671–4684. [Google Scholar] [CrossRef]
  43. Gu, Q.; Wang, Q.; Li, X.; Li, X. A surrogate-assisted multi-objective particle swarm optimization of expensive constrained combinatorial optimization problems. Knowl. Based Syst. 2021, 223, 107049. [Google Scholar] [CrossRef]
  44. Han, L.; Wang, H. A random forest assisted evolutionary algorithm using competitive neighborhood search for expensive constrained combinatorial optimization. Memetic Comput. 2021, 13, 19–30. [Google Scholar] [CrossRef]
  45. Pholdee, N.; Bureerat, S.; Nuantong, W. Kriging surrogate-based genetic algorithm optimization for blade design of a horizontal axis wind turbine. Comput. Model. Eng. Sci. 2021, 126, 261–273. [Google Scholar] [CrossRef]
  46. Arık, O.A. Artificial bee colony algorithm including some components of iterated greedy algorithm for permutation flow shop scheduling problems. Neural Comput. Appl. 2021, 33, 3469–3486. [Google Scholar] [CrossRef]
  47. Li, Y.; Li, X.; Gao, L.; Zhang, B.; Pan, Q.K.; Tasgetiren, M.F.; Meng, L. A discrete artificial bee colony algorithm for distributed hybrid flowshop scheduling problem with sequence-dependent setup times. Int. J. Prod. Res. 2021, 59, 3880–3899. [Google Scholar] [CrossRef]
  48. Xuan, H.; Zhang, H.; Li, B. An improved discrete artificial bee colony algorithm for flexible flowshop scheduling with step deteriorating jobs and sequence-dependent setup times. Math. Probl. Eng. 2019, 2019, 1–13. [Google Scholar] [CrossRef]
  49. Thiruvady, D.; Nguyen, S.; Shiri, F.; Zaidi, N.; Li, X. Surrogate-assisted population based ACO for resource constrained job scheduling with uncertainty. Swarm Evol. Comput. 2022, 69, 101029. [Google Scholar] [CrossRef]
  50. Hao, J.h.; Liu, M.; Lin, J.h.; Wu, C. A hybrid differential evolution approach based on surrogate modelling for scheduling bottleneck stages. Comput. Oper. Res. 2016, 66, 215–224. [Google Scholar] [CrossRef]
  51. Mekki, I.R.; Cherrered, A.; Tayeb, F.B.S.; Benatchba, K. Fitness Approximation Surrogate-assisted Hyper-heuristic for the Permutation Flowshop Problem. Procedia Comput. Sci. 2023, 225, 4043–4054. [Google Scholar] [CrossRef]
  52. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  53. Afşar, B.; Aydin, D.; Uğur, A.; Korukoğlu, S. Self-adaptive and adaptive parameter control in improved artificial bee colony algorithm. Informatica 2017, 28, 415–438. [Google Scholar] [CrossRef]
  54. Babaei, M.; Pan, I. Performance comparison of several response surface surrogate models and ensemble methods for water injection optimization under uncertainty. Comput. Geosci. 2016, 91, 19–32. [Google Scholar] [CrossRef]
  55. Viana, F.A.; Haftka, R.T. Using multiple surrogates for metamodeling. In Proceedings of the 7th ASMO-UK/ISSMO International Conference on Engineering Design Optimization, New York, NY, USA, 3–6 August 2008; pp. 1–18. [Google Scholar]
  56. Palar, P.S.; Shimoyama, K. On efficient global optimization via universal Kriging surrogate models. Struct. Multidiscip. Optim. 2018, 57, 2377–2397. [Google Scholar] [CrossRef]
  57. Tasgetiren, M.F.; Pan, Q.K.; Suganthan, P.; Oner, A. A discrete artificial bee colony algorithm for the no-idle permutation flowshop scheduling problem with the total tardiness criterion. Appl. Math. Model. 2013, 37, 6758–6779. [Google Scholar] [CrossRef]
  58. Taillard, E. Some efficient heuristic methods for the flow shop sequencing problem. Eur. J. Oper. Res. 1990, 47, 65–74. [Google Scholar] [CrossRef]
  59. Pinedo, M.L. Scheduling: Theory, Algorithms, and Systems, 5th ed.; Springer: New York, NY, USA, 2016. [Google Scholar]
  60. Forrester, A.I.J.; Sóbester, A.; Keane, A.J. Engineering Design via Surrogate Modelling: A Practical Guide; Wiley: Chichester, UK, 2008. [Google Scholar]
  61. Eiben, A.E.; Smith, J.E. Introduction to Evolutionary Computing, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  62. Ladj, A.; Tayeb, F.B.S.; Varnier, C.; Dridi, A.A.; Selmane, N. A Hybrid of Variable Neighbor Search and Fuzzy Logic for the permutation flowshop scheduling problem with predictive maintenance. Procedia Comput. Sci. 2017, 112, 663–672. [Google Scholar] [CrossRef]
Figure 1. The baseline ABC and IQABC algorithms.
Figure 2. Global architecture of the solution approaches.
Figure 3. SABC flowchart.
Figure 4. ISQABC flowchart.
Figure 5. GSQABC flowchart.
Figure 6. CSQABC flowchart.
Figure 7. Convergence curves of different PFSP instances.
Table 1. Summary of key configuration parameters for all algorithms.
Parameter | SABC | ISQABC | GSQABC | CSQABC
Pop_Size | 70 | 120 | 70 | 120
Max_iteration | 200 | 200 | 270 | 270
Onlook% | 40% | 40% | 40% | 40%
limit | 5 | 5 | 5 | 5
Controlled individuals | – | 70 | – | 70
Controlled iterations | – | – | 200 | 200
Table 2. RPD values of small problem scales for the M1 and M2 maintenance modes.
M1M2
Instance IQABC SABC ISQABC GSQABC CSQABC IQABC SABC ISQABC GSQABC CSQABC
20x5_112.839.587.549.249.629.4914.4214.1213.3610.98
20x5_28.549.259.9212.398.4612.9411.7311.9613.6911.25
20x5_311.998.3812.428.727.345.9412.168.278.856.68
20x5_413.8613.8613.7416.47.666.3411.819.087.5213.55
20x5_58.987.59.159.6810.656.025.249.176.628.25
20x5_68.9213.1999.599.897.416.9110.5911.877.89
20x5_710.029.944.7910.227.4611.959.629.359.6311.26
20x5_86.1611.5311.027.498.875.4410.8810.719.1310.17
20x5_99.2512.4112.0212.4112.7714.531311.8811.4513.36
20x5_105.818.146.888.377.612.319.3710.5311.1511.26
Average9.6310.379.6410.459.039.2310.5110.5610.3210.46
W4120343111
20x10_117.0617.5519.5219.718.2718.9319.7521.520.2619.27
20x10_230.0527.6830.1931.3630.9826.2128.6424.7128.0626.78
20x10_320.4819.8519.9921.3118.9622.6221.6818.4120.1722.36
20x10_414.8716.1916.3416.0715.4515.818.0316.3922.0917.57
20x10_520.2316.1120.1114.2515.8514.5314.2513.0317.9716.05
20x10_619.0919.320.3222.6917.4915.2615.2216.9815.9113.37
20x10_71618.0518.1618.2813.8918.4618.4420.2818.8217.02
20x10_815.7516.4813.1918.511.7516.418.5621.3419.0818.01
20x10_923.6625.8423.8927.4724.7727.0125.0526.1326.3925.15
20x10_1015.917.9916.7118.612.1515.3316.2514.3712.8610.9
Average19.3019.5019.8420.8217.9519.0519.5819.3120.1618.64
W3101531303
20x20_131.8932.3831.2335.7532.7232.8135.7532.8735.2831.46
20x20_225.4528.1327.8523.8824.7724.6827.9825.7626.3326.8
20x20_333.9832.4831.8333.0232.7432.1731.9731.1831.4732.64
20x20_434.5635.6535.8736.8734.2635.2936.7535.6535.6135.6
20x20_536.8535.0937.0233.5536.5536.3433.3836.7137.2635.93
20x20_630.5931.2729.5730.7730.1134.9537.8835.1235.2732.75
20x20_727.5230.8330.3526.3327.6431.7133.4732.6333.9532.16
20x20_829.0431.4127.7631.6927.4731.5631.4831.8634.6531.23
20x20_928.3927.7226.9829.2727.7329.4233.2928.9131.0331.69
20x20_1029.8930.2129.6429.5629.6432.1134.4430.1832.3632.7
Average30.8231.5130.8131.0630.3632.1033.6332.0833.3232.29
W0044231303
Table 3. RPD values of medium problem scales for the M1 and M2 maintenance modes.
M1M2
Instance IQABC SABC ISQABC GSQABC CSQABC IQABC SABC ISQABC GSQABC CSQABC
50x5_15.6 9.56.258.8812.239.398.276.38.4910.17
50x5_25.266.558.5810.448.967.759.465.726.286.88
50x5_313.4511.7211.9513.4314.698.054.637.795.298.22
50x5_410.0312.218.8910.338.311.312.2610.355.5111.46
50x5_57.089.687.5510.99.127.425.476.186.875.88
50x5_63.128.518.5109.727-1.31.916.123.14
50x5_710.748.5711.5510.987.399.511.37.559.518.17
50x5_84.217.758.079.589.5112.2415.2297.678.17
50x5_99.637.4410.528.39.158.497.1512.427.611.93
50x5_1010.456.7211.578.8810.824.734.410.474.574.94
Average7.958.869.3410.179.988.587.687.766.797.89
W5300205320
50x10_113.1517.9818.2420.4313.9712.7412.6911.1810.9113.27
50x10_212.948.7113.2213.0310.4212.9814.1712.7813.9312.64
50x10_315.8516.6315.716.6916.121514.3514.413.788.46
50x10_413.3810.6912.4711.513.1917.3116.516.7513.616.81
50x10_511.0912.0612.8511.9811.7316.4113.9413.7311.8511.99
50x10_613.2112.9913.1813.213.0115.2219.9711.0819.1916.22
50x10_715.8313.812.059.4411.8310.6116.8518.0616.0714.54
50x10_816.1315.0712.8613.4214.411.6112.139.048.5710.17
50x10_915.3514.5712.3116.112.8110.9610.5410.4812.869.18
50x10_1013.9815.1118.5116.5116.7715.6714.9314.2412.5510.14
Average14.0913.7614.1314.2313.4213.85114.60713.1713.3312.34
W3331010144
50x20_122.7422.9920.6822.5421.1319.8519.2921.3921.3219.96
50x20_221.2223.3721.224.2121.7425.3327.4926.6227.1625.64
50x20_324.5823.523.719.2221.3921.8617.919.5420.7918.71
50x20_425.8827.5925.8625.4921.7325.726.9124.5625.8627.42
50x20_522.6522.8721.0122.3523.6528.6529.7328.9326.3728.19
50x20_623.2122.5423.5723.5221.3923.4222.920.9420.720.07
50x20_719.9718.0620.5717.0319.1220.4919.3919.3621.1820.54
50x20_821.7120.7323.7122.321.6319.6623.2420.9221.1522.26
50x20_925.0728.1826.352623.9326.2126.4325.8325.7524.88
50x20_1023.8421.4824.1624.423.3920.4919.1119.8217.7218.2
Average23.0923.1323.0822.7021.9123.1623.2322.7922.822.58
W0232322222
Table 4. RPD values of large problem scales for the M1 and M2 maintenance modes.
M1M2
Instance IQABC SABC ISQABC GSQABC CSQABC IQABC SABC ISQABC GSQABC CSQABC
100x5_16.41 13.597.778.1510.090.935.68.346.0911.63
100x5_210.387.857.0213.978.27.162.417.138.838.34
100x5_310.685.498.399.559.916.557.627.7710.094.68
100x5_411.963.161.926.016.416.424.510.441.499.49
100x5_59.113.264.682.238.324.974.097.3910.815.67
100x5_64.358.395.949.4912.066.645.189.154.477.35
100x5_710.635.719.098.583−1.274.597.8110.628.17
100x5_8−0.492.687.86.7810.177.68844.762.95
100x5_94.9610.20.3211.085.288.8510.668.995.19.84
100x5_1010.514.4210.757.456.728.36.847.23.424.18
Average7.847.476.368.328.015.625.947.826.567.23
W3231122042
100x10_110.76.6210.158.749.084.927.778.526.467.99
100x10_24.926.373.714.266.078.786.0110.986.799.23
100x10_37.357.027.579.189.4811.9811.037.488.3210.29
100x10_47.410.496.711.4110.5210.666.274.7511.247.53
100x10_512.99.7214.0811.8515.428.967.4910.958.917.95
100x10_611.9611.937.6914.3512.4812.712.849.968.4414.88
100x10_712.569.6513.0214.7412.257.387.118.629.448.9
100x10_812.814.510.2211.1712.396.8512.8113.0210.557.28
100x10_910.9910.3611.159.5210.5610.5914.8510.379.8210.61
100x10_1012.539.368.9911.5810.595.99.376.516.918.54
Average10.419.609.3210.6810.888.879.559.118.689.32
W0451033220
100x20_119.7524.0319.4119.519.3520.9221.7417.2920.3319.52
100x20_211.2812.6911.4915.7411.9211.9511.412.3511.179.53
100x20_314.4614.8312.814.6514.2311.9314.8615.0114.9114.9
100x20_417.2616.8515.8215.9815.4715.8217.2813.3616.618.66
100x20_512.8615.0615.3614.2215.1310.9113.039.129.5213.72
100x20_621.6421.3420.1817.6520.1215.8115.7417.3317.317.28
100x20_718.8917.9217.4719.8418.3612.9512.4310.514.6412.01
100x20_817.7119.518.0721.3418.818.8519.9319.4519.7718.49
100x20_919.6118.8920.9922.9421.716.9214.8515.8817.5818.16
100x20_1020.8221.4918.3720.0819.3519.4420.1118.9620.5921.65
Average17.4218.2616.9918.1917.4415.5516.1314.9216.2416.39
W3131212502
200x10_111.098.16.86.978.028.987.253.364.647.55
200x10_29.874.288.5111.477.496.336.3412.214.747.62
200x10_311.5710.9813.5512.636.125.669.615.828.37.05
200x10_49.885.3410.8110.49.727.1912.078.394.953.34
200x10_57.556.145.4211.064.97.514.147.51129.11
200x10_66.2110.628.789.926.787.864.736.662.5812.46
200x10_711.839.099.468.0414.455.264.486.283.267.76
200x10_87.189.526.2810.1111.978.729.0212.0410.459.42
200x10_98.957.695.127.246.527.313.19.086.495.96
200x10_1011.5211.8811.181311.956.9511.14.4610.111.24
Average9.568.368.5910.088.797.177.187.586.758.15
W1241222231
200x20_112.2212.4515.0911.712.5510.7312.619.9813.7710.23
200x20_215.558.3613.812.2315.612.4815.1413.8910.813.28
200x20_314.3214.3612.8513.6814.9313.5511.8411.0210.8411.83
200x20_413.9817.471515.6615.9415.3912.7914.179.5611.28
200x20_513.2713.4110.8713.0910.2610.4212.889.8212.689.51
200x20_69.2412.9410.1711.1610.7210.6910.0112.4511.079.59
200x20_711.6611.8711.5211.2213.159.4911.9411.9911.1511.73
200x20_89.9110.2112.211.9112.2112.6411.9910.6113.59.85
200x20_912.9412.811.1614.0712.7412.2714.3511.1210.8911.76
200x20_1016.1516.1515.0116.516.4613.1614.1114.3714.9711.84
Average12.9213.0012.7613.1213.4512.0812.7611.9411.9211.09
W3132110144
Table 5. Average CPU time (in seconds) of the proposed algorithms by problem scale and maintenance mode.
Instance | IQABC (M1) | SABC (M1) | ISQABC (M1) | GSQABC (M1) | CSQABC (M1) | IQABC (M2) | SABC (M2) | ISQABC (M2) | GSQABC (M2) | CSQABC (M2)
20x5 | 13.20 | 13.71 | 44.43 | 24.13 | 59.56 | 13.36 | 13.89 | 49.05 | 25.10 | 54.11
20x10 | 15.89 | 16.74 | 36.69 | 25.53 | 55.04 | 16.66 | 17.62 | 40.46 | 26.50 | 60.83
20x20 | 25.00 | 25.42 | 54.35 | 37.18 | 73.94 | 23.75 | 23.96 | 51.33 | 34.50 | 68.63
50x5 | 21.62 | 21.66 | 129.94 | 42.90 | 101.13 | 21.37 | 21.53 | 63.34 | 40.02 | 91.36
50x10 | 31.29 | 31.93 | 75.03 | 51.55 | 106.36 | 31.66 | 32.25 | 71.78 | 50.99 | 98.91
50x20 | 52.44 | 52.90 | 99.85 | 79.70 | 142.35 | 53.12 | 53.53 | 96.13 | 78.44 | 135.25
100x5 | 46.66 | 47.96 | 185.44 | 92.69 | 213.09 | 46.05 | 48.62 | 213.78 | 89.74 | 179.18
100x10 | 76.36 | 78.00 | 185.95 | 126.73 | 229.78 | 73.82 | 75.28 | 193.56 | 121.78 | 207.72
100x20 | 135.83 | 140.19 | 272.52 | 200.39 | 329.27 | 138.97 | 141.91 | 250.98 | 196.90 | 321.29
200x10 | 224.46 | 229.05 | 597.76 | 354.97 | 548.53 | 216.88 | 219.40 | 341.20 | 332.47 | 532.68
200x20 | 362.64 | 374.81 | 570.29 | 535.43 | 792.09 | 372.62 | 381.27 | 597.62 | 522.34 | 752.84
Table 6. Estimated gain in exact fitness evaluations (gain_FEs) for each algorithm.
Algorithm | Controlled Ind. | Controlled Gen. | Est. Exact FEs | gain_FEs (%)
IQABC | 70 | 200 | 14,000 | 0
ISQABC | 70 | 200 | 14,000 | 41.66
GSQABC | 70 | 200 | 14,000 | 25.92
CSQABC | 70 | 200 | 14,000 | 56.79
Table 7. Friedman's two-way analysis of variance summary.
Total N | 220
Chi-square | 10.306
Degrees of freedom | 4
p-value | 0.036
Table 8. Algorithms' mean ranks in Friedman's test.
Algorithm | Rank
IQABC | 2.92
SABC | 3.11
ISQABC | 2.84
GSQABC | 3.25
CSQABC | 2.88
Table 9. Pairwise comparisons using the Wilcoxon test.
Comparison | p-Value | Adjusted α | Significant (Y/N)
IQABC vs. SABC | 0.115 | 0.01 | N
IQABC vs. ISQABC | 0.567 | 0.0166 | N
IQABC vs. GSQABC | 0.041 | 0.00625 | N
IQABC vs. CSQABC | 0.585 | 0.025 | N
SABC vs. ISQABC | 0.085 | 0.00833 | N
SABC vs. GSQABC | 0.559 | 0.0125 | N
SABC vs. CSQABC | 0.067 | 0.00714 | N
ISQABC vs. GSQABC | 0.021 | 0.0055 | N
ISQABC vs. CSQABC | 0.900 | 0.05 | N
GSQABC vs. CSQABC | 0.013 | 0.005 | N
