Article

An Improved Sparrow Search Algorithm for Solving the Energy-Saving Flexible Job Shop Scheduling Problem

Fei Luan, Ruitong Li, Shi Qiang Liu, Biao Tang, Sirui Li and Mahmoud Masoud
1 College of Mechanical and Electrical Engineering, Shaanxi University of Science & Technology, Xi’an 710021, China
2 School of Economics and Management, Fuzhou University, Fuzhou 350108, China
3 Centre for Accident Research and Road Safety, Queensland University of Technology, Brisbane, QLD 4000, Australia
* Author to whom correspondence should be addressed.
Machines 2022, 10(10), 847; https://doi.org/10.3390/machines10100847
Submission received: 15 August 2022 / Revised: 14 September 2022 / Accepted: 19 September 2022 / Published: 23 September 2022
(This article belongs to the Section Industrial Systems)

Abstract

Due to emerging requirements and pressures related to environmental protection, manufacturing enterprises have shown growing interest in adopting various energy-saving strategies. However, environmental criteria are usually not considered in traditional production scheduling problems. To overcome this deficiency, energy-saving scheduling has drawn increasing attention from academic scholars and industrial practitioners. In this paper, an energy-saving flexible job shop scheduling problem (EFJSP) is introduced with the criterion of simultaneously minimizing power consumption and processing costs. Since the classical FJSP is strongly NP-hard, an Improved Sparrow Search Algorithm (ISSA) is developed to solve the EFJSP efficiently. In the ISSA, a Hybrid Search (HS) method is used to produce a high-quality initial population; a Quantum Rotation Gate (QRG) and a Sine–Cosine Algorithm (SCA) are integrated to strengthen the ability of the ISSA to coordinate exploration and exploitation; an adaptive adjustment strategy and Variable Neighborhood Search (VNS) are applied to enhance the diversification of the ISSA so that it can escape from local optima. Extensive computational experiments validate that the ISSA outperforms other existing algorithms in solving the EFJSP, owing to the advantages of its intensification and diversification mechanisms.

1. Introduction

With the promotion of Industry 4.0, the innovation and transformation of traditional manufacturing enterprises are developing rapidly. Against this background, a new and smart machining mode is emerging [1]. Artificial intelligence and machine learning methods have been introduced into the manufacturing industry to simulate the actual production environment [2]. Manufacturing workshops in various industries are becoming digital and intelligent. In the meantime, with the rapid growth of the modern economy and increasing awareness of environmental protection, manufacturing companies face both economic and environmental pressures. Reducing energy consumption is an essential objective for companies due to the requirements of sustainable development. To realize this environmental objective, some researchers proposed a direct approach of designing more energy-efficient machines for manufacturing procedures [3]. However, compared with designing energy-saving machines, companies can more effectively optimize the use of manufacturing resources through production scheduling optimization, so that energy consumption can be reduced and production efficiency can be achieved. Recently, scholars have paid more attention to production scheduling problems with the criterion of optimizing power consumption cost. Energy-saving scheduling has become a new research hotspot in the manufacturing industry and the scheduling community.
A review of recent literature on energy-saving scheduling is presented as follows. In 2012, Duflou et al. [4] applied the concept of energy saving to production scheduling procedures and established a scheduling model with two criteria, i.e., the makespan and the power consumption cost. Carli et al. [5] proposed a two-step scheduling model with a total-cost objective to obtain the best solution for the material handling activities of electric mobile material handling equipment (MHE). Zhang et al. [6] presented a heuristic evolutionary algorithm for an unrelated parallel machine scheduling problem. Ahmadi et al. [7] employed two types of algorithms, the non-dominated sorting genetic algorithm II (NSGA-II) with elite policies and the modified non-dominated sorting genetic algorithm (NSGA), for the job shop problem (JSP). Salido et al. [8] presented an enhanced genetic algorithm (GA) for the JSP in which machines can process jobs at different speeds (speed scaling). Li et al. [9] used an enhanced artificial bee colony (ABC) algorithm for the JSP with three criteria, i.e., the makespan, the carbon emissions, and the total machine load. Zhang et al. [10] proposed an enhanced shuffled frog-leaping algorithm for an energy-aware FJSP with the two objectives of minimizing the total power consumption and the makespan. Shahrabi et al. [11] employed a Q-learning algorithm to search for the optimal parameters of a variable neighborhood search (VNS) algorithm in dynamic job shop scheduling. Yang et al. [12] developed a novel hybrid whale optimization algorithm for the Flexible Job Shop Problem (FJSP) to optimize the makespan. Wu et al. [13] constructed an energy-aware FJSP model and proposed an improved NSGA for it. Zhu et al. [14] adopted a strengthened whale optimization algorithm for solving the JSP. Anuar et al. [15] solved the FJSP with the dual criteria of makespan and power consumption by continuous and discrete particle swarm optimization (PSO) algorithms. Ding et al. [16] constructed an energy-saving scheduling model for the permutation flow shop. Yang et al. [17] designed a hybrid memetic algorithm (MA) with decomposition variable neighborhood for the FJSP with the two criteria of minimizing the makespan and the total workload. Dai et al. [18] developed a strengthened GA to optimize the energy consumption and makespan for the FJSP with transportation constraints. Tan et al. [19] devised a modified NSGA-II for the FJSP with dual resource constraints. Lu et al. [20] presented a novel multi-objective discrete virus optimization algorithm (MODVOA) for the FJSP with the dual objectives of makespan and total additional resource consumption. Carli et al. [21] introduced a model to achieve the best control scheme for the battery charging of the MHE in warehouses. Yin et al. [22] constructed a novel low-carbon FJSP model and proposed a multi-objective GA based on a simplex lattice. In summary, most studies on energy-saving scheduling have focused on job shop systems with a single machine unit per operation. By comparison, little research has been devoted to the energy-saving flexible job shop scheduling problem (EFJSP), which is the research problem of this paper. To fill this research gap, we investigate the EFJSP from the perspective of energy-saving constraints and environmental criteria.
In recent years, several researchers have designed a variety of artificial intelligence algorithms for solving different combinatorial optimization problems [23,24,25,26,27,28,29,30,31,32,33,34,35]. The Sparrow Search Algorithm (SSA) is a novel metaheuristic algorithm that imitates the life behaviors of sparrow populations in nature [36]. Since its introduction in 2020, the SSA has become a popular research topic among scholars in various research areas. Zhang et al. [37] developed a chaotic SSA for the stochastic configuration network. Ouyang et al. [38] presented an improved SSA (ISSA), which clusters and distinguishes individual positions of sparrows using the K-means clustering method to accelerate the updating effectiveness of the population and reduce the influence of randomness. Zhang et al. [39] developed an ISSA for the classification problem of labeled and unlabeled data. Liu et al. [40] proposed an ISSA for solving the route selection problem. Yuan et al. [41] provided an ISSA to solve the ability mismatch loss problem in photovoltaic microgrid systems. Zhang et al. [42] introduced a discrete SSA to solve the traveling salesman problem (TSP).
Based on the above literature review, research gaps have been identified, and the main contributions of this paper are highlighted as follows.
  • Relevant papers on the FJSP that consider energy-saving constraints and the optimization criterion of minimizing the total power consumption cost are rare. With energy-saving concerns, this study aims to fill this research gap by defining, modelling, and solving the EFJSP.
  • To efficiently solve the EFJSP, we developed an improved sparrow search algorithm (ISSA) that consists of the hybrid search (HS), quantum rotation gate (QRG), sine–cosine algorithm (SCA), adaptive adjustment strategy (AAS), and variable neighborhood search (VNS) techniques.
  • The advantages of the developed ISSA are verified by extensive computational experiments on benchmark and practical instances.
  • The purpose of this paper is to increase knowledge reserves in the field of energy-saving scheduling in theory and to help manufacturing enterprises reduce energy consumption and processing costs in practice.
The organization of the remainder of this paper is as follows: Section 2 presents the definition of the EFJSP. Section 3 describes the traditional SSA. Section 4 develops the ISSA for the EFJSP. Section 5 reports the computational results of the ISSA, and Section 6 gives the conclusion and future research directions.

2. EFJSP Description

2.1. Problem Description

The EFJSP is stated as follows. There are a group of $n$ jobs and a group of $m$ machines, where $J_i$ denotes the number of operations contained in job $i$, and $O_{ij}$ denotes the $j$th operation of job $i$. Each operation $O_{ij}$ can be processed on one machine selected from its set of candidate machines among the $m$ machines. The EFJSP can be divided into two subproblems: machine allocation and operation sequencing. The common optimization objectives for the FJSP are to minimize the makespan, total flow time, total tardiness, etc. In this study, the EFJSP aims at optimizing the total power consumption together with such common objectives. Specifically, the objective of the EFJSP is to minimize the total cost, which is composed of two components: the processing cost and the energy consumption cost. Moreover, the energy consumption cost includes three parts: the power consumption cost in processing mode, the energy consumption cost in idle (standby) mode, and the power consumption cost for the transportation of jobs between machines. The following additional constraints are also considered:
(1) One machine can process only one operation at a time.
(2) Once started, an operation must be machined continuously and cannot be interrupted midway.
(3) There are precedence constraints among the operations of the same job; an operation can start only after its preceding operation is completed.
(4) Operations belonging to different jobs are independent of each other.
(5) There is no interruption while a machine is available.
(6) The preparation of a machine before processing and the loading and unloading of jobs are ignored.
(7) Machine failures and other emergencies are not considered.

2.2. Model Illustration

Sets and indices, parameters, and decision variables of the EFJSP are defined as follows:
Sets and Indices:
$i$: the index of a job;
$j$: the index of an operation;
$k, w$: the indices of machines;
$n$: the number of jobs;
$m$: the number of machines;
$J_i$: the total number of operations contained in job $i$;
$O_{ij}$: the $j$th operation of job $i$;
$M_k$: machine $k$.
Parameters:
$p_{ijk}$: the processing time of $O_{ij}$ on $M_k$;
$F$: the objective function, i.e., the total cost to be minimized;
$s_{ijk}$: the processing cost coefficient of $O_{ij}$ processed on $M_k$;
$c_{ijk}$: the power consumption cost coefficient (PCCC) of $O_{ij}$ processed on $M_k$;
$\theta_k$: the PCCC when $M_k$ is in standby mode;
$TC_{wk}$: the transfer PCCC from $M_w$ to $M_k$;
$TO_{i(j-1)w,ijk}$: the transfer time of $O_{i(j-1)}$ on $M_w$ to $O_{ij}$ on $M_k$;
$ST_{ij}$: the starting time of operation $O_{ij}$;
$CT_{ij}$: the completion time of $O_{ij}$.
Decision Variables:
$x_{ijk}$: 0–1 variable; if $O_{ij}$ is processed on $M_k$, $x_{ijk} = 1$; otherwise, $x_{ijk} = 0$.
Accordingly, the objective function of the EFJSP is established as follows:
$$
\min F = \sum_{i=1}^{n}\sum_{j=1}^{J_i}\sum_{k=1}^{m} x_{ijk}\, p_{ijk}\, s_{ijk}
+ \sum_{i=1}^{n}\sum_{j=1}^{J_i}\sum_{k=1}^{m} x_{ijk}\, p_{ijk}\, c_{ijk}
+ \sum_{k=1}^{m} \theta_k \Bigl( \max_{i,j}\{ CT_{ij}\, x_{ijk} \} - \sum_{i=1}^{n}\sum_{j=1}^{J_i} (CT_{ij} - ST_{ij})\, x_{ijk} \Bigr)
+ \sum_{i=1}^{n}\sum_{j=1}^{J_i}\sum_{w=1}^{m}\sum_{k=1}^{m} TC_{wk}\, TO_{i(j-1)w,ijk}\, x_{i(j-1)w}\, x_{ijk}
\tag{1}
$$
To explain the EFJSP visually, a numerical instance is given in Table 1. There are three jobs $\{J_1, J_2, J_3\}$ to be processed on four machines $\{M_1, M_2, M_3, M_4\}$, where jobs $\{J_1, J_2, J_3\}$ have two, three, and two operations, respectively. In addition, each operation has different machine candidates. Table 1 shows the processing times and corresponding energy consumption costs of the jobs on the machines, e.g., the processing time of $O_{11}$ on machines $M_1$ and $M_4$ is 7 min and 8 min, respectively, and the corresponding total energy consumption cost is CNY 12 and CNY 18, respectively. After an operation of a job is completed on a machine, the job is transferred to another machine to process the next operation. The transfer times of jobs between machines are shown in Table 2. Figure 1 shows an example of calculating the energy consumption cost of a machine in standby mode, e.g., when machine $M_2$ is chosen to process operation $O_{21}$ prior to $O_{32}$, the 0–1 variables $x_{212}$ and $x_{322}$ are set to 1, and an idle time is produced according to the operation sequence and the transfer time. The idle time of $M_2$ is calculated as $CT_{32} - ((CT_{21} - ST_{21}) + (CT_{32} - ST_{32}))$. Therefore, the energy consumption cost of a machine in standby mode can be obtained by multiplying the PCCC in standby mode by the idle time.
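To make the composition of the objective function concrete, a minimal Python sketch for evaluating Equation (1) on a fully decoded schedule is given below. The data layout (a list of operation records) and all field names are illustrative assumptions; the implementation reported in Section 5 was coded in MATLAB.

```python
# Illustrative evaluation of Equation (1); the data structure and field names are hypothetical.
def total_cost(ops, theta, TC):
    """ops: one dict per scheduled operation with keys
         'mach' (machine index), 'start', 'end' (times in min),
         'proc_cost_rate' (s_ijk), 'power_cost_rate' (c_ijk),
         'prev_mach' (machine of the previous operation of the same job, or None),
         'transfer_time' (transfer time from the previous machine, 0 if none);
       theta[k]: standby PCCC of machine k;
       TC[(w, k)]: transfer PCCC from machine w to machine k."""
    processing = sum((o['end'] - o['start']) * o['proc_cost_rate'] for o in ops)
    power = sum((o['end'] - o['start']) * o['power_cost_rate'] for o in ops)
    # Idle-mode cost: last completion time on each machine minus its busy time (cf. Figure 1).
    idle = 0.0
    for k, rate in theta.items():
        on_k = [o for o in ops if o['mach'] == k]
        if on_k:
            busy = sum(o['end'] - o['start'] for o in on_k)
            idle += rate * (max(o['end'] for o in on_k) - busy)
    # Transportation cost between consecutive operations of the same job.
    transport = sum(TC[(o['prev_mach'], o['mach'])] * o['transfer_time']
                    for o in ops if o['prev_mach'] is not None)
    return processing + power + idle + transport
```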

3. Sparrow Search Algorithm (SSA)

The SSA is a relatively new metaheuristic algorithm that mimics the predation and anti-predation behaviors of sparrow populations [36]. Specifically, in foraging, individuals act in two roles: discoverer and joiner. The discoverer is in charge of finding food and guiding other individuals, while the joiner forages by following the discoverer. In addition, a certain percentage of sparrows are selected as guarders, which send out alarm signals and perform anti-predation behaviors when they perceive danger. The position of the discoverer is regenerated by Equation (2):
$$
X_{i,j}^{t+1} =
\begin{cases}
X_{i,j}^{t} \cdot \exp\!\left(\dfrac{-i}{\alpha\, T}\right), & R_2 < ST \\[4pt]
X_{i,j}^{t} + O \cdot G, & R_2 \geq ST
\end{cases}
\tag{2}
$$
where $t$ denotes the current iteration number and $T$ is the maximum number of iterations. $X_{i,j}^{t}$ represents the current position of the $i$th sparrow in dimension $j$, and $X_{i,j}^{t+1}$ is its updated position. $\alpha \in (0, 1]$ is a random value. $ST \in (0.5, 1]$ represents the safety value, and $R_2 \in (0, 1]$ denotes the warning value. $G$ is a $1 \times d$ matrix in which all values are 1, and $O$ is a random parameter.
The position of the joiner can be regenerated by Equation (3):
$$
X_{i,j}^{t+1} =
\begin{cases}
O \cdot \exp\!\left(\dfrac{X_{w} - X_{i,j}^{t}}{i^{2}}\right), & i > n/2 \\[4pt]
X_{b} + \left| X_{i,j}^{t} - X_{b} \right| \cdot B^{+} \cdot G, & \text{otherwise}
\end{cases}
\tag{3}
$$
where $X_b$ denotes the current optimal position occupied by the discoverer, and $X_w$ denotes the current worst position in the population. $B$ is a $1 \times d$ matrix in which each element is randomly assigned 1 or −1, and $B^{+} = B^{T}(BB^{T})^{-1}$.
The location regeneration equation for the guarder is determined by Equation (4):
$$
X_{i,j}^{t+1} =
\begin{cases}
X_{best}^{t} + \beta \cdot \left| X_{i,j}^{t} - X_{best}^{t} \right|, & f_i > f_g \\[4pt]
X_{i,j}^{t} + K \cdot \dfrac{\left| X_{i,j}^{t} - X_{worst}^{t} \right|}{(f_i - f_w) + \varepsilon}, & f_i = f_g
\end{cases}
\tag{4}
$$
where $X_{best}$ denotes the current global best position. $\beta$ and $K \in [-1, 1]$ are two random numbers; $f_i$ represents the fitness value of the current sparrow; $f_w$ and $f_g$ stand for the current worst and best fitness values in the population, respectively; and $\varepsilon$ is a small constant close to zero that avoids division by zero.
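For illustration, the three position updates can be sketched in Python as follows. This is a simplified sketch, not the authors' implementation: positions are plain lists, the random step $O$ is assumed to be a standard normal draw, and the pseudo-inverse term $B^{+} \cdot G$ of Equation (3) is reduced to an element-wise ±1 step.

```python
import math
import random

# Minimal sketch of the standard SSA updates (Equations (2)-(4)); simplifications noted above.
def update_discoverer(X, i, T, ST):
    R2, alpha = random.random(), random.uniform(1e-6, 1.0)
    if R2 < ST:
        return [x * math.exp(-(i + 1) / (alpha * T)) for x in X]
    O = random.gauss(0.0, 1.0)                       # assumed random step O added to every dimension (O * G)
    return [x + O for x in X]

def update_joiner(X, X_best, X_worst, i, pop_size):
    if i > pop_size / 2:                             # poorly ranked joiners fly elsewhere to forage
        O = random.gauss(0.0, 1.0)
        return [O * math.exp((xw - x) / ((i + 1) ** 2)) for x, xw in zip(X, X_worst)]
    B = [random.choice([-1.0, 1.0]) for _ in X]      # simplified stand-in for |X - Xb| * B+ * G
    return [xb + abs(x - xb) * b for x, xb, b in zip(X, X_best, B)]

def update_guarder(X, X_best, X_worst, f_i, f_g, f_w, eps=1e-12):
    if f_i > f_g:
        beta = random.gauss(0.0, 1.0)
        return [xb + beta * abs(x - xb) for x, xb in zip(X, X_best)]
    K = random.uniform(-1.0, 1.0)
    return [x + K * abs(x - xw) / (f_i - f_w + eps) for x, xw in zip(X, X_worst)]
```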

4. Our Proposed Improved Sparrow Search Algorithm (ISSA)

4.1. Scheduling Scheme Denotation

The EFJSP includes two sub-problems, i.e., machine selection and operation sequencing. Consequently, a two-segment code is employed to represent a Scheduling Scheme (SS). The first segment selects a suitable machine for every operation, and the second one indicates the machining sequence of the jobs.
Taking a 3 × 2 EFJSP (three jobs with six operations in total and two machines) as a numerical example, the SS denotation is illustrated in Figure 2. In the first segment, an element with value $w$ indicates that the corresponding operation selects the $w$th machine in its candidate machine set; the elements are stored in the fixed order of the operations. In the second segment, each element with value $v$ represents one operation of job $v$, so that the $k$th occurrence of $v$ from left to right denotes the $k$th operation of job $v$.

4.2. Individual Position Vector

In the developed ISSA, the position of a sparrow is represented by a real-valued vector consisting of two parts, i.e., $Y = [y(1), y(2), \ldots, y(u), \ldots, y(2u)]$ with $y(i) \in [y_{\min}(i), y_{\max}(i)]$, $i = 1, \ldots, 2u$, where $i$ denotes the index within the individual position vector (IPV) and $u$ denotes the total number of operations. The first part $Y_1 = [y(1), \ldots, y(u)]$ represents the machine selection, and the second part $Y_2 = [y(u+1), \ldots, y(2u)]$ denotes the operation permutation. For the 3 × 2 EFJSP in Figure 2, the IPV is illustrated in Figure 3. In addition, the ranges $[y_{\min}(i), y_{\max}(i)]$ are set as $[-n, n]$, where $n$ represents the number of jobs.

4.3. Transition Mechanism

The traditional SSA was designed for continuous optimization problems, whereas the EFJSP is a discrete combinatorial optimization problem. Thus, it is vital to create a transition mechanism between the IPV and the SS. In this paper, the transition method of Yuan et al. [43] is employed to achieve the transition between the IPV and the SS for the EFJSP.

4.3.1. Transition from SS to IPV

(1) Machine selection segment: the transition procedure is shown in Equation (5):
$$
y(i) = \frac{2n}{r(i)-1}\,\bigl(g(i)-1\bigr) - n, \qquad r(i) \neq 1
\tag{5}
$$
where $y(i)$ represents the $i$th element of the IPV, $r(i)$ denotes the number of candidate machines for the operation corresponding to the $i$th element, and $g(i) \in [1, r(i)]$ indicates the serial number of the selected machine; if $r(i) = 1$, then $y(i)$ can take any value in the interval $[-n, n]$.
(2) Operation sequence segment: firstly, $u$ real numbers are randomly generated in the interval $[-n, n]$, one for each operation of the SS. Based on the ranked-order-value (ROV) rule, a unique ROV value is allocated to each of these real numbers in ascending order, so that each ROV value maps to one operation. Then, the ROV values are re-ordered according to the operation sequence, and the real numbers are re-ordered according to the re-ordered ROV values; the re-ordered real numbers are the element values of the IPV. The transition procedure is illustrated in Figure 4.
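A minimal Python sketch of this SS-to-IPV transition is given below. The argument names and the canonical ordering of operations are assumptions made for this sketch only.

```python
import random

# Sketch of the SS -> IPV transition of Section 4.3.1; argument names are illustrative.
def machine_segment_to_ipv(g, r, n):
    """g[i]: 1-based index of the chosen machine within the candidate set of operation i;
       r[i]: number of candidate machines of operation i; n: number of jobs (Equation (5))."""
    y = []
    for gi, ri in zip(g, r):
        if ri == 1:
            y.append(random.uniform(-n, n))   # a single candidate: any value decodes back to it
        else:
            y.append(2 * n * (gi - 1) / (ri - 1) - n)
    return y

def operation_segment_to_ipv(positions, n):
    """positions[i]: scheduling position (1..u) of the i-th operation in a fixed canonical
       operation list; random-key style encoding following the ROV rule."""
    u = len(positions)
    reals = sorted(random.uniform(-n, n) for _ in range(u))
    # The operation scheduled k-th receives the k-th smallest random real number.
    return [reals[p - 1] for p in positions]
```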

4.3.2. Transition from IPV to SS

(1) Machine selection segment: based on the reverse derivation of Equation (5), the transformation can be implemented by Equation (6):
$$
g(i) = \operatorname{round}\!\left[\frac{\bigl(y(i)+n\bigr)\bigl(r(i)-1\bigr)}{2n} + 1\right]
\tag{6}
$$
(2) Operation sequence segment: an ROV value is assigned to each element of the IPV in ascending order of its value. Then, the ROV value is used as the fixed position number. Finally, the operation permutation is achieved by matching the ROV values to the operations. The conversion process is illustrated in Figure 5.
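The reverse transition can be sketched as follows, mirroring the encoding sketch above; again the names are illustrative assumptions rather than the authors' code.

```python
# Sketch of the IPV -> SS transition of Section 4.3.2; names mirror the sketch above.
def ipv_to_machine_segment(y, r, n):
    """Inverse of Equation (5), i.e., Equation (6): recover the 1-based machine index."""
    return [1 if ri == 1 else int(round((yi + n) * (ri - 1) / (2 * n) + 1))
            for yi, ri in zip(y, r)]

def ipv_to_operation_positions(y2):
    """ROV decoding: the element holding the k-th smallest value is scheduled k-th."""
    order = sorted(range(len(y2)), key=lambda i: y2[i])
    positions = [0] * len(y2)
    for rank, idx in enumerate(order, start=1):
        positions[idx] = rank
    return positions
```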

4.4. Population Initialization

The quality of the initial swarm has an enormous influence on the optimization result of a metaheuristic algorithm. Therefore, certain strategies can be used to improve the initial swarm and thereby enhance the algorithm's capability. Since this paper uses a two-segment coding mechanism, the swarm initialization is executed separately for the machine selection segment and the operation sequence segment. Firstly, the machine selection segments of the initial swarm are produced by adopting an HS method [44]. Secondly, several operation sequence segments are randomly produced for each machine selection segment and combined with it, in turn, to produce several candidate scheduling schemes. Finally, the initial swarm is obtained by selecting the best scheduling scheme each time.

4.5. Dynamic Weights and Quantum Rotation Gate

Applying a quantum rotation to a particle can increase the diversity of its position. The discoverers are the dominant group in the population, so executing the quantum rotation operation on their encoding can strengthen the diversity of the swarm. To further improve the effectiveness of the ISSA, a dynamic weight is also introduced simultaneously. The improved location updating formula for the discoverer is described in Equation (7):
$$
X_{i}^{t+1} =
\begin{cases}
X_{i}^{t} + \omega \cdot \left| X_{p} - X_{i}^{t} \right|, & R_2 < ST \\[4pt]
X_{i}^{t}\cos\theta - \sqrt{1 - \left(X_{i}^{t}\right)^{2}}\,\sin\theta, & R_2 \geq ST
\end{cases}
\tag{7}
$$
where $X_i^t$ denotes the current position of the $i$th sparrow, $X_p$ indicates the current optimal position found by the discoverers, and $X_i^{t+1}$ is the regenerated position of the individual. $ST \in (0.5, 1]$ represents the safety value; $R_2 \in (0, 1]$ denotes the warning value; $\theta$ is the rotation angle with $\theta = \mathrm{rand} \times 2\pi$; and the dynamic weight $\omega$ is calculated by Equation (8):
$$
\omega = \omega_{\max} - \left(\omega_{\max} - \omega_{\min}\right)\frac{t}{T} + (0.5 - q)\left(1 - \frac{t}{T}\right)^{2}
\tag{8}
$$
where $q$ is a random value in $(0, 1]$; $\omega_{\max}$ indicates the maximum value of $\omega$ (set to 0.9 in this paper), and $\omega_{\min}$ represents the minimum value of $\omega$ (set to 0.4 in this paper).
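A short Python sketch of this improved discoverer update follows. Two assumptions are made here and are not stated in the paper: the quantum-rotation branch uses the standard rotation-gate sign convention, and the position components are taken to be normalized into $[-1, 1]$ before rotating.

```python
import math
import random

# Sketch of the improved discoverer update (Equations (7) and (8)); assumptions noted above.
def dynamic_weight(t, T, w_max=0.9, w_min=0.4):
    q = random.random()
    return w_max - (w_max - w_min) * t / T + (0.5 - q) * (1 - t / T) ** 2

def update_discoverer_qrg(X, X_p, t, T, ST=0.8):
    if random.random() < ST:
        w = dynamic_weight(t, T)
        return [x + w * abs(xp - x) for x, xp in zip(X, X_p)]
    theta = random.random() * 2 * math.pi
    # Quantum-rotation branch; max(0, ...) guards against components slightly outside [-1, 1].
    return [x * math.cos(theta) - math.sqrt(max(0.0, 1 - x * x)) * math.sin(theta) for x in X]
```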

4.6. Sine–Cosine Algorithm

The movement of joiners towards the optimal position occupied by the discoverers often leads to a gathering of individuals, which causes the SSA to fall into a local optimum. The sine–cosine algorithm (SCA) has oscillatory properties; replacing the original search mechanism of the joiner with the SCA can reduce the search blind spots in the solution space. The enhanced position updating method is described in Equations (9) and (10):
$$
X_{i}^{t+1} =
\begin{cases}
X_{i}^{t} + z_1 \sin(z_2) \left| z_3 X_{p} - X_{i}^{t} \right|, & z_4 < 0.5 \\[4pt]
X_{i}^{t} + z_1 \cos(z_2) \left| z_3 X_{p} - X_{i}^{t} \right|, & z_4 \geq 0.5
\end{cases}
\tag{9}
$$
$$
z_1 = a \times \left(1 - \frac{t}{T}\right)
\tag{10}
$$
where $X_p$ indicates the current optimal position obtained by the discoverers; $a$ is a constant with $a = 2$; and $z_2$, $z_3$, and $z_4$ are uniformly distributed random numbers with $z_2 \in (0, 2\pi)$, $z_3 \in (0, 2)$, and $z_4 \in (0, 1)$.
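The SCA-based joiner update maps directly to a few lines of Python; the sketch below is illustrative only, with list-based positions as in the earlier sketches.

```python
import math
import random

# Sketch of the SCA-based joiner update (Equations (9) and (10)); names are illustrative.
def update_joiner_sca(X, X_p, t, T, a=2.0):
    z1 = a * (1 - t / T)
    z2 = random.uniform(0.0, 2 * math.pi)
    z3 = random.uniform(0.0, 2.0)
    z4 = random.random()
    osc = math.sin(z2) if z4 < 0.5 else math.cos(z2)   # oscillating sine/cosine step towards X_p
    return [x + z1 * osc * abs(z3 * xp - x) for x, xp in zip(X, X_p)]
```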

4.7. Adaptive Adjustment Strategy

As in other metaheuristic algorithms, coordinating the capacities of exploitation and exploration is significant for the performance of the SSA. Therefore, an adaptive adjustment strategy for the discoverer–joiner ratio is introduced [45]. In the early stage of the iterations, the discoverers occupy the majority of the population. As the iterations increase, the number of discoverers decreases adaptively while the number of joiners increases adaptively, which progressively shifts the algorithm from exploration to exploitation so that the quality of the obtained scheme can be strengthened. The specific formulas are given in Equations (11)–(13):
$$
r = b \cdot \tan\!\left(-\frac{\pi t}{4T} + \frac{\pi}{4}\right) - \sigma \cdot \mathrm{rand}(0, 1)
\tag{11}
$$
$$
pNum = r \cdot N
\tag{12}
$$
$$
sNum = (1 - r) \cdot N
\tag{13}
$$
where $pNum$ denotes the number of discoverers and $sNum$ denotes the number of joiners; $N$ is the population size; $b$ is a scale factor that controls the numbers of discoverers and joiners; and $\sigma$ is the perturbation deviation factor used to perturb $r$, which follows a nonlinear decrease.
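The following Python sketch illustrates the adaptive split; clamping $r$ to $[0, 1]$ and rounding the counts to integers are added assumptions for robustness, not statements from the paper.

```python
import math
import random

# Sketch of the adaptive discoverer/joiner split (Equations (11)-(13)); assumptions noted above.
def split_population(N, t, T, b=0.2, sigma=0.1):
    r = b * math.tan(-math.pi * t / (4 * T) + math.pi / 4) - sigma * random.random()
    r = min(max(r, 0.0), 1.0)          # keep the ratio within [0, 1]
    p_num = max(1, round(r * N))       # number of discoverers, shrinking over the iterations
    s_num = N - p_num                  # number of joiners
    return p_num, s_num
```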

4.8. Variable Neighborhood Search

In the exploration stage, the sparrows regenerate their positions guided by $X_p$, which to a certain extent determines the precision of the solution obtained by the SSA. Therefore, the VNS is employed to further improve the current best scheduling scheme $W_p$, which corresponds to $X_p$. Meanwhile, an "Iteration Counter (IC)" is maintained for $W_p$ and initialized to 0. If the fitness value of $W_p$ remains unchanged after an iteration, the IC is increased by 1; otherwise, it remains unchanged. When the IC reaches the stability threshold $\delta$ (set to 15 in this study), the individual is considered to have reached a steady state, and the VNS is executed on it. The three neighborhood structures used in this paper are designed as follows:
Neighborhood structure $N_1$: randomly select two positions in the operation sequence segment that belong to different jobs, and arrange the elements between the two chosen positions in reverse order.
Neighborhood structure $N_2$: randomly select two positions in the operation sequence segment that belong to different jobs, and insert the element at the first position into the position after the second one.
Neighborhood structure $N_3$: choose one position in the machine assignment segment whose operation has more than one candidate machine, and then replace the chosen machine with the one having the shortest processing time in the candidate machine set.
The VNS performed on $W_p$ in this paper adopts the threshold acceptance method, which proceeds as follows:
Step 1. Set the current best scheduling scheme $W_p$ as the initial scheme $W$, and set the acceptance threshold $\delta' > 0$, $\gamma = 1$, $\rho = 1$, and the termination condition $\gamma_{\max}$.
Step 2. If $\rho = 1$, generate a candidate scheme $W'$ from $W$ using $N_1$ and $N_3$; if $\rho = 0$, generate $W'$ from $W$ using $N_2$ and $N_3$.
Step 3. If $C_{\max}(W') - C_{\max}(W) \leq \delta'$, then set $W'$ as $W$; if not, set $\rho - 1$ as $\rho$.
Step 4. Set $\gamma + 1$ as $\gamma$; if $\gamma > \gamma_{\max}$, take $W$ as the improved scheme and go to Step 5; if not, return to Step 2.
Step 5. End.
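As a concrete illustration, the three neighborhood moves can be sketched in Python as follows; the encoding layout (a job-index sequence plus a machine-assignment mapping) and the helper names are assumptions made for this sketch and do not reproduce the authors' MATLAB code. The threshold-acceptance loop of Steps 1–5 would call these moves to generate the candidate scheme $W'$.

```python
import random

# Sketch of the three neighborhood moves of Section 4.8 on the two-segment encoding.
def n1_reverse(seq):
    """seq: operation-sequence segment (job indices). Reverse the span between two
       randomly chosen positions that belong to different jobs."""
    i, j = sorted(random.sample(range(len(seq)), 2))
    if seq[i] != seq[j]:
        seq[i:j + 1] = reversed(seq[i:j + 1])
    return seq

def n2_insert(seq):
    """Move the element at the first chosen position to just after the second one."""
    i, j = sorted(random.sample(range(len(seq)), 2))
    if seq[i] != seq[j]:
        seq.insert(j, seq.pop(i))
    return seq

def n3_fastest_machine(assign, candidates, proc_time):
    """assign[op]: chosen machine of operation op; candidates[op]: its candidate machines;
       proc_time[(op, k)]: processing time of op on machine k."""
    multi = [op for op, cands in candidates.items() if len(cands) > 1]
    if multi:
        op = random.choice(multi)
        assign[op] = min(candidates[op], key=lambda k: proc_time[(op, k)])
    return assign
```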

4.9. Parameters of the ISSA

The relevant parameters of the ISSA include the population size $N$, the safety threshold value $ST$, the scale factor $b$, the perturbation deviation factor $\sigma$, the dynamic weight $\omega$, and the maximum number of iterations $t_{\max}$. The parametrization of the ISSA is performed as follows:
$ST$ is used to select between the two update formulas for the discoverer in Equation (7); its value is set to 0.8 in this study.
$\sigma$ is used to perturb the nonlinearly decreasing value $r$; its value is set to 0.1 in this study.
$b$ is used to adjust the numbers of discoverers and joiners; its value is set to 0.2 in this study.
$\omega$ is used to update the individuals in the discoverer subpopulation to improve the effectiveness of the ISSA.
$t_{\max}$ is used to control the number of iterations of the algorithm.
$N$ is used to control the size of the population.

4.10. Procedure of the ISSA

In the following, the main procedure of the developed ISSA is presented and illustrated by the flow chart in Figure 6.
Step 1. Set the parameters and produce the initial swarm by using the HS.
Step 2. Determine the objective function values of all scheduling schemes, and then find the best and worst schemes.
Step 3. Determine whether the optimal scheduling scheme is in a steady state; if so, execute the VNS on it; otherwise, go to Step 5.
Step 4. Perform the conversion from the scheduling schemes to IPVs, and retain $X_p$ corresponding to $W_p$ and the worst individual position vector $X_{worst}$ corresponding to the worst scheduling scheme $W_{worst}$.
Step 5. Regenerate the discoverers' positions based on Equation (7), the joiners' positions based on Equation (9), and the guarders' positions based on Equation (4).
Step 6. Adjust the numbers of discoverers and joiners using the adaptive adjustment strategy.
Step 7. Use the transition mechanism to convert the updated IPVs in the population into scheduling schemes; then, find the optimal scheduling scheme.
Step 8. Judge whether the stopping condition is met; if so, output $W_p$; otherwise, return to Step 2.

5. Computational Experiments

5.1. Experimental Settings

The ISSA was implemented in MATLAB 2017b and tested on a workstation with a 2.40 GHz Intel Core i5-6200 CPU, 8 GB RAM, and the Windows 10 operating system. In the experiments, 19 FJSP instances (10 Brandimarte instances, 4 Kacem instances, and 5 random instances) were adopted to test the validity and feasibility of the ISSA. For each instance, 10 independent runs were performed. The processing cost per unit time on all machines was 50 RMB/min. The transfer times were generated from a uniform distribution on (5, 15) min, and the transfer PCCC was generated from a uniform distribution on (5, 10) kW/min. In both the benchmark and random instances, the PCCCs of all machines in processing and idle modes were randomly generated within (0, 1). For the random instances, the processing times of all operations were generated from a discrete uniform distribution on [0, 100], and the machining order of all operations was also randomly generated. In the ISSA, the swarm size was 200, the maximum number of iterations was 1200, the safety value was 0.8, the scale factor was 0.2, the perturbation deviation factor was 0.1, and the initial number of guarders was 20.

5.2. Effectiveness of Enhancement Strategies

In this study, we used the HS to generate the initial swarm. Then, the QRG and SCA were employed to enhance the capacity of the ISSA to balance exploration and exploitation. Thirdly, the AAS and VNS were adopted to strengthen the capacity of the SSA to move away from local optima. In this section, the validity of these strategies is examined. In Table 3, the instance name is placed in the first column, and the experimental data are placed in the subsequent columns. "SSA" denotes the traditional sparrow search algorithm; "SSA-L" denotes the SSA that adopts the HS method to produce the initial population; "SSA-N" denotes the algorithm that adds the QRG, SCA, and AAS to the SSA-L; and "ISSA" is the algorithm that adds the VNS to the SSA-N. Additionally, "Best" denotes the optimal result over 10 runs, "Avg" is the mean value of the results over the 10 runs, and boldface indicates the best result achieved among the four algorithm variants. To ensure a fair comparison, the above algorithm variants were set to the same parameters.
As shown in Table 3, the following conclusions can be drawn: (1) In the comparison of the "Best" values, the ISSA obtained nine optimal values, which is clearly better than the other three variants; the second-best algorithm, namely the SSA-N, obtained eight of the best results. (2) In the comparison of the "Avg" values, the ISSA obtained 15 of the best results, which is better than the other three algorithms; the second-best algorithm, namely the SSA-N, obtained only three optimal values. Moreover, the average values obtained by the four algorithms are summarized in Table 3 and comparatively analyzed in Figure 7.

5.3. Comparison with Existing Algorithms

To verify the capability of the ISSA, we conducted a comparative study with three other metaheuristic algorithms, namely, the genetic algorithm (GA), the capuchin search algorithm (CapSA), and the whale optimization algorithm with a randomly generated initial population (RWOA). In addition, "SD" denotes the standard deviation of the data obtained in 10 replications, and "ARPD" denotes the average relative percentage deviation, i.e.:
$$
ARPD = \sum_{l=1}^{L} \frac{100 \times \left( A_{o}^{l} - Min \right)}{Min \times L}
$$
where $L$ is the number of replications, $Min$ is the minimum objective value over all performed replications, and $A_{o}^{l}$ is the objective value obtained by the algorithm in the $l$th run. In the GA, the swarm size was 200, the maximum number of iterations was 1200, the crossover probability was 0.8, and the mutation probability was 0.6. In the CapSA, the swarm size was 200 and the maximum number of iterations was 1200. In the RWOA, the swarm size was 200, the maximum number of iterations was 1200, and the spiral constant was 1. Each instance was run for 10 independent replications.
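The ARPD measure can be computed with a few lines of Python, as sketched below; the argument names and the example values are illustrative only.

```python
# Sketch of the ARPD computation defined above; 'runs' holds the objective values of the
# L replications of one algorithm on one instance, and 'best_known' (if given) is the
# minimum objective value over all compared replications on that instance.
def arpd(runs, best_known=None):
    L = len(runs)
    ref = best_known if best_known is not None else min(runs)
    return sum(100.0 * (v - ref) / (ref * L) for v in runs)

# Example with a hypothetical set of 3 runs and a best-known value of 8000.
print(arpd([8123.73, 8187.99, 8150.00], best_known=8000.0))
```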
The results are summarized and analyzed in Table 4 as follows: (1) In the comparison of the "Best" values, the ISSA acquired 13 optimal values and outperformed the GA in 14 out of 19 instances, the CapSA in 19 out of 19 instances, and the RWOA in 17 out of 19 instances. (2) In the comparison of the "Avg" values, the proposed ISSA acquired 13 optimal values and outperformed the GA in 13 out of 19 instances, and the RWOA and the CapSA in 19 out of 19 instances. (3) In the comparison of the "SD" values, the proposed ISSA acquired 10 optimal values and outperformed the GA in 12 out of 19 instances, the CapSA in 15 out of 19 instances, and the RWOA in 10 out of 19 instances. (4) In the comparison of the "ARPD" values, the proposed ISSA acquired six optimal values and outperformed the GA in 18 out of 19 instances, the CapSA in 6 out of 19 instances, and the RWOA in 18 out of 19 instances. In addition, the average values of the four algorithms are shown in Table 4 and then compared in Figure 8. Figure 9 displays the convergence curves of the four algorithms for the RAND4 instance, which has 10 machines and 10 jobs. The Gantt chart for the RAND4 instance obtained by the ISSA is illustrated in Figure 10. The analysis indicates that the ISSA finds the best schedule among the four algorithms.
To examine the objective function values of the four compared algorithms in Table 4, an analysis of variance (ANOVA) test was implemented, as shown in Table 5. It is observed from Table 5 that the p-value is equal to zero, implying a significant difference among the four algorithms in Table 4.

6. Conclusions

In this study, an improved sparrow search algorithm (ISSA) was developed for the EFJSP, which aims to optimize the total power consumption and processing cost. In the design of the ISSA, an HS method was employed to produce a high-quality initial swarm. Then, the SCA and QRG were adopted to strengthen the capacity of the ISSA to coordinate exploration and exploitation. Moreover, the AAS and VNS were applied to reinforce the capacity of the ISSA to avoid being trapped in local optima.
Numerous experiments were executed to test the capability of the ISSA. Computational results demonstrated that the improvement techniques were effective for enhancing solution accuracy. Furthermore, the proposed EFJSP methodology contributes to improving the theoretical foundation in the field of energy-saving scheduling and to helping manufacturing enterprises reduce energy consumption and processing costs in practice.
Regarding future research directions, energy-saving scheduling will be further investigated by considering more practical energy consumption modes, e.g., different energy consumption costs in sequence-dependent equipment relocation and setup applied to the mining industry [46]. For the ISSA, some discretization enhancement strategies will be adapted to solve the online EFJSP [47]. Finally, it would be a promising research topic to develop the ISSA for solving industry-oriented scheduling problems with the consideration of inter-machine storage, such as blocking, no-wait, and limited-buffer constraints [48,49,50,51,52,53,54].

Author Contributions

Writing—original draft, Software and Investigation: F.L. and R.L.; Methodology: F.L., R.L. and S.Q.L.; Writing—review and editing: S.Q.L. and M.M.; Software and Validation: S.L. and B.T.; Supervision: S.Q.L. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grants 71871064 and 11072192; and the Project of Scientific Research Foundation of Shaanxi University of Science and Technology under Grant 2021BJ-34.

Data Availability Statement

The data presented in this study are available on request from the first or corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kim, D.H.; Kim, T.J.Y.; Wang, X.; Kim, M.; Quan, Y.J.; Oh, J.W.; Min, S.H.; Kim, H.; Bhandari, B.; Yang, I.; et al. Smart machining process using machine learning: A review and perspective on machining industry. Int. J. Precis. Eng. Manuf.-Green Technol. 2018, 5, 555–568. [Google Scholar] [CrossRef]
  2. Angelopoulos, A.; Michailidis, E.T.; Nomikos, N.; Trakadas, P.; Hatziefremidis, A.; Voliotis, S.; Zahariadis, T. Tackling faults in the industry 4.0 era—A survey of machine-learning solutions and key aspects. Sensors 2019, 20, 109. [Google Scholar] [CrossRef] [PubMed]
  3. Fang, K.; Uhan, N.; Zhao, F.; Sutherland, J.W. A new shop scheduling approach in support of sustainable manufacturing. In Glocalized Solutions for Sustainability in Manufacturing; Springer: Berlin/Heidelberg, Germany, 2011; pp. 305–310. [Google Scholar]
  4. Duflou, J.R.; Sutherland, J.W.; Dornfeld, D.; Herrmann, C.; Jeswiet, J.; Kara, S.; Hauschild, M.; Kellens, K. Towards energy and resource efficient manufacturing: A processes and systems approach. CIRP Ann. Manuf. Technol. 2012, 61, 587–609. [Google Scholar] [CrossRef]
  5. Carli, R.; Dotoli, M.; Digiesi, S.; Facchini, F.; Mossa, G. Sustainable scheduling of material handling activities in labor-intensive warehouses: A decision and control model. Sustainability 2020, 12, 3111. [Google Scholar] [CrossRef]
  6. Zhang, L.; Deng, Q.; Gong, G.; Han, W. A new unrelated parallel machine scheduling problem with tool changes to minimise the total energy consumption. Int. J. Prod. Res. 2020, 58, 6826–6845. [Google Scholar] [CrossRef]
  7. Ahmadi, E.; Zandieh, M.; Farrokh, M.; Emami, S.M. A multi objective optimization approach for flexible job shop scheduling problem under random machine breakdown by evolutionary algorithms. Comput. Oper. Res. 2016, 73, 56–66. [Google Scholar] [CrossRef]
  8. Salido, M.A.; Escamilla, J.; Giret, A.; Barber, F. A genetic algorithm for energy-efficiency in job-shop scheduling. Int. J. Adv. Manuf. Technol. 2016, 85, 1303–1314. [Google Scholar] [CrossRef]
  9. Li, Y.; Huang, W.; Wu, R.; Guo, K. An improved artificial bee colony algorithm for solving multi-objective low-carbon flexible job shop scheduling problem. Appl. Soft Comput. 2020, 95, 106544. [Google Scholar] [CrossRef]
  10. Zhang, X.; Ji, Z.; Wang, Y. An improved SFLA for flexible job shop scheduling problem considering energy consumption. Mod. Phys. Lett. B 2018, 32, 1840112. [Google Scholar] [CrossRef]
  11. Shahrabi, J.; Adibi, M.A.; Mahootchi, M. A reinforcement learning approach to parameter estimation in dynamic job shop scheduling. Comput. Ind. Eng. 2017, 110, 75–82. [Google Scholar] [CrossRef]
  12. Yang, W.; Su, J.; Yao, Y.; Yang, Z.; Yuan, Y. A novel hybrid whale optimization algorithm for flexible job-shop scheduling problem. Machines 2022, 10, 618. [Google Scholar] [CrossRef]
  13. Wu, X.; Sun, Y. A green scheduling algorithm for flexible job shop with energy-saving measures. J. Clean. Prod. 2018, 172, 3249–3264. [Google Scholar] [CrossRef]
  14. Zhu, J.; Shao, Z.H.; Chen, C. An improved whale optimization algorithm for job-shop scheduling based on quantum computing. Int. J. Simul. Model. 2019, 18, 521–530. [Google Scholar] [CrossRef]
  15. Anuar, N.I.; Fauadi, M.; Saptari, A. Performance evaluation of continuous and discrete particle swarm optimization in job-shop scheduling problems. IOP Conf. Ser. Mater. Sci. Eng. 2019, 530, 012044. [Google Scholar] [CrossRef]
  16. Ding, J.Y.; Song, S.; Wu, C. Carbon-efficient scheduling of flow shops by multi-objective optimization. Eur. J. Oper. Res. 2016, 248, 758–771. [Google Scholar] [CrossRef]
  17. Yang, J.; Xu, H. Hybrid memetic algorithm to solve multiobjective distributed fuzzy flexible job shop scheduling problem with transfer. Processes 2022, 10, 1517. [Google Scholar] [CrossRef]
  18. Dai, M.; Tang, D.; Giret, A.; Salido, M.A. Multi-objective optimization for energy-efficient flexible job shop scheduling problem with transportation constraints. Robot. Comput.-Integr. Manuf. 2019, 59, 143–157. [Google Scholar] [CrossRef]
  19. Tan, W.; Yuan, X.; Wang, J.; Zhang, X. A fatigue-conscious dual resource constrained flexible job shop scheduling problem by enhanced NSGA-II: An application from casting workshop. Comput. Ind. Eng. 2021, 160, 107557. [Google Scholar] [CrossRef]
  20. Lu, C.; Li, X.; Gao, L.; Liao, W.; Yi, J. An effective multi-objective discrete virus optimization algorithm for flexible job-shop scheduling problem with controllable processing times. Comput. Ind. Eng. 2017, 104, 156–174. [Google Scholar] [CrossRef]
  21. Carli, R.; Digiesi, S.; Dotoli, M.; Facchini, F. A control strategy for smart energy charging of warehouse material handling equipment. Procedia Manuf. 2020, 42, 503–510. [Google Scholar] [CrossRef]
  22. Yin, L.; Li, X.; Gao, L.; Lu, C.; Zhang, Z. A novel mathematical model and multi-objective method for the low-carbon flexible job shop scheduling problem. Sustain. Comput. Inform. Syst. 2017, 13, 15–30. [Google Scholar] [CrossRef]
  23. Choudhury, A. The role of machine learning algorithms in materials science: A state of art review on industry 4.0. Arch. Comput. Method E 2021, 28, 3361–3381. [Google Scholar] [CrossRef]
  24. Angelopoulos, A.; Giannopoulos, A.; Spantideas, S.; Kapsalis, N.; Trochoutsos, C.; Voliotis, S.; Trakadas, P. Allocating orders to printing machines for defect minimization: A comparative machine learning approach. In Proceedings of the International Conference on Artificial Intelligence Applications and Innovations, Crete, Greece, 17–20 June 2022; Springer: Cham, Switzerland, 2022; pp. 79–88. [Google Scholar]
  25. Wang, G.G.; Gandomi, A.H.; Alavi, A.H. An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl. Math. Model. 2014, 38, 2454–2462. [Google Scholar] [CrossRef]
  26. Jiang, T.; Zhang, C. Application of grey wolf optimization for solving combinatorial problems: Job shop and flexible job shop scheduling cases. IEEE Access 2018, 6, 26231–26240. [Google Scholar] [CrossRef]
  27. Han, Y.; Gong, D.; Jin, Y.; Pan, Q. Evolutionary multi-objective blocking lot-streaming flow shop scheduling with machine breakdowns. IEEE Trans. Cybern. 2017, 49, 184–197. [Google Scholar] [CrossRef]
  28. Li, J.; Duan, P.; Sang, H.; Wang, S.; Liu, Z.; Duan, P. An efficient optimization algorithm for resource-constrained steel-making scheduling problems. IEEE Access 2018, 6, 33883–33894. [Google Scholar] [CrossRef]
  29. Braik, M.; Sheta, A.; Al-Hiary, H. A novel meta-heuristic search algorithm for solving optimization problems: Capuchin search algorithm. Neural Comput. Appl. 2021, 33, 2515–2547. [Google Scholar] [CrossRef]
  30. Fan, J.; Li, Y.; Wang, T. An improved African vultures optimization algorithm based on tent chaotic mapping and time-varying mechanism. PLoS ONE 2021, 16, e0260725. [Google Scholar] [CrossRef]
  31. Odili, J.B.; Mohmad Kahar, M.N.; Noraziah, A. Parameters-tuning of PID controller for automatic voltage regulators using the African buffalo optimization. PLoS ONE 2017, 12, e0175901. [Google Scholar] [CrossRef]
  32. Ling, Y.; Zhou, Y.; Luo, Q. Lévy flight trajectory-based whale optimization algorithm for global optimization. IEEE Access 2017, 5, 6168–6186. [Google Scholar] [CrossRef]
  33. Lyu, S.; Li, Z.; Huang, Y.; Wang, J.; Hu, J. Improved self-adaptive bat algorithm with step-control and mutation mechanisms. J. Comput. Sci. 2019, 30, 65–78. [Google Scholar] [CrossRef]
  34. Verma, O.P.; Aggarwal, D.; Patodi, T. Opposition and dimensional based modified firefly algorithm. Expert Syst. Appl. 2016, 44, 168–176. [Google Scholar] [CrossRef]
  35. Goings, J.J.; Li, X. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra. J. Phys. Chem. C 2016, 144, 234102. [Google Scholar] [CrossRef]
  36. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  37. Zhang, C.; Ding, S. A stochastic configuration network based on chaotic sparrow search algorithm. Knowl.-Based Syst. 2021, 220, 106924. [Google Scholar] [CrossRef]
  38. Ouyang, C.; Qiu, Y.; Zhu, D. A multi-strategy improved sparrow search algorithm. J. Phys. Conf. Ser. 2021, 1848, 012042. [Google Scholar] [CrossRef]
  39. Zhang, J.; Xia, K.; He, Z.; Yin, Z.; Wang, S. Semi-supervised ensemble classifier with improved sparrow search algorithm and its application in pulmonary nodule detection. Math. Probl. Eng. 2021, 2021, 6622935. [Google Scholar] [CrossRef]
  40. Liu, G.; Shu, C.; Liang, Z.; Peng, B.; Cheng, L. A modified sparrow search algorithm with application in 3d route planning for UAV. Sensors 2021, 21, 1224. [Google Scholar] [CrossRef]
  41. Yuan, J.; Zhao, Z.; Liu, Y.; He, B.; Wang, L.; Xie, B.; Gao, Y. DMPPT control of photovoltaic microgrid based on improved sparrow search algorithm. IEEE Access 2021, 9, 16623–16629. [Google Scholar] [CrossRef]
  42. Zhang, Z.; Han, Y. Discrete sparrow search algorithm for symmetric traveling salesman problem. Appl. Soft Comput. 2022, 118, 108469. [Google Scholar] [CrossRef]
  43. Yuan, Y.; Xu, H.; Yang, J.D. A hybrid harmony search algorithm for the flexible job shop scheduling problem. Appl. Soft Comput. 2013, 13, 3259–3272. [Google Scholar] [CrossRef]
  44. Zhang, G.H.; Gao, L.; Li, P.; Zhang, C.Y. Improved genetic algorithm for the flexible job-shop scheduling problem. J. Mech. Eng. 2009, 45, 145–151. [Google Scholar] [CrossRef]
  45. Wu, H.; Zhang, A.; Han, Y.; Nan, J.; Li, K. Fast stochastic configuration network based on an improved sparrow search algorithm for fire flame recognition. Knowl.-Based Syst. 2022, 245, 108626. [Google Scholar] [CrossRef]
  46. Liu, S.Q.; Kozan, E.; Corry, P.; Masoud, M.; Luo, K. A real-world mine excavators timetabling methodology in open-pit mining. Opt. Eng. 2022, in press. [CrossRef]
  47. Luan, F.; Cai, Z.; Wu, S.; Liu, S.Q.; He, Y. Optimizing the low-carbon flexible job shop scheduling problem with discrete whale optimization algorithm. Mathematics 2019, 7, 688. [Google Scholar] [CrossRef]
  48. Liu, S.Q.; Kozan, E.; Masoud, M.; Zhang, Y.; Chan, F.T. Job shop scheduling with a combination of four buffering constraints. Int. J. Prod. Res. 2018, 56, 3274–3293. [Google Scholar] [CrossRef]
  49. Liu, S.Q.; Kozan, E. Parallel-identical-machine job-shop scheduling with different stage-dependent buffering requirements. Comput. Oper. Res. 2016, 74, 31–41. [Google Scholar] [CrossRef]
  50. Liu, S.Q.; Kozan, E. Scheduling trains with priorities: A no-wait blocking parallel-machine job-shop scheduling model. Transp. Sci. 2011, 45, 175–198. [Google Scholar] [CrossRef]
  51. Liu, S.Q.; Kozan, E. Scheduling trains as a blocking parallel-machine job shop scheduling problem. Comput. Oper. Res. 2009, 36, 2840–2852. [Google Scholar] [CrossRef]
  52. Liu, S.Q.; Kozan, E. Scheduling a flow shop with combined buffer conditions. Int. J. Prod. Econ. 2009, 117, 371–380. [Google Scholar] [CrossRef]
  53. Masoud, M.; Kozan, E.; Kent, G.; Liu, S.Q. An integrated approach to optimise sugarcane rail operations. Comput. Ind. Eng. 2016, 98, 211–220. [Google Scholar] [CrossRef]
  54. Masoud, M.; Kozan, E.; Kent, G.; Liu, S.Q. A new constraint programming approach for optimising a coal rail system. Opt. Lett. 2017, 11, 725–738. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The Gantt chart of the instance mentioned in Table 1.
Figure 2. Denotation of Scheduling Scheme (SS).
Figure 3. Denotation of Individual Position Vector (IPV).
Figure 4. Transition from operation sequence to IPV.
Figure 5. Transition from IPV to operation sequence.
Figure 6. Flow chart of the Improved Sparrow Search Algorithm (ISSA).
Figure 7. Comparison of average values of four algorithms for 19 instances.
Figure 8. Comparison of average values of the four algorithms for 19 instances.
Figure 9. The convergence curve for the RAND4 instance obtained by the four algorithms.
Figure 10. The Gantt chart for the RAND4 instance gained by the ISSA.
Table 1. One numerical instance of the EFJSP.

Job Number | Operations | Machine Candidates | (Processing Time/min, Energy Consumption Cost/CNY)
J1 | O11 | M1, M4 | (7, 12), (8, 18)
   | O12 | M1, M2, M3 | (10, 15), (4, 21), (5, 26)
J2 | O13 | M1, M2, M3, M4 | (5, 14), (3, 16), (4, 17), (6, 18)
   | O21 | M1, M5 | (7, 26), (4, 13)
   | O22 | M2, M3, M4 | (8, 27), (6, 15), (5, 30)
J3 | O31 | M2 | (6, 17)
   | O32 | M1, M3, M4 | (4, 22), (8, 18), (6, 25)
   | O33 | M1, M2 | (6, 14), (3, 24), (5, 15)
Table 2. Transferred times between machines (unit: min).

Machine Tool Number | M1 | M2 | M3 | M4
M1 | 0 | 2 | 4 | 3
M2 | 2 | 0 | 5 | 6
M3 | 4 | 5 | 0 | 2
M4 | 3 | 6 | 2 | 0
Table 3. Effectiveness analysis of improvement strategy.

Instances | SSA Best | SSA Avg | SSA-L Best | SSA-L Avg | SSA-N Best | SSA-N Avg | ISSA Best | ISSA Avg
MK01 | 9306.94 | 9412.82 | 8423.30 | 8447.81 | 8356.37 | 8456.38 | 8446.75 | 8468.81
MK02 | 8347.20 | 8572.23 | 8104.27 | 8130.34 | 8083.54 | 8121.77 | 7985.84 | 8009.98
MK03 | 50,733.22 | 51,119.39 | 48,360.57 | 49,411.72 | 47,461.04 | 48,657.46 | 47,516.67 | 48,661.04
MK04 | 18,298.14 | 18,547.69 | 18,168.26 | 18,226.60 | 17,592.43 | 17,945.63 | 17,045.56 | 17,592.43
MK05 | 35,375.73 | 35,385.59 | 34,864.24 | 35,007.48 | 34,027.61 | 34,270.79 | 34,031.91 | 34,213.75
MK06 | 19,846.10 | 19,968.66 | 19,019.45 | 19,098.94 | 18,963.17 | 19,001.45 | 19,063.16 | 19,859.96
MK07 | 37,695.08 | 37,733.89 | 36,634.76 | 37,397.92 | 35,363.06 | 36,213.89 | 35,254.10 | 35,309.76
MK08 | 131,569.25 | 131,878.23 | 130,738.76 | 135,822.34 | 128,607.15 | 129,560.34 | 127,846.40 | 128,512.99
MK09 | 123,380.55 | 124,448.64 | 123,133.18 | 126,347.35 | 120,314.60 | 122,087.53 | 120,320.29 | 121,351.59
MK10 | 104,048.53 | 105,219.69 | 103,219.69 | 103,533.07 | 102,862.10 | 103,107.89 | 100,520.95 | 102,149.47
KACEM01 | 1719.85 | 1766.01 | 1711.07 | 1755.18 | 1708.87 | 1758.33 | 1698.76 | 1709.16
KACEM03 | 3344.44 | 3782.32 | 3324.51 | 3381.08 | 3233.20 | 3329.01 | 3245.58 | 3282.17
KACEM04 | 2225.47 | 2336.26 | 2032.63 | 2194.02 | 2137.79 | 2147.11 | 2102.87 | 2129.75
KACEM05 | 5548.58 | 5782.85 | 5140.64 | 5281.19 | 5131.95 | 5206.73 | 5026.08 | 5149.68
RAND1 | 37,144.15 | 37,782.68 | 35,409.28 | 35,849.80 | 33,843.79 | 34,192.36 | 33,224.26 | 34,110.69
RAND2 | 34,004.41 | 34,478.71 | 34,060.55 | 35,808.71 | 34,886.18 | 34,899.45 | 34,118.03 | 34,464.93
RAND3 | 24,803.20 | 25,542.66 | 24,191.84 | 24,265.08 | 24,100.04 | 24,221.53 | 24,102.66 | 24,180.50
RAND4 | 394,304.13 | 401,434.71 | 394,006.28 | 394,110.82 | 393,320.88 | 393,514.54 | 393,409.55 | 393,679.24
RAND5 | 12,980.70 | 14,326.92 | 12,703.42 | 12,863.33 | 12,661.15 | 12,713.42 | 12,640.98 | 12,669.47
Table 4. Comparative analysis for four different algorithms.

Instances | GA Best | GA Avg | GA SD | GA ARPD | CapSA Best | CapSA Avg | CapSA SD | CapSA ARPD
MK01 | 8123.73 | 8187.99 | 42.41 | 10.12 | 11,289.27 | 11,446.43 | 91.25 | 0.24
MK02 | 7766.51 | 7797.67 | 18.34 | 13.56 | 11,350.54 | 11,907.87 | 331.92 | 0.10
MK03 | 49,827.54 | 49,956.50 | 82.08 | 9.02 | 76,999.30 | 78,509.15 | 881.21 | 0.11
MK04 | 16,788.86 | 16,807.41 | 12.57 | 11.38 | 20,528.32 | 21,372.75 | 573.87 | 0.84
MK05 | 34,174.43 | 34,226.93 | 33.34 | 3.00 | 36,725.73 | 37,163.20 | 234.86 | 0.69
MK06 | 23,949.19 | 24,505.27 | 361.59 | 13.54 | 40,674.46 | 41,389.38 | 416.04 | 0.25
MK07 | 34,562.44 | 34,772.24 | 141.56 | 12.90 | 53,566.50 | 54,177.84 | 365.71 | 0.41
MK08 | 127,959.83 | 128,559.31 | 374.17 | 1.20 | 140,882.04 | 142,364.34 | 839.36 | 0.55
MK09 | 125,155.91 | 126,208.61 | 642.92 | 7.30 | 141,528.49 | 143,081.35 | 968.67 | 0.16
MK10 | 104,807.56 | 105,840.88 | 586.53 | 8.60 | 125,612.11 | 127,089.44 | 872.51 | 0.31
KACEM01 | 1739.73 | 1851.74 | 60.05 | 21.20 | 2529.61 | 2598.56 | 40.82 | 0.72
KACEM03 | 3462.67 | 3500.58 | 23.51 | 46.80 | 9972.03 | 11,349.15 | 796.08 | 0.33
KACEM04 | 2734.46 | 2817.20 | 53.17 | 25.67 | 7971.42 | 8362.48 | 258.13 | 0.64
KACEM05 | 6333.49 | 6486.55 | 92.35 | 18.90 | 16,196.32 | 17,292.33 | 657.44 | 0.23
RAND1 | 35,281.05 | 35,758.33 | 284.12 | 10.80 | 54,552.78 | 54,644.76 | 53.69 | 0.00
RAND2 | 34,232.23 | 34,899.90 | 390.38 | 4.50 | 37,328.65 | 37,473.17 | 86.49 | 0.68
RAND3 | 23,668.92 | 23,790.75 | 72.49 | 5.10 | 28,322.24 | 29,466.10 | 684.98 | 0.14
RAND4 | 393,456.93 | 393,480.46 | 19.42 | 4.98 | 393,778.43 | 393,823.41 | 471.72 | 0.97
RAND5 | 12,674.72 | 12,718.60 | 31.88 | 7.50 | 14,873.53 | 15,621.90 | 493.41 | 0.49

Instances | RWOA Best | RWOA Avg | RWOA SD | RWOA ARPD | ISSA Best | ISSA Avg | ISSA SD | ISSA ARPD
MK01 | 8626.74 | 8670.10 | 29.48 | 0.50 | 8446.75 | 8468.81 | 17.56 | 0.06
MK02 | 8729.10 | 8994.14 | 164.39 | 7.40 | 7985.84 | 8009.98 | 18.03 | 0.00
MK03 | 61,792.92 | 62,719.36 | 567.24 | 5.50 | 47,516.67 | 48,661.04 | 638.34 | 0.10
MK04 | 17,646.40 | 17,911.92 | 183.67 | 9.80 | 17,045.56 | 17,592.43 | 317.15 | 2.54
MK05 | 34,817.11 | 34,856.40 | 25.50 | 3.90 | 34,031.91 | 34,213.75 | 102.67 | 0.98
MK06 | 30,291.29 | 30,785.26 | 284.38 | 5.80 | 19,063.16 | 19,859.96 | 465.29 | 0.73
MK07 | 39,992.46 | 40,244.24 | 154.34 | 6.00 | 35,254.10 | 35,309.76 | 35.80 | 0.03
MK08 | 130,033.56 | 131,579.36 | 829.21 | 4.30 | 127,846.40 | 128,512.99 | 179.55 | 3.01
MK09 | 130,420.14 | 131,663.27 | 703.25 | 4.80 | 120,320.29 | 121,351.59 | 594.23 | 1.94
MK10 | 116,481.42 | 117,221.85 | 457.98 | 3.30 | 100,520.95 | 102,149.47 | 931.65 | 1.25
KACEM01 | 1758.92 | 1791.28 | 19.65 | 0.38 | 1698.76 | 1709.16 | 7.34 | 0.02
KACEM03 | 4610.95 | 4809.61 | 121.44 | 8.40 | 3245.58 | 3282.17 | 22.87 | 1.66
KACEM04 | 4015.78 | 4197.77 | 118.53 | 0.32 | 2102.87 | 2129.75 | 19.39 | 2.03
KACEM05 | 9748.18 | 9858.11 | 69.16 | 11.50 | 5026.08 | 5149.68 | 77.01 | 4.11
RAND1 | 41,445.65 | 41,693.81 | 174.37 | 6.00 | 33,224.26 | 34,110.69 | 531.62 | 1.13
RAND2 | 34,885.22 | 35,002.93 | 67.31 | 3.00 | 34,118.03 | 34,464.93 | 194.54 | 0.40
RAND3 | 23,241.39 | 24,705.01 | 792.32 | 8.20 | 24,102.66 | 24,180.50 | 46.63 | 2.48
RAND4 | 393,765.79 | 393,984.40 | 152.45 | 5.90 | 393,409.55 | 393,679.24 | 164.92 | 2.42
RAND5 | 12,626.28 | 12,802.46 | 99.37 | 8.80 | 12,640.98 | 12,669.47 | 16.54 | 2.29
Table 5. ANOVA for ARPD of the four compared algorithms.

Source | DF | Sum of Squares | Mean Square | F | p-Value
Factor | 3 | 1692.88 | 564.292 | 18.83 | 0
Error | 72 | 2157.65 | 29.967 | |
Total | 75 | 3850.52 | | |