Adaptive Population NSGA-III with Dual Control Strategy for Flexible Job Shop Scheduling Problem with the Consideration of Energy Consumption and Weight

The flexible job shop scheduling problem (FJSP) has long been a focus of research in the manufacturing field. However, most previous studies concentrated on efficiency and ignored energy consumption. Energy, especially non-renewable energy, is an essential factor affecting the sustainable development of a country. To this end, this paper designs an FJSP model with energy considerations that is more in line with real production. Besides the processing stage, the energy consumption of the transport, set up, unload, and idle stages is also included in our model. The weight property of jobs is considered as well: the heavier the job, the more energy it consumes during the transport, set up, and unload stages. Meanwhile, this paper proposes an adaptive population non-dominated sorting genetic algorithm III (APNSGA-III) that combines a dual control strategy with the non-dominated sorting genetic algorithm III (NSGA-III) to solve this FJSP model. Four FJSP instances are formulated to examine the performance of our algorithm, and the results achieved by APNSGA-III are compared with five classic multi-objective optimization algorithms. The results show that the proposed algorithm is efficient and powerful when dealing with the multi-objective FJSP model that includes energy consumption.


Introduction
Facing global competition driven by the integration of the world economy, a manufacturing enterprise that wants to stand out from the cruel survival of the fittest must accelerate its response to external changes, improve product quality and performance, reduce costs throughout the process, and provide customer-oriented personalized service on demand [1,2]. At the same time, affected by climate deterioration and the greenhouse effect, society and governments have put forward higher and higher requirements for the green production of enterprises [3,4]. The manufacturing industry, which accounts for half of total carbon emissions, has also attracted the attention of society and governments at all levels [5,6]. Therefore, maintaining a high-efficiency production rhythm while improving energy efficiency and reducing total carbon emissions has become the focus of research efforts by experts and scholars [7,8].
A variety of multi-objective optimization algorithms have been proposed in many studies [35]. For multi-objective FJSP, Gao et al. combined the particle swarm algorithm and the tabu search algorithm and proposed an effective hybrid algorithm to solve multi-objective FJSP with several conflicting and incommensurable objectives [36]. In another study, Gao et al. proposed a discrete harmony search algorithm for FJSP that minimizes multiple objectives: the maximum completion time and the mean of earliness and tardiness [37]. Moreover, Li et al. proposed an effective hybrid tabu search algorithm (HTSA) to solve the FJSP with three minimization objectives: the maximum completion time (makespan), the total workload of machines, and the workload of the critical machine [38]. Lu et al. formulated an FJSP mathematical model with the objectives of minimizing both the makespan and the total additional resource consumption [39].
Most of this research has achieved excellent results on its target problems. However, FJSP is an NP-hard problem whose solution space grows exponentially with the number of jobs and operations [40-42]. Meanwhile, FJSP is a discrete problem, so many algorithms that excel at continuous problems are not suitable for it [43,44], and many algorithms and improvement strategies struggle to achieve satisfactory results on it [45,46]. The genetic algorithm (GA) is a very suitable tool for solving combinatorial optimization problems such as FJSP. The non-dominated sorting genetic algorithm III (NSGA-III) is an extension of the GA for multi-objective optimization that has been applied in various practical fields, so this paper adopts NSGA-III as the base algorithm for solving FJSP. To further improve the solving ability of NSGA-III, prevent it from falling into local optima, and enhance population diversity, this paper proposes a dual control strategy (DCS) for multi-objective optimization and combines it with NSGA-III to avoid premature convergence and strengthen its optimization capability on combinatorial problems such as FJSP. The highlights of this paper are as follows:
(1) An FJSP model considering energy consumption is designed that covers the transport by AGV and the set up, processing, unload, and idle stages of the machines.
(2) The weight attribute of jobs is also taken into account: the heavier the job, the more energy it consumes.
(3) The DCS for multi-objective optimization is formulated to enhance the solving ability of NSGA-III.
The rest of this work is arranged as follows. Firstly, the details of FJSP are elaborated in Section 2. Then, the procedure of the adaptive population non-dominated sorting genetic algorithm III (APNSGA-III) is described in Section 3. Extensive experiments are conducted in Section 4. Finally, we summarize the paper in Section 5.

Problem Description
Classic FJSP can be described as follows: n jobs J = {J_1, J_2, ..., J_n} are processed on m machines M = {M_1, M_2, ..., M_m} in a production job shop. Each job J_i consists of n_i operations, and O_ij denotes the jth operation of the ith job. Each operation can be processed on any machine belonging to its machine set M_ij. The FJSP is subject to the following assumptions (constraints) [47].
(1) The completion time of the jth operation of the ith job is the sum of its start time and processing time:

C_ijk = S_ijk + T_ijk (1)

where C_ijk, S_ijk, and T_ijk denote the completion time, the starting time, and the processing time of the jth operation of the ith job on machine M_k, respectively.
(2) The start time of the jth operation of the ith job on machine M_k is determined by the completion time of the previous operation q on that machine and the completion time of the previous operation of the job:

S_ijk = max(C_i(j-1), C_qk) (2)
(3) The downtime of machine M_k is the completion time of the last operation on the machine:

CT_k = C_lk (3)

where l represents the last job processed on machine M_k and C_lk indicates its completion time on machine M_k.
(4) There are constraints on the machine required for the processing operation.
where n and m denote the number of jobs and machines, respectively. C_gk and T_gk indicate the completion time and processing time of job J_g on machine M_k, and M denotes a sufficiently large positive number. x_igk represents the processing priority of job J_i and job J_g on machine M_k: if job J_i is processed first, x_igk is 1; otherwise it is 0.

(5) Operations O_ij and O_gz cannot be processed at the same time on any machine in machine set M_ij ∩ M_gz.
where Z_ijgzk represents whether operation O_ij is processed before operation O_gz on machine M_k (1 if yes, 0 otherwise). M_ij ∩ M_gz denotes the set of machines that can process both operation O_ij and operation O_gz.

(6) There is a priority relationship between the operations of the same job.
where K_i represents the number of operations of job J_i.

(7) All jobs can be processed and all machines are available when a task starts.
(8) Only one machine can be selected for each operation:

Σ_{k=1}^{m} R_ijk = 1

where R_ijk denotes whether machine M_k processes operation O_ij (1 means processed, 0 means unprocessed). This paper considers two optimization objectives in the FJSP model: the maximum completion time and the total energy consumption.

Maximum Completion Time (Makespan)
Makespan = max_{1 ≤ k ≤ m} CT_k

where Makespan represents the maximum completion time over all machines; it is the main indicator of the completion time of all jobs. CT_k can equivalently be expressed as CT_k = max_{i,j} (C_ijk · R_ijk), where R_ijk denotes whether machine M_k processes operation O_ij (1 means processed, 0 means unprocessed).

Total Energy Consumption (TEC)
The total energy consumption considered in this paper is mainly composed of the following five parts.
(a) Transporting energy consumption (TTEC). TTEC is the energy consumed by the AGV transporting jobs between machines. It is related to the time the AGV takes to transport the job and to the weight of the job, and can be calculated as:

TTEC = Σ_i Σ_j P · T_ij · W_i

where T_ij is the time spent transporting job J_i for its jth operation, W_i is the weight of job J_i, and P represents the energy consumed to transport one kilogram of material per second.

(b) Total set up energy consumption (TSEC). TSEC is the energy consumed by the machine to grab the job and set it on the processing table. Different operations of different jobs have different set up energy consumption on different machines. Moreover, TSEC is positively related to the weight of the job: the heavier the job, the more energy it consumes. It can be calculated as:

TSEC = Σ_i Σ_j Σ_k S_ijk · W_i · R_ijk
where S_ijk represents the energy consumption per unit weight for machine M_k to set up O_ij (distinct from the start time in Equation (1)) and W_i denotes the weight of job J_i.

(c) Total processing energy consumption (TPEC). TPEC is the main energy consumption, used to perform operations such as punching, cutting, laser printing, spray painting, etc. Different operations of different jobs have different processing energy consumption on different machines. TPEC is given in Equation (15):

TPEC = Σ_i Σ_j Σ_k E_ijk · R_ijk (15)
where E_ijk means the energy consumed by machine M_k processing operation O_ij, and R_ijk indicates whether the jth operation of the ith job is processed on machine M_k (1 if yes, 0 otherwise).

(d) Total unloading energy consumption (TUEC). Corresponding to TSEC, TUEC is the energy consumed by the machine to unload the job. Different operations of different jobs have different unload energy consumption on different machines. Moreover, TUEC is positively related to the weight of the job: the heavier the job, the more energy it consumes. It can be calculated as:

TUEC = Σ_i Σ_j Σ_k U_ijk · W_i · R_ijk
where U_ijk represents the energy consumption per unit weight for machine M_k to unload operation O_ij and W_i denotes the weight of job J_i.

(e) Total idling energy consumption (TIEC). A machine in the job shop still consumes energy when it is not working; TIEC is the energy consumed while the machines are idle. Different machines consume different energy in the standby state. It is given by Equation (17):

TIEC = Σ_{i=1}^{m} N_i · C_i (17)

where N_i denotes the total idle time of machine M_i, C_i represents its energy consumption per minute when idle, and m implies the number of machines. Therefore, the second optimization objective, TEC, can be formulated as:

TEC = TTEC + TSEC + TPEC + TUEC + TIEC

Figure 1 shows the main structure of our FJSP model. Each job goes through statuses I-IV from raw material to finished product, namely origin, set up, processing, and unload. Each machine has functions A-D, namely set up, processing, unload, and idle. The automatic guided vehicle (AGV) has only one function, E, which is to transport jobs. Moving a job from status I to status II is driven by the AGV, and the energy consumed in this part is also recorded in the total energy consumption as the transportation consumption. Statuses II-IV of a job are implemented by one machine's functions A-C. Notably, the weight attribute of jobs is considered in the transport, set up, and unload stages: the heavier the job, the more energy is consumed in these three stages. The processing stage does not consider the weight attribute because, in our model, we assume the job rests on the processing table at this stage and no work against gravity is required. The machine is idle when it is not processing any job. Therefore, the total energy consumption of the workshop is the sum of the energy consumption of the AGV transporting jobs and the energy consumption of the four functions of the machines.
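As a concrete illustration, the five energy terms above can be combined in a few lines. This is a minimal sketch under assumed data structures; the field names and the per-operation flattening are ours, not the paper's code:

```python
def total_energy(ops, idle_times, idle_power, P=1.0):
    """Sum the five energy terms TTEC + TSEC + TPEC + TUEC + TIEC.

    ops: one dict per scheduled operation O_ij, holding its transport time,
         job weight, per-kg set-up/unload energies, and processing energy
         (illustrative field names).
    idle_times, idle_power: per-machine idle time N_i and idle power C_i.
    P: energy to transport one kilogram per second (coefficient in TTEC).
    """
    TTEC = sum(P * op["transport_time"] * op["weight"] for op in ops)
    TSEC = sum(op["setup_energy_per_kg"] * op["weight"] for op in ops)
    TPEC = sum(op["process_energy"] for op in ops)
    TUEC = sum(op["unload_energy_per_kg"] * op["weight"] for op in ops)
    TIEC = sum(N_i * C_i for N_i, C_i in zip(idle_times, idle_power))
    return TTEC + TSEC + TPEC + TUEC + TIEC
```

Only operations that are actually scheduled (R_ijk = 1) would be listed in `ops`, which is why the R_ijk indicator does not appear explicitly.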
To illustrate the energy consumption details mentioned above more vividly, we place the Gantt chart of a 3 × 3 scale FJSP in Figure 2. At the beginning of the task, the time for a job to reach its machine through transport is also recorded in the schedule. A job must go through the set up stage before being processed, and after an operation is processed, it must go through the unload stage. A machine can perform only one of the set up, processing, and unload stages at any given time. The transporting stage is independent of any machine operation; it is carried out by the AGV, a robot dedicated to transporting workpieces. As soon as a job finishes its unload operation, it is boxed and transported off the production line. Moreover, jobs can be queued for processing: as soon as the unloading of the previous job on a machine is completed, the set up stage of the next job can begin immediately. We can see that the idle time of the machines occupies a large proportion of the schedule, which once again highlights the importance of investigating the energy consumption of machines without a load.
An instance of the FJSP is given in Table 1. As can be observed, there are three jobs to be processed on three machines; jobs J_1, J_2, and J_3 each have three operations. The (set up time/set up energy consumption), (processing time/processing energy consumption), and (unload time/unload energy consumption) information is shown in Table 1. After completing the preceding operation of a job on one machine, the AGV conveys the job from the current machine to the next. The transportation time between different machines is shown in Table 2.

Table 2. The transportation times of the AGV between machines (unit: s).

Improved NSGA-III
NSGA-III is an extension of NSGA-II, and the two share a similar framework. The main difference lies in the selection mechanism: NSGA-II relies primarily on crowding distance for sorting, which loses effectiveness in high-dimensional objective spaces, whereas NSGA-III drastically adapts the crowding-based sorting and instead maintains population diversity by introducing widely distributed reference points. The individual selection mechanism of NSGA-III is as follows.

Non-Dominated Sorting
Suppose the individual number of the current population P_t is N, that is, |P_t| = N. Generate Q_t through the genetic operators, with |Q_t| = N. The parent and child populations are combined as R_t = P_t ∪ Q_t. Perform non-dominated sorting on R_t and divide it into multiple non-dominated levels (F_1, F_2, ...). Put the population members of non-dominated levels 1 to l into S_t in turn until |S_t| ≥ N. If |S_t| = N, then P_{t+1} = S_t. If |S_t| > N, then part of the next generation is P_{t+1} = ∪_{i=1}^{l-1} F_i, and the remaining individuals are selected from F_l. The detailed selection procedure is expanded in the following subsections.
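The front-splitting step above is the standard fast non-dominated sort. A minimal sketch for minimization objectives could look like this (function and variable names are illustrative):

```python
def non_dominated_sort(objs):
    """Split objective vectors (minimisation) into fronts F1, F2, ...
    Returns a list of fronts, each a list of indices into objs."""
    n = len(objs)
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    S = [[] for _ in range(n)]   # S[i]: solutions dominated by i
    cnt = [0] * n                # cnt[i]: how many solutions dominate i
    fronts, current = [], []
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                cnt[i] += 1
        if cnt[i] == 0:          # nothing dominates i: first front
            current.append(i)
    while current:
        fronts.append(current)
        nxt = []
        for i in current:        # peel off the current front
            for j in S[i]:
                cnt[j] -= 1
                if cnt[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts
```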

Determination of Reference Points on a Hyperplane
The reference points of NSGA-III can be generated in a structured manner or supplied preferentially by the user. Among the popular preset methods, we introduce two.
Das and Dennis's systematic approach: the reference points lie on an (M − 1)-dimensional hyperplane, where M is the dimension of the objective space. If each objective is divided into H parts, the number of reference points is

N_ref = C(H + M − 1, M − 1) (20)

For a three-objective problem with H = 4, the reference points form a triangle and 15 reference points are generated according to Formula (20). Each reference point has coordinates that are non-negative integer multiples of 1/H and that sum to one.

Deb and Jain's method: it is suggested to use two layers of reference points, generated as follows [48]: (1) generate S_1 by Das and Dennis's method as the point set on the boundary layer; (2) let S_2 be the point set on the inside layer, obtained by shrinking each boundary point toward the centre of the simplex; (3) the reference point set is S = S_1 ∪ S_2.
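Das and Dennis's construction can be sketched as a short recursive enumeration of all coordinate vectors that are multiples of 1/H and sum to one (a hypothetical helper, not the paper's code):

```python
def das_dennis(M, H):
    """All M-dimensional points with coordinates k/H (k non-negative integer)
    summing to 1; yields C(H + M - 1, M - 1) reference points."""
    pts = []

    def rec(prefix, left):
        if len(prefix) == M - 1:          # last coordinate is forced
            pts.append(tuple(x / H for x in prefix + [left]))
            return
        for k in range(left + 1):         # distribute the remaining H units
            rec(prefix + [k], left - k)

    rec([], H)
    return pts
```

For M = 3 and H = 4 this yields the C(6, 2) = 15 points mentioned above; Deb and Jain's two-layer variant would call the same routine twice and shrink the second point set toward the simplex centre.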

Adaptive Normalization of Population Individuals
First, translate each objective by subtracting the ideal point (the minimum of each objective over S_t). Then, find the extreme point corresponding to each coordinate axis using (22) and (23). For the ith translated objective f_i, this yields an additional objective vector z_i,max; the M additional objective vectors form an (M − 1)-dimensional linear hyperplane. Find the intercepts a_i, i = 1, 2, ..., M, of this hyperplane with the objective axes. The objective functions can then be normalized as

f_i^n = f_i' / a_i, i = 1, 2, ..., M

so that the function value at the intersection of the normalized hyperplane and each coordinate axis is f_i^n = 1, and the points on this normalized hyperplane satisfy Σ_{i=1}^{M} f_i^n = 1.
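A simplified sketch of this normalization step, assuming the per-axis maxima of the translated objectives are used as intercepts (full NSGA-III derives the intercepts from extreme points found with an achievement scalarizing function, which is omitted here):

```python
def normalize(objs):
    """Translate objective vectors by the ideal point, then divide each axis
    by its extent (a simplified stand-in for the hyperplane intercepts a_i)."""
    M = len(objs[0])
    ideal = [min(o[i] for o in objs) for i in range(M)]          # z_min
    translated = [[o[i] - ideal[i] for i in range(M)] for o in objs]
    # Per-axis extents as intercepts; `or 1.0` guards a degenerate axis.
    a = [max(t[i] for t in translated) or 1.0 for i in range(M)]
    return [[t[i] / a[i] for i in range(M)] for t in translated]
```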

Link the Individuals to the Reference Points
After normalizing each objective adaptively based on the extent of the members of S_t in the objective space, each population member needs to be associated with a reference point. For this purpose, NSGA-III defines a reference line for each reference point on the hyperplane by joining the reference point with the origin. Then, the perpendicular distance from each population member of S_t to each of the reference lines is calculated. The reference point whose reference line is closest to a population member in the normalized objective space is considered associated with that member.
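The association step can be sketched as follows: each normalized point is assigned to the reference point whose line through the origin is nearest in perpendicular distance (names are illustrative):

```python
import math

def associate(points, refs):
    """Return, for each normalised objective vector, the index of the
    reference point whose reference line (origin -> ref) is closest."""
    def perp_dist(p, r):
        norm2 = sum(x * x for x in r)
        t = sum(px * rx for px, rx in zip(p, r)) / norm2  # projection length
        proj = [t * rx for rx in r]                        # foot of perpendicular
        return math.dist(p, proj)

    out = []
    for p in points:
        d = [perp_dist(p, r) for r in refs]
        out.append(min(range(len(refs)), key=d.__getitem__))
    return out
```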

Select Individuals
It is worth noting that a reference point may have one or more population members associated with it, or none at all. NSGA-III counts the number of population members from P_{t+1} = S_t \ F_l that are associated with each reference point. Denote this niche count as ρ_j for the jth reference point. NSGA-III then devises a new niche-preserving operation as follows. First, identify the reference point set J_min = {j : argmin_j ρ_j} having minimum ρ_j. If there are multiple such reference points, one (denoted j̄ ∈ J_min) is chosen at random.
If ρ_j̄ = 0 (meaning that no P_{t+1} member is associated with reference point j̄), there are two scenarios with respect to front F_l. First, one or more members of F_l are associated with j̄; in this case, the one with the shortest perpendicular distance from the reference line is added to P_{t+1}, and the count ρ_j̄ is incremented by one. Second, F_l has no member associated with j̄; in this case, the reference point is excluded from further consideration for the current generation.
In the event of ρ_j̄ ≥ 1 (meaning that a member associated with the reference point already exists in S_t \ F_l), a randomly chosen member from front F_l that is associated with j̄, if one exists, is added to P_{t+1}. The count ρ_j̄ is then incremented by one. After the niche counts are updated, the procedure is repeated K times in total to fill all of the vacant population slots of P_{t+1}.
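The niche-preserving selection described above can be sketched as follows, assuming precomputed association indices and perpendicular distances for the last front (this data layout is our assumption):

```python
import random

def niching(K, rho, assoc_last, dist_last, last_front):
    """Fill K slots from the last front F_l.
    rho: dict, niche count per reference point for P_{t+1} = S_t \\ F_l.
    assoc_last[i], dist_last[i]: associated reference point and perpendicular
    distance for the ith member of last_front."""
    chosen = []
    remaining = set(range(len(last_front)))
    active = set(rho)                         # reference points still considered
    while len(chosen) < K and active:
        j = min(active, key=lambda r: rho[r])  # least-crowded reference point
        cand = [i for i in remaining if assoc_last[i] == j]
        if not cand:
            active.discard(j)                  # no F_l member: drop this point
            continue
        if rho[j] == 0:                        # empty niche: take closest member
            i = min(cand, key=dist_last.__getitem__)
        else:                                  # occupied niche: take random member
            i = random.choice(cand)
        chosen.append(last_front[i])
        remaining.discard(i)
        rho[j] += 1
    return chosen
```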

The Framework of APNSGA-III
In the selection operator of an ordinary GA, solutions with lower fitness are always eliminated. However, inferior individuals sometimes have a probability of evolving into superior solutions; eliminating them too eagerly not only wastes resources but also reduces randomness and weakens the global search capability. To this end, this paper designs a DCS for multi-objective optimization to enhance the performance of NSGA-III.
The details of APNSGA-III are shown in Algorithm 1 and Figure 3. First, pop, which represents the entire population; best, the best solution found so far; Best_notEnhance, which records for how many iterations best has not improved; and NotEnhance_i, which records for how many iterations solution x_i has not improved, are initialized in lines 1 to 5. In every iteration, N must be saved in N_tmp before the main for loop, because N changes inside the loop (pop_inc in line 16). A new solution m_i is generated and evaluated after the genetic operators, including the mutation and crossover operators (line 9). If m_i is better than its predecessor vector x_i, m_i replaces the old x_i and NotEnhance_i is reset to zero in line 12; otherwise, NotEnhance_i increases by one because no better solution was found, and m_i is still given a chance to enter the population: if the best solution has not been updated for a long time (Best_notEnhance ≥ T) and the current population size N is smaller than N_max, the pop_inc strategy is used. After this, the best solution x_best of the current generation (line 17), the worst solution x_worst of the current generation (line 18), the best solution found so far best, and Best_notEnhance (lines 19 to 23) are updated, respectively. After the whole population has been generated, the pop_dec strategy deletes some unpromising individuals in line 24. The iteration repeats until the termination condition is satisfied. The pop_inc and pop_dec strategies are introduced in the following sections.

The pop_inc Strategy
The population size of most meta-heuristic algorithms is fixed, but this limits the diversity of the population to a certain extent and can lead to a local optimum. When the algorithm has not improved the optimal solution for many consecutive generations, or improves it only slightly, it is important to increase the population size and introduce new individuals to enhance diversity and escape from the local optimum. Vectors m_i generated by the genetic operators can help increase the population diversity and may become promising solutions in later evolution. Therefore, the pop_inc strategy, shown in Algorithm 2, is designed to give these worse vectors m_i the opportunity to be added to the population. The pop_inc strategy is triggered if the solution best has not changed for T iterations and N is smaller than N_max (line 15 of Algorithm 1). If best is not improved for a long time, the algorithm may stagnate in a local optimum; adding some worse vectors may then perturb the population, increase its diversity, and help the search jump out of the local optimum.

The pop_dec Strategy

Although the pop_inc strategy increases the diversity of the population to help the algorithm escape from local optima, increasing the population size without bound would waste computer memory and slow down convergence. To this end, the pop_dec strategy is proposed to delete some unpromising individuals from the current population and ensure that N stays below N_max. As shown in line 24 of Algorithm 1, the pop_dec strategy is executed after each iteration of the evolution. To decide which individuals are deleted, a factor named the "exacerbation value" (the ex value) of a solution is designed as Equation (25), where n denotes the number of optimization objectives.
The addition of the constant "1.0" avoids a zero divisor. If ex(x_i) is large, f(x_i) is poor and close to f(x_worst), or the solution x_i has not changed for a long time; such a solution x_i is likely hopeless and will be deleted from the population. The details of the pop_dec strategy are shown in Algorithm 3. If N is bigger than N_min, some inefficient solutions should be deleted from the population. For each solution x_i, the exacerbation value ex(x_i) is calculated as in (25). If ex(x_i) is bigger than ex_max and x_i ranks behind the top 30% of the population, the solution is deleted in line 6 of Algorithm 3 and N is reduced by one. If N drops to N_min, the loop in lines 3 to 9 ends. Therefore, the population size N is clamped to the range between N_min and N_max. The pop_dec strategy considers both the fitness values and the update frequency of solutions, which greatly increases the fault tolerance compared with considering fitness alone. Besides, there is no need to sort solutions by fitness in the pop_dec strategy, which helps save time.
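Since Equation (25) is not reproduced in the text, the sketch below uses one hypothetical form consistent with its description: ex grows with NotEnhance_i and with the closeness of f(x_i) to f(x_worst), and the constant 1.0 guards the divisor. Both functions here are illustrative reconstructions, not the paper's exact formulas:

```python
def ex_value(f_i, f_worst, not_enhance, n):
    # Hypothetical form of Equation (25): larger when the objective vector
    # f(x_i) is close to f(x_worst) (small denominators) or when the solution
    # has stagnated for many generations (large not_enhance).
    closeness = sum(1.0 / (1.0 + abs(fw - fi))
                    for fi, fw in zip(f_i, f_worst)) / n
    return not_enhance * closeness

def pop_dec(pop, ex_vals, ranks, N_min, ex_max):
    """Drop solutions with ex > ex_max that rank behind the top 30% of the
    population (rank given as a fraction in [0, 1]), keeping at least N_min."""
    keep = []
    N = len(pop)
    for x, ex, rank in zip(pop, ex_vals, ranks):
        if N > N_min and ex > ex_max and rank > 0.3:
            N -= 1            # delete this hopeless solution
            continue
        keep.append(x)
    return keep
```

Note that no sorting by fitness is needed: one linear pass over the population suffices, matching the time-saving claim above.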
Gao et al. designed an efficient chromosome representation to reduce decoding costs [49]. Due to its structure and coding rules, it does not require a repair mechanism and is very suitable for the integer optimization characteristics of NSGA-III. Therefore, this paper introduces this method to encode and decode our FJSP model, which helps the algorithm to solve the problem we put forward.

Experiment and Analysis
Since most previous studies take completion time as their main research goal, it is not easy to find a benchmark containing both time and energy consumption information. This paper draws on the Brandimarte rule and devises FJSP instance generation rules considering energy consumption and time, which are described in Table 3. Without loss of generality, this paper randomly designs four FJSP instances according to these rules to verify the effectiveness of APNSGA-III. The general information of these four instances is shown in Table 4, the weight information of each job in each instance is placed in Table 5, and the transport times between machines, generated by arbitrary rules, are placed in Table 6. In addition, the detailed information of the above four instances is displayed in Tables 7-10. Following the suggestion of [50], the parameters CR (crossover rate) and MR (mutation rate) in APNSGA-III are set to 0.8 and 0.1, respectively. Following the recommendation of [51], T and ex_max in the DCS are set to 15 and 50, respectively. As P in TTEC is just a positive proportionality coefficient and has no effect on our experiments and models, we set it to 1. NSGA-II [52], NSGA-III [48], MOEA/D [53], MOWAS [54], and DEMO [55] are used as comparison algorithms for APNSGA-III; their parameter settings follow the suggested values of the original papers. The N (population size) and Max_iteration (maximum iteration) of all of the algorithms are set to 100 and 100, and the N_max and N_min of APNSGA-III are set to 150 and 100, respectively. Each algorithm stops automatically after 100 iterations and is executed 10 times independently on each instance.

Result Show and Analysis
The final comparison results for Instances 1 and 2 are placed in Table 11, while those for Instances 3 and 4 are placed in Table 12. BT and BE denote the minimum completion time and minimum processing energy consumption, respectively; AT and AE denote the average completion time and average processing energy consumption, respectively. For Instance 1, APNSGA-III obtains the optima on BE and AE, while for Instance 3 it obtains the optima on BT, BE, and AE. Better still, APNSGA-III obtains the optima on BT, AT, BE, and AE for Instances 2 and 4. The performance of APNSGA-III far exceeds that of the other five comparison algorithms. To show the comparison of the algorithms on each objective more clearly, we rank the data in Tables 11 and 12 on a scale of 1-20 from largest to smallest and visualize them in Figure 4. The sectors from 0 to π/2, π/2 to π, π to 3π/2, and 3π/2 to 2π represent the rankings on BT, AT, BE, and AE, respectively, and the Arabic numerals 1-4 around the circle represent Instances 1-4. The area enclosed by the thick red line representing APNSGA-III is the smallest. Overall, we can conclude that the DCS is valid and that APNSGA-III is well suited to solving the practical problem we designed.

Two Independent Sample T-Tests
Due to the small sample size in our experiments, we use two independent sample t-tests to check whether the similarity between APNSGA-III and the comparison algorithms is accidental. We set 0.05 as the level of significance. The hypothesis is accepted when p > 0.05, indicating no significant difference between the two compared algorithms, and rejected when p ≤ 0.05, indicating a significant difference between APNSGA-III and the contrasted algorithm. We use the results of the ten rounds of experiments that produced the averages in Tables 11 and 12 as the test samples. The test results are shown in Tables 13 and 14, where a p value greater than 0.05 is denoted "+" and a p value less than 0.05 is denoted "−". 'Same' denotes the total number of cases in which APNSGA-III has no significant difference from the other algorithms, while 'Better' denotes the total number of cases in which APNSGA-III differs significantly from the other algorithms.
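The pooled two independent sample t statistic used in such a test can be computed directly; in practice, `scipy.stats.ttest_ind` performs the full test (statistic and p-value) in one call. This sketch returns the statistic and degrees of freedom only:

```python
import math

def two_sample_t(a, b):
    """Equal-variance two-sample t statistic and its degrees of freedom.
    The p-value is then read from the t distribution with n1 + n2 - 2 df."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```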
From Tables 13 and 14, we find that APNSGA-III differs significantly from the other algorithms on almost all instances. Among the compared algorithms, NSGA-III is the closest to APNSGA-III, but there are still many differences between the two. In general, we can conclude that APNSGA-III is significantly better than the other algorithms.

Convergence Analysis
This subsection shows the convergence of BT and BE for Instances 1-4 in Figures 5 and 6, respectively, and discusses the algorithms' performance over the generations. The red line representing APNSGA-III always converges to the lowest position in the end, except in part (a) of Figure 5. Moreover, the red line lies almost entirely below the other lines in parts (b), (c), and (d) of Figures 5 and 6, meaning that APNSGA-III outperforms the comparison algorithms at every stage of the optimization process. This once again confirms the improvement brought by the DCS and the excellent applicability of APNSGA-III.

Gantt Chart Display and Analysis
The Gantt charts of Instances 1-4 with the smallest energy consumption are shown in Figure 7. For Instance 1, the Makespan and TEC is 324.45. There are few gaps for Instances 2-4, meaning that our method is effective and the final scheduling plan has considerable advantages. In addition to the processing stage, the other three stages, set up, unload, and idle, also occupy a large proportion of the Gantt chart, especially in part (a); the large number of gaps means that the machines are frequently under no load, which accounts for a large part of the energy consumption. If this part were not counted in the total energy consumption, it would be impossible to make an accurate energy consumption analysis based on the existing scheduling situation, and thus impossible to achieve effective energy conservation.

Conclusions
Considering that previous research mostly concentrated on efficiency and ignored energy consumption, this paper designs an FJSP model that takes energy consumption into account. The model includes the non-negligible energy consumption of the transport, set up, unload, and idle stages, and the weight property of jobs is also considered. Besides, a dual control strategy for multi-objective optimization is designed to enhance the performance of NSGA-III when faced with multi-objective FJSP. Four FJSP instances are formulated to examine the performance of our algorithm, and the results obtained by APNSGA-III are compared with those of five classic multi-objective optimization algorithms. All of the results show that APNSGA-III has competitive superiority in solving the global optimization of these FJSP instances.
In conclusion, the proposed algorithm shows a performance improvement compared with other multi-objective optimization algorithms, but further studies on model accuracy and computation time are required by investigating more energy-efficient scheduling problems. On the other hand, unexpected events such as machine failure, rush orders, and job cancellation, which may occur in practical manufacturing systems, should be incorporated into our FJSP. Reducing energy consumption in dynamic scheduling problems should also be studied in the future.