Sorting-Based Discrete Artificial Bee Colony Algorithm for Solving Fuzzy Hybrid Flow Shop Green Scheduling Problem

Abstract: In this era of unprecedented economic and social prosperity, problems such as energy shortages and environmental pollution are gradually coming to the fore and seriously restrict economic and social development. To address these problems, green shop scheduling, a key aspect of the manufacturing industry, has attracted the attention of researchers, and the widely used hybrid flow shop scheduling problem (HFSP) has become a hot research topic. In this paper, we study the fuzzy hybrid flow green shop scheduling problem (FHFGSP) with fuzzy processing times, with the objective of minimizing makespan and total energy consumption, which is more in line with real-life situations. The non-linear integer programming model of the FHFGSP is built by expressing job processing times as triangular fuzzy numbers (TFN) and considering the machine setup times incurred when processing different jobs. To solve the FHFGSP, a discrete artificial bee colony (DABC) algorithm based on similarity and non-dominated solution sorting is proposed, which allows individuals to explore their neighborhoods to different degrees in the employed bee phase according to their position in the sequence, increasing the diversity of the algorithm. During the onlooker bee phase, individuals at the front of the sequence have a higher chance of being followed, increasing the convergence rate of the colony. In addition, a mutation strategy is proposed to prevent the population from falling into a local optimum. To verify the effectiveness of the algorithm, 400 test instances were generated; the proposed strategies and the overall algorithm were compared with alternatives and evaluated using three different metrics. The experimental results show that the proposed algorithm outperforms other algorithms in terms of quantity, quality, convergence and diversity.


Introduction
The growth of manufacturing has brought economic and social prosperity. Shop scheduling, as a key part of manufacturing, plays an important role in economic development. The hybrid flow shop (HFS) is a common manufacturing environment [1] that combines the features of flow shop and parallel machine scheduling and is widely used in container handling [2], electronics manufacturing, chemical production, and steel production [3][4][5], in addition to applications in internet service architecture [6], civil engineering [7], and production planting [8]. The hybrid flow shop scheduling problem (HFSP) involves multiple jobs to be processed in multiple stages with one or more machines in each stage; a specific optimization objective is achieved by determining the order in which the jobs are processed and the allocation of machines to each job in each stage [1]. It is worth noting that two further aspects of HFSP arise in real life [9,10]: (1) the processing time of a job is often not fixed but fluctuates within a limited range due to worker proficiency, the newness of the machine, and similar factors; (2) when the same machine processes different jobs, a certain setup of the machine is required before processing, and due to the differences between jobs, the setup time required by the machine varies from job to job. Therefore, it is more meaningful and practical to study the HFSP with setup times and fuzzy job processing times.
While the world is experiencing unprecedented economic and social prosperity, environmental pollution and energy scarcity are becoming serious problems that threaten the future development of humanity. In particular, the manufacturing industry consumes most of the world's energy and produces a large amount of pollutant emissions [11]. Therefore, in order to address energy and environmental problems, green shop scheduling, as a key aspect of manufacturing, has become a research hot spot [12]. The purpose of green shop scheduling is to reduce energy consumption, relieve environmental pressure, and achieve sustainable development without losing economic benefits. Therefore, the widely used hybrid flow green shop scheduling problem (HFGSP) has high research value.
However, HFGSPs that consider fuzzy job processing times are currently uncommon. Fu et al. [13] developed a hybrid multi-objective optimization algorithm to solve the HFSP with fuzzy processing times but did not consider the energy problem. Wang et al. [14] investigated the HFGSP with job processing time variation caused by the dynamic reconfiguration of equipment, with the objectives of minimizing the makespan and the energy consumption of the whole system, and proposed an improved multi-objective whale optimization algorithm to solve it.
As the HFSP has a wide range of application scenarios, uncertain job processing times reflect actual production needs, and energy saving is in line with the future direction of manufacturing, this paper studies the fuzzy hybrid flow green shop scheduling problem (FHFGSP), which meets all three of these considerations and is currently little studied. The FHFGSP considers fuzzy job processing times and machine setup times, with the objective of minimizing both makespan (MS) and total energy consumption (TEC). Uncertain completion times are denoted by triangular fuzzy numbers (TFN), and TEC is divided into three parts: machine working time, machine setup time, and machine idle time. At present, few HFGSPs consider both fuzzy processing times and job-sequence-related setup times, but the FHFGSP is more in line with actual production scenarios and has higher research value.
Artificial bee colony (ABC) [15] is a swarm intelligence algorithm in which the swarm is divided into employed bees, onlooker bees, and scout bees according to its foraging behavior, offering good global exploration and local exploitation. ABC has been shown to be superior or close to other classical swarm intelligence algorithms [16,17] and is widely used to solve shop scheduling problems [18]. To solve the FHFGSP, this paper proposes a sorting-based discrete artificial bee colony algorithm (SDABC). Individuals in the population are ranked according to non-dominated sorting and similarity to the ideal solution, and adopt different search and follow strategies according to their positions, so as to fully explore the solution space and discover better solutions. It is worth mentioning that SDABC can be used not only to solve FHFGSP-type problems such as the turning shop problem [19], but also the extensions of the FHFGSP described in the first paragraph.
The main contributions of this paper are as follows: (1) The FHFGSP with fuzzy processing times is investigated. The completion time is represented by TFNs, and the energy consumption in the scheduling process is considered in three parts, which is more in line with the actual production environment. (2) In the employed bee phase, the population is ranked based on the number of dominated solutions and the similarity to the ideal solution, and individuals are explored to different degrees according to the ranking, with the best individuals explored most fully. (3) In the onlooker bee phase, a selection strategy is adopted so that top-ranked individuals have a higher probability of being selected, and a mutation strategy is adopted to avoid falling into a local optimum. The paper is organized as follows: Section 2 reviews related works. Section 3 describes the FHFGSP, gives its symbolic representation, and builds its mathematical model. Section 4 details the SDABC for solving the FHFGSP. Experimental validation is presented in Section 5, and the last section contains conclusions and outlook.

Related Works
ABC has been successfully applied to solve shop scheduling problems due to its advantages such as few control parameters and ease of implementation [20]. As there is no research related to ABC for solving FHFGSP, this section reviews the work related to the use of ABC for solving shop scheduling problems.
Li et al. [18] proposed a novel hybrid ABC and tabu search algorithm (TABC) to solve the HFSP with finite buffers, employing a TS-based adaptive neighborhood strategy that gives the TABC algorithm the ability to learn and generate neighborhood solutions in different promising regions, with the aim of minimizing makespan. Yue et al. [21] investigated the batching and mixed-model scheduling problem in a flexible parallel production line, considering the sequence-dependent setup times between mixed-model products, with the aims of minimizing the manufacturing cycle time of the line while balancing the workload between lines and maximizing the net profit. In addition, a new material availability constraint is introduced to the problem, and a novel Pareto-guided ABC is designed to address it. Gong et al. [22] considered the impact and potential of human factors in improving productivity and reducing production costs in real production systems and proposed a hybrid ABC to solve flexible job shop scheduling problems (FJSP) with worker flexibility. Zadeh et al. [23] proposed a heuristic model based on an ABC for the dynamic FJSP. Lei et al. [24] studied the distributed unrelated parallel machine scheduling problem with preventive maintenance (DUPMSP) and proposed an ABC with division to minimize MS. Xie et al. [25] proposed an improved ABC considering machining structure evaluation to solve the flexible integrated scheduling problem of networked equipment, which is an extension of job shop scheduling. Xuan et al. [20] proposed an improved DABC incorporating a genetic algorithm to solve the FJSP with unrelated parallel machines, progressively deteriorating jobs, and timing dependencies.
As flow shops are very common in practical production activities, the HFSP is of high research value. Wang et al. [19] proposed a new decoding method that simultaneously considers spindle speed optimization and scheduling scheme optimization and applied it within an estimation of distribution algorithm to simultaneously reduce energy consumption and makespan in the turning shop. Li et al. [26] proposed an improved ABC to solve the distributed flow shop problem (DFSP) with the objective of minimizing MS. Li et al. [27] proposed a hybrid ABC to solve the parallel-batch DFSP with deteriorating jobs; two problem-specific heuristics, batch allocation and right-shift, are proposed, which can significantly shorten the makespan. Gong et al. [28] proposed a hybrid multi-objective DABC for solving the blocking batch flow shop scheduling problem with the two conflicting criteria of minimizing MS and lead time. With the objective of minimizing the total processing time, Pan et al. [29] solved the distributed permutation flow shop scheduling problem based on a high-performance DABC framework. Li et al. [30] proposed an improved ABC to solve a multi-objective optimization model with the objectives of minimizing MS and processing cost for hybrid flow shop process planning and production scheduling. Peng et al. [31] investigated the problem of flow shop rescheduling in the actual steelmaking process, considering interruptions caused by machine failures and controllable processing times in the final stages, and proposed an improved ABC to solve it.
However, in actual production the processing times of jobs are often uncertain, and there is very little research on ABC approaches to the fuzzy HFSP. Zhong et al. [32] proposed an improved artificial bee colony algorithm for the multi-objective fuzzy FJSP, with the objectives of minimizing the maximum fuzzy MS, maximizing the weighted consistency index, and minimizing the maximum fuzzy machine workload.
Most of the research on using ABC to solve shop scheduling problems focuses on improving economic efficiency; very little has addressed saving energy and reducing pollutant emissions. Li et al. [33] designed an improved ABC to solve a multi-objective low-carbon job shop scheduling problem with variable machining speed constraints. Zhang et al. [34] studied the HFGSP with variable machine processing speeds to minimize MS and TEC and proposed a multi-objective DABC (MDABC) to solve it. However, in that HFGSP, the processing time of a job is set to an exact value, which is not fully compatible with the actual production environment: in real life, job processing times often deviate due to the operator's skill level, machine aging, and similar factors. Moreover, the neighborhood search adopted by MDABC in the employed bee phase and the binary tournament strategy adopted in the onlooker bee phase mean that the algorithm cannot fully explore the solution space, its convergence is limited, and it easily falls into a local optimum.
For this reason, this paper studies the FHFGSP with uncertain job processing time and proposes SDABC to solve FHFGSP. In SDABC, the dominant individuals guide the poor individuals to update in the employed bee phase, which improves the convergence speed of the population, and the proposed ranking-based selection strategy and mutation strategy can prevent individuals from falling into local optimum in the onlooker bee phase. FHFGSP is consistent with the actual production environment and production requirements, but it is not common in previous studies.

FHFGSP
This section first details the problem definition of FHFGSP, then the rules of TFN operations are explained, and finally the symbolic representation of FHFGSP is given and the mathematical model of FHFGSP is developed.

Description of the Problem
FHFGSP combines the features of fuzzy scheduling and HFSP. In FHFGSP, n jobs are processed in m (m ≥ 2) stages in the same order. Each stage j has at least one machine M j,k (k ≥ 1) and at least one stage has multiple machines [1,35,36]. The processing time T i,j,v of job i on machine M j,k at speed v is uncertain and is given by the triple (t o i,j,v , t m i,j,v , t p i,j,v ) [37], where t o i,j,v denotes the optimal processing time, t m i,j,v denotes the most probable processing time, and t p i,j,v denotes the worst processing time. The constraints for FHFGSP are formulated as follows: (1) Jobs are not allowed to be interrupted or preempted, and a machine is not allowed to stop while a job is being processed on it. (2) At the beginning, all jobs and machines are available.
(3) Only one job can be processed by any one machine at any one time and any job is only allowed to be processed by one machine at any one time. (4) Machines at the same stage process jobs at the same speed with the same power. (5) Machines are allowed to idle. (6) Machines can only process jobs at a selected speed. This cannot be changed during the processing.
The objective of FHFGSP is to minimize MS and TEC. In this paper, TEC is divided into three parts: energy consumed when the machine is idle, when the machine is in the setup phase, and when the machine is processing jobs. There are three ways to reduce MS: (1) Reduce machine idle time, which is influenced by the job sequence. (2) Reduce machine setup time, which is also influenced by the job sequence and also reduces TEC.
(3) Reduce the processing time of the jobs, which means increasing the processing speed. However, the energy consumption of a machine when processing a job is proportional to the processing speed [38], so reducing the job processing time increases the TEC. Since the two objectives to be optimized conflict with each other, this paper solves the FHFGSP by adjusting the job sequence and the speed of the machine when processing each job.

TFN Concepts and Operations
The concept of fuzzy sets was introduced by Zadeh [39]; the basic idea is to fuzzify the absolute membership of classical sets. It can be used to solve real-life uncertainty problems [40]. This subsection gives the rules for operating on TFNs, to facilitate the solution of the FHFGSP.
For any two TFNs A = (a1, a2, a3) and B = (b1, b2, b3), the rules for each operation are as follows:

1. Additive operation

A + B = (a1 + b1, a2 + b2, a3 + b3)

2. Comparative operation

The TFN comparison operation is divided into three steps and has three judgement criteria.

Step 1: Compute the mean values Ā = (a1 + 2a2 + a3)/4 and B̄ = (b1 + 2b2 + b3)/4. If Ā > B̄, then A > B; if Ā < B̄, then A < B.

Step 2: If Ā = B̄, compare the most probable values: if a2 > b2, then A > B; if a2 < b2, then A < B.

Step 3: If Ā = B̄ and a2 = b2, then compare the differences a3 − a1 and b3 − b1: the TFN with the larger difference is the larger one.
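The TFN addition and three-step comparison of this subsection can be sketched in Python. This is a minimal sketch; the (a1 + 2a2 + a3)/4 mean value is assumed from the standard TFN ranking criterion used in fuzzy scheduling.

```python
# Triangular fuzzy number (TFN) operations: component-wise addition and
# the three-step ranking criterion. The mean value (a1 + 2*a2 + a3) / 4
# is the standard criterion from the fuzzy-scheduling literature and is
# assumed here.

def tfn_add(A, B):
    """Component-wise addition of two TFNs."""
    return tuple(a + b for a, b in zip(A, B))

def tfn_mean(A):
    """Approximate mean value used as the first ranking criterion."""
    a1, a2, a3 = A
    return (a1 + 2 * a2 + a3) / 4

def tfn_greater(A, B):
    """Return True if A > B under the three-step criterion."""
    # Step 1: compare mean values.
    if tfn_mean(A) != tfn_mean(B):
        return tfn_mean(A) > tfn_mean(B)
    # Step 2: compare the most probable values a2.
    if A[1] != B[1]:
        return A[1] > B[1]
    # Step 3: compare the spreads a3 - a1.
    return (A[2] - A[0]) > (B[2] - B[0])

A = (2, 4, 6)
B = (1, 3, 5)
print(tfn_add(A, B))      # (3, 7, 11)
print(tfn_greater(A, B))  # True: mean value 4.0 > 3.0
```

These operations are all that the decoding procedure later needs to accumulate and compare fuzzy completion times.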

Mathematical Models
After understanding the basic concepts of FHFGSP and TFN, mathematical modelling of FHFGSP from the perspective of optimization objectives is needed to facilitate a better understanding of the problem to solve it. The interpretation of the relevant symbols appearing in the FHFGSP is shown in Table 1.

Objective:
Min{MS, TEC} Subject to: where (4) gives the objective of the FHFGSP, namely to minimize both MS and TEC, and (5)-(10) give the associated constraints. (5) guarantees that each job i can be assigned to a specific machine k for processing at speed v at each stage j. (6)-(9) guarantee that no interruptions or preemptions of jobs are allowed during the processing and setup phases. (10) indicates that the machine starts processing as soon as setup is complete. (11) indicates that MS is determined by the end time of the last job to be processed in the final stage. (12) indicates that the TEC consists of three components: PE, the energy consumption of the machine while processing jobs; SE, the energy consumption of the machine during setup times; and IE, the energy consumption of the machine during idle times. (13) denotes the actual power of job i when it is processed at speed v in stage j. (14)-(16) give the specific forms of PE, SE, and IE, respectively; all energy consumption is obtained by multiplying power by time.
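The TEC decomposition of (12)-(16) can be sketched with crisp numbers standing in for the TFN-valued times; the power values sp and ip below are illustrative assumptions.

```python
# Simplified sketch of the TEC decomposition in (12)-(16): each
# component is power multiplied by the corresponding time. Crisp times
# stand in for the TFN-valued times of the model; sp and ip are
# illustrative power values.

def total_energy(process, setups, idles):
    """process: list of (power, time) pairs, one per processing
    operation; setups: list of setup durations; idles: list of idle
    durations."""
    sp, ip = 2.0, 0.5  # assumed setup and idle powers per unit time
    PE = sum(p * t for p, t in process)  # processing energy, (14)
    SE = sum(sp * t for t in setups)     # setup energy, (15)
    IE = sum(ip * t for t in idles)      # idle energy, (16)
    return PE + SE + IE                  # TEC, (12)

# One job at power 5 for 3 time units, one setup of 1, one idle of 2:
print(total_energy([(5, 3)], [1], [2]))  # 15 + 2 + 1 = 18.0
```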

Table 1. Symbols used in the FHFGSP model.

MS: The time required to complete the entire scheduling program.
TEC: The total energy consumption required to complete the entire scheduling program.
I: The set of jobs, with |I| = n.
i: Index of the job, indicating the i-th job.
J: The set of stages, with |J| = m.
j: Index of the stage, indicating the j-th stage.
M j: The set of machines at stage j.
k: Index of the machine.
e i,j: The ending time of job i at stage j; its value is greater than 0.
p i,j: The standard processing time for job i at stage j.
c j,v: The adjustment factor when the machine runs at speed v at stage j.
T i,j,v: The time required for job i to be processed at speed v at stage j.
st i*,i,j: The setup time from job i* (the previous job of i) to job i at stage j; if i* = i, it indicates the setup time required for job i as the first job.
sp: Energy consumed by the machine per unit time during the setup phase.
ap: Energy consumed by auxiliary equipment per unit time throughout the scheduling process.
ip: Energy consumed by the machine per unit time during the idle phase.
b i,j: The beginning time of job i at stage j; its value is greater than 0.
x i,j,k,v: A control variable for job position equal to 1 if job i is processed on machine k at speed v at stage j, and 0 otherwise.
z i,i*,j,k: A control variable for job sequence equal to 1 if the next job on machine k at stage j after job i* is job i, and 0 otherwise.
B k,j: The start time of parallel machine k at stage j.
E k,j: The shutdown time of parallel machine k at stage j.

SDABC of FHFGSP
This section presents the proposed SDABC algorithm for solving FHFGSP. The basic framework of the ABC algorithm is first presented, and then the encoding and decoding scheme and the energy saving procedure are described, followed by the details of SDABC, and finally a summary.

The Framework of ABC
In ABC, during the initialization phase, a set of food source locations are randomly selected by bees and their nectar amount is determined, then these bees enter the colony and share nectar information. Each search cycle consists of three steps. In the first phase, after information sharing, each employed bee searches for information in the vicinity of the food source location and abandons the old food source to choose a new food source if a better one is found. In the second phase, the onlooker bee selects a food source to follow based on the nectar distribution information sent by the employed bee, the better the food source the more likely it is to be followed. If the current food source is not updated for a long time, the employed bee will abandon the current food source and become a scout bee. The scout bee randomly selects a new food source to replace the abandoned food source. The overall framework of the basic ABC framework is shown in Algorithm 1.

Algorithm 1 Framework of the basic ABC
Input: population P; Output: results;
1: Initialize population P;
2: while requirements are met do
3:   Employed bees explore around food sources;
4:   Onlooker bees select good individuals to follow and explore around the food source;
5:   if trial i > threshold limit then
6:     Employed bees are transformed into scout bees looking for new food sources;
7:   end if
8: end while
9: return results.
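The loop of Algorithm 1 can be sketched as a minimal continuous-optimization ABC. The fitness function, neighbourhood move, bounds, and parameters below are illustrative assumptions, not the scheduling-specific operators of SDABC.

```python
import random

# Minimal sketch of the basic ABC loop of Algorithm 1 on a toy
# one-dimensional problem. All operators and parameters are assumed
# for illustration only.

def neighbour(x):
    # Bees search near an existing food source.
    return x + random.uniform(-0.1, 0.1)

def abc(fitness, n_sources=10, limit=20, cycles=200):
    random.seed(0)  # deterministic for illustration
    sources = [random.uniform(-5, 5) for _ in range(n_sources)]
    trials = [0] * n_sources
    for _ in range(cycles):
        # Employed bee phase: explore around every food source.
        for i in range(n_sources):
            cand = neighbour(sources[i])
            if fitness(cand) < fitness(sources[i]):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Onlooker bee phase: better sources are followed more often.
        ranked = sorted(range(n_sources), key=lambda i: fitness(sources[i]))
        for i in ranked[: n_sources // 2]:
            cand = neighbour(sources[i])
            if fitness(cand) < fitness(sources[i]):
                sources[i], trials[i] = cand, 0
        # Scout bee phase: abandon sources that stopped improving.
        for i in range(n_sources):
            if trials[i] > limit:
                sources[i], trials[i] = random.uniform(-5, 5), 0
    return min(sources, key=fitness)

best = abc(lambda x: x * x)
print(abs(best) < 1.0)  # True: the colony converges near the minimum at 0
```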
In this paper, the linear weighted sum method is used as the decomposition method. For a multi-objective optimization problem with m objectives, a weight vector λ = (λ 1 , λ 2 , . . . , λ m ) T is introduced, where λ i represents the weight of the i-th objective and the weights sum to 1, as shown in (17):

F(x) = λ 1 f 1 (x) + λ 2 f 2 (x) + . . . + λ m f m (x), (17)

where f i (x) is the objective value of the i-th objective. Since this paper deals with a two-objective problem, λ = (λ 1 , λ 2 ) T with λ 1 + λ 2 = 1.
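The linear weighted sum of (17) reduces the two objectives to a single scalar value, as in this short sketch (the objective values are illustrative):

```python
# Linear weighted sum decomposition of (17): F(x) = sum_i lambda_i * f_i(x),
# with the weights summing to 1. For the two objectives of FHFGSP,
# lambda = (lambda1, lambda2) with lambda1 + lambda2 = 1.

def weighted_sum(objectives, weights):
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to 1
    return sum(l * f for l, f in zip(weights, objectives))

# e.g. makespan 120, TEC 300, equal weights:
print(weighted_sum([120.0, 300.0], [0.5, 0.5]))  # 210.0
```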

Coding Scheme
This subsection gives the encoding and decoding scheme of FHFGSP. The objectives of FHFGSP are the minimization of MS and TEC. To achieve these two optimization objectives, it is necessary to determine the sequence of jobs in each stage, the machine allocation of each job in each stage, and the speed at which each job is processed on the machine in each stage. Due to the characteristics of FHFGSP, each job goes through the same processing stages, so we only need to determine the sequence in which jobs enter the first stage; the sequence of jobs in the other stages is then determined automatically. However, the processing speed of each job in each stage is independent of the other stages and the other jobs, so the speed of each job in each stage must be determined separately. Therefore, the solution is coded in two parts: the first part is the sequence of jobs entering the first stage, and the second part is the speed selection matrix. The two parts of the solution are represented as <π n , V m×n >, where π n denotes the n-dimensional job sequence vector, π i is the sequence number of the i-th job entering the machine, V m×n represents the speed matrix of the jobs, and v i,j is the machine processing speed level of the i-th job at stage j. For example, the solution to a three-stage scheduling problem with three jobs is denoted as <π 3 , V 3×3 >. π 3 indicates that the jobs enter the first stage in the order job1, job3, job2, and V 3×3 indicates that in the three stages, job1 is processed at speed levels 1, 2, and 3, job2 at levels 2, 1, and 1, and job3 at levels 2, 1, and 3, respectively.
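The three-job, three-stage example <π 3 , V 3×3 > can be written out directly; the dictionary keyed by job index below is simply one convenient way to hold the speed matrix.

```python
# The two-part encoding <pi_n, V_mxn> for the three-job, three-stage
# example: jobs enter stage 1 in the order job1, job3, job2, and each
# row of V gives the speed level of one job across the three stages.

pi3 = [1, 3, 2]        # job sequence entering the first stage
V33 = {                # speed level of job i at stages 1..3
    1: [1, 2, 3],      # job1 is processed at levels 1, 2, 3
    2: [2, 1, 1],      # job2 at levels 2, 1, 1
    3: [2, 1, 3],      # job3 at levels 2, 1, 3
}

solution = (pi3, V33)
print(solution[0])     # [1, 3, 2]
print(solution[1][1])  # [1, 2, 3]
```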
After the coding scheme is determined, a solution needs to be decoded into an actual scheduling scheme to be meaningful. The detailed decoding scheme is as follows. In the first stage, all machines are available at time 0. Following the order of jobs in π n , each job is placed on the machine that can start it earliest and is processed at the corresponding speed in the speed matrix, after which the available time of the machine is updated before the next job is processed. In the other stages, the following steps are performed for each job in turn: Step 1: Order the jobs according to their completion times in the previous stage, following the first-come, first-served principle, i.e., the job completed earlier in the previous stage arrives at this stage first and is processed first.
Step 2: Based on the speed in the speed matrix, select the parallel machine that can process the job as early as possible.
Step 3: Update the available time of the machines. For example, if machine k is available at time 0 and job i requires 3 time units of processing and 1 time unit of setup, the machine's next available time is 0 + 3 + 1 = 4.
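The decoding steps above can be sketched for a single stage with parallel machines; crisp times stand in for TFNs, and the job data below is illustrative.

```python
import heapq

# Simplified single-stage decoding sketch: following pi, each job goes
# to the machine that becomes available earliest; the machine's
# available time then advances by setup time plus processing time.

def decode_stage(pi, proc_time, setup_time, n_machines):
    heap = [(0, k) for k in range(n_machines)]  # (available_time, machine)
    heapq.heapify(heap)
    completion = {}
    for job in pi:
        avail, k = heapq.heappop(heap)          # earliest-free machine
        finish = avail + setup_time[job] + proc_time[job]
        completion[job] = finish
        heapq.heappush(heap, (finish, k))       # update availability
    return completion

# Three jobs on two parallel machines. As in the worked example, a
# machine available at time 0 that needs 3 units of processing and 1 of
# setup becomes available again at 0 + 3 + 1 = 4.
pi = [1, 3, 2]
proc = {1: 3, 2: 2, 3: 4}
setup = {1: 1, 2: 1, 3: 2}
print(decode_stage(pi, proc, setup, 2))  # {1: 4, 3: 6, 2: 7}
```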

Initialization and Energy Saving Procedures
After determining the encoding, it is necessary to initialize the populations and external populations. In this paper, the population is initialized in a random way, and for each individual, the job sequence and velocity matrix are generated randomly.
After the population initialization is completed, the dominance relationships between individuals need to be calculated, and the non-dominated solutions are used to populate the external population.
Although the two optimization objectives of the FHFGSP conflict with each other, it is possible to use a suitable strategy to improve one objective while holding the other constant. To obtain high-quality solutions, individuals apply an energy-saving procedure after initialization with the aim of further improving the quality of the population. The basic idea of the energy-saving procedure is to reduce the PE component of TEC by reducing the processing speeds of jobs while keeping MS constant.
To achieve constant MS, the energy-saving procedure uses the idea of backtracking. Starting from the last job in the last stage, the processing speed of the job is minimized without affecting the completion time of other jobs. The detailed steps are shown in Algorithm 2, where the symbols that appear are given in Table 1.

Algorithm 2 Energy saving procedure
Input: sequence of jobs in order of completion, π'; speed selection matrix, V; integer related to the number of parallel machines, k;
Output: new speed selection matrix, V*;
1: i' = the job processed on machine k after i;
2: i* = the job processed on machine k before i;
3: for j = m to 1 do
4:   for l = n to 1 do
5:     i ← index of the l-th job in π';
6:     for
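The backtracking idea behind Algorithm 2 can be illustrated with a much-simplified single-machine sketch: scanning jobs from last to first, each job's speed is lowered as far as possible without delaying the job that follows it. The speed levels, the duration model (base time divided by level), and all data below are illustrative assumptions.

```python
# Much-simplified single-machine sketch of the energy-saving
# backtracking: higher level = faster = more energy, and the duration
# at a level is base_time / level (an assumed model for illustration).

def energy_save(jobs, speeds, base_time, start, latest_finish):
    """jobs: processing order on the machine; speeds[i]: current level;
    base_time[i]: duration at level 1; start[i]: current start time;
    latest_finish[i]: latest allowed finish (next job's start, or MS
    for the last job)."""
    new_speeds = dict(speeds)
    for i in reversed(jobs):                    # backtrack from the end
        level = new_speeds[i]
        while level > 1:
            slower = level - 1
            if start[i] + base_time[i] / slower <= latest_finish[i]:
                level = slower                  # slower but still on time
            else:
                break
        new_speeds[i] = level
    return new_speeds

jobs = ["J1", "J2"]
speeds = {"J1": 3, "J2": 2}
base = {"J1": 6, "J2": 4}
start = {"J1": 0, "J2": 4}   # J2 starts after some idle time
latest = {"J1": 4, "J2": 8}  # J1 must finish by J2's start; J2 by MS
print(energy_save(jobs, speeds, base, start, latest))  # {'J1': 2, 'J2': 1}
```

Both jobs are slowed down, filling idle time with cheaper processing while the makespan of 8 is unchanged.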

Employed Bees
During the employed bee phase, each individual tries to search around the food source to obtain a better food source. The food source is the solution to the problem.
In order to allow the employed bee to fully explore around the solution, a local search strategy based on ranking is proposed, with the central idea that high-quality solutions are used to guide bad solutions to update themselves.
First, high-quality individuals in the population need to be identified, and a new way of determining them is proposed. The quality of each individual is related to two factors: the number of dominated solutions and the similarity to the ideal solution. (21) gives the quality assessment function for each individual, where n i denotes the number of solutions in the population that are dominated by the current individual i, and d + and d − denote the Euclidean distances to the ideal and negative-ideal solutions, respectively. (22) gives the formula for the Euclidean distance, where x i denotes the i-th subproblem of the current solution and x* i denotes the i-th subproblem of the ideal solution. Since this paper minimizes two objectives, the ideal solution is the lower boundary of the search space and the negative-ideal solution is the upper boundary.
The high-quality individuals then guide the poor individuals to renew themselves as the employed bees search around solutions. High-quality individuals guide poor individuals to different degrees, and (23) gives the degree to which each individual i guides them. It is worth noting that high-quality individuals only guide the poorer individuals in their neighborhood: the Euclidean distance of each individual i from every other individual in the population is calculated, and the nearest T individuals are selected as the neighbors of i.
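A minimal sketch of this quality assessment follows. The Euclidean distances match (22); combining the dominance count additively with a TOPSIS-style closeness d− / (d+ + d−) is an assumption for illustration, since the exact form of (21) may differ.

```python
import math

# Sketch of the quality assessment: Euclidean distances to the ideal
# and negative-ideal solutions as in (22), combined with the dominance
# count. The additive combination with the TOPSIS-style closeness
# d- / (d+ + d-) is an assumed form for illustration.

def euclid(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def quality(objs, n_dominated, ideal, nadir):
    d_plus = euclid(objs, ideal)    # distance to the ideal solution
    d_minus = euclid(objs, nadir)   # distance to the negative-ideal one
    closeness = d_minus / (d_plus + d_minus)
    return n_dominated + closeness  # higher is better (assumed form)

# Objectives (MS, TEC) in a search space bounded by [100, 200] x [50, 150]:
print(round(quality((120, 70), 5, (100, 50), (200, 150)), 6))  # 5.8
```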
In addition, in order to prevent guiding updates from affecting individuals that have already been updated and destroying their structure, the individuals in the population are sorted in non-ascending order of quality, and individuals that have already been updated do not participate in further updates within the same iteration.
It is also worth noting that five update strategies are used in this paper, depending on the problem to be solved: insertion and exchange on the job sequence, mutation of the speed matrix, and insertion mutation and cross mutation on the job sequence and speed matrix together. The employed bees obtain candidate solutions based on these update strategies.
The employed bees search around the solution starting from the first update strategy. If the currently selected update strategy does not yield a solution with higher fitness, the next search by the employed bee is based on the next update strategy, until a high-quality solution is found. When all five update strategies have been tried, the search starts again from the first one. The flow of the employed bee phase is shown in Algorithm 3, where Quality() means calculating the quality of each individual according to (21), Level() means determining the degree to which an individual guides poorer solutions, GetNew() means updating an individual according to strategy q i (with an initial value of 1 for q i ), and GetBad() means obtaining a poorer solution that has a high similarity to the current individual and has not yet been updated.
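Two of the five update strategies named above, insertion and exchange (swap) on the job sequence, can be sketched directly; the analogous moves on the speed matrix follow the same pattern and are omitted.

```python
# Insertion and exchange moves on a job sequence, two of the five
# update strategies used by the employed bees.

def insert_move(seq, i, j):
    """Remove the job at position i and reinsert it at position j."""
    s = list(seq)
    job = s.pop(i)
    s.insert(j, job)
    return s

def swap_move(seq, i, j):
    """Exchange the jobs at positions i and j."""
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return s

pi = [1, 3, 2, 4]
print(insert_move(pi, 0, 2))  # [3, 2, 1, 4]
print(swap_move(pi, 1, 3))    # [1, 4, 2, 3]
```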

Onlooker Bees
In the onlooker bee phase, the onlooker bee selects good food sources for further search based on the information conveyed by the employed bees, with the aim of obtaining high-quality solutions and accelerating the convergence of the algorithm. In this paper, a sorting-based selection strategy is proposed to improve the search efficiency of the onlooker bees and speed up convergence. First, the individuals in the population are ranked according to (21), with those having small values placed at the front, so that high-quality solutions precede bad ones. Then, the onlooker bee selects an individual in the population to follow according to (24), where i represents the i-th individual and N denotes the population size. Therefore, top-ranked individuals have a higher probability of being selected.
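A rank-based selection of this kind can be sketched as follows. The linear ranking probability p_i = 2(N − i + 1) / (N(N + 1)) is an assumed form for illustration, since the exact expression of (24) may differ; it gives earlier ranks higher probability and sums to 1 over i = 1..N.

```python
import random

# Rank-based selection sketch for the onlooker bees: after sorting by
# quality, earlier ranks get a higher selection probability. The linear
# ranking p_i = 2(N - i + 1) / (N(N + 1)) is an assumed form.

def rank_probabilities(N):
    total = N * (N + 1) / 2
    return [(N - i + 1) / total for i in range(1, N + 1)]

def select(ranked, rng):
    """Pick one individual from the ranked list (best first)."""
    probs = rank_probabilities(len(ranked))
    return rng.choices(ranked, weights=probs, k=1)[0]

probs = rank_probabilities(4)
print(probs)  # [0.4, 0.3, 0.2, 0.1]
rng = random.Random(0)
print(select(["a", "b", "c", "d"], rng) in ("a", "b", "c", "d"))  # True
```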
After selecting the individual X index according to the proposed selection method, the onlooker bee randomly selects a neighboring individual T i of the current individual for a two-point crossover [41] to generate a new individual. The two-point crossover is applied to both parts of the solution, the job sequence and the speed matrix, and operates as follows: two points in the range are randomly selected, the part between the two points in X index is left untouched, and the rest is filled from T i . For the job sequence, the remaining positions in X index are filled, in turn, by the jobs in T i that do not appear in the retained segment of X index . For the speed matrix, the remaining positions in X index are filled from the corresponding positions in T i .
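The two-point crossover described above can be sketched as follows; the sequences and the flattened speed row are illustrative data, and the cut points are passed in explicitly rather than drawn at random.

```python
# Two-point crossover: the segment [p1, p2) is kept from X. For the job
# sequence, the remaining positions are filled in order with the jobs
# of T not already present; for the speed matrix, the remaining
# positions are copied from T directly.

def tpx_sequence(X, T, p1, p2):
    child = [None] * len(X)
    child[p1:p2] = X[p1:p2]                       # keep X's segment
    kept = set(child[p1:p2])
    fill = [job for job in T if job not in kept]  # T's jobs, in T's order
    it = iter(fill)
    for pos in range(len(X)):
        if child[pos] is None:
            child[pos] = next(it)
    return child

def tpx_matrix(X, T, p1, p2):
    return [X[j] if p1 <= j < p2 else T[j] for j in range(len(X))]

X = [1, 3, 2, 5, 4]
T = [2, 4, 5, 1, 3]
print(tpx_sequence(X, T, 1, 3))           # [4, 3, 2, 5, 1]
print(tpx_matrix([1, 2, 3], [3, 1, 2], 1, 2))  # [3, 2, 2]
```

Filling the job sequence from T in T's order keeps the child a valid permutation, which a position-wise copy would not.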
For each newly generated individual, the algorithm decides whether to replace the original individual according to a greedy selection rule. In particular, to prevent the algorithm from falling into a local optimum, this paper introduces mutation in the onlooker bee phase, with a mutation probability of 1/N for individuals in the population. This avoids, to some extent, the algorithm falling into a local optimum. The whole onlooker bee process is detailed in Algorithm 4.

Algorithm 3 Employed bee phase
Input: population P; Output: new population, P′;
1: P′ = P
2: for i = 1 to N do
3:   s_i = Quality(X_i)
4: end for
5: Sort(P′, s)
6: for i = 1 to N do
7:   l_i = Level(X_i)
8:   X′_i = GetNew(X_i, q_i)
9:   for z = 0 to l_i do
10:    if z = 0 then
11:      if X′_i < X_i then
12:

In Algorithm 4, Select() indicates that the onlooker bee selects a food source to follow according to (23), GetNeighbourhood() indicates a random selection from the neighbors of the food source, TPX() indicates the two-point crossover, and Mutation() represents mutation of an individual X, including the job sequence and speed selection matrix.

Scouting Bees
If a solution is not updated for a long time, it is abandoned and the corresponding employed bee is transformed into a scout bee, which chooses a new solution at random in the solution space. As random search is uncontrollable, this random strategy does not have a positive impact on the algorithm; therefore, this paper uses a neighborhood-based solution swapping strategy to improve the efficiency of the scout bee phase [34], exploiting the fact that the solutions of neighboring sub-problems should be similar.
The scout bee searches as follows: for a solution that has not improved after L cycles, the scout bee first looks for a better solution among its neighboring individuals and swaps the two if one is found. If no better neighbor is found, a neighbor is chosen at random for the swap. The basic procedure is described in Algorithm 5, where L(X_i) denotes the number of cycles X_i has gone through, T denotes the number of neighboring individuals, and X_i,j denotes the j-th neighboring individual of the i-th solution.
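The step above can be sketched as follows, using a scalar quality value (lower is better) as a stand-in for the paper's dominance-based comparison; all names are illustrative:

```cpp
#include <cassert>
#include <random>
#include <vector>

// Sketch of the scout-bee step. If solution i has stagnated for more
// than 'limit' cycles, swap it with its first better neighbour; if
// none is better, swap it with a randomly chosen neighbour.
void scoutBeeStep(std::vector<double>& quality,
                  const std::vector<std::vector<int>>& neighbours,
                  std::vector<int>& stagnation,
                  int i, int limit, std::mt19937& rng) {
    if (stagnation[i] <= limit) return;
    for (int j : neighbours[i]) {
        if (quality[j] < quality[i]) {      // found a better neighbour
            std::swap(quality[i], quality[j]);
            stagnation[i] = 0;
            return;
        }
    }
    // no better neighbour: swap with a random one
    std::uniform_int_distribution<int> pick(
        0, static_cast<int>(neighbours[i].size()) - 1);
    std::swap(quality[i], quality[neighbours[i][pick(rng)]]);
    stagnation[i] = 0;
}
```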

Algorithm 5 Scout bee phase
Input: population, P; Output: new population, P′;
1: for i = 1 to N do
2:   if L(X_i) > L then
3:     for j = 1 to T do
4:       if X_i,j < X_i then
5:

The Whole Process of the Algorithm
This section outlines the entire algorithmic process of SDABC. It can be roughly divided into four steps.
Step 1: The population is initialized randomly, and an energy-saving procedure is applied to improve the quality of the solutions.
Step 2: Sort the population as described in Section 4.4 and perform the operations described in Sections 4.5-4.7 in sequence, updating the neighborhood relationships of individuals in the population and the external population after each step is completed.
Step 3: Repeat Step 2 until the end conditions are met.
Step 4: Perform another energy-saving procedure on the external population. The algorithm flow of SDABC is shown in Figure 1.
Figure 1. The overall flow of the SDABC.

Experiment
In this section, the proposed SDABC algorithm and strategy will be evaluated through experiments. Firstly, the parameter settings of FHFGSP and the performance indicators of the evaluation algorithm are introduced. Then the proposed strategy is compared with other common strategies in experiments. Finally, SDABC is compared with other algorithms in experiments.
The algorithm proposed in this paper is coded in C++ and performed in Codeblocks 16.01. All experiments were run on a PC with an Intel(R) Core (TM) i3-8100U CPU, 3.60 GHz, and 8 GB RAM. Maximum CPU usage time t = 100 was used as a stopping criterion.

Test Data
In order to fully evaluate the performance of the algorithm at different levels, SDABC needs to be tested on different problem instances. The parameters controlling the problem instances are n, m, and st. To extensively test the ability of SDABC to solve HFSPs of different sizes, five levels of n, four levels of m, and four levels of st were designed [34], giving 80 problem combinations. Once n, m, and st of a problem instance have been determined, the job and shop-environment parameters must also be set. For job i, the processing speed v and the standard processing time p need to be set; for the shop environment, the number of parallel machines per stage k needs to be set, following the earlier problem description. The energy consumption per unit time of the machines in the processing, setup, and idle phases must also be specified. To reduce the influence of chance on the results, five instances were generated for each problem combination. In summary, the factors and levels used to generate the FHFGSP test data are summarized in Table 2.
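The paper's actual level values are given in Table 2 and are not reproduced here; the sketch below only illustrates one common way to turn a sampled crisp processing time p into a triangular fuzzy number (p1, p2, p3) with p1 <= p2 <= p3. The 0.85 and 1.20 spread factors are assumptions for illustration, not the paper's values.

```cpp
#include <cassert>

// Triangular fuzzy number (p1, p2, p3): most likely value p2, with
// optimistic/pessimistic bounds p1 and p3. Spread factors are assumed.
struct TFN { double p1, p2, p3; };

TFN makeFuzzyTime(double p) {
    return TFN{0.85 * p, p, 1.20 * p};
}
```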

Performance Metrics
Three popular metrics for evaluating multi-objective optimization problems (MOPs) [34,42,43], namely the number of non-dominated solutions, set coverage, and inverted generational distance, were adopted to evaluate the performance of SDABC. The mean and standard deviation of each metric at each level were obtained from the 400 FHFGSP instances over 30 independent runs.
(1) Number of non-dominated solutions (N-metric). This metric counts the non-dominated solutions produced by the algorithm; higher values indicate better performance and a more completely plotted PF.
(2) Inverted generational distance (IGD-metric). This metric evaluates the convergence and distribution performance of the algorithm:

IGD(A, PF*) = (1/|PF*|) Σ_{v ∈ PF*} d(v, A)

where d(v, A) is the minimum Euclidean distance between v and the points in A. The smaller the value, the better the comprehensive performance of the algorithm, including convergence and distribution. Since the true PF* cannot be computed, all non-dominated solutions obtained jointly by the compared algorithms are used as PF* in this paper.
(3) Set coverage (C-metric).

C(A, B) = |{µ ∈ B | ∃ v ∈ A : v ≺ µ}| / |B|     (25)

where C(A, B) represents the percentage of solutions in B that are identical to or dominated by solutions in A. The higher the value, the better the performance of A relative to B.
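The two formulas above can be computed directly for the bi-objective case; the sketch below uses (makespan, TEC) points with both objectives minimized, and weak dominance (identical or dominated) for the C-metric:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

using Point = std::array<double, 2>;  // (makespan, TEC), both minimized

// IGD(A, PF*): average over v in PF* of the minimum Euclidean distance
// from v to the approximation set A. Smaller is better.
double igd(const std::vector<Point>& pfStar, const std::vector<Point>& A) {
    double total = 0.0;
    for (const Point& v : pfStar) {
        double best = std::numeric_limits<double>::max();
        for (const Point& a : A) {
            double dx = v[0] - a[0], dy = v[1] - a[1];
            best = std::min(best, std::sqrt(dx * dx + dy * dy));
        }
        total += best;
    }
    return total / pfStar.size();
}

// C(A, B): fraction of points in B weakly dominated by (identical to or
// dominated by) some point in A. Higher is better for A.
double coverage(const std::vector<Point>& A, const std::vector<Point>& B) {
    int count = 0;
    for (const Point& b : B) {
        for (const Point& a : A) {
            if (a[0] <= b[0] && a[1] <= b[1]) { ++count; break; }
        }
    }
    return static_cast<double>(count) / B.size();
}
```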
In order to eliminate the effect of different scales, a simple max-min method [44] is used in this paper to normalize the obtained MS and TEC.
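The standard max-min normalization maps each objective value into [0, 1] as f' = (f - f_min)/(f_max - f_min); the paper's exact formulation is assumed to follow this common form:

```cpp
#include <cassert>

// Standard max-min normalization: f' = (f - f_min) / (f_max - f_min),
// mapping f_min to 0 and f_max to 1 (assumes f_max > f_min).
double maxMinNormalize(double f, double fmin, double fmax) {
    return (f - fmin) / (fmax - fmin);
}
```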

Effect of Search Strategy
To evaluate the performance of the search strategy, SDABC (with the search strategy) was compared with DABC (without it). All components of the two algorithms were identical except for the employed bee phase search strategy. The evaluation results of the three metrics for the two strategies are shown in Tables A1-A3 (Appendix A), where the better values are shown in bold and the last row is the average over the 20 problem sizes.
For the N-metric, Table A1 shows that SDABC has a higher average (AVG) for 85% of the problems and a lower standard deviation (SD) for 75% of the problems. No problem was found for which both the AVG and SD of DABC were better than those of SDABC, so SDABC leads to better results. For the FHFGSP, for which exact solutions are difficult to find, a higher N-metric plots the PF more accurately and also gives managers more options, so SDABC is more advantageous in this respect. This is because, in the search phase, the employed bee obtains more non-dominated solutions by searching around each individual to different degrees, depending on the number of solutions it dominates and its similarity to the ideal solution.
For the C-metric, Table A2 shows that, with the exception of 20 × 5 and 60 × 3, SDABC achieved a better AVG on 90% of the problems and a lower SD on 79% of the problems; overall, SDABC achieved a better AVG and a lower SD than DABC. To better show the difference, a boxplot of the two is plotted in Figure 2: SDABC obtains more concentrated and dense values, and its median is significantly higher than that of DABC. The values of C(SDABC, DABC) that lie away from the rest correspond to problem 100 × 5; Table A2 shows that DABC also obtained lower values on the same problem.

This indicates that the quality of the solutions obtained by SDABC is higher than that of DABC. This is because, in the employed bee phase, the employed bee searches around each individual to different degrees based on the similarity between the current solution and the ideal solution; guided by the ideal solution, the employed bee is able to obtain high-quality solutions.

For the IGD-metric, Table A3 shows that SDABC obtains a lower mean value than DABC, except for 20 × 5, 60 × 8, and 80 × 10, and a lower SD for 75% of the problems. Taken together, SDABC obtained a lower AVG and SD. To show the difference more graphically, a boxplot of the two is plotted in Figure 3: SDABC obtains a much more concentrated, lower IGD and a much lower median than DABC.

This indicates that the solutions obtained by SDABC are better than those of DABC in terms of diversity and convergence. This is due to the five different search directions designed for the search phase, which improve the diversity of the solutions obtained, while the employed bee improves convergence by searching around individuals based on the number of solutions they dominate and their similarity to the ideal solution.

Effect of Selection Strategy
Three different selection strategies were compared in order to evaluate the performance of the newly proposed selection strategy. The three strategies are as follows: the selection strategy proposed in this paper (denoted ABC_snm), the selection strategy in which individuals are selected according to similarity with mutation (denoted ABC_sm), and the selection strategy in which individuals are selected according to similarity without mutation (denoted ABC_s). The evaluation results of the three metrics for the three strategies are shown in Tables A4-A6, where the better values are shown in bold and the last row is the average over the 20 problem sizes.
For the N-metric, Table A4 shows that ABC_snm obtained significantly better mean values than ABC_s, and better results than ABC_sm on 80% of the problems. For SD, ABC_s obtained a lower value because the size of the SD is positively related to the AVG, and ABC_s has a significantly smaller AVG, so its SD is also smaller. On balance, however, ABC_snm was able to obtain more non-dominated solutions, giving the manager more options to choose from.
This indicates that it is more advantageous to select individuals in the onlooker bee phase based on the number of solutions dominated by them and their similarity to the ideal solution than to select only on the basis of similarity. A comparison of the three can reveal that ABC_snm was able to obtain a greater number of non-dominated solutions.
For the C-metric, Table A5 shows that ABC_snm obtains significantly better AVG and SD than ABC_s and ABC_sm, achieving better results on every problem. Overall, ABC_snm also obtains a better AVG and SD than the other two strategies. Figure 4 plots the boxplots of the C-metric obtained by the three strategies: ABC_snm obtains more concentrated values and a much higher median than the other two strategies, and the minimum value obtained by ABC_snm is even higher than the maximum values of ABC_s and ABC_sm.

This indicates that the proposed strategy is able to obtain high-quality non-dominated solutions. This is because the adopted selection strategy speeds up the convergence of the algorithm, while the adopted mutation strategy prevents the algorithm from falling into a local optimum.

For the IGD-metric, Table A6 shows that on every problem ABC_snm obtained a significantly lower AVG than ABC_s and ABC_sm, and for 85% of the problems a smaller SD than the other two algorithms. As a whole, both the AVG and SD of ABC_snm are smaller than those of the other two strategies. Figure 5 plots the boxplots of the IGD-metric obtained by the three algorithms: ABC_snm obtains more concentrated values, and its median is much lower than those of the other two strategies.

This indicates that the proposed strategy has better diversity and convergence. This is because there is some probability that low-quality individuals are also selected, which improves the diversity of the algorithm to some extent, and the proposed mutation strategy also contributes to diversity. In addition, the adopted selection strategy speeds up the convergence of the algorithm.

Evaluation of SDABC
In this subsection, SDABC is compared with IMDABC, MDABC, and NSGAII. All algorithms use the same CPU time as the stopping criterion and the same parameter settings; the results are shown in Tables 8-13 and A7. The last row of each table represents the average over the 20 problems, and the best values are marked in bold.

Table A7 shows the N-metrics obtained by the four algorithms: the average values obtained by SDABC are higher than those of the three remaining algorithms, and significantly higher than IMDABC and NSGAII on every problem. Although some values of MDABC are higher than those of SDABC, the difference is not significant, and SDABC achieves higher values on 70% of the problems. In summary, SDABC obtains more non-dominated solutions than the other algorithms. This is because the search strategy proposed for the employed bee phase searches in both depth and breadth, enhancing the diversity of individuals and contributing to a greater number of solutions.

Tables 8-10 show the C-metric obtained by the four algorithms: the AVG and SD obtained by SDABC are significantly better than those of IMDABC, MDABC, and NSGAII on every problem, except for the 100 × 10 problem in Table 10, where the SD is slightly higher. Figure 6 shows boxplots of the C-metric obtained by SDABC versus the other three algorithms. The outliers in (a) are the C-metric values obtained for problem 100 × 5; in Table 8, both C-metrics for 100 × 5 are lower than the overall value, but SDABC's is better than IMDABC's. In (b), SDABC is significantly higher than NSGAII overall. The two isolated values of C(SDABC, MDABC) in (c) correspond to problems 100 × 3 and 100 × 5; while these two values deviate from the rest, SDABC has a better AVG and SD for the same problem sizes.
Additionally, the median of SDABC is significantly higher than those of the other three algorithms. Therefore, SDABC obtains solutions of significantly higher quality than IMDABC, MDABC, and NSGAII. This is because SDABC follows individuals in the population in both the employed and onlooker bee phases: evolution proceeds according to the dominance of individuals in the population and their similarity to the ideal solution, which also makes it possible to find high-quality solutions faster.

Tables 11-13 show the IGD-metric obtained by the four algorithms: on every problem, SDABC obtains significantly lower AVG and SD than IMDABC, MDABC, and NSGAII. Figure 7 plots boxplots of the IGD-metrics obtained by the four algorithms. Figure 7a shows that SDABC is much lower than IMDABC overall and in median; Figure 7b shows that SDABC is more concentrated than MDABC; Figure 7c shows that SDABC is better than NSGAII overall and in median. From Figure 7, we can see that the SDABC distribution is more concentrated while obtaining a lower IGD.

This indicates that SDABC performs better than IMDABC, MDABC, and NSGAII in terms of convergence and diversity. This is because SDABC takes five different search directions in the employed bee phase and explores them to different degrees depending on the ranking, both of which allow a more adequate search of the solution space around each individual and increase the diversity of the population. In addition, in the onlooker bee phase, every individual in the population has the potential to be followed, and the introduction of the mutation strategy also contributes to diversity. In terms of convergence, both the employed and onlooker bee phases operate based on ranking, which speeds up population convergence based on similarity to and dominance of the ideal solution.

Conclusions
In this paper, we studied the FHFGSP with fuzzy processing times, minimizing makespan and total energy consumption. To solve the FHFGSP, a discrete artificial bee colony algorithm based on similarity and non-dominated solution ordering was proposed. Extensive numerical experiments demonstrate that the proposed strategies and algorithm outperform other algorithms.
In the employed bee phase, individuals fully explore around the dominant solution; in the onlooker bee phase, individuals at the front of the sequence have a greater chance of being followed; in addition, a mutation strategy was proposed to prevent the population from falling into a local optimum. The algorithm produced high-quality solutions in terms of quantity, quality, convergence, and distribution.
In the future, our aim is to study more flexible HFGSPs, for example accounting for the proficiency of shop workers, and to consider other green metrics, such as noise and carbon emissions. We will verify the effectiveness of the algorithm by comparing it with more optimization algorithms based on mimicking animal behavior, which will have a positive impact on the role of such algorithms in green shop scheduling. In addition, as smart manufacturing continues to evolve and cyber-physical systems and the industrial Internet of Things are used to obtain data in real time during manufacturing, it is also interesting to study how to process real-time state data for decision making and optimization of green shop scheduling.