An Improved African Vulture Optimization Algorithm for Dual-Resource Constrained Multi-Objective Flexible Job Shop Scheduling Problems

According to the characteristics of flexible job shop scheduling problems, a dual-resource constrained flexible job shop scheduling problem (DRCFJSP) model with machine and worker constraints is constructed such that the makespan and total delay are minimized. An improved African vulture optimization algorithm (IAVOA) is developed to solve the presented problem. A three-segment representation is proposed to code the problem, including the operation sequence, machine allocation, and worker selection. In addition, the African vulture optimization algorithm (AVOA) is improved in three aspects: First, in order to enhance the quality of the initial population, three types of rules are employed in population initialization. Second, a memory bank is constructed to retain the optimal individuals in each iteration to increase the calculation precision. Finally, a neighborhood search operation is designed for individuals with certain conditions such that the makespan and total delay are further optimized. The simulation results indicate that the qualities of the solutions obtained by the developed approach are superior to those of the existing approaches.


Introduction
With the intensification of economic globalization, product production cycles have shortened, requiring enterprises to establish and use manufacturing systems in an agile manner. In current production systems, scheduling is essential for increasing labor efficiency and productivity, as well as enhancing an organization's ability to compete. At present, many studies have focused on the flexible job shop scheduling problem (FJSP). For example, Gao et al. [1] have proposed an improved artificial bee colony algorithm for FJSP with fuzzy processing times. Li et al. [2] have developed a hybrid artificial bee colony algorithm based on Tabu search for emergencies in FJSP, together with a clustered grouping roulette method to better initialize the population. Caldeira et al. [3] have presented an improved Jaya algorithm, using a local search method, an effective initialization mechanism, and an acceptance criterion to improve the quality of the FJSP solution. Zhang et al. [4] have recently shown that a genetic algorithm (GA) based on a variable neighborhood search can be used to address the NP-hard property of FJSP, providing strong search capability and diversity. To solve the FJSP, Chen et al. [5] have used a self-learning genetic algorithm that automatically adjusts the vital parameters of the GA through reinforcement learning. An improved GA based on hybrid initialized populations, manual cross-over methods, and adaptive weighting mechanisms has been proposed by Zhang et al. [6], in order to optimize multi-objective FJSP. Ding et al. [7] have investigated FJSP, and obtained a high-quality scheduling scheme by improving the encoding-decoding scheme, the inter-particle communication mechanism, and the substitution rules for the candidate operation machines in a particle swarm optimization algorithm. Li et al.
[8] have developed a scheduling solution based on the Monte Carlo tree search algorithm for the dynamic flexible job shop scheduling problem, in order to minimize the makespan. Various Petri-net-based heuristic scheduling methods have been used to obtain optimal or near-optimal schedules for FJSP [9][10][11][12][13]. The FJSP has been solved by Caldeira et al. [14] using a discrete multi-objective Jaya algorithm, taking the makespan, total machine workload, and workload of critical machines as performance metrics. There have been many studies on FJSP that consider machine constraints, and many excellent results have been achieved. However, FJSP in the actual manufacturing process is constrained not only by machines, but also by workers. Therefore, it is evident that the traditional machine-constrained job shop scheduling problem (JSP) and the machine-constrained FJSP do not meet the requirements of actual production. As a result, the dual-resource constrained flexible job shop scheduling problem (DRCFJSP), with machine and worker constraints, has been proposed. The gap between theoretical and practical scheduling problems can be reduced by considering the DRCFJSP. However, as the DRCFJSP is an extension of JSP and FJSP, it still poses significant challenges, as it inherits the NP-hard features of JSP and FJSP.
At present, there are few relevant studies on DRCFJSP. Cao et al. [15] have addressed the NP-hard nature of DRCFJSP by proposing a novel immune genetic algorithm that combines the immune algorithm and the GA, based on the information processing mechanism of innate immunity in biological science. Li et al. [16] have presented a branching population genetic metaheuristic algorithm, in order to minimize the completion time and cost of DRCFJSP. The algorithm is a GA-based scheduling method that accumulates and transmits the evolutionary experience of parental chromosomes by introducing pheromones into the branching population. Zheng et al. [17] have designed a knowledge-guided fruit fly optimization algorithm based on a novel coding method to solve the DRCFJSP with a processing time minimization criterion. Zhang et al. [18] have studied the DRCFJSP, and designed a hybrid discrete particle swarm optimization algorithm. Zhang et al. [19] have used a quantum genetic algorithm (QGA) for the DRCFJSP, improving the efficiency of the QGA solution through quantum coding, niche techniques for population initialization, adaptive rotation angles, and quantum mutation strategies. Tan et al. [20] have proposed a fatigue-based DRCFJSP to alleviate worker fatigue and improve productivity through the joint scheduling of machines and workers.
With the dawn of Industry 4.0 [21], DRCFJSP has become a hot topic of study. As mentioned above, many algorithms have been employed to address the DRCFJSP, in order to obtain scheduling solutions with better overall performance. The African vulture optimization algorithm (AVOA) [22] is a novel metaheuristic optimization algorithm that simulates the foraging and navigation behavior of vultures. Its effectiveness and superiority have been demonstrated through its application to various engineering design problems. Nevertheless, there have been few investigations on AVOA in scheduling problems, even though it has potential advantages in solving scheduling problems.
To adapt to the complicated and volatile external environment of enterprises, in this paper, we construct a makespan and total delay target model, and improve the AVOA (IAVOA) through various improvement strategies to shorten the production cycle time of products, thus increasing the competitive advantage of enterprises. In addition, we also compare and analyze IAVOA against a number of widely used multi-objective algorithms, in order to evaluate relevant performance metrics.
The remainder of this paper is structured as follows: Section 2 presents the specific details of the problem model. The algorithm adopted to solve the model is detailed in Section 3. The improvement strategy and the solution process of IAVOA are provided in Section 4. The simulation experiment is introduced in Section 5. Finally, Section 6 concludes the work.

Description of the Problem
In the following, we provide a description of the DRCFJSP. Suppose that there are w workers in a workshop using m machines to process n jobs. The set of jobs is denoted as J = {J 1 , J 2 , ..., J n }, the set of machines as M = {1, 2, ..., m}, and the set of workers as W = {1, 2, ..., w}. A job J i has n i operations waiting for processing, and there is a precedence constraint between its operations. Each operation O i,j has a set of candidate machines, and each machine M p has a set of candidate workers. Therefore, the processing time PT i,j,p,k (k ∈ W) of operation O i,j may differ across the (same or different) machines and workers processing it. The objective of scheduling is to arrange the processing of the n jobs reasonably. The three tasks of operation sequencing, machine allocation, and worker selection thus comprise the DRCFJSP.
In this paper, we construct a DRCFJSP model to optimize the makespan and total delay. To describe the DRCFJSP model, relevant parameters are defined in Table 1.
Binary decision variables indicate whether the sequence of operations for processing different jobs is processed on a given machine and by a given worker. To obtain a better scheduling solution for the DRCFJSP, the following assumptions are made: (1) there is no hierarchy of importance among the jobs; (2) there is a precedence constraint between the operations of the same job, but not between operations of different jobs; (3) each worker can only operate one machine at a time to complete one operation; (4) each operation can only be processed on one machine; (5) the processing of an operation cannot be interrupted; (6) all jobs are available for processing at time zero; and (7) the transfer time of workers between machines is the same as that of jobs between machines.
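The problem data described above (jobs, machines, workers, and machine-and-worker-dependent processing times PT i,j,p,k) can be sketched as a small data structure. The names (Instance, proc_time, eligible) are illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    n_jobs: int
    n_machines: int
    n_workers: int
    ops: list                      # ops[i] = number of operations of job i
    # proc_time[(i, j, p, k)] = PT_{i,j,p,k}: time for worker k to process
    # operation O_{i,j} on machine p; absent keys mean "not eligible"
    proc_time: dict = field(default_factory=dict)

    def eligible(self, i, j):
        """Return the (machine, worker) pairs that can process O_{i,j}."""
        return [(p, k) for (ii, jj, p, k) in self.proc_time if (ii, jj) == (i, j)]

# toy instance: 2 jobs with 1 operation each, 2 machines, 2 workers
inst = Instance(2, 2, 2, ops=[1, 1],
                proc_time={(0, 0, 0, 0): 3, (0, 0, 1, 1): 5, (1, 0, 0, 1): 4})
```

Keeping eligibility implicit in the processing-time table (rather than as separate candidate sets) is one common convention; a real implementation might store both.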

Total delay
The total delay objective function was proposed by Jun [23]. The formula to calculate the total delay f 2 is as follows, subject to the constraints in Equations (4)-(7). As shown in Equation (4), there are restrictions on the order in which the operations of a job can be processed. Equation (5) indicates that each machine can process only one operation at a time. Equation (6) implies that each worker can process only one job at a time. Finally, Equation (7) means that an operation can only be performed once.

African Vulture Optimization Algorithm
The African Vulture Optimization Algorithm (AVOA) [22] is a recently proposed metaheuristic algorithm that is divided into an exploration phase and an exploitation phase, mimicking the foraging and navigation behavior of African vultures. The exploitation phase can be divided further into a co-operative phase and a competitive phase. The AVOA determines the phase of the algorithm mainly based on the hunger level F of the vulture. If F < r 1 , AVOA enters the exploration phase. If F > r 2 , AVOA moves into the co-operative phase. If neither of these conditions is satisfied, AVOA enters the competitive phase. The formula for calculating a vulture's hunger level F is as follows: where t i denotes the present number of iterations, maxt denotes the total number of iterations, z is a number drawn at random from the range [−1, 1], h is a number drawn at random from the range [−2, 2], rand 1 is a number drawn at random from the range [0, 1], and ω is a constant value (ω = 1).
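Since the equations are not reproduced here, the hunger-level computation below is a sketch reconstructed from the original AVOA paper [22]; variable names follow the text above.

```python
import math
import random

def hunger_level(t_i, maxt, w=1.0, rng=random):
    """Vulture hunger level F, as in the original AVOA paper (assumed form):
    t = h * (sin^w(pi/2 * t_i/maxt) + cos(pi/2 * t_i/maxt) - 1)
    F = (2*rand1 + 1) * z * (1 - t_i/maxt) + t
    """
    z = rng.uniform(-1, 1)        # random in [-1, 1]
    h = rng.uniform(-2, 2)        # random in [-2, 2]
    rand1 = rng.random()          # random in [0, 1]
    t = h * (math.sin(math.pi / 2 * t_i / maxt) ** w
             + math.cos(math.pi / 2 * t_i / maxt) - 1)
    return (2 * rand1 + 1) * z * (1 - t_i / maxt) + t
```

Note that |F| shrinks as t_i approaches maxt, which is what gradually shifts the algorithm from exploration toward exploitation.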
(1) Exploration phase: Vultures are superb foragers with outstanding vision in the wild, so they can spot dying animals. However, vultures may have great difficulty in finding food, and need to check different random regions. In AVOA, random regions are represented by two distinct position update formulas, and the parameter P 1 is used to select the areas searched by vultures. The random number rand determines the position update formula used to obtain new individuals in the exploration phase. If rand ≥ P 1 , Equation (10) is used; otherwise, Equation (13) is used: where P(i) is the position vector of the vulture to be updated; P(i + 1) is the new vulture position vector; R(i) indicates either the optimal or the sub-optimal vulture; rand 1 , rand 2 , and rand 3 are random numbers in the range [0, 1]; lb and ub represent the lower and upper bounds of the variables; and X defines how vultures move at random to defend their prey from other vultures. (2) Co-operative phase: When the vultures are hungry, they move collectively in search of food. The food source can be represented by two different position update formulas. The random number rand determines the position update formula for the new individual in this phase. If rand ≥ P 2 , Equation (14) is applied; otherwise, Equation (17) is applied.
where rand 4 , rand 5 , and rand 6 are random numbers in the range [0, 1]. (3) Competitive phase: Serious disputes over food can arise when a large group of vultures gathers at a single food source. Food sources can be represented by two different position update formulas. The random number rand determines the position update formula followed by the new individual in the competitive phase. If rand ≥ P 3 , Equation (19) is applied; otherwise, Equation (20) is applied.
where BV 1 (t) is the optimal vulture in the current iteration, BV 2 (t) is the sub-optimal vulture in the current iteration, d(t) is the distance between BV 1 (t) and BV 2 (t), LF refers to Levy flight, d refers to the dimension of the problem, u and v are both arbitrary numbers between [0, 1], and the default value for β is fixed at 1.5.
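Taken together, the three phases above can be sketched as a simple dispatcher. The hunger-level cut-offs follow the original AVOA paper (|F| ≥ 1 for exploration, |F| ≥ 0.5 for the co-operative phase), and the P1/P2/P3 values are assumed defaults, not values from this paper.

```python
import random

def select_phase(F, rng=random, P1=0.6, P2=0.4, P3=0.6):
    """Pick the AVOA phase (and position-update equation) from hunger level F."""
    if abs(F) >= 1.0:                     # satiated: explore distant random regions
        return "exploration-Eq10" if rng.random() >= P1 else "exploration-Eq13"
    if abs(F) >= 0.5:                     # hungry: co-operative movement
        return "cooperative-Eq14" if rng.random() >= P2 else "cooperative-Eq17"
    # very hungry: aggressive competition at the food source
    return "competitive-Eq19" if rng.random() >= P3 else "competitive-Eq20"
```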

Initialization of the Population
According to previous studies, a high-quality initial population helps to increase the accuracy of such an algorithm, as well as balancing its exploration and exploitation capacities. Therefore, we designed three rules to generate the individuals of the initial population, as follows:
1. The shortest processing time principle: Determine the sequence of operations and the selection of machines in a random manner. For the allocation of workers, select the workers with the shortest processing time;
2. The machine-worker integration principle: Determine the sequence of operations in a random manner, fix the workers assigned to each machine at random, randomly determine the selection of machines, and determine the selection of workers according to the selection of machines;
3. The randomization principle: Determine the sequence of operations, the selection of machines, and the distribution of workers in a random manner.
The overall processing time of the jobs can be decreased by using the shortest processing time principle. In contrast, the machine-worker integration principle can reduce the transfer time caused by worker transfer. These two methods have a greater probability of producing better individuals than the random method; however, to maintain population diversity and prevent the algorithm from converging prematurely, the proportion of individuals generated by the shortest processing time principle and the machine-worker integration principle should be low. Thus, 20% of the individuals are generated by the shortest processing time principle, 10% by the machine-worker integration principle, and the remaining 70% by the randomization principle.
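The 20/10/70 mix above can be sketched as follows; the three rule functions are placeholders for the principles just described, not implementations from the paper.

```python
import random

def init_population(size, shortest_time_rule, machine_worker_rule, random_rule):
    """Build an initial population: 20% shortest-processing-time individuals,
    10% machine-worker integration individuals, and the rest random."""
    pop = []
    pop += [shortest_time_rule() for _ in range(int(0.2 * size))]
    pop += [machine_worker_rule() for _ in range(int(0.1 * size))]
    pop += [random_rule() for _ in range(size - len(pop))]   # remaining ~70%
    random.shuffle(pop)   # avoid any positional bias downstream
    return pop
```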

Solution Representation
The DRCFJSP studied in this paper can be divided into three sub-problems: sequencing of operations, allocation of machines, and selection of workers. Therefore, we designed a three-segment encoding scheme, specifically including a sequence code for operations (OC), an allocation code for machines (MC), and a selection code for workers (WC). The size of each segment is the total number of operations included in the scheduling problem; that is, the size of the chromosome segment is ∑ n i (i = 1, ..., n), where n i reflects the number of operations in job i. Each gene in OC corresponds to a job number, and different occurrences of the same job number correspond to different operations of that job. In MC, each gene indicates that a qualified machine M k is selected from the machine set M for processing operation O ij . In WC, each gene represents the worker who operates the machine. The three-segment encoding method is conducive to recovering the processing time and other information in the decoding process, and reduces the computational complexity of the algorithm. Figure 1 shows an example of the three-segment coding. This example includes three jobs {J 1 , J 2 , J 3 }, three machines {m 1 , m 2 , m 3 }, and five workers {w 1 , w 2 , w 3 , w 4 , w 5 }. The workers {w 2 , w 4 } can operate machine m 1 , workers {w 1 , w 2 , w 5 } can operate machine m 2 , and workers {w 1 , w 3 , w 4 } can operate machine m 3 . The OC is scanned from left to right. The first "3" denotes the first operation O 31 of job J 3 , the first "1" denotes the first operation O 11 of job J 1 , and the second "3" denotes the second operation O 32 of job J 3 , giving the processing sequence of operations shown in Figure 1. In MC, the first "2" represents machine m 2 processing operation O 11 , the first "1" represents machine m 1 processing operation O 12 , and the second "1" represents machine m 1 processing operation O 21 .
In WC, the number "5" indicates that worker w 5 uses machine m 2 to process operation O 11 (i.e., {w 5 , m 2 , O 11 }), and the number "1" indicates that worker w 1 uses machine m 3 to process operation O 12 . Based on the above, the specific processing information of this example can be expressed as worker-machine-operation triples: {w 2 , m 2 , O 31 }, {w 5 , m 2 , O 11 }, {w 1 , m 3 , O 12 }, {w 2 , m 1 , O 21 }, {w 2 , m 1 , O 22 }, {w 4 , m 3 , O 32 }, {w 2 , m 1 , O 33 }, {w 1 , m 2 , O 23 }, {w 4 , m 1 , ...}.
Figure 1. Three-segment encoding scheme.
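Under an assumed indexing convention (MC and WC list one machine and one worker per operation, ordered job by job, as the MC example above suggests), decoding the three segments into (operation, machine, worker) triples might look as follows. Names and conventions are illustrative.

```python
def decode(oc, mc, wc, ops_per_job):
    """oc: job numbers in processing order (jobs numbered from 1);
    mc/wc: machine/worker per operation, ordered job by job (J1's ops first)."""
    # offset of each job's first operation within MC/WC
    offset, acc = {}, 0
    for job, n_ops in enumerate(ops_per_job, start=1):
        offset[job] = acc
        acc += n_ops
    seen, schedule = {}, []
    for job in oc:
        seen[job] = seen.get(job, 0) + 1          # k-th occurrence = operation O_{job,k}
        idx = offset[job] + seen[job] - 1          # position of O_{job,k} in MC/WC
        schedule.append(((job, seen[job]), mc[idx], wc[idx]))
    return schedule
```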


Individual Position Vector
The initial position vector is determined by the OC. The position vector is composed of random numbers in the range [−n, n], with length ∑ n i . Figure 2 shows an example of generating a position vector. In Figure 2, there are three jobs {J 1 , J 2 , J 3 } to be processed, where each job includes two operations. It can be seen in Figure 2 that the OC of individual P 1 is 3 → 1 → 2 → 2 → 3 → 1. Obtaining the sequence numbers according to the relation between the sequence numbers and the job numbers gives 5 → 2 → 1 → 6 → 3 → 4. The position vector is then obtained from the relation between the random numbers and the sequence numbers, as shown in Figure 2.
Figure 2. Relationship between the job number and the sequence number.
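A minimal sketch of the random-key idea behind the position vector: ranking the vector's elements yields sequence numbers, which map back to job numbers. The exact mapping conventions are assumptions here (sequence numbers assigned job by job), since the paper's figure is not reproduced.

```python
def vector_to_oc(vec, ops_per_job):
    """Turn a position vector of random keys in [-n, n] into an OC."""
    # seq_to_job[s] = job owning sequence number s+1 (J1's operations first)
    seq_to_job = []
    for job, n_ops in enumerate(ops_per_job, start=1):
        seq_to_job += [job] * n_ops
    # rank positions by their key; rank r picks sequence number r+1
    ranks = sorted(range(len(vec)), key=lambda i: vec[i])
    oc = [0] * len(vec)
    for r, pos in enumerate(ranks):
        oc[pos] = seq_to_job[r]
    return oc
```

Because only the relative order of the keys matters, the continuous position updates of AVOA can be applied to the vector while the decoded OC always remains a feasible operation sequence.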


Fitness
When solving the DRCFJSP with IAVOA, each individual has a different degree of fitness. Therefore, it is crucial to develop a way to assess an individual's fitness value. In single-objective optimization, evaluating individual fitness only requires comparing the values of the objective function. However, in multi-objective optimization, due to the mutual constraints between objectives, it is necessary to measure the objective values of each individual comprehensively. Common methods for evaluating comprehensive individual performance include SPEA2's k-nearest neighbors method, NSGA-II's Pareto frontier method, and the weight method, among others. To better assess the fitness of individuals and obtain the global optimal solution, we designed a new formula to calculate the individual fitness value, where s is the dimension of the objective function, l is the index of the objective function, f is the objective function, and r l is a number drawn at random from the range [0, 1]. In this paper, to prevent the algorithm from entering local optima and to maintain high population diversity, the parameter r l is used to control the search direction of the population.
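Since the fitness formula itself (Equation (22)) is not reproduced here, the sketch below is purely an assumed reading: a random-weighted aggregation of the s objective values, where the random weights r l perturb the search direction on each evaluation.

```python
import random

def fitness(objectives, rng=random):
    """Assumed aggregation of objective values for one individual (minimization):
    each objective f_l is weighted by a fresh random r_l in [0, 1], so repeated
    evaluations favor different trade-off directions and preserve diversity."""
    return sum(rng.random() * f_l for f_l in objectives)
```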

Memory Bank
The memory bank is a simple storage unit used to store the better individuals in the population. The memory bank has a capacity N A , which controls the population size. The memory bank judges the fitness of individuals according to certain rules. To improve the overall quality of the next generation of individuals, we designed two rules to select better individuals:
1. Boundary rule: The boundary value α l of the population is calculated by Equation (23), and individuals smaller than the boundary value are stored in the memory bank. The boundary rule eliminates individuals with large objective values, reduces the number of invalid searches in the solution space, and enhances the algorithm's computational effectiveness.
2. Distance rule: The distance between each individual and the optimal individual is calculated, and individuals with a large distance are deleted from the memory bank. The distance rule controls the algorithm's search direction, focusing it toward the optimal solution.
The specific steps of the memory bank processing are as follows: Step 1: Use Equation (23) to calculate the boundary value α l of the population, and calculate the Euclidean distance between each individual and the optimal individual. The individual distances in the population are sorted in ascending order. Here, l denotes the dimension of the objective function, and r refers to a number in the interval [0, 1] (e.g., r = 0.35); Step 2: Using the boundary rule, individuals less than the boundary value α l in any dimension are stored in the memory bank; Step 3: Determine whether the number of individuals in the memory bank has reached N A . If the number of individuals in the memory bank is higher than N A , the individuals with a large distance are deleted according to the distance rule; otherwise, the storage of individuals in the memory bank is complete.
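The two rules can be sketched as follows; the boundary values α l are assumed to be precomputed via Equation (23), individuals are represented by their objective vectors, and both objectives are minimized.

```python
import math

def prune_memory_bank(population, alpha, best, n_a):
    """population: list of objective vectors; alpha: boundary value per
    objective; best: objective vector of the optimal individual; n_a: capacity."""
    # Boundary rule: keep individuals below the boundary in any dimension
    bank = [ind for ind in population
            if any(f_l < a_l for f_l, a_l in zip(ind, alpha))]
    # Distance rule: if the bank overflows, drop the most distant individuals
    if len(bank) > n_a:
        bank.sort(key=lambda ind: math.dist(ind, best))   # Euclidean distance
        bank = bank[:n_a]
    return bank
```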

Updating of Individuals
IAVOA uses position vectors to update the individuals in the population. First, we use the position update formula of AVOA to update the individual position vectors in the memory bank, and then use the individual update mechanisms to update the individuals. For this paper, we designed an operation update mechanism, three machine-worker update mechanisms, and a neighborhood search operation. The specific steps by which IAVOA updates individuals are as follows: Step 1: Determine the optimal vulture individual, the sub-optimal vulture individual, and the individual P i ; Step 2: Update the position vector of P i using the position update formula of AVOA; Step 3: Use the operation update mechanism to update the OC of individuals in the memory bank; Step 4: Determine whether the neighborhood search condition is reached. If the condition is fulfilled, the neighborhood search operation is executed; otherwise, proceed to Step 5; Step 5: Randomly generate a random number rr in the range [0, 1]; Step 6: Use rr to determine the machine-worker update mechanism, and update the MC and WC of individuals in the memory bank.
Individual Updating Mechanism

Operation update mechanism
The specific steps for updating individual operation sequences are as follows: Step 1: Determine θ randomly, where θ is an element of the position vector of individual P i ; Step 2: Rank the element values higher than θ in the position vector of individual P i in ascending order, and record the changed operation numbers and position vector elements; Step 3: Set to zero those operation numbers of the optimal (or sub-optimal) vulture that are the same as the operation numbers recorded in Step 2, and record the remaining non-zero operation numbers and position vector elements; Step 4: Take the operation numbers and position vector elements from Step 2 as the first half of the offspring, and the operation numbers and position vector elements from Step 3 as the second half of the offspring.
The operation update mechanism of the individual is depicted in Figure 3.
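Steps 1-4 above can be sketched as follows; a multiset-style removal stands in for "setting matched operation numbers to zero", since job numbers repeat in the OC. Names are illustrative.

```python
def operation_update(oc_p, vec_p, oc_best, theta):
    """Build a child OC from individual P_i (oc_p, vec_p) and the (sub-)optimal
    vulture's OC (oc_best), using threshold theta from P_i's position vector."""
    # Step 2: positions of P_i whose element exceeds theta, ranked ascending
    picked = sorted((i for i in range(len(vec_p)) if vec_p[i] > theta),
                    key=lambda i: vec_p[i])
    head = [oc_p[i] for i in picked]
    # Step 3: strike the same operation numbers out of the best vulture's OC
    remaining = list(head)
    tail = []
    for job in oc_best:
        if job in remaining:
            remaining.remove(job)     # matched occurrence is "set to zero"
        else:
            tail.append(job)
    # Step 4: first half from P_i's recorded part, second half from the vulture
    return head + tail
```

Because the tail is exactly the complement of the head within the vulture's OC, the child is always a valid permutation with the correct number of operations per job.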

Figure 3. Operation update mechanism of the individual.


2. Machine-worker update mechanism
Based on the relationship between the AVOA solution and the optimal or sub-optimal vulture, and to maintain the diversity of the algorithm, we designed three machine-worker update mechanisms. The specific mechanisms are as follows.
(1) Machine-worker self-updating mechanism
Step 1: Generate two location indices l 1 and l 2 on MC in a random way;
Step 2: Determine the processing operations at the location indices l 1 and l 2 , and randomly select machines from the machine set to replace the machines at these locations;
Step 3: Randomly select workers to replace the workers at these locations.
(2) Machine-worker cross-updating mechanism
Step 1: Calculate the exchange digit l using Equation (24), where round is the rounding function and u = 0.225;
Step 2: Randomly generate two location indices l 1 and l 2 , where |l 1 − l 2 | = l;
Step 3: The position elements between l 1 and l 2 in vulture R i are transferred to the offspring C i in turn, keeping the original location indices unchanged;
Step 4: The values of individual P i , except for the location elements between the location indices l 1 and l 2 , are kept unchanged and passed to the child C i in turn.
Figure 5 depicts the individual machine-worker cross-updating process.
(3) Worker self-updating mechanism
Step 1: Randomly generate two location indices l 1 and l 2 on WC;
Step 2: Set the location elements in the location indices l 1 and l 2 of the individual P i to be updated to the child C i ;
Step 3: In individual R i , except for the location elements between the location indices l 1 and l 2 , the original location indices are kept unchanged and passed to the child C i in turn.
The new MC is the MC of vulture R i . Figure 6 depicts the WC updating process.
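The machine-worker cross-updating mechanism can be sketched as below. Equation (24) is not reproduced here, so the exchange digit is assumed to scale with the chromosome length L as l = round(u·L), with u = 0.225; this scaling is an assumption.

```python
import random

def cross_update(mc_p, mc_r, u=0.225, rng=random):
    """Child MC: a window of length l copied from vulture R_i (mc_r),
    the positions outside the window kept from individual P_i (mc_p)."""
    L = len(mc_p)
    l = max(1, round(u * L))              # Step 1: exchange digit (assumed form)
    l1 = rng.randrange(0, L - l)          # Step 2: pick l1 so that |l1 - l2| = l
    l2 = l1 + l
    child = list(mc_p)                    # Step 4: outside the window, keep P_i
    child[l1:l2 + 1] = mc_r[l1:l2 + 1]    # Step 3: inside the window, take R_i
    return child
```

The same window indices would be applied to WC so that machine and worker assignments stay consistent.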


Neighborhood Search Operation
The neighborhood search operation enhances the algorithm's exploration of the solution space, improves the quality of the solutions for a wide variety of individuals, and strengthens the ability of the algorithm to escape local optima. The neighborhood search operation is as follows: Step 1: Judge whether an individual's position vector meets the neighborhood search conditions (more than 60% of the elements of the position vector are the same, or all elements of the position vector are boundary values). If the condition is met, Step 2 is performed; otherwise, the neighborhood search operation is not performed; Step 2: Determine the allele exchange number N g ; Step 3: Generate the location indices l 1 and l 2 ; Step 4: In OC, the alleles at the location indices l 1 and l 2 are exchanged. For the position vector, the elements located at l 1 and l 2 are generated randomly; Step 5: Judge whether the exchange number N g has been reached. If the condition is met, a new individual is generated; otherwise, Step 2 is executed.
The specific operation flow of the neighborhood search operation is shown in Figure 7. Suppose that the allele exchange number N g is 3. The allele exchanges do not affect each other; therefore, there is a certain probability of generating the same location index. For example, in Figure 7, the location index required for the second exchange is the same as that generated the first time.
Figure 7. Operation flow of the neighborhood search operation.
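The trigger condition and swap loop can be sketched as follows; the 60% test and the random-key range [−n, n] follow the description above, while the independent (possibly repeating) choice of swap positions mirrors the Figure 7 example. Minor conventions are assumed.

```python
import random

def needs_search(vec, n):
    """Trigger: >60% identical elements, or all elements at the boundary +-n."""
    most = max(vec.count(v) for v in set(vec))
    return most > 0.6 * len(vec) or all(abs(v) == n for v in vec)

def neighborhood_search(oc, vec, n, n_g, rng=random):
    """Apply n_g allele swaps to OC; swapped positions get fresh random keys."""
    oc, vec = list(oc), list(vec)
    for _ in range(n_g):                       # swaps are independent and may repeat
        l1 = rng.randrange(len(oc))
        l2 = rng.randrange(len(oc))
        oc[l1], oc[l2] = oc[l2], oc[l1]        # Step 4: exchange the alleles
        vec[l1] = rng.uniform(-n, n)           # regenerate the position elements
        vec[l2] = rng.uniform(-n, n)
    return oc, vec
```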


Framework of the Developed Algorithm
The AVOA is improved by designing the initial population and memory bank, balancing the ability of the AVOA to explore and exploit the solution space. The neighborhood search operation is intended to enhance the ability of the algorithm to escape local optima. Figure 8 shows a flow chart indicating how IAVOA is used to solve the DRCFJSP. The specific steps of the solution process are as follows:

Step 1: Set up the parameters.
Step 2: Initialize the population P_t by using the strategy for population initialization.
Step 3: Calculate the individual fitness values in the population. Select the optimal vulture and the sub-optimal vulture, and determine the vulture R_i.
Step 4: Judge whether the termination condition has been met. If so, Step 9 is executed; otherwise, Step 5 is executed.
Step 5: Combine the population P_t and the memory bank A_t into a new memory bank, and calculate the Euclidean distance and boundary value α_l between each individual and the optimal vulture in the new memory bank.
Step 6: Use the memory bank pruning strategy to prune the new memory bank and obtain the memory bank A_{t+1}.
Step 7: Update the position vectors of A_{t+1} using the position update formula of the AVOA, and update the OC of A_{t+1} using the operation update mechanism.
Step 8: Judge whether the neighborhood search operation is required. If so, the neighborhood search operation is used to update the individuals; otherwise, the machine-worker update mechanism updates the individual's machine code and worker code. The new individuals join the new population P_{t+1}. Then, Step 3 is executed.
Step 9: Output the optimal solution.
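Under the assumption that each mechanism is supplied as a function, the nine steps can be sketched as a generic Python loop (all callable names and the list-based population are illustrative placeholders, not the paper's implementation):

```python
def iavoa(init_population, fitness, prune_memory, update_positions,
          update_oc, needs_search, neighborhood_search, update_mc_wc,
          max_iters):
    """Skeleton of the IAVOA main loop (Steps 1-9); every operator is a
    placeholder callable standing in for the mechanisms described above."""
    P = init_population()        # Step 2: three initialization rules
    A = []                       # memory bank, empty before the first iteration
    for _ in range(max_iters):   # Step 4: termination condition
        # Steps 3 and 5-6: sorting by fitness puts the optimal and
        # sub-optimal vultures first; merge P with the bank, then prune.
        A = prune_memory(sorted(P + A, key=fitness))
        # Step 7: AVOA position update, then the operation-code update.
        A = update_oc(update_positions(A))
        # Step 8: neighborhood search where the condition holds, otherwise
        # the machine-worker update mechanism; the results form P_{t+1}.
        P = [neighborhood_search(x) if needs_search(x) else update_mc_wc(x)
             for x in A]
    return min(P, key=fitness)   # Step 9: output the best solution found
```

Passing the operators in as callables keeps the control flow of Steps 1-9 separate from the problem-specific encoding, which mirrors how the three-segment representation plugs into the algorithm.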
Figure 8. Flow chart of the IAVOA for solving the DRCFJSP.

Experimental Simulation and Analysis
We analyze the performance of IAVOA through an experiment. The environment for the experimental analysis was Windows 10, with 4 GB of RAM and an Intel i7 processor, and the programming environment was MATLAB 2021. The experimental analysis included the following.
(1) Setting the parameters of IAVOA;
(2) Assessing the optimization performance of IAVOA;
(3) Examining the performance of IAVOA, compared to that of commonly used multi-objective optimization algorithms.

Evaluation Metrics
To assess the performance of IAVOA, three evaluation metrics are used in this paper: generational distance (GD) [24], inverted generational distance (IGD) [25], and hypervolume (HV) [26]. GD and IGD were used to evaluate the convergence of IAVOA, while HV was employed to estimate its diversity. The formulas for these evaluation metrics are as follows:

GD(Ω, Ω*) = (1/|Ω|) · √( Σ_{i=1}^{|Ω|} D_i(Ω, Ω*)² )

where Ω is the Pareto front (PF) obtained by an algorithm, Ω* is the true PF, |Ω| is the number of elements in the PF obtained from the algorithm's solution, and D_i(Ω, Ω*) denotes the minimum Euclidean distance separating solution i in Ω from the solutions in Ω*. A smaller GD value indicates better convergence of the algorithm.

IGD

IGD(Ω*, Ω) = (1/|Ω*|) · Σ_{i=1}^{|Ω*|} D_i(Ω*, Ω)

where |Ω*| is the number of elements in the true PF, and D_i(Ω*, Ω) denotes the shortest Euclidean distance separating solution i in Ω* from the solutions in Ω. A smaller IGD value indicates better convergence of the algorithm.

HV

HV = δ( ∪_{i=1}^{|Ω|} V_i )

where δ is the Lebesgue measure, and V_i denotes the hypercube formed between the reference point and solution i in the PF. A larger HV value reflects better diversity of the algorithm.
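A minimal Python implementation of the three metrics, assuming each front is given as a list of objective-value tuples (the function names and the bi-objective HV routine are our own sketch, not the paper's code):

```python
import math

def gd(front, true_front):
    """Generational distance: root of the summed squared distances from each
    obtained solution to its nearest true-PF point, divided by |front|."""
    dists = [min(math.dist(a, b) for b in true_front) for a in front]
    return math.sqrt(sum(d * d for d in dists)) / len(front)

def igd(front, true_front):
    """Inverted GD: mean distance from each true-PF point to the nearest
    solution in the obtained front (smaller is better)."""
    return sum(min(math.dist(b, a) for a in front)
               for b in true_front) / len(true_front)

def hv_2d(front, ref):
    """Hypervolume for a bi-objective minimization front (e.g. makespan and
    total delay), w.r.t. a reference point dominated by every solution."""
    pts = sorted(set(front))      # ascending f1, so f2 descends on the front
    vol, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        vol += (ref[0] - f1) * (prev_f2 - f2)   # slab between f2 levels
        prev_f2 = f2
    return vol
```

For more than two objectives, HV requires a dedicated algorithm; the slab decomposition above only works because a non-dominated bi-objective front is monotone.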

Test Case
As the DRCFJSP is a relatively new problem, there were no standard cases for testing the algorithm's performance. Therefore, we constructed 24 test cases, DMK01-DMK15 and DDP10-DDP18, based on FMK01-FMK15 and FDP10-FDP18, respectively [20]. The numbers of jobs, operations, machines, and workers; the processing time BT_{i,j,p,k} of each operation; and the transit time TT_{i,(j−1),p′,k′,i,j,p,k} between two machines in the test cases were the same as in FMK01-FMK15 and FDP10-FDP18. The delivery period of a job is related to the tightness factor of the delivery period [27]. Equation (28) describes the delivery period d_i for job i:

d_i = r_i + c · Σ_{j=1}^{n_i} BT*_{i,j,p,k}    (28)

where BT*_{i,j,p,k} indicates the mean processing time for operation j of job i, r_i denotes the release time of job i (r_i ∈ U[0, Σ_{j=1}^{n_i} BT*_{i,j,p,k}/n_i]), and c indicates the tightness factor of the delivery period (c = 1.2). The weights for the jobs were generated using the weight averaging method. Table 2 shows the scales of the test cases.
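Assuming Equation (28) follows the common total-work-content (TWK) due-date rule implied by the definitions above, the delivery period can be sketched as follows (the function name and argument layout are illustrative):

```python
import random

def due_date(mean_proc_times, c=1.2, rng=random):
    """Due-date generation sketch (TWK-style rule assumed): release time
    r_i ~ U[0, sum(BT*)/n_i], delivery period d_i = r_i + c * sum(BT*).
    mean_proc_times holds BT*_{i,j} for each operation j of job i."""
    total = sum(mean_proc_times)          # total mean work content of job i
    n_i = len(mean_proc_times)            # number of operations of job i
    r_i = rng.uniform(0.0, total / n_i)   # release time of job i
    return r_i + c * total                # tightness factor c = 1.2
```

A tightness factor above 1 leaves slack beyond the pure work content, so the total-delay objective stays meaningful without making every job trivially late.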

Setting of Parameters
The performance of IAVOA is significantly impacted by the parameter settings. The key parameters affecting the performance of IAVOA are r_1 (r_1 ∈ [1, 1.3]), r_2 (r_2 ∈ [0.5, 0.8]), P_1 (P_1 ∈ [0.4, 0.7]), P_2 (P_2 ∈ [0.4, 0.7]), and P_3 (P_3 ∈ [0.3, 0.6]), where r_1 and r_2 determine the phase of IAVOA (i.e., exploration, co-operation, or competition), and P_1, P_2, and P_3 primarily determine the location update formula employed by IAVOA in each phase. In this paper, DMK08 was used as the test case, and the Taguchi method [28] for setting up an orthogonal experiment was applied to determine the best parameter combination. Reasonable levels for the parameters are given in Table 3, and each group of parameters was run 10 times, with the average IGD value of each group regarded as the response value. Table 4 displays the average response values and the ranks of the parameters. As shown in Table 4, the parameters with the highest impact rank were r_2 and P_3. Figure 9 depicts the factor level trend of the parameters. Through experimental analysis, the optimal combination of parameters for IAVOA was found to be r_1 = 1.3, r_2 = 0.5, P_1 = 0.7, P_2 = 0.7, and P_3 = 0.3.

Figure 9. Factor level trend of the parameters.
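The Taguchi response analysis described above, which averages the IGD response for each level of each factor across the orthogonal array, can be sketched as follows (the data layout is an illustrative assumption):

```python
def mean_responses(runs):
    """Taguchi-style analysis sketch: average the response (here, the mean
    IGD over repeated runs) for each level of each factor.
    `runs` is a list of (factor_levels_dict, response) pairs, one per row
    of the orthogonal array."""
    sums, counts = {}, {}
    for levels, resp in runs:
        for factor, level in levels.items():
            key = (factor, level)
            sums[key] = sums.get(key, 0.0) + resp
            counts[key] = counts.get(key, 0) + 1
    # Mean response per (factor, level); lower is better for IGD.
    return {k: sums[k] / counts[k] for k in sums}
```

The level with the lowest mean response is picked for each factor, and the spread of the means across levels determines the factor's impact rank.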

Performance Analysis of IAVOA
In this subsection, the methods developed in [24,29] (i.e., SPEA2 and NSGA-II) are employed as comparison algorithms to assess the performance of IAVOA. Both algorithms have been widely used, and their excellent performance in solving engineering problems has been proven; therefore, they were considered highly reliable for comparison. The difference between the two multi-objective optimization algorithms is as follows: SPEA2 evaluates individual fitness mainly by the k-nearest neighbor method, whereas NSGA-II evaluates individual fitness by the non-dominated sorting technique and the crowding distance calculation approach. Table 5 shows the parameter settings for the different algorithms, with the range of each parameter shown in parentheses. Zitzler [24] identified that SPEA2 produces superior solutions around P_c = 0.8 and P_m = 0.1; therefore, the crossover and mutation probabilities for SPEA2 in Table 5 were set as 0.8 and 0.15, respectively. To avoid errors arising from the parameters, the NSGA-II crossover and mutation probabilities were also set as 0.8 and 0.15. For each case, we ran each tested algorithm 10 times.

The performance metrics of the three algorithms are displayed in Table 6, where the data in bold indicate the optimal values of the performance metrics for each test case. For most test cases in Table 6, IAVOA's GD and IGD values were better than those of SPEA2 and NSGA-II, demonstrating that IAVOA has better convergence. The data in Table 6 also indicate that IAVOA had better HV values than SPEA2 and NSGA-II, except for the cases DMK01, DMK04, and DMK07, which demonstrates that the diversity of IAVOA is generally better than that of the other two algorithms. Figure 10 shows the box plots for the three evaluation metrics. Table 7 shows the optimization values for the test cases solved using IAVOA, SPEA2, and NSGA-II.
The data in bold indicate the optimal results for the corresponding cases solved by the algorithms, and the corresponding line chart is shown in Figure 11, which demonstrates that higher-quality scheduling solutions were obtained when solving the DRCFJSP with IAVOA, compared to the other two algorithms. Figure 11 also illustrates the advantages of IAVOA in solving the large-scale DRCFJSP. Thus, our experiments showed that IAVOA is better suited to solving the practical FJSP than SPEA2 and NSGA-II.

To further examine the performance of IAVOA, the medium-scale case DMK12 was selected for analysis. Figure 12 presents the convergence graph for DMK12, which shows that IAVOA provides a better solution as the iterations increase. The superior results obtained by IAVOA illustrate the feasibility and effectiveness of the improvement strategies. Figure 13 shows the Gantt chart of DMK12 for IAVOA.
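For reference, the crowding-distance measure that NSGA-II uses to evaluate individuals, mentioned in the comparison above, can be sketched in Python as follows (the function name and tuple-based front layout are illustrative):

```python
def crowding_distance(front):
    """Crowding distance as used by NSGA-II, for a list of objective-value
    tuples: boundary solutions get infinity, and interior solutions accumulate
    the normalized gap between their neighbors along each objective."""
    n = len(front)
    dist = [0.0] * n
    for m in range(len(front[0])):                      # for each objective m
        order = sorted(range(n), key=lambda i: front[i][m])
        lo, hi = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float("inf") # keep the extremes
        if hi == lo:
            continue                                    # degenerate objective
        for rank in range(1, n - 1):
            i = order[rank]
            dist[i] += (front[order[rank + 1]][m]
                        - front[order[rank - 1]][m]) / (hi - lo)
    return dist
```

Solutions with larger crowding distance lie in sparser regions of the front and are preferred when truncating the population, which is how NSGA-II maintains diversity.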

Conclusions
In this paper, we considered the influence of machine and worker constraints on the FJSP, and established a DRCFJSP model with the makespan and total delay as objective functions. An improved AVOA (IAVOA), based on the position update formula and position vector, was proposed to solve the DRCFJSP. To sustain population diversity and optimize solution quality, the population was initialized in three ways, following the shortest processing time principle, the machine-worker integration principle, and the randomization principle. The memory bank of the AVOA was improved to enhance the solution space exploration and exploitation capabilities of the algorithm. A neighborhood search operation was designed to prevent the algorithm from falling into local optima, and the Taguchi method was employed to determine the optimal parameters of the algorithm. The test cases were DMK01-DMK15 and DDP10-DDP18, based on FMK01-FMK15 and FDP10-FDP18, respectively. The experimental results demonstrated that IAVOA outperforms the state-of-the-art SPEA2 and NSGA-II in solving large-scale flexible job shop scheduling problems. In terms of the performance metrics, the experiments verified that IAVOA has good convergence and diversity.
For future work, many dynamic problems associated with the FJSP, such as job insertion, machine breakdowns, and other emergencies, deserve further consideration. In this manuscript, the limitation of overtime for workers was not considered; this problem can be addressed in future work by increasing the number of workers. Distributed scheduling and reverse scheduling can also be considered as future research directions. In addition, the algorithm needs further improvement in order to obtain scheduling solutions with better performance.