Scheduling Two Identical Parallel Machines Subjected to Release Times, Delivery Times and Unavailability Constraints

This paper proposes a genetic algorithm (GA) for scheduling two identical parallel machines subject to release times and delivery times, where the machines are periodically unavailable. To make the problem more practical, we assume that the machines undergo periodic maintenance rather than being always available. The objective is to minimize the makespan (C_max). A lower bound (LB) on the makespan for the considered problem is proposed. The GA performance is evaluated in terms of the relative percentage deviation (RPD) (the relative distance to the LB) and central processing unit (CPU) time. Response surface methodology (RSM) is used to optimize the GA parameters, namely, population size, crossover probability, mutation probability, mutation ratio, and selection pressure, so as to simultaneously minimize the RPD and CPU time. The optimized settings of the GA parameters are then used to further analyze the scheduling problem. A factorial design over the scheduling problem input variables, namely, processing times, release times, delivery times, availability and unavailability periods, and number of jobs, is used to evaluate their effects on the RPD and CPU time. The results show that increasing the release time intervals, decreasing the availability periods, and increasing the number of jobs increase the RPD and CPU time and make it very difficult for the problem to reach the LB.


Introduction
Scheduling is the process of accomplishing determined tasks by effectively using restricted resources. Scheduling problems have been investigated in many research fields, such as manufacturing operations [1], project management [2], construction [3], airline crew scheduling [4], outpatient appointments [5], maintenance [6], cloud manufacturing [7], and personnel scheduling [8]. Production scheduling is an important problem that has been widely investigated in the literature, such as in the work presented by Pinedo [9]. The production scheduling problem mostly refers to assigning jobs to be processed on one or more machines in a specific sequence that optimizes a given objective subject to specific constraints. Abedinnia et al. [10] conducted a comprehensive literature review of the production scheduling problem.
The parallel machine scheduling problem refers to assigning a set of jobs to a specific number of machines, allocating each machine to specific job(s) so as to optimize one or more objectives [11]. Each job must be processed on one machine, and each machine can process only one job at a time. When the processing time of each job is identical on all machines (the machines have the same speed), the machines are called identical [12]. However, when the processing time depends on the machine, the machines are called uniform or unrelated.

The scheduling problem of two identical parallel machines subject to release times, delivery times, and machine unavailability is formally defined as follows. A set of n jobs j_1, ..., j_n has to be processed by a set of two identical parallel machines, m_1 and m_2. Each job j has a release time r_j from which it is ready to be processed. Job j has a processing time p_j and has to be processed on an available machine m_i. A particular machine is available for a period t_i, after which it is unavailable for a time s_i, the time required to conduct a PM action on machine m_i. Since the two machines are identical, the available and unavailable periods of the two machines are considered to be the same; in other words, t_1 = t_2 = t and s_1 = s_2 = s. Each job j ∈ J must spend a duration called the delivery time q_j after its processing on machine m_i has finished. The processing of all jobs on the identical parallel machines is conducted under the following assumptions:
• Each machine can process only one job at a time, and all jobs are non-preemptive.
• If machine i is unavailable (down for a PM action), it cannot process jobs until s has elapsed.
• PM activities can be performed early (before the end of the period t), but, because of the possibility of failure, they cannot be delayed.

• The machines are available for processing again after the unavailable period (PM activity).
• The release time r_j, delivery time q_j, and processing time p_j of each job j are known in advance.
• The availability t and unavailability s periods of a particular machine are deterministic and known in advance.
The objective is to minimize the makespan (C_max). According to [37], the related problem without the machine availability and PM constraints is NP-hard. Therefore, the considered problem is NP-hard, and it can be expressed using Graham's notation as P2|r_j, q_j, av-pm|C_max, where "av-pm" denotes the constraint of machine unavailability due to preventive maintenance (PM). The following notation is used to define the considered problem:
n: number of jobs;
m: number of machines;
j: job index (j = 1, ..., n);
i: machine index (i = 1, 2);
p_j: processing time of job j;
r_j: release time of job j;
q_j: delivery time of job j;
st_j: starting time for processing job j;
t: machine available time;
s: machine unavailable time, the time required to perform a PM action;
C_j: completion time of job j, C_j = st_j + p_j + q_j;
C_max: maximum completion time, max_j (C_j).
The proposed objective function for the scheduling problem under investigation is stated as follows:

Minimize C_max, (1)

where C_max = max_{j∈J} C_j and C_j = st_j + p_j + q_j. The following example illustrates the schedule construction for the considered problem, P2|r_j, q_j, av-pm|C_max. Consider a set of eight jobs with the processing times, release times, and delivery times shown in Table 1. The jobs have to be scheduled on two identical machines, m_1 and m_2. The machine available periods and the unavailable periods (the times required for performing the PM actions) are presented in Table 2. For example, machine 1 is available for 9 units of time before a PM action, and then it is down (unavailable) for 2 units of time (under maintenance). Let the current job sequence π be (7, 5, 3, 8, 1, 6, 2, 4). According to the given sequence, the jobs are assigned to the nearest available machine. The constructed schedule for the given sequence π is presented on a Gantt chart in Figure 1. From the sequence π, job 7 is the first job to be scheduled. Since the two machines are available at time 0 and job 7 is ready at time 1 (r_7 = 1), job 7 can be assigned to either machine. Here, job 7 is assigned to m_1, and it is processed for 6 units of time (p_7 = 6), starting at time 1 (st_7 = 1) and finishing at 7. Job 7 has a delivery time of 4 units of time (q_7 = 4), so its completion time is C_7 = 11 (C_7 = st_7 + p_7 + q_7 = 1 + 6 + 4 = 11). The second job to be scheduled is job 5, which is available at time 2 (r_5 = 2). Machine m_2 is available at time 2. Hence, job 5 is assigned to machine 2, and its starting time is 2. It is processed for 6 units (p_5 = 6), finishing at time 8. Job 5 has a delivery time of 6 units of time (q_5 = 6), so its completion time is C_5 = 14.
The rest of the unscheduled jobs are scheduled using the same procedure until all jobs in π are scheduled. Since the machines are not always available, as they need periodic PM, jobs cannot be processed while the machines are under PM actions. From Table 2, machines 1 and 2 should have a PM after every 9 units of operating time (t = 9), and the time required to perform the PM is 2 units (s = 2). A PM on machine 1 is performed for 2 units of time at time 9, although machine 1 has worked for only 8 units of time. This is because job 1, the next scheduled job in π, has a processing time of 2 units, which would exceed the availability period (t = 9); because of the possibility of failure, the PM on machine 1 is performed at time 9, before job 1 starts processing. Machine 1 is available again at time 11 and starts processing job 1. Note that the machine is in its best condition after the PM action. Machine 2 operates for 9 units until time 11, so it is then unavailable for 2 units. After the PM on machine 2 finishes, job 2 is processed by machine 2. From Figure 1, C_max for the given sequence is 23.
It is worth mentioning that the number of jobs, processing times, release times, delivery times, and availability and unavailability periods all affect C_max. For example, machine 1 waits 1 unit of time for job 7 to be ready, and machine 2 waits 2 units for job 5 to be ready for processing. In addition, jobs 1 and 2 wait for processing on machines 1 and 2, respectively, while those machines are unavailable.
For this problem, we propose a lower bound (LB) for the considered problem, P2|r_j, q_j, av-pm|C_max, as follows. Clearly, no job j can be finished earlier than r_j + p_j + q_j [37], which gives a first bound:

LB_1 = max_{j∈J} (r_j + p_j + q_j).

Furthermore, the machine availability period must be no smaller than the maximum processing time, i.e.,

t ≥ max_{j∈J} p_j, (2)

where t is the machine availability period, so an LB similar to that for the problem P|r_j, q_j|C_max applies. As the machines are subjected to PM actions, the minimum number of PM actions per machine can be calculated based on [50,52] as

N_PM = ⌊(Σ_{j∈J} p_j / 2) / t⌋,

where t is the machine availability period before a PM action is needed, and ⌊y⌋ denotes the largest integer smaller than or equal to y. The minimum time that a machine will be unavailable due to PM activities can then be calculated as

S_min = s · N_PM,

where s is the PM time.
By adding the unavailable time due to the PM activities to the two LBs developed by [51] for P|r_j, q_j|C_max, we get the following LB. Ranking the jobs in increasing order of their release times and delivery times, such that r_1(J) ≤ r_2(J) and q_1(J) ≤ q_2(J) (r_1(J) and r_2(J) are the two smallest release times and q_1(J) and q_2(J) are the two smallest delivery times), we get

LB = max{ max_{j∈J} (r_j + p_j + q_j), (Σ_{j∈J} p_j + r_1(J) + r_2(J) + q_1(J) + q_2(J)) / 2 + s · ⌊(Σ_{j∈J} p_j / 2) / t⌋ }. (9)
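As a concreteness check, the bound can be sketched in code. The formula below is our reading of the text rather than the paper's exact equation: it assumes the final LB is the maximum of max_j(r_j + p_j + q_j) and half the total load plus the two smallest release and delivery times, augmented by an assumed minimum PM downtime of s·⌊Σp_j/(2t)⌋.

```python
from math import floor

def lower_bound(p, r, q, t, s):
    """Sketch of an LB for P2|r_j, q_j, av-pm|Cmax (reconstructed form).

    p, r, q: lists of processing, release, and delivery times.
    t: availability period before a PM action; s: PM duration.
    """
    # No job can finish before r_j + p_j + q_j.
    lb1 = max(pj + rj + qj for pj, rj, qj in zip(p, r, q))
    # Two smallest release times and two smallest delivery times.
    r1, r2 = sorted(r)[:2]
    q1, q2 = sorted(q)[:2]
    # Assumed minimum PM downtime per machine: s * floor(sum(p) / (2 t)).
    s_min = s * floor(sum(p) / (2 * t))
    lb2 = (sum(p) + r1 + r2 + q1 + q2) / 2 + s_min
    return max(lb1, lb2)
```

For jobs 7 and 5 of the worked example (p = 6, 6; r = 1, 2; q = 4, 6; t = 9; s = 2), the sketch returns 14, matching C_5 = 14, the larger of the two completion times.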

Genetic Algorithm
Genetic algorithms (GAs) are adaptive metaheuristic search algorithms used for solving combinatorial optimization problems. GAs represent one branch of the field of evolutionary algorithms (EAs). GAs use natural evolution-inspired techniques in which they imitate the biological processes of reproduction and natural selection to search for the "fittest" solutions [53]. The GA was first proposed by Holland in 1975, in an attempt to solve optimization problems by imitating the genetic processes of living organisms and the law of the evolution of species [54]. In 1989, Goldberg [55] applied GAs to optimization and machine learning.
To solve the problem under study, we propose a GA that provides high-quality solutions within acceptable computing times. The reasons for choosing a GA are its well-known high performance on combinatorial optimization problems and, more importantly, its effectiveness on related problems as reported in previous studies (see, for example, [14,17]). Several genetic operators are considered, and the best GA operator parameters are selected using a response surface methodology (RSM) experimental design, as detailed in Section 4.3. The proposed GA comprises the following operators: generation of the initial population, fitness evaluation, parent selection, crossover, and mutation. The main elements of the GA are discussed in the next subsections.

Chromosome Encoding
Genes are the main building blocks of GAs, since chromosomes are sequences of genes. When building a GA for scheduling problems, each job in the schedule is represented as a gene in a chromosome. This representation of solutions as sequences of genes is known as encoding. Several encoding schemes are presented in related studies. The most widely used scheme is permutation encoding, in which every chromosome is represented as a permutation list of jobs. A permutation list is a linear ordering of all the jobs with no repeated values, where an extra gene is used to distinguish the machines on the chromosome. See Section 2 (Figure 1) for an illustration.
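A minimal sketch of this encoding is shown below. The choice of 0 as the extra separator gene that distinguishes the two machines is our assumption for illustration; the paper does not state which sentinel value it uses.

```python
import random

def random_chromosome(n):
    """A permutation of job ids 1..n with one separator gene (0);
    genes before the separator belong to machine 1, the rest to machine 2."""
    perm = random.sample(range(1, n + 1), n)
    cut = random.randint(0, n)
    return perm[:cut] + [0] + perm[cut:]

def split_by_machine(chrom):
    """Decode the separator gene into the two machine job lists."""
    cut = chrom.index(0)
    return chrom[:cut], chrom[cut + 1:]
```

For example, the chromosome [3, 1, 0, 2, 4] assigns jobs 3 and 1 to machine 1 and jobs 2 and 4 to machine 2.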

Initial Population
A population is a collection of chromosomes. In the GA, the two main aspects of a population are the size of the population and its initial generation process. Usually, the initial population is generated randomly. However, in some cases, for complicated problems, to increase the quality of the generated population, a small part of the initial population is generated using a problem-specific heuristic, and the rest is generated randomly to ensure population diversity. In the proposed GA, the population size is determined using a preliminary study as detailed in Section 4. However, the initial population is generated randomly to make sure the initial population is diverse.

Chromosome Evaluation
The evaluation process computes the fitness value of each individual in the present population. The current population is evaluated by decoding each chromosome and calculating its objective function value. As the generated chromosomes do not include the PM activities, the following procedure (Procedure 1) is required to decode each chromosome and calculate its objective value. The evaluation starts by reading all the input parameters, including the randomly generated sequence π[n]. Starting with the first job of the sequence, there are two possibilities. The first is that the current machine age plus the processing time of that job is less than or equal to the availability time of the machine before PM. In that case there is no PM: the machine's age is updated, the job is scheduled on that available machine, the completion time of the scheduled job is calculated, and finally the machine availability is updated. The second possibility is that the current machine age plus the processing time of that job is larger than the availability time of the machine before PM. In that case a PM activity is assigned to the machine before the job is scheduled: the starting and completion times of the PM activity are determined, the machine's age and availability are recalculated, and finally the job is scheduled on the machine and its completion time is obtained. This process is repeated for all the jobs in the input sequence π[n]. The final step of the procedure is to calculate C_max as the maximum of all jobs' completion times. All details can be found in the proposed procedure (Procedure 1).
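The decoding logic described above can be sketched as follows; this is a sketch under our reading of the procedure, not the authors' exact code. Jobs go to the machine that becomes free earliest, and a PM of length s is inserted whenever the machine's accumulated busy time plus the next job's processing time would exceed t.

```python
def decode(seq, p, r, q, t, s, m=2):
    """Decode a job sequence into a schedule with periodic PM (sketch).

    seq: job ids in scheduling order; p, r, q: dicts of processing,
    release, and delivery times; t: availability period; s: PM duration.
    Returns (completion_times, cmax).
    """
    avail = [0] * m   # time at which each machine becomes free
    age = [0] * m     # busy time accumulated since the last PM
    C = {}
    for j in seq:
        i = min(range(m), key=lambda k: avail[k])  # nearest available machine
        if age[i] + p[j] > t:        # job would exceed the availability period:
            avail[i] += s            # perform the PM early (it cannot be delayed)
            age[i] = 0
        start = max(avail[i], r[j])  # job j cannot start before its release time
        avail[i] = start + p[j]
        age[i] += p[j]
        C[j] = start + p[j] + q[j]   # C_j = st_j + p_j + q_j
    return C, max(C.values())
```

On the first two jobs of the worked example (job 7 with p = 6, r = 1, q = 4 and job 5 with p = 6, r = 2, q = 6, with t = 9 and s = 2), this reproduces C_7 = 11 and C_5 = 14.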

Selection and Reproduction Process
The selection process determines which chromosomes are chosen for mating and reproduction. Its guiding principle is as follows: "the better an individual is, the higher its chance of being a parent." In this study, roulette wheel selection was used, in which each individual is assigned a selection probability proportional to its relative fitness, p_i = f_i / Σ_{j=1}^{n} f_j. The selection of beta individuals is performed by beta independent spins of the roulette wheel, each spin selecting a single individual, with fitter individuals more likely to be chosen. The roulette wheel selection operator was chosen based on preliminary experiments: for the problem under investigation, it performed better than the tournament selection operator and random selection. This is consistent with previous research (for example, [14,17,53,56]), which has suggested that the roulette wheel selection operator is the best operator for scheduling problems.
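A sketch of fitness-proportional selection is shown below. Since the objective C_max is minimized, a transformed fitness is needed so that smaller makespans get larger wheel slices; the reciprocal 1/C_max used here is an assumed choice, as the paper does not state its transformation.

```python
import random

def roulette_select(fitness, rng=random):
    """Select one index with probability p_i = f_i / sum(f_j)."""
    total = sum(fitness)
    spin = rng.uniform(0, total)
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if spin <= acc:
            return i
    return len(fitness) - 1  # guard against floating-point round-off

def select_parents(cmax_values, beta, rng=random):
    """beta independent spins of the wheel, fitness assumed to be 1/Cmax."""
    fitness = [1.0 / c for c in cmax_values]
    return [roulette_select(fitness, rng) for _ in range(beta)]
```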

Crossover
Reproduction in a GA is conducted by applying crossover and mutation. The crossover process combines two parents to create a new child. The crossover operators were designed for the permutation list representation; that is, the proposed operator is applied to a permutation of coded operations. In this study, the position-based crossover (PBX) proposed by Syswerda [57] was used. The PBX operator randomly selects a subset of positions from the first parent, copies the genes at these positions to the offspring chromosome at the same positions, and then fills the remaining positions with the remaining genes in the order in which they appear in the second parent. An example of a PBX is given in Figure 2. The most important variable that needs to be tuned in the crossover process is the crossover rate P_c (P_c ∈ [0, 1]), which represents the proportion of parents on which a crossover operator is applied.

Procedure 1 (chromosome decoding). Inputs: n, the number of jobs; m = 2, the number of machines; p_j, r_j, and q_j, the processing, ready, and delivery times of each job; t, the machine available time; s, the machine unavailable time; π[n], the generated sequence (randomly generated as explained in Section 3.2); and i, the index of the assigned machine. Output: the decoded schedule and its C_max.
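The PBX operator described above can be sketched as follows; the `positions` parameter is an illustrative addition that lets the randomly chosen subset be fixed for reproducibility.

```python
import random

def pbx(parent1, parent2, positions=None, rng=random):
    """Position-based crossover (PBX) after Syswerda (sketch).

    Copy the genes of parent1 at a subset of positions to the child,
    then fill the remaining slots with the missing genes in the order
    they appear in parent2.
    """
    n = len(parent1)
    if positions is None:
        positions = {i for i in range(n) if rng.random() < 0.5}
    child = [None] * n
    for i in positions:
        child[i] = parent1[i]          # inherit these positions from parent1
    kept = set(child) - {None}
    filler = (g for g in parent2 if g not in kept)
    for i in range(n):
        if child[i] is None:           # fill the gaps in parent2's order
            child[i] = next(filler)
    return child
```

For parents [1, 2, 3, 4, 5, 6] and [2, 4, 6, 1, 3, 5] with positions {1, 3}, the child inherits genes 2 and 4 from the first parent and is filled to [6, 2, 1, 4, 3, 5], a valid permutation.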

Mutation
Mutation occurs after crossover is completed. This operator applies random changes to one or more genes to produce a new offspring; hence, it creates new solutions and helps the search avoid local optima. The percentage of genes that are mutated is denoted by Mu. The mutation probability (Pm) determines how many chromosomes (offspring) are mutated in one generation; Pm is in the range [0, 1]. In this paper, we used a combination of swap, reversion, and insertion mutation operators to produce the mutated offspring. The operators are selected randomly with equal probability; that is, every time the mutation function is called by the GA, one of the three operators is chosen at random with equal probability.
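The three operators and the equal-probability choice among them can be sketched as follows; applying each operator at two positions i ≤ j is an illustrative convention.

```python
import random

def swap(seq, i, j):
    """Exchange the genes at positions i and j."""
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def reversion(seq, i, j):
    """Reverse the segment between positions i and j (inclusive)."""
    s = seq[:]
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def insertion(seq, i, j):
    """Remove the gene at position i and reinsert it at position j."""
    s = seq[:]
    s.insert(j, s.pop(i))
    return s

def mutate(seq, rng=random):
    """Pick one of the three operators with equal probability and apply
    it at two random positions."""
    i, j = sorted(rng.sample(range(len(seq)), 2))
    op = rng.choice([swap, reversion, insertion])
    return op(seq, i, j)
```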

Replacement and Termination Condition
The replacement phase concerns the survivor selection from both the parent and the offspring populations. In this study, the population was updated based on elitism, in which the best individuals are selected from the parents and offspring. When adopting elitism, the GA preserves the solution with the best objective function value into the next generation [57].
Computational time and convergence of the fitness value are considered simultaneously when terminating the GA iterations. In this paper, the search process is terminated if the fitness reaches the LB, if there is no improvement in the best fitness value of the current population for a given number of successive iterations, or if the number of iterations reaches the maximum allowable number. The algorithm repeats until one of these termination conditions is satisfied.
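The elitist replacement and the three termination conditions can be sketched together as below; the names and argument shapes are illustrative, not the authors' implementation.

```python
def elitist_replacement(parents, offspring, pop_size, cmax):
    """Keep the pop_size best individuals (smallest Cmax) from the union
    of parents and offspring, so the best solution found is preserved."""
    return sorted(parents + offspring, key=cmax)[:pop_size]

def should_terminate(best, lb, stall, max_stall, iteration, max_iter):
    """Stop when the best fitness reaches the LB, when it has not
    improved for max_stall successive iterations, or at the iteration cap."""
    return best <= lb or stall >= max_stall or iteration >= max_iter
```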

Experimental Design
In this section, we describe the experiments performed to select the best parameters for the GA proposed in Section 3. In the next subsections, we present the indicators used for the evaluation, describe the test instances, and discuss the selection of the GA parameters.

Indicator of the Evaluation
The statistics used in the analysis of the computational experiments to assess the performance of the proposed algorithm are based on the proposed LB presented in Equation (9). For each instance, the relative distance to the LB was calculated. Thus, for instance i, the relative gap between Cmax_i and LB_i is calculated using the relative percentage deviation (RPD), denoted RPD_i:

RPD_i = 100 × (Cmax_i − LB_i) / LB_i.

In addition to the RPD, the computation time taken to solve each instance, CPUtime_i, is reported.
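The indicator is a one-liner in code:

```python
def rpd(cmax, lb):
    """Relative percentage deviation of a solution's makespan from the LB."""
    return 100.0 * (cmax - lb) / lb
```

For example, a makespan of 23 against a lower bound of 20 gives an RPD of 15%.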

Description of Test Instances
To study the effect of the scheduling problem constraints, an experimental study based on a factorial design was conducted using a set of instances. The test instances were generated randomly based on the data sets designed in previous studies. Two sets of processing times p_j were generated, similar to Liao et al. [42], uniformly in the intervals (a = 20, b = 50) and (a = 20, b = 100). Two sets of release times r_j were uniformly distributed in the intervals (1, a) and (1, bn/m), where a and b correspond to the p_j intervals, n is the number of jobs, and m is the number of machines (m = 2). The first set is similar to that in [37], while the second was generated so that jobs arrive throughout the scheduling plan, which makes the problem more practical. Two sets of delivery times q_j were generated with random values uniformly distributed in the intervals (1, 0.5b) and (1, 1.5b), considering the delivery time as a function of the processing time. Machine availability and unavailability periods were generated similarly to Liao et al. [42], considering t and s as functions of the processing times and the number of jobs. Two sets of machine availability periods t were generated such that t ∈ {(a + b)n/4, (a + b)n}, and two sets of machine unavailability periods s were generated such that s ∈ {(a + b)n/12, (a + b)n/6}. For every combination of p_j, r_j, q_j, t, and s, different problem sizes of n jobs were generated, where n ∈ {10, 20, 30, 40, 50, 100, 200, 300, 400, 500}. For each size, five instances were randomly generated. A summary of the instance characteristics is presented in Table 3.
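A generator following the distributions described above can be sketched as follows. Drawing integer values with `randint` and the flag names are assumptions for illustration; the paper does not state whether the generated times are integer-valued.

```python
import random

def generate_instance(n, a, b, wide_release, long_delivery, long_avail, rng=random):
    """Random instance in the style of the described design (sketch).

    p_j ~ U(a, b); r_j ~ U(1, a) or U(1, b*n/m); q_j ~ U(1, 0.5b) or
    U(1, 1.5b); t and s are functions of (a + b) and n.
    """
    m = 2
    p = [rng.randint(a, b) for _ in range(n)]
    r_hi = (b * n) // m if wide_release else a
    r = [rng.randint(1, r_hi) for _ in range(n)]
    q_hi = int(1.5 * b) if long_delivery else int(0.5 * b)
    q = [rng.randint(1, q_hi) for _ in range(n)]
    t = (a + b) * n if long_avail else (a + b) * n // 4
    s = (a + b) * n // 6  # or (a + b) * n // 12 for the low level
    return p, r, q, t, s
```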

Response Surface Methodology
The influence of the GA parameters discussed in Section 3 on its performance was evaluated through an experimental design approach. RSM was used for the parametric study and optimization of the GA parameters for efficient solving of the scheduling problem under study. In this work, RSM's face-centered central composite (FCC) design, a second-order experimental design with three levels, was employed to analyze the effects of the five selected GA parameters on the output responses (RPD_i and CPUtime_i). Based on the literature and a preliminary screening, five GA parameters and their respective levels were selected, as shown in Table 4. The design consists of 52 runs, including 32 cube points, 10 axial points, and 10 center points. Analysis of variance (ANOVA) was performed to investigate the significance of the GA parameters and their interactions in relation to the RPD and central processing unit (CPU) time. P-values less than 0.05 indicate that model terms are significant. Model reduction was performed using forward elimination to improve the model by deleting insignificant terms without a statistically significant loss of fit. The data sets used to experimentally optimize the GA's parameters were selected from the generated instances described in Section 4.2, taking into account the sample size, n ∈ (10, 500). The GA was run 10 times, and the average RPD and CPU time were reported for the selected instances.
Tables 5 and 6 present the reduced ANOVA tables for the RPD and CPU time, respectively. For the RPD, the population size, mutation ratio, and beta are statistically significant factors, as their P-values are less than 0.05 (Table 5). The interactions of the population size with the mutation probability and with beta are statistically significant. For the CPU time, the population size, mutation rate, and mutation probability are statistically significant (Table 6). The interactions of the population size with the mutation probability and the mutation rate, and the interaction of the mutation probability with the mutation rate, are statistically significant. Note that the two-way interactions have a contribution of 36.04%, which indicates that a second-order model should be adopted for optimizing the GA parameters. It is worth noting that the normality assumption is satisfied, and the adjusted R^2 values for the RPD and CPU time models are 90.14% and 82.45%, respectively. Mathematical models in terms of the actual factors were obtained for prediction and multiobjective optimization; the models of the RPD and CPU time are given in Equations (10) and (11). The desirability function embedded in the Minitab software was used for multiobjective optimization of the GA parameter settings. The desirability function is a mathematical method for multiple response optimization proposed by Derringer and Suich [58]. Each response is transformed into a desirability value that ranges from 0 to 1, where 0 is least desirable, and the overall desirability of the multi-response system is measured by combining the individual desirabilities of the responses. The objective is to find parameter settings that maximize the overall desirability; in this study, this means simultaneously minimizing the RPD and CPU time. Table 7 shows the optimal combination of GA parameter values resulting from the multiobjective optimization, with an overall desirability of 0.9867.

Results and Discussions
In this section, the effect of the scheduling problem variables on the RPD and CPU time is investigated. A full factorial experimental study was conducted using the set of randomly generated instances described in Section 4.2, based on five two-level factors, namely, processing times (p), release times (r), delivery times (q), availability period (t), and unavailability period (s), together with the number of jobs (n) at 10 levels (Table 3). This resulted in 320 instance configurations, and every configuration was generated five times, for a total of 1600 instances. The GA was run five times for every instance, and the average C_max and CPU time were reported.
The proposed GA has been coded using MATLAB software (The MathWorks Inc., MA, USA), and the computational experiments for all instances have been conducted on the same computer with the following specifications: Intel ® Core™ i5-3230M at 2.6 GHz for the CPU processor (Intel Corporation, CA, USA) and 4.0 GB for RAM.
The computational results of the GA for the RPD and CPU times are given in Tables 8 and 9, respectively. The results are also presented in Figure 3a–f. The following notation is used in Table 8, Table 9, and Figure 3:
• n denotes the number of jobs.
• 1 and 2 refer to the low and high levels, respectively, of the variables p, r, q, t, and s.
• RPD-1 and RPD-2 refer to the RPD at the low and high levels, respectively, of the variables p, r, q, t, and s.
• CPU-1 and CPU-2 refer to the CPU time at the low and high levels, respectively, of the variables p, r, q, t, and s.
The computational results for the RPD and CPU time are summarized in Tables 8 and 9. It is evident from Table 8 that the release time factor has the greatest effect on the RPD. Increasing the release time interval from the low level (U(1, a)) to the high level (U(1, bn/m)) makes it very difficult for the problem to reach the LB and leads to high CPU times. The maximum error (RPD) at the low level of r was 1.71%, while the error reaches 18.50% at the high level of r. The maximum CPU time at the low level of r was 55.7 seconds, while the CPU time reached 1113.9 seconds at the high level of r.
However, the RPD and CPU time are relatively reduced at the high level of r when the availability period is high ((a + b)n). The CPU time increases significantly with an increase in n. Figures 3 and 4 show the effect of these variables on the average RPD and CPU time as the number of jobs changes. Figure 3a shows that an increase in the number of jobs increases the RPD and CPU time. Figure 3b shows the average RPD and CPU time at the low and high levels of p; there is no real difference between them, as the trends at low and high p are very close to each other. Figure 3c shows a significant effect of the r variable on the average RPD and CPU time. Figure 3d shows the average RPD and CPU time at the low and high levels of q; again, there is no real difference between them, as the trends at low and high q are very close to each other. Figure 3e shows a significant effect of the t variable on the average RPD; however, the effect of t on the CPU time is significant only at a high number of jobs. Figure 4b shows that the processing time, availability period, and delivery time are inversely related to the RPD, whereas the unavailability period, release time, and sample size are directly related to the RPD. Moreover, the interaction plots of the factors that affect the RPD and CPU time are shown in Figure 5. Figure 5a shows a significant interaction between the availability period and the release time with respect to the CPU time: the longer the availability period, the lower the CPU time at high release times. Furthermore, at high release times the CPU time grows faster with increasing sample size. Figure 5b shows the interaction plot for the RPD; it is worth noting that almost the same interaction effects seen for the CPU time apply to the RPD.

The effects of the scheduling problem input variables were also studied statistically using ANOVA. In this section, ANOVA is used to investigate the effect of the five factors on each response, starting with the CPU time. Table 10 shows the ANOVA table for the CPU time. Note that the CPU time data were transformed using a Box-Cox transformation with λ = 0.5 (square root) to meet the normality assumption. Model fitting revealed that a two-way interaction model fits the data best. P-values less than 0.05 indicate significant model terms: in this case, the processing times (p), release times (r), availability period (t), delivery times (q), and sample sizes (n) are significant, i.e., these factors significantly affect the transformed CPU time. P-values greater than 0.05 indicate non-significant terms. Moreover, the interactions between the release times (r) and each of the processing times (p), the availability period (t), the delivery times (q), and the sample sizes (n) significantly affect the CPU time.
This is also true for the interaction between the sample sizes (n) and the availability period (t). It is worth noting that the adjusted R² is 98.17%, which indicates an excellent representation of the variability in the data by the model terms; the model accounts for more than 98% of the total variability. The largest effect is contributed by the sample size (n) factor (48.47%), followed by the interaction between the release times (r) and the sample sizes (n) (25.01%); the release times (r) contribute 20.65%. Table 11 presents the ANOVA results for the RPD. The RPD data were also transformed using a square-root transformation to meet the normality assumption. Since there are more than 300 data points, the normality assumption was verified using the skewness and kurtosis values [59]. The results reveal that the processing times (p), unavailability period (s), availability period (t), release times (r), and sample sizes (n) all have P-values less than 0.05 and thus significantly affect the RPD, i.e., the quality of the solution obtained by the GA. Moreover, the interactions between the processing times (p) and the unavailability period (s); the processing times (p) and the release times (r); the unavailability period (s) and the availability period (t); the unavailability period (s) and the release times (r); the availability period (t) and the release times (r); the delivery times (q) and the release times (r); and the release times (r) and the sample sizes (n) all significantly affect the solution quality. The model shows a good R² value of 0.83, indicating that most of the variability in the data is captured by the model terms, which account for more than 83% of the total variability.
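The percentage contributions reported in the ANOVA tables come from sums of squares on the transformed response. A minimal sketch, using hypothetical CPU-time observations, of how a main effect and its contribution can be computed for a single two-level factor (the real analysis fits a full two-way interaction model over all factors):

```python
import math

# Hypothetical CPU-time observations (seconds) at the low and high
# levels of one two-level factor, e.g. the release times r.
low = [10.2, 12.5, 11.8, 9.9]
high = [88.4, 95.1, 90.7, 93.3]

# Square-root (Box-Cox, lambda = 0.5) transform, as used in the paper
# to meet the normality assumption.
t_low = [math.sqrt(y) for y in low]
t_high = [math.sqrt(y) for y in high]

def mean(xs):
    return sum(xs) / len(xs)

# Main effect of the factor on the transformed response.
effect = mean(t_high) - mean(t_low)

# Factor sum of squares versus total sum of squares; their ratio is
# the percentage contribution reported in the ANOVA table.
grand = mean(t_low + t_high)
ss_factor = (len(t_low) * (mean(t_low) - grand) ** 2
             + len(t_high) * (mean(t_high) - grand) ** 2)
ss_total = sum((y - grand) ** 2 for y in t_low + t_high)
contribution = 100.0 * ss_factor / ss_total
print(f"effect = {effect:.3f}, contribution = {contribution:.1f}%")
```

With these made-up numbers nearly all the variability sits between the two levels, so the contribution is close to 100%; in the paper's full model the contributions are split across several factors and interactions.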
However, the release times (r) (45.06%) and their interaction with the availability period (t) (12.81%) contribute the largest effects. Furthermore, the results were visualized to show the direction of the effects of the factors and their interactions on each response. Figure 4 shows the main effects of the studied factors on the RPD and CPU time. Figure 4a shows that the longer the availability period (t), the less CPU time is needed to obtain a good solution, regardless of the other factors. Furthermore, the first level of the release time factor (r) requires less CPU time, because orders are received over a short period at this level. In addition, the larger the sample size, the more CPU time is needed. Figure 4b shows that the processing time, availability period, and delivery time are inversely related to the RPD, whereas the unavailability period, release time, and sample size are directly related to the RPD. The interaction plots of the factors affecting the RPD and CPU time are given in Figure 5. Figure 5a shows a significant interaction between the availability period and the release time on the CPU time: at a high release time level, a longer availability period reduces the CPU time, while the CPU time grows with the sample size. Figure 5b shows the interaction plot for the RPD; almost the same interaction effects observed for the CPU time apply to the RPD.

Conclusions
The scheduling problem of two identical parallel machines subjected to release times and delivery times, where the machines are periodically unavailable, is tackled in this study. The machines were assumed to be undergoing periodic maintenance rather than being always available, which makes the scheduling problem more practical. The objective is to schedule all jobs such that the C max is minimized. An LB was proposed for the problem, and a GA was proposed to solve it, since the problem is NP-hard. The GA performance was evaluated in terms of the RPD (the relative distance to the LB) and CPU time. For better performance of the GA, RSM was used to optimize the GA parameters, namely, population size (Popsize), crossover probability (Pc), mutation probability (Pm), mutation ratio (Mu), and pressure selection (Beta), while minimizing the RPD and CPU time. The optimization of the two responses (RPD and CPU time) was performed using desirability analysis. The optimized settings of the GA parameters were used to further evaluate the scheduling problem. A factorial design of the scheduling problem input variables, namely, processing times, release times, delivery times, availability and unavailability periods, and the number of jobs, was used to evaluate the GA performance.
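The desirability analysis mentioned above combines the two responses into one score to be maximized. A minimal sketch of the standard smaller-the-better (Derringer-Suich) desirability and its geometric-mean composite; the target and upper-limit values below are purely illustrative, not those used in the paper:

```python
def desirability_minimize(y: float, target: float, upper: float,
                          weight: float = 1.0) -> float:
    """Smaller-the-better desirability (Derringer-Suich form).

    Returns 1 at or below the target, 0 at or beyond the upper limit,
    and a weighted linear ramp in between. Target, upper limit, and
    weight here are hypothetical illustration values.
    """
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return ((upper - y) / (upper - target)) ** weight

def composite_desirability(ds):
    """Overall desirability: geometric mean of the individual values."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Example: score one GA parameter setting on both responses.
d_rpd = desirability_minimize(5.0, target=0.0, upper=20.0)     # RPD in %
d_cpu = desirability_minimize(60.0, target=10.0, upper=120.0)  # CPU in s
print(composite_desirability([d_rpd, d_cpu]))
```

Because the composite is a geometric mean, a parameter setting that drives either response to its worst limit scores zero overall, which is why the tuned settings must balance the RPD against the CPU time rather than optimize one alone.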
The results show that the GA parameters have significant effects on the RPD and CPU time, with R² equal to 90.14% and 82.45%, respectively. Furthermore, the interactions among the GA parameters contribute substantially to the RPD and CPU time. To minimize the RPD and CPU time, the GA parameters should be set as follows: Popsize = 200, Pc = 0.9, Pm = 0.14, Mu = 0.001, and Beta = 1. Regarding the scheduling problem input variables, increasing the release time interval makes the LB very difficult to reach, and it significantly increases the CPU time when the number of jobs is high. Increasing the number of jobs increases the CPU time. Decreasing the availability periods significantly increases the RPD and CPU time when the number of jobs exceeds 50.
The release times, the availability period, and their interaction affect the RPD the most. Finally, the variables that most affect the CPU time are the release times, the sample sizes, and their interaction.