Article

A Trajectory-Based Immigration Strategy Genetic Algorithm to Solve a Single-Machine Scheduling Problem with Job Release Times and Flexible Preventive Maintenance

1 College of Mechanical and Electrical Engineering, Wenzhou University, Wenzhou 325035, China
2 Department of Hotel Management, Vanung University, Taoyuan 32061, Taiwan
* Author to whom correspondence should be addressed.
Algorithms 2023, 16(4), 207; https://doi.org/10.3390/a16040207
Submission received: 18 February 2023 / Revised: 4 April 2023 / Accepted: 10 April 2023 / Published: 12 April 2023

Abstract

This paper considers the single-machine problem with job release times and flexible preventive maintenance activities to minimize total weighted tardiness, a complicated scheduling problem for which many algorithms have been proposed in the literature. However, the considered problem is rarely solved by genetic algorithms (GAs), even though GAs have successfully solved various complicated combinatorial optimization problems. For this problem, we propose a trajectory-based immigration strategy, in which immigrants are generated from the information recorded in solution extraction knowledge matrices. We embed this immigration strategy into the GA to improve the diversification of the population. To examine the performance of the proposed GA, two other versions of the GA (a GA without immigration and a GA with random immigration) and a mixed integer programming (MIP) model are also developed. Comprehensive experiments demonstrate the effectiveness of the proposed GA by comparing it with the MIP model and the two other GA versions. Overall, the proposed GA significantly outperforms the other GA methods in terms of solution quality due to the trajectory-based immigration strategy.

1. Introduction

In this paper, we consider the single-machine problem with job release times and machine unavailable periods, where machine unavailable periods are caused by flexible preventive maintenance (PM) activities. For classical single-machine scheduling problems, most research assumes that all jobs are ready for processing simultaneously or that machines are always available to simplify the complexity of scheduling problems. These two assumptions may impede many possible practical applications, and some studies have demonstrated that there is a need to consider the dynamic job release time [1] or machine unavailable periods [2,3]. Both are common phenomena in the real world and are significant factors in production scheduling decisions. That is, taking into consideration jobs’ release time and machine unavailable periods, a given production scheduling problem can be solved more realistically.
For the considered problem, more precisely, there are n jobs released to the production system at different times and waiting to be processed on a single machine without preemption. The machine is not always available; it needs to be maintained periodically to prevent its continuous working time from exceeding a specific threshold value and to initialize the machine’s status. To the best of our knowledge, only three studies have considered dynamic job release times and machine availability constraints simultaneously for the single-machine scheduling problem. Detienne [4] was the first to consider this type of problem and proposed a MIP model to minimize the weighted number of late jobs; in that study, the machine unavailable periods were fixed and known in advance. Cui and Lu [5] considered dynamic job release times for the machine availability scheduling problem. The main difference from the study of Detienne [4] is that PM is treated as a decision variable in scheduling planning rather than being fixed and known in advance. For this problem, the researchers proposed a MIP model, heuristic algorithms and a branch and bound (BAB) method to minimize the makespan. Pang et al. [6] considered the single-machine maintenance scheduling problem with dynamic job release times; their study was motivated by a cleaning operation in semiconductor manufacturing, in which the machine has to stop periodically so that accumulated dirt can be removed. For this problem, the researchers proposed a scatter simulated annealing algorithm to simultaneously minimize the total weighted tardiness and total completion time.
Our considered problem is the same as that of Cui and Lu [5] mentioned above; their objective is to minimize the makespan, which implies maximizing the throughput of the system. With the increasing importance of time-related competition and customer satisfaction, production performance based on due dates has become more significant. Thus, the objective we adopt is to minimize the total weighted tardiness (TWT), responding to the need for on-time delivery in just-in-time (JIT) production, which is one of the most relevant objectives in today’s manufacturing environments. Moreover, the TWT objective has seldom been considered by researchers for this problem. The problem is also NP-hard because its special case without machine maintenance constraints, the single-machine scheduling problem of minimizing total weighted tardiness, has been proven NP-hard [7].
For the NP-hard problem considered in this paper, we propose a trajectory-based immigration strategy genetic algorithm. The main reason is that different genetic algorithms (GAs) have been implemented successfully in many complicated scheduling problems but are seldom applied to the considered problem. Additionally, an immigration strategy is one of the common ways to keep the diversity of the population and avoid local convergence. Thus, we develop a novel trajectory-based immigration strategy containing three different solution knowledge extraction matrices for collecting important information from the searched chromosomes and embed the immigration strategy into the proposed GA method to generate better immigrants. Furthermore, we develop a mixed integer programming (MIP) model to obtain benchmark solutions to evaluate the performance of the proposed GA.
The rest of this paper is organized as follows. In Section 2, we review previous related studies. We define the considered problem and formulate a mixed integer programming model to minimize the total weighted tardiness in Section 3. In Section 4, we describe the trajectory-based GA proposed in this paper. Section 5 describes the computational experiments, including the parameter settings of the GA, the test data generation scheme and the experimental results. In Section 6, we discuss the results obtained. Finally, Section 7 contains our conclusions and directions for future research.

2. Literature Review

Regarding scheduling problems with machine availability constraints, hundreds of contributions have been developed in the literature. Most machine unavailability is caused by preventive maintenance. Preventive maintenance (PM) is designed as a prior measure to reduce the probability of failure or degradation, and the activation of PM tasks in scheduling problems is usually classified into two categories: (i) PM tasks are performed at a fixed interval or within a time window, or (ii) PM tasks are carried out depending on certain monitored conditions. Table 1 presents a brief review of work on single-machine scheduling problems with PM tasks based on these two categories. As seen from Table 1, previous studies mainly focused on the first category with different objective functions. Additionally, Ma et al. [8] provided a detailed review and classification of papers dealing with deterministic scheduling problems related to fixed PM tasks in different manufacturing shop floors. This information indicates increasing interest in this field over the past several decades.
The references most pertinent to our work are Qi et al. [9], Sbihi and Varnier [2], and Cui and Lu [5]. In these studies, the PM task is driven by monitoring the machine’s current working time to ensure that it does not exceed a preset critical time threshold and to initialize the status of the machine. Qi et al. [9] proposed three heuristic algorithms and a branch and bound (BAB) method to minimize the total completion time. Sbihi and Varnier [2] considered the two categories mentioned in Table 1 and proposed a BAB method to minimize maximum tardiness. Both of the above studies assumed that all jobs were ready at time 0. Cui and Lu [5] first considered the dynamic case of job release times in the single-machine problem. They proposed a mixed integer programming (MIP) model, a BAB method and a heuristic algorithm to minimize the makespan.
Another PM task, motivated by the wafer cleaning operation of a semiconductor manufacturing factory, was proposed by Su and Wang [10], where a machine has to be maintained periodically so that the amount of dirt left on the machine does not exceed a preset critical dirt threshold. Su and Wang [10] developed a MIP model and a dynamic programming-based heuristic algorithm to minimize the total absolute deviation of job completion times. Later, Su et al. [11] extended the single-machine problem to a parallel machine problem and developed a MIP model and three heuristic algorithms to minimize the number of tardy jobs. Pang et al. [6] extended the study of Su and Wang [10] to consider job release times and two criteria (total weighted tardiness and total completion time). They proposed a scatter simulated annealing (SSA) algorithm to obtain nondominated solutions.
From Table 1, it is evident that our considered problem, i.e., the dynamic single-machine scheduling problem with PM tasks, where PM tasks are driven by a threshold value on the machine’s continuous working time and the objective is the total weighted tardiness, has not been studied so far. The considered problem is NP-hard since the static single-machine problem with the total weighted tardiness objective, where the machine is always available, has been proven NP-hard [7]. For this kind of NP-hard problem, traditional methodologies, such as heuristic algorithms or exact algorithms, suffer in terms of either solution effectiveness or computational efficiency. In recent years, various GAs based on global exploration and local exploitation search mechanisms have, due to their flexibility, been applied more successfully than traditional approaches in solving NP-hard problems [12].
Table 1. Two categories of related studies for the considered problem.
Objectives | First Category | Second Category
Makespan | [3,13,14,15,16] | [5]
Total completion time or total flow time | [14,17,18,19,20] | [9]
Total weighted completion time | [21,22,23,24,25] | [26]
Total absolute deviation of job completion times | | [10]
Maximum lateness | [14] | [26]
Maximum earliness | [27] |
Maximum tardiness | [2,20,28] | [2]
Mean lateness | [20] |
Mean tardiness | [20] |
Number of tardy jobs | [14,29,30,31] | [11]
Weighted number of late jobs | [4] |
Bicriteria (total weighted tardiness and total completion time) | | [6]
Compared with classical scheduling problems, the use of meta-heuristic algorithms to solve single-machine scheduling problems with machine unavailability constraints has been very limited, with only a few studies to date. Pang et al. [6] considered a single-machine scheduling problem in which PM tasks are driven by accumulated dirt and adopted the total weighted tardiness and total completion time simultaneously as objectives; they proposed a scatter simulated annealing (SSA) algorithm to obtain nondominated solutions. Chen et al. [32] developed a GA to solve a single-machine scheduling problem minimizing total tardiness, where machine availability is measured by its reliability. Motivated by the success of meta-heuristic algorithms in solving scheduling problems with machine availability constraints, we propose a new GA equipped with knowledge of the solution trajectory, presenting a trajectory-based immigration strategy to enhance the effectiveness of our GA.

3. Problem Description and Methodology

3.1. Problem Description

Let J = {J_j | j = 1, 2, …, n} be the set of n jobs that are scheduled on a single machine. To keep the machine in good condition, the machine’s continuous working time cannot exceed a maximum specific time L. As a result, it is necessary to perform maintenance activity irregularly to initialize the machine. The maintenance time is MT. This paper considers non-preemptive and non-resumable cases; all jobs must be processed without interruption, and a job should be finished before a maintenance activity without restarting. Additionally, we assume that a job has a processing time, release time and due date for which data can be estimated in advance from the manufacturing execution system (MES). The objective is to minimize the total weighted tardiness (TWT) subject to the given job release times and maintenance constraint. According to the standard machine scheduling classification, the problem can be denoted as $1 \mid r_j, nr, fpm \mid \sum w_j T_j$.

3.2. Mixed Integer Programming (MIP) Model

Based on the above description, we formulate a MIP model for the $1 \mid r_j, nr, fpm \mid \sum w_j T_j$ problem. The parameters and variables used in the model are as follows:
  • Parameters
    n: number of jobs
    $J_j$: job j
    L: maximum working time limit
    MT: maintenance time, which is a constant
    M: a very large positive integer constant
    $r_j$: release time of job j
    $p_j$: processing time of job j
    $w_j$: weight of job j
    $d_j$: due date of job j
  • Decision variables
    $C_j$: completion time of job j
    $T_j$: tardiness of job j, where $T_j = \max(0, C_j - d_j)$
    $X_{jk}$: 1 if job j is assigned to position k in the sequence, 0 otherwise
    $Y_k$: 1 if a maintenance activity is assigned after position k in the sequence, 0 otherwise
    $Q_k$: continuous working time of the machine after position k in the sequence
    $ST_k$: start time of processing at position k in the sequence
    $PT_k$: processing time of the job assigned to position k in the sequence
    $CT_k$: completion time at position k in the sequence
To obtain a feasible schedule, variables X j k and Y k are binary and decide which job is assigned at position k and whether a maintenance activity is assigned after position k, as shown in Figure 1. This is based on the following constraints.
Each job must be arranged into exactly one position
$\sum_{k=1}^{n} X_{jk} = 1, \quad j = 1, 2, \dots, n$
Each position must be occupied by exactly one job
$\sum_{j=1}^{n} X_{jk} = 1, \quad k = 1, 2, \dots, n$
For each position k, the start time for the processing job can be given by
$ST_k \geq \sum_{j=1}^{n} (r_j \cdot X_{jk}), \quad k = 1, 2, \dots, n$
$ST_k \geq CT_{k-1} + (MT \cdot Y_{k-1}), \quad k = 2, 3, \dots, n$
For each position k, the processing time for the assigned job can be given by
$PT_k = \sum_{j=1}^{n} (p_j \cdot X_{jk}), \quad k = 1, 2, \dots, n$
For each position k, the completion time of position k should be satisfied by
$CT_k \geq ST_k + PT_k, \quad k = 1, 2, \dots, n$
For the first position, the continuous working time can be given by
$Q_1 = \sum_{j=1}^{n} (p_j \cdot X_{j1})$
For each position k, excluding the first position, the continuous working time can be given by
$Q_{k-1} + \sum_{j=1}^{n} (p_j \cdot X_{jk}) \leq Q_k + (M \cdot Y_{k-1}), \quad k = 2, 3, \dots, n$
$\sum_{j=1}^{n} (p_j \cdot X_{jk}) \leq Q_k + M \cdot (1 - Y_{k-1}), \quad k = 2, 3, \dots, n$
In the above two constraints, we apply the value M, a very large positive constant, to obtain the continuous working time for each position. If $Y_{k-1} = 0$, then the continuous working time at position k, $Q_k$, is forced to be $Q_{k-1} + \sum_{j=1}^{n} (p_j \cdot X_{jk})$; on the other hand, if $Y_{k-1} = 1$, then it is forced to be $\sum_{j=1}^{n} (p_j \cdot X_{jk})$.
For each position, the continuous working time should satisfy
$Q_k \leq L, \quad k = 1, 2, \dots, n$
At the end of the sequence, there is no need to maintain the machine:
$Y_n = 0$
For each job, the completion time can be given by
$C_j + M \cdot (1 - X_{jk}) \geq CT_k, \quad j = 1, 2, \dots, n; \; k = 1, 2, \dots, n$
For each job, tardiness can be given by
$T_j \geq C_j - d_j, \quad j = 1, 2, \dots, n$
In this model, our goal is to minimize total weighted tardiness, which is given by
$\min \; \sum_{j=1}^{n} (w_j \cdot T_j)$
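To make the position-based formulation above concrete, the following is a minimal sketch of the model in Python with the open-source PuLP interface. This is an illustrative reconstruction only: the authors solved the model with CPLEX, and the names used here (build_twt_pm_mip, bigM, etc.) are assumptions rather than the paper's code.

```python
# A minimal PuLP sketch of the position-based MIP above (assumption: the authors used
# CPLEX 12.7.1; names such as build_twt_pm_mip and bigM are illustrative only).
import pulp

def build_twt_pm_mip(r, p, w, d, L, MT, bigM=10_000):
    n = len(p)
    J, K = range(n), range(n)                                  # jobs and positions
    m = pulp.LpProblem("TWT_flexible_PM", pulp.LpMinimize)

    X = pulp.LpVariable.dicts("X", (J, K), cat="Binary")       # job j at position k
    Y = pulp.LpVariable.dicts("Y", K, cat="Binary")            # PM right after position k
    Q = pulp.LpVariable.dicts("Q", K, lowBound=0)              # continuous working time
    ST = pulp.LpVariable.dicts("ST", K, lowBound=0)
    PT = pulp.LpVariable.dicts("PT", K, lowBound=0)
    CT = pulp.LpVariable.dicts("CT", K, lowBound=0)
    C = pulp.LpVariable.dicts("C", J, lowBound=0)
    T = pulp.LpVariable.dicts("T", J, lowBound=0)

    m += pulp.lpSum(w[j] * T[j] for j in J)                    # objective: total weighted tardiness

    for j in J:
        m += pulp.lpSum(X[j][k] for k in K) == 1               # each job takes one position
    for k in K:
        m += pulp.lpSum(X[j][k] for j in J) == 1               # each position takes one job
        m += ST[k] >= pulp.lpSum(r[j] * X[j][k] for j in J)    # respect release times
        m += PT[k] == pulp.lpSum(p[j] * X[j][k] for j in J)
        m += CT[k] >= ST[k] + PT[k]
        m += Q[k] <= L                                         # working-time threshold
        if k >= 1:
            m += ST[k] >= CT[k - 1] + MT * Y[k - 1]            # wait for PM if one is scheduled
            m += Q[k - 1] + PT[k] <= Q[k] + bigM * Y[k - 1]    # accumulate work if no PM
            m += PT[k] <= Q[k] + bigM * (1 - Y[k - 1])         # reset work after a PM
    m += Q[0] == PT[0]
    m += Y[n - 1] == 0                                         # no PM after the last position
    for j in J:
        for k in K:
            m += C[j] + bigM * (1 - X[j][k]) >= CT[k]          # link job and position times
        m += T[j] >= C[j] - d[j]                               # tardiness definition
    return m
```

Calling m.solve() then returns a schedule for small instances; for larger n the model quickly becomes intractable, as reported in Section 5.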
It is worth noting from Figure 1 that the considered problem here involves two interrelated sets of decisions: how to sequence the jobs and when to execute PM activities. Implicitly, the decision of PM activities may affect the objective value, even for the same job sequence. To describe this phenomenon, suppose that the job sequence of J1-J3-J4-J5-J2 is given for the 5-job instance in Table 2, and two different PM decision methods are used for the job sequence.
The first PM decision method is inspired by the first fit (FF) concept for bin packing problems [33]: the jobs are assigned to the machine in order as long as the continuous working time of the machine does not exceed the threshold value L; otherwise, a PM is inserted first. Based on the FF concept, the obtained TWT value is 51, according to the Gantt chart in Figure 2. The second PM decision method is motivated by the dynamic programming (DP) method for batch problems [34,35]. Using the DP method, the obtained TWT value is 41, according to the Gantt chart in Figure 3. For the sake of brevity, the detailed steps of the DP method are demonstrated in Appendix A.
From this example, adopting the DP method as the PM decision method is better than adopting the FF method. Moreover, the two sets of decisions affect each other. In this paper, the sequence of jobs is encoded in the form of a chromosome, and the DP method is then used as the decoding method to obtain the objective value of any chromosome in our GA.
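As an illustration of the first-fit PM decision just described, the following is a minimal Python sketch (the function name decode_first_fit and the data arrays r, p, d, w are assumptions for illustration; run on the Table 2 instance with the values used in Appendix A, it reproduces the TWT of 51 shown in Figure 2).

```python
# First-fit (FF) style decoding sketch: jobs are processed in chromosome order, and a PM of
# length MT is inserted only when the next job would push the machine's continuous working
# time beyond L. Data names (r, p, d, w) are illustrative only.
def decode_first_fit(seq, r, p, d, w, L, MT):
    t = 0.0        # current time on the machine
    work = 0.0     # continuous working time since the last PM
    twt = 0.0
    for j in seq:
        if work + p[j] > L:          # the job does not fit: perform PM first
            t += MT
            work = 0.0
        start = max(t, r[j])         # respect the job's release time
        t = start + p[j]
        work += p[j]
        twt += w[j] * max(0.0, t - d[j])
    return twt
```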

4. The Proposed GA

GAs are well-known stochastic search algorithms for solving combinatorial optimization problems. The original idea was developed by Holland [36]. In a GA, a population is maintained by selection, crossover and mutation operators until a stopping criterion is satisfied and an optimal/best solution is obtained. However, a GA is likely to be trapped in local optima [37]. As a result, immigration strategies, such as random immigrants [38] and elitism-based immigrants [39], have been proposed to enhance the diversity of chromosomes in the population [40]. In this paper, we develop a trajectory-based immigrant scheme to maintain the diversity of chromosomes in each population.
The trajectory-based immigration scheme differs from the random-based and elitism-based immigrant schemes. The former generates immigrants completely at random, whereas the latter adopts the elite chromosome as a base from which to generate immigrants with better solution quality. In this paper, we develop a solution-characteristic preservation technique based on solution extraction knowledge matrices that record, for each feasible solution, the relations between job and position, between job and job, and from job to job. The solution extraction knowledge matrices are called the job-position trajectory (JPT), job-job trajectory (JJT) and from-to trajectory (FTT) matrices. Based on the information provided by these three matrices, we develop a trajectory-based immigration scheme such that the generated immigrants strike a balance between randomness and solution quality for the GA. The steps for building the three trajectory matrices are described as follows.
Step 1. Generate feasible schedules $\pi_y$ randomly and obtain the corresponding TWT value $x_y$, y = 1, 2, …, N, where N is the population size.
Step 2. Calculate the mean $\bar{x} = \sum_{y} x_y / N$ and standard deviation $\sigma = \sqrt{\sum_{y} (x_y - \bar{x})^2 / (N-1)}$ of this group.
Step 3. Obtain a semaphore value $s_y$ for each schedule $\pi_y$ by normalization, i.e., $s_y = (\bar{x} - x_y)/\sigma$. Note that a larger semaphore value is better when minimizing the TWT.
Step 4. Initialize matrices CJP, CJJ, CFT, SJP, SJJ and SFT.
Step 5. Complete the count matrix CJP by counting how many times job i occupies position j over the schedules $\pi_y$; in a similar fashion, complete matrix CJJ by counting how many times job i precedes job j, and matrix CFT by counting how many times job i is immediately followed by job j (i.e., jobs i and j are adjacent).
Step 6. Complete the semaphore matrix SJP by accumulating the semaphore value $s_y$ whenever job i occupies position j in schedule $\pi_y$; in a similar fashion, complete matrix SJJ whenever job i precedes job j, and matrix SFT whenever job i is immediately followed by job j.
Step 7. Obtain each element of the job-position trajectory (JPT) matrix by the following equation:
$JPT_{ij} = \begin{cases} 0 & \text{if } CJP_{ij} = 0 \\ SJP_{ij} / CJP_{ij} & \text{otherwise} \end{cases}, \quad i, j = 1, 2, \dots, n$
Step 8. Obtain each element of the job-job trajectory (JJT) matrix by the following equation:
$JJT_{ij} = \begin{cases} 0 & \text{if } CJJ_{ij} = 0 \\ SJJ_{ij} / CJJ_{ij} & \text{otherwise} \end{cases}, \quad i, j = 1, 2, \dots, n$
Step 9. Obtain each element of the from-to trajectory (FTT) matrix by the following equation:
$FTT_{ij} = \begin{cases} 0 & \text{if } CFT_{ij} = 0 \\ SFT_{ij} / CFT_{ij} & \text{otherwise} \end{cases}, \quad i, j = 0, 1, 2, \dots, n$
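A compact NumPy sketch of Steps 1–9 is shown below. It is an illustrative reading of the procedure, not the authors' code: jobs are indexed 0 to n−1, index 0 of the FTT matrix is reserved for the dummy start/end position, and the helper name build_trajectory_matrices is an assumption.

```python
# Sketch of Steps 1-9: building the JPT, JJT and FTT trajectory matrices from a set of
# random schedules and their TWT values (illustrative reading, names assumed).
import numpy as np

def build_trajectory_matrices(schedules, twt_values, n):
    x = np.asarray(twt_values, dtype=float)
    s = (x.mean() - x) / x.std(ddof=1)             # semaphore: larger = better for minimization
    CJP, SJP = np.zeros((n, n)), np.zeros((n, n))
    CJJ, SJJ = np.zeros((n, n)), np.zeros((n, n))
    CFT, SFT = np.zeros((n + 1, n + 1)), np.zeros((n + 1, n + 1))
    for seq, sy in zip(schedules, s):
        for pos, job in enumerate(seq):            # job-position relation
            CJP[job, pos] += 1
            SJP[job, pos] += sy
        for a in range(n):                         # job-before-job relation
            for b in range(a + 1, n):
                CJJ[seq[a], seq[b]] += 1
                SJJ[seq[a], seq[b]] += sy
        path = [0] + [j + 1 for j in seq] + [0]    # from-to relation; 0 = dummy start/end
        for a, b in zip(path, path[1:]):
            CFT[a, b] += 1
            SFT[a, b] += sy
    avg = lambda S, C: np.divide(S, C, out=np.zeros_like(S), where=C > 0)
    return avg(SJP, CJP), avg(SJJ, CJJ), avg(SFT, CFT)   # JPT, JJT, FTT
```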
To demonstrate the three types of trajectory forms (job-position, job-job and from-to), as an example we use the 8-job instance in Table 3 and randomly generate 1000 solutions. Applying the above steps, the JPT, JJT and FTT matrices are built, as shown in Table 4, Table 5 and Table 6.
To validate whether the three matrices can help us to find good immigrants, we applied correlation analysis to realize the correlation between the objective value and matrices. The steps are described as follows:
Step 1. For a given instance, generate K feasible solutions $\pi_y$ randomly and obtain the objective value $x_y$ of solution y, y = 1, 2, …, K.
Step 2. For each solution $\pi_y = \{J_{[1]}, J_{[2]}, \dots, J_{[i]}, \dots, J_{[j]}, \dots, J_{[n]}\}$, obtain three feature values based on the JPT, JJT and FTT matrices using the following equations:
$Z_{1y} = \sum_{j=1}^{n} JPT_{J_{[j]}, j}$
$Z_{2y} = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} JJT_{J_{[i]}, J_{[j]}}$
$Z_{3y} = FTT_{0, J_{[1]}} + \sum_{j=1}^{n-1} FTT_{J_{[j]}, J_{[j+1]}} + FTT_{J_{[n]}, 0}$
Step 3. Obtain the mean and standard deviation of the objective value and feature values for the K solutions using the following equations:
$\bar{x} = \frac{1}{K} \sum_{y=1}^{K} x_y, \quad S_x = \sqrt{\sum_{y=1}^{K} (x_y - \bar{x})^2 / (K-1)}$
$\bar{Z}_1 = \frac{1}{K} \sum_{y=1}^{K} Z_{1y}, \quad S_{Z_1} = \sqrt{\sum_{y=1}^{K} (Z_{1y} - \bar{Z}_1)^2 / (K-1)}$
$\bar{Z}_2 = \frac{1}{K} \sum_{y=1}^{K} Z_{2y}, \quad S_{Z_2} = \sqrt{\sum_{y=1}^{K} (Z_{2y} - \bar{Z}_2)^2 / (K-1)}$
$\bar{Z}_3 = \frac{1}{K} \sum_{y=1}^{K} Z_{3y}, \quad S_{Z_3} = \sqrt{\sum_{y=1}^{K} (Z_{3y} - \bar{Z}_3)^2 / (K-1)}$
Step 4. Calculate the correlation values between the objective value and matrices by the following equation:
$r_{x, Z_1} = \frac{1}{K-1} \sum_{y=1}^{K} \left( \frac{x_y - \bar{x}}{S_x} \right) \left( \frac{Z_{1y} - \bar{Z}_1}{S_{Z_1}} \right)$
$r_{x, Z_2} = \frac{1}{K-1} \sum_{y=1}^{K} \left( \frac{x_y - \bar{x}}{S_x} \right) \left( \frac{Z_{2y} - \bar{Z}_2}{S_{Z_2}} \right)$
$r_{x, Z_3} = \frac{1}{K-1} \sum_{y=1}^{K} \left( \frac{x_y - \bar{x}}{S_x} \right) \left( \frac{Z_{3y} - \bar{Z}_3}{S_{Z_3}} \right)$
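The same check can be coded in a few lines; the sketch below scores each schedule with the three feature values and correlates them against the TWT values (the helper names and the job/dummy indexing follow the earlier matrix sketch and are assumptions).

```python
# Correlation check sketch for Steps 1-4: feature values Z1, Z2, Z3 versus TWT.
import numpy as np

def feature_values(seq, JPT, JJT, FTT):
    n = len(seq)
    z1 = sum(JPT[job, pos] for pos, job in enumerate(seq))
    z2 = sum(JJT[seq[i], seq[j]] for i in range(n - 1) for j in range(i + 1, n))
    path = [0] + [j + 1 for j in seq] + [0]        # 0 = dummy start/end, as in the matrix sketch
    z3 = sum(FTT[a, b] for a, b in zip(path, path[1:]))
    return z1, z2, z3

def feature_correlations(schedules, twt_values, JPT, JJT, FTT):
    x = np.asarray(twt_values, dtype=float)
    Z = np.array([feature_values(s, JPT, JJT, FTT) for s in schedules])
    return [np.corrcoef(x, Z[:, c])[0, 1] for c in range(3)]   # r(x,Z1), r(x,Z2), r(x,Z3)
```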
For the example of Table 3 and based on the results in Table 4, Table 5 and Table 6, the correlation values $(r_{x,Z_1}, r_{x,Z_2}, r_{x,Z_3})$ are −0.97130, −0.88297 and −0.81809 when K = 250. The values are negative because the objective function is minimized. To further examine the correlation between the objective value and the matrices, we randomly regenerate 80 instances with eight jobs and follow the procedures mentioned above to obtain the correlation values shown in Table 7.
From Table 7, the results achieved for the JJT matrix are slightly worse: the correlation value exceeds 75% in absolute value in only 53 of the 80 instances (66.25%), compared with 100% and 98.75% for JPT and FTT, respectively. As expected, the impact of the jobs’ position information on the objective function appears to be more significant than that of the precedence relationship of pairs of jobs. Overall, the majority of correlation values are greater than 75% in absolute value, which indicates that the proposed matrices are highly correlated with the objective value of the schedule. That is, we can apply the information provided by the proposed matrices to search for better solutions.
Based on this finding, during the GA process we construct the trajectory matrices JPT, JJT and FTT from the previously explored chromosomes rather than discarding or ignoring the hidden information in them. Based on the information given by the three trajectory matrices, we develop an immigrant generation method and embed it into the GA. The resulting GA is called the trajectory-based immigration strategy GA (TISGA) in this paper. Each part of TISGA is described as follows:

4.1. Encoding Scheme

The encoding scheme is important in making a solution recognizable in applying GA. Our proposed GA is based on a permutation representation of n jobs, which is the natural representation of a solution and one of the widely used encoding schemes for single-machine scheduling problems.

4.2. Population Initialization

To generate a variety of chromosomes, i.e., job sequences, a few chromosomes are first created by sorting the jobs according to the following dispatching rules, and the rest of the chromosomes are generated randomly (a sketch of this initialization follows the list).
  • First-in, first-out (FIFO): sequence the jobs in increasing order of their release time $r_j$. Ties are broken by the EDD rule.
  • Shortest processing time (SPT): sequence the jobs in increasing order of their processing time $p_j$. Ties are broken by the FIFO rule.
  • Longest processing time (LPT): sequence the jobs in decreasing order of their processing time. Ties are broken by the FIFO rule.
  • Weighted shortest processing time (WSPT): sequence the jobs in increasing order of the index $p_j / w_j$. Ties are broken by the FIFO rule.
  • Earliest due date (EDD): sequence the jobs in increasing order of their due date $d_j$. Ties are broken by the FIFO rule.
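A minimal Python sketch of this initialization, assuming job attributes stored in the lists r, p, w, d (illustrative names, not the paper's code):

```python
# Population initialization sketch: five rule-based seeds plus random permutations.
import random

def initial_population(jobs, r, p, w, d, pop_size):
    seeds = [
        sorted(jobs, key=lambda j: (r[j], d[j])),          # FIFO, ties broken by EDD
        sorted(jobs, key=lambda j: (p[j], r[j])),          # SPT, ties broken by FIFO
        sorted(jobs, key=lambda j: (-p[j], r[j])),         # LPT, ties broken by FIFO
        sorted(jobs, key=lambda j: (p[j] / w[j], r[j])),   # WSPT, ties broken by FIFO
        sorted(jobs, key=lambda j: (d[j], r[j])),          # EDD, ties broken by FIFO
    ]
    population = seeds[:pop_size]
    while len(population) < pop_size:                      # fill the rest randomly
        population.append(random.sample(jobs, len(jobs)))
    return population
```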

4.3. Fitness Function and Evaluation

In this study, our objective is to minimize the TWT, and a chromosome with a smaller TWT should have a larger fitness value for survival. Thus, the fitness value of a chromosome is evaluated by the inverse of its TWT value as follows:
fitness$(\pi_y) = 1 / [TWT(\pi_y) + \varepsilon]$, y = 1, 2, …, population size, where fitness$(\pi_y)$ is the fitness value of the yth chromosome, TWT$(\pi_y)$ is its TWT value, and $\varepsilon$ is a very small value ($\varepsilon$ = 0.000001) that keeps the denominator greater than zero. To obtain the TWT value of each chromosome, we use the DP method mentioned above. In this decoding, the job sequence is given by the relative order in the chromosome, while the times at which PM activities are executed are determined by the proposed DP method. Suppose one of the chromosomes is represented by J1-J3-J4-J5-J2 for the 5-job instance shown in Table 2. Using the proposed decoding method, the feasible schedule for the chromosome J1-J3-J4-J5-J2 is obtained as shown in Figure 3, where the corresponding TWT and fitness values are 41 and 0.0244, respectively.

4.4. Crossover/Mutation

For recombination/crossover to generate offspring, we apply biased roulette wheel selection to choose parents from the population pool; this was the first selection operator proposed by Holland in 1975 (Goldberg, 1989) and has since become a common method in a variety of GA applications. For crossover, we consider order crossover (OX). In OX crossover, two cutoff points are randomly selected in parent 1, and the segment between the two cutoff points is copied to the generated offspring. The remaining jobs are filled in the order in which they appear in parent 2. In this way, OX crossover always generates feasible offspring. Figure 4 illustrates an example of an OX crossover.
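A short sketch of the OX operator as described above (illustrative only; a list-based permutation encoding is assumed):

```python
# Order crossover (OX) sketch: keep a random slice from parent 1 and fill the remaining
# positions with the missing jobs in parent 2's order.
import random

def order_crossover(parent1, parent2):
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]              # segment copied from parent 1
    fill = [j for j in parent2 if j not in child]  # jobs not in the copied segment
    it = iter(fill)
    for i in range(n):
        if child[i] is None:
            child[i] = next(it)
    return child
```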
For each offspring generated by OX crossover, the parent solution’s features may be randomly modified by the mutation operator. The mutation operator preserves a reasonable level of population diversity that helps the GA escape local optima. In this paper, we adopt a swap mutation. More precisely, we draw a random number $r_{mut}$ from a uniform distribution between 0 and 1. If $r_{mut} \leq p_{mut}$, where $p_{mut}$ is a given mutation probability, then the contents of two randomly chosen genes of the offspring are swapped.
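A corresponding sketch of the swap mutation (illustrative only):

```python
# Swap mutation sketch: with probability p_mut, exchange two randomly chosen genes.
import random

def swap_mutation(chromosome, p_mut):
    child = list(chromosome)
    if random.random() <= p_mut:
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child
```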

4.5. Immigration

In immigrant schemes, a certain number of immigrants are generated and added to the population pool by replacing the worst individuals of the current generation. In this paper, we apply both the random immigration strategy and the trajectory-based immigration strategy proposed in this paper to create immigrants for the next population. For the trajectory-based immigration strategy, the percentages of immigrants generated from JPT, FTT and JJT are 37%, 33% and 30%, respectively, since the first two matrices have a higher correlation with the objective value, as mentioned above. A biased roulette wheel is used as the basic selection mechanism for producing immigrants based on the information in the JPT, JJT and FTT matrices, in which a job with a higher matrix value $v_{jk}$ has a larger chance of being selected.
The pseudocode of the trajectory-based immigration procedure and of the immigrant-creation procedures based on JPT, JJT and FTT are described in Procedure_TBI (see Figure 5), Procedure_JPTimm, Procedure_JJTimm and Procedure_FTTimm. The main difference among the three immigrant-creation procedures is the information used to generate immigrants, i.e., the JPT, JJT or FTT matrix. Here, we only use Procedure_JPTimm as an illustration (see Figure 6). Note that the value of k in Procedure_FTTimm begins from 0, not 1, which is the main difference between Procedure_FTTimm and the other two procedures.
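Since only the pseudocode is shown in Figure 6, the following is a hedged Python reconstruction of a JPT-based immigrant; it is an illustrative reading, not the authors' exact procedure (in particular, clipping negative JPT entries to zero before the roulette wheel is an assumption).

```python
# JPT-based immigrant sketch: positions are filled one by one with a biased roulette wheel
# over the JPT column of the current position (negative entries clipped to zero; assumption).
import random

def jpt_immigrant(JPT, n):
    remaining = list(range(n))
    immigrant = []
    for pos in range(n):
        weights = [max(JPT[j, pos], 0.0) for j in remaining]
        if sum(weights) <= 0:                                  # no useful signal: pick uniformly
            job = random.choice(remaining)
        else:
            job = random.choices(remaining, weights=weights, k=1)[0]
        remaining.remove(job)
        immigrant.append(job)
    return immigrant
```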

4.6. New Generation

For each subsequent generation, we first use an elitist strategy for reproduction, in which the 10% of chromosomes with the highest fitness values are automatically copied to the next generation. Second, the worst 10% of chromosomes are directly replaced by new chromosomes generated by immigration. Finally, 80% of the chromosomes in the next generation come from crossover/mutation. If the GA has no immigration scheme, then the rates of reproduction and crossover/mutation are changed to 10% and 90%, respectively.
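In code form, the assembly of a new generation can be sketched as follows (ranked_pop is assumed to be sorted from best to worst fitness; illustrative only):

```python
# New-generation assembly sketch: 10% elites, 10% immigrants, 80% crossover/mutation offspring.
def next_generation(ranked_pop, offspring, immigrants):
    n = len(ranked_pop)
    n_elite = max(1, n // 10)
    n_imm = max(1, n // 10)
    return ranked_pop[:n_elite] + immigrants[:n_imm] + offspring[:n - n_elite - n_imm]
```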
The pseudocode for the proposed TISGA is described in Figure 7.

5. Computational Experiment

Comprehensive experiments are conducted over 23 test problem sizes, n = {5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500}. Job processing times, release times and weights are generated uniformly on the intervals [2, 15], [0, 50] and [1, 10], respectively. The maximum working time limit and maintenance time are L = {15, 30} and MT = 5. Additionally, the due date of each job is generated as $(r_j + p_j) + U[(1 - TF - R/2)\bar{P}, (1 - TF + R/2)\bar{P}]$, where $\bar{P}$ is the average processing time of all jobs, the tardiness factor is TF = {0.4, 0.6} and the relative range factor of the due date is R = {0.4, 0.6}. Ten instances are generated for each of the eight combinations of parameter values (L, TF, R), yielding 80 instances for each value of n. Our MIP model is executed by IBM ILOG CPLEX Optimization Studio Version 12.7.1, and the proposed GA is coded in C++. All tests were conducted on a PC with an Intel Xeon E-2124 3.4 GHz CPU and 32 GB of RAM.
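The test data generation scheme above can be sketched as follows (Python for illustration; the experiments themselves were coded in C++, and the function name generate_instance is an assumption):

```python
# Instance generator sketch following the scheme described above.
import random

def generate_instance(n, L, TF, R, MT=5):
    p = [random.randint(2, 15) for _ in range(n)]    # processing times ~ U[2, 15]
    r = [random.randint(0, 50) for _ in range(n)]    # release times   ~ U[0, 50]
    w = [random.randint(1, 10) for _ in range(n)]    # weights         ~ U[1, 10]
    p_bar = sum(p) / n                               # average processing time
    lo, hi = (1 - TF - R / 2) * p_bar, (1 - TF + R / 2) * p_bar
    d = [r[j] + p[j] + random.uniform(lo, hi) for j in range(n)]
    return r, p, w, d, L, MT
```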
To examine the performance of the proposed TISGA, we also implement a basic GA without immigration and a GA with a random immigration strategy (GARI). For a fair comparison, all versions of the proposed GAs use $2 \times n$ chromosomes for $n \leq 50$ and 100 chromosomes for $n > 50$, and the parameter settings mentioned above are the same. Each version of the GA is run five times to obtain the best result for each instance, and a computational time limit of $n \times 0.01$ CPU seconds is used as the stopping criterion for each version of the GAs.
First, we aim to show the efficiency of the proposed GAs. The proposed MIP model is built to find optimal solutions for small-sized problems since the considered problem here is NP-hard. Table 8 shows the average TWT values (AveTWT) and the number of optimal solutions found by each algorithm for small-sized problems. The results obtained by the GA algorithms that are worse (higher) than those obtained by the MIP model are presented in boldface. From Table 8, even with a small increment of n (n = 15), it becomes impossible for the MIP model to reach optimal solutions within a reasonable computational time, where the computational time limit for the MIP model is set to 7200 s. Additionally, the three versions of the proposed GAs are almost equivalent when comparing the average TWT value and Nopt. Regarding the average computational time shown in Table 9, note that the computational time limit of all versions of the GAs is the same as ( n × 0.01 ) seconds. This experiment demonstrates that the MIP model is very expensive regarding the computational cost when n = 15. On the other hand, the GA performances are very reliable in finding the optimal solutions in less than 0.15 s. Therefore, this experiment justified that the development of GAs can reduce the computational effort without seriously losing solution quality.
Recall that the MIP model cannot find all optimal solutions within 7200 s, and the basic GA becomes slightly worse on solution quality when n = 15. Thus, we further investigate the performances of different GAs for large-sized problems in the second experiment. Please note that the computational time limits of the GAs are the same, i.e., ( n × 0.01 ) seconds, for each n. For comparison, we apply the relative percentage deviation (RPD) of each instance computed as follows.
$RPD = [(TWT(A) - Min)/Min] \times 100\%$, where Min is the lowest TWT value obtained for a given instance by any of the GA algorithms, and TWT(A) is the TWT value obtained by algorithm A for that instance. AveRPD refers to the average RPD. Table 10 shows the AveRPD and the total number of best solutions (Nbest) obtained by each GA. As depicted in the table, GARI finds slightly better solutions than the basic GA; GARI obtains a mean AveRPD value of 1.937%, while the basic GA gives 2.341%. Overall, TISGA significantly outperforms GARI and the basic GA: the total numbers of best solutions among the 1280 instances are 1269 for TISGA, 136 for the basic GA and 156 for GARI. The results support our inference that GAs that maintain the diversity level of the population through an immigration strategy achieve better solution quality. Furthermore, the proposed trajectory-based immigration strategy enhances the effectiveness of the GA more than the random immigration strategy does.
We also examined the performance of GAs under different combinations of (L, TF, R) for each problem. Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 illustrate the comparison results under eight combinations of (L, TF, R) for large-sized problems. From these figures, in some cases, GARI is worse than the basic GA; that is, the performances of GARI and the basic GA are influenced by the values of (L, TF, R). However, as the number of jobs increases, GARI becomes gradually better than the basic GA for any combination of (L, TF, R) because adding randomly generated immigrants helps increase the diversity of solutions for the GA method, especially for larger job sizes. The proposed TISGA overcomes the influence of combinations of (L, TF, R) on solutions and is robust in obtaining better solutions.
Good convergence behavior is another important consideration in designing a GA. Thus, we compared the convergence of the basic GA, GARI and TISGA using instances with n = 50 and n = 200, as shown in Figure 24 and Figure 25, respectively. From Figure 24 and Figure 25, TISGA has the highest convergence speed among the three methods, and the solutions obtained by TISGA require less computation time than those obtained by GARI and the basic GA. Through these experimental results, we conclude that coupling a GA with the trajectory-based immigration scheme accelerates convergence and significantly improves the performance of the basic GA.

6. Discussions

Based on the comparison results in Section 5, we use nonparametric tests at the α = 0.05 significance level to examine the superiority of the proposed TISGA. When the problem size is small, the immigration strategy does not significantly affect the quality of the final solution, since the hypotheses Basic GA = GARI and GARI = TISGA cannot be rejected (significance 0.317 and 1.000, respectively). This is reasonable because, when the problem is small, the solution space can almost be fully searched by the basic GA without adding any improvement scheme. For large-sized problems, the hypotheses Basic GA = GARI and GARI = TISGA are rejected with significance 0.002 and 0.000, respectively, which shows that the proposed immigration strategy significantly improves the solution quality of the GA.
Overall, the proposed TISGA has the following features and advantages compared with the other GAs:
  • Since the problem is NP-hard, it is difficult to search the entire solution space as the problem size increases. In this case, a good search scheme, such as the proposed trajectory-based immigration strategy, is important for directing the GA toward better or optimal solutions.
  • TISGA can converge to good-quality solutions more quickly due to the trajectory-based immigration strategy.

7. Conclusions

In this paper, we have addressed the single-machine scheduling problem with job release time and flexible preventive maintenance to minimize TWT. To the best of our knowledge, this problem with the TWT objective has not yet been addressed in the literature. For this problem, some JPT, JJT and FTT matrices are established based on the concept of the experience-driven knowledge scheme. Equipped with these matrices, we proposed a GA coupled with a trajectory-based immigration strategy, called TISGA, to generate immigrants to maintain the population diversity of a GA.
To examine the performance of TISGA, we formulated a MIP model and two other GAs: a basic GA without an immigration strategy and GARI with a randomly generated immigration strategy. For small-sized problems, GARI and TISGA exhibited the same performance in terms of AveTWT and Nopt as the MIP model. For large-sized problems, TISGA found 1269 of the 1280 (99.14%) best solutions, while GARI and the basic GA found only 12.18% and 10.63%, respectively. The results showed that TISGA outperformed the GARI and basic GA methods. Furthermore, our TISGA showed robust performance with respect to different values of (L, TF, R). More specifically, the results show that embedding the proposed trajectory-based immigration strategy in a GA is enough to obtain excellent solutions for the problem under consideration. Consequently, further research could focus on developing more efficient and advanced metaheuristics that adapt the concept of an experience-driven knowledge scheme. Additionally, other potential extensions of this study, including parallel machines, job shops and sequence-dependent setup times, can be considered in future research.

Author Contributions

S.H. and F.-D.C.: Conceptualization, methodology, validation, investigation, project administration, funding acquisition. F.-D.C.: software, validation. S.H., Y.-C.T. and F.-D.C.: formal analysis, resources, data curation. Y.-C.T. and F.-D.C.: writing—original draft preparation, writing—review and editing. S.H. and F.-D.C.: visualization, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Zhejiang Province Natural Science Foundation of China (Grant No. LY18G010012).

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

To describe the proposed DP method, we first define some notation as follows:
$\pi_y$: a given feasible sequence of jobs, i.e., $\pi_y = \{J_{[1]}, J_{[2]}, \dots, J_{[n]}\}$.
$\Delta_k$: the subset of $\pi_y$ that contains the first k jobs in order, i.e., $\Delta_k = \{J_{[1]}, J_{[2]}, \dots, J_{[k]}\}$.
$\nabla_g^k$: the subset of $\Delta_k$ that contains the g jobs from position k − g + 1 through position k in order, i.e., $\nabla_g^k = \{J_{[k-g+1]}, \dots, J_{[k]}\}$, $1 \leq g \leq k$.
$C(\Delta_k)$: minimum makespan of a partial schedule that contains the first k jobs in order. Initially, $C(\Delta_0) = 0$.
$Z(\Delta_k)$: minimum TWT value of a partial schedule that contains the first k jobs in order, where $Z(\Delta_k) = \min_{1 \leq g \leq k} \{ f_g^1(\nabla_g^k, Z(\Delta_{k-g})) \}$. Initially, $Z(\Delta_0) = 0$.
To calculate the makespan, there are four cases ($\delta$ denotes the position of a job within the last block $\nabla_g^k$):
Case 1: $\delta = 1$ and $k = g$
$C(\Delta_k) = f_\delta^0(\nabla_g^k, C(\Delta_{k-g})) = \max(C(\Delta_0), r_{[1]}) + p_{[1]}$
Case 2: $\delta = 1$ and $k > g$
$C(\Delta_k) = f_\delta^0(\nabla_g^k, C(\Delta_{k-g})) = \max(C(\Delta_{k-g}) + MT, r_{[k-g+1]}) + p_{[k-g+1]}$
Case 3: $\delta > 1$ and $\sum_{\rho \in \nabla_g^k} p_\rho \leq L$
$C(\Delta_k) = f_\delta^0(\nabla_g^k, C(\Delta_{k-g})) = \max(f_{\delta-1}^0(\nabla_{\delta-1}^{k-1}, C(\Delta_{k-g})), r_{[k-g+\delta]}) + p_{[k-g+\delta]}$
Case 4: $\delta > 1$ and $\sum_{\rho \in \nabla_g^k} p_\rho > L$
$C(\Delta_k) = f_\delta^0(\nabla_g^k, C(\Delta_{k-g})) = \infty$
To calculate the TWT, there are three cases:
Case 1: $\delta = 1$
$f_\delta^1(\nabla_g^k, Z(\Delta_{k-g})) = Z(\Delta_{k-g}) + \max(f_1^0(\nabla_g^k, C(\Delta_{k-g})) - d_{[k-g+1]}, 0) \times w_{[k-g+1]}$
Case 2: $\delta > 1$ and $\sum_{\rho \in \nabla_g^k} p_\rho \leq L$
$f_\delta^1(\nabla_g^k, Z(\Delta_{k-g})) = f_{\delta-1}^1(\nabla_{\delta-1}^{k-1}, Z(\Delta_{k-g})) + \max(f_\delta^0(\nabla_g^k, C(\Delta_{k-g})) - d_{[k-g+\delta]}, 0) \times w_{[k-g+\delta]}$
Case 3: $\delta > 1$ and $\sum_{\rho \in \nabla_g^k} p_\rho > L$
$f_\delta^1(\nabla_g^k, Z(\Delta_{k-g})) = \infty$
Here, we use the 5-job instance in Table 2 as an example and suppose that $\pi_y = \{J_{[1]}, J_{[2]}, J_{[3]}, J_{[4]}, J_{[5]}\} = \{J_1, J_3, J_4, J_5, J_2\}$. Initially, $\Delta_0 = \emptyset$, $C(\Delta_0) = 0$ and $Z(\Delta_0) = 0$.
Begin
$k = 1$, $\Delta_1 = \{J_{[1]}\} = \{J_1\}$
For g = 1, $\nabla_1^1 = \{J_{[1]}\} = \{J_1\}$, $\sum_{\rho \in \nabla_1^1} p_\rho = 2 \leq L$
Calculating makespan
$f_1^0(\nabla_1^1, C(\Delta_0)) = \max(C(\Delta_0), r_{[1]}) + p_{[1]} = \max(0, 0) + 2 = 2$
Calculating TWT
$f_1^1(\nabla_1^1, Z(\Delta_0)) = Z(\Delta_0) + \max(f_1^0(\nabla_1^1, C(\Delta_0)) - d_{[1]}, 0) \times w_{[1]} = 0 + \max(2 - 4, 0) \times 1 = 0$
Therefore, in this stage,
$Z(\Delta_1) = \min_{1 \leq g \leq 1} \{ f_1^1(\nabla_1^1, Z(\Delta_0)) \} = 0$
$k = 2$, $\Delta_2 = \{J_{[1]}, J_{[2]}\} = \{J_1, J_3\}$
For g = 1, $\nabla_1^2 = \{J_{[2]}\} = \{J_3\}$, $\sum_{\rho \in \nabla_1^2} p_\rho = 2 \leq L$
Calculating makespan
$f_1^0(\nabla_1^2, C(\Delta_1)) = \max(C(\Delta_1) + MT, r_{[2]}) + p_{[2]} = \max(2 + 5, 5) + 2 = 9$
Calculating TWT
$f_1^1(\nabla_1^2, Z(\Delta_1)) = Z(\Delta_1) + \max(f_1^0(\nabla_1^2, C(\Delta_1)) - d_{[2]}, 0) \times w_{[2]} = 0 + \max(9 - 9, 0) \times 2 = 0$
For g = 2, $\nabla_2^2 = \{J_{[1]}, J_{[2]}\} = \{J_1, J_3\}$, $\sum_{\rho \in \nabla_2^2} p_\rho = 4 \leq L$
Calculating makespan
$f_2^0(\nabla_2^2, C(\Delta_0)) = \max(f_1^0(\nabla_1^1, C(\Delta_0)), r_{[2]}) + p_{[2]} = \max(2, 5) + 2 = 7$
Calculating TWT
$f_2^1(\nabla_2^2, Z(\Delta_0)) = f_1^1(\nabla_1^1, Z(\Delta_0)) + \max(f_2^0(\nabla_2^2, C(\Delta_0)) - d_{[2]}, 0) \times w_{[2]} = 0 + \max(7 - 9, 0) \times 2 = 0$
For this stage (k = 2),
$Z(\Delta_2) = \min_{1 \leq g \leq 2} \{ f_g^1(\nabla_g^2, Z(\Delta_{2-g})) \} = \min\{0, 0\} = 0$
$k = 3$, $\Delta_3 = \{J_{[1]}, J_{[2]}, J_{[3]}\} = \{J_1, J_3, J_4\}$
For g = 1, $\nabla_1^3 = \{J_{[3]}\} = \{J_4\}$, $\sum_{\rho \in \nabla_1^3} p_\rho = 4 \leq L$
Calculating makespan
$f_1^0(\nabla_1^3, C(\Delta_2)) = \max(C(\Delta_2) + MT, r_{[3]}) + p_{[3]} = \max(7 + 5, 7) + 4 = 16$
Calculating TWT
$f_1^1(\nabla_1^3, Z(\Delta_2)) = Z(\Delta_2) + \max(f_1^0(\nabla_1^3, C(\Delta_2)) - d_{[3]}, 0) \times w_{[3]} = 0 + \max(16 - 12, 0) \times 2 = 8$
For g = 2, $\nabla_2^3 = \{J_{[2]}, J_{[3]}\} = \{J_3, J_4\}$, $\sum_{\rho \in \nabla_2^3} p_\rho = 6 \leq L$
Calculating makespan
$f_2^0(\nabla_2^3, C(\Delta_1)) = \max(f_1^0(\nabla_1^2, C(\Delta_1)), r_{[3]}) + p_{[3]} = \max(9, 7) + 4 = 13$
Calculating TWT
$f_2^1(\nabla_2^3, Z(\Delta_1)) = f_1^1(\nabla_1^2, Z(\Delta_1)) + \max(f_2^0(\nabla_2^3, C(\Delta_1)) - d_{[3]}, 0) \times w_{[3]} = 0 + \max(13 - 12, 0) \times 2 = 2$
For g = 3, $\nabla_3^3 = \{J_{[1]}, J_{[2]}, J_{[3]}\} = \{J_1, J_3, J_4\}$, $\sum_{\rho \in \nabla_3^3} p_\rho = 8 \leq L$
Calculating makespan
$f_3^0(\nabla_3^3, C(\Delta_0)) = \max(f_2^0(\nabla_2^2, C(\Delta_0)), r_{[3]}) + p_{[3]} = \max(7, 7) + 4 = 11$
Calculating TWT
$f_3^1(\nabla_3^3, Z(\Delta_0)) = f_2^1(\nabla_2^2, Z(\Delta_0)) + \max(f_3^0(\nabla_3^3, C(\Delta_0)) - d_{[3]}, 0) \times w_{[3]} = 0 + \max(11 - 12, 0) \times 2 = 0$
For this stage (k = 3),
$Z(\Delta_3) = \min_{1 \leq g \leq 3} \{ f_g^1(\nabla_g^3, Z(\Delta_{3-g})) \} = \min\{8, 2, 0\} = 0$
$k = 4$, $\Delta_4 = \{J_{[1]}, J_{[2]}, J_{[3]}, J_{[4]}\} = \{J_1, J_3, J_4, J_5\}$
For g = 1, $\nabla_1^4 = \{J_{[4]}\} = \{J_5\}$, $\sum_{\rho \in \nabla_1^4} p_\rho = 4 \leq L$
Calculating makespan
$f_1^0(\nabla_1^4, C(\Delta_3)) = \max(C(\Delta_3) + MT, r_{[4]}) + p_{[4]} = \max(11 + 5, 13) + 4 = 20$
Calculating TWT
$f_1^1(\nabla_1^4, Z(\Delta_3)) = Z(\Delta_3) + \max(f_1^0(\nabla_1^4, C(\Delta_3)) - d_{[4]}, 0) \times w_{[4]} = 0 + \max(20 - 19, 0) \times 3 = 3$
For g = 2, $\nabla_2^4 = \{J_{[3]}, J_{[4]}\} = \{J_4, J_5\}$, $\sum_{\rho \in \nabla_2^4} p_\rho = 8 \leq L$
Calculating makespan
$f_2^0(\nabla_2^4, C(\Delta_2)) = \max(f_1^0(\nabla_1^3, C(\Delta_2)), r_{[4]}) + p_{[4]} = \max(16, 13) + 4 = 20$
Calculating TWT
$f_2^1(\nabla_2^4, Z(\Delta_2)) = f_1^1(\nabla_1^3, Z(\Delta_2)) + \max(f_2^0(\nabla_2^4, C(\Delta_2)) - d_{[4]}, 0) \times w_{[4]} = 8 + \max(20 - 19, 0) \times 3 = 11$
For g = 3, $\nabla_3^4 = \{J_{[2]}, J_{[3]}, J_{[4]}\} = \{J_3, J_4, J_5\}$, $\sum_{\rho \in \nabla_3^4} p_\rho = 10 \leq L$
Calculating makespan
$f_3^0(\nabla_3^4, C(\Delta_1)) = \max(f_2^0(\nabla_2^3, C(\Delta_1)), r_{[4]}) + p_{[4]} = \max(13, 13) + 4 = 17$
Calculating TWT
$f_3^1(\nabla_3^4, Z(\Delta_1)) = f_2^1(\nabla_2^3, Z(\Delta_1)) + \max(f_3^0(\nabla_3^4, C(\Delta_1)) - d_{[4]}, 0) \times w_{[4]} = 2 + \max(17 - 19, 0) \times 3 = 2$
For g = 4, $\nabla_4^4 = \{J_{[1]}, J_{[2]}, J_{[3]}, J_{[4]}\} = \{J_1, J_3, J_4, J_5\}$, $\sum_{\rho \in \nabla_4^4} p_\rho = 12 > L$
Calculating makespan
$f_4^0(\nabla_4^4, C(\Delta_0)) = \infty$
Calculating TWT
$f_4^1(\nabla_4^4, Z(\Delta_0)) = \infty$
For this stage (k = 4),
$Z(\Delta_4) = \min_{1 \leq g \leq 4} \{ f_g^1(\nabla_g^4, Z(\Delta_{4-g})) \} = \min\{3, 11, 2, \infty\} = 2$
$k = 5$, $\Delta_5 = \{J_{[1]}, J_{[2]}, J_{[3]}, J_{[4]}, J_{[5]}\} = \{J_1, J_3, J_4, J_5, J_2\}$
For g = 1, $\nabla_1^5 = \{J_{[5]}\} = \{J_2\}$, $\sum_{\rho \in \nabla_1^5} p_\rho = 8 \leq L$
Calculating makespan
$f_1^0(\nabla_1^5, C(\Delta_4)) = \max(C(\Delta_4) + MT, r_{[5]}) + p_{[5]} = \max(17 + 5, 8) + 8 = 30$
Calculating TWT
$f_1^1(\nabla_1^5, Z(\Delta_4)) = Z(\Delta_4) + \max(f_1^0(\nabla_1^5, C(\Delta_4)) - d_{[5]}, 0) \times w_{[5]} = 2 + \max(30 - 17, 0) \times 3 = 41$
For g = 2, $\nabla_2^5 = \{J_{[4]}, J_{[5]}\} = \{J_5, J_2\}$, $\sum_{\rho \in \nabla_2^5} p_\rho = 12 > L$, so $f_2^0(\nabla_2^5, C(\Delta_3)) = \infty$ and $f_2^1(\nabla_2^5, Z(\Delta_3)) = \infty$
For g = 3, $\nabla_3^5 = \{J_{[3]}, J_{[4]}, J_{[5]}\} = \{J_4, J_5, J_2\}$, $\sum_{\rho \in \nabla_3^5} p_\rho = 16 > L$, so $f_3^0(\nabla_3^5, C(\Delta_2)) = \infty$ and $f_3^1(\nabla_3^5, Z(\Delta_2)) = \infty$
For g = 4, $\nabla_4^5 = \{J_{[2]}, J_{[3]}, J_{[4]}, J_{[5]}\} = \{J_3, J_4, J_5, J_2\}$, $\sum_{\rho \in \nabla_4^5} p_\rho = 18 > L$, so $f_4^0(\nabla_4^5, C(\Delta_1)) = \infty$ and $f_4^1(\nabla_4^5, Z(\Delta_1)) = \infty$
For g = 5, $\nabla_5^5 = \{J_{[1]}, J_{[2]}, J_{[3]}, J_{[4]}, J_{[5]}\} = \{J_1, J_3, J_4, J_5, J_2\}$, $\sum_{\rho \in \nabla_5^5} p_\rho = 20 > L$, so $f_5^0(\nabla_5^5, C(\Delta_0)) = \infty$ and $f_5^1(\nabla_5^5, Z(\Delta_0)) = \infty$
For this stage (k = 5), i.e., the final stage,
$Z(\Delta_5) = \min_{1 \leq g \leq 5} \{ f_g^1(\nabla_g^5, Z(\Delta_{5-g})) \} = \min\{41, \infty, \infty, \infty, \infty\} = 41$
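For completeness, the following is a hedged Python sketch of the DP decoding as read from the recursion above: for a fixed job sequence it decides where to insert PM activities so that no block of consecutive jobs exceeds L and returns the resulting TWT. It is an illustrative reading rather than the authors' code; with the instance values used in this example it returns 41, matching $Z(\Delta_5)$.

```python
# DP decoding sketch (Appendix A): Z[k] is the best TWT of the first k jobs of the sequence,
# C[k] the completion time attained by the choice that realizes Z[k]; ties favor the earlier
# completion time. Illustrative reading, not the authors' code.
def decode_dp(seq, r, p, d, w, L, MT):
    n = len(seq)
    INF = float("inf")
    Z = [0.0] + [INF] * n
    C = [0.0] + [INF] * n
    for k in range(1, n + 1):
        best = (INF, INF)
        work = 0
        for g in range(1, k + 1):                 # last block = jobs at positions k-g+1 .. k
            work += p[seq[k - g]]
            if work > L:                          # the block would exceed the working-time limit
                break
            if Z[k - g] == INF:
                continue
            t = C[k - g] + (MT if k > g else 0.0) # PM before the block unless it is the first one
            twt = Z[k - g]
            for pos in range(k - g, k):           # schedule the block's jobs in order
                j = seq[pos]
                t = max(t, r[j]) + p[j]
                twt += w[j] * max(0.0, t - d[j])
            best = min(best, (twt, t))
        Z[k], C[k] = best
    return Z[n]
```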

References

  1. Chiang, T.-C.; Cheng, H.-C.; Fu, L.-C. A memetic algorithm for minimizing total weighted tardiness on parallel batch machines with incompatible job families and dynamic job arrival. Comput. Oper. Res. 2010, 37, 2257–2269. [Google Scholar] [CrossRef]
  2. Sbihi, M.; Varnier, C. Single-machine scheduling with periodic and flexible periodic maintenance to minimize maximum tardiness. Comput. Ind. Eng. 2008, 55, 830–840. [Google Scholar] [CrossRef] [Green Version]
  3. Low, C.; Ji, M.; Hsu, C.-J.; Su, C.-T. Minimizing the makespan in a single machine scheduling problems with flexible and periodic maintenance. Appl. Math. Model. 2010, 34, 334–342. [Google Scholar] [CrossRef]
  4. Detienne, B. A mixed integer linear programming approach to minimize the number of late jobs with and without machine availability constraints. Eur. J. Oper. Res. 2014, 235, 540–552. [Google Scholar] [CrossRef]
  5. Cui, W.W.; Lu, Z. Minimizing the makespan on a single machine with flexible maintenances and jobs’ release dates. Comput. Oper. Res. 2017, 80, 11–22. [Google Scholar] [CrossRef]
  6. Pang, J.; Zhou, H.; Tasi, Y.C.; Chou, F.D. A scatter simulated annealing algorithm for the bi-objective scheduling problem for the wet station of semiconductor manufacturing. Comput. Ind. Eng. 2018, 123, 54–66. [Google Scholar] [CrossRef]
  7. Lawler, E.L. A pseudo-polynomial algorithm for sequencing jobs to minimize total tardiness. Ann. Discrete Math. 1977, 1, 331–342. [Google Scholar]
  8. Ma, Y.; Chu, C.; Zuo, C. A survey of scheduling with deterministic machine availability constraints. Comput. Ind. Eng. 2010, 58, 199–211. [Google Scholar] [CrossRef]
  9. Qi, X.; Chen, T.; Tu, F. Scheduling maintenance on a single machine. J. Oper. Res. Soc. 1999, 50, 1071–1078. [Google Scholar] [CrossRef]
  10. Su, L.-H.; Wang, H.-M. Minimizing total absolute deviation of job completion times on a single machine with cleaning activities. Comput. Ind. Eng. 2017, 103, 242–249. [Google Scholar] [CrossRef]
Figure 1. Feasible schedule.
Figure 2. Gantt chart obtained by the FF method.
Figure 3. Gantt chart obtained by the DP method.
Figure 4. Illustration of OX crossover.
Figure 5. Pseudocode of Procedure_TBI.
Figure 6. Pseudocode of Procedure_JPTmin.
Figure 7. Pseudocode of TISGA.
Figure 8. Comparison results for n = 20 with eight combinations of (L, TF, R).
Figure 9. Comparison results for n = 25 with eight combinations of (L, TF, R).
Figure 10. Comparison results for n = 30 with eight combinations of (L, TF, R).
Figure 11. Comparison results for n = 35 with eight combinations of (L, TF, R).
Figure 12. Comparison results for n = 40 with eight combinations of (L, TF, R).
Figure 13. Comparison results for n = 45 with eight combinations of (L, TF, R).
Figure 14. Comparison results for n = 50 with eight combinations of (L, TF, R).
Figure 15. Comparison results for n = 60 with eight combinations of (L, TF, R).
Figure 16. Comparison results for n = 70 with eight combinations of (L, TF, R).
Figure 17. Comparison results for n = 80 with eight combinations of (L, TF, R).
Figure 18. Comparison results for n = 90 with eight combinations of (L, TF, R).
Figure 19. Comparison results for n = 100 with eight combinations of (L, TF, R).
Figure 20. Comparison results for n = 200 with eight combinations of (L, TF, R).
Figure 21. Comparison results for n = 300 with eight combinations of (L, TF, R).
Figure 22. Comparison results for n = 400 with eight combinations of (L, TF, R).
Figure 23. Comparison results for n = 500 with eight combinations of (L, TF, R).
Figure 24. TWT value vs. computation time for TISGA, GARI and basic GA for an instance with 50 jobs.
Figure 25. TWT value vs. computation time for TISGA, GARI and basic GA for an instance with 200 jobs.
Table 2. Five-job instance where L = 10 and MT = 5.
Job	r_j	p_j	d_j	w_j
J1	0	2	4	1
J2	8	8	17	3
J3	5	2	9	2
J4	7	4	12	2
J5	13	4	19	3
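To make the instance concrete, the following is a minimal sketch (not the paper's exact decoding procedure) of how the total weighted tardiness of a given job sequence could be evaluated for this instance. The sketch assumes that a PM activity of length MT is inserted whenever processing the next job would push the machine's continuous working time beyond L, that idle time spent waiting for a release neither counts toward nor resets that working time, and that jobs are non-preemptive and non-resumable; the sequence passed to the evaluator is purely illustrative.

```python
# Minimal sketch (not the paper's exact decoder) for evaluating one job sequence
# on the Table 2 instance under the assumptions stated above.

# Table 2 data: job -> (release r_j, processing p_j, due date d_j, weight w_j)
jobs = {
    1: (0, 2, 4, 1),
    2: (8, 8, 17, 3),
    3: (5, 2, 9, 2),
    4: (7, 4, 12, 2),
    5: (13, 4, 19, 3),
}
L, MT = 10, 5  # continuous-working-time threshold and maintenance duration


def total_weighted_tardiness(sequence):
    t = 0        # current time on the machine
    run = 0      # continuous working time since the last PM
    twt = 0
    for j in sequence:
        r, p, d, w = jobs[j]
        if run + p > L:      # next job would exceed the threshold: maintain first
            t += MT
            run = 0
        start = max(t, r)    # wait for the job's release if necessary
        t = start + p
        run += p
        twt += w * max(0, t - d)
    return twt


# Example: evaluate one arbitrary sequence of the five jobs.
print(total_weighted_tardiness([1, 3, 4, 2, 5]))
```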
Table 3. Eight-job instance with L = 15 and MT = 5.
	J1	J2	J3	J4	J5	J6	J7	J8
r_j	35	45	44	33	4	13	32	43
p_j	8	13	3	5	3	3	4	3
w_j	3	1	7	2	8	3	1	3
d_j	47	55	62	44	56	16	49	58
Table 4. Job-position trajectory (JPT) matrix from 1000 random solutions for the 8-job instance.
Job	Pos. 1	Pos. 2	Pos. 3	Pos. 4	Pos. 5	Pos. 6	Pos. 7	Pos. 8
1	0.31917	0.02314	−0.16659	−0.21222	−0.14115	−0.02065	0.06266	0.13564
2	−1.38125	−0.79550	−0.41909	−0.10038	0.17369	0.46689	0.83788	1.21775
3	−0.22808	0.20669	0.30798	0.26575	0.15107	−0.02584	−0.23395	−0.44361
4	0.33685	0.00495	−0.12552	−0.11933	−0.08009	−0.21764	−0.46876	−0.71966
5	0.25933	0.44942	0.41268	0.26555	0.02808	−0.21764	−0.46876	−0.71966
6	0.87978	0.13539	−0.01647	−0.10382	−0.15826	−0.19973	−0.24555	−0.29134
7	0.21466	−0.05454	−0.12392	−0.11331	−0.06601	−0.00804	0.04788	0.10329
8	−0.40050	0.03045	0.13093	0.12675	0.09266	0.05228	0.00663	−0.03926
Table 5. Job-job trajectory (JJT) matrix from 1000 random solutions for the 8-job instance.
Job	1	2	3	4	5	6	7	8
1	---	0.48009	−0.06691	−0.02260	−0.20985	−0.16732	0.00882	0.05005
2	−0.48009	---	−0.53775	−0.48038	−0.70475	−0.62080	−0.43232	−0.40540
3	0.06691	0.53775	---	0.00448	−0.12972	−0.08990	0.07447	0.09916
4	0.02260	0.48038	−0.00448	---	−0.17715	−0.13956	0.02889	0.06497
5	0.20985	0.70475	0.12972	0.17715	---	0.03882	0.20382	0.23131
6	0.16732	0.62080	0.08990	0.13956	−0.03882	---	0.16483	0.19474
7	−0.00882	0.43232	−0.07447	−0.02889	−0.20382	−0.16483	---	0.03258
8	−0.05005	0.40540	−0.09916	−0.06497	−0.23131	−0.19474	−0.03258	---
Table 6. From-to trajectory (FTT) matrix from 1000 random solutions for the 8-job instance.
From\To	0	1	2	3	4	5	6	7	8
0	---	0.31917	−1.38125	−0.22808	0.33685	0.25933	0.87978	0.21466	−0.40045
1	0.13564	---	0.18265	0.02463	−0.13640	−0.05755	−0.15306	−0.05853	0.06264
2	1.21775	−0.25050	---	−0.08189	−0.19567	−0.27148	−0.27544	−0.11941	−0.02336
3	−0.44361	0.02318	0.29617	---	0.00209	0.08335	−0.08437	−0.00576	0.12895
4	0.03719	−0.10368	0.12910	0.02109	---	−0.02009	−0.09564	−0.01771	0.04974
5	−0.71966	0.06846	0.39703	0.14226	0.02626	---	−0.06619	0.01413	0.13772
6	−0.29134	0.03036	0.17156	0.02005	0.03065	−0.00769	---	0.01462	0.03179
7	0.10329	−0.03578	0.07060	−0.00748	−0.01915	−0.03210	−0.09235	---	0.01297
8	−0.03926	−0.05121	0.13414	0.10942	−0.04461	0.04624	−0.11273	−0.04199	---
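Tables 4–6 are produced by extracting knowledge from 1000 random solutions. The snippet below is a minimal sketch of one plausible way to accumulate such a job-position trajectory matrix; it is not the authors' exact construction. It standardizes each sampled solution's objective value and averages that score over the (job, position) assignments the solution uses, so positive entries mark assignments that tend to appear in better-than-average schedules. The evaluator argument (for example, the total_weighted_tardiness sketch given after Table 2) and the normalization are assumptions of this sketch.

```python
import random
import statistics

# Minimal sketch, NOT the authors' exact construction: one plausible way to build a
# job-position trajectory (JPT) matrix from random solutions.  Entry (j, k) is the
# average standardized "goodness" (mean objective minus the solution's objective,
# divided by the standard deviation) over the sampled solutions that place job j at
# position k.


def build_jpt(job_ids, evaluate, samples=1000, seed=0):
    rng = random.Random(seed)
    solutions = []
    for _ in range(samples):
        seq = list(job_ids)
        rng.shuffle(seq)
        solutions.append((seq, evaluate(seq)))

    objectives = [z for _, z in solutions]
    mu = statistics.mean(objectives)
    sigma = statistics.pstdev(objectives) or 1.0   # avoid division by zero

    n = len(job_ids)
    score = {j: [0.0] * n for j in job_ids}
    count = {j: [0] * n for j in job_ids}
    for seq, z in solutions:
        goodness = (mu - z) / sigma                # > 0 for better-than-average solutions
        for pos, j in enumerate(seq):
            score[j][pos] += goodness
            count[j][pos] += 1

    # Average the accumulated scores per (job, position) cell.
    return {j: [s / c if c else 0.0 for s, c in zip(score[j], count[j])]
            for j in job_ids}
```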
Table 7. Correlation values obtained between the objective value and the matrices for different instances.
No.	r_{x,Z1}	r_{x,Z2}	r_{x,Z3}	No.	r_{x,Z1}	r_{x,Z2}	r_{x,Z3}
1	−0.97398	−0.90259	−0.82521	41	−0.93246	−0.78268	−0.82146
2	−0.93389	−0.67382	−0.91551	42	−0.96903	−0.90456	−0.82889
3	−0.89515	−0.70715	−0.79133	43	−0.96012	−0.85805	−0.85599
4	−0.95873	−0.85844	−0.80679	44	−0.96275	−0.75995	−0.83542
5	−0.98988	−0.85009	−0.82317	45	−0.93506	−0.64075	−0.83282
6	−0.97103	−0.78600	−0.86143	46	−0.94614	−0.81629	−0.86377
7	−0.96237	−0.80414	−0.81064	47	−0.95039	−0.68591	−0.88292
8	−0.95706	−0.83210	−0.80263	48	−0.92468	−0.81150	−0.81777
9	−0.93486	−0.64478	−0.81466	49	−0.95806	−0.78674	−0.90038
10	−0.96672	−0.78058	−0.87700	50	−0.97217	−0.83153	−0.84326
11	−0.95342	−0.73581	−0.86859	51	−0.95388	−0.65682	−0.82511
12	−0.95444	−0.66436	−0.86279	52	−0.94202	−0.76971	−0.83592
13	−0.94975	−0.76062	−0.82653	53	−0.97487	−0.87496	−0.85896
14	−0.94608	−0.72400	−0.81843	54	−0.95419	−0.83605	−0.78963
15	−0.91384	−0.51343	−0.80317	55	−0.94642	−0.80198	−0.77172
16	−0.96407	−0.75132	−0.85303	56	−0.97563	−0.74293	−0.78009
17	−0.94233	−0.57961	−0.86445	57	−0.95643	−0.82493	−0.87601
18	−0.98302	−0.89270	−0.89220	58	−0.91590	−0.67764	−0.86478
19	−0.96143	−0.78270	−0.89244	59	−0.96370	−0.80954	−0.81591
20	−0.96040	−0.50497	−0.87309	60	−0.94539	−0.67941	−0.82872
21	−0.93761	−0.67141	−0.86832	61	−0.96729	−0.85690	−0.83829
22	−0.94733	−0.82641	−0.87715	62	−0.96834	−0.73018	−0.90983
23	−0.96481	−0.77928	−0.86784	63	−0.98041	−0.85956	−0.81707
24	−0.96752	−0.75952	−0.88022	64	−0.97940	−0.84165	−0.84504
25	−0.94186	−0.64615	−0.87946	65	−0.94475	−0.76231	−0.82987
26	−0.95828	−0.79151	−0.88050	66	−0.95450	−0.86418	−0.83902
27	−0.95353	−0.83410	−0.88704	67	−0.94122	−0.67374	−0.80949
28	−0.93543	−0.63366	−0.85822	68	−0.97977	−0.83760	−0.90741
29	−0.96274	−0.83366	−0.72312	69	−0.96696	−0.81004	−0.81336
30	−0.98589	−0.84328	−0.86550	70	−0.97713	−0.77468	−0.85297
31	−0.94389	−0.46264	−0.90581	71	−0.95879	−0.79762	−0.87808
32	−0.94480	−0.66877	−0.84578	72	−0.97632	−0.82410	−0.88502
33	−0.94191	−0.63167	−0.89761	73	−0.94578	−0.79615	−0.83772
34	−0.96513	−0.83834	−0.85896	74	−0.95980	−0.84630	−0.83950
35	−0.95250	−0.78039	−0.82646	75	−0.94803	−0.79279	−0.82569
36	−0.95126	−0.79587	−0.86244	76	−0.95494	−0.68985	−0.86193
37	−0.92409	−0.67215	−0.86366	77	−0.96827	−0.86258	−0.83804
38	−0.98108	−0.91518	−0.83011	78	−0.94943	−0.80676	−0.82839
39	−0.89571	−0.57850	−0.80123	79	−0.93170	−0.61759	−0.88283
40	−0.94395	−0.81748	−0.86455	80	−0.94562	−0.77085	−0.84916
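The column labels r_{x,Z1}, r_{x,Z2} and r_{x,Z3} denote correlation coefficients between a solution score x extracted from the knowledge matrices and the objective value Z; all reported values are negative. Assuming the standard Pearson product-moment definition over S sampled solutions, such a coefficient is

$$ r_{x,Z} = \frac{\sum_{s=1}^{S}\left(x_s-\bar{x}\right)\left(Z_s-\bar{Z}\right)}{\sqrt{\sum_{s=1}^{S}\left(x_s-\bar{x}\right)^2}\,\sqrt{\sum_{s=1}^{S}\left(Z_s-\bar{Z}\right)^2}}. $$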
Table 8. Comparison of results for MIP and GAs for small-sized problems.
n	(L, TF, R)	AveTWT (MIP)	AveTWT (Basic GA)	AveTWT (GARI)	AveTWT (TISGA)	Nopt (MIP)	Nopt (Basic GA)	Nopt (GARI)	Nopt (TISGA)
5	(1,1,1)	75.700	75.700	75.700	75.700	10	10	10	10
	(1,1,2)	108.900	108.900	108.900	108.900	10	10	10	10
	(1,2,1)	217.000	217.000	217.000	217.000	10	10	10	10
	(1,2,2)	203.200	203.200	203.200	203.200	10	10	10	10
	(2,1,1)	35.700	35.700	35.700	35.700	10	10	10	10
	(2,1,2)	50.900	50.900	50.900	50.900	10	10	10	10
	(2,2,1)	127.800	127.800	127.800	127.800	10	10	10	10
	(2,2,2)	132.900	132.900	132.900	132.900	10	10	10	10
6	(1,1,1)	178.300	178.300	178.300	178.300	10	10	10	10
	(1,1,2)	119.200	119.200	119.200	119.200	10	10	10	10
	(1,2,1)	229.800	229.800	229.800	229.800	10	10	10	10
	(1,2,2)	207.300	207.300	207.300	207.300	10	10	10	10
	(2,1,1)	62.600	62.600	62.600	62.600	10	10	10	10
	(2,1,2)	94.500	94.500	94.500	94.500	10	10	10	10
	(2,2,1)	190.000	190.000	190.000	190.000	10	10	10	10
	(2,2,2)	226.900	226.900	226.900	226.900	10	10	10	10
7	(1,1,1)	283.200	283.200	283.200	283.200	10	10	10	10
	(1,1,2)	227.700	227.700	227.700	227.700	10	10	10	10
	(1,2,1)	340.900	340.900	340.900	340.900	10	10	10	10
	(1,2,2)	321.700	324.700	324.700	324.700	10	9	9	9
	(2,1,1)	101.100	101.100	101.100	101.100	10	10	10	10
	(2,1,2)	173.900	173.900	173.900	173.900	10	10	10	10
	(2,2,1)	185.800	185.800	185.800	185.800	10	10	10	10
	(2,2,2)	245.300	245.300	245.300	245.300	10	10	10	10
8	(1,1,1)	343.200	343.200	343.200	343.200	10	10	10	10
	(1,1,2)	252.300	252.600	252.600	252.600	10	9	9	9
	(1,2,1)	556.300	556.300	556.300	556.300	10	10	10	10
	(1,2,2)	337.900	337.900	337.900	337.900	10	10	10	10
	(2,1,1)	134.700	134.700	134.700	134.700	10	10	10	10
	(2,1,2)	181.000	181.000	181.000	181.000	10	10	10	10
	(2,2,1)	224.300	224.300	224.300	224.300	10	10	10	10
	(2,2,2)	356.400	356.400	356.400	356.400	10	10	10	10
9	(1,1,1)	336.700	336.700	336.700	336.700	10	10	10	10
	(1,1,2)	461.200	461.200	461.200	461.200	10	10	10	10
	(1,2,1)	595.500	595.500	595.500	595.500	10	10	10	10
	(1,2,2)	477.100	478.400	478.400	478.400	10	8	8	8
	(2,1,1)	331.200	331.200	331.200	331.200	10	10	10	10
	(2,1,2)	148.100	148.100	148.100	148.100	10	10	10	10
	(2,2,1)	357.000	357.000	357.000	357.000	10	10	10	10
	(2,2,2)	463.500	463.500	463.500	463.500	10	10	10	10
10	(1,1,1)	432.100	432.100	432.100	432.100	10	10	10	10
	(1,1,2)	565.500	565.500	565.500	565.500	10	10	10	10
	(1,2,1)	612.500	612.500	612.500	612.500	10	10	10	10
	(1,2,2)	659.900	667.900	667.900	667.900	10	9	9	9
	(2,1,1)	421.000	421.000	421.000	421.000	10	10	10	10
	(2,1,2)	475.000	475.000	475.000	475.000	10	10	10	10
	(2,2,1)	328.900	328.900	328.900	328.900	10	10	10	10
	(2,2,2)	525.500	525.500	525.500	525.500	10	10	10	10
15	(1,1,1)	1158.800	1158.800	1158.800	1158.800	8	8	8	8
	(1,1,2)	1284.500	1293.300	1284.500	1284.500	8	7	8	8
	(1,2,1)	1982.300	1982.300	1982.300	1982.300	5	5	5	5
	(1,2,2)	1613.700	1612.000	1612.000	1612.000	6	6	6	6
	(2,1,1)	1210.600	1210.600	1210.600	1210.600	7	7	7	7
	(2,1,2)	1038.100	1038.100	1038.100	1038.100	10	10	10	10
	(2,2,1)	1284.300	1284.300	1284.300	1284.300	7	7	7	7
	(2,2,2)	1079.500	1079.500	1079.500	1079.500	10	10	10	10
Table 9. The average computational time (in seconds) required by the MIP model and the GAs.
n	MIP	All versions of GAs
5	0.050	0.050
6	0.061	0.059
7	0.083	0.070
8	0.212	0.079
9	0.536	0.090
10	1.663	0.100
15	2854.086	0.150
Remark: For n = 15, the MIP model exceeded 7200 s of computation time on 19 instances.
Table 10. Comparison of results for GAs for large-sized problems.
n	AveRPD (Basic GA)	AveRPD (GARI)	AveRPD (TISGA)	Nbest (Basic GA)	Nbest (GARI)	Nbest (TISGA)
20	0.126	0.180	0.000	68	64	80
25	0.545	0.369	0.015	40	48	79
30	0.731	0.490	0.005	19	22	79
35	1.121	0.834	0.000	4	6	80
40	1.012	0.908	0.001	2	8	78
45	0.997	0.994	0.000	1	1	80
50	1.409	1.325	0.005	2	1	79
60	1.209	1.317	0.000	0	0	80
70	1.440	1.271	0.001	0	1	79
80	1.384	1.188	0.000	0	0	80
90	1.733	1.349	0.000	0	0	80
100	2.045	1.419	0.000	0	0	80
200	4.572	3.315	0.000	0	0	80
300	6.110	4.896	0.000	0	0	80
400	7.074	6.055	0.000	0	0	80
500	5.944	5.083	0.028	0	5	75
Mean	2.341	1.937	0.003	8.50	9.75	79.31
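In Table 10, AveRPD denotes the average relative percentage deviation of each GA variant from the best solution found, and Nbest apparently counts the instances (out of 80 per job size, i.e., eight combinations of (L, TF, R) with ten instances each) for which the variant attains that best solution. A common definition, assuming RPD is measured against the best total weighted tardiness obtained among the compared methods for each instance with TWT_best > 0, would be

$$ \mathrm{RPD} = \frac{TWT_{\mathrm{method}} - TWT_{\mathrm{best}}}{TWT_{\mathrm{best}}} \times 100\%. $$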