Minimizing the Total Weighted Waiting Times and Instability in a Rescheduling Problem with Dynamic Jobs Weight

Thanks to smart technological tools, customers can create or modify their orders at any moment. This reality has forced many production firms to pay close attention to their rescheduling processes. In the literature, most rescheduling problems consider classical efficiency measures. However, some existing works also consider stability as a measure for limiting the deviation from the initial schedule. In this work, we aim to bridge a gap in the existing rescheduling literature by investigating a new approach that simultaneously measures efficiency by the total weighted waiting times and stability by the total weighted completion time deviation. This combination of criteria is very significant in industrial and hospital environments. In this paper, a single machine rescheduling problem with jobs arriving over time is considered. A mixed integer linear programming (MILP) model is designed for this problem, together with an iterative predictive-reactive strategy for dealing with the online part. Numerical results show that, each time the jobs are rescheduled, the low-weight ones move forward. Consequently, a new concept, consisting in increasing the job weights as a function of time, is established. The effect of this new concept is evaluated through the variation of the average flowtime. Finally, the computing time of the MILP resolution is studied to explore its limitations.


Introduction
Through the development of new technologies, industry is currently experiencing its fourth revolution, referred to as Industry 4.0. This reality has a significant impact on industrial management processes [1][2][3]. Customer practices have changed as well: customers can, at any moment, create or modify their orders through new technological tools such as laptops, tablet computers, or smartphones [4,5]. These situations have an impact on production scheduling [6,7]. In addition, production problems, such as mechanical machine breakdowns or a lack of raw materials, can also disrupt an already established sequence. These factors force many production firms to react by updating the existing schedule. Hence, rescheduling is necessary to revise schedules in a cost-effective manner [8].
Rescheduling is the mechanism of updating an established production schedule in response to disruptions [8]. Many types of disruptions have been studied, such as the arrival of new jobs [9,10], machine breakdowns [11][12][13], operational unit failures [14], job rejection [15], quality problems and shortages of raw materials [16,17]. Over the last few years, many studies have dealt with this area and devoted increasing attention to it. Indeed, rescheduling studies have considered many types of machine environments [18][19][20][21].
Classical efficiency criteria are often used for measuring the performance of a schedule, such as the makespan [22][23][24][25], makespan combined with tardiness [26], total completion time [27,28], total weighted completion time [29,30], total weighted tardiness [31], total tardiness [32], total flow time [33] and maximum lateness [34]. Unlike existing works, this one investigates new performance measures. In terms of efficiency, the Total Weighted Waiting Times (TWWT) is considered as a criterion. The waiting time represents the duration that a job has waited in the system before being executed [35]. In some real-life environments, the TWWT is a very significant criterion. For instance, in production systems, it can represent the waiting time of a job in front of a workstation, with the importance of the job taken as its weight. In operating room scheduling, it represents the delay between a patient's arrival and the starting time of his treatment, with the patient's emergency level taken as the weight. Guo et al. [36] addressed a problem of rescheduling jobs with the maximum waiting time as the objective. The problem is inspired by the quartz manufacturing industry, in which the waiting time corresponds to materials waiting in an oven before the reheating step, where the rework flow and the regular production flow meet. The authors studied and analyzed the complexity of a rescheduling problem on a single machine, deciding where to insert the rework jobs in the initial sequence of regular jobs so as to minimize the maximum waiting time. Guo and Xie considered the minimization of the total waiting time in a single machine rescheduling environment and formulated two mixed integer programming models [37]. The studied problem is also an illustration from a quartz glass factory, where the waiting time represents the waiting of materials before the welding step.
Minimizing these waits also reduces energy consumption. In our study, the TWWT is considered as a criterion. This criterion was first discussed by Rinnooy Kan in 1976 [38].
Even if efficiency measures the performance of a scheduling system, it is not the only relevant measure for dynamic rescheduling problems. Stability, which limits the deviation from the initial schedule, is also studied in the literature [39][40][41]. During a rescheduling event, this measure evaluates the impact of moving jobs [42]. In real-life applications, changes to the sequence may generate additional costs, such as raw material reordering costs and reallocation costs [43][44][45][46][47][48][49][50]. In this work, a new stability criterion is studied: it measures the difference between the sum of completion times of the jobs in the initial schedule and in the new one, assigning a weight to each job; it is referred to as the total weighted completion time deviation (TWCTD). Associating weights makes it difficult to disrupt the jobs with high weights, as in the case of an urgent patient in a hospital, for example.
Vieira et al. [8] and Herrmann [51] distinguished two basic rescheduling strategies. The first strategy is completely reactive and is often referred to as dynamic scheduling. In this strategy, the schedule is dynamically constructed by dispatching jobs when necessary, using locally available information to decide which job to execute. Dispatching rules or other heuristics are therefore used to prioritize the jobs waiting to be processed on the machine [52,53]. The second is the predictive-reactive rescheduling strategy: it consists in first generating an initial schedule and then updating this schedule each time a disruption occurs [54][55][56][57]. In our approach, the predictive-reactive rescheduling strategy is also adopted, by first solving an initial problem with the objective of minimizing the TWWT. After the occurrence of a disruption, the objective becomes minimizing a new criterion combining the schedule efficiency and the schedule stability, with the TWCTD as the stability measure. As in the majority of rescheduling problems considering both efficiency and stability [58,59], a relationship between the two objectives is established by associating a coefficient with each measure (i.e., α·efficiency + (1 − α)·stability), referred to as the efficiency-stability coefficient. The influence of this coefficient on the system is discussed later. Considering the schedule efficiency and schedule stability simultaneously in the second objective function plays a more significant role in reducing actual costs in real applications. In the literature, some classical single machine scheduling models use rescheduling assumptions to determine new schedules in response to disruptions. These previous models are only based on classical measures; thus, several costs are disregarded.
Along with that, based on the observation of the numerical results, a new concept consisting in increasing the weights as a function of time is introduced. This concept avoids ignoring the jobs with low weights by increasing their weights as a function of the time they spend in the system. The weight-increasing effect is also discussed in the numerical results section.
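As a sketch of this concept, one possible update rule makes a job's weight grow linearly with the time it has spent in the system. The linear form, the function name and the rate β below are our illustrative assumptions, not the rule defined in the paper:

```python
def dynamic_weight(base_weight, release_date, t, beta=0.1):
    """Illustrative dynamic weight: grows linearly with time spent waiting.

    The linear form and the growth rate `beta` are assumptions made for
    illustration only; any non-decreasing function of (t - release_date)
    captures the concept of favoring long-waiting jobs.
    """
    return base_weight + beta * max(0, t - release_date)
```

Under this sketch, a job of base weight 1 released at t = 0 reaches weight 2 after 10 periods, so a long-waiting low-weight job eventually competes with heavier newcomers instead of being postponed indefinitely.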
This work investigates a single machine rescheduling problem in which the disruptions are caused by the arrival of new jobs. The choice of the single machine environment is motivated by the fact that it represents several real-life cases, for example a single operating room in a hospital or a single workstation in a factory. Moreover, according to Pinedo, the single machine provides a basis for heuristics that are applicable to more complicated machine environments [60]. In practice, scheduling problems in more complicated machine environments are often decomposed into sub-problems that deal with single machines. Thus, a mixed integer linear programming model is implemented to describe this problem and an iterative methodology is proposed for dealing with the online part. This work is a contribution to the operational research area. The main research objectives are:
- Considering simultaneously the TWWT as the schedule efficiency measure and the TWCTD as the schedule stability measure, mixing these two criteria into one performance measure.
- Implementing, for this specific problem, a MILP-based mathematical model and an iterative predictive-reactive methodology for solving the online problem, and testing the limits of the MILP resolution in terms of number of jobs.
- Establishing an innovative concept that consists in varying the job weights as a function of the time spent in the system, thus helping low-weighted early operations to be executed.
The rest of the paper is organized as follows. In Section 2, the literature review is discussed. In Section 3, we motivate our research work. The problem and the proposed methodology are described in Section 4. The mathematical model is then presented in Section 5. The numerical results are analyzed in Section 6. Finally, conclusions and perspectives are given in Section 7.

Literature Review
This section is dedicated to a literature review of rescheduling problems. The rescheduling literature is very rich: Vieira et al. [8], Ouelhadj and Petrovic [61] and, more recently, Uhlmann and Frazzon [62] provided detailed reviews of rescheduling problems. Indeed, rescheduling studies have considered many types of machine environments. For example, Da Silva et al. considered a dynamic rescheduling problem in which the set of original jobs has already been scheduled on a single machine when new customer orders arrive, requiring the execution of a new job or the cancellation of a previously ordered one [18]. The authors proposed an approach that reconsiders the current schedule each time a new event arrives; a mixed integer linear programming model is formulated with the aim of minimizing the makespan. Angel-Bello et al. recently studied makespan minimization in a single machine rescheduling problem with job arrivals as disruptions [19]. The authors implemented two rescheduling strategies: the first uses all the jobs available at the rescheduling point to obtain an optimal solution, while the second considers only the jobs that can be completed before the next re-optimization point. Alagöz and Azizoglu studied a rescheduling problem in parallel machine environments under machine eligibility constraints [20]. The total flow time is considered as the efficiency criterion and the number of jobs that change machine after rescheduling as the stability criterion; the authors proposed several heuristics to provide approximate solutions. Gao et al. studied a flexible job shop rescheduling problem with new job insertion [21]. The authors considered multiple objectives to measure rescheduling performance: makespan, average of earliness and tardiness, maximum machine workload and total machine workload. Four sets of heuristics are proposed for this problem and their effectiveness is studied.
Most of these works consider classical efficiency criteria for measuring the performance of the schedule, such as the makespan [22][23][24][25], makespan combined with tardiness [26], total completion time [27,28], total tardiness [32], total flow time [33] and maximum lateness [34]. The majority of these criteria only consider the processing times to construct the schedule. In real applications, when new orders arrive, the priorities of these orders must also be taken into account, such as customer priorities in industrial manufacturing or emergency levels in operating room rescheduling. Some operations, despite their short duration, are more important than others given their high priorities or emergency levels. Therefore, it seems unrealistic to delay short operations to insert new ones based only on durations. Thus, the TWWT is considered as the efficiency criterion in this work.
Even if efficiency measures the performance of a scheduling system, it is not the only relevant measure for dynamic rescheduling problems. Stability, limiting the deviation from the initial schedule, is also studied in the literature. For instance, Akkan studied in 2015 a rescheduling problem considering simultaneously the schedule efficiency, measured by the maximum tardiness, and the schedule stability, measured by the sum of absolute starting time deviations. This stability criterion calculates the sum of the absolute differences between the jobs' starting times before and after rescheduling [44]. The author proved that their algorithm improves the schedule stability without impacting T_max. Liu and Zhou studied a rescheduling problem on identical parallel machines and considered two conflicting rescheduling criteria: the total completion time as a measure of scheduling cost efficiency and the number of jobs assigned to different machines in the original and newly generated schedules as a measure of stability cost [28]. Rahmani and Ramezanian considered a dynamic flexible flow shop scheduling problem that simultaneously measured the schedule efficiency by the total weighted tardiness and the schedule stability by the absolute deviation of completion times [43]. The considered stability criterion measures the difference between the completion times of the jobs before and after rescheduling. Indeed, when the sequence is disrupted, the initial job completion times may change and are replaced by the real completion times obtained after rescheduling.
Moreover, several other works deal with stability measures in rescheduling problems, such as the starting time deviation and the total deviation penalty [42], the starting time deviation [45], the number of operations and starting times that have been altered [46], the absolute positional disruption, the positional disruption and the absolute completion time disruption [47], the total sequence disruption [48], the maximum time disruption (change of the delivery times of jobs to customers) [49] and the rescheduling cost [50]. Most of these works consider the starting or completion time deviation as the criterion to evaluate schedule instability. However, the movement of important orders is not considered in these stability measures. Assigning weights to the stability criterion penalizes more heavily the movements of important or urgent operations; thus, it becomes difficult to change the position of high-priority operations. To the best of our knowledge, only a few papers consider weights in their stability measures. To bridge this gap, this work investigates a new stability measure by considering the TWCTD as the stability criterion.
The predictive-reactive rescheduling strategy is the most widely used strategy to reschedule jobs in response to disruptions [61]. Its principle is based on two steps. The first is the predictive step, which consists in creating an initial schedule. The second is the reactive step, which consists in revising the established schedule each time a disruption occurs, in order to minimize its impact on the schedule performance [8]. Several researchers in the literature have adopted this strategy. For instance, Bahroun et al. recently implemented, in a single machine rescheduling problem, a proactive-reactive strategy consisting, in the first phase, in using tentative information (release dates, processing times and due dates) to construct a set of feasible schedules [63]. Then, in the second, reactive phase, the schedule is progressively built as the horizon rolls forward. Yang and Geunes treated a single machine rescheduling problem in which they used a predictive-reactive strategy to handle the occurrence of uncertain future jobs. Their strategy consists in determining the amount of planned idle time for uncertain future jobs and their positions as the predictive step [64]. Then, when a disruption occurs, the schedule reacts by including the new jobs in the idle time. Wang et al. treated a predictive-reactive approach to a surgery scheduling problem for a single surgical suite subject to random emergency surgeries [65].
The authors generated a primary schedule to optimize the schedule efficiency; then, slack time is inserted to accommodate an emergency surgery when it occurs. In this work, we use the predictive-reactive rescheduling strategy but act differently from previous works. First, we solve an initial problem with the objective of minimizing the TWWT. After the occurrence of a disruption, the objective becomes minimizing a new criterion combining the schedule efficiency and the schedule stability, with the TWCTD as the stability measure. In addition, we associate these two criteria through a coefficient α, helping the decision maker find a trade-off between the two aspects of the problem.
Several authors have opted to model operating room rescheduling problems as machine rescheduling problems. For example, Stuart and Kozan studied the rescheduling problem of the day-to-day running of a day surgery unit. The authors modeled their problem as a single machine scheduling problem with sequence-dependent processing times and due dates [66], and developed a branch and bound method to tackle it. Wang et al. addressed a surgery scheduling problem in a single surgical suite, where each patient should be sequenced in the same order over three stages [65]. The authors considered the arrival of random emergency surgeries as the disruption. They modeled their system as a no-wait permutation flow shop scheduling problem and designed a predictive-reactive strategy to handle the disruption. Our research motivation is also inspired by the operating room rescheduling case; we briefly present it in the following section.

Motivation
This research is inspired by real-life systems; our motivation is taken from a hospital situation, illustrated in the following paragraph.
The operating theater (OT) constitutes the largest hospital cost category, at 33% of hospital department costs [67,68]. Therefore, operating room scheduling has been identified as a major challenge by hospital decision-makers. In addition, according to Cardoen et al., long waiting lists are the most frequently heard complaints in health services [69]. Thus, the minimization of patient waiting time is a very interesting research area: by acting on this issue, we can improve patient satisfaction and reduce hospital costs. In fact, two types of operations are distinguished. The first is elective operations, for which all the information is known in advance. The second is emergency operations, which can occur at any time due to urgent patients' arrivals; these are uncertain and are considered a disruption of the already established schedule. On the other hand, the problem of a single operating room is usually studied in the literature since it can be extended to more complicated machine environments [60]. For example, Denton et al. studied a stochastic optimization model for the daily scheduling of a single operating room, then extended their study to a two-stage stochastic resource setting [70]. This justifies the fact that we model the problem as a single machine rescheduling problem. In addition, patients can be divided into five major groups according to their surgery priorities [71]. Thus, each job is assumed to carry a weight representing its emergency level. In this study, the objective is to model the described practical example as a single machine rescheduling problem. We proceed in two phases. In the predictive (elective) phase, the aim is to minimize the patients' weighted waiting times as an efficiency criterion. Then, in the reactive (emergency) phase, the aim is to minimize the efficiency criterion combined with a stability criterion, to reduce the deviation from the initial schedule.
To the best of our knowledge, there is no contribution in the literature dealing with this practical example while simultaneously taking into account the two previous criteria. The described system can also be encountered in industrial systems, as presented in [36], considering the waiting time as materials waiting before the reheating step in the oven and the arrival of new orders as a disruption. Weights can also be added to represent customer priorities.

Problem Description
In this paper, we study a single machine rescheduling problem with disruptions caused by the arrival of new jobs. The considered problem is solved in two phases. The phase before the disruptions concerns the TWWT minimization in a static single machine scheduling problem. The second phase aims to minimize, after each disruption, simultaneously the TWWT of the jobs and the TWCTD of the uninitiated jobs. In the considered problem, each job j is associated with a weight w_j, a release date r_j and a processing time p_j. Once the processing of a job j starts, it is processed without preemption up to its completion time C_j. In this phase of the problem (predictive phase), all the job information (p_j, r_j, w_j) is known in advance. Thus, the waiting time W_j of job j is defined as W_j = C_j − p_j − r_j (see Figure 1).

Efficiency criterion: The waiting time represents the period that job j waits, from its release date r_j until its execution at starting time S_j, i.e., W_j = S_j − r_j. Thus, W_j = C_j − p_j − r_j, knowing that S_j = C_j − p_j (see Figure 1).
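As a quick check of this efficiency criterion, the helper below computes W_j directly from the definition (the function and variable names are ours):

```python
def waiting_time(completion, processing, release):
    """W_j = C_j - p_j - r_j: the time job j spends waiting before it starts."""
    return completion - processing - release

# A job released at t = 2 and processed for 3 time units that completes at
# t = 9 started at S_j = 9 - 3 = 6, so it waited 6 - 2 = 4 time units.
print(waiting_time(9, 3, 2))  # 4
```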
In this system, it is supposed that N = {1, 2, ..., n} is a set of jobs already available for scheduling at time t = 0. In the predictive phase, the objective is to determine an initial schedule minimizing the Total Weighted Waiting Times, i.e., ∑_{j=1}^{n} w_j·W_j. To solve this initial problem, a mathematical model is implemented. This problem can be represented as 1|r_j|∑ w_j W_j in the standard notation. During the execution of the initial schedule, a new job can arrive. Accordingly, this new job is combined with the uninitiated jobs to create a new set of jobs N′ = {1, 2, ..., n′} and the existing schedule is updated. The new objective is to simultaneously optimize the schedule efficiency, measured with the original objective ∑_{j=1}^{n′} w_j·W_j, and the schedule stability, measured with the total weighted completion time deviation, as described hereafter.
Stability criterion: Let Co_j be the completion time of job j, calculated when this job was scheduled for the first time. When a disruption occurs, the schedule is modified, and job j finishes at a new completion time C_j. Thus, the difference between Co_j and C_j is used to assess the instability cost. In addition, a weight w_j is associated with each job j, which makes it difficult to deviate the important orders. The stability measure is thus calculated as ∑_{j=1}^{n′−z} w_j·(C_j − Co_j), referred to as the total weighted completion time deviation (TWCTD). In this approach, the instability penalty concerns only the n′ − z jobs existing in the previous schedule, where z denotes the number of newly arriving jobs, which are not concerned by the stability measure. However, once a new job has been scheduled in its first sequence, it is concerned by the stability criterion in subsequent schedules. When w_j·(C_j − Co_j) > 0, a deviation has occurred and an instability penalty is counted.
Moreover, both objectives are combined through α, the efficiency-stability coefficient, introduced to help the decision maker find a trade-off between the two optimized aspects. We associate α with the efficiency criterion and (1 − α) with the stability criterion, where α is a real number between 0 and 1. Finally, the new objective function, which simultaneously measures the schedule efficiency and the schedule stability, is defined as α·∑_{j=1}^{n′} w_j·W_j + (1 − α)·∑_{j=1}^{n′−z} w_j·(C_j − Co_j). Thus, the dynamic rescheduling problem can be represented as 1|r_i|α∑ w_i W_i + (1 − α)∑ w_i(C_i − Co_i) in the standard notation. The proposed predictive-reactive rescheduling strategy for handling the online part is described in Section 4.2.
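A direct transcription of this combined objective, with zero-based indexing and names of our choosing, may read:

```python
def combined_objective(alpha, C, p, r, w, Co):
    """alpha * TWWT + (1 - alpha) * TWCTD after a rescheduling event.

    C, p, r, w cover all n' jobs of the new schedule; Co holds the initial
    completion times of the first n' - z jobs (the z new jobs carry no
    stability penalty until they have been scheduled once).
    """
    twwt = sum(w[j] * (C[j] - p[j] - r[j]) for j in range(len(C)))
    twctd = sum(w[j] * (C[j] - Co[j]) for j in range(len(Co)))
    return alpha * twwt + (1 - alpha) * twctd
```

Setting α = 1 recovers the pure efficiency objective of the predictive phase, while α = 0 freezes the old jobs in place as far as possible.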

Iterative Predictive-Reactive Strategy
The proposed predictive-reactive rescheduling strategy is divided into two phases. The predictive phase consists of solving a static scheduling problem; in this phase, it is assumed that all the job information is known in advance, and the aim is to minimize the TWWT of the jobs, which represents the schedule efficiency measure. The reactive phase starts after the establishment of the initial schedule. It consists of going iteratively through the simulation horizon T; at each period, the process updates the schedule in response to a disruption caused by a new job arrival. In this phase, the strategy optimizes not only the schedule efficiency but also the schedule stability, by combining two criteria: the TWWT as the efficiency measure and the TWCTD as the stability measure. This strategy provides, at each rescheduling period, an optimal solution since the decisions are taken locally, unlike other rescheduling works that used proactive approaches consisting in predicting the dates of the disruption occurrences and taking the jobs into consideration in the original schedule formulation [43].
The proposed strategy consists of going iteratively through the simulation horizon, period by period. At each period, a job can arrive; the newly arrived job is combined with the set of uninitiated jobs for rescheduling. To handle the arrival of new jobs at each period, we first consider a finite time horizon [0, T], discretized into periods of length ∆t (see Figure 2). For discrete production, discrete flow models are often considered more realistic; they are used for performance evaluation, to track individual parts, or in real-time flow control [72]. Indeed, ∆t is the time period length and also represents the time step. It is assumed that jobs may occur only at these times; when a job arrives at time t, this date becomes its release date. It is also assumed that ∆t = 1 unit of time, the same time unit already in use for the job data p_j and r_j, to simplify the calculations.

We then define θ(t), a binary variable equal to 1 if a job occurs at time t and 0 otherwise. The generation of its values is explained in the numerical results section. Algorithm 1 below describes the iterative predictive-reactive strategy.

Algorithm 1: Iterative predictive-reactive strategy
Build the initial schedule of the job set LIST (predictive phase)
For t = 1 To T
    If θ(t) = 1 Then
        Add the newly arrived job to LIST
        Reschedule the uninitiated jobs of LIST (reactive phase)
    End If
End For
The algorithm iterates over the time horizon period by period. Thanks to the state of θ(t), it checks whether a job arrives. If a job arrives, the algorithm updates the set of jobs LIST and reschedules the new set of jobs. Figure 3 summarizes the adopted predictive-reactive rescheduling strategy.
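As a minimal illustration, the loop of Algorithm 1 can be sketched in Python. The wSPT reordering used here is only a hypothetical stand-in for the MILP resolution performed at each rescheduling step, and the job dictionaries are an assumed data layout, not the paper's implementation:

```python
def wspt(jobs):
    # Stand-in rescheduling rule: order jobs by p_j / w_j
    # (the paper solves a MILP at each step instead).
    return sorted(jobs, key=lambda j: j["p"] / j["w"])

def predictive_reactive(initial_jobs, arrivals, T, reschedule=wspt):
    """Iterate over the horizon [0, T] period by period; `arrivals` maps a
    period t with theta(t) = 1 to the job arriving at that period."""
    LIST = list(initial_jobs)          # set of uninitiated jobs
    schedule = reschedule(LIST)        # predictive phase: static problem
    for t in range(1, T + 1):
        if t in arrivals:              # theta(t) = 1: a disruption occurs
            job = dict(arrivals[t], r=t)   # the arrival date becomes the release date
            LIST.append(job)
            schedule = reschedule(LIST)    # reactive phase: reschedule LIST
    return schedule
```

The rescheduling rule is passed as a parameter so that the same loop can drive either a heuristic or an exact solver.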

Mathematical Models
In this section, a MILP model is implemented, it is divided in two parts. In the first subsection, we describe the mathematical model before the disruption. In the second one, we describe the mathematical model after the disruption.

Mathematical Model before the Disruption
The proposed model is based on the sequence position formulation [73,74], it consists in assigning each job to a position. A bigM constraint is included, proposed by Kooli and Serairi, for making the model in a linear form [75]. The model parameters are given below:

S_k, Cp_k, W_j, and C_j ≥ 0, ∀j ∈ N, ∀k ∈ K (9)
x_jk ∈ {0, 1}, ∀j ∈ N, ∀k ∈ K (10)
Constraint (1) ensures that there is only one job per position. Constraint (2) ensures that there is only one position per job. Constraint (3) makes the starting time of the kth position greater than or equal to the assigned release date. Constraint (4) makes the starting time of the (k + 1)th position greater than or equal to the completion time of the previous position. Constraint (5) specifies that the completion time of the kth position equals its starting time plus the assigned processing time. Constraint (6) defines the completion time of job j, which is greater than or equal to the completion time of the assigned position, where M must be sufficiently large; accordingly, M is taken as a function of the total processing times and the maximum release date, to account for job availability: M = ∑_{j=1..n} p_j + max_j r_j. Constraint (7) is a cut that reduces the computing time. Constraint (8) defines the waiting time of job j according to its completion time, release date and processing time. Constraint (9) makes all continuous variables greater than or equal to zero. Constraint (10) defines the binary decision variables.
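To make the role of constraints (1)-(5) and (8) concrete, the following sketch evaluates a fixed sequence on a single machine and finds the TWWT optimum by exhaustive search over all job-to-position assignments. This is only a surrogate for the MILP, usable on tiny instances, and the test data are hypothetical:

```python
from itertools import permutations

def evaluate(seq):
    """Timing of a fixed sequence: the start of each position respects the
    assigned release date and the previous completion (constraints (3)-(4)),
    C_k = S_k + p (constraint (5)), W_j = C_j - r_j - p_j (constraint (8))."""
    t, waits = 0, {}
    for job in seq:
        start = max(t, job["r"])
        t = start + job["p"]
        waits[job["id"]] = t - job["r"] - job["p"]
    return waits

def twwt_optimum(jobs):
    # Enumerate all assignments of jobs to positions (constraints (1)-(2)).
    best_seq, best_val = None, float("inf")
    for seq in permutations(jobs):
        waits = evaluate(seq)
        val = sum(j["w"] * waits[j["id"]] for j in seq)
        if val < best_val:
            best_seq, best_val = list(seq), val
    return best_seq, best_val
```

On n jobs this enumerates n! sequences, which is exactly why the positional MILP (or a lower bound) is needed beyond toy sizes.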
Example 1. In this example, the processing times p_j, the release dates r_j and the weights w_j for 5 jobs are given in Table 1. The weight values are between 1 and 5; they represent 5 emergency levels or 5 customer priority levels. The processing times can represent the durations of operations or a product manufacturing time; they follow a uniform distribution between 1 and 4, giving operations of 1 ut (10 min) to 4 ut (40 min). The release dates represent the availability of the jobs. In the initial problem, we assumed that r_j follows a uniform distribution between 1 and 2, in order to have an initial solution on which we generated the disruptions. More details about the instance generation are presented in Section 6.1. The solution is given with the objective of minimizing the total weighted waiting times. The obtained optimal sequence is CAEDB and the obtained value is ∑_{j=1..5} w_j W_j = 31. The optimal solution and its related calculations are given in Table 2.

Complexity and Lower Bound for the 1|r_j|Σw_jW_j
In this section, we present the complexity of the 1|r_j|Σw_jW_j problem and propose a lower bound.
Theorem 1. The 1|r_j|Σw_jW_j problem is strongly NP-hard.

Proof of Theorem 1. According to Rinnooy Kan, the Σw_jW_j criterion is equivalent to Σw_jC_j, because W_j = C_j − r_j − p_j and r_j and p_j are constants whatever the sequence [38]. Thus, min ∑_{j=1..n} w_j (C_j − r_j − p_j) is equivalent to min ∑_{j=1..n} w_j C_j up to a constant. On the other hand, Lenstra et al. showed that 1|r_j|Σw_jC_j is strongly NP-hard [76]. Labetoulle et al. proved that the problem remains strongly NP-hard even if preemption is allowed [77]. Hence, the 1|r_j|Σw_jW_j problem is strongly NP-hard.
According to Nessah et al., when all release dates are equal, the wSPT rule (Weighted Shortest Processing Time), in which jobs are sorted in increasing order of p_j/w_j, optimally solves the 1||Σw_jC_j problem [78]. However, the wSPT rule cannot optimally solve the problem when release dates are introduced. In the following, a lower bound for 1|r_j|Σw_jW_j is presented.

Theorem 2. The solution obtained with wSRPT is a lower bound for 1|r_j|Σw_jW_j.
Proof of Theorem 2. Ahmadi obtained in 1990 a lower bound for the 1|r_j|ΣC_j problem by allowing preemption and using the SRPT (Shortest Remaining Processing Time) rule [79]. In our case, we consider the preemptive problem 1|r_j, pmtn|Σw_jW_j and use wSRPT (Weighted Shortest Remaining Processing Time) to obtain a lower bound for 1|r_j|Σw_jW_j. Let us consider the example of Table 1; the wSPT list is {A, C, E, D, B}. Table 3 presents the sequence obtained with wSRPT, denoted σ_wSRPT. Table 3. Solution obtained with wSRPT.

The value obtained using wSRPT is ∑_{j=1..5} w_j W_j = 30, instead of 31 without preemption. Whatever the solution obtained by solving the 1|r_j|Σw_jW_j problem, it must be greater than or equal to the value obtained by σ_wSRPT. Thus, the solution obtained by σ_wSRPT is a lower bound for the 1|r_j|Σw_jW_j problem. The new objective function, which combines the efficiency criterion TWWT and the stability criterion TWCTD through the coefficient α ∈ [0, 1], is given by:
min α ∑_{j=1..n′} w_j W_j + (1 − α) ∑_{j=1..n} w_j (C_j − Co_j)
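A unit-time simulation of the wSRPT rule can be written as follows; this is a sketch under the paper's assumptions of integer data and ∆t = 1, and the job data in the test are hypothetical, not those of Table 1:

```python
def wsrpt_lower_bound(jobs):
    """Preemptive wSRPT: at each unit period, run the released, unfinished
    job with the smallest (remaining processing time / weight) ratio.
    The resulting total weighted waiting time is a lower bound for
    1|r_j|Sum w_j W_j (Theorem 2)."""
    remaining = {j["id"]: j["p"] for j in jobs}
    completion = {}
    t = 0
    while remaining:
        avail = [j for j in jobs if j["id"] in remaining and j["r"] <= t]
        if not avail:                      # machine idles until the next release
            t = min(j["r"] for j in jobs if j["id"] in remaining)
            continue
        j = min(avail, key=lambda j: remaining[j["id"]] / j["w"])
        remaining[j["id"]] -= 1           # one unit of processing (preemption allowed)
        t += 1
        if remaining[j["id"]] == 0:
            completion[j["id"]] = t
            del remaining[j["id"]]
    return sum(j["w"] * (completion[j["id"]] - j["r"] - j["p"]) for j in jobs)
```

Since preemption can only reduce the weighted waiting times, the returned value never exceeds the non-preemptive optimum.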

Mathematical Model after the Disruption
The constraints (1) to (10) are also used for this new model. In this second case, it is supposed that a disruption occurs while the machine is executing a job. We denote by td the current time at which the disruption occurs. All jobs that started their execution before td must keep the same position. Thus, we denote by xo_jk the variables assigning jobs to positions in the original schedule, which are memorized when the initial model is solved. Constraint (11), which forces these jobs to keep their original positions (x_jk = xo_jk), preserves the same sequence. Constraint (12) makes the completion time of each job greater than or equal to its original completion time, since the original completion times have been obtained by minimizing the principal criterion TWWT only; hence, no job can be moved forward after the rescheduling.
C_j ≥ Co_j, ∀j ∈ N (12)
The new jobs have the same entry parameters as the initial ones: for each job, we have r_j, w_j and p_j.
The new parameters are: n′ = n + z, where z is the number of newly arrived jobs.
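The effect of constraint (11) can be sketched as follows: jobs already started before td are frozen in their original positions, and only the tail of the sequence is re-optimized together with the new jobs. The wSPT reordering of the tail is a hypothetical stand-in for the after-disruption MILP, and the data layout is assumed:

```python
def reschedule_after_disruption(original_seq, new_jobs, td):
    """Split the original sequence at td: jobs started before td keep their
    positions (constraint (11)); the unstarted jobs are re-sequenced together
    with the new arrivals."""
    t, frozen, pending = 0, [], []
    for job in original_seq:
        start = max(t, job["r"])
        if start < td:
            frozen.append(job)       # already started: position preserved
            t = start + job["p"]
        else:
            pending.append(job)
    pending += new_jobs
    # Stand-in re-optimization of the tail (the paper solves a MILP here).
    tail = sorted(pending, key=lambda j: j["p"] / j["w"])
    return frozen + tail
```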
Example 2. In this example, the same initial problem is considered, assuming that the jobs F and G arrive while the machine executes the schedule (see Figure 4). The job F arrives at r_F = 2, with p_F = 1 and w_F = 5. Coefficient α is set to α = 0.5. The optimal sequence becomes CAFEDB; the optimal solution and the corresponding objective value are given in Table 4. Then, the job G arrives at r_G = 3, with p_G = 1 and w_G = 1, still with α = 0.5 (see Figure 5). The optimal sequence becomes CAFEDBG; the new optimal solution and its objective value are given in Table 5.

Numerical Results
The proposed MILP model is programmed with the FICO Xpress IVE solver. The simulations have been performed on a Core i7 2.90 GHz laptop. The model implementation has been designed according to the flowchart presented in Figure 3. In the following section, a detailed explanation of the data generation is presented.

Instance Generation
The study has been conducted on different instances, providing at each rescheduling step an optimal solution. Table 6 presents the parameters used. The chosen data can be adapted to real cases, either in a hospital or in an industrial environment. The unit of time (ut) is equivalent to a duration which may correspond to a real situation. In our case, it is assumed that ∆t = 1 ut and that 1 ut corresponds to 10 min, i.e., every 10 min, an event may occur (arrival of a new order or of a new urgent patient). Most importantly, the other problem data (p_j and r_j) are defined with the same unit of time, in order to stay close to a realistic situation. The simulation has been conducted over an 8 h time horizon (480 min), representing a hospital or factory opening time. Hence, T = 8 h = 480 min = 48 ut. The weight values can represent the 5 customer priority levels or the five emergency levels. The processing times can represent the durations of surgical operations or product manufacturing times. They are assumed to follow a uniform distribution between 1 and 4, giving operations of 1 ut (10 min) to 4 ut (40 min). The values of the variable θ(t) are randomly generated using the Bernoulli distribution, giving, at each time t, the value one with probability p_θ and zero with probability 1 − p_θ. By definition, p_θ ∈ [0, 1]; it represents the appearance frequency of a job at each step. We start by testing a small appearance frequency, p_θ = 0.2, since below 0.2 there are not enough disruptions. We then test a medium appearance frequency, p_θ = 0.5, and vary p_θ up to 0.7, until the MILP becomes unable to solve the problem in a reasonable time. Four main studies are thus established: first, the impact of the efficiency-stability coefficient α on the schedules; secondly, the impact of the weight increasing effect on the system.
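The instance generation described above can be sketched as follows; this is a minimal sketch assuming the stated distributions, with an illustrative seed and data layout:

```python
import random

def generate_instance(n_init=5, T=48, p_theta=0.5, seed=0):
    """Initial jobs: p_j ~ U{1..4}, r_j ~ U{1..2}, w_j ~ U{1..5};
    theta(t) ~ Bernoulli(p_theta) flags the periods with a job arrival."""
    rng = random.Random(seed)
    initial = [{"id": f"J{i}", "r": rng.randint(1, 2),
                "p": rng.randint(1, 4), "w": rng.randint(1, 5)}
               for i in range(n_init)]
    theta = [1 if rng.random() < p_theta else 0 for _ in range(T)]
    return initial, theta
```

Fixing the seed makes the ten instances of each study reproducible across the tested values of α and p_θ.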
Then, a comparative analysis of the proposed method with other heuristics and dispatching rules is performed. Finally, the computing time of the MILP resolution is measured on different instances to explore its limitations according to the number of jobs.

Impact of the Efficiency-Stability Coefficient α on the Schedules
In this study, the value of α is varied from 0.5 to 1 to study its impact on the optimal schedule. It is assumed that the TWWT is the principal criterion and the TWCTD a secondary one; thus, values of α below 0.5 are not tested. If α became too small, the schedule efficiency would be negligible and the situation would be unrealistic in real-life problems. The study has been conducted and analyzed on 10 different instances, generated according to the parameters presented in Section 5.1. In the following example, we present the results obtained on the instance of Table 1.

Example 3.
After the resolution of the initial problem (see Table 2), it is assumed that new jobs arrive and disrupt the system. A total of 20 jobs are treated online. The obtained results are given in Table 7.
In Table 7, p_θ = 0.5 (20 jobs appeared over a time horizon of 48 ut). At each disruption, the jobs are rescheduled and three values are extracted: the value of TWWT, the value of TWCTD and the value of the objective function combining TWWT and TWCTD through α. We have also numbered the schedules with their step numbers.
When α = 1, the objective function only considers the efficiency criterion, TWWT. In this case, when the release dates are equal, the wSPT rule is optimal for this problem. Otherwise, the jobs have been scheduled according to their weights and processing times.
When α decreases, the schedule stability is given more consideration. Hence, a decrease of the stability criterion TWCTD is observed, as well as an increase in the values of TWWT. This increase becomes larger in the last steps, when the number of jobs is larger (see Figure 6). The more disruptions, the higher the value of TWWT. In fact, the increase of TWWT in the last steps means that some jobs stay for a long time in the system, and this increase grows as α decreases. To illustrate this phenomenon, we present hereafter the disposition of the first five jobs in the schedule (see Table 8). The increase of the TWWT at each rescheduling step is due to the low-weight jobs: these jobs move forwards after each disruption, since the new arriving jobs are inserted in the middle of the sequence when they have a high weight.
However, when α decreases, the stability criterion is given more consideration. Thus, the existing jobs make fewer movements, so as not to disrupt the previous sequence. Therefore, the new arriving jobs are placed last, which also impacts the value of TWWT when they have a high weight. This explains the increase of the TWWT for small values of α.
Remark 1. The high-weight jobs are always prioritized. Consequently, the jobs with low weights are delayed, even if they arrive early (the case of job B in the example). In fact, the low-weight jobs move forwards at every rescheduling step, and there is little chance for these jobs to be executed. To deal with this problem, a new concept is conceived: increasing the jobs' weights as a function of time. This is the purpose of Section 5.3.
To analyze the behavior of the criteria with respect to the number of jobs, we studied the variation of the final values of TWWT, TWCTD and the objective function for different values of α and p_θ. Ten different instances have been generated according to the parameters of Section 5.1. Each instance starts with 5 initial jobs; it is then disrupted by new job arrivals, with appearance frequencies respectively equal to 0.2, 0.5 and 0.6. Averages are presented in Table 9.
Table 9. Averages of TWWT, TWCTD and objective function, for different values of α and p_θ.
The increase of α favors the TWWT, since its average value decreases with α. However, the average of TWCTD increases with α, since the stability has less priority. This behavior is more pronounced when p_θ increases, i.e., when the number of appearing jobs increases. When the system is more disrupted by new job arrivals, the possibility of having high-weight jobs increases, as well as the possibility of delaying the low-weight jobs. Hence, increasing their weights as a function of time is judicious. Other tests have been performed with p_θ exceeding 0.6, but the MILP fails to solve the problem in a reasonable time.

The Weight Increasing Effect
As mentioned above, the jobs with low weights are always placed at the end of the sequence. Thus, it is consistent to proceed with dynamic weights. The weights then become a function of the time t, the release date r_j and a weight index ρ. The proposed formula is: w_jt = w_j (t − r_j + 1)^ρ. ρ regulates the weight increasing effect. It is assumed that its value lies between 0 and 1, in order to obtain relatively small augmentations of the weight values. The case ρ = 0 means that the jobs' weights are static and do not vary over time. Conversely, when ρ = 1, the weight variation becomes very large and may impact the schedule efficiency, and choosing a value of ρ exceeding 1 would be unrealistic. Therefore, the selected range for ρ is [0, 1]. With this formula, the weight becomes a function of the time t: (t − r_j) is the difference between the current time and the release date of job j, and without the additional +1 term the weight would be zero at t = r_j, which is not the aimed effect. The new objective function is then obtained by replacing w_j with w_jt in the objective. This new weight expression prevents the earliest low-weight jobs from waiting too long in the system, by increasing their weights as a function of time. The dates of job arrivals are defined by their release dates r_j and the dates of their execution by their completion times C_j. Thus, to discuss the trade-off involved in increasing the weights over time, we analyze two criteria. The first one is the Mean Flow Time (MFT), MFT = (1/n′) ∑_{j=1..n′} (C_j − r_j), which is the sum of the job flowtimes divided by n′.
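The dynamic-weight formula above can be written directly; as in the text, ρ = 0 recovers the static weights and the +1 term keeps the weight at w_j when t = r_j:

```python
def dynamic_weight(w, r, t, rho):
    """w_jt = w_j * (t - r_j + 1)**rho, for t >= r_j and rho in [0, 1]."""
    assert t >= r and 0 <= rho <= 1
    return w * (t - r + 1) ** rho
```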
The flowtime of a job measures the duration between its release date and its completion time, F_j = C_j − r_j. The second criterion is the standard deviation of the flowtimes, which assesses the deviations of the jobs' flowtimes from the MFT. Table 10 presents the results obtained on the same instance as the one studied in Section 5.2, with ρ varied from 0 to 1 and α from 0.5 to 1. As can be seen, the increase of ρ increases the value of MFT. However, the standard deviation of the flowtimes decreases, which means that the dispersion around the MFT decreases; in other words, the values of F_j become closer to each other. Thus, the increase of ρ helps the low-weight jobs to be scheduled.
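The two indicators can be computed as follows (a small sketch; the job data in the test are hypothetical):

```python
from statistics import mean, pstdev

def flow_times(jobs, completion):
    # F_j = C_j - r_j for each job.
    return [completion[j["id"]] - j["r"] for j in jobs]

def mean_flow_time(jobs, completion):
    # MFT = (1/n') * sum of the flowtimes.
    return mean(flow_times(jobs, completion))

def flow_time_std(jobs, completion):
    # Dispersion of the flowtimes around the MFT.
    return pstdev(flow_times(jobs, completion))
```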
On the other hand, the increase of α makes it harder to decrease the standard deviation of the flowtimes. For example, the standard deviation of the flowtimes with ρ = 0.6 is equal to 5.5 when α = 1, instead of 5.21 when α = 0.5. When α decreases, the stability criterion is given more consideration: the new arriving jobs are systematically placed at the end of the schedule, so as not to disrupt the established schedule, and the earliest low-weight jobs are executed. Conversely, if the value of α increases, the efficiency criterion is given more consideration and the system is more disrupted; in this case, a high value of ρ is needed. In conclusion, if α increases, the weight index ρ must also increase, to help the low-weight jobs to be scheduled.
To analyze the overhead of the weight index, the variation of the objective values is studied for different values of ρ. For each value of ρ, the final values of TWWT, TWCTD and the objective function, computed with w_j, are presented, as well as the percentage deviation of the objective function from the static-weight case. The study is performed on the same instance, to ensure comparability of the results, and it is assumed that α = 0.8. In addition, different values of p_θ are tested. The results are presented in Table 11. The increase of the weight index ρ helps the system stability, since the value of TWCTD decreases: the earliest jobs must keep the same positions to be executed. However, the TWWT increases with ρ, since the stability is given more priority. An increase in the objective function value is also observed, larger when p_θ = 0.6. In this case, the manager must decide whether or not to use the weight index. Other tests have been performed with p_θ exceeding 0.6, but the MILP fails to solve the problem in a reasonable time.

Comparative Analysis
As mentioned in the introduction, some papers consider different dispatching rules and heuristics to implement the reactive approach for rescheduling the jobs in response to disruptions. Some frequently used dispatching rules and heuristics are listed in [80]. Among them, we select FIFO (first in, first out) for comparison with the proposed MILP, since it is commonly used in industrial companies when no rescheduling strategy is implemented [43]. In addition, it is also interesting to compare with a heuristic considering the jobs' weights. In [81], the authors compared their MILP-based heuristics with wSPT, since they also considered the jobs' weights in a flowshop rescheduling problem. Therefore, we adapted these two heuristics, FIFO and wSPT, to match our problem. Ten different instances are tested. Each instance starts with 5 initial jobs; it is then disrupted, at each step, by the arrival of a new job with an appearance frequency p_θ respectively equal to 0.2, 0.6 and 0.7. Averages are presented in Table 12. The proposed MILP always provides better solutions than wSPT and FIFO. wSPT, which sequences the jobs in increasing order of p_j/w_j, misses some optimal solutions since, in the presence of release dates, this heuristic is unable to solve the problem optimally; in addition, it does not take the stability aspect into account. FIFO, which places the new job in the last position at each rescheduling step, provides poor solutions: even if it gives stable sequences, it is weak in terms of efficiency, since it does not consider the weights when constructing the schedules. Indeed, when α increases, the schedule efficiency becomes more important; in this case, the FIFO solution moves away from the MILP solution, and this gap becomes wider when the number of jobs is large. Figure 7 depicts this behavior.
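The two baselines adapted for the comparison can be sketched as follows, together with the TWWT evaluation of a fixed sequence; the job data in the test are hypothetical:

```python
def fifo(jobs):
    # FIFO: jobs sequenced in order of arrival; each new job goes last.
    return sorted(jobs, key=lambda j: j["r"])

def wspt(jobs):
    # wSPT: increasing order of p_j / w_j (optimal only when all r_j are equal).
    return sorted(jobs, key=lambda j: j["p"] / j["w"])

def twwt(seq):
    # Total weighted waiting time of a fixed sequence on a single machine.
    t, total = 0, 0
    for j in seq:
        c = max(t, j["r"]) + j["p"]
        total += j["w"] * (c - j["r"] - j["p"])
        t = c
    return total
```

Even on two jobs, ignoring the weights can be costly for FIFO, which illustrates the efficiency gap observed in Table 12.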
In terms of computation time, the two heuristics, wSPT and FIFO, are very fast compared to the MILP method, for which a break is observed when p_θ = 0.7. In the following, a thorough study of the computing time is presented.

Computing Time Study
To analyze the maximum time spent in an iteration, we measured the maximum duration of an iteration (MDI) of the model resolution on different instances. The study has been performed with the FICO Xpress IVE solver on a Core i7 2.90 GHz laptop. Ten different instances per problem size are tested, generated randomly using the values of Section 5.1. Then, the maximums, minimums, averages and standard deviations of the MDI, in seconds, are calculated. We have tested sets of instances containing respectively 5 and 7 initial jobs. Then, over a horizon of T = 48 ut, we have disrupted these jobs by the arrival of new ones. α is set to 0.8. Different values of p_θ are tested (0.2; 0.5; 0.6; 0.7) to analyze their impact on the MDI. The obtained results are given in Table 13. The set of jobs that must be executed at each iteration grows quickly with p_θ, as does the average MDI. The latter also increases with the number of initial jobs. Thus, the average MDI depends on both the appearance frequency of the jobs and the set of initial jobs: if these become large, the average and the standard deviation of the MDI increase. This makes it difficult to estimate the computing time that the MILP needs to solve the problem.
In dynamic environments, the decision makers must establish a planning after each disruption. It should be constructed as soon as possible, preferably before the occurrence of another disruption. In our case, according to the assumed discretization, a new job can arrive at each ∆t; so, if the MDI exceeds ∆t, it is considered an unacceptable time. Based on the assumption that ∆t is equivalent to 10 min (600 s) and according to Table 13, the proposed MILP model is effective in solving problems made up of 7 initial jobs subjected to disruptions occurring at each period with an appearance frequency of up to 0.6, which is equivalent on average to 35 jobs in total. We have also run the problems with 7 initial jobs and p_θ = 0.7 (40 jobs); the program ran for 12 h, after which we interrupted the simulation.

Conclusions and Perspectives
In this paper, a new optimization criterion is proposed to evaluate the performance of the schedules in a single machine rescheduling problem. This new criterion simultaneously combines the total weighted waiting time, as a schedule efficiency measure, and the total weighted completion time deviation, as a schedule stability measure. This association of criteria has never been studied in the literature before, and it can be a very helpful and significant criterion in industrial and hospital environments. A MILP model is proposed and an iterative predictive-reactive strategy is established for dealing with the online part. In dynamic environments, rescheduling is often used to organize the production activities. In such situations, the customer priority has a major importance, since it defines which order to prioritize; therefore, it was consistent to consider this factor in the performance measure. Thus, the weights are considered in both the efficiency and the stability criteria. However, by testing our approach, we have observed that the low-weight operations may be constantly postponed by the occurrence of new high-weight operations. The solution proposed to overcome this drawback consists in varying the weights of the operations as a function of the time spent in the system; the challenge is then to regulate the weight variation so as not to impact the schedule efficiency. The established study demonstrates a large percentage deviation of the objective function from the static-weight case, but the proposed conception helps the low-weight operations to be executed. The model resolution allows us to conclude that:
- The increase of the TWWT at each rescheduling step is due to the low-weight jobs. These jobs move forwards after each disruption, since the new arriving jobs are included in the middle of the schedule.
- Increasing the weights as a function of time helps the earliest low-weight jobs to be executed.
However, the values of the weight index must be increased for high values of α.
- Analyzing the maximum duration of an iteration shows that the computing time depends on both the set of initial jobs and the appearance frequency of the jobs.
This work can be of great interest for decision makers who manage rescheduling systems in hospital or industrial environments. It can help them to provide, at each rescheduling step, an optimal schedule for a single machine problem. The main limitation of this work is the number of jobs. Firstly, we have considered that only one job arrives per period. Secondly, the MILP is limited to solving, on average, 35 jobs over a horizon of 48 ut (equivalent to 8 h). Even before the COVID-19 disaster, the pressure on healthcare services was continuously increasing. In fact, hospital decision makers must quickly establish a planning in response to emergency patient arrivals, even if several arrive at the same time. Thus, the execution time is also an interesting issue. The heuristics wSPT and FIFO have proved their speed in coping with this kind of problem, but their efficiency is weak compared to the MILP resolution. Therefore, one of the main short-term prospects is to design efficient heuristics and metaheuristics to handle even more jobs in a reasonable time, as well as to compare the proposed approximate methods with other existing heuristics. We will also consider multi-machine problems as a middle-term research objective.
Author Contributions: Conceptualization and methodology, A.T., C.S. and N.S. MILP development, computing tests and manuscript writing, A.T. Ideas validation, work supervision, reviewing and editing the manuscript, C.S. and N.S. All authors have read and agreed to the published version of the manuscript.
Funding: This work is financially supported by Urban Community of Sarreguemines-France and Grand Est Region-France.
Institutional Review Board Statement: Not applicable.