Logistics
  • Article
  • Open Access

4 November 2025

Scheduling Jobs on Unreliable Machines Subject to Linear Risk

Dipartimento di Ingegneria dell’Informazione e Scienze Matematiche, Università di Siena, 53100 Siena, Italy
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Multi-Criteria Decision-Making and Its Application in Sustainable Smart Logistics—2nd Edition

Abstract

Background: This paper addresses a new class of scheduling problems in the context of machines subject to (unrecoverable) interruptions; i.e., when a machine fails, the current and subsequently scheduled work on that machine is lost. Each job has a certain processing time and a reward that is attained if the job is successfully completed. Methods: For the failure process, we considered the linear risk model, according to which the probability of machine failure is uniform across a certain time horizon. Results: We analyzed both the situation in which the set of jobs is given, and that in which jobs must be selected from a pool of jobs, at a certain selection cost. Conclusions: We characterized the complexity of various problems, showing both hardness results and polynomial algorithms, and pointed out some open problems.

1. Introduction

This paper addresses a scheduling problem that arises in the context of machines subject to failures. In several models, failures imply the temporary unavailability of the machine, and they occur at unpredictable times, though a probabilistic description of the failure process is usually available []. In many models, after the machine recovers from a failure, the job that was being processed can either continue from where it left off (preempt-resume models) or must be restarted from scratch (preempt-repeat models) [,]. In these cases, the problem is to optimize a classical scheduling-related objective, such as the weighted completion time or some tardiness-related measure. In this paper, we address a different situation, namely, when machines are subject to unrecoverable interruptions []. By “interruption”, we mean that, while processing a job, a processing resource may become unavailable for whatever reason (e.g., a real machine failure or the machine being preempted by some higher-priority user). In any case, when an interruption occurs, the current job and all subsequently scheduled jobs on the machine are lost. In such contexts, we consider that each job has a certain value if successfully carried out, and the problem becomes that of deciding the schedule of the jobs so as to maximize the expected amount of this value. Clearly, such a decision must take into account the specificity of the failure process of the machine(s). The models addressed in this paper have applications in both manufacturing [] and computer systems [].
This paper addresses the following scenario. Consider a job set J = { 1 , , n } ( | J | = n ) and m identical machines. Each job requires a certain processing time p j , j = 1 , , n and, if it is successfully carried out, a reward r j is gained. The machines are subject to unrecoverable interruptions.
In this context, we address two different problem types:
(i)
Given J, the problem is to assign the jobs to the m machines and sequence them on each machine in order to maximize the expected reward. We call this problem the unrecoverable interruption machine-scheduling problem, and denote it by UIMSP(m).
(ii)
Given J, for each job, a selection cost c j is further specified for selecting it. The problem is then to decide which subset S J of jobs should be selected, and how the selected jobs should be assigned and sequenced on the machines in order to maximize the net expected reward, i.e., the expected reward of the selected jobs minus their selection cost. The problem is then called the unrecoverable interruption machine-scheduling problem with job selection, denoted by UIMSPS ( m ) . The cardinality of S may or may not be fixed, giving rise to two variants of the problem. If it is imposed that | S | = k , the problem is denoted by UIMSPSk(m).
Note that c j represents the cost to secure job j, e.g., the cost of raw materials or other fixed costs. It is important to underline that, even if a selected job is not carried out because of the interruption, the cost c j is borne anyhow.
For UIMSP (and UIMSPS) to be fully defined, one must specify the machine failure process. In this paper, we address the case in which the failure process follows a linear risk model, i.e., the probability of interruption is proportional to the amount of time elapsed since the beginning of the schedule (see, e.g., []). We consider that a time horizon of length T is given, i.e., the jobs must be completed within T, as the machine is not available afterwards. Besides representing all situations with a constant failure rate, the linear risk assumption can conveniently represent situations in which access to the processing resource is granted at a budgeted cost, but not for the whole time horizon, as some higher-priority users may claim the resource at any time []. Without any information on when the resource may be withdrawn, we assume that the interruption probability is uniformly distributed across the time horizon, hence yielding the linear risk model.
In this paper, we address various problems:
  • UIMSP(1). In this case, we show that the problem can be solved in polynomial time by a simple priority rule.
  • UIMSPS(1). We show that UIMSPSk(1) is, in general, NP-hard, but some special cases are polynomially solvable. We derive a particularly efficient algorithm for the case in which jobs have unit processing times ( p_j = 1 ).
  • UIMSP(m). We show that the problem is, in general, NP-hard, but it can be solved in polynomial time for the special case in which all jobs have the same reward ( r j = r ) or the same processing times ( p j = p ).
The layout of this paper is as follows. In Section 2, we review the relevant literature on these kinds of problems. In Section 3, the various complexity results are presented. Section 4 reports the results of a set of computational experiments. Finally, in Section 5, some conclusions are drawn.

2. Literature Review

Scheduling models in which machines may fail constitute a meaningful subfield in the area of machine scheduling []. However, in most of these models, it is assumed that the machine remains unavailable for a certain time span (during which it is repaired), and thereafter, processing activity can be resumed []. Unlike these contexts, here we consider machine interruptions that are unrecoverable, i.e., there is no possibility of resuming jobs that have been interrupted or of processing jobs that have not even started. Hence, scheduling decisions must carefully take this issue into account. Unlike usual scheduling settings, the manager’s objectives are not related to classical objective functions, such as tardiness or flow-time-related measures, but rather to maximizing the expected profit.
Enlarging the view beyond the scheduling domain, there are other situations in which unrecoverable events may bring activities to a halt. For example, in project scheduling, uncontrollable events may force the project to close down [], and in diagnostic problems, the tests are stopped when the results of the previous tests show that the overall system is not properly working []. Within the scheduling field, unrecoverable interruptions are considered in cloud computing, where strategies are enacted to hedge against permanent host failures []. Benoit et al. [] analyzed the problem of sharing the workload among various computers subject to unrecoverable interruptions, using strategies such as job replication or checkpointing. A stream of works exists in which failure probabilities are associated with the jobs; i.e., for each job j, a probability π_j is given that the machine will fail while performing that job. This problem is called the unreliable job-scheduling problem (UJSP), and the problem is to sequence the jobs in order to maximize the expected reward. Under a different name, this problem was introduced by Stadje [], who was the first to show that a simple priority rule solves UJSP without further constraints. For m = 2, UJSP becomes NP-hard [], and a list-scheduling algorithm on m machines produces a solution that is 0.8535-approximate []. Notice that, if the machine failure process is exponentially distributed, the probability that a job is successfully carried out does not depend on the amount of time elapsed, but only on the job’s processing time. In this case, if p_j denotes the processing time of job j, then π_j = 1 - e^{-λ p_j}. In other words, when machine failures are exponentially distributed, UJSP and UIMSP coincide. This special case of UJSP is dealt with in []. We are not aware of papers that have addressed UIMSP with linear risk, but in [], the need for investigating this type of failure model is pointed out.
The selection variant of UJSP has received less attention in the literature; however, it has been shown [] that the problem of selecting the jobs in order to maximize the net expected reward can be solved by a greedy algorithm when all the jobs have the same selection cost. Ref. [] further characterized special cases of selection problems (including UJSP), which can be solved by a greedy approach. However, in the previous papers, the complexity of the general UJSP with selection costs was left open. We are not aware of papers that have addressed UIMSP with job selection (i.e., UIMSPS). Our paper, using a linear risk model, is the first contribution in this sense.

3. Methodology

In this section, we address the complexity of various cases of UIMSP and UIMSPS when failures follow a linear risk model. Under this model, a time horizon T is specified, and the failure probability is uniformly distributed across T. Hence, the probability that a machine is still working at time t < T , which can be called q t , is given by
q_t = 1 - \frac{t}{T}. \qquad (1)
When m \ge 2, machines may fail independently, and the time horizon T is the same for each machine.
In the following, if T \ge \sum_i p_i, we say that T is nonbinding; otherwise, we say that T is binding.

3.1. UIMSP(1)

Let us start from the simplest problem, i.e., there is a single machine and there are no selection costs. In what follows, we consider the case when T is nonbinding.
Given a schedule σ , let σ ( i ) denote the job in the i-th position. So, the first scheduled job is σ ( 1 ) , of length p σ ( 1 ) and reward r σ ( 1 ) . Due to (1), the expected reward stemming from the execution of the first scheduled job is given by
r_{\sigma(1)} \left( 1 - \frac{p_{\sigma(1)}}{T} \right).
The reward of σ ( 2 ) is attained if the machine does not fail within time p σ ( 1 ) + p σ ( 2 ) ; hence, its contribution to the expected reward is
r_{\sigma(2)} \left( 1 - \frac{p_{\sigma(1)} + p_{\sigma(2)}}{T} \right),
and so on. Given a schedule σ = { σ ( 1 ) , σ ( 2 ) , , σ ( n ) } , the expected reward E R ( σ ) of the schedule σ is therefore given by
ER(\sigma) = \sum_{i=1}^{n} r_{\sigma(i)} \left( 1 - \frac{\sum_{k=1}^{i} p_{\sigma(k)}}{T} \right). \qquad (2)
Example 1.
Consider the example in Table 1, where T = 10. Figure 1a shows the probability q_t that the machine is still working at time t. Starting from 1, q_t linearly decreases, becoming 0 at time T = 10. Figure 1b illustrates a feasible schedule {2, 3, 1}. Under this schedule, the probability of successfully carrying out job 2 is (1 - p_2/T) = 0.6; hence, the expected reward from scheduling job 2 is r_2(1 - p_2/T) = 80 · 0.6 = 48. The probability of successfully carrying out job 3 is (1 - (p_2 + p_3)/T) = 0.3, and therefore, job 3 contributes r_3(1 - (p_2 + p_3)/T) = 55 · 0.3 = 16.5. A similar computation for job 1 yields r_1(1 - (p_2 + p_3 + p_1)/T) = 50 · 0.1 = 5; hence, the total expected reward of this schedule is z = 69.5.
Table 1. Data for Example 1.
Figure 1. The linear risk function (a) and two feasible schedules (b) and (c) for Example 1.
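Expression (2) can be evaluated directly. The short sketch below reproduces the computation of Example 1; the job data (p = (2, 4, 3), r = (50, 80, 55), T = 10) are reconstructed from the per-job terms worked out above, since Table 1 itself is not reproduced here.

```python
# Expected reward of a single-machine schedule under the linear risk model,
# as in Eq. (2): job sigma(i) succeeds with probability 1 - C_i / T,
# where C_i is its completion time.

def expected_reward(schedule, p, r, T):
    """Sum over positions of r[j] * (1 - elapsed/T) along the schedule."""
    total, elapsed = 0.0, 0.0
    for j in schedule:
        elapsed += p[j]
        total += r[j] * (1.0 - elapsed / T)
    return total

p = {1: 2, 2: 4, 3: 3}     # processing times (reconstructed from Example 1)
r = {1: 50, 2: 80, 3: 55}  # rewards
T = 10

print(expected_reward([2, 3, 1], p, r, T))  # schedule of Figure 1b: 69.5
```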
We next show that UIMSP(1) can be efficiently solved when T is nonbinding. From (2), we have
ER(\sigma) = \sum_{i=1}^{n} r_{\sigma(i)} - \frac{1}{T} \sum_{i=1}^{n} r_{\sigma(i)} \sum_{h=1}^{i} p_{\sigma(h)}. \qquad (3)
Since the first term of the above expression is a constant, the expected reward E R ( σ ) is maximized when
\frac{1}{T} \sum_{i=1}^{n} r_{\sigma(i)} \sum_{h=1}^{i} p_{\sigma(h)} \qquad (4)
is minimized. Now consider the classical scheduling problem 1 || \sum w_j C_j, consisting of minimizing the total weighted completion time of n jobs, each having a processing time p_j and a weight w_j. If we let w_j := r_j / T, observing that p_{\sigma(1)} + \dots + p_{\sigma(i)} is the completion time of the job in the i-th position, (4) coincides with the value of the total weighted completion time of a given schedule. As a consequence, (4) is minimized by sequencing the jobs according to Smith’s rule [], i.e., by nondecreasing order of the ratios
\frac{p_j}{r_j}. \qquad (5)
Theorem 1.
When m = 1, UIMSP under linear risk can be solved in O(n \log n) time.
Note that the ordering rule is fairly intuitive, in that it gives precedence to jobs with a small processing time and a large reward.
Example 2.
By sequencing the jobs of Example 1 by nondecreasing ratios p_j / r_j, one obtains the schedule in Figure 1c, i.e., schedule {1, 2, 3}. This is the optimal schedule, and its value is given by 50(1 - 2/10) + 80(1 - (2+4)/10) + 55(1 - (2+4+3)/10) = 77.5.
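The rule of Theorem 1 amounts to a single sort. A minimal sketch on the Example 1 data (reconstructed from the computations above):

```python
# Smith's rule for UIMSP(1): sort jobs by nondecreasing p_j / r_j, Eq. (5),
# then evaluate the expected reward of Eq. (2).

def expected_reward(schedule, p, r, T):
    total, elapsed = 0.0, 0.0
    for j in schedule:
        elapsed += p[j]
        total += r[j] * (1.0 - elapsed / T)
    return total

p = {1: 2, 2: 4, 3: 3}
r = {1: 50, 2: 80, 3: 55}
T = 10

optimal = sorted(p, key=lambda j: p[j] / r[j])  # ratios: 0.04, 0.05, 0.0545
print(optimal, expected_reward(optimal, p, r, T))  # schedule {1, 2, 3}, 77.5
```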
Finally, we observe that, if T is binding (i.e., T < \sum_j p_j), the nature of the problem changes significantly, since no job can complete after T. Thus, the problem becomes deciding which jobs should be completed before T, and UIMSP(1) becomes equivalent to UIMSPS(1) with no costs. The complexity of UIMSP(1) in this case is still open.

3.2. UIMSPS(1) and UIMSPSk(1)

In this section, we address the problem with selection costs; i.e., each job j also has a cost c j that is paid if the job is selected for processing. Notice that, once a subset S J of jobs is selected to be performed within T, the optimal sequence is obtained by ordering the jobs according to (5). Hence, UIMSPS(1) essentially consists of finding S. In what follows, we denote by z ( S ) the value of the net expected reward when S is selected, i.e., from (2):
z(S) = ER(S) - \sum_{j \in S} c_j \qquad (6)
We next propose a dynamic programming algorithm for solving UIMSPS(1). In what follows, we assume that the jobs are numbered by nondecreasing values of the ratio (5). For a given positive integer B \le T, let P(j, B) denote the subproblem restricted to jobs \{1, 2, \dots, j\}, under the condition that the last selected job completes exactly at time B. We denote by F(j, B) the value of the optimal solution of P(j, B). It is possible to express F(j, B) by means of the following recursive formula:
F(j, B) = \max \left\{ F(j-1, B);\; F(j-1, B - p_j) + r_j \left( 1 - \frac{B}{T} \right) - c_j \right\} \qquad (7)
The first term corresponds to the case in which j is not selected in the optimal solution of P ( j , B ) . The second term corresponds to selecting j, in which case job j is appended at the end of the optimal solution of P ( j , B p j ) , and the contribution of job j is accounted for. The value of the optimal solution of UIMSPS(1) is given by
z^* = \max_{0 \le B \le T} \{ F(n, B) \}. \qquad (8)
The correctness of the algorithm is derived from the following considerations. Suppose that, for some (j, B), in the optimal solution of P(j, B), there is some idle time of length Ĩ between two consecutive jobs (which is possible, to meet the constraint that the last job precisely completes at B). By eliminating such an idle time, we obtain a schedule that is feasible for the subproblem P(j, B - Ĩ) and has a strictly larger net expected reward. Hence, as we consider all values of B in (8), the makespan B^* \le T of the optimal solution to UIMSPS(1) is also included.
Formula (7) must be suitably initialized, as follows:
F(j, 0) = 0 \quad \text{for all } j = 1, \dots, n,
F(0, B) = 0 \quad \text{for all } B = 0, \dots, T,
F(j, B) = -\infty \quad \text{for all } B < 0,
F(1, B) = \begin{cases} \max\{0,\; r_1 (1 - p_1/T) - c_1\} & \text{if } p_1 \le B \\ 0 & \text{if } p_1 > B \end{cases}
In conclusion, the following result holds.
Theorem 2.
UIMSPS(1) can be solved in O ( n T ) time.
Proof. 
Formula (7) has to be computed for each value of j and B. As each F(j, B) is simply obtained by comparing two terms, it can be computed in constant time, and the thesis follows.  □
In Section 4, we report the results of some computational experiments about the viability of the above dynamic programming approach. Notice that, although the complexity of UIMSPS(1) is open, the result of Theorem 2 rules out the strong NP-hardness of UIMSPS(1).
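A minimal sketch of the dynamic program (7)-(8), assuming integer processing times (as required by the states B = 0, …, T) and a strict "last job completes exactly at B" initialization; the three-job instance below is hypothetical, not taken from the paper's experiments.

```python
# Dynamic program (7)-(8) for UIMSPS(1).
# F[j][B] = best net expected reward using jobs 1..j, with the last
# selected job completing exactly at time B (F[j][0] = 0: empty selection).

def uimsps1_dp(jobs, T):
    """jobs: list of (p, r, c), pre-sorted by nondecreasing p/r; T: integer horizon."""
    n = len(jobs)
    NEG = float("-inf")
    F = [[NEG] * (T + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        F[j][0] = 0.0                      # selecting no job is always feasible
    for j in range(1, n + 1):
        p, r, c = jobs[j - 1]
        for B in range(1, T + 1):
            F[j][B] = F[j - 1][B]          # case 1: job j not selected
            if p <= B and F[j - 1][B - p] > NEG:
                F[j][B] = max(F[j][B],     # case 2: job j appended, ending at B
                              F[j - 1][B - p] + r * (1 - B / T) - c)
    return max(F[n])                       # Eq. (8): optimize over the makespan B

# hypothetical instance, jobs sorted by p/r as required
jobs = sorted([(2, 50, 10), (4, 80, 20), (3, 55, 30)], key=lambda x: x[0] / x[1])
print(uimsps1_dp(jobs, 10))  # best selection here is the first two jobs
```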
Concerning UIMSPS(1), as its complexity is open, a relevant issue is to figure out whether it could be solved by a greedy approach. A greedy algorithm in this context simply consists of iteratively adding to the set of selected jobs a new job that maximizes the improvement of the objective function (see Algorithm 1). The algorithm stops when no other jobs can be added to the current set, either because the time horizon would be violated or because any additions would only decrease the value of the objective function. In this algorithm, P ( S ) denotes the total processing time of a set S of jobs.
Algorithm 1: Greedy algorithm for UIMSPS(1)
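In the published version, Algorithm 1 is typeset as an image. The sketch below reconstructs it from the textual description above (iteratively add the job that most improves z(S), stopping when no addition fits within T or improves the objective); the instance data are hypothetical, not those of Table 2.

```python
# A sketch of the greedy Algorithm 1 for UIMSPS(1), reconstructed from the
# description in the text; the data below are hypothetical.

def net_expected_reward(S, p, r, c, T):
    """z(S): jobs of S sequenced by nondecreasing p_j / r_j, Eq. (5)."""
    total, elapsed = 0.0, 0.0
    for j in sorted(S, key=lambda j: p[j] / r[j]):
        elapsed += p[j]
        total += r[j] * (1.0 - elapsed / T) - c[j]
    return total

def greedy_uimsps1(p, r, c, T):
    S, z = set(), 0.0
    while True:
        best_j, best_z = None, z
        for j in p:
            # skip jobs already selected or exceeding the horizon T
            if j in S or sum(p[i] for i in S) + p[j] > T:
                continue
            zj = net_expected_reward(S | {j}, p, r, c, T)
            if zj > best_z:                  # keep only strict improvements
                best_j, best_z = j, zj
        if best_j is None:
            return S, z
        S, z = S | {best_j}, best_z

p = {1: 2, 2: 4, 3: 3}
r = {1: 50, 2: 80, 3: 55}
c = {1: 10, 2: 20, 3: 30}
T = 10
print(greedy_uimsps1(p, r, c, T))
```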
The following example shows that the greedy algorithm may not return the optimal solution.
Example 3.
Consider an instance of UIMSPS(1) with the data in Table 2, and time horizon T = 83. Jobs are numbered according to (5). The greedy algorithm starts by considering singletons, as follows: z({1}) = r_1(1 - p_1/T) - c_1 = 5366.26, z({2}) = r_2(1 - p_2/T) - c_2 = 5483.97, and z({3}) = r_3(1 - p_3/T) - c_3 = 5101.2. Among these options, the best is z({2}), so job 2 is added to S. Next, we observe that all sets with two jobs have a total length smaller than T. Hence, the next step of the greedy algorithm compares sets {1, 2} and {2, 3}, yielding z({1, 2}) = r_1(1 - p_1/T) + r_2(1 - (p_1 + p_2)/T) - c_1 - c_2 = 5493.61 and z({2, 3}) = r_2(1 - p_2/T) + r_3(1 - (p_2 + p_3)/T) - c_2 - c_3 = 5404.45. As z({1, 2}) > z({2, 3}), job 1 is added to S. Since p_1 + p_2 + p_3 > T, the set {1, 2, 3} cannot be considered, and in conclusion, the greedy algorithm returns set S = {1, 2}. However, it is easy to check that the optimal solution is S^* = {1, 3}, as z({1, 3}) = r_1(1 - p_1/T) + r_3(1 - (p_1 + p_3)/T) - c_1 - c_3 = 5768.67 > z({1, 2}).
Table 2. Data for Example 3.
We next address UIMSPSk, i.e., the case in which | S | = k . To establish the complexity of the problem, we first express UIMSPSk(1) in decision form:
UIMSPSk(1)_dec. A set J of n jobs is given. Each job has a processing time p i , a reward r i , and a cost c i , i = 1 , , n . Given an integer T, and the linear risk function
1 - \frac{t}{T},
is there a subset S of k jobs such that, when selected and scheduled on the machine, the net expected reward is at least R?
In the proof, we use the following well-known NP-complete problem:
Equal-size partition. Given a set of positive integers N = \{a_1, a_2, \dots, a_n\} (n even) such that \sum_{i=1}^{n} a_i = A, is there an equal-size partition, i.e., a partition (S, \bar{S}) of N such that |S| = |\bar{S}| = n/2 and
\sum_{i \in S} a_i = \sum_{i \in \bar{S}} a_i = \frac{A}{2}?
Theorem 3.
UIMSPSk(1)_dec is NP-complete.
Proof. 
Clearly, UIMSPSk(1)_dec is in NP. Consider an instance of equal-size partition with n integers, in which we denote by a_{min} the smallest integer in N. We define an instance of UIMSPSk(1)_dec as follows. There are n′ := n jobs, corresponding to the integers of the equal-size partition, and we let k := n/2. The processing times and the rewards of the jobs are defined as
r_i := p_i := A^2 + a_i, \quad i = 1, \dots, n. \qquad (9)
Note that, since p i / r i = 1 for all jobs, the order in which the selected jobs are sequenced is immaterial. The time horizon T is defined as
T := \frac{\sum_{i=1}^{n} p_i}{2} = \frac{n A^2 + A}{2}. \qquad (10)
For all i = 1 , , n , let
c_i := K - \frac{p_i^2}{2T}, \qquad (11)
where K is a suitable integer defined later on. We want to establish if there is a subset S of n / 2 jobs yielding a net expected profit of at least
R := \frac{T}{2} - \frac{n}{2} K. \qquad (12)
Consider a subset S of jobs. For the sake of simplicity, let us number the selected jobs with 1 , 2 , , n / 2 . The corresponding value z of the net expected reward is, from (2),
z(S) = \sum_{j=1}^{n/2} p_j \left( 1 - \frac{\sum_{i=1}^{j} p_i}{T} \right) - \sum_{j=1}^{n/2} c_j = \sum_{j=1}^{n/2} (p_j - c_j) - \frac{1}{T} \sum_{j=1}^{n/2} \sum_{i=1}^{j} p_i p_j \qquad (13)
= \sum_{j=1}^{n/2} (p_j - c_j) - \frac{1}{T} \left( \sum_{j=1}^{n/2} p_j^2 + \sum_{i<j} p_i p_j \right). \qquad (14)
Since, given n / 2 integers { p 1 , , p n / 2 } ,
\sum_{i<j} p_i p_j = \frac{\left( \sum_{i=1}^{n/2} p_i \right)^2 - \sum_{i=1}^{n/2} p_i^2}{2},
we can rewrite (14) as
z(S) = \sum_{j=1}^{n/2} (p_j - c_j) - \frac{1}{T} \left( \sum_{j=1}^{n/2} p_j^2 + \frac{1}{2} \left( \left( \sum_{j=1}^{n/2} p_j \right)^2 - \sum_{j=1}^{n/2} p_j^2 \right) \right)
= \sum_{j=1}^{n/2} (p_j - c_j) - \frac{1}{2T} \left( \left( \sum_{j=1}^{n/2} p_j \right)^2 + \sum_{j=1}^{n/2} p_j^2 \right)
and, from (11),
= \sum_{j=1}^{n/2} \left( p_j - K + \frac{p_j^2}{2T} \right) - \frac{1}{2T} \left( \left( \sum_{j=1}^{n/2} p_j \right)^2 + \sum_{j=1}^{n/2} p_j^2 \right)
= \sum_{j=1}^{n/2} p_j - \frac{n}{2} K - \frac{1}{2T} \left( \sum_{j=1}^{n/2} p_j \right)^2. \qquad (18)
Now observe that the term K n / 2 is a constant, as it does not depend on which jobs are selected. Hence, the problem is to select n / 2 jobs so that the target value R specified in (12) is met. Considering the function
f(x) = x \left( 1 - \frac{x}{2T} \right),
we can rewrite the net expected reward (18) as
z(S) = f\left( \sum_{j=1}^{n/2} p_j \right) - \frac{n}{2} K.
We observe that f ( x ) is maximized by x = T , and the maximum value is f ( T ) = T / 2 . This means that, if there is a set S of n / 2 jobs such that
\sum_{j \in S} p_j = T,
then (18) is the maximum, attaining the value
\frac{T}{2} - \frac{n}{2} K = R.
Therefore, we can conclude that a selection S of n / 2 jobs achieving the net expected profit of R exists if, and only if, there are n / 2 jobs such that
\sum_{j \in S} p_j = \frac{n A^2}{2} + \sum_{j \in S} a_j = T = \frac{n A^2 + A}{2},
i.e., if, and only if,
\sum_{j \in S} a_j = \frac{A}{2},
which holds if, and only if, the instance of equal-size partition is a yes-instance.
We are only left with showing that a suitable value of K exists. Since the costs c j must be positive, from (11), it must hold that, for all j = 1 , , n ,
K > \frac{p_j^2}{2T}. \qquad (20)
On the other hand, it must also hold that c i < p i for all i; hence,
c_j = K - \frac{p_j^2}{2T} < p_j. \qquad (21)
Consider the following:
K := \frac{(A^2 + A)^2}{2T}.
Since p_j^2 = (A^2 + a_j)^2 < (A^2 + A)^2, (20) holds for all j = 1, \dots, n. Moreover, recalling the definition (10) of T, we can write
K = \frac{A^4 + A^2 + 2A^3}{2T} = \frac{A^4}{2T} + \frac{A^2 + 2A^3}{n A^2 + A} < \frac{A^4}{2T} + \frac{1}{n} + \frac{2A^2}{nA + 1} < \frac{A^4}{2T} + \frac{1}{n} + \frac{2A}{n};
since, obviously, 1/n + 2A/n < A^2, and, from (9), A^2 + a_{min} \le p_j for all j, one has
K < \frac{A^4}{2T} + A^2 < \frac{(A^2 + a_{min})^2}{2T} + (A^2 + a_{min}) \le \frac{p_j^2}{2T} + p_j,
so that (21) is also fulfilled for all j = 1, \dots, n. Finally, as K < 4A^4, we observe that the number of bits necessary to encode K does not exceed \log(4A^4) = 2 + 4 \log A, hence polynomial in the input size of the equal-size partition.    □
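The reduction of Theorem 3 can be checked numerically. The sketch below builds the UIMSPSk(1) instance from a small, hypothetical yes-instance of equal-size partition (a = (1, 2, 3, 4)) and verifies that the subset corresponding to a valid partition attains exactly the target value R.

```python
# Numerical check of the reduction in Theorem 3.
a = [1, 2, 3, 4]                       # hypothetical equal-size partition data
n, A = len(a), sum(a)

p = [A**2 + ai for ai in a]            # p_i = r_i = A^2 + a_i, Eq. (9)
T = sum(p) / 2                         # T = (n A^2 + A)/2,     Eq. (10)
K = (A**2 + A) ** 2 / (2 * T)          # the constant K chosen in the proof
c = [K - pi**2 / (2 * T) for pi in p]  # selection costs,       Eq. (11)
R = T / 2 - (n // 2) * K               # target value,          Eq. (12)

S = [0, 3]                             # a_1 + a_4 = 5 = A/2, |S| = n/2
elapsed, z = 0.0, 0.0
for i in S:                            # order immaterial, since p_i / r_i = 1
    elapsed += p[i]
    z += p[i] * (1 - elapsed / T) - c[i]

assert all(0 < ci < pi for ci, pi in zip(c, p))  # costs are admissible
print(z, R)                            # the two values coincide
```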

3.2.1. UIMSPSk(1) with Identical Rewards

We next consider the special case of UIMSPSk in which all jobs have the same reward, i.e.,  r j = r for all j = 1 , , n . In this case, the selected jobs are simply sequenced by nondecreasing processing times (SPT rule). We show that this problem can be solved in polynomial time.
Given a set S, let us denote the k selected jobs as σ ( 1 ) , σ ( 2 ) , , σ ( k ) , and suppose they are numbered by nondecreasing processing times. Then, the value of the objective function is
z(S) = kr - \frac{r}{T} \left( k\, p_{\sigma(1)} + (k-1)\, p_{\sigma(2)} + \dots + (k-h+1)\, p_{\sigma(h)} + \dots + p_{\sigma(k)} \right) - \sum_{i=1}^{k} c_{\sigma(i)}
= rk - \frac{r}{T} \sum_{i=1}^{k} (k-i+1)\, p_{\sigma(i)} - \sum_{i=1}^{k} c_{\sigma(i)}.
As k is fixed, the objective of maximizing z ( S ) is therefore equivalent to that of minimizing the following function z ¯ ( S ) :
\bar{z}(S) = \frac{r}{T} \sum_{i=1}^{k} (k-i+1)\, p_{\sigma(i)} + \sum_{i=1}^{k} c_{\sigma(i)}.
From this expression, if a job j is selected and assigned to position h, its contribution to z ¯ ( S ) is given by
\frac{r}{T} (k-h+1)\, p_j + c_j.
Suppose now that the jobs are numbered by nondecreasing processing times. UIMSPSk is equivalent to the following assignment problem, in which x j h = 1 if job j is assigned to position h:
\min \sum_{j=1}^{n} \sum_{h=1}^{k} q_{jh} x_{jh} \qquad (24a)
\sum_{j=1}^{n} x_{jh} = 1, \quad h = 1, \dots, k \qquad (24b)
\sum_{h=1}^{k} x_{jh} \le 1, \quad j = 1, \dots, n \qquad (24c)
x_{jh} \ge 0, \quad j = 1, \dots, n, \; h = 1, \dots, k \qquad (24d)
where
q_{jh} = \begin{cases} \frac{r}{T} (k-h+1)\, p_j + c_j & \text{if } h \le j \\ +\infty & \text{if } h > j \end{cases} \qquad (25)
The correctness of Formulation (24) is derived from the fact that exactly k jobs are selected, as implied by the k assignment constraints (24b). Given an optimal solution { x * } of (24a)–(24d), the optimal set S k * of selected jobs is given by
S_k^* = \{ j \mid x_{jh}^* = 1 \text{ for some } h \}.
Notice that the second case in (25) derives from the observation that we can assume that a job j never occupies a position h larger than j. Suppose, in fact, that j \in S_k^* is assigned position h > j. Then some other job u \in S_k^* such that p_u \ge p_j is assigned a position \ell < h. By switching the positions of jobs j and u, it is easy to check that \bar{z}(S) decreases by (r/T)(h - \ell)(p_u - p_j), which is always nonnegative. We can therefore conclude the following:
Theorem 4.
UIMSPSk(1), when r j = r , can be solved in O ( n 3 ) .
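The equivalence between UIMSPSk(1) with identical rewards and the assignment problem (24)-(25) can be verified by brute force on a small, hypothetical instance (the O(n³) bound of Theorem 4 would come from solving (24) with a standard assignment algorithm, which is not reproduced here).

```python
# Brute-force check: the minimum cost of the assignment problem (24)-(25)
# equals zbar(S) of the best k-subset sequenced by SPT, expression (23).
from itertools import combinations, permutations

def zbar(S_sorted, p, c, r, T, k):
    """Expression (23): the job in (0-based) position i gets weight k - i."""
    return sum((r / T) * (k - i) * p[j] for i, j in enumerate(S_sorted)) \
        + sum(c[j] for j in S_sorted)

p = [1, 2, 3, 5]            # jobs numbered by nondecreasing processing time
c = [4.0, 1.0, 2.0, 0.5]    # hypothetical selection costs
r, T, k, n = 10.0, 20.0, 2, 4

# direct enumeration of all k-subsets (subsets sequenced by SPT)
best_direct = min(zbar(sorted(S), p, c, r, T, k)
                  for S in combinations(range(n), k))

# assignment costs q[j][h] from (25); h <= j rules out "late" positions
def q(j, h):
    return (r / T) * (k - h) * p[j] + c[j] if h <= j else float("inf")

best_assign = min(sum(q(jobs[h], h) for h in range(k))
                  for jobs in permutations(range(n), k))

print(best_direct, best_assign)  # equal by the equivalence above
```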
If we solve UIMSPSk(1) for all values k = 1 , , n , we obtain n optimal subsets, S 1 * , S 2 * , , S n * . Clearly, the optimal solution to UIMSPS(1) (without a cardinality constraint on S) is
S^* = \arg\max_{1 \le k \le n} \{ z(S_k^*) \},
so we obtain the following result.
Theorem 5.
UIMSPS(1), when r j = r , can be solved in O ( n 4 ) .

3.2.2. UIMSPSk(1) with Unit-Processing Times

Let us now consider another special case, namely when p_j = 1 for all j, and we must select k jobs. Since, in this case, the jobs have identical processing times, from (5), the selected jobs will be sequenced by nonincreasing rewards. Moreover, we obviously assume that k \le T, as otherwise, the problem would be infeasible.
As before, consider a set S of k selected jobs, denoted as σ ( 1 ) , σ ( 2 ) , , σ ( k ) , and suppose they are numbered by nonincreasing rewards. The value of the objective function is
z(S) = r_{\sigma(1)} \left( 1 - \frac{1}{T} \right) + r_{\sigma(2)} \left( 1 - \frac{2}{T} \right) + \dots + r_{\sigma(k)} \left( 1 - \frac{k}{T} \right) - \sum_{i=1}^{k} c_{\sigma(i)}. \qquad (26)
Hence, if job i is selected and assigned to position j, its contribution to the objective function is
r_i \left( 1 - \frac{j}{T} \right) - c_i.
Therefore, this special case of UIMSPSk can also be solved through an assignment problem. However, in this case, it is easy to see that the problem can be solved more efficiently, taking advantage of the following property.
Theorem 6.
Given an instance of UIMSPS in which all jobs have the same processing time p_j = 1, let S_{k-1} be the optimal set of UIMSPSk−1. There exists an optimal set S_k for UIMSPSk that includes S_{k-1} (k = 2, \dots, n).
Proof. 
The proof is by induction on k. Consider k = 1 , i.e., set S 1 . In this case, the selected job h is such that
r_h \left( 1 - \frac{1}{T} \right) - c_h = \max_{i \in N} \left\{ r_i \left( 1 - \frac{1}{T} \right) - c_i \right\}. \qquad (27)
Now suppose that the set S 2 does not include h, but it includes jobs i and j, so that
z(S_2) = r_i \left( 1 - \frac{1}{T} \right) + r_j \left( 1 - \frac{2}{T} \right) - c_i - c_j.
If, in S_2, we replace i with h, we obtain a set S_2' such that
z(S_2') = r_h \left( 1 - \frac{1}{T} \right) + r_j \left( 1 - \frac{2}{T} \right) - c_h - c_j.
From (27), z(S_2') \ge z(S_2); hence, S_2' is optimal and the thesis holds.
Now let us consider a generic value of k, and from the inductive hypothesis, the thesis holds for all of the following problems: UIMSPS1, UIMSPS2, …, up to UIMSPSk−1.
Let \sigma_k denote the optimal schedule for S_k and, in this schedule, let q be the rightmost job such that q \in S_k and q \notin S_{k-1}; additionally, consider the set S_k \setminus \{q\}. If S_k \setminus \{q\} = S_{k-1}, we are finished. Otherwise, let u be the position occupied by q in \sigma_k. Since q is the rightmost “new” job, only jobs of S_{k-1} appear to the right of q (possibly none). Now, we can view \sigma_k as obtained by inserting q in position u in the schedule, which can be called \sigma'_{k-1}, for the set S_k \setminus \{q\}. Let R denote the total reward of the k - u jobs occupying positions u, u+1, \dots, k-1 in \sigma'_{k-1}. When we insert q in position u in \sigma'_{k-1}, each of these jobs moves rightwards by one position, so their total contribution to the expected reward decreases by R/T. So, when inserting q in \sigma'_{k-1}, the expected marginal reward is
z(S_k) - z(S_k \setminus \{q\}) = r_q \left( 1 - \frac{u}{T} \right) - \frac{R}{T} - c_q. \qquad (28)
Now consider the optimal schedule \sigma_{k-1} for S_{k-1}, and a new schedule \tilde{\sigma} obtained from \sigma_{k-1} by adding q in position u. Since we are inserting q in position u, its contribution to the increase in the expected reward is r_q(1 - u/T), as in (28). Now, letting R' denote the total reward of the jobs occupying the last k - u positions in \sigma_{k-1}, as each of these jobs moves rightwards by one position when going from \sigma_{k-1} to \tilde{\sigma}, their contribution to the expected reward decreases by R'/T. The key observation is that R' is certainly not greater than the total reward R of the jobs belonging to S_{k-1} that occupy the last k - u positions in \sigma'_{k-1}. This is because the jobs occupying the last k - u positions in \sigma_{k-1} are precisely the k - u jobs with the smallest rewards in S_{k-1}. As a consequence, when inserting q in \sigma_{k-1}, the expected marginal reward is
z(S_{k-1} \cup \{q\}) - z(S_{k-1}) = r_q \left( 1 - \frac{u}{T} \right) - \frac{R'}{T} - c_q. \qquad (29)
Since R' \le R, the marginal expected reward attained from adding q to S_{k-1} (given by (29)) is not smaller than that attained from adding q to S_k \setminus \{q\} (given by (28)). But since, from the inductive hypothesis, S_{k-1} is optimal for UIMSPSk−1, z(S_{k-1}) \ge z(S_k \setminus \{q\}), and one has from (28) and (29) that
z(S_{k-1} \cup \{q\}) \ge z(S_k).
Hence, adding q in position u to \sigma_{k-1} yields a schedule \tilde{\sigma} that is not worse than \sigma_k, so \tilde{\sigma} is optimal and the thesis holds.  □
Theorem 6 allows a very simple greedy algorithm to be devised for UIMSPSk when p j = 1 (Algorithm 2). Starting from an empty set S, at each step, the job that increases the net expected reward by the most is added to S, until the size k is reached.
Algorithm 2: Greedy algorithm for UIMSPSk when p j = 1
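In the published version, Algorithm 2 is typeset as an image. The sketch below reconstructs the greedy procedure justified by Theorem 6 on hypothetical data; for simplicity, it re-evaluates z(S ∪ {j}) from scratch rather than using the O(nk log n) bookkeeping discussed next, and checks the result against brute-force enumeration, as the theorem guarantees.

```python
# A sketch of the greedy Algorithm 2 for UIMSPSk(1) with p_j = 1,
# reconstructed from Theorem 6; the data below are hypothetical.
from itertools import combinations

def z(S, r, c, T):
    """Net expected reward: unit jobs sequenced by nonincreasing reward."""
    rewards = sorted((r[j] for j in S), reverse=True)
    return sum(rj * (1 - (i + 1) / T) for i, rj in enumerate(rewards)) \
        - sum(c[j] for j in S)

def greedy_unit(r, c, T, k):
    """Grow S one job at a time, adding the job with the best marginal reward."""
    S = set()
    for _ in range(k):
        S |= {max((j for j in r if j not in S),
                  key=lambda j: z(S | {j}, r, c, T))}
    return S

r = {1: 9.0, 2: 7.0, 3: 6.0, 4: 3.0}
c = {1: 1.0, 2: 2.0, 3: 0.5, 4: 0.1}
T, k = 10, 2

S = greedy_unit(r, c, T, k)
best = max(combinations(r, k), key=lambda C: z(C, r, c, T))
print(S, z(S, r, c, T), z(best, r, c, T))  # greedy matches the optimum
```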
Let us now consider the complexity. Suppose that we maintain a vector R(u) containing the total reward of the jobs occupying positions from u to the end of the current schedule. At each step, we must select the job q that maximizes the marginal reward. For each job j \notin S, its optimal position can be found in O(\log n), and the marginal reward can then be computed in constant time through (29). Once the job q is found and its position p(q) in the schedule is determined, we must update the vector R(u) for all positions u preceding p(q), which takes O(n). As there are k steps, the following result holds.
Theorem 7.
When p_j = 1 for all j, UIMSPSk can be solved in O(nk \log n) time.
By solving for all values of k from 1 to n, the optimal value for UIMSPS is given by max k { z ( S k ) } , so we obtain the following result:
Theorem 8.
When p_j = 1 for all j, UIMSPS can be solved in O(n^2 \log n) time.

3.2.3. UIMSPSk(1) with Identical Selection Costs

Finally, let us consider the special case of UIMSPSk(1) in which all jobs have the same cost, i.e., c_j = c for all j. In this case, the total cost is kc, independently of S; hence, the problem is to select exactly k jobs so that the expected reward is maximized. In view of (4), and identifying the rewards r_j with job weights, the problem is equivalent to selecting k jobs so that their total weighted completion time is minimized. In turn, this can be shown to be equivalent to the special case of the classical job-rejection problem in which, given n jobs, n - k jobs can be rejected so that the total weighted completion time of the accepted jobs is minimized. The complexity of this problem is currently open [,].

3.3. UIMSP(m)

Let us now consider UIMSP with two parallel machines (UIMSP(2)). The time horizon T is supposed to be the same for both machines and nonbinding. Consider the following NP-complete problem in decision form:
P2 || \sum w_i C_i. A set J of n jobs is given, together with two identical machines and a value F. Each job has a processing time p_i and a weight w_i. Is there a schedule of the jobs on the two machines such that the total weighted completion time does not exceed F?
We next prove the NP-completeness of UIMSP(2) in decision form, as follows.
UIMSP(2)_dec. A set J of n jobs is given, with two machines, each of which has the linear risk function
1 − t/T.
Each job has a processing time p_i and a reward r_i, i = 1, …, n, and a threshold value R is given. Is there a schedule of the jobs on the two machines such that the expected reward is at least R?
Theorem 9.
UIMSP(2)_dec is NP-complete.
Proof. 
Obviously, UIMSP(2)_dec is in NP. Consider an instance of P2 || Σ w_j C_j with n jobs, in which job i has processing time p_i and weight w_i, for i = 1, …, n, and a threshold value F. We define an instance of UIMSP(2)_dec as follows. There are n jobs, with job i having processing time p_i and reward r_i := w_i. The time horizon is defined as T := Σ_i p_i, which is nonbinding. Then, we let R := Σ_i r_i − F/T. Consider now a schedule σ for UIMSP(2)_dec, and let S_1(σ) and S_2(σ) denote the jobs assigned to M_1 and M_2, respectively, in σ. Denoting with C_j(σ) the sum of the processing times of the jobs preceding j (including j itself) on the respective machine, the expected reward of σ is given by
Σ_{j=1}^{n} r_j − (1/T) [ Σ_{j∈S_1(σ)} r_j C_j(σ) + Σ_{j∈S_2(σ)} r_j C_j(σ) ] = Σ_{j=1}^{n} r_j − (1/T) Σ_{j=1}^{n} r_j C_j(σ).
Hence, there exists a schedule σ with an expected reward of at least R if, and only if,
Σ_{j=1}^{n} r_j − (1/T) Σ_{j=1}^{n} r_j C_j(σ) ≥ R = Σ_{j=1}^{n} r_j − F/T.    (31)
Since r_j = w_j and the processing times in the two problems are identical, (31) holds if, and only if,
Σ_{j=1}^{n} w_j C_j(σ) ≤ F.  □
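The identity at the heart of the reduction — under linear risk, the expected reward of a two-machine schedule σ equals Σ_j r_j − (1/T) Σ_j r_j C_j(σ) — can be verified numerically. A minimal sketch (function names are ours):

```python
def expected_reward(machine_seqs, p, r, T):
    # Direct evaluation: a job completing at time C_j on its machine is
    # collected with probability 1 - C_j/T (linear risk, T nonbinding).
    total = 0.0
    for seq in machine_seqs:
        t = 0.0
        for j in seq:
            t += p[j]
            total += r[j] * (1 - t / T)
    return total

def closed_form(machine_seqs, p, r, T):
    # Identity used in the reduction: sum_j r_j - (1/T) sum_j r_j C_j.
    weighted_completion = 0.0
    for seq in machine_seqs:
        t = 0.0
        for j in seq:
            t += p[j]
            weighted_completion += r[j] * t
    return sum(r) - weighted_completion / T
```

For example, with p = (3, 1, 2, 4), r = (5, 2, 7, 1), T = 10, and jobs 0, 2 on M_1 and 1, 3 on M_2, both expressions evaluate to the same value.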
Consider now the special case in which the jobs have the same reward, r i = r for all i. In this case, UIMSP(m) can be efficiently solved.
Theorem 10.
If r j = r for all j, UIMSP(m) can be solved in O ( n log n ) .
Proof. 
Given a feasible schedule for UIMSP(m), let S_1, S_2, …, S_m denote the subsets of jobs assigned to machines M_1, M_2, …, M_m, respectively (S_1 ∪ S_2 ∪ … ∪ S_m = J), and let σ_ℓ(i) denote the job in position i on machine M_ℓ. From (3), we can rewrite the expected reward as
n r − (r/T) Σ_{ℓ=1}^{m} Σ_{i=1}^{|S_ℓ|} Σ_{s=1}^{i} p_{σ_ℓ(s)}.    (32)
Now observe that maximizing (32) is equivalent to minimizing its rightmost term, i.e., the total completion time of all the jobs. Hence, the problem is equivalent to an instance of P || Σ_j C_j, which is solved by ordering the jobs in SPT order and assigning them, in this order, to the machines in a round-robin fashion. The complexity is therefore given by the sorting step, O(n log n).  □
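The SPT round-robin rule above can be checked against exhaustive search over all machine assignments on small instances (within each machine, SPT order is optimal for that machine's total completion time). A sketch, with hypothetical helper names:

```python
from itertools import product

def spt_round_robin(p, m):
    # Sort jobs by nondecreasing processing time and deal them to the
    # machines in round-robin fashion.
    order = sorted(range(len(p)), key=lambda j: p[j])
    seqs = [[] for _ in range(m)]
    for i, j in enumerate(order):
        seqs[i % m].append(j)
    return seqs

def total_completion_time(seqs, p):
    tot = 0
    for seq in seqs:
        t = 0
        for j in seq:
            t += p[j]
            tot += t
    return tot

def brute_force_min(p, m):
    # Try every assignment of jobs to machines (exponential; for checking
    # only); within a machine, jobs are sequenced in SPT order.
    n, best = len(p), float("inf")
    for assign in product(range(m), repeat=n):
        seqs = [sorted((j for j in range(n) if assign[j] == k),
                       key=lambda j: p[j]) for k in range(m)]
        best = min(best, total_completion_time(seqs, p))
    return best
```

On the instance p = (4, 2, 7, 1, 3, 5) with m = 2, the round-robin schedule attains the brute-force optimum, a total completion time of 35.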

3.4. UIMSPS(2) and UIMSPSk(2) with Identical Rewards or Identical Processing Times

In the general case, as UIMSP(2) is NP-hard, so too is UIMSPS(2). We next focus on the special cases of UIMSPSk(2) in which either r_j = r for all j = 1, …, n, or p_j = p for all j = 1, …, n, when T is nonbinding. Both of these cases can be reduced to the assignment problem.
If r_j = r for all j = 1, …, n, recalling that we want to minimize z̄(S_k) (given in (22)), from (23), one has that, if a job j is assigned position h on machine M_ℓ and a total of k_ℓ jobs are assigned to M_ℓ, the contribution of job j to the objective function z̄(S_k) is
(r/T)(k_ℓ − h + 1) p_j + c_j,
while, if p_j = p for all j = 1, …, n, recalling that we want to maximize z(S_k), if j is assigned position h on machine M_ℓ, it completes at time hp, so its contribution to the objective function z(S_k) is given by
r_j (1 − hp/T) − c_j.
Now, consider UIMSPSk1,k2(2), in which k_1 ≤ k jobs must be assigned to M_1 and k_2 = k − k_1 jobs to M_2. Letting x_jh^(ℓ) = 1 if, and only if, job j is assigned position h on M_ℓ, the problem is solved through the following assignment problem:
min Σ_{j=1}^{n} ( Σ_{h=1}^{k_1} q_jh^(1) x_jh^(1) + Σ_{h=1}^{k_2} q_jh^(2) x_jh^(2) )
Σ_{j=1}^{n} x_jh^(1) = 1,  h = 1, …, k_1
Σ_{j=1}^{n} x_jh^(2) = 1,  h = 1, …, k_2
Σ_{h=1}^{k_1} x_jh^(1) + Σ_{h=1}^{k_2} x_jh^(2) ≤ 1,  j = 1, …, n
x_jh^(1) ≥ 0,  j = 1, …, n,  h = 1, …, k_1
x_jh^(2) ≥ 0,  j = 1, …, n,  h = 1, …, k_2
where, if r_j = r for all j and assuming that the jobs are numbered by nondecreasing processing times,
q_jh^(ℓ) = (r/T)(k_ℓ − h + 1) p_j + c_j  if h ≤ j,  and  q_jh^(ℓ) = +∞  if h > j.
If p_j = p for all j, assuming that the jobs are numbered by nonincreasing rewards, and recalling that we want to maximize z(S_k),
q_jh^(1) = q_jh^(2) = c_j − r_j (1 − hp/T)  if h ≤ j,  and  q_jh^(1) = q_jh^(2) = +∞  if h > j.
Note that, in both cases, j cannot be assigned a position h larger than j (on any machine).
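As an illustration of the cost structure, the sketch below brute-forces the assignment for the p_j = p case on a tiny instance; a real implementation would solve the assignment problem in polynomial time (e.g., with the Hungarian algorithm). Function names and the instance are ours:

```python
from itertools import permutations

def cost(j, h, p, T, r, c):
    # q_{jh} for the identical-processing-time case (p_j = p), with jobs
    # indexed 0..n-1 by nonincreasing reward and positions h = 1, 2, ...
    if h > j + 1:          # 0-based job j is job j+1 in the text: no position h > j
        return float("inf")
    return c[j] - r[j] * (1 - h * p / T)

def solve_k1_k2(p, T, r, c, k1, k2):
    # Brute-force the assignment of distinct jobs to the k1 + k2 positions
    # (exponential; shown only to make the cost structure concrete).
    n = len(r)
    slots = list(range(1, k1 + 1)) + list(range(1, k2 + 1))
    best, best_sel = float("inf"), None
    for sel in permutations(range(n), k1 + k2):
        v = sum(cost(j, slots[i], p, T, r, c) for i, j in enumerate(sel))
        if v < best:
            best, best_sel = v, sel
    return best, best_sel
```

Minimizing the total q-cost maximizes z(S_k): with r = (10, 8, 6, 4), unit costs, p = 1, T = 100, and k_1 = k_2 = 1, the two highest-reward jobs are selected, one per machine.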
Now, denoting by S*_{k_1,k_2} the optimal set of selected jobs for UIMSPSk1,k2(2) and by z(S*_{k_1,k_2}) the corresponding net expected reward, the optimal set S_k* for UIMSPSk(2) is given by
z(S_k*) = max_{k_1,k_2} { z(S*_{k_1,k_2}) }.
Since the value k 1 can be chosen in k different ways (we can rule out k 1 = 0 ), the following result holds:
Theorem 11.
When r j = r for all j, or when p j = p for all j, and T is nonbinding, UIMSPSk(2) can be solved in O ( n 4 ) .
Notice that the same reasoning extends to UIMSPSk(m): fixing in all possible ways the numbers of jobs k_1, k_2, …, k_m to be assigned to the machines shows that, in these two special cases, for fixed m, UIMSPSk(m) is polynomially solvable.
Theorem 12.
When r j = r for all j, or when p j = p for all j, T is nonbinding, and m is fixed, UIMSPSk(m) can be solved in O ( n m + 2 ) .
The complexity of UIMSPSk(m) when m is not fixed is open.
Complexity results are summarized in Table 3 for UIMSP and in Table 4 for UIMSPS and UIMSPSk.
Table 3. Summary of complexity results for UIMSP.
Table 4. Summary of complexity results for UIMSPS and UIMSPSk.

4. Results

In this section, we present the results of a computational experiment concerning UIMSPS(1). The aim of the experiment was to assess the viability of the dynamic programming algorithm presented in Section 3.2 to solve UIMSPS(1). The experiments were carried out in Python 3.11 within the PyCharm 2022.3.2 environment on a system equipped with a 12th Gen Intel(R) Core(TM) i9-12900 processor (2.40 GHz) and 128 GB of RAM. The machine runs Windows Server 2022 Standard, version 21H2.
We considered randomly generated instances for various values of the problem parameters. In particular, we generated instances with n ∈ {100, 1000, 10,000}, integer processing times uniformly drawn from [1, 100], and a time horizon T ∈ {25n, 50n, 75n}. The rewards are uniformly distributed in [1, 100]. To define the costs, for each job, we sampled a coefficient
u j Uniform [ 0 , 0.8 ] ,
and set
c j = r j · u j .
Notice that, in this way, the cost of each job is smaller than its reward, so no job was discarded a priori in our instances.
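The generation scheme just described can be sketched as follows (integer rewards are assumed, as for the processing times; the helper name is ours):

```python
import random

def generate_instance(n, horizon_factor, seed=None):
    # p_j, r_j drawn uniformly from {1, ..., 100}; c_j = r_j * u_j with
    # u_j ~ Uniform[0, 0.8], so every cost is strictly below its reward;
    # T = horizon_factor * n, with horizon_factor in {25, 50, 75}.
    rng = random.Random(seed)
    p = [rng.randint(1, 100) for _ in range(n)]
    r = [rng.randint(1, 100) for _ in range(n)]
    c = [rj * rng.uniform(0.0, 0.8) for rj in r]
    return p, r, c, horizon_factor * n
```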
For each pair ( n , T ) , 10 instances were generated. Table 5 reports the average CPU time of the dynamic programming algorithm. The results show that the computational time grows according to the theoretical complexity expressed in Theorem 2, and the dynamic program is a viable algorithm even for very large instances. In order to test the limits of the approach, we even ran an instance with n = 100,000 jobs and T = 2,500,000, which took a little more than 5 h to complete.
Table 5. Average CPU time (seconds) in each scenario (average over 10 instances).

5. Conclusions

In this paper, we addressed a new model for scheduling jobs on machines subject to unrecoverable interruptions. The model applies to all those situations in which a value (reward) is attached to the successful accomplishment of a job, but the resource is withdrawn within a certain time horizon T. Here, we addressed the case of linear risk; i.e., the probability of the resource being withdrawn is uniform across T. In this scenario, we established the complexity of various job-selection and scheduling problems.
Table 3 and Table 4 summarize the complexity results provided in the paper, and they also point out relevant open problems from the complexity viewpoint. In particular, whether UIMSPS(1) is NP-hard is still open. Concerning this problem, an interesting research topic is the analysis of the greedy algorithm presented in Section 3.2. Future research might pursue a theoretical bound on the approximation ratio of such an algorithm, complemented by a thorough experimental study of the greedy and/or other heuristics.
Other relevant topics for future research on both UIMSP and UIMSPS deal with modeling issues, including scenarios in which (i) interruptions follow distributions other than the linear risk model or (ii) correlations exist among unrecoverable interruptions on different machines.

Author Contributions

Conceptualization, A.A. and I.S.; methodology, A.A. and I.S.; writing—original draft preparation, A.A. and I.S.; writing—review and editing, A.A. and I.S.; funding acquisition, A.A. and I.S.; formal analysis, A.A. and I.S.; data curation, A.A. and I.S.; software, A.A. and I.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Italian Ministry of University and Research, grant PNRR-Next Generation EU-THE-Spoke 10-CUP: B63C22000680007.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Schmidt, G. Scheduling with limited machine availability. Eur. J. Oper. Res. 2000, 121, 1–15. [Google Scholar] [CrossRef]
  2. Adiri, I.; Bruno, J.; Frostig, E.; Rinnooy Kan, A. Single Machine Flow-Time Scheduling with a Single Breakdown. Acta Inform. 1989, 26, 679–696. [Google Scholar] [CrossRef]
  3. Yin, Y.; Wang, Y.; Cheng, T.; Liu, W.; Li, J. Parallel-machine scheduling of deteriorating jobs with potential machine disruptions. Omega 2017, 69, 17–28. [Google Scholar] [CrossRef]
  4. Benoit, A.; Hakem, M.; Robert, Y. Contention awareness and fault-tolerant scheduling for precedence constrained tasks in heterogeneous systems. Parallel Comput. 2009, 35, 83–108. [Google Scholar] [CrossRef]
  5. Agnetis, A.; Detti, P.; Pranzo, M.; Sodhi, M.S. Sequencing unreliable jobs on parallel machines. J. Sched. 2009, 12, 45–54. [Google Scholar] [CrossRef]
  6. Benoit, A.; Robert, Y.; Rosenberg, A.L.; Vivien, F. Static strategies for worksharing with unrecoverable interruptions. Theory Comput. Syst. 2013, 53, 386–423. [Google Scholar] [CrossRef]
  7. Szmerekovsky, J.G.; Venkateshan, P.; Simonson, P.D. Project scheduling under the threat of catastrophic disruption. Eur. J. Oper. Res. 2023, 309, 784–794. [Google Scholar] [CrossRef]
  8. Unluyurt, T. Sequential testing problem: A follow-up review. Discret. Appl. Math. 2025, 377, 356–369. [Google Scholar] [CrossRef]
  9. Li, Z.; Chang, V.; Hu, H.; Li, C.; Ge, J. Real-time and dynamic fault-tolerant scheduling for scientific workflows in clouds. Inf. Sci. 2021, 568, 13–39. [Google Scholar] [CrossRef]
  10. Stadje, W. Selecting jobs for scheduling on a machine subject to failure. Discret. Appl. Math. 1995, 63, 257–265. [Google Scholar] [CrossRef]
  11. Agnetis, A.; Lidbetter, T. The largest-Z-ratio-first algorithm is 0.8531-approximate for scheduling unreliable jobs on m parallel machines. Oper. Res. Lett. 2020, 48, 405–409. [Google Scholar] [CrossRef]
  12. Agnetis, A.; Detti, P.; Martineau, P. Scheduling nonpreemptive jobs on parallel machines subject to exponential unrecoverable interruptions. Comput. Oper. Res. 2017, 79, 109–118. [Google Scholar] [CrossRef]
  13. Agnetis, A.; Benini, M.; Detti, P.; Hermans, B.; Pranzo, M.; Schewior, K. Replication and sequencing of unreliable jobs on m parallel machines: New results. Comput. Oper. Res. 2025, 183, 107085. [Google Scholar] [CrossRef]
  14. Olszewski, W.; Vohra, R. Simultaneous selection. Discret. Appl. Math. 2016, 200, 161–169. [Google Scholar] [CrossRef]
  15. Smith, W. Various optimizers for single stage production. Nav. Res. Logist. Q. 1956, 3, 59–66. [Google Scholar] [CrossRef]
  16. Shabtay, D.; Gaspar, N.; Kaspi, M. A survey on offline scheduling with rejection. J. Sched. 2013, 16, 3–28. [Google Scholar] [CrossRef]
  17. Schmidt, D. Scheduling with Position-Dependent Speed. Master’s Thesis, Fakultät für Mathematik, Technische Universität München, München, Germany, 2017. [Google Scholar]
