Article

Single Machine Scheduling Problems: Standard Settings and Properties, Polynomially Solvable Cases, Complexity and Approximability

by
Nodari Vakhania
1,*,
Frank Werner
2,* and
Kevin Johedan Ramírez-Fuentes
1
1
Centro de Investigación en Ciencias, Universidad Autónoma del Estado de Morelos, Cuernavaca 62209, Mexico
2
Faculty of Mathematics, Otto-von-Guericke University, 39106 Magdeburg, Germany
*
Authors to whom correspondence should be addressed.
Algorithms 2026, 19(1), 38; https://doi.org/10.3390/a19010038
Submission received: 1 November 2025 / Revised: 27 December 2025 / Accepted: 30 December 2025 / Published: 4 January 2026
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)

Abstract

Since the publication of the first scheduling paper in 1954, a huge number of works dealing with different types of single machine problems have appeared, addressing heuristics and enumerative procedures, complexity results, and structural properties of particular problems. Existing surveys often discuss particular subjects, such as special objective functions, or cover more general scheduling problems, of which a substantial part is devoted to single machine problems. In this paper, we focus on standard settings, basic structural properties of these settings, polynomial algorithms, and complexity and approximation issues, which have not been reviewed so far, and we suggest some future work in this area.

1. Introduction

Scheduling problems arise in many real-life applications, including our everyday life. We commonly wish to make the best use of our time by arranging different activities, where both the time and the resources that we use are scarce. We wish to optimize some of our personal criteria, but we also depend on the availability of the above-mentioned resources. As a simple but very common daily example, consider an apartment with a single bathroom, where a small family lives. The family members get up at some fixed time, and all of them need to spend some time in the bathroom, which is a scarce resource (or machine). Typically, such a (renewable) resource can serve a single object or job at a time, a family member in our example. Every member of the family needs to leave the apartment at some fixed time to reach their job or school without delay; this time can differ between the members of the family. Here arises a small optimization problem that requires us to schedule a bathroom time interval for each member of the family so that each member meets their deadline.
Generally, in a scheduling problem we are given jobs or tasks, which are requests that need to be performed by some resource, such as a machine or a processor. In our small example, the bathroom requirements of each member of the family are jobs and the bathroom is a machine. We can give a vast number of examples of jobs and machines in our practical life, e.g., a job in a factory, a software application on a computer or smartphone, or a lesson at school. Here, machines, computers, smartphones, classrooms and teachers are examples of resources. Every job requires some time units on a resource, whereas a resource is able to process at most one job at a time. Indeed, one classroom cannot be used for two different classes simultaneously, and a teacher cannot give two or more classes at the same time. There is a restriction on the whole time window during which a factory or a teacher can work. It is a scheduling task to arrange the order for processing the jobs. Often, there is some minimization or maximization criterion called an objective function. For example, a very common criterion is to minimize the maximum job or machine completion time, called the makespan C_max.
In this paper, we consider single machine scheduling problems. These problems are important for some practical multi-machine scheduling applications. Often there exists a bottleneck machine in a manufacturing environment. If one can solve the scheduling problem on the bottleneck machine optimally, this is at least a very good base for the overall schedule. An intelligent solution method for a single machine scheduling problem may also lead to important insights for the solution of more general shop and multiprocessor scheduling problems. We concentrate our attention on standard settings, basic properties for these settings, polynomial-time algorithms, and complexity and approximability issues. Regarding standard settings, most single machine scheduling problems are relatively easy for simultaneously released jobs, provided that no weight function is associated with the jobs. With non-simultaneously released jobs, single machine scheduling problems become essentially more challenging. Here, we deal with such settings, where the jobs, besides release times, are characterized by due dates, and we consider due-date-oriented objective functions. We review the most important ideas, properties and approaches for classical standard settings at the borderline between polynomial solvability/approximability on the one side and hardness on the other side; a review of the extensive literature on single machine scheduling for these mostly offline settings has been missing so far.
Most of the results we review were obtained after the early seminal works by Lenstra et al. [1] and Chen et al. [2], which covered a wide range of scheduling problems, including those on one machine. In general, there is a huge number of papers dealing with single machine problems. Since the publication of [1,2], surveys are mostly restricted to special subjects and/or consider more general problem settings. As an example, there exist three very detailed surveys on scheduling problems with setup times or costs, see [3,4,5]. For problems with so-called p-batching (i.e., several jobs can be processed in parallel in a batch), a survey was given by Fowler and Mönch [6]. The case of equal processing times was surveyed by Kravchenko and Werner [7]. Recently, several works have dealt with single machine problems with learning effects. The first paper on this subject was given by Briskorn [8]; for a survey on learning effects and periodic maintenance, see [9]. The paper by Koulamas and Kyparisis [10] reviews different kinds of dynamic programming algorithms for single machine scheduling problems. Other surveys on single machine problems deal with particular objective functions. As a few examples, we can mention, e.g., a survey on the total weighted tardiness problem [11] and on the weighted number of tardy jobs problem [12]; for more general problems, we mention a survey on scheduling with late work criteria [13] or more recently on early and late work scheduling [14], where a substantial part is dedicated to single machine scheduling. While mostly a minimization problem is considered, Gafarov et al. [15] reviewed algorithms and complexity issues for single machine maximization problems.
In Section 2, we study general structural properties of feasible schedules. These properties have been used for the construction of polynomial-time algorithms, and they may further be useful to build new efficient algorithms for both polynomially solvable and open versions of scheduling problems with job release and due times in general. Then in Section 3, we discuss polynomial algorithms, complexity and approximability issues, implicit enumeration algorithms and lower bounds for due-date-oriented objective functions with an emphasis on the classical maximum job lateness objective function. We also outline some subjects for future research, which includes in part more general multi-machine scheduling problems. This paper ends with concluding remarks in Section 4.
In Table 1 and Table 2, we summarize most of the reviewed works concerning exact and approximation algorithms for single machine scheduling problems. We refer to the particular sections where the corresponding algorithms are discussed. In the time complexity expressions, the variable n stands for the total number of jobs, while additional parameters are defined in the corresponding subsections. The criterion D_max is defined in the following subsection.
Since heuristic and meta-heuristic algorithms are usually not applied to the borderline problems between polynomial solvability/approximability that we outline here, we do not discuss the large variety of such algorithms for single machine problems. For the interested reader, we mention that for more general NP-hard single machine problems, the standard meta-heuristic techniques are applied. In the recent literature, one can find papers dealing with a genetic algorithm (see, e.g., [39]), both simulated annealing and tabu search (see [40]), ant colony optimization (see, e.g., [41]), variable neighborhood search (see, e.g., [42] or [43]), iterated local search (see [44]), differential evolution (see, e.g., [45]), the meerkat clan algorithm and bald eagle search (see [46]) or gray wolf optimization (see [47]). In addition, particular meta-heuristics have been developed, e.g., for stable scheduling on a single machine [48], a self-adaptive meta-heuristic for the problem with flexible and variable maintenance [49] or a data-driven heuristic combining machine learning with problem-specific characteristics [50], to name a few.
Table 2. Overview of approximation algorithms.
Problem                        Time Complexity                     Reference                      Approximation Factor

Section 3.1
1|r_j|D_max                    O(n log n)                          Schrage [51]                   2
1|r_j|D_max                    O(n^2 log n)                        Potts [52]                     3/2
1|r_j|D_max                    O(n log n)                          Nowicki and Smutnicki [53]     3/2
1|r_j|D_max                    O(n^2 log n)                        Hall and Shmoys [54]           4/3
1|r_j|D_max                    O(16^k (nk)^{3+4k})                 Hall and Shmoys [54]           1 + 1/k (PTAS)
1|r_j|D_max                    O(n log n + n (4k)^{8k^2+8k+2})     Hall and Shmoys [54]           1 + 1/k (PTAS)
P|r_j|D_max                    O(n + k^{O(k)})                     Mastrolilli [55]               1 + 1/k (PTAS)
1|r_j|D_max                    O(κ! κ^{1.1} n^2 log n)             Vakhania [19]                  1 + 1/k (PTAS)

Section 3.6
1|r_j|D_max (online version)   O(n log n)                          Hoogeveen and Vestjens [56]    (√5 + 1)/2 ≈ 1.61803

Basic Notation and Terminology

We shall denote the given set of n jobs by {1, 2, …, n}. A schedule S prescribes to each job j the starting time s_j^S on the machine (creating in this way a total order on the set of jobs by sequencing the jobs on the machine). Problem restrictions on the way in which the jobs are to be performed by the machine define the set of feasible schedules. For example, a machine can process at most one job simultaneously; the precedence relations defined on the set of jobs restrict the relative order in which the jobs can be scheduled. Scheduling problems are sometimes referred to as sequencing problems since the jobs need to be scheduled in a sequential manner (as no job overlapping is permitted). The precedence relations between the jobs can be represented by directed graphs, which are often referred to as precedence (task) graphs. In such graphs, nodes represent jobs, and there is an arc (i, j) if and only if job i immediately precedes job j.
Job preemptions might be allowed or not. In a preemptive scheduling problem, a job can be interrupted and later resumed on the machine. In a non-preemptive scheduling problem, job j, once scheduled on the machine, should be processed without any interruption until its full completion on that machine. The machine is called idle at some moment t if no job is processed by the machine at that moment; otherwise, it is said to be busy.
Job j, once scheduled on the machine, needs the processing time p_j on that machine. Thus, if this job is processed without any interruption, its completion time on the machine in the schedule S is c_j^S = s_j^S + p_j. Since basically all scheduling problems imply resource restrictions, in any feasible schedule S, if job j is scheduled on the machine, then s_j^S ≥ s_i^S + p_i for any job i scheduled earlier on the machine in the schedule S.
Besides the processing time, job j may possess additional parameters (job parameters can normally be assumed to be non-negative integral numbers). For example, the time moment when it becomes available to be processed by the machine is called its release or readiness time and is commonly denoted by r_j. Thus, s_j^S ≥ r_j for any feasible schedule S and any job j. The due date d_j of job j is the “desirable” completion time of job j on the machine. It is a common situation in real-life applications that it is desirable for a job to be completed by some date, as otherwise it becomes less important.
With job due dates, the quality of a solution is measured by a due-date-oriented objective function. We can judge the urgency of a job j based on its slack d_j − r_j − p_j: the earliest job j can be started is at time r_j, and hence, the earliest it can be completed is at time r_j + p_j. If r_j + p_j > d_j, then there is no feasible schedule in which job j completes by its due date. Otherwise, the tighter d_j is (equivalently, the smaller the slack of job j is), the more urgent job j is.
In some cases, we impose the condition that job j cannot be completed after its due date. Then the term deadline instead of “due date” is commonly used; in a feasible schedule, every job is to be completed by its deadline (in some applications, job deadlines are important since if a job is not completed by its deadline, then its benefit or functionality becomes negligible). Thus, if job deadlines are given, our objective is to find a feasible schedule, one in which all jobs are completed before or at their deadlines on the machine.
In a number of applications, a job, once completed by the machine, needs to be delivered to the customer. The delivery is carried out by an independent transportation unit, which is not a scarce resource in a given scenario. The delivery time of job j is commonly denoted by q_j. On the one hand, job delivery takes no machine time. On the other hand, q_j contributes to the full completion time of job j: in a given schedule S, the full completion time of job j is C_j^S = c_j^S + q_j (whereas c_j^S is the completion time of job j on the machine in that schedule). Thus, the full completion time of a job depends on both its completion time on the machine and its delivery time. Job due dates and delivery times are alternative job parameters in the sense that they typically are not present together. With job delivery times, one wishes to minimize the maximum job full completion time, whereas with job due dates one wishes to minimize some due-date-oriented objective function (see below).
Concerning the objectives, an optimal schedule is a feasible schedule which minimizes or maximizes a given objective (goal) function. With job due dates, for example, our objective can be to minimize or maximize some due-date-oriented objective function. For example, let the lateness of a job in the schedule S be L_j(S) = c_j^S − d_j, i.e., the difference between the completion time of job j on the machine and its due date. One of the common due-date-oriented objective functions is the maximum job lateness L_max (which is to be minimized). Another such common objective function that is to be minimized is ∑U_j, the number of late jobs, i.e., jobs completed on the machine after their due dates (U_j is a 0–1 variable indicating whether job j is completed on time or not in a given schedule). We may allow the omission of some jobs in the case when the total workload exceeds the machine capacity, maximizing the throughput, which is the number of jobs completed on time, i.e., before or at their due dates. We may also consider more general criteria to maximize some profits. The profit of job j can be expressed by its weight, commonly denoted by w_j.
If jobs have no due dates and no delivery times, then one typically wishes to minimize the maximum job completion time on the machine, the makespan C_max = max_j c_j. If jobs have delivery times (then the full completion time of every job with a positive delivery time exceeds its completion time on the machine), the makespan is defined as the maximum job full completion time C_max = max_j C_j. With job delivery times, to avoid a possible confusion between the two job completion times c_j and C_j, recently Jan Karel Lenstra [57] suggested using D_j = c_j + q_j (instead of C_j) for the full completion time of job j. With this notation, one wishes to minimize D_max = max_j D_j. This extends the three-field notation introduced much earlier by Graham, Lawler, Lenstra and Rinnooy Kan [58], a commonly used way to abbreviate scheduling problems. In this notation, the first, the second and the third fields define the machine environment, the job characteristics and the objective function, respectively. For example, for the single machine scheduling problems considered here, a “1” is used in the first field; the problem with job release times and due dates to minimize the maximum job lateness is abbreviated by 1|r_j|L_max; and the problem with job release and delivery times to minimize the maximum job full completion time can be expressed either by 1|r_j,q_j|C_max or, using the new notation [57], by 1|r_j|D_max. Notice that we omit the parameters d_j and q_j in the 1|r_j|L_max and 1|r_j|D_max notations, respectively; they are implicitly assumed to be present: the use of L_max implies that due dates d_j have been specified, and the use of D_max implies that delivery times q_j have been specified.
Note that both job delivery times and due dates specify the urgency of job j: the larger q_j is, the more urgent job j is (as its delivery will take a “considerable time”); in contrast, the smaller d_j is, the more urgent job j is (as it has a “small” slack).
The mathematical equivalence of the two above settings 1|r_j|L_max and 1|r_j|D_max (1|r_j,q_j|C_max) is established by letting d_j = −q_j for all j. We can avoid this slightly unnatural due-date negativity by considering an alternative transformation, which is practically intuitive and conserves non-negative job due dates. Given an instance of the problem 1|r_j,q_j|C_max (1|r_j|D_max), let K be a large constant, say, no less than the maximum job delivery time, and let the due date of each job j be d_j = K − q_j; this yields an equivalent instance of the problem 1|r_j|L_max. Vice versa, given an instance of the problem 1|r_j|L_max, an equivalent instance of the problem 1|r_j,q_j|C_max (1|r_j|D_max) is obtained by letting q_j = D − d_j for a large constant D no less than the maximum job due date. It is easy to see that whenever the makespan for the version 1|r_j,q_j|C_max is minimized, the maximum job lateness in the problem 1|r_j|L_max is minimized, and vice versa; see an early paper by Bratley et al. [59] for the details. It should be noted that, for approximation purposes, the setting 1|r_j|L_max is inappropriate, as the maximum job lateness in a schedule can be positive, negative or even 0. However, the problem 1|r_j|L_max can be approximated via its alternative formulation 1|r_j|D_max (see Section 3.1).
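As a minimal illustrative sketch (not from the paper; the tuple encodings (r_j, p_j, d_j) and (r_j, p_j, q_j) and the function names are our own convention), the two transformations can be written as:

```python
# Hedged sketch of the instance transformations between 1|r_j|L_max and
# 1|r_j|D_max. Any constant at least max d_j (resp. max q_j) works; we
# simply take the maximum itself.

def lmax_to_dmax(instance):
    """instance: list of (r_j, p_j, d_j) -> list of (r_j, p_j, q_j)
    with q_j = D - d_j for D = max_j d_j."""
    D = max(d for _, _, d in instance)
    return [(r, p, D - d) for r, p, d in instance]

def dmax_to_lmax(instance):
    """instance: list of (r_j, p_j, q_j) -> list of (r_j, p_j, d_j)
    with d_j = K - q_j for K = max_j q_j."""
    K = max(q for _, _, q in instance)
    return [(r, p, K - q) for r, p, q in instance]
```

An optimal sequence for the transformed instance is then optimal for the original one, since the objective values of every schedule differ only by the fixed constant.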

2. Some Basic Concepts and Properties

In this section, we give some basic concepts and schedule properties that have been used in the study of single machine scheduling problems. Single machine scheduling problems are trivial if every job has a single parameter, its processing time. For minimizing the makespan, it suffices to schedule the jobs in any order without leaving the machine idle, i.e., without a gap: a gap on a machine is a maximal time interval during which this machine is idle, i.e., it processes no job. A block is a consecutive part of a schedule which is preceded and succeeded by a gap.
Thus, from any permutation of the n jobs, an optimal makespan schedule can be constructed in linear time by assigning the jobs in the order of that permutation without leaving any idle time interval on the machine. In particular, if all jobs are released simultaneously, any schedule without idle time minimizes the makespan C_max. Single machine scheduling problems become less trivial if the jobs are not released simultaneously, i.e., they have individual release times. Then, to minimize the makespan, we just need to keep the machine busy whenever this is possible. This goal is achieved if we order the jobs according to non-decreasing release times and include them in this order without leaving any avoidable gap. This procedure takes O(n log n) time due to the sorting preprocessing step.
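The O(n log n) procedure just described can be sketched in a few lines (an illustrative sketch, not from the survey; jobs are encoded as (r_j, p_j) tuples, our own convention):

```python
# Minimizing C_max on one machine with release times r_j: sort by release
# time, then keep the machine busy whenever a job is available.
def makespan_schedule(jobs):
    """jobs: list of (r_j, p_j). Returns (start_times, C_max)."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])  # O(n log n)
    t, starts = 0, {}
    for j in order:
        r, p = jobs[j]
        t = max(t, r)      # an unavoidable gap: wait for the job's release
        starts[j] = t
        t += p             # otherwise the machine is kept busy
    return starts, t

starts, cmax = makespan_schedule([(0, 5), (1, 5)])
```

Here the returned C_max is minimal because the machine is idle only when no released job is available.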
Minimizing maximum job lateness L max is a bit more complex. If only the job due dates are given (all the jobs being released at time 0), the venerable ED-heuristic (earliest due date) obtains an optimal schedule, minimizing maximum job lateness by scheduling the jobs in non-decreasing order of their due dates without any idle time intervals (we describe this optimization algorithm later in more detail). Vice versa, if only job release times are given, the earliest release time (ER) heuristic generates an optimal makespan schedule. In that algorithm, initially, the minimum job release time is assigned to the current scheduling time, and any job released at that time is assigned to the machine. Iteratively, the next scheduling time is defined as the maximum between the completion time of the latest-so-far-assigned job and the minimum release time taken among the yet unscheduled jobs. Again, any (unscheduled) job released at the current scheduling time is assigned to the machine.
Because of the job release times, a schedule created by the ER-heuristic may contain a gap. However, it contains no gap that could be avoided; in particular, the total gap length in such a schedule is the minimum possible. We can also use this heuristic if all jobs have a common due date d, i.e., all jobs are equally urgent. Then, as is easy to see, an optimal schedule minimizing the maximum job lateness contains no avoidable gap, and it has the minimum possible total gap length. Therefore, the ER-heuristic also minimizes the maximum job lateness.
It is easy to see that the ER-heuristic no longer minimizes the maximum job lateness even if only two distinct job due dates are allowed. For example, consider an instance with two jobs, each with processing time 5: job 1 is released at time 0 and has the due date 11, and job 2 is released at time 1 and has the due date 6. The ER-heuristic will assign job 1 to the time interval [0, 5) and job 2 to the time interval [5, 10), resulting in a maximum job lateness of 4 (that of job 2). Observe that this schedule contains no gap. At the same time, an optimal schedule does contain a gap [0, 1): it assigns job 2 to the time interval [1, 6) and then job 1 to the time interval [6, 11), completing both jobs on time with a maximum job lateness of 0.
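The numbers of this instance can be verified in a few lines (our own encoding; the processing times of 5 are implied by the intervals in the text):

```python
# Two-job instance: job 1 (r=0, p=5, d=11), job 2 (r=1, p=5, d=6).
jobs = {1: dict(r=0, p=5, d=11), 2: dict(r=1, p=5, d=6)}

def max_lateness(schedule):
    """schedule: list of (job, start_time). Returns max_j (c_j - d_j)."""
    return max(s + jobs[j]["p"] - jobs[j]["d"] for j, s in schedule)

er_schedule = [(1, 0), (2, 5)]   # ER order: no gap, job 2 is 4 units late
opt_schedule = [(2, 1), (1, 6)]  # optimal: gap [0, 1), both jobs on time
```

The ER schedule yields a maximum lateness of 4, the optimal one a maximum lateness of 0, matching the discussion above.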
If, with job release times and due dates, job preemption is allowed, then we have a simple method to solve the problem optimally, and even online. We start at the earliest job release time and initially set the current time t to that value. Iteratively, among the available jobs, we schedule the next most urgent job, one with the smallest due date (or the maximum delivery time for the problem 1|r_j|D_max). However, if during the execution of that job another, more urgent job gets released, the former job is interrupted at the release time of the latter job, and the latter job is then included at its release time. In this way, no urgent job is forced to be delayed by a less urgent one; the delay of job j in the schedule S is c_j^S − r_j (recall that a job is more urgent than another job if the former one has a larger delivery time or a smaller due date). Note that the so-constructed schedule also minimizes the makespan, since it has no avoidable machine idle time.
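This preemptive rule can be sketched as an event-driven simulation (an illustrative sketch, not from the paper; jobs are encoded as (r_j, p_j, d_j) tuples, our own convention):

```python
import heapq

# Preemptive ED rule: at every moment, run the released job with the
# smallest due date, preempting whenever a more urgent job is released.
def preemptive_ed(jobs):
    """jobs: list of (r_j, p_j, d_j). Returns (L_max, completion times)."""
    events = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    remaining = [p for _, p, _ in jobs]
    completion = [None] * len(jobs)
    ready, i, t = [], 0, 0
    while i < len(jobs) or ready:
        if not ready:                         # machine idle: jump to next release
            t = max(t, jobs[events[i]][0])
        while i < len(jobs) and jobs[events[i]][0] <= t:
            j = events[i]
            heapq.heappush(ready, (jobs[j][2], j))   # keyed by due date
            i += 1
        d, j = heapq.heappop(ready)
        # run job j until it finishes or until the next release event
        next_r = jobs[events[i]][0] if i < len(jobs) else float("inf")
        run = min(remaining[j], next_r - t)
        t += run
        remaining[j] -= run
        if remaining[j] == 0:
            completion[j] = t
        else:
            heapq.heappush(ready, (d, j))     # preempted: back to the pool
    lmax = max(completion[j] - jobs[j][2] for j in range(len(jobs)))
    return lmax, completion
```

On the two-job example above, the rule preempts job 1 at time 1, runs job 2 in [1, 6), and resumes job 1, achieving a maximum lateness of 0.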
Suppose now that job preemption is not allowed and that job i is processed by the machine at time r_j in the schedule S, where the yet unscheduled job j is such that d_j < d_i; more precisely, job j gets released within the execution interval of job i, so that job i delays the start of job j. Note that this delay will be less than p_i if job j is scheduled immediately upon the completion of job i. The delay of job j may, however, result in a non-optimal schedule. Indeed, this will be the case when job j is urgent “enough” that in an optimal schedule it should be included with no or with a smaller delay.
Now we give a description of the non-preemptive version of the heuristic. This popular greedy algorithm, commonly referred to as the earliest due date (ED) heuristic, has been widely used in the solution of variants of single machine, multiprocessor and also shop scheduling problems. As with the preemptive version described above, it is online. It was initially proposed by Jackson [20] and proved to be optimal for the problem 1||L_max. Later, it was extended to the setting with job release times 1|r_j|L_max by Schrage [51]. The extended algorithm is as follows. We let the current scheduling time t be the minimal release time of an unscheduled job or the time when the machine completes the last job, whichever is larger. If no job is yet scheduled, we let t := 0. Iteratively, among the jobs already released by time t, the heuristic schedules the most urgent job (breaking ties by selecting the longest job). So for the problem 1|r_j|L_max, the most urgent job is an already released one with the smallest due date, whereas for the alternative setting 1|r_j|D_max, it is an already released one with the largest delivery time. We shall refer to the extended versions of the above greedy algorithm applied to the problems 1|r_j|L_max and 1|r_j|D_max as the ED-heuristic and the LDT-heuristic, respectively. Given the obvious similarity between the two versions of the heuristic and the equivalence of the two settings, we make no distinction between the heuristics, these settings, and L_max and D_max, which may be used interchangeably.
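A heap-based sketch of the LDT-heuristic (our own encoding, not from the paper; jobs are (r_j, p_j, q_j) tuples, and ties on the delivery time are broken by the longer processing time, as described above):

```python
import heapq

# Schrage-style LDT-heuristic for 1|r_j|D_max: at each decision point,
# schedule the released job with the largest delivery time.
def ldt_schedule(jobs):
    """jobs: list of (r_j, p_j, q_j). Returns (sequence, D_max)."""
    by_release = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    ready, seq, i, t, dmax = [], [], 0, 0, 0
    while i < len(jobs) or ready:
        if not ready:
            t = max(t, jobs[by_release[i]][0])   # jump over an unavoidable gap
        while i < len(jobs) and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            # max-heap on delivery time, then on processing time (tie-break)
            heapq.heappush(ready, (-jobs[j][2], -jobs[j][1], j))
            i += 1
        _, _, j = heapq.heappop(ready)           # most urgent released job
        seq.append(j)
        t += jobs[j][1]
        dmax = max(dmax, t + jobs[j][2])         # full completion time c_j + q_j
    return seq, dmax
```

With the sorted preprocessing and one heap operation per job, this runs in O(n log n) time, matching the complexity cited for Schrage [51].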
We now introduce some additional concepts. Among all jobs in the schedule S, it is useful to distinguish those whose lateness attains the maximum job lateness L_max(S) in that schedule. In a given block B of the schedule S, among all such jobs, the last scheduled one is said to be an overflow job o in the schedule S. In general, more than one overflow job may exist in the schedule S, each one in a different block of that schedule.
Let e be a job from the block B such that d_e > d_o (i.e., job e is less urgent than the overflow job o). Then job e is referred to as an emerging job in the schedule S associated with job o. There may exist one or more emerging jobs for the overflow job o. The one scheduled last before job o is referred to as the delaying emerging job l associated with job o.
During the scheduling process, it is important to determine an “appropriate” position of each emerging job. Then we might be able to “adjust” the full completion time of the overflow jobs and reduce in this way the makespan. This property makes the “non-urgent” emerging jobs most important during the scheduling process.
Given an ED-schedule S (one constructed by the ED-heuristic) and a block B of S with the overflow job o, the corresponding kernel K(B) consists of the set of jobs included in that block between the delaying emerging job l associated with the overflow job o and job o, including the latter job. It immediately follows that the due date of any job of the kernel K(B) is no greater than that of job o (by the choice of l, no kernel job is an emerging job). Note that any schedule contains the same number of kernels, overflow jobs and corresponding delaying emerging jobs. This number cannot be more than the total number of blocks in the schedule.
We illustrate the above concepts using the problem instance with eight jobs from Table 3. The LDT-schedule for that instance is depicted in Figure 1. There are two blocks and one kernel in that schedule. The numbers in the circles indicate the full completion times (the objective values) of the corresponding jobs. The kernel jobs are marked in blue, the overflow job of the kernel is marked in red, and the two emerging jobs are marked in green.
We have the following known optimality condition (see [60] for a proof):
Lemma 1.
The ED-schedule S is optimal if it contains an overflow job with no associated delaying emerging job.
The ED-heuristic is quite a powerful scheduling tool and can be used to “capture” other important structural properties of the problem. For example, another sufficient optimality condition can be derived relatively easily. Suppose we apply the heuristic online. Denote by j_t the job scheduled at time t and by S_t the partial ED-schedule constructed by the scheduling time t. Suppose further that once job j_t is scheduled, another job j with d_j < d_{j_t} gets released within the execution interval of the former job (job j_t is “pushing” the more urgent job j). Then we shall refer to the scheduling time t as a conflict scheduling time. Let σ be the schedule generated by the ED-heuristic for the originally given problem instance.
Lemma 2.
The ED-schedule σ is optimal if no conflict scheduling time occurs while it is built. If this condition is satisfied for all t, no job released within the execution interval of a job j_t can start a kernel at any scheduling time t during the construction of the schedule σ.
Proof. 
The lemma follows, since by the hypothesis we can have no overflow job with an associated delaying emerging job in the schedule σ (see Lemma 1). □
Next, we define a lower bound LB on the optimum objective function value OPT. Given an LDT-schedule S (one constructed by the LDT-heuristic) with the kernel K and the corresponding delaying emerging job l(K), we have the lower bound
OPT ≥ D_max(S) − p_{l(K)}    (1)
(see, e.g., [60] for a proof). Combining this lower bound with other easily seen lower bounds, an overall lower bound on OPT is
LB = max { max_{K ∈ S} ( D_max(S) − p_{l(K)} ), P, max_j { r_j + p_j + q_j } },
where P is the total processing time of all jobs.
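The overall bound can be sketched as follows (a hedged sketch under our own encoding, not the authors' code: jobs as (r_j, p_j, q_j) tuples, an LDT-schedule given as a job sequence with start times; for simplicity, only the kernel of the block attaining D_max(S) is considered):

```python
# Computing the overall lower bound LB from an LDT-schedule:
# LB = max( D_max(S) - p_{l(K)}, P, max_j (r_j + p_j + q_j) ).
def lower_bound(jobs, seq, starts):
    """jobs: list of (r_j, p_j, q_j); seq: job sequence; starts: dict j -> s_j."""
    P = sum(p for _, p, _ in jobs)                       # total processing time
    simple = max(r + p + q for r, p, q in jobs)          # single-job bound
    # full completion times D_j = c_j + q_j and the overflow job
    full = {j: starts[j] + jobs[j][1] + jobs[j][2] for j in seq}
    dmax = max(full.values())
    o_pos = max(i for i, j in enumerate(seq) if full[j] == dmax)
    o = seq[o_pos]
    kernel_terms = []
    # delaying emerging job: last job before o that is less urgent than o
    for i in range(o_pos - 1, -1, -1):
        j = seq[i]
        if jobs[j][2] < jobs[o][2]:                      # q_j < q_o
            kernel_terms.append(dmax - jobs[j][1])       # D_max(S) - p_{l(K)}
            break
    return max([P, simple] + kernel_terms)
```

For instance, for the two jobs (0, 10, 0) and (1, 2, 8) scheduled in that order, the bound evaluates to 12 (here the total processing time term dominates), while the optimal makespan is 13.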
Inequality (1) has useful implications for approximation purposes, where we consider the setting with job delivery times 1|r_j|D_max. Indeed, it is obvious how the approximation can be measured for this setting, unlike the setting 1|r_j|L_max, where, for example, we may have a negative lateness. By the mere observation that LB ≤ OPT and p_{l(K)} ≤ OPT, we obtain that
D_max(S) ≤ 2 OPT,
i.e., the LDT-heuristic already delivers a two-approximation schedule. Furthermore, suppose we are interested in a (1 + 1/k)-approximation solution for the problem 1|r_j|D_max for a given constant k. By the next lemma, there exist certain values of k that guarantee such an approximation:
Lemma 3.
The inequality |S|/OPT < 1 + 1/k holds if k ≤ LB/p_{l(K)}.
Proof. 
From LB ≥ D_max(S) − p_{l(K)} = |S| − p_{l(K)}, we obtain that |S|/OPT < (OPT + p_{l(K)})/OPT = 1 + p_{l(K)}/OPT ≤ 1 + p_{l(K)}/LB ≤ 1 + 1/k. □
If an ED-schedule S is not optimal, then an overflow job o in it is to be restarted earlier. By artificially increasing the release time of a corresponding emerging job and applying the ED-heuristic to the modified instance, different ED-schedules can be constructed. We intend to reduce L_max(S) in this way by activating an emerging job e for the kernel K, i.e., rescheduling it after that kernel. The activation is carried out in two steps. In step 1, the release time of job e is increased to a magnitude no less than the largest job release time in the kernel K. In step 2, the ED-heuristic is applied to the problem instance modified in this way. Since now the release time of job e is no less than that of any job of the kernel K and d_e is larger than the due date of any job from the kernel K, the ED-heuristic will give priority to the jobs of the kernel K and will reschedule job e after all these jobs. The ED-schedule {S}_e obtained in this way is referred to as (weakly) complementary to the schedule S.
Note that in the schedule S, some emerging jobs might have been originally included after kernel K. Such an emerging job can then be included before the jobs of kernel K in the schedule { S } e by the ED-heuristic (since no job of kernel K is released at time s e ( S ) ). As a result, kernel K may not be restarted at the earliest possible time. The kernel will, however, be restarted at the earliest possible time if the above emerging jobs are forced to remain after it: we increase the release times of these jobs similarly to how we did this for job e. The ED-schedule S e , obtained by the ED-heuristic for the problem instance modified in this way, is referred to as (strongly) complementary to schedule S. A complementary schedule is also referred to as a C-schedule.
Note that there is exactly one job less included before kernel K in a strong C-schedule S e compared to schedule S, so that, as is easy to see, the overflow job o will be completed earlier in the schedule S e than it was completed in the schedule S. The same does not hold for a weak C-schedule { S } e : in fact, we may have more jobs scheduled before kernel K in the schedule { S } e than in the schedule S.
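The notions of the overflow job, the kernel and the delaying emerging job can be made concrete with a small sketch. This is our own illustrative formalization (the surveyed papers define these notions precisely in Section 2): we take the kernel to be the maximal gap-free tail of jobs ending at the overflow job whose due dates do not exceed that of the overflow job; jobs are encoded as (r, p, d) triples:

```python
import heapq

def ed_schedule(jobs):
    """ED-heuristic for 1|r_j|L_max: whenever the machine becomes free,
    start the released job with the smallest due date.
    jobs: list of (r, p, d). Returns [(job_index, start, completion), ...]."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    heap, out, t, i = [], [], 0, 0
    while i < len(order) or heap:
        if not heap and jobs[order[i]][0] > t:
            t = jobs[order[i]][0]            # idle until the next release
        while i < len(order) and jobs[order[i]][0] <= t:
            j = order[i]
            heapq.heappush(heap, (jobs[j][2], j))  # min-heap on d_j
            i += 1
        _, j = heapq.heappop(heap)
        out.append((j, t, t + jobs[j][1]))
        t += jobs[j][1]
    return out

def kernel_and_delaying_job(schedule, jobs):
    """Locate the overflow job o (maximum lateness), its kernel K (the
    maximal gap-free tail ending at o with due dates at most d_o), and
    the delaying emerging job immediately preceding K, if any."""
    o_pos = max(range(len(schedule)),
                key=lambda k: schedule[k][2] - jobs[schedule[k][0]][2])
    d_o = jobs[schedule[o_pos][0]][2]
    start = o_pos
    while (start > 0
           and schedule[start - 1][2] == schedule[start][1]   # no idle gap
           and jobs[schedule[start - 1][0]][2] <= d_o):
        start -= 1
    kernel = [schedule[k][0] for k in range(start, o_pos + 1)]
    delaying = schedule[start - 1][0] if start > 0 and \
        schedule[start - 1][2] == schedule[start][1] else None
    return kernel, delaying
```

In the sample instance used in the test, a long non-urgent job occupies the machine before two urgent jobs are released, so it is detected as the delaying emerging job for the kernel formed by those two jobs.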
There are polynomial-time algorithms and implicit enumeration procedures that repeatedly generate (weak or strong) C-schedules (see, e.g., [19,23,24,25,30,33]). Depending on a particular solution approach, either weak or strong C-schedules can be enumerated. The enumeration of strong C-schedules is less costly but may not guarantee optimality.

3. The Overview

In the previous section, we gave some basic properties related to single machine scheduling problems. In this section, we survey some state-of-the-art results, concentrating our attention on the L max and D max criteria, complexity, polynomial solvability and approximability issues for standard settings. (As noted earlier, we do not attempt to cover all the existing literature on single machine scheduling problems, including extended settings with setup times and batch processing.)

3.1. Scheduling Jobs with Release Times with L max and D max Criteria

Exact algorithms. The general settings 1 | r j | L max and 1 | r j | D max are strongly NP-hard [61]. The problem is fixed-parameter tractable for the parameters p max and δ max , where δ max is the maximum difference between the job due dates, see Vakhania and Werner [25]. The problem is weakly N P -hard already when only two distinct job release times and due dates are allowed: 1 | r j ∈ { r 1 , r 2 } , d j ∈ { d 1 , d 2 } | L max (see Lenstra et al. [1] and Chinos and Vakhania [60]). This appears to be a minimal N P -hard special case of the problem 1 | r j | L max .
Among early branch-and-bound algorithms, we mention those by McMahon and Florian [16] and Carlier [17]. As to the more recent works on implicit enumeration methods, Pan and Shi [18] proposed branch-and-bound algorithms building on the framework of Carlier [17]. In particular, to each node of the search tree, a complete LDT-schedule is associated, and for each already created LDT-schedule, two possibilities are considered: in the first generated branch of the node, the delaying emerging job is activated for the corresponding kernel, and in the second branch, it is forced to be scheduled before that kernel in any further created schedule in this branch (technically, the delivery time of that job is increased so that the LDT-heuristic will always include it before all jobs of the kernel). A lower bound at each node is determined if the kernel of the corresponding LDT-schedule contains at most two jobs. The at most 3 ! feasible sequences of the (at most two) kernel jobs and the delaying emerging job are formed, and the makespan of the best of them is set as a lower bound.
Besides this lower bound, the authors in [18] give a procedure that permits one to discard some sets of jobs in the following enumeration process. In a created LDT-schedule, let i be the last scheduled job of a block such that no job preceding job i in that schedule has a full completion time (completion time plus delivery time) exceeding the current lower bound. Then the corresponding part of the schedule up to and including job i can be “frozen” (left as is) in any further created LDT-schedule, and the original instance can be trimmed accordingly.
Della Croce and T’kindt [62] consider the problem 1 | r j | L max stated with job due dates and propose a lower bound that is an alternative to the one given by the standard preemptive ED-schedule. Let 1 , … , j , … , n be the jobs in the preemptive ED-schedule (PED), where j is the first overflow job in that schedule. Four different cases where job j can appear in a non-preemptive optimal schedule S O P T are distinguished:
  • Job j is in the position j in S O P T as in the PED schedule, being preceded by the jobs 1 , … , j − 1 and followed by the jobs j + 1 , … , n . Then in schedule S O P T , job j cannot be started before the completion time c E D ( 1 , … , j − 1 ) of the last scheduled job in the non-preemptive ED-schedule E D ( 1 , … , j − 1 ) constructed for the jobs 1 , … , j − 1 . Hence, L 1 a = max { r j , c E D ( 1 , … , j − 1 ) } + p j − d j is a lower bound for the lateness of job j in this case. Likewise, since the jobs j + 1 , … , n come after job j, the release times of these jobs can correspondingly be increased. Let P E D ( j + 1 , … , n ) be the PED-schedule constructed for the updated sub-instance. Then L 1 b = L max ( P E D ( j + 1 , … , n ) ) is another lower bound, and L 1 = max { L 1 a , L 1 b } is a lower bound.
  • Job j is in one of the positions 1 , … , j − 1 in the schedule S O P T , and the jobs 1 , … , j − 1 precede the jobs j + 1 , … , n . Now, let E D ( 1 , … , j − 1 , j ) be the ED-schedule constructed for the first j jobs, i.e., the jobs 1 , … , j − 1 , j , and let c E D ( 1 , … , j − 1 , j ) be the completion time of the last scheduled job in E D ( 1 , … , j − 1 , j ) . Then note that c E D ( 1 , … , j − 1 , j ) − max i = 1 , … , j d i is a lower bound. Again, the release times of the remaining jobs j + 1 , … , n are updated and these jobs are then scheduled by the PED-heuristic. In the resultant schedule, the maximum job lateness is a lower bound, and the maximum of this and the former bound is another lower bound L 2 .
  • Job j has one of the positions j + 1 , … , n in the schedule S O P T . So j is preceded by the jobs 1 , … , j − 1 and at least one more job k from the set { j + 1 , … , n } . Similarly, we define the ED-schedule E D ( 1 , … , j − 1 , k ) , where k is a job that minimizes the maximum job completion time c E D ( 1 , … , j − 1 , k ) in such an ED-schedule. Then L 3 = c E D ( 1 , … , j − 1 , k ) + p j − d j is another lower bound.
  • There exists a job h ≤ j − 1 that occupies one of the positions j + 1 , … , n in the schedule S O P T . Now we have at least one job k scheduled before job h. Similarly to case 3, we compute the ED-schedule E D ( 1 , … , h − 1 , h + 1 , … , j , k ) for the jobs in { 1 , … , h − 1 , h + 1 , … , j , k } , where k is defined similarly as in case 3. Then L 4 = c E D ( 1 , … , h − 1 , h + 1 , … , j , k ) − max i = 1 , … , h − 1 , h + 1 , … , j , k d i is a lower bound.
The authors in [62] build their last two lower bounds L 5 and L 6 as follows. Given a pair of jobs u , v , they use the fact that
L u , v = min { max { r u + p u , r v } + p v − d v , max { r v + p v , r u } + p u − d u }
is a lower bound on L max . Indeed, either u comes before v or vice versa. In the first case, we have the first term and in the second case, we have the second one in the above min expression. Then L 5 = max i ≠ j L i , j and L 6 = max i ≠ o L i , o , where j is the first overflow job in the complete preemptive ED-schedule and o is the first overflow job in the non-preemptive ED-schedule. Thus, L B = max { L 5 , L 6 , min { L 1 , L 2 , L 3 , L 4 } } is a lower bound.
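The pairwise bound L u , v is straightforward to compute; a one-function sketch (jobs encoded as (r, p, d) triples; the function name is ours):

```python
def pair_lower_bound(u, v):
    """L_{u,v}: over the two possible orders of jobs u and v, take the
    lateness of the job scheduled second; the minimum of the two is a
    valid lower bound on L_max, since one of the orders must occur."""
    (ru, pu, du), (rv, pv, dv) = u, v
    return min(max(ru + pu, rv) + pv - dv,   # u scheduled before v
               max(rv + pv, ru) + pu - du)   # v scheduled before u
```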
Gharbi and Labidi [63] proposed yet another lower bound by applying the common idea that the lower the number of preemptions in a PED-schedule is, the stronger the corresponding lower bound can be. Intuitively, an optimal makespan of a schedule with a “restricted” number of preemptions approximates an optimal non-preemptive makespan better than a fully preemptive schedule does. Hence, they define a semi-preemptive LDT-schedule in which the preemption of some job parts is not allowed. In particular, every job is divided into two smaller jobs, the fixed and the free parts. The fixed part is considered a job that cannot be preempted, unlike the free part. After that, the decision version of the problem 1 | r j , q j | C max ( 1 | r j | D max ) with a threshold C is considered: Is there a feasible non-preemptive schedule S with C max ( S ) ≤ C ? Note that using binary search on the possible values of C max within an appropriate interval [ L , U ] , the optimization version can be solved with at most log ( U − L ) solutions of the decision problem. In the binary search, semi-preemptive LDT-schedules are treated. Observe that if a preemptive schedule S p has a makespan of more than C, then there exists no non-preemptive schedule S with a makespan of at most C, since
C max ( S ) ≥ C opt ≥ L B = C max ( S p ) > C .
For a threshold C in the decision problem, to each job j, a deadline d ˜ j = C − q j is assigned so that in a feasible semi-preemptive schedule, all jobs are to be completed by their deadlines. The fixed part of job j cannot be preempted whenever this job is to be completed by time d ˜ j .
The fixed and free parts of job j are defined as follows. Two cases are distinguished. In case 1, r j + p j > d ˜ j − p j , and in case 2, r j + p j ≤ d ˜ j − p j .
Case 1:
  • Fixed part j ′ : r j ′ : = d ˜ j − p j ; p j ′ : = 2 p j − d ˜ j + r j ; d ˜ j ′ : = r j + p j .
  • Free part j ″ : r j ″ : = r j ; p j ″ : = d ˜ j − r j − p j ; d ˜ j ″ : = d ˜ j .
Case 2:
  • Fixed part j ′ : r j ′ : = r j + p j − x j ; p j ′ : = x j ; d ˜ j ′ : = d ˜ j − p j + x j .
  • Free part j ″ : r j ″ : = r j ; p j ″ : = p j − x j ; d ˜ j ″ : = d ˜ j .
where the parameter x j ∈ [ 1 , p j − 1 ] . We illustrate the concept of fixed and free parts for cases 1 and 2 in Figure 2 and Figure 3.
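The two-case split can be transcribed directly as follows. This is a sketch of the formulas above with the minus signs restored as we read them; the encoding of each part as a (release, length, deadline) triple and the function name are ours:

```python
def split_job(r, p, d_tilde, x=None):
    """Split a job into its fixed part j' and free part j'', each
    returned as a (release, length, deadline) triple, following the
    two cases above. In case 2, x is the chosen parameter in [1, p-1]."""
    if r + p > d_tilde - p:                        # case 1
        fixed = (d_tilde - p, 2 * p - d_tilde + r, r + p)
        free = (r, d_tilde - r - p, d_tilde)
    else:                                          # case 2
        fixed = (r + p - x, x, d_tilde - p + x)
        free = (r, p - x, d_tilde)
    return fixed, free
```

In both cases, the lengths of the two parts sum to the original processing time p.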
The binary search procedure can be described as follows: Let J be the set of all jobs.
  • LB : = makespan of the LDT-preemptive schedule for set J.
  • UB : = makespan of the non-preemptive LDT-schedule for set J.
  • Carry out binary search in the interval [ L B , U B ] . For each derived C, perform the following:
  • Calculate d ˜ j for all j ∈ J .
  • Create a new instance J ′ = { j ′ , j ″ | j ∈ J } .
  • Create an LDT-preemptive schedule S for the set J ′ .
  • IF the created schedule is feasible, i.e., all jobs are completed by their deadlines, THEN decrease C.
  • ELSE increase C.
  • RETURN C and the corresponding feasible semi-preemptive schedule.
Once the above procedure completes, the job release and delivery times are adjusted as follows. Let S s p be the semi-preemptive schedule returned by the above procedure for the obtained threshold C . Then for each job h:
  • r ¯ ¯ h : = c h ( S s p ) − p h .
  • d ¯ ¯ h : = t h ( S s p ) + p h .
The authors in [63] rely on the following observation (without giving a formal proof): In an optimal non-preemptive schedule, job h cannot start before time r ¯ ¯ h and cannot complete before time d ¯ ¯ h . Then the original instance can be modified by setting r h : = r ¯ ¯ h and d ˜ h : = d ¯ ¯ h .
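The binary-search framework above can be sketched as follows. For simplicity, we use a fully preemptive EDF feasibility test as the decision oracle instead of the semi-preemptive test of [63] (so the returned threshold is simply the preemptive lower bound); jobs are (r, p, q) triples and all names are ours:

```python
import heapq

def edf_feasible(jobs):
    """Preemptive EDF feasibility for jobs (r, p, deadline) on a single
    machine; preemptive EDF is exact for this feasibility question."""
    pending = sorted(jobs, key=lambda j: j[0])
    heap, t, i, n = [], 0, 0, len(pending)
    while i < n or heap:
        if not heap and pending[i][0] > t:
            t = pending[i][0]                      # idle until next release
        while i < n and pending[i][0] <= t:
            r, p, d = pending[i]
            heapq.heappush(heap, (d, p))           # min-heap on deadline
            i += 1
        d, rem = heapq.heappop(heap)
        run = rem if i >= n else min(rem, pending[i][0] - t)
        t += run
        if rem > run:
            heapq.heappush(heap, (d, rem - run))   # preempted by a release
        elif t > d:
            return False                           # completed too late
    return True

def lower_bound_by_binary_search(jobs, lb, ub):
    """Smallest C in [lb, ub] for which the deadlines d_j = C - q_j are
    feasible; each oracle call mirrors one step of the scheme in [63]."""
    while lb < ub:
        C = (lb + ub) // 2
        if edf_feasible([(r, p, C - q) for r, p, q in jobs]):
            ub = C                                 # feasible: try a smaller C
        else:
            lb = C + 1                             # infeasible: C is too small
    return lb
```

The decrease-on-feasible, increase-on-infeasible logic is the essential point: the procedure converges to the smallest threshold that the oracle accepts, which is the desired lower bound.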
A recent implicit enumeration algorithm (IEA) by Vakhania [19] is based on a special kind of parameterized analysis, called the Variable Parameter Analysis (VPA). Applying VPA, one can express the exponential dependence in the time complexity of an N P -hard optimization problem only in terms of variable parameters, i.e., the numbers of objects of some specific types (certain jobs in the case of the scheduling problem). For scheduling problems with delivery times or due dates, the variable parameter ν can be defined as the number of emerging jobs.
In the IEA from [19], different types of jobs are distinguished. The set of emerging jobs is partitioned into two basic types, the type (1.1) and type (1.2) emerging jobs. Intuitively, a type (1.2) emerging job initially belongs to some kernel, whereas a type (1.1) emerging job comes either before or after some kernel(s). The set of the type (1.1) emerging jobs is formed in a specially constructed iterative procedure, and the set of type (1.2) emerging jobs associated with a given kernel is detected in another iterative procedure that decomposes that kernel. The set of non-emerging jobs is also partitioned into two types: the kernel jobs that are not of type (1.2) are of type (2), and all the remaining (non-type (1–2)) jobs are of type (4).
The IEA first generates a partial schedule without type (1) emerging jobs in O ( n 2 log n ) time using an extension of the LDT-heuristic. To complete this partial schedule, the type (1) jobs are incorporated into that schedule. This can be performed in polynomial time if certain assumptions on the dynamic behavior of the emerging jobs hold. In the worst-case scenario, all possible ν ! permutations of emerging jobs are enumerated, resulting in an
O ( n log n ( n − ν 1.1 ) 2 log ( n − ν 1.1 ) ν ! )
running time, where ν ( ν 1.1 ) is the variable parameter equal to the total number of the type (1) (type (1.1), respectively) jobs. The author in [19] reports a preliminary computational study according to which the ratio ν / n asymptotically converged to 0.
Polynomial-time approximation algorithms. As to the polynomial-time approximation, recall that the LDT-heuristic gives a two-approximation solution by Inequality (2). Potts [52] showed that by a repeated application of the LDT-heuristic O ( n ) times, the performance ratio can be improved to 3/2, resulting in an O ( n 2 log n ) time performance. Nowicki and Smutnicki [53] proposed another 3/2-approximation algorithm with time complexity O ( n log n ) . Hall and Shmoys [54] illustrated that the application of the LDT-heuristic to the original problem and a specially defined reversed problem leads to a further improved approximation of 4/3 in O ( n 2 log n ) time. Lazarev et al. [64] considered an interesting “metric reduction” approach. Given an arbitrary instance A of the problem, an instance B that is “closest” to A by a given metric and is known to belong to a polynomially solvable subclass of the problem is built. An optimal solution to instance B provides an absolute-error approximation solution to instance A.
There are polynomial-time approximation schemes (PTASs) for the problem 1 | r j | D max . These schemes provide a ( 1 + 1 / k ) -approximation for an arbitrary constant k, where the exponential-time dependence is only on the parameter k. The first two such PTASs were described by Hall and Shmoys [54]. The two schemes have different time complexities, namely,
O ( 16 k ( n k ) 3 + 4 k )
and
O ( n log n + n ( 4 k ) 8 k 2 + 8 k + 2 ) ,
respectively. Mastrolilli [55] described another approximation scheme for a more general setting with identical processors. It has an aggregated time complexity
O ( n k O ( 1 ) + k O ( k ) ) = O ( n + k O ( k ) ) .
These approximation schemes carry out a complete enumeration of all possible combinations for scheduling specially determined k jobs, yielding a tight exponential-time dependence O ( k k ) . In these schemes, the set of job release times is artificially reduced to a subset with no more than k elements. As was proved in [54], such a reduction maintains the desired approximation ratio. Below, we give more details on the above three PTASs.
In the first approximation scheme presented in [54], the processing times of the jobs are rounded to the nearest multiples of P / ( n k ) , where P is the total processing time of all jobs. By scaling the remaining job parameters accordingly, the original problem instance is transformed into one in which the number of distinct job processing times is bounded by n k . The obtained simplified instance can then be solved by the proposed dynamic programming algorithm. This algorithm works with the sub-intervals obtained by partitioning the scheduling horizon. The algorithm considers the possible distributions of the total processing time of the jobs assigned to each sub-interval. The way the sub-intervals are constructed permits the LDT-heuristic to schedule the jobs from each sub-interval optimally. Through the construction of the sub-intervals, the dynamic programming algorithm needs to deal with no more than P O ( k ) distributions. Due to the accomplished rounding of the job processing times, the total number of possible distributions becomes
( n k ) O ( k ) .
The other two approximation schemes (the second one from [54] and the scheme from [55]) partition the set of jobs into short and long jobs so that there are no more than O ( k ) long jobs. In the second scheme from [54], as in the first one, O ( k ) sub-intervals are derived and the possible ways in which long jobs can be distributed into the derived sub-intervals are considered. The short jobs with an appropriate total length together with the corresponding long ones are scheduled by the LDT-heuristic. All the jobs scheduled in a given sub-interval are released at the beginning of this sub-interval, and hence, the LDT-heuristic schedules them optimally.
In the approximation scheme from [55], there is also a constant number of short jobs and no more than O ( k ) distinct job release times, job processing times and job delivery times. The set of jobs is partitioned into no more than O ( k 2 ) subsets so that each subset contains the jobs with the same release times and delivery times. The jobs of each subset are compiled into longer jobs, with each of these compiled jobs having the same release time and delivery time, so that each subset is left with no more than one short job. After these transformations, the cost of a complete enumeration does not depend exponentially on n. Similarly to the second scheme in [54], a constant number of starting times of the long jobs is considered. The jobs are again scheduled by the fast LDT-heuristic.
A recent approximation scheme from Vakhania [19] is based on the earlier mentioned VPA. Like the earlier approximation schemes, it partitions the set of jobs into short and long jobs. The exponential-time dependence in the time complexity expression of this PTAS is only on the number κ of long emerging jobs. The total number of long jobs is no more than k, where k is an integer such that the PTAS guarantees the generation of a feasible solution, at most ( 1 + 1 / k ) times an optimal one. A worst-case running time of the PTAS can be expressed as
O ( κ ! κ 1.1 n 2 log n ) ,
where κ 1.1 is the number of long type 1.1 jobs. Note that for any fixed k, the parameter κ is a constant smaller than k.
Polynomially solvable special cases. As we have already seen in Section 2, if all r j values are equal, then scheduling the jobs in order of non-decreasing due dates gives an optimal schedule in O ( n log n ) time, see Jackson [20]. Similarly, if all d j values are equal, then scheduling the jobs in order of non-decreasing release times is optimal. If all jobs are of unit length, the problem with both job release times and due dates remains polynomially solvable. As Horn [21] observed, in this case, the ED-heuristic delivers an optimal solution given that the job release times are integers. Indeed, while applying the ED-heuristic, no job can be released within the execution interval of a currently running job. As a result, no job may potentially cause a forced delay of a more urgent job. With real job release times and unit job processing times, the problem can be scaled to an equivalent one in which all three job parameters are integers and the job processing times are equal (but not unit-length ones). Then, while applying the (non-preemptive) ED-heuristic, another more urgent job may be released during the execution of a job, which basically makes the heuristic non-optimal for the general setting 1 | r j | L max . However, if we allow it to preempt the former job and schedule the latter one at its release time, the preemptive problem 1 | r j , p m t n | L max can be solved optimally, as was mentioned in Section 2. The problem with equal (non-unit)-length jobs 1 | p j = p , r j | L max can still be solved in polynomial O ( n 2 log n ) time. Garey et al. [22] used a very sophisticated data structure, a union–find tree with path compression, and achieved an improvement of the time complexity to O ( n log n ) .
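Jackson's rule for the case of equal release times, mentioned above, amounts to a single sort; a minimal sketch (jobs encoded as (p, d) pairs, common release time 0; the function name is ours):

```python
def jackson_edd_lmax(jobs):
    """Jackson's rule for 1||L_max: sequence the jobs in non-decreasing
    due-date (EDD) order; the resulting maximum lateness is optimal."""
    t, lmax = 0, float("-inf")
    for p, d in sorted(jobs, key=lambda job: job[1]):
        t += p                      # completion time C_j of the current job
        lmax = max(lmax, t - d)     # lateness L_j = C_j - d_j
    return lmax
```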
The problem with two allowable job processing times p and 2 p for an arbitrary integer p, denoted as 1 | p j ∈ { p , 2 p } , r j | L max , remains polynomially solvable, see Vakhania [23]. The algorithm from the latter reference is optimal when the processing times of only some emerging jobs (which become known during the execution of the algorithm) are restricted to p or 2 p . The algorithm is obtained as a consequence of two implicit enumeration algorithms for the problem 1 | r j | D max . The first algorithm is exponential. The second algorithm is a restriction of the first one and runs in O ( n 2 log n ) time under certain conditions, which must hold during its execution. Otherwise, another auxiliary procedure has to be applied to guarantee optimality. An O ( n 2 log n log p ) implementation of that procedure for the two allowed job processing times yields an algorithm with the same time complexity for this setting.
Relatively recently, it was shown that an essentially more general setting, in which every job processing time divides every larger job processing time (for example, the job processing times are powers of 2), 1 | d i v , r j | D max , can be solved optimally in O ( n 2 log n log p max ) time, see Vakhania [24] (here p max is the maximum job processing time). Note that this is a maximal polynomially solvable special case of the problem with restricted job processing times (as we mentioned earlier, the problem remains strongly N P -hard if the job processing times are from the set { p , 2 p , 3 p , … } for any integer p). If the maximum job processing time p max is bounded by a polynomial function in n, P ( n ) = O ( n k ) , and the maximum difference between the job release times is bounded by a constant R, then the corresponding parameterized problem 1 | p max ≤ P ( n ) , | r j − r i | < R | L max can still be solved in polynomial time O ( n k + 1 log n log p max ) , see Vakhania and Werner [25]. Note that in this setting, the general problem is parameterized by p max and the maximum difference between two job release times.
For the problem 1 | p j = p , r j | L max , Lazarev et al. [26] presented two polynomial algorithms. The first procedure is based on a trisection search and determines an optimal solution in O ( Q n log n ) time, where 10 Q represents the accuracy of the parameters of the problem. The second approach uses an auxiliary problem, where the maximum completion time is minimized under the constraint that L max is bounded by a particular value. Using this approach, the authors determine a solution for the bi-criteria problem with the two criteria C max and L max and the Pareto set of optimal solutions in O ( n 3 log n ) time.
There are other known conditions that allow a polynomial-time solution of the problem. For example, if the job release times, processing times and due dates are restricted in a specific manner so that each r j lies within a certain time interval defined by p j and d j , then the problem can be solved in O ( n log n ) time, see Hoogeveen [65]. If the job parameters are embedded in another special manner, such that for any pair of jobs j , i with r i > r j and d i < d j ,
d j − r j − p j ≥ d i − r i − p i
holds, and, in addition, for any pair of jobs with
r i + p i ≤ r j + p j ,
d i ≤ d j holds, then the resultant problem 1 | r j , e m b e d d e d | L max is solvable in O ( n log n ) time, see Vakhania [27]. As was recently observed in Vakhania and Werner [66], allowing the compression of job processing times may lead to an efficient solution of the general problem under some reasonable restrictions (in practice, job compression is accomplished at the cost of using some extra resources). There is an upper limit on the total allowable compression cost, and one looks for a schedule that minimizes the maximum job lateness. The authors propose an O ( n 2 log n ) algorithm that determines the amount of additional resources required to solve the problem optimally. If this yields a violation of the restriction on the allowable maximum cost, i.e., the solution is infeasible, it is transformed into a “tight minimal” feasible one in the sense that, by increasing the processing time of any compressed job by some amount of time, the maximum job lateness in the resultant solution will increase by the same amount. As is shown, it suffices to compress only some emerging jobs.
As already mentioned, the problem with only two allowable job release times and due dates 1 | r j ∈ { r 1 , r 2 } , d j ∈ { d 1 , d 2 } | L max is weakly N P -hard. Reynoso and Vakhania [28] considered this setting. They established structural properties of feasible schedules that led to a number of optimality conditions, under which the problem is solved in O ( n log n ) time. For bottleneck instances, where none of these conditions applies, the problem is reduced to the SUBSET SUM problem. The authors presented the results of computational experiments accomplished for 50 million randomly generated problem instances, reporting that almost all these instances were solved optimally by one of the established optimality conditions. Remarkably, the heuristic algorithms that incorporate the above optimality conditions also succeeded in solving the SUBSET SUM problem optimally for almost all 50 million instances. A more recent work considered a natural generalization of the latter problem with any fixed number of release times (see Escalona [67]). For this setting, optimality conditions were derived, and again it was shown how the problem can be reduced to the SUBSET SUM problem. It is a challenging question to what extent the above results can be generalized to any fixed number of release and delivery times, a setting with still wider practical applications.
In Azharonok et al. [29], a general objective function f max defined as the maximum of non-decreasing cost functions f j was considered for the preemptive problem 1 | r j , p r e c , p m t n | f max . It is assumed that for any pair i , j of jobs, the cost functions fulfill either the order relation f i ( t ) ≤ f j ( t ) for any time t or f j ( t ) ≤ f i ( t ) for any time t of the scheduling interval. In the case of tree-like precedence constraints, the derived algorithm has a time complexity of O ( n log n ) , which improves the complexity of O ( n 2 ) of a known algorithm for the general case [68]. In addition, a parallel algorithm is given with a time complexity of O ( log 2 n ) when using O ( n 2 log n ) processors.
Some applications. The problem 1 | r j | L max is important on its own and also because of its possible applications for the solution of other more complex scheduling problems and other optimization problems. For example, Carlier and Pinson [69] used the problem as an auxiliary one to obtain a lower bound for the version of the problem with identical processors. Adams, Balas and Zawack [70] used the problem for the solution of the job shop scheduling problem.
The problem can also be used for the solution of Vehicle Routing Problems (VRPs) with k vehicles (salespersons), each starting and ending at a special point 0 called the depot. Each salesperson has to visit a number of clients and return to the depot. The objective is to minimize the total cost of all k tours (see, for example, [71]). In the setting with time windows, each client has to be visited within a specified time window (interval). In this setting, there may exist no feasible solution. Hence, it is crucial to determine whether there exists a feasible solution for a given k. More generally, one may be interested in calculating the minimum k for which there exists a feasible solution. Let us consider a simplified version of the problem with k = 1 , i.e., the Traveling Salesman Problem (TSP) with time windows, TSP-TW. For a given instance of TSP-TW with n + 1 clients, we create an instance of the problem 1 | p j = 0 , r j | L max with n jobs, associating job j with client j for each j = 1 , … , n and letting the processing time of each job be 0. Note that this is a simplified version of the generic problem 1 | r j | L max . In this version, the release and due times of each job are determined according to the time window of the corresponding client. In the feasibility version of the problem, one looks for a schedule in which no job has a positive lateness. One can immediately see that if there is no feasible solution to the scheduling problem, there is also no feasible solution for the corresponding instance of TSP-TW. If we let u be an upper bound on the total number of salespersons, we can carry out binary search within the interval [ 1 , u ] , solving at each iteration an instance of the corresponding scheduling problem P k | p j = 0 , r j | L max with k parallel identical machines and with trial k ∈ [ 1 , u ] . In this way, we can find the minimum k for which there exists a feasible solution to the problem (if it exists).
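The feasibility version of the auxiliary problem 1 | p j = 0 , r j | L max reduces to a single EDD pass: with zero processing times, a feasible visiting order exists iff processing the windows in non-decreasing due-date order, each at the earliest allowed moment, meets every due date. A sketch of this necessary condition for TSP-TW (travel times are ignored, as in the simplified reduction above; the function name is ours):

```python
def windows_consistent(windows):
    """Feasibility of 1|p_j = 0, r_j|L_max for time windows (r, d):
    visit the zero-length jobs in EDD order at the earliest feasible
    time; if some window is missed, the corresponding TSP-TW instance
    has no feasible solution either."""
    t = 0
    for r, d in sorted(windows, key=lambda w: w[1]):
        t = max(t, r)       # wait until the window opens
        if t > d:
            return False    # the window has already closed
    return True
```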
In the capacitated VRP, each vehicle has some capacity and each client has some demand. In a feasible vehicle tour, the total demand of all clients in that tour cannot exceed the capacity of the vehicle. A feasible solution consists of k feasible tours. With an equal capacity C of all vehicles, one can verify the feasibility by solving the feasibility version of the multiprocessor version of our generic problem without job release and due times with k identical processors, where the objective is to minimize the makespan (the maximum job or machine completion time), P k | | C max . Here we associate jobs with clients, where the processing time of each job is the demand of the corresponding client, and we wish to know whether there exists a feasible solution to the scheduling problem with a makespan of at most C. This is an essentially simpler problem than a VRP. Alternatively, strong lower bounds calculated in polynomial time are known for scheduling problems, which, in particular, can be useful to establish that there exists no feasible solution to an instance of a VRP (whenever this is the case).
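The capacity check via P k | | C max can be illustrated by combining the two standard necessary conditions with the LPT heuristic; this is a sketch in which success of LPT is sufficient for feasibility while its failure is not a proof of infeasibility (the function name is ours):

```python
def fits_capacity(demands, k, C):
    """Heuristic check whether the client demands can be split into k
    tours of total demand at most C (a Pk||Cmax feasibility question)."""
    if max(demands) > C or sum(demands) > k * C:
        return False                        # provably infeasible
    loads = [0] * k
    for p in sorted(demands, reverse=True):  # LPT: largest demand first
        m = loads.index(min(loads))          # least loaded machine (tour)
        if loads[m] + p > C:
            return False                     # LPT failed (inconclusive)
        loads[m] += p
    return True
```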

3.2. Minimizing the Number of Late Jobs

Another common objective function with given due dates is the number of late jobs: a job in a schedule is called late if it is completed after its due date in that schedule, otherwise it is called on time. In the problem of this subsection, we deal with n jobs with release times and due dates which have to be scheduled on the machine to maximize the number of on-time jobs, or equivalently, to minimize the number of late jobs. We consider both preemptive and non-preemptive cases and also cases when the jobs have an additional parameter called weight.

3.2.1. Non-Preemptive Scheduling to Minimize the Number of Late Jobs: 1 | r j ; p j = p | U j

In this subsection, we deal with the non-preemptive version of the problem 1 | r j | U j . This problem was studied in Vakhania [30], where two polynomial-time algorithms were proposed. The first one, with time complexity O ( n 3 log n ) , optimally solves the problem if no job with a certain specially defined property occurs during its execution. The second algorithm is an adaptation of the first one for the special case of the problem in which all jobs have the same length, 1 | r j ; p j = p | U j . The time complexity of this algorithm is also O ( n 3 log n ) (it improved the earlier known fastest algorithm with the time complexity of O ( n 5 ) by Chrobak et al. [72]).

3.2.2. Preemptive Scheduling to Maximize the Number of On-Time Jobs: 1 | r j ; p m t n | U j

In the problem of this subsection, n jobs with release times and due dates have to be scheduled preemptively on a single machine to minimize the number of late jobs, i.e., problem 1 | r j ; p m t n | U j . We call a job late (on time, respectively) if it is completed after (at or before, respectively) its due date. We will say that job j is feasibly scheduled if it starts no earlier than at time r j and overlaps with no other job on the machine. A feasible schedule is one which feasibly schedules all jobs. The objective is to find a feasible schedule with a minimal number of late jobs or, equivalently, with the maximal number of on-time jobs.
This problem is known to be polynomially solvable. The first polynomial dynamic programming algorithm, with a time complexity of O(n^5) and a space complexity of O(n^3), was developed by Lawler [31]. This result was improved by Baptiste [32], who suggested another dynamic programming algorithm with a time complexity of O(n^4) and a space complexity of O(n^2). The latter result was further improved by Vakhania [33], who proposed an algorithm with time and space complexities of O(n^3) and O(n), respectively. Unlike the earlier algorithms, the latter algorithm is not based on dynamic programming. It uses the ED-heuristic for scheduling the jobs. The main task is to construct a schedule in which all jobs are scheduled on time, with the number of such jobs being maximal. Since any late job can be scheduled arbitrarily late without affecting the objective function, jobs not included in this schedule can later be appended at its end in an arbitrary order. During the construction of the schedule, some jobs are permanently discarded, whereas others are temporarily omitted and may be included in a later created schedule.
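The scheduling rule behind such constructions is the preemptive earliest-due-date (ED) rule: at every moment, run the released, unfinished job with the smallest due date, preempting it when a more urgent job arrives. A minimal sketch follows; the event handling and the data layout are our own illustration, not the algorithm of [33].

```python
import heapq

def preemptive_edd(jobs):
    """Preemptive earliest-due-date rule for a single machine.

    jobs: list of (release, processing, due) triples.
    Returns the completion time of each job under the ED rule.
    """
    events = sorted(range(len(jobs)), key=lambda j: jobs[j][0])  # by release
    remaining = [p for _, p, _ in jobs]
    completion = [None] * len(jobs)
    ready = []               # heap of (due, job) for released, unfinished jobs
    t, k = 0, 0
    while k < len(events) or ready:
        # release every job that has arrived by time t
        while k < len(events) and jobs[events[k]][0] <= t:
            j = events[k]; heapq.heappush(ready, (jobs[j][2], j)); k += 1
        if not ready:                          # machine idle until next release
            t = jobs[events[k]][0]
            continue
        due, j = ready[0]                      # most urgent released job
        # run job j until it finishes or the next release may preempt it
        nxt = jobs[events[k]][0] if k < len(events) else float('inf')
        run = min(remaining[j], nxt - t)
        t += run; remaining[j] -= run
        if remaining[j] == 0:
            heapq.heappop(ready); completion[j] = t
    return completion
```

For an instance that admits a schedule with all jobs on time, this rule produces one; deciding which jobs to omit when no such schedule exists is the hard part addressed in [33].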

3.3. Preemptive Scheduling to Minimize the Weighted Number of Late Jobs with Equal Length: 1 | r_j; p_j = p; pmtn | ∑ w_j U_j

In this section, we consider the weighted version of the problem of the previous section, in which preemptions are allowed and all jobs have an equal length. Moreover, w_j is the non-negative integer weight of job j. We maximize the weighted throughput, which is the total weight of the jobs completed on time; equivalently, we minimize the weighted number of late jobs. The problem can be abbreviated as 1 | r_j; p_j = p; pmtn | ∑ w_j U_j.
The first polynomial dynamic programming algorithm for the problem, of complexity O(n^10), was suggested by Baptiste [34]. Baptiste et al. [35] improved this time complexity and proposed another dynamic programming algorithm, which runs in O(n^4) time.
Next, we briefly discuss some directions for further research. First, we note that for the multiprocessor case, the weighted version is NP-complete [73]. At the same time, the non-weighted version still remains open. In particular, it is not known whether the problem P | r_j; p_j = p; pmtn | ∑U_j can be solved in polynomial time. (Even for two processors, one cannot restrict the search to earliest-deadline schedules alone. As an example, consider an instance with three jobs with feasible intervals (0, 3), (0, 4) and (0, 5) and the processing time p = 3. There exists a feasible schedule that completes all three jobs, whereas the earliest-deadline schedule completes only jobs 1 and 2.) For the multiprocessor case, it is also interesting to study the preemptive version in which jobs are not allowed to migrate between processors.
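The two-processor example can be verified mechanically. The sketch below (a unit-step simulation of the earliest-deadline rule together with a simple feasibility check; the schedule encoding as (job, machine, start, end) pieces is our own) confirms that the earliest-deadline schedule completes only two of the three jobs, while an explicit alternative schedule completes all three.

```python
def edf_on_time(p, deadlines, machines=2):
    """Unit-step earliest-deadline-first with migration: in each step,
    run the `machines` unfinished jobs with the earliest deadlines.
    Returns the number of jobs finishing by their deadlines."""
    rem = [p] * len(deadlines)
    finish = [None] * len(deadlines)
    t = 0
    while any(rem):
        active = sorted((d, j) for j, d in enumerate(deadlines) if rem[j])
        for _, j in active[:machines]:
            rem[j] -= 1
            if rem[j] == 0:
                finish[j] = t + 1
        t += 1
    return sum(f <= d for f, d in zip(finish, deadlines))

def feasible(pieces, p, deadlines):
    """Check a preemptive schedule given as (job, machine, start, end)
    pieces: full processing per job, no overlap on any machine,
    completion by the deadline.  (The per-job overlap check across
    machines is omitted for brevity.)"""
    done = [0] * len(deadlines)
    by_machine = {}
    for j, m, s, e in pieces:
        done[j] += e - s
        by_machine.setdefault(m, []).append((s, e))
    for ivs in by_machine.values():
        ivs.sort()
        if any(a[1] > b[0] for a, b in zip(ivs, ivs[1:])):
            return False
    comp = [max(e for j2, _, _, e in pieces if j2 == j)
            for j in range(len(deadlines))]
    return done == [p] * len(deadlines) and all(
        c <= d for c, d in zip(comp, deadlines))
```

Here `edf_on_time(3, [3, 4, 5])` returns 2, whereas the piece list `[(0, 1, 0, 3), (1, 1, 3, 4), (1, 2, 0, 2), (2, 2, 2, 5)]` passes the feasibility check, i.e., all three jobs are completed on time.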

3.4. Scheduling to Minimize the Weighted Number of Late Jobs with Deadlines and Release/Due-Date Intervals

In [74], Gordon et al. dealt with the problem where, in addition to release and due dates, deadlines are also given. This means that certain jobs have to be completed on time. The authors considered the two cases of similarly ordered release and due dates (i.e., r_1 ≤ r_2 ≤ … ≤ r_n and d_1 ≤ d_2 ≤ … ≤ d_n) as well as nested release and due-date intervals (i.e., any two such intervals either do not overlap, sharing at most one point, or one contains the other). For these problems, a reduced problem is constructed in which the deadlines are eliminated, and it is shown that an optimal solution of the reduced problem can be used to find an optimal solution of the initial problem.
In a later paper, Gordon et al. [75] considered the preemptive version of the above problem with nested release and due-date intervals. They derived necessary and sufficient conditions for the existence of a schedule in which all jobs are completed by their due dates. For the case of oppositely ordered processing times and job weights, an O(n log n) algorithm was presented.

3.5. Scheduling with Tardiness-Based Objectives

The single machine problem 1 || ∑T_j is known to be NP-hard in the ordinary sense (see [76]). In [77], Szwarc and Mukhopadhyay developed a decomposition rule which is used in a very powerful branch-and-bound algorithm. It turned out that the algorithm can solve problems with 100 jobs even without using any lower bounds, and the authors were able to solve problems with up to n = 150 jobs. This was possible by eliminating dynamic programming and replacing it with the newly developed decomposition rule, which significantly outperformed the earlier rules by Emmons [78].
The single machine problem of minimizing total tardiness has received a lot of attention. An early review on exact and approximation algorithms was given by Koulamas [79] in 1994. Sixteen years later, Koulamas [80] discussed the most important developments for the total tardiness problem up to 2010. The latter survey discusses some theoretical developments, exact algorithms, fully polynomial-time approximation schemes, approximation algorithms, some special cases and also a generalization of the problem as well as the total tardiness problem with assignable due dates. Recall that for the total weighted tardiness scheduling problem, an early review of algorithms was given by Abdul-Razaq et al. [11] in 1990.
Our main focus in this subsection is on new results from the last 15 years which have not been included in reviews so far. While the problem 1 || ∑T_j is known to be NP-hard, Gafarov et al. [36] considered the weighted problem with release times and equal processing times, i.e., problem 1 | r_j, p_j = p | ∑ w_j T_j. After deriving some properties, they gave polynomial algorithms for two special cases of the problem. If the difference between the maximum due date and the minimum due date is not greater than the processing time p, the problem can be solved by a modification of the algorithm by Baptiste [81] in O(n^10) time. The special case with weights and due dates ordered according to w_1 ≥ w_2 ≥ … ≥ w_n and d_1 ≤ d_2 ≤ … ≤ d_n can be solved in O(n^9) time by a similar modification of [81].
In [82], Lazarev and Werner presented a new general graphical algorithm which often improves the complexity or at least the running time of dynamic programming algorithms, and it can be used for efficient approximation algorithms. Compared to standard dynamic programming algorithms, the number of states can often substantially be reduced.
In a standard dynamic programming algorithm for permutation problems, the functions F_l(t) and the corresponding partial sequences have to be stored for any stage l = 1, 2, …, n and any time t = 0, 1, …, UB. Here, UB is an upper bound on the scheduling interval, e.g., the sum of all processing times (or the maximal due date). The value F_l(t) typically represents the best function value when considering the first l jobs and allowing at most t time units for the processing of the scheduled (e.g., early) jobs. This results in a pseudo-polynomial algorithm of complexity O(n ∑ p_j).
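A concrete instance of this scheme is the classical Lawler–Moore recurrence for 1 || ∑ w_j U_j: processing the jobs in EDD order, F_l(t) is the minimum total weight of late jobs among the first l jobs when the on-time jobs occupy exactly t time units, and F_l(t) = min(F_{l-1}(t) + w_l, F_{l-1}(t − p_l)), the second option being allowed only for p_l ≤ t ≤ d_l. A sketch of the tabulated version (the implementation details are our own):

```python
def min_weight_late(jobs):
    """Pseudo-polynomial DP for 1 || sum w_j U_j (no release times).

    jobs: list of (p_j, d_j, w_j).  After stage l, F[t] is the minimum
    total weight of late jobs among the first l jobs (in EDD order)
    when the on-time jobs occupy exactly t time units.
    Runs in O(n * sum p_j) time.
    """
    jobs = sorted(jobs, key=lambda x: x[1])     # EDD order by due date
    T = sum(p for p, _, _ in jobs)              # horizon UB = sum p_j
    INF = float('inf')
    F = [INF] * (T + 1)
    F[0] = 0
    for p, d, w in jobs:
        G = [INF] * (T + 1)
        for t in range(T + 1):
            G[t] = F[t] + w                     # job is late
            if p <= t <= d:                     # job on time, completing at t
                G[t] = min(G[t], F[t - p])
        F = G
    return min(F)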
The graphical algorithm combines different states where the best partial solution does not change and the corresponding objective function F_l(t) can be described by the same linear function. As a result, at each stage l, there are certain intervals separated by so-called breakpoints t_1^l < t_2^l < … < t_{h_l}^l, at which the structure of the linear function belonging to the best partial solution changes. In addition, for the graphical algorithm, the times t do not need to be integers.
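The compression idea can be illustrated on a tabulated piecewise-linear function: instead of storing F_l(t) for every integer t, one keeps a single breakpoint per maximal linear piece. The following is a simplified illustration of ours; the graphical algorithm maintains the breakpoints directly during the recursion rather than building them from a full table.

```python
def compress(values):
    """values[t] = F(t) for t = 0, ..., UB, piecewise linear in t.
    Returns one (t_start, F(t_start), slope) triple per maximal
    linear piece -- the state-space reduction behind the graphical
    algorithm."""
    pieces = []
    for t in range(1, len(values)):
        slope = values[t] - values[t - 1]
        if not pieces or pieces[-1][2] != slope:
            pieces.append((t - 1, values[t - 1], slope))
    return pieces or [(0, values[0], 0)]

def evaluate(pieces, t):
    """Recover F(t) from the breakpoint representation."""
    s, v, slope = max((p for p in pieces if p[0] <= t),
                      key=lambda p: p[0])
    return v + slope * (t - s)
```

A table such as F = (6, 4, 2, 2, 2, 3, 4) with UB = 6 collapses to three pieces with breakpoints at t = 0, 2 and 4, from which every value can be recovered exactly.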
In [83], Gafarov et al. described the application of this graphical algorithm to the approximate solution of various single machine problems. In particular, the graphical algorithm is applied to five total tardiness problems, where a pseudo-polynomial algorithm exists, and the graphical algorithm yields fully polynomial-time approximation schemes, yielding the best known running time.
In [15], Gafarov et al. considered the maximization of total tardiness, and by applying the graphical algorithm, they transformed a pseudo-polynomial algorithm into a polynomial one. More precisely, the complexity of O(n ∑ p_j) can be reduced to O(n^2), which settled the open complexity status of the problem 1 | (nd) | max ∑T_j, where nd means that no inserted idle times occur. The authors presented a detailed numerical example and also gave a practical application of such maximization problems, where wind turbines have to be mounted in several regions of a country.
While for more general settings such a graphical algorithm often improves the running time in comparison with existing algorithms, the complexity remains pseudo-polynomial. For instance, in [84], Lazarev et al. presented graphical algorithms with a complexity of O(n ∑ p_j) for a special case of a generalized tardiness problem corresponding to the minimization of late work as well as for the minimization of the weighted number of late jobs.
In [85], Lazarev and Werner considered various special cases of the (unweighted) total tardiness problem 1 || ∑T_j. After proving that the special case of oppositely ordered processing times and due dates, i.e., p_1 ≤ p_2 ≤ … ≤ p_n and d_1 ≥ d_2 ≥ … ≥ d_n, is already NP-hard in the ordinary sense, they gave several pseudo-polynomial and polynomial algorithms for special cases of this problem. In particular, O(n^2) algorithms were given for the case d_j − d_{j−1} > p_j, j = 2, 3, …, n, as well as for the case when the difference between the maximal and the minimal due date is not greater than 1. In both cases, the processing times do not need to be integers.
In further research, the graphical algorithm can be applied to more general problems, where a pseudo-polynomial algorithm exists and Boolean variables are used for making yes/no decisions.

3.6. Some Online Settings

First, we consider preemptive scheduling models to maximize the overall profit. Jobs with release and processing times, deadlines and weights are to be scheduled on a single machine; the weight of a job reflects its profit. Any job can be preempted during its execution, and the objective is to maximize the overall profit while respecting the job deadlines. Notice that in a feasible schedule, all included jobs must be completed by their deadlines; in particular, there may exist no feasible schedule in which all jobs are included. Hence, we wish to create a feasible schedule in which the overall profit is maximized. Two possible models, differing in the definition of the profit gained by the execution of task j, have been studied. In the standard model, a completed task j gives the profit w_j p_j, whereas a non-completed task gives no profit. In a more flexible metered model, a task j executed for t ≤ p_j time units gives the profit w_j t (this task may not be completed). We just mention here the work by Chrobak et al. [86], who considered both the offline and online metered profit models. For the offline setting, they used a reduction to bipartite matchings and maximal flows to develop a polynomial-time algorithm with a better running time than the solution of a direct linear programming model would yield. For the online metered case, they presented an algorithm with competitive ratio e/(e − 1) ≈ 1.582 and proved a lower bound of √5 − 1 ≈ 1.236 on the competitive ratio of any algorithm for this problem.
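The two profit definitions can be contrasted compactly. In the sketch below (the encoding is our own: a job is a pair (p_j, w_j) and executed[j] is the total time job j was run in some feasible schedule):

```python
def standard_profit(jobs, executed):
    """Standard model: a completed task j earns w_j * p_j; an
    uncompleted task earns nothing.
    jobs: list of (p_j, w_j); executed[j]: time job j was run."""
    return sum(w * p for (p, w), t in zip(jobs, executed) if t >= p)

def metered_profit(jobs, executed):
    """Metered model: every executed time unit of task j earns w_j,
    whether or not the task is completed."""
    return sum(w * min(t, p) for (p, w), t in zip(jobs, executed))
```

For jobs (3, 2) and (2, 5) with executed times 3 and 1, the standard model yields profit 6 (only the first job is completed), while the metered model yields 11, crediting the partial execution of the heavier-weighted job.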
In an online setting with job release times, the jobs arrive over time. The delivery time of a job becomes known at its arrival (release) time. The objective is to minimize the maximum full job completion time, the makespan. In Section 3.1, we considered the corresponding offline problem, which is strongly NP-hard. We also mentioned that the preemptive version can be solved in polynomial time by the preemptive LDT-heuristic, whereas for the non-preemptive case, the LDT-heuristic has a performance guarantee of two. This is the best possible performance ratio unless we allow the machine to be idle when there is an unprocessed job available. Hoogeveen and Vestjens [56] developed an online algorithm, incorporating a waiting strategy, with the best possible worst-case ratio (√5 + 1)/2 ≈ 1.618. Van den Akker et al. [87] considered an extension of the standard model by allowing job restarts: if an urgent job arrives while the machine is busy, then the currently running job can be interrupted in favor of the urgent job. In this model, unlike traditional preemptive models, the already processed part of the interrupted job is wasted, and this job has to be started all over again. The algorithm proposed in [87] has the best possible worst-case ratio of 1.5 (which is better than the above-mentioned bound for the case without restarts).
As is easy to see, in the offline setting, a mere modification of the LDT-heuristic that removes and postpones each currently running job in favor of a just released more urgent job can deliver a worst-possible non-preemptive schedule. In this sense, restarts do not make much sense in the offline case.
In the online setting, in cases where a job cannot be split, allowing job restarts can be a reasonable compromise. However, note that an almost completed job will be “lost” if a more urgent job arrives just before its completion on the machine. This scenario can be softened if an already completed part of an abandoned job can give some benefit and this job can be restarted from the moment when it was interrupted. Note that a direct compromise would lead us to the standard preemptive case. But other alternative reasonable compromises are possible. For example, the preemptive part may yield some penalty and the total penalty can be minimized. Furthermore, as in the above-mentioned metered model, a partially completed job may still give some benefit. Hence, one may well consider models where an already processed part of an interrupted job counts in one way or another. It is interesting to consider whether there exist better worst-case performance bounds for such models.

3.7. Scheduling with Non-Renewable Resources

Unlike a traditional renewable resource, which is constantly available and not consumed by the jobs, a non-renewable resource is consumed by the jobs, so its availability is neither permanent nor constant. A machine or a processor is a renewable resource, whereas energy, money and fuel, for example, are non-renewable resources that can be created in different ways. Nowadays, different kinds of sustainable energy sources, including solar and wind energy, are widely available.
Gafarov et al. [88] considered single machine problems with a non-renewable resource (abbreviated NR). This means that at any time t_i, i = 0, 1, …, UB, with t_0 = 0 and UB as an upper bound on the scheduling interval, one gets an amount G(t_i) ≥ 0 of the resource, while any job j has a consumption of g_j at the starting time s_j of the job. Hence, the equality
∑_{j=1}^{n} g_j = ∑_{i=0}^{UB} G(t_i)
holds.
holds. One looks for an optimal permutation ( j 1 , j 2 , , j n ) for a particular objective function with the starting times s j 1 , s j 2 , , s j n of the jobs such that the resource constraints are satisfied, i.e., the inequality
l = 1 i g j l { G ( t v ) t v s j i   for   all   v s . s j i }
holds for any i = 1 , 2 , , n . Such problems play a role, e.g., for government-financed organizations when a project can only be started after receiving the required funds.
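Checking whether a fixed schedule satisfies these prefix resource constraints is straightforward. A small sketch of ours (the supply is represented as a mapping from times t_i to amounts G(t_i), an assumption of this illustration):

```python
def resource_feasible(starts, g, supply):
    """Check the prefix resource constraints for a fixed processing order.

    starts: start times s_{j_1} <= ... <= s_{j_n} in processing order;
    g: consumptions g_{j_1}, ..., g_{j_n} in the same order;
    supply: dict {t_i: G(t_i)} of resource arrivals.
    Returns True iff every prefix consumption is covered by the
    resource that has arrived by the corresponding start time.
    """
    consumed = 0
    for s, gj in zip(starts, g):
        consumed += gj
        arrived = sum(G for t, G in supply.items() if t <= s)
        if consumed > arrived:
            return False
    return True
```

For instance, with arrivals {0: 2, 3: 2} and consumptions (1, 2, 1), the start times (0, 3, 4) are feasible, whereas starting the second job already at time 1 would exceed the resource available by then.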
The authors focused on complexity results and proved that the problems 1 | NR | f with f ∈ {C_max, ∑C_j, ∑U_j} are strongly NP-hard. When minimizing the total tardiness ∑T_j, the case of equal due dates (d_j = d) is already strongly NP-hard. The following special cases of minimizing total tardiness were proven to be weakly NP-hard:
(1) Equal processing times (p_j = p);
(2) Equal due dates (d_j = d) and equal consumptions (g_j = g).
On the other hand, the special case of equal processing times (p_j = p) and consumptions and due dates ordered according to g_1 ≤ g_2 ≤ … ≤ g_n and d_1 ≤ d_2 ≤ … ≤ d_n is polynomially solvable.
This paper also considered the budget scheduling problem with the objective to minimize the makespan. Instead of a resource consumption g_j for each job, two values g_j^− and g_j^+ are given, where g_j^− has the same meaning as g_j before. The value g_j^+ describes the additional earnings at the completion time of job j. The authors showed that the problem 1 | NR, g_j^−, g_j^+ | C_max belongs to the class APX, i.e., it admits a constant-factor approximation algorithm.
Recently, Ramirez et al. [37] considered a different non-renewable setting on a single machine. In that setting, every job has a release time, processing time and due date, where the execution of job j, besides the machine time, requires an amount e_j of the non-renewable resource (it consumes energy e_j). Here, a schedule is a mapping that assigns to each job a starting time on the machine and the required amount of the non-renewable resource. The objective is to minimize the maximum job lateness. The problem is abbreviated as 1 | r_j, e_j | L_max. The authors proposed an O(n log n)-time algorithm for the case where all jobs have a unit processing time. Earlier, an O(n^2 log n)-time algorithm was suggested by Grigoriev et al. [38] for a similar scenario but without job release times. Note that job release times essentially complicate the problem, even without energy requirements, since the setting without them is polynomially solvable. The algorithm in [37] iteratively takes the next unscheduled job with the minimum due date and schedules it at the earliest time moment at which enough energy for that job becomes available. This is an offline algorithm, as the jobs are not scheduled according to their arrival (release) times.
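A simplified sketch of such a greedy for unit jobs follows. It is our own illustration, not the O(n log n) implementation of [37]: it tries integer start slots one by one and re-checks all prefix energy constraints after each tentative placement, assuming the total supply eventually suffices for all jobs.

```python
def feasible_insert(alloc, t, e, supply):
    """Would starting a job of energy e at time t keep every prefix
    energy-feasible?  alloc: list of (start, energy) already fixed;
    supply: dict {time: energy amount arriving at that time}."""
    points = sorted({s for s, _ in alloc} | {t})
    for q in points:
        used = sum(en for s, en in alloc if s <= q) + (e if t <= q else 0)
        arrived = sum(a for tt, a in supply.items() if tt <= q)
        if used > arrived:
            return False
    return True

def greedy_min_lateness(jobs, supply):
    """Sketch of the EDD-style greedy from the text for unit jobs
    (r_j, d_j, e_j): take the pending jobs in EDD order and start each
    at the earliest free integer slot >= r_j that keeps the schedule
    energy-feasible.  Assumes total supply covers all jobs.
    Returns the maximum lateness."""
    alloc, busy, Lmax = [], set(), float('-inf')
    for r, d, e in sorted(jobs, key=lambda x: x[1]):      # EDD order
        t = r
        while t in busy or not feasible_insert(alloc, t, e, supply):
            t += 1
        alloc.append((t, e)); busy.add(t)
        Lmax = max(Lmax, t + 1 - d)                       # unit processing
    return Lmax
```

With two unit jobs (r, d, e) = (0, 1, 2) and (0, 2, 1) and energy arrivals {0: 2, 1: 1}, both jobs finish on time; delaying the second arrival to time 2 forces a maximum lateness of 1, illustrating how the energy profile, not only the machine, can delay jobs.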
For future work, due to the NP-hardness of the majority of these problems, it would be interesting to derive elimination rules and structural properties of optimal solutions which can be used in enumerative algorithms.

4. Concluding Remarks

In this paper, we reviewed some theoretical results on classical single machine scheduling problems. Since there exists a huge number of heuristic/meta-heuristic and enumerative algorithms, our focus was on polynomial algorithms as well as complexity and approximability issues. The summarized polynomial special cases are also of interest when the conditions are not exactly satisfied and the resulting solution can be taken as a good heuristic one. We hope that the reviewed results will stimulate further investigation in related research fields.
For single machine scheduling problems, efficient solution methods can often be found more easily than for the corresponding multiprocessor or shop scheduling problems. Moreover, such methods may give a better insight into the latter, more complex problems and may well serve as a basis for the development of solution methods for them. At the same time, optimal solutions to single machine scheduling problems can directly be used to solve the corresponding multiprocessor or shop scheduling problems. For example, for such NP-hard problems, strong lower bounds can be obtained from the solutions of the corresponding single machine problems. Non-strict lower bounds, which are easier and faster to obtain, can be used in any search-tree-based approximation algorithm, such as beam search. Similarly, using strict lower bounds, we obtain exact branch-and-bound algorithms. Once implemented and tested, the above algorithms can be used as an algorithmic engine for the construction of decision support systems. Finally, we note that a number of algorithms for basic single machine problems can relatively easily be adapted, using additional graph-completion mechanisms, to solve the corresponding extended versions with precedence relations, transportation and setup times and other real-life job scheduling problems.

Author Contributions

Conceptualization, N.V. and F.W.; methodology, N.V. and F.W.; validation, N.V. and F.W.; formal analysis, N.V., F.W. and K.J.R.-F.; investigation, N.V. and F.W.; writing—original draft preparation, N.V. and F.W.; writing—review and editing, N.V. and F.W.; visualization, K.J.R.-F.; funding acquisition, F.W. All authors have read and agreed to the published version of the manuscript.

Funding

The research of the first author was partially supported by the DAAD grant 57507438 for research stays for university academics and scientists.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors are very grateful to Jan Karel Lenstra and Sergey Sevastyanov for their insightful and careful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lenstra, J.K.; Rinnooy Kan, A.H.G.; Brucker, P. Complexity of machine scheduling problems. Ann. Oper. Res. 1975, 1, 343–362. [Google Scholar]
  2. Chen, B.; Potts, C.N.; Woeginger, G.J. A Review of Machine Scheduling: Complexity, Algorithms and Approximability. In Handbook of Combinatorial Optimization; Du, D.-Z., Pardalos, P.M., Eds.; Kluwer: Boston, MA, USA, 1998; Volume 3, pp. 1493–1641. [Google Scholar]
  3. Allahverdi, A.; Gupta, J.N.D.; Aldowaisan, T. A Review of Scheduling Research Involving Setup Considerations. Omega Int. J. Manag. Sci. 1999, 27, 219–239. [Google Scholar] [CrossRef]
  4. Allahverdi, A.; Ng, C.T.; Cheng, T.C.E.; Kovalyov, M.Y. A Survey of Scheduling Problems with Setup Times or Costs. Eur. J. Oper. Res. 2008, 187, 985–1032. [Google Scholar] [CrossRef]
  5. Allahverdi, A. The Third Comprehensive Survey on Scheduling Problems with Setup Times/Costs. Eur. J. Oper. Res. 2015, 246, 345–378. [Google Scholar] [CrossRef]
  6. Fowler, J.W.; Mönch, L. A Survey of Scheduling with Parallel Batch (p-Batch) Processing. Eur. J. Oper. Res. 2022, 298, 1–24. [Google Scholar] [CrossRef]
  7. Kravchenko, S.; Werner, F. Parallel Machine Problems with Equal Processing Times: A Survey. J. Sched. 2011, 14, 435–444. [Google Scholar] [CrossRef]
  8. Biskup, D. Single-Machine Scheduling with Learning Considerations. Eur. J. Oper. Res. 1999, 115, 173–178. [Google Scholar]
  9. Wu, H.; Zhang, H. Single-Machine Scheduling with Periodic Maintenance and Learning Effect. Sci. Rep. 2023, 13, 9309. [Google Scholar] [CrossRef] [PubMed]
  10. Koulamas, C.; Kyparisis, G.J. A Classification of Dynamic Programming Formulations for Offline Deterministic Single-Machine Scheduling Problems. Eur. J. Oper. Res. 2023, 305, 999–1017. [Google Scholar] [CrossRef]
  11. Abdul-Razaq, T.S.; Potts, C.; van Wassenhove, L. A Survey of Algorithms for the Single Machine Total Weighted Tardiness Scheduling Problem. Discret. Appl. Math. 1990, 26, 235–253. [Google Scholar] [CrossRef]
  12. Adamu, M.O.; Adewumi, A. A Survey of Single Machine Scheduling to Minimize Weighted Number of Tardy Jobs. J. Ind. Manag. Optim. 2014, 10, 219–241. [Google Scholar] [CrossRef]
  13. Sterna, M. A Survey of Scheduling Problems with Late Work Criteria. Omega Int. J. Manag. Sci. 2011, 39, 120–129. [Google Scholar] [CrossRef]
  14. Sterna, M. Late and Early Work Scheduling: A Survey. Omega Int. J. Manag. Sci. 2021, 104, 102453. [Google Scholar] [CrossRef]
  15. Gafarov, E.; Lazarev, A.; Werner, F. Single Machine Total Tardiness Maximization Problems: Complexity and Algorithms. Ann. Oper. Res. 2013, 207, 121–136. [Google Scholar] [CrossRef]
  16. McMahon, G.; Florian, M. On Scheduling with Ready Times and due-dates to Minimize Maximum Lateness. Oper. Res. 1975, 23, 475–482. [Google Scholar] [CrossRef]
  17. Carlier, J. The One-Machine Sequencing Problem. Eur. J. Oper. Res. 1982, 11, 42–47. [Google Scholar] [CrossRef]
  18. Pan, Y.; Shi, L. Branch-and-bound Algorithms for Solving Hard Instances of the One-Machine Sequencing Problem. Eur. J. Oper. Res. 2006, 168, 1030–1069. [Google Scholar] [CrossRef]
  19. Vakhania, N. Variable Parameter Analysis for Scheduling One Machine. arXiv 2022, arXiv:2211.02107. [Google Scholar] [CrossRef]
  20. Jackson, J.R. Scheduling a Production Line to Minimize the Maximum Tardiness. Ph.D. Thesis, University of California, Los Angeles, CA, USA, 1955. [Google Scholar]
  21. Horn, W.A. Some Simple Scheduling Algorithms. Nav. Res. Logist. Q. 1974, 21, 177–185. [Google Scholar] [CrossRef]
  22. Garey, M.R.; Johnson, D.S.; Simons, B.B.; Tarjan, R.E. Scheduling Unit-Time Tasks with Arbitrary Release Times and Deadlines. SIAM J. Comput. 1981, 10, 256–269. [Google Scholar] [CrossRef]
  23. Vakhania, N. Single-Machine Scheduling with Release Times and Tails. Ann. Oper. Res. 2004, 129, 253–271. [Google Scholar] [CrossRef]
  24. Vakhania, N. Dynamic Restructuring Framework for Scheduling with Release Times and Due-Dates. Mathematics 2019, 7, 1104. [Google Scholar] [CrossRef]
  25. Vakhania, N.; Werner, F. Minimizing Maximum Lateness of Jobs with Naturally Bounded Job Data on a Single Machine in Polynomial Time. Theor. Comput. Sci. 2013, 501, 72–81. [Google Scholar] [CrossRef]
  26. Lazarev, A.A.; Arkhipov, D.I.; Werner, F. Scheduling Jobs with Equal Processing Times on a Single Machine: Minimizing Maximum Lateness and Makespan. Optim. Lett. 2017, 11, 165–177. [Google Scholar] [CrossRef]
  27. Vakhania, N. Fast Solution of Single-Machine Scheduling Problem with Embedded Jobs. Theor. Comput. Sci. 2019, 782, 91–106. [Google Scholar] [CrossRef]
  28. Reynoso, A.; Vakhania, N. Theoretical and Practical Issues in Single-Machine Scheduling with two Job Release and Delivery Times. J. Sched. 2021, 24, 615–647. [Google Scholar] [CrossRef]
  29. Azhanarok, E.; Gordon, V.; Werner, F. Single Machine Preemptive Scheduling with Special Cost Functions. Optimization 1995, 34, 271–286. [Google Scholar] [CrossRef]
  30. Vakhania, N. A Study of Single-Machine Scheduling Problem to Maximize Throughput. J. Sched. 2013, 16, 395–403. [Google Scholar] [CrossRef]
  31. Lawler, E.L. A Dynamic Programming Algorithm for Preemptive Scheduling of a Single Machine to Minimize the Number of Late Jobs. Ann. Oper. Res. 1990, 26, 125–133. [Google Scholar] [CrossRef]
  32. Baptiste, P. An O(n4) Algorithm for Preemptive Scheduling of a Single Machine to Minimize the Number of Late Jobs. Oper. Res. Lett. 1999, 24, 175–180. [Google Scholar] [CrossRef]
  33. Vakhania, N. Scheduling Jobs with Release Times Preemptively on a Single Machine to Minimize the Number of Late Jobs. Oper. Res. Lett. 2009, 37, 405–410. [Google Scholar] [CrossRef]
  34. Baptiste, P. Polynomial Time Algorithms for Minimizing the Weighted Number of Late Jobs on a Single Machine with Equal Processing Times. J. Sched. 1999, 2, 245–252. [Google Scholar] [CrossRef]
  35. Baptiste, P.; Chrobak, M.; Dürr, C.; Jawor, W.; Vakhania, N. Preemptive Scheduling of Equal-Length Jobs to Maximize Weighted Throughput. Oper. Res. Lett. 2004, 32, 258–264. [Google Scholar] [CrossRef]
  36. Gafarov, E.; Lazarev, A.; Werner, F. Minimizing Total Weighted Tardiness for Scheduling Equal-Length Jobs on a Single Machine. Autom. Remote Control 2020, 81, 853–868. [Google Scholar] [CrossRef]
  37. Ramirez, K.; Vakhania, N.; Hernandez, A. Fast Algorithm for Single Machine Scheduling of Unit-Length Jobs with Non-renewable Resource Requirements to Minimize Maximum Job Lateness. In Proceedings of the 12th Annual Conference on Computational Science & Computational Intelligence, Las Vegas, NV, USA, 3–5 December 2025. [Google Scholar]
  38. Grigoriev, A.; Holthuijsen, M.; van de Klundert, J. Basic Scheduling Problems with Raw Material Constraints. Nav. Res. Logist. 2005, 52, 527–535. [Google Scholar] [CrossRef]
  39. Chaubey, P.K.; Sundar, S. A Steady-State Genetic Algorithm for the Single Machine Scheduling Problem with Periodic Maintenance Availability. SN Comput. Sci. 2023, 4, 651. [Google Scholar] [CrossRef]
  40. Mexicano, A.; Carmona-Frausto, J.C.; Montes-Dorantes, P.N.; Cervantes, S.; Cervantes, J.-A.; Rodriguez, R. Simulated Annealing and Tabu Search for Solving the Single-Machine Problem. In Advances on P2P, Parallel, Grid, Cloud and Internet Computing; Springer: Cham, Switzerland, 2023; Volume 571, pp. 86–95. [Google Scholar]
  41. Mellouli, A.; Wafi, C.; Mellouli, R. An Efficient Ant Colony Optimization-Based Heuristic for the Single Machine Scheduling with Sequence-Dependent Setup Times. In Advances in Mechanical Engineering, Materials and Mechanics II (ICAMEM2024); Springer: Berlin/Heidelberg, Germany, 2025; pp. 179–188. [Google Scholar]
  42. Rosa, B.F.; Souza, M.J.F.; de Souza, S.R. Algorithms based on VNS for Solving the Single Machine Scheduling Problem with Earliness and Tardiness Penalties. Electron. Notes Discret. Math. 2018, 66, 47–54. [Google Scholar] [CrossRef]
  43. Benmansour, R.; Todosijevic, R.; Hanafi, S. Variable Neighborhood Search for the Single Machine Scheduling Problem to Minimize the Total Early Work. Optim. Lett. 2023, 17, 2169–2184. [Google Scholar] [CrossRef]
  44. Israni, M.; Sundar, S. An Iterative Local Search for the Single Machine Scheduling Problem with Periodic Machine Availability. SN Comput. Sci. 2025, 6, 108. [Google Scholar] [CrossRef]
  45. Zhao, Y.; Wang, G. A Dynamic Differential Evolution Algorithm for the Dynamic Single-Machine Scheduling Problem with Sequence-Dependent Setup Times. J. Oper. Res. Soc. 2020, 71, 225–236. [Google Scholar] [CrossRef]
  46. Khraibet, T.J.; Kalaf, B.A.; Mansoor, W. A New Metaheuristic Algorithm for Solving Multi-Objective Single-Machine Scheduling Problems. J. Intell. Syst. 2025, 34, 20240373. [Google Scholar] [CrossRef]
  47. Moharam, R.; Morsy, E.; Ali, A.F.; Ahmed, M.A.; Mostafa, M.-S.M. A Discrete Grey Wolf Optimization Algorithm for Minimizing Penalties on a Single Machine Scheduling Problem. In Proceedings of the 8th International Conference on Advanced Machine Learning and Technologies and Applications (AMLTA2022), Cairo, Egypt, 5–7 May 2022; Volume 113, pp. 678–687. [Google Scholar]
  48. Ballestin, F.; Leus, R. Meta-heuristics for Stable Scheduling on a Single Machine. Comput. Oper. Res. 2008, 35, 2165–2192. [Google Scholar] [CrossRef]
  49. Corsini, R.R.; Fichera, V.; Longo, L.; Oriti, G. A Self-Adaptive Metaheuristic to Minimize the Total Weighted Tardiness for a Single-Machine Scheduling Problem with Flexible and Variable Maintenance. J. Ind. Prod. Eng. 2025, 42, 422–439. [Google Scholar] [CrossRef]
  50. Antonov, N.; Sucha, P.; Janota, M.; Hula, J. Minimizing the Weighted Number of Tardy Jobs: Data-Driven Heuristic for Single-Machine Scheduling. Comput. Oper. Res. 2026, 185, 107281. [Google Scholar] [CrossRef]
  51. Schrage, L. (University of Chicago, Chicago, IL, USA). Obtaining Optimal Solutions to Resource Constrained Network Scheduling Problems. Unpublished work. 1971. [Google Scholar]
  52. Potts, C.N. Analysis of a Heuristic for One Machine Sequencing with Release Dates and Delivery Times. Oper. Res. 1980, 28, 1436–1441. [Google Scholar] [CrossRef]
  53. Nowicki, E.; Smutnicki, C. An Approximation Algorithm for Single-Machine Scheduling with Release Times and Delivery Times. Discret. Appl. Math. 1994, 48, 69–79. [Google Scholar] [CrossRef]
  54. Hall, L.A.; Shmoys, D.B. Jackson’s Rule for Single-Machine Scheduling: Making a Good Heuristic Better. Math. Oper. Res. 1992, 17, 22–35. [Google Scholar] [CrossRef]
  55. Mastrolilli, M. Efficient Approximation Schemes for Scheduling Problems with Release Dates and Delivery Times. J. Sched. 2003, 6, 521–531. [Google Scholar] [CrossRef]
  56. Hoogeveen, J.A.; Vestjens, A.P.A. A Best Possible Deterministic On-line Algorithm for Minimizing Maximum Delivery Time on a Single Machine. SIAM J. Discret. Math. 2000, 13, 56–63. [Google Scholar] [CrossRef]
  57. Lenstra, J.K. (Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands). Private communication, 2024.
  58. Graham, R.L.; Lawler, E.L.; Lenstra, J.K.; Rinnooy Kan, A.H.G. Optimization and Approximation in Deterministic Sequencing and Scheduling: A Survey. Ann. Discret. Math. 1979, 5, 287–326. [Google Scholar]
  59. Bratley, P.; Florian, M.; Robillard, P. On Sequencing with Earliest Start Times and Due Dates with Application to Computing Bounds for the (n/m/G/Fmax) Problem. Nav. Res. Logist. Q. 1973, 20, 57–67. [Google Scholar] [CrossRef]
  60. Chinos, E.; Vakhania, N. Adjusting Scheduling Model with Release and Due Dates in Production Planning. Cogent Eng. 2017, 4, 1321175. [Google Scholar] [CrossRef]
  61. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; Freeman: San Francisco, CA, USA, 1979. [Google Scholar]
  62. Della Croce, F.; T’kindt, V. Improving the Preemptive Bound for the Single Machine Dynamic Maximum Lateness Problem. Oper. Res. Lett. 2010, 38, 589–591. [Google Scholar] [CrossRef]
  63. Gharbi, A.; Labidi, M. Jackson’s Semi-Preemptive Scheduling on a Single Machine. Comput. Oper. Res. 2010, 37, 2082–2088. [Google Scholar] [CrossRef]
  64. Lazarev, A.A.; Sadykov, R.R.; Sevastyanov, S.V. A scheme of an approximate solution of the 1|rj|Lmax problem. Diskretn. Anal. Issled. Oper. 2006, 13, 57–76. (In Russian) [Google Scholar]
  65. Hoogeveen, J.A. Minimizing Maximum Promptness and Maximum Lateness on a Single Machine. Math. Oper. Res. 1995, 21, 100–114. [Google Scholar] [CrossRef]
  66. Vakhania, N.; Werner, F. Scheduling a Single Machine with Compressible Jobs to Minimize Maximum Lateness. 4OR-Q J. Oper. Res. 2025; published online. [Google Scholar] [CrossRef]
  67. Escalona, D.; Vakhania, N. A Study of Single-Machine Scheduling Problem with a Fixed Number of Release Times and Two Delivery Times. Working Manuscript. Bachelor’s Thesis, Universidad Autónoma del Estado de Morelos, Cuernavaca, Mexico, 2024; pp. 1–49. Available online: https://riaa.uaem.mx/xmlui/bitstream/handle/20.500.12055/4772/EACDNN07.pdf?sequence=1&isAllowed=y (accessed on 29 December 2025).
  68. Baker, K.R.; Lawler, E.L.; Lenstra, J.K.; Rinnooy Kan, A.H.G. Preemptive Scheduling of a Single Machine to Minimize Maximum Cost Subject to Release Dates and Precedence Constraints. Oper. Res. 1983, 31, 381–386. [Google Scholar] [CrossRef]
  69. Carlier, J.; Pinson, E. Jackson’s Pseudo Preemptive Schedule for Pm/ri,qi/Cmax problem. Ann. Oper. Res. 1998, 83, 41–58. [Google Scholar] [CrossRef]
  70. Adams, J.; Balas, E.; Zawack, D. The Shifting Bottleneck Procedure for Job Shop Scheduling. Manag. Sci. 1988, 34, 391–401. [Google Scholar] [CrossRef]
  71. Pacheco-Valencia, V.H.; Vakhania, N.; Hernández-Mira, F.Á.; Hernández-Aguilar, J.A. A Multi-Phase Method for Euclidean Traveling Salesman Problems. Axioms 2022, 11, 439. [Google Scholar] [CrossRef]
  72. Chrobak, M.; Dürr, C.; Jawor, W.; Kowalik, L.; Kurowski, M. A Note on Scheduling Equal-Length Jobs to Maximize Throughput. J. Sched. 2006, 9, 71–73. [Google Scholar] [CrossRef]
  73. Brucker, P.; Kravchenko, S. Preemption can Make Parallel Machine Scheduling Problems Hard. Osnabrücker Schriften Math. Fachbereich Mathematik/Informatik, Reihe P 1999, 211. [Google Scholar]
  74. Gordon, V.; Potapneva, E.; Werner, F. Single Machine Scheduling with Deadlines, Release and Due Dates. Optimization 1997, 42, 219–244. [Google Scholar] [CrossRef]
  75. Gordon, V.; Werner, F.; Yanushkevich, O.A. Single Machine Preemptive Scheduling to Minimize the Weighted Number of Late Jobs with Deadlines and Nested Release/Due-Date Intervals. RAIRO Oper. Res. 2001, 35, 71–83. [Google Scholar] [CrossRef]
  76. Du, J.; Leung, J.Y.T. Minimizing Total Tardiness on One Processor is NP-Hard. Math. Oper. Res. 1990, 15, 483–495. [Google Scholar] [CrossRef]
  77. Szwarc, W.; Mukhopadhyay, S. Decomposition of the Single Machine Total Tardiness Problem. Oper. Res. Lett. 1996, 19, 245–251. [Google Scholar] [CrossRef]
  78. Emmons, H. One-Machine Sequencing to Minimize Certain Functions of Job Tardiness. Oper. Res. 1969, 17, 701–715. [Google Scholar] [CrossRef]
  79. Koulamas, C. The Total Tardiness Problem: Review and Extensions. Oper. Res. 1994, 42, 1025–1041. [Google Scholar] [CrossRef]
  80. Koulamas, C. The Single-Machine Total Tardiness Scheduling Problem: Review and Extensions. Eur. J. Oper. Res. 2010, 202, 1–7. [Google Scholar] [CrossRef]
  81. Baptiste, P. Scheduling Equal-Length Jobs on Identical Parallel Machines. Discret. Appl. Math. 2000, 103, 21–32. [Google Scholar] [CrossRef]
  82. Lazarev, A.; Werner, F. A Graphical Realization of the Dynamic Programming Method for Solving NP-Hard Combinatorial Problems. Comput. Math. Appl. 2009, 58, 619–631. [Google Scholar] [CrossRef]
  83. Gafarov, E.; Dolgui, A.; Werner, F. A New Graphical Approach for Solving Single Machine Problems Approximately. Int. J. Prod. Res. 2014, 52, 597–614. [Google Scholar] [CrossRef]
  84. Gafarov, E.; Lazarev, A.; Werner, F. A Note on a Single Machine Scheduling with Generalized Total Tardiness Objective Function. Inf. Process. Lett. 2012, 112, 72–76. [Google Scholar] [CrossRef]
  85. Lazarev, A.; Werner, F. Algorithms for Special Cases of the Single Machine Total Tardiness Problem and an Application to the Even-Odd Partition Problem. Math. Comput. Model. 2009, 49, 2061–2072. [Google Scholar] [CrossRef]
  86. Chrobak, M.; Epstein, L.; Noga, J.; Sgall, J.; van Stee, R.; Tichý, T.; Vakhania, N. Preemptive Scheduling in Overloaded Systems. J. Comput. Syst. Sci. 2003, 67, 183–197. [Google Scholar] [CrossRef]
  87. van den Akker, M.; Hoogeveen, J.; Vakhania, N. Restarts can Help in On-line Minimization of the Maximum Delivery Time on a Single Machine. J. Sched. 2000, 3, 333–341. [Google Scholar] [CrossRef]
  88. Gafarov, E.; Lazarev, A.; Werner, F. Single Machine Scheduling Problems with Financial Resource Constraints: Some Complexity Results and Properties. Math. Soc. Sci. 2011, 62, 7–13. [Google Scholar] [CrossRef]
Figure 1. LDT-schedule for the example instance.
Figure 2. Illustration of the fixed and free parts: case 1.
Figure 3. Illustration of the fixed and free parts: case 2.
Table 1. Overview of exact algorithms.
Problem; Time Complexity; Reference

Section 3.1
1 | r_j | L_max; exponential in n; McMahon and Florian [16]
1 | r_j | D_max; exponential in n; Carlier [17]
1 | r_j | D_max; exponential in n; Pan and Shi [18]
1 | r_j | D_max; O(ν! n log n) (exponential in ν); Vakhania [19]
1 | r_j = r | L_max; O(n log n); Jackson [20]
1 | d_j = d | L_max; O(n log n); Jackson [20]
1 | r_j, p_j = 1 | L_max; O(n log n); Horn [21]
1 | r_j, pmtn | L_max; O(n log n); Horn [21]
1 | r_j, p_j = p | L_max; O(n log n); Garey et al. [22]
1 | p_j ∈ {p, 2p}, r_j | D_max; O(n^2 log n log p); Vakhania [23]
1 | r_j, div | D_max; O(n^2 log n log p_max); Vakhania [24]
1 | p_max ≤ P(n), |r_j − r_i| < R | L_max; O(n^(k+1) log n log p_max); Vakhania and Werner [25]
1 | r_j, p_j = p | C_max, L_max; O(n^3 log n); Lazarev et al. [26]
1 | r_j, embedded | L_max; O(n log n); Vakhania [27]
1 | r_j ∈ {r_1, r_2}, d_j ∈ {d_1, d_2} | L_max; O(n log n) + SUBSET SUM; Reynoso and Vakhania [28]
1 | r_j, prec, pmtn | f_max; O(n log n); Azharonok et al. [29]

Section 3.2.1
1 | r_j | ∑U_j; O(n^3 log n); Vakhania [30]
1 | r_j; p_j = p | ∑U_j; O(n^3 log n); Vakhania [30]

Section 3.2.2
1 | r_j; pmtn | ∑U_j; O(n^5); Lawler [31]
1 | r_j; pmtn | ∑U_j; O(n^4); Baptiste [32]
1 | r_j; pmtn | ∑U_j; O(n^3); Vakhania [33]

Section 3.3
1 | r_j; p_j; pmtn | ∑w_j U_j; O(n^10); Baptiste [34]
1 | r_j; p_j; pmtn | ∑w_j U_j; O(n^4); Baptiste et al. [35]

Section 3.5
1 | r_j, p_j = p | ∑w_j T_j (ordered weights); O(n^9); Gafarov et al. [36]
1 | (nd) | ∑T_j; O(n^2); Gafarov et al. [15]

Section 3.7
1 | r_j, p_j = 1, e_j | L_max; O(n log n); Ramirez et al. [37]
1 | r_m = 1, p_j = 1 | L_max; O(n^2 log n); Grigoriev et al. [38]
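Several of the O(n log n) entries in Table 1 rest on Jackson's earliest-due-date (EDD) rule: when all jobs are released together, sequencing them in non-decreasing due-date order minimizes the maximum lateness L_max. As a minimal illustrative sketch (the function name `edd_lmax` and the sample data are ours, not from the survey):

```python
def edd_lmax(jobs):
    """Jackson's EDD rule for 1 || L_max: sort jobs by non-decreasing
    due date and return the maximum lateness of the resulting sequence.
    jobs: list of (processing_time, due_date) pairs."""
    t, lmax = 0, float("-inf")
    for p, d in sorted(jobs, key=lambda job: job[1]):
        t += p                    # completion time of this job
        lmax = max(lmax, t - d)   # lateness C_j - d_j
    return lmax

# Hypothetical example: three jobs with equal release dates.
print(edd_lmax([(3, 5), (2, 4), (4, 10)]))  # → 0
```

The sort dominates the running time, giving the O(n log n) bound stated in the table.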
Table 3. Data of the example instance for problem 1 | r j | D max .
r_1 = 0, p_1 = 6, q_1 = 17
r_2 = 2, p_2 = 7, q_2 = 21
r_3 = 6, p_3 = 10, q_3 = 16
r_4 = 21, p_4 = 5, q_4 = 33
r_5 = 19, p_5 = 7, q_5 = 28
r_6 = 30, p_6 = 7, q_6 = 23
r_7 = 42, p_7 = 9, q_7 = 1
r_8 = 45, p_8 = 7, q_8 = 2
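Figure 1 shows the LDT-schedule for this instance. An LDT (largest delivery time, i.e., extended Jackson's) schedule is built greedily: whenever the machine becomes idle, start the already released job with the largest delivery time q_j. A minimal sketch for the Table 3 data, assuming jobs are given as (r_j, p_j, q_j) triples (the helper name `ldt_schedule` is ours):

```python
import heapq

def ldt_schedule(jobs):
    """LDT rule for 1 | r_j | D_max: at every idle moment, schedule the
    released job with the largest delivery time q.
    jobs: list of (r, p, q); returns (job sequence, D_max)."""
    by_release = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    ready = []                       # max-heap on q via negated keys
    t, i, seq, dmax = 0, 0, [], 0
    while i < len(by_release) or ready:
        if not ready:                # machine idle: jump to next release
            t = max(t, jobs[by_release[i]][0])
        while i < len(by_release) and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(ready, (-jobs[j][2], j))
            i += 1
        _, j = heapq.heappop(ready)  # released job with largest q
        t += jobs[j][1]
        seq.append(j + 1)            # 1-indexed job numbers as in Table 3
        dmax = max(dmax, t + jobs[j][2])
    return seq, dmax

# Instance of Table 3:
jobs = [(0, 6, 17), (2, 7, 21), (6, 10, 16), (21, 5, 33),
        (19, 7, 28), (30, 7, 23), (42, 9, 1), (45, 7, 2)]
print(ldt_schedule(jobs))
```

With the heap, the sketch runs in O(n log n); on this instance it yields the sequence 1, 2, …, 8 with full completion time D_max = 65.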
Share and Cite

MDPI and ACS Style

Vakhania, N.; Werner, F.; Ramírez-Fuentes, K.J. Single Machine Scheduling Problems: Standard Settings and Properties, Polynomially Solvable Cases, Complexity and Approximability. Algorithms 2026, 19, 38. https://doi.org/10.3390/a19010038
