Article

A Hybrid Exact–Local Search Approach for One-Machine Scheduling with Time-Dependent Capacity

1
Department of Electrical and Computer Engineering, University of Patras, 26500 Patras, Greece
2
Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
3
Department of Electrical and Computer Engineering, National Technical University of Athens, 15772 Athens, Greece
*
Author to whom correspondence should be addressed.
Algorithms 2022, 15(12), 450; https://doi.org/10.3390/a15120450
Submission received: 25 October 2022 / Revised: 22 November 2022 / Accepted: 25 November 2022 / Published: 29 November 2022
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)

Abstract

Machine scheduling is a hard combinatorial problem with many real-life manifestations. Because of the schedules they follow, installed machines often operate sub-optimally. In this work, we examine the problem of a single machine with time-dependent capacity that performs jobs of deterministic durations, where the due time of each job is known in advance. The objective is to minimize the aggregated tardiness over all jobs. The problem was motivated by the need to schedule the charging times of electric vehicles effectively. We formulate an integer programming model that clearly describes the problem and a constraint programming model capable of effectively solving it. Due to the usage of interval variables, global constraints, a powerful constraint programming solver, and a heuristic we have identified, which we call the “due times rule”, the constraint programming model can reach excellent solutions. Furthermore, we employ a hybrid approach that exploits three local search improvement procedures in a schema where the constraint programming part of the solver plays a central role. These improvement procedures exhaustively enumerate portions of the search space by exchanging pairs of consecutive jobs with a single job of the same duration, moving cost-incurring jobs to earlier times within a consecutive sequence of jobs, or exploiting periods where capacity is not fully utilized to rearrange jobs. In turn, subproblems are given to the exact constraint programming solver, which is allowed freedom of movement only in certain parts of the schedule, either in vertical ribbons of the time axis or in groups of consecutive sequences of jobs. Experiments on publicly available data show that our approach is highly competitive and achieves new best results in many problem instances.

1. Introduction

Scheduling problems are interesting due to their practical usefulness and hardness. Such problems emerge in various domains, including manufacturing, computing, project management, and many others. Since scheduling problems are typically NP-hard, several approaches compete to attain the often unreached optimal solutions. Metaheuristics, heuristics, constraint programming, and mathematical programming often yield excellent results. The latter two are especially sensitive to problem sizes since their exact nature implies that all solutions must be checked or intelligently pruned.
Scheduling is a thoroughly studied subject that is considered a discipline of its own [1]. Scheduling problems are classified using the commonly accepted (α|β|γ) notation [2], where α refers to the machine environment, β to the constraints, and γ to the objective function. A valuable asset for appreciating the variety of scheduling problems under the (α|β|γ) notation can be found in [3].
In this work, we study a variation of the one-machine scheduling with time-dependent capacity problem that Mencia et al. introduced in [4,5,6]. The problem emerged in the context of scheduling charge times for a fleet of electric vehicles and is an abstraction of the real problem. It is classified as (1, Cap(t) || ΣT_i), meaning that it involves a single machine whose capacity fluctuates through time, with no other constraints, and the objective of minimizing the accumulated tardiness over all jobs. The problem piqued our interest due to its precise definition, the publicly available datasets, and the excellent results that were already published and which we used for comparisons.
We propose a novel way to approach the problem based on a model with an embedded rule (the due times rule) that helps constraint programming solvers reach good solutions fast. We also propose three improvement procedures that are local search routines. We combine the exact and local search parts in our approach to the problem, and we manage to achieve results equal to or better than the best-known results for 91 and 48 cases, respectively, out of 190 public problem instances.

2. Problem Description

A detailed description of the problem exists in [6], so we only give a brief description in this section. The problem involves n jobs and one machine whose capacity varies over time. Each job i has a duration P_i and a due date D_i. All jobs are available from the start time (t = 0), and each consumes one unit of the machine’s capacity for the period in which it is eventually scheduled. Once a job starts, it cannot be preempted and must continue execution until completion. The capacity of the machine must not be exceeded at any time. Finally, the objective to be minimized is the total tardiness of all jobs, which is computed based on their due dates. If job i completes execution before its due date, it does not affect the cost; otherwise, it imposes a cost equal to C_i − D_i, where C_i is the completion time that job i assumes in the schedule. The mathematical formulation of the problem is presented in Section 7.
The problem (1, Cap(t) || ΣT_i) is NP-hard, since the problems (1 || ΣT_i) and (P || ΣT_i) (P denotes a known number of identical machines), which are known to be NP-hard [7], can be reduced to it.
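To make the objective concrete, the following minimal sketch computes the total tardiness of a schedule given job start times. The `Job` class and function name are illustrative, not taken from the paper's code.

```python
from dataclasses import dataclass

@dataclass
class Job:
    duration: int
    due: int

def total_tardiness(jobs, starts):
    """Sum of max(0, C_i - D_i) over all jobs, where C_i = start_i + duration_i."""
    return sum(max(0, s + job.duration - job.due)
               for job, s in zip(jobs, starts))
```

For example, a job of duration 3 starting at 0 with due date 4 incurs no cost, while a job of duration 2 starting at 2 with due date 3 contributes a tardiness of 1.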

Terminology

In line with the definitions of terms in [6], we use S_i, p_i, d_i, and C_i for the start time, duration, due time, and completion time, respectively, of a job i in a given schedule. Then, T_i is the tardiness of job i, which is max{0, C_i − d_i}.
A concrete example involving 12 jobs and a capacity line that reaches a maximum of four units is presented below. This example is the one used as Example 1 in [5]. Table 1 summarizes information related to it alongside the values associated with an optimal schedule for this problem instance, achieving an optimal cost of 20. The schedule corresponding to the table’s third line is presented graphically in Figure 1.

3. Dataset

A dataset consisting of a relatively large number of artificially generated problem instances is publicly available in [8]. The procedure for generating these instances is described in [6], and special care has been taken so that the problems’ structures resemble the structure manifested during the process of electric vehicle charging [9]. In total, 190 problem instances exist, as seen in Table 2. The naming of each problem instance is i<n>_<MC>_<k>, where n is the number of jobs, MC is the maximum capacity, and k is the sequence number of each problem instance for this n, MC pair.
Note that the capacity in all problem instances is a unimodal step function that grows until reaching a peak, then decreases, and finally stabilizes at a positive value.

4. Related Work

Hard combinatorial optimization problems, such as the one-machine scheduling with time-dependent capacity problem, are approached using numerous solving methods [10,11]. The two basic categories of such approaches are the exact ones and the heuristic–metaheuristic ones. In the first category, one can identify mathematical programming (i.e., linear programming, integer programming, and others) [12], constraint programming [13], approaches based on SAT (satisfiability) [14] or SMT (satisfiability modulo theory) solvers [15], and, in general, methods that intelligently examine the complete search space while pruning parts of it during their quest for proven optimal solutions [16]. In the second category, the approaches are numerous, including local search methods [17,18], genetic algorithms [19], genetic programming [20], memetic algorithms [21], differential evolution [22], ant colony optimization [23], particle swarm optimization [24], bees algorithms [25], hyper-heuristics [20], and others.

Heuristically Constructed Schedules

In [6], the authors present the schedule builder algorithm, where jobs are ordered in an arbitrary sequence, and each job is scheduled to start at the earliest possible time. After positioning each job, the capacity of the machine is updated in accordance with the partial schedule. When all jobs are scheduled, the algorithm finishes and returns a feasible schedule. This search space is guaranteed to contain an optimal solution to any problem instance [5]. So, each possible solution can be represented as a sequence of jobs given to the schedule builder. Certain metaheuristic algorithms, such as genetic algorithms, may benefit from the idea of representing each possible schedule as a sequence of jobs and searching through the space of all possible permutations.
Here, we present in Algorithm 1 a modification of the schedule builder algorithm that introduces lanes, which are levels formed by the capacity of the problem. So, for example, a problem with a maximum capacity of four has four lanes; the first is always available during the time horizon, and the three others, based on the capacity, have periods of availability and unavailability. Lanes help plot schedules, disambiguate solutions with identical costs, and quickly identify jobs that form sequences of consecutive jobs. Algorithm 1 uses the function find_lowest_available_lane, which finds the lowest available lane that can accommodate a job starting at period t.
Algorithm 1 Schedule builder using lanes
  • Input: A problem instance P
  • Output: A feasible schedule S
  • demand ← [0,...]
  • left ← 0       ▹ Leftmost time point where capacity is not yet fully utilized
  • for all jobs job do
  •     t ← left
  •     while True do
  •         flag ← True
  •         for all pairs c, d in capacity[t:t+job.duration], demand[t:t+job.duration] do
  •            if d ≥ c then       ▹ No spare capacity unit at this time point
  •                flag ← False
  •                break
  •            end if
  •         end for
  •         if flag then
  •            lane ← find_lowest_available_lane(job, t)
  •            S[job.id] ← lane, t
  •            for all t’ in [t:t+job.duration] do
  •                demand[t’] ← demand[t’] + 1
  •                if demand[t’] = capacity[t’] then
  •                    left ← t’ + 1
  •                end if
  •            end for
  •            break
  •         else
  •            t ← t + 1       ▹ Retry at the next time point
  •         end if
  •     end while
  • end for
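A runnable sketch of Algorithm 1 follows. The function name, the encoding of jobs as a list of durations, and the representation of lanes as an `occupied` matrix are illustrative assumptions, not the authors' implementation; lane k is treated as usable at time u only while capacity[u] > k.

```python
def build_schedule(durations, capacity):
    """Greedy schedule builder with lanes (a sketch of Algorithm 1).
    Jobs are taken in the given order; each starts at the earliest time
    where some lane is free for its whole duration. Assumes the capacity
    horizon is long enough to fit all jobs."""
    T, max_cap = len(capacity), max(capacity)
    demand = [0] * T
    occupied = [[False] * T for _ in range(max_cap)]  # occupied[lane][time]
    schedule = {}
    left = 0  # leftmost time point where capacity is not yet fully utilized
    for jid, dur in enumerate(durations):
        t = left
        while True:
            window = range(t, t + dur)
            # lanes free under the capacity line for the whole window
            free = [l for l in range(max_cap)
                    if all(l < capacity[u] and not occupied[l][u]
                           for u in window)]
            if free:
                lane = free[0]  # find_lowest_available_lane(job, t)
                schedule[jid] = (lane, t)
                for u in window:
                    occupied[lane][u] = True
                    demand[u] += 1
                    if demand[u] == capacity[u]:
                        left = max(left, u + 1)
                break
            t += 1
    return schedule
```

With capacity [2, 2, 1, 1] and job durations [2, 2, 1], the first two jobs occupy lanes 0 and 1 from time 0, and the third job starts at time 2 on lane 0.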

5. C-Paths

A fundamental concept that was introduced by Mencia et al. in [4] is the concept of a C-Path. A C-Path is a sequence of consecutive jobs (i.e., in a C-Path, the finish time of each previous job coincides with the start time of the following job) in a schedule. The importance of C-Paths stems from the fact that jobs in each C-Path can easily swap places and keep the schedule feasible. We can consider a graph view of a schedule, where each job is a node and directed edges connect nodes that correspond to consecutive jobs. Then, each path from a source node of the graph (i.e., a node with no incoming edges) to a sink node (i.e., a node with no outgoing edges) is a C-Path. This is demonstrated in Figure 2 for a sample schedule of cost 35 for the toy problem instance shown in Table 1 and its corresponding graph in Figure 3. The list of C-Paths identified in this graph includes the following six: (3,12,4), (3,10,1,6), (3,10,1,2), (7,9,8,5), (7,11,2), and (7,11,6).
The number of C-Paths might be very large, especially for schedules of big-size problems. This is demonstrated in Table 3, which shows the number of C-Paths for specific schedules of selected problem instances. Note that the number of C-Paths might change dramatically for different schedules of the same problem instance and that larger problem instances might have fewer C-Paths than smaller problem instances for some schedules.

Fast Computation of C-Paths

Since complete enumeration of all C-Paths is out of the question for problems of large sizes, we opted for a faster method of generating a single C-Path each time it is needed. This method starts by picking a random job, followed by two processes that find the right and the left part of a C-Path, having the selected job as a pivot element. The right-side part of the C-path is formed by choosing the next job that starts at the finish time of the current job. If more than one job exists, one of them is randomly selected and becomes the new current job. This process continues until no more subsequent jobs are found. The left side of the C-Path is formed by setting the initially selected job as the current job once again and finding previous jobs that end at the start time of the current job. Similarly, if more than one exists, one is chosen at random and becomes the new current job. The process ends when no more suitable prior jobs can be found.
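The sampling method described above might be sketched as follows; the schedule representation (a dict mapping job id to a (start, finish) pair) and the function name are assumptions for illustration.

```python
import random

def random_c_path(schedule, rng=random):
    """Sample one C-Path from a schedule (a sketch of the method above).
    `schedule` maps job id -> (start, finish). Two indexes give O(1) access
    to jobs starting or finishing at a given time point."""
    starts_at, ends_at = {}, {}
    for jid, (s, f) in schedule.items():
        starts_at.setdefault(s, []).append(jid)
        ends_at.setdefault(f, []).append(jid)
    pivot = rng.choice(list(schedule))
    path = [pivot]
    # extend to the right: a successor starts where the current job finishes
    cur = pivot
    while schedule[cur][1] in starts_at:
        cur = rng.choice(starts_at[schedule[cur][1]])
        path.append(cur)
    # extend to the left: a predecessor finishes where the current job starts
    cur = pivot
    while schedule[cur][0] in ends_at:
        cur = rng.choice(ends_at[schedule[cur][0]])
        path.insert(0, cur)
    return path
```

On a schedule forming a single chain of consecutive jobs, any pivot yields the full chain.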

6. Due Times Rule

This work contributes to identifying a rule that involves the due times of jobs with equal durations. The rule states that “Jobs with equal duration should be scheduled in the order of their due times”. In other words, if two jobs have equal durations, they should swap places if the due time of the job that is scheduled later is sooner than the due time of the job that is scheduled sooner. This rule can be used to strengthen mathematical formulations or as a heuristic for reaching better solutions.
Proof. 
Suppose that two jobs, i and j, with the same duration, are scheduled such that job i comes first and job j follows. Additionally, suppose that job j has an earlier due time than job i, i.e., d_j < d_i. Let t and t′ be the finish times of jobs i and j, respectively; since both jobs have the same duration, t < t′ holds. When both jobs have non-negative tardiness values, i.e., t − d_i ≥ 0 and t′ − d_j ≥ 0, swapping the jobs again results in non-negative tardiness values for both jobs. This holds because the tardiness of job j after the swap will be t − d_j > t − d_i ≥ 0. Moreover, t′ > t ≥ d_i, so the tardiness of job i after the swap will be t′ − d_i ≥ 0. So, the total tardiness before the swap is (t − d_i) + (t′ − d_j), which is equal to the total tardiness after the swap, which is (t − d_j) + (t′ − d_i). This situation is depicted in Figure 4a. The top figure shows a possible configuration of two equal-duration jobs that both incur tardiness. It can be easily seen that the total length of the gray bars that represent the tardiness remains the same after swapping jobs i and j.
The benefit of positioning equal-duration jobs according to their due times occurs when job i completes execution before its due time, and job j incurs some positive value of tardiness. Since job i has no tardiness, the total tardiness of the initial configuration is t′ − d_j. After the swap, the total tardiness will be (t − d_j) + (t′ − d_i). In order to prove that the total tardiness is no greater after the swap, the following inequality must hold: (t − d_j) + (t′ − d_i) ≤ t′ − d_j. This inequality leads to the following true proposition: (t − d_j) + (t′ − d_i) ≤ t′ − d_j ⟺ t − d_i ≤ 0 ⟺ t ≤ d_i. The last inequality holds since it is the assumption made for this case, i.e., job i has no tardiness. A visual representation of such a situation is depicted in Figure 4b. The upper part of the figure shows the situation where job i is scheduled first, while the lower part shows the situation after swapping the jobs. Again, the gray bars represent the tardiness of the jobs, and it can be seen that the total tardiness decreases when the jobs swap places.    □
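The rule can also be checked mechanically. The script below exhaustively verifies, over a grid of small integer values, that when d_j < d_i, placing the earlier-due job in the earlier finish slot never increases the total tardiness; the helper name is illustrative.

```python
from itertools import product

def tardiness_pair(dues, finishes):
    """Total tardiness of two equal-duration jobs occupying the fixed
    finish slots t < t_prime, given the due times in scheduled order."""
    t, t_prime = finishes
    d_first, d_second = dues
    return max(0, t - d_first) + max(0, t_prime - d_second)

# Exhaustive check on small integers: with d_j < d_i, scheduling the
# earlier-due job first is never worse.
for t, t_prime, d_i, d_j in product(range(1, 8), repeat=4):
    if t < t_prime and d_j < d_i:
        ordered = tardiness_pair((d_j, d_i), (t, t_prime))   # rule respected
        swapped = tardiness_pair((d_i, d_j), (t, t_prime))   # rule violated
        assert ordered <= swapped
```

This is a brute-force confirmation of the argument, not a substitute for the proof.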

7. Formulation and Implementation

A formulation of the problem, used to construct initial solutions, is presented below. The formulation is then slightly modified and used for solving subproblems involving subsets of jobs in an effort to attain better schedules overall.
Let J be the set of jobs.
Let P_j be the duration of each job j ∈ J.
Let D_j be the due date of each job j ∈ J.
Let T be the number of time points. Note that the value of T is not given explicitly by the problem, but such a value can be computed by aggregating the durations of all jobs.
Let Cap(t) be the capacity of the machine at each time point t ∈ 0..T−1.
We define integer decision variables s_j ∈ 0..T−1−P_j that denote the start time of each job j ∈ J.
Likewise, we define integer decision variables f_j ∈ P_j..T−1 that denote the finish time of each job j ∈ J.
We also define integer decision variables z_j ≥ 0, which denote the tardiness of each job j ∈ J.
Finally, we define binary decision variables x_{j,t} and y_{j,t}. Each x_{j,t} variable assumes the value 1 if job j starts its execution at time point t; otherwise, it assumes the value 0. Likewise, each y_{j,t} variable marks the time point at which job j finishes.
min Σ_{j∈J} z_j    (1)
f_j = s_j + P_j    ∀ j ∈ J    (2)
z_j ≥ f_j − D_j    ∀ j ∈ J    (3)
s_j ≥ t · x_{j,t}    ∀ j ∈ J, t ∈ 0..T−1    (4)
s_j ≤ t + (M − t) · (1 − x_{j,t})    ∀ j ∈ J, t ∈ 0..T−1    (5)
Σ_{t ∈ 0..T−1} x_{j,t} = 1    ∀ j ∈ J    (6)
y_{j,t+P_j} = x_{j,t}    ∀ j ∈ J, t ∈ 0..T−1−P_j    (7)
Σ_{j∈J} Σ_{t′ ∈ 0..t} x_{j,t′} − Σ_{j∈J} Σ_{t′ ∈ 0..t} y_{j,t′} ≤ Cap(t)    ∀ t ∈ 0..T−1    (8)
A brief explanation of the above model follows.
The aim of the objective function in Equation (1) is to minimize the total tardiness of all jobs.
Equation (2) assigns the proper finish time value to each job given its start time and duration.
Equation (3) assigns tardiness values to the jobs. In particular, when job j finishes before its due time, the right side of the inequality is a negative number, and variable z_j assumes the value 0 since its domain is the non-negative integers. When job j finishes after its due time, z_j becomes f_j − D_j. This occurs because z_j is included in the minimized objective function and is therefore forced to assume the smallest possible non-negative value.
Equations (4) and (5) drive the variables x_{j,t} to proper values based on the s_j values: when s_j assumes the value t, x_{j,t} becomes 1. It should be noted that M in Equation (5) represents a big value, and T − 1 can be used for it. For the specific time point t at which a job is scheduled to begin, the right sides of both inequalities assume the value t. For all other time points besides t, the right sides of the former and the latter inequalities become 0 and M, respectively.
Equation (6) enforces that exactly one among the x_{j,t} variables of each job j assumes the value 1.
Equation (7) dictates the following association rule: for each job j, when x_{j,t} becomes 1 or 0, the corresponding y variable of j with the time offset P_j, which is y_{j,t+P_j}, will also be 1 or 0, respectively.
Equation (8) guarantees that for each time point, the capacity of the machine will not be violated. The values that the left side of the equation assumes are the numbers of active jobs at each time point t. The first double summation counts the jobs that have started no later than t, while the second double summation counts the jobs that have also finished no later than t. Their difference is obviously the number of active jobs.
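The counting argument behind Equation (8) can be sanity-checked directly. The hypothetical helper below computes the left-hand side from start times and durations: jobs started no later than t, minus jobs already finished by t, gives the number of active jobs.

```python
def active_jobs(starts, durations, t):
    """Direct computation of the left-hand side of Equation (8) for one
    time point t: started-by-t minus finished-by-t."""
    started = sum(1 for s in starts if s <= t)
    finished = sum(1 for s, p in zip(starts, durations) if s + p <= t)
    return started - finished
```

For instance, three jobs with starts [0, 1, 2] and durations [3, 2, 1] all finish at time 3, so all three are active at t = 2 and none at t = 3.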

Constraint Programming Formulation

The IBM ILOG constraint programming (CP) solver seems to be a good choice for solving scheduling problems involving jobs that occupy intervals of time and consume resources with time-varying availability [27]. The one-machine scheduling problem can be easily formulated in the IBM ILOG CP solver using one fixed-size interval variable per job (j) and the constraint always_in, which restricts all of them to assume values that collectively never exceed the maximum available capacity over time. This is possible by using a pulse cumulative function expression that represents the contribution of our fixed interval variables over time. Each job execution requires one capacity unit, which is occupied when the job starts, retained through its execution, and released when the job finishes. In our case, the variable usage aggregates the pulse requirements of all jobs. The objective function uses a set of integer variables z[job.id] that are stored in a dictionary keyed by the identifier of each job. Each z[job.id] variable assumes the value of the job’s tardiness (i.e., the non-negative difference between the job’s finish time (end_of(j[job.id])) and its due time (job.due_time)). An additional constraint is added that corresponds to the due times rule mentioned in Section 6. Jobs are grouped by duration, and a list ordered by due times is prepared for each group. Then, for all jobs in the list, the constraint enforces that the order of the jobs must be respected: each job in the list should have a start time no later than that of the job that follows it in the list. The model implementation using the IBM ILOG CP solver’s Python API is presented below.
import docplex.cp.model as cpx
model = cpx.CpoModel()
x_ub = int(problem.ideal_duration() * 1.1)
j = {
   job.id: model.interval_var(
     start=[0, x_ub - job.duration - 1],
     end=[job.duration, x_ub - 1],
     size=job.duration
   )
   for job in problem.jobs
}
z = {
   job.id: model.integer_var(lb=0, ub=x_ub - 1)
   for job in problem.jobs
   }
usage = sum([model.pulse(j[job.id], 1) for job in problem.jobs])
for i in range(problem.nint):
   model.add(
      model.always_in(
         usage,
         [problem.capacities[i].start, problem.capacities[i].end],
         0,
         problem.capacities[i].capacity,
      )
   )
for job in problem.jobs:
   model.add(z[job.id] >= model.end_of(j[job.id]) - job.due_time)
for k in problem.size_jobs: # iterate over discrete job durations
   jobs_by_due_time = same_duration_jobs[k]
   for i in range(len(jobs_by_due_time)-1):
      j1, j2 = jobs_by_due_time[i][1], jobs_by_due_time[i+1][1]
      model.add(model.start_of(j[j1]) <= model.start_of(j[j2]))
model.minimize(sum([z[job.id] for job in problem.jobs]))
The object problem is supposed to be an instance of a class that has all relevant information for the problem instance in question (i.e., jobs is the list of all jobs; each job, besides id and due_time, also has a duration property; nint is the number of capacity intervals; and capacities[i].start and capacities[i].end are the start and end times of the ith capacity step, respectively). Finally, the problem object has the ideal_duration method that estimates a tight value for the makespan of the schedule, which is incremented by 10% to accommodate possible gaps that hinder the full exploitation of the available capacity. The “ideal duration” is computed by totaling the durations of all jobs and then filling the area under the capacity line from left to right and from bottom to top with blocks of size 1 × 1 until the totaled duration quantity runs out. The rightmost point on the time axis of the filled area becomes the “ideal duration” and is clearly a relaxation of the actual completion time of the optimal solution, since each job is decomposed into blocks of duration one, and no gaps appear in the filled area.
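Based on this description, the “ideal duration” computation might look as follows; this is an illustrative reading with assumed names, not the authors' code, and it takes the capacity as a list of values per unit time step.

```python
def ideal_duration(durations, capacity):
    """Relaxed makespan: fill the area under the capacity line with 1x1
    blocks, left to right, until the total job duration runs out.
    `capacity` gives the machine capacity at each unit time step."""
    remaining = sum(durations)
    for t, cap in enumerate(capacity):
        remaining -= min(cap, remaining)
        if remaining == 0:
            return t + 1  # rightmost time point of the filled area
    raise ValueError("capacity horizon too short for the total workload")
```

For a total workload of 5 under a constant capacity of 2, the filled area ends at time 3, so the ideal duration is 3.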
An effort was undertaken to implement the above model using Google’s OR-Tools CP-SAT solver [28]. This solver has a cumulative constraint that can be used in place of always_in to describe the machine’s varying capacity. A series of FixedSizeIntervalVar variables was used to transform the pulse of the capacity into a flat line equal to the maximum capacity. Unfortunately, under this specific model implementation, the solver could not approach good results and was finally not used.

8. Local Search Improvement Procedures

We have identified three local search procedures that have the potential to improve the cost of a given schedule. These local search procedures can be considered as “large” moves since they examine a significant number of schedules neighboring the current one.

8.1. Local Search Improve1

The first local search procedure starts by iterating over all jobs. For each job j1, each consecutive job j2 is identified, and then each job j3 with a duration equal to the aggregated durations of j1 and j2 is found. Since j1 and j2 are consecutive, they can be swapped with job j3, and the schedule will still remain feasible, as seen in Figure 5. Moreover, the order of the two first jobs does not influence feasibility. So, two alternatives are tested that compare the imposed penalties before and after the swap, and, if an improvement is found, the swap occurs. The time complexity of this procedure is O(|J|), since the maximum number of consecutive jobs for each job is bounded by the maximum capacity, which is a constant much smaller than the number of jobs. Moreover, the identification of consecutive jobs, and of jobs with durations equal to the aggregated duration of two other consecutive jobs, is performed using Hash Maps that effectively contribute O(1) to the above complexity. The first one uses times as keys and lists of jobs starting at these times as values. By using the finish time of a job j1 as a key, the dictionary returns each job j2 that is consecutive to j1. The second Hash Map uses the jobs’ durations as keys and, as the value for each key x, the list of jobs with duration x. Note that the second Hash Map is computed only once and remains unchanged through the solution process.
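The two Hash Maps can be sketched as follows; the representation (dicts mapping job ids to start times and durations) and the helper names are illustrative assumptions.

```python
from collections import defaultdict

def build_indexes(starts, durations):
    """The two Hash Maps used by Improve1 (a sketch). `starts` maps
    job id -> start time; `durations` maps job id -> duration."""
    jobs_starting_at = defaultdict(list)    # start time -> job ids
    jobs_with_duration = defaultdict(list)  # duration   -> job ids (static)
    for jid, s in starts.items():
        jobs_starting_at[s].append(jid)
    for jid, p in durations.items():
        jobs_with_duration[p].append(jid)
    return jobs_starting_at, jobs_with_duration

def successors(jid, starts, durations, jobs_starting_at):
    """Jobs consecutive to jid: those starting exactly at its finish time."""
    finish = starts[jid] + durations[jid]
    return jobs_starting_at.get(finish, [])
```

With job 1 finishing at time 3, the first index immediately yields the jobs starting at 3 as its consecutive successors; the second index yields, e.g., all jobs of duration 2 as swap candidates.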

8.2. Local Search Improve2

The second local search procedure uses C-Paths that are computed as described in Section 5. Each C-Path is traversed from left to right until a job j is found that imposes a cost on the schedule (i.e., has a finish time greater than its due time). The only way the penalty of job j can be reduced is by moving it toward the left side of the C-Path. So, all jobs that start earlier than job j are examined as candidates to swap places with job j. If the total penalty imposed by job j and the sequence of jobs up to another job k is greater than the penalty after swapping jobs j and k, followed by shifts of the jobs in between, then this set of moves occurs. The time complexity of this procedure is O(|J|²) per C-Path: each C-Path has a length that is O(|J|), it is traversed once to identify jobs with penalties, and, for each such job, the C-Path is traversed again. The construction of each C-Path costs O(|J|), giving O(|J|) + O(|J|²) per C-Path. Since this occurs for every job, |J| C-Paths are generated, resulting in a total complexity of O(|J|³) for the second local search procedure. An example of this procedure is depicted in Figure 6.
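The evaluation at the core of this procedure can be sketched as an insertion move within a single C-Path, under the simplifying assumption that the path's jobs are laid back-to-back from a fixed start time and that any reordering of them keeps the rest of the schedule feasible; all names are hypothetical.

```python
def path_tardiness(order, start, durations, dues):
    """Total tardiness of C-Path jobs laid back-to-back from `start`."""
    t, total = start, 0
    for jid in order:
        t += durations[jid]
        total += max(0, t - dues[jid])
    return total

def best_left_move(order, start, durations, dues, j_pos):
    """Try moving the tardy job at position j_pos to every earlier position,
    shifting the jobs in between to the right; return the best order found."""
    best = list(order)
    best_cost = path_tardiness(order, start, durations, dues)
    for k in range(j_pos):
        cand = list(order)
        cand.insert(k, cand.pop(j_pos))
        cost = path_tardiness(cand, start, durations, dues)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

For a path of three jobs where only the last one is tardy, moving it to the front can eliminate the penalty entirely if the shifted jobs still meet their due times.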

8.3. Local Search Improve3

The third local search procedure starts by identifying periods where the capacity is not fully used. Given a capacity profile that has the form of a pulse, for each job, the pulse is lowered by one unit for the period during which the job is active. After iterating over all jobs, periods scattered across the horizon may remain that have non-zero capacity left. Jobs with finish times that fall inside these periods (gaps) can possibly be moved to the right, and the schedule will still be feasible. The main idea of this local search procedure is that it allows two jobs of marginally different durations to swap places. This occurs by first identifying two C-Paths with no common jobs whose rightmost jobs have finish times falling inside gaps. Given two such C-Paths, each job of one can be swapped with a job of the other, provided that the slack the gap provides is adequate for this move. This means that all jobs of one C-Path to the right of the smaller of the two swapping jobs shift to the right, and all jobs of the other C-Path to the right of the bigger job shift to the left, giving the opportunity for further penalty gains. In principle, the number of jobs with finish times that fall inside gaps is O(|J|), but our experiments showed that, in practice, this number is a small fraction of |J|. Since two C-Paths are involved, and each job of one C-Path has to be checked against each job of the other, this contributes O(|J|²). Moreover, all possible pairs of jobs that fall in gaps are used as starting points in the construction of the corresponding C-Paths, resulting in another O(|J|²) factor. So, the time complexity of the overall procedure is O(|J|⁴). It should be noted that shifts due to penalty reductions occur rarely, and their amortized contribution to the time complexity is neglected.
In practice, the time needed for this move is comparable to the previous one due to the relatively small number of jobs that fall inside gaps. An example of this procedure is depicted in Figure 7.

9. A Multi-Staged Approach

The approach we employed for addressing the problem uses several stages that operate cyclically until the available time runs out.

10. Results

Our experiments were run on a workstation with 32 GB of RAM and an Intel Core i7-7700K 4.2 GHz CPU (four cores, eight threads) running Windows 10. The constraint programming solver that we used was IBM ILOG CP Optimizer Version 22. The local search procedures, the implementation of the constraint programming model, and the driver program were all implemented in Python. Our results are compared with the results of Mencia et al. in [6], which is a continuation of their previous work in [4,5]. In their most recent work, they present and compare six memetic algorithms termed MA_SCP, MA_iSCP, MA_SCP+, MA_CB, MA_ICP, and MA_HYB. The last one gives the best results among all of them and the authors’ previous approaches, and it is the algorithm against which we compare ours. MA_HYB combines the CB and ICP procedures under a memetic algorithm. Both procedures use the concept of a cover, which is a disjoint set of C-Paths that covers all jobs. In CB, once a cover is generated, the C-Paths of the cover are examined in isolation for improvements. On the other hand, ICP swaps jobs between C-Paths, again using a cover to select the C-Paths participating in the procedure. In [6], no values for the schedule costs are given; instead, the relative performances of the six approaches are recorded in tables and graphs. So, the actual schedule costs of MA_HYB and the other approaches that we use in our comparisons hereafter were taken from [8], which the authors cite in their paper.

10.1. CPO vs. CPO+

We call the constraint programming approach briefly described in [6] CPO (constraint programming optimizer), and our approach described in Section “Constraint Programming Formulation”, which exploits the “due times rule”, CPO+. Results about the performance of CPO were taken from the web repository cited at the end of the previous paragraph.
Table 4 depicts for each problem instance the total tardiness of all jobs for the schedule that CPO and CPO+ produced. It shows that CPO+ manages to find solutions equal to the best-known solutions for 25 out of 190 problem instances, while CPO achieves this for only 2 problem instances (i120_3_3 and i120_5_4). The best values are written in bold. An allotted time of n / 2 seconds for each problem instance was given for each run, where n is the number of jobs. We used all available cores, which was the default setup for the IBM ILOG CP Optimizer. The results of CPO+ are the best among 10 runs for each problem instance, and random seeds were used to achieve diversity.

10.2. Hybrid Exact–Local Search

The HELS (hybrid exact–local search) approach is shown as a flowchart in Figure 8. First, the constraint programming solver is employed on the full problem, with n / 2 seconds allotted to this stage. Then, for 3 × n seconds, a loop runs that applies the three local search procedures, followed by another activation of the constraint programming solver, this time on subproblems. These subproblems might involve jobs that intersect vertical ribbons of the time axis or groups of consecutive sequences of jobs (i.e., multiple C-Paths). Note that a reordering of jobs might be needed so that the current solution conforms with the "due times rule"; otherwise, the constraint programming solver might consider the fixed parts of the partial solution infeasible. This is denoted by the extra stage "Reorder same duration jobs" after the third local search procedure in Figure 8.
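The control flow of Figure 8 can be sketched as follows. All callables passed as arguments are hypothetical stand-ins for the components described above (the actual solver calls, local searches, and reordering step are not shown); only the staging and time budgets follow the text:

```python
import time

def hels(problem, n, solve_full, local_searches, reorder, solve_subproblem):
    """Sketch of the HELS control flow; n is the number of jobs."""
    # Stage 1: exact CP solver on the full problem, for n/2 seconds.
    solution = solve_full(problem, time_limit=n / 2)
    # Stage 2: for 3*n seconds, alternate the three local search
    # procedures with exact solving of subproblems (vertical ribbons
    # of the time axis or groups of consecutive C-Paths).
    deadline = time.time() + 3 * n
    while time.time() < deadline:
        for improve in local_searches:
            solution = improve(solution)
        solution = reorder(solution)  # re-establish the "due times rule"
        solution = solve_subproblem(problem, solution)
    return solution
```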
Table 5 presents the best results achieved by our approach alongside the previously best-known results, all of which were obtained by MA HYB.
Table 6 consolidates the relative performance of our approach against the best-known results. Our approach finds new best results for 48 problem instances and matches the best-known results for another 91; MA HYB retains better results for the remaining 51 problem instances. We also measure how closely each of our solutions reaches the previously best-known solution as a percentage, a metric we call "distance%"; negative values mean that our approach sets a new best-known value. The average distance% over all problem instances is −0.1727%, demonstrating very good performance. This is further supported by Figure 9, which shows, for each problem subset of 10 instances, a boxplot of the distance% values. Again, negative values favor our approach, and the graph clearly shows that it achieves excellent results, especially on large problem instances.
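Assuming distance% is the relative gap to the previously best-known cost (the paper does not spell out the formula, so this is our reading of it), it can be computed as below; the instance costs used in the calls are made-up illustrative values:

```python
def distance_percent(ours, best_known):
    """Relative gap to the previously best-known cost, in percent.
    Negative values indicate a new best-known result."""
    return 100.0 * (ours - best_known) / best_known

print(distance_percent(990, 1000))   # -1.0: a new best, 1% below the old value
print(distance_percent(1020, 1000))  # 2.0: 2% worse than the best known
```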

10.3. Discussion

Figure 10 includes three graphs showing the cost of solutions over the allotted execution time. The costs start from very high values and fall sharply to values not far from the final ones. The long tails of the graphs indicate that good-quality schedules need less time than the 3.5 × n seconds (n is the number of jobs) used in our experiments. This is further demonstrated by the CPO+ and HELS values in Table 4 and Table 5, with the former being close to the latter. In particular, over all 190 problem instances, the percentage distance of CPO+ to HELS has a mean of 0.077 and a standard deviation of 0.069.
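The aggregate gap statistics above can be reproduced along the following lines; the cost lists are made-up placeholders, and we assume a population standard deviation (the paper does not state which variant it uses):

```python
from statistics import mean, pstdev

def gap_stats(cpo_plus_costs, hels_costs):
    """Per-instance percentage distance of CPO+ to HELS, summarized by
    the mean and the population standard deviation over all instances."""
    gaps = [100.0 * (a - b) / b for a, b in zip(cpo_plus_costs, hels_costs)]
    return mean(gaps), pstdev(gaps)

# Illustrative (made-up) costs for three instances:
m, s = gap_stats([101, 102, 100], [100, 100, 100])
print(round(m, 3), round(s, 3))  # 1.0 0.816
```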
Comparing our approach with that of Mencía et al. [6], a few remarks can be made. First, Mencía et al. do not report the exact values their approaches returned; instead, they compare their results against their other, less efficient approaches. We therefore retrieved the values used in our comparisons from the site [8] that the authors reference in their paper. Providing exact values alongside solution files, which other researchers can download from our repository [26], may attract more interest to the problem. Since the run-time environments of our case and Mencía et al.'s differ, we focus mainly on whether each approach can find the best possible result, given that limited processing power is used in both cases. Furthermore, Figure 10 clearly shows a trend observed across problem instances: our approach reaches results very close to the final ones using only about 1/5 of the allotted execution time.

11. Conclusions

In this manuscript, we examined an abstraction of a real-world electric vehicle charging scheduling problem that comes with a public dataset and high-quality published solutions. The problem involves a number of jobs with given durations and due times that must be scheduled on a single machine with time-dependent capacity, with the objective of minimizing the aggregated tardiness of all jobs. We proposed several novel ideas, such as the due times rule, local improvement procedures, and problem decompositions, and managed to obtain better solutions for many instances of an already excellently solved dataset. A central component of the proposed approach, which we call the hybrid exact–local search (HELS) approach, is a constraint programming model that utilizes interval variables and global constraints and that is initially called on the full problem and then iteratively on subproblems that stochastically drive the objective to more favorable values. Three improvement procedures embedded in our HELS approach perform local searches and succeed in further improving solutions. Our work demonstrates a general tendency: challenging combinatorial problems increasingly fall into the realm of exact solvers, which are nowadays capable of handling problems of considerable size. When this is not possible for the full problem, hybrid approaches that apply exact solvers to subproblems and combine them with approximate solvers can decompose the problem in a process that yields complete, high-quality solutions.

Author Contributions

Conceptualization, C.G. and C.V.; methodology, C.V.; software, C.G., C.V., and A.D.; validation, C.V., A.D., and P.P.; formal analysis, P.P.; investigation, A.D. and C.G.; resources, A.V.; data curation, A.V.; writing—original draft preparation, A.D. and C.G.; writing—review and editing, C.G., C.V., and A.D.; visualization, A.V.; supervision, C.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Problem instances and all solution files using our hybrid exact–local search approach are archived in [26].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pinedo, M.L. Scheduling; Springer: Berlin/Heidelberg, Germany, 2012; Volume 29. [Google Scholar]
  2. Graham, R.L.; Lawler, E.L.; Lenstra, J.K.; Kan, A.R. Optimization and approximation in deterministic sequencing and scheduling: A survey. In Annals of Discrete Mathematics; Elsevier: Amsterdam, The Netherlands, 1979; Volume 5, pp. 287–326. [Google Scholar]
  3. Scheduling Zoo. Available online: http://schedulingzoo.lip6.fr/ (accessed on 21 November 2022).
  4. Mencía, C.; Sierra, M.R.; Mencía, R.; Varela, R. Genetic algorithm for scheduling charging times of electric vehicles subject to time dependent power availability. In Proceedings of the International Work-Conference on the Interplay between Natural and Artificial Computation, Corunna, Spain, 19–23 June 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 160–169. [Google Scholar]
  5. Mencía, C.; Sierra, M.R.; Mencía, R.; Varela, R. Evolutionary one-machine scheduling in the context of electric vehicles charging. Integr.-Comput.-Aided Eng. 2019, 26, 49–63. [Google Scholar] [CrossRef]
  6. Mencía, R.; Mencía, C. One-Machine Scheduling with Time-Dependent Capacity via Efficient Memetic Algorithms. Mathematics 2021, 9, 3030. [Google Scholar] [CrossRef]
  7. Koulamas, C. The single-machine total tardiness scheduling problem: Review and extensions. Eur. J. Oper. Res. 2010, 202, 1–7. [Google Scholar] [CrossRef]
  8. GitHub Repository for “One Machine Scheduling with Time Dependent Capacity via Efficient Memetic Algorithms” by Mencia R. Available online: https://github.com/raulmencia/One-Machine-Scheduling-with-Time-Dependent-Capacity-via-Efficient-Memetic-Algorithms (accessed on 21 November 2022).
  9. Hernández-Arauzo, A.; Puente, J.; Varela, R.; Sedano, J. Electric vehicle charging under power and balance constraints as dynamic scheduling. Comput. Ind. Eng. 2015, 85, 306–315. [Google Scholar] [CrossRef] [Green Version]
  10. Lenstra, J.K.; Kan, A.R.; Brucker, P. Complexity of machine scheduling problems. In Annals of Discrete Mathematics; Elsevier: Amsterdam, The Netherlands, 1977; Volume 1, pp. 343–362. [Google Scholar]
  11. Gupta, S.K.; Kyparisis, J. Single machine scheduling research. Omega 1987, 15, 207–227. [Google Scholar] [CrossRef]
  12. Gogos, C.; Valouxis, C.; Alefragis, P.; Goulas, G.; Voros, N.; Housos, E. Scheduling independent tasks on heterogeneous processors using heuristics and Column Pricing. Future Gener. Comput. Syst. 2016, 60, 48–66. [Google Scholar] [CrossRef]
  13. Baptiste, P.; Le Pape, C.; Nuijten, W. Constraint-Based Scheduling: Applying Constraint Programming to Scheduling Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001; Volume 39. [Google Scholar]
  14. Großmann, P.; Hölldobler, S.; Manthey, N.; Nachtigall, K.; Opitz, J.; Steinke, P. Solving periodic event scheduling problems with SAT. In Proceedings of the International Conference on Industrial, Engineering and other Applications of Applied Intelligent Systems, Dalian, China, 9–12 June 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 166–175. [Google Scholar]
  15. Ansótegui, C.; Bofill, M.; Palahí, M.; Suy, J.; Villaret, M. Satisfiability modulo theories: An efficient approach for the resource-constrained project scheduling problem. In Proceedings of the Ninth Symposium of Abstraction, Reformulation, and Approximation, Parador de Cardona, Spain, 17–18 July 2011. [Google Scholar]
  16. Brucker, P.; Knust, S.; Schoo, A.; Thiele, O. A branch and bound algorithm for the resource-constrained project scheduling problem. Eur. J. Oper. Res. 1998, 107, 272–288. [Google Scholar] [CrossRef]
  17. Vaessens, R.J.M.; Aarts, E.H.; Lenstra, J.K. Job shop scheduling by local search. Informs J. Comput. 1996, 8, 302–317. [Google Scholar] [CrossRef] [Green Version]
  18. Matsuo, H.; Juck Suh, C.; Sullivan, R.S. A controlled search simulated annealing method for the single machine weighted tardiness problem. Ann. Oper. Res. 1989, 21, 85–108. [Google Scholar] [CrossRef]
  19. Lee, K.M.; Yamakawa, T.; Lee, K.M. A genetic algorithm for general machine scheduling problems. In Proceedings of the 1998 Second International Conference Knowledge-Based Intelligent Electronic Systems, Proceedings KES’98 (Cat. No. 98EX111), Adelaide, Australia, 21–23 April 1998; IEEE: Piscataway, NJ, USA, 1998; Volume 2, pp. 60–66. [Google Scholar]
  20. Gil-Gala, F.J.; Mencía, C.; Sierra, M.R.; Varela, R. Evolving priority rules for on-line scheduling of jobs on a single machine with variable capacity over time. Appl. Soft Comput. 2019, 85, 105782. [Google Scholar] [CrossRef]
  21. França, P.M.; Mendes, A.; Moscato, P. A memetic algorithm for the total tardiness single machine scheduling problem. Eur. J. Oper. Res. 2001, 132, 224–242. [Google Scholar] [CrossRef]
  22. Wu, X.; Che, A. A memetic differential evolution algorithm for energy-efficient parallel machine scheduling. Omega 2019, 82, 155–165. [Google Scholar] [CrossRef]
  23. Merkle, D.; Middendorf, M. Ant colony optimization with global pheromone evaluation for scheduling a single machine. Appl. Intell. 2003, 18, 105–111. [Google Scholar] [CrossRef]
  24. Lin, T.L.; Horng, S.J.; Kao, T.W.; Chen, Y.H.; Run, R.S.; Chen, R.J.; Lai, J.L.; Kuo, I.H. An efficient job-shop scheduling algorithm based on particle swarm optimization. Expert Syst. Appl. 2010, 37, 2629–2636. [Google Scholar] [CrossRef]
  25. Yuce, B.; Fruggiero, F.; Packianather, M.S.; Pham, D.T.; Mastrocinque, E.; Lambiase, A.; Fera, M. Hybrid Genetic Bees Algorithm applied to single machine scheduling with earliness and tardiness penalties. Comput. Ind. Eng. 2017, 113, 842–858. [Google Scholar] [CrossRef]
  26. GitHub Repository for “A Hybrid Exact-Local Search Approach for One-Machine Scheduling with Time-Dependent Capacity” by Gogos C. Available online: https://github.com/chgogos/1MSTDC (accessed on 21 November 2022).
  27. Laborie, P.; Rogerie, J.; Shaw, P.; Vilím, P. IBM ILOG CP optimizer for scheduling. Constraints 2018, 23, 210–250. [Google Scholar] [CrossRef]
  28. Google OR Tools CP-SAT Solver. Available online: https://developers.google.com/optimization/cp/cp_solver (accessed on 21 November 2022).
Figure 1. A graphical representation of the schedule in the last line of Table 1.
Figure 2. A suboptimal schedule of cost 35 for the toy problem of Table 2. Each job is depicted with a box and annotated with a label of the form x ( y , z ) , where x is the job identification number, y is the duration of the job, and z is its due time.
Figure 3. The graph that corresponds to the schedule of Figure 2.
Figure 4. (a) Both job i and job j have non-negative tardiness values. (b) Job i has no tardiness, but job j incurs tardiness.
Figure 5. Assuming that job j 3 has the same duration as the aggregated duration of jobs j 1 and j 2 , two cases for swapping them become possible. The first one puts j 1 first and j 2 second, and the other one puts j 2 first and j 1 second.
Figure 6. Given a C-Path, this local search procedure swaps two non-consecutive jobs ( j 1 and j 2 ) and appropriately shifts the in-between jobs ( j 3 ) so as to keep the C-Path property for all involved jobs.
Figure 7. Jobs belonging to two C-Paths swap places to reduce the length or even remove gaps in the schedule.
Figure 8. Hybrid Exact–Local Search approach.
Figure 9. The HELS approach compared with the best-known results derived from the MA HYB approach in [6].
Figure 10. Cost values during the execution time using the HELS approach.
Table 1. A sample problem instance with 12 jobs. For each job i, the table shows its duration p i and its due time d i . Additionally, for a certain schedule, the table shows the start time ( S i ) , completion time ( C i ) , and penalty incurred ( T i ) for each job i.
i              1      2       3        4      5       6      7       8      9        10     11     12
p_i, d_i       4,4    4,9     2,13     3,4    4,7     3,8    2,10    3,3    2,13     3,5    3,9    5,7
S_i, C_i, T_i  4,8,4  8,12,3  10,12,0  2,5,1  6,10,3  5,8,0  8,10,0  0,3,0  12,14,1  3,6,1  6,9,0  9,14,7
Table 2. Problem instances in the dataset. For each pair of a number of jobs and a maximum capacity, 10 individual problem instances exist.
Number of Jobs (n)    Maximum Capacity (MC)
120                   3, 5, 7, 10
250                   10, 20, 30
500                   10, 20, 30
750                   10, 20, 30, 50
1000                  10, 20, 30, 50, 100
Table 3. Number of C-Paths for schedules of selected problem instances, which can be found in [26].
Problem Instance    Schedule Cost    Number of C-Paths
i120_3_1            848              3
i120_10_1           749              71
i250_10_1           4094             48
i250_30_1           3013             276
i500_10_1           4614             204
i500_30_1           2670             4528
i750_10_1           4409             22,302
i750_50_1           5134             206,022
i1000_10_1          641              903,826
i1000_100_1         71,012           216,694
Table 4. Best results (total tardiness) from the CPO approach and our CPO+ approach.
Problem Set    CPO (instances 1–10)             CPO+ (instances 1–10)
               1  2  3  4  5  6  7  8  9  10    1  2  3  4  5  6  7  8  9  10
i120_39513834241010593951325873040951690132686736202410101938593123720408416631268
i120_5105416739994963128458550122734861245103015389594963062455519121433321151
i120_7120127683798420632202786171233064694546113327253551408330952665165532254510503
i120_10764121215549351612115725307771056881752112915118871467111824647211006823
i250_10431155214804871692165634857610854531405414450613544755652262884604594053631320
i250_2059051974297127741343178616442408173066325674189928252545109716311594220215616563
i250_30338133485273509945605333399080318085600308531144999419443045129372669515615418
i500_104791865110922266756312326146884972227461482296321596106100286444724652130
i500_208212899944014951859934962635366614603077057909144127317999232525325264205886
i500_30306461363174787658318384256936010994822822501604844256114169738929051921369
i750_1046708244605154051655822710398638135944149446078205841511015397909970804213,1033906
i750_206100889159310,151992712,396517511,91048122866604670313209633917911,652487511,41443992690
i750_30509260453200324073330281243328131182256495453142534303068726091114296328122089
i750_505989576613,945269911,09612,590574811,425568961395303509812,615243310,19011,998512210,89854025242
i1000_1071224,629107115,71116,299218877920,90223,213447164123,82183315,34215,443202574320,89122,4954147
i1000_2010,37917,52520,318821415,21125,04410,01217,48211,58311,316947016,35520,265808314,51024,915991217,04810,43610,892
i1000_307871207420,96413,991430813,63010,763271315,85620,9597054176918,58012,753380012,29810,232223915,63119,138
i1000_50331516,49113,0471520163018,21621,181222715,4734682296316,23612,1301239147817,13120,133202214,4544199
i1000_10073,63781,10428,29239,05776,08050,54147,74655,97326,58517,80378,96381,63826,44637,71977,29849,31345,03653,63924,82316,330
Table 5. Best previously known results (total tardiness) achieved from the MA HYB approach and the results of our HELS approach.
Problem Set    MA HYB (instances 1–10)          HELS (instances 1–10)
               1  2  3  4  5  6  7  8  9  10    1  2  3  4  5  6  7  8  9  10
i120_38483568241010193858312072040841663126884835702410101938583120720408416631268
i120_5103015119594963061455519121432231131103015149594963061455519121432311131
i120_7112027253493402130592640165532254470493112127253505402830782640165532254474493
i120_107461124144288714471118246372197782074911251453887144711182463721983820
i250_10409450613494731639062804497588153211293410350613494731639162844511588753271293
i250_2055731882281325251054158315652190155365315573188828132525105416051570219015536541
i250_30301330544758409841975034364168615025197301330544753409341945019364168615025193
i500_10461482295121026105981276844604621998461482295321166105981278344604622008
i500_207649790894112721719911052331806291566175697908970127217449097523318863385659
i500_3026704775975430758691614379589468623522670477597443075873162637958888862352
i750_10437977445819508615177895952799612,8373840431277445821508215007895943796212,8143840
i750_20589170013149674907311,434485511,35343932632597970013149562907311,434485511,28443932632
i750_30471351282364295166125771070291127001994476751282364294566325771070291227001994
i750_505134497812,6012356984011,483486810,580515452045134479212,1472356984711,500486810,58351545028
i1000_1064123,72981515,20415,393202573520,68622,358402864123,52181215,20615,180202572920,57422,1623982
i1000_20944016,06920,168796214,18324,693981616,99410,29010,781944015,97119,986797514,18324,507972616,78910,30110,781
i1000_306902168718,43312,399376812,0909795208515,62518,7286780166818,08712,411370712,0909765208515,52818,483
i1000_50288316,41811,6131142139016,65619,566188613,7403989288315,97711,6181142139016,66719,566188913,7473989
i1000_10071,03475,10125,97736,37467,10947,54044,54552,26623,70415,66471,01275,18125,59436,37666,41947,54143,43752,26623,69015,230
Table 6. The HELS approach’s performance against the best recorded results.
Best    Equal    Worse    Distance (%)
48      91       51       −0.1727
Share and Cite

MDPI and ACS Style

Valouxis, C.; Gogos, C.; Dimitsas, A.; Potikas, P.; Vittas, A. A Hybrid Exact–Local Search Approach for One-Machine Scheduling with Time-Dependent Capacity. Algorithms 2022, 15, 450. https://doi.org/10.3390/a15120450
