1. Introduction
In classical scheduling, it is commonly assumed that a fixed set of machines or processors remains continuously available throughout the entire planning horizon [1,2]. However, this assumption is often unrealistic in practical environments, where machine availability may be affected by maintenance operations [3], unexpected breakdowns [4], or other operational constraints [5,6]. As a result, scheduling problems with machine non-availability have attracted considerable attention in the literature, where different problems have been treated, such as scheduling with only a limited number of identical processors available [7], scheduling on uniform parallel machines with periodic unavailability constraints [8,9], scheduling on distributed systems [10,11], and scheduling with communication costs [12,13].
In parallel, the scheduling of unit execution time (UET) tasks [14] has been extensively investigated due to its relevance for modeling preemptable and fine-grained tasks [15,16]. Moreover, precedence constraints and, in particular, intree structures naturally arise in many applications such as parallel computing and workflow scheduling, where communication delays between tasks cannot be neglected, as in the problem of scheduling precedence-constrained tasks [17] and the problem of scheduling jobs on related machines [18].
Despite these advances, the combined consideration of unitary tasks with intree-precedence constraints, communication costs, and machine non-availability remains largely unexplored.
Motivated by this gap, this paper addresses the problem of scheduling N unit execution time tasks related by intree-precedence constraints with unit communication costs on two identical parallel machines, where one machine is subject to non-availability periods. Using the three-field notation introduced by Graham et al. [19], the problem is denoted as . This problem has not been previously investigated, and neither its computational complexity nor an optimal solution method has been reported in the literature.
The contributions in this paper are as follows:
We prove that the treated problem is solvable in polynomial time, settling its previously open complexity status.
We propose an optimal algorithm with polynomial complexity.
We prove the optimality of the proposed algorithm through a set of theorems and lemmas.
The paper is structured as follows: A review of related work is detailed in Section 2, while Section 3 provides a formal description of the problem under study. The proposed SIwUC solution is introduced and explained in Section 4, followed by its evaluation on illustrative examples in Section 5. Section 6 provides the optimality proof for the algorithm. The paper concludes in Section 7 with a summary and suggestions for future research directions.
2. Related Works
This research examines a dual-constrained scheduling problem that combines communication delays with precedence constraints and non-availability periods. Although these constraints have been studied separately in the past, no existing method handles their combination optimally. After each constraint is reviewed separately, this section presents the Adapted CBoS (ACBoS) heuristic. To study the behavior of this heuristic, a lower bound is proposed, and a comparison between ACBoS and the lower bound is carried out.
2.1. Scheduling with Unavailabilities
The study of scheduling with non-availability constraints was first conducted by Lee et al. [20,21]. They proved that scheduling independent tasks with non-unitary execution times on identical machines, while taking unavailability periods into account, is NP-hard [22,23,24].
The study also proved that there is no single, universal approximation algorithm for this problem, because instances can be constructed in which only an optimal schedule produces an acceptable makespan; on such instances, any non-optimal algorithm performs arbitrarily poorly. Lee therefore made the simplifying assumption that one machine is always available. Under this assumption, and when machine j has a single unavailability period (where ) within the scheduling horizon, Lee showed that the classical Longest Processing Time (LPT) algorithm achieves a worst-case bound of .
In [25], the authors study a scheduling scenario in which no more than half of the machines may be down at any given time. Given this setting, the authors prove that the LPT heuristic has a performance ratio of at most 2. In [26], the same results are extended to a situation where up to machines can be down at once, that is, . In this generalized setting, the authors show that the performance ratio of LPT is guaranteed to be upper-bounded by .
In [8], the authors address the scheduling problem involving two processors, each of which experiences a single interval of unavailability. They proposed exponential-time algorithms that give optimal solutions and showed, through experiments, that their algorithms generate good results, especially in practical cases.
2.2. Scheduling Task Problems with Precedence Constraints and Communications
The CBoS (Cluster-Based Scheduling) algorithm [27] is a polynomial-time optimal algorithm for scheduling intree-structured tasks on two identical processors under UECT constraints.
Characteristics:
Assign the root always to processor P1.
Identify clusters (subtrees) that can be entirely assigned to P2.
Balance load between P1 and P2 while minimizing communication delays (by keeping predecessor–successor pairs together when beneficial).
Advantages:
Polynomial time for two processors.
Minimizes communication by clustering related tasks.
Load balancing through the R parameter.
Optimal for many intree structures under UECT constraints.
Simple to implement.
Limits:
Only for two processors.
Assumes unit execution and communication times.
May not be optimal for all intree structures.
Does not extend easily to m > 2 processors.
Algorithm steps:
Computation of R, the number of tasks that will be assigned to processor P2.
Cluster selection for P2.
Processor P1 scheduling.
Processor P2 scheduling.
CBoS algorithm can be summarized as in Algorithm 1:
| Algorithm 1 Clusters determination algorithm |
- Input: Intree T, Integer N
- Output: Tasks allocated to P2
- 1: Begin
- 2: Integer R
- 3: Integer i
- 4: while R > 0 do
- 5: L ← list of candidate tasks
- 6: L is sorted in decreasing order by weight
- 7: if |L| > 1 then
- 8: Let T be the first task in L such that weight(T) ≤ R
- 9: if such a T exists then
- 10: cluster(T) is allocated to P2
- 11: cluster(T) is marked
- 12: R ← R − weight(T)
- 13: T is removed from L
- 14: else
- 15: R ← 0
- 16: end if
- 17: else
- 18: Let T be the only task of L
- 19: cluster(T) is allocated to P2
- 20: R ← 0
- 21: end if
- 22: end while
- 23: End
|
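The greedy cluster-selection step described above can be sketched in Python. This is an illustrative sketch, not the authors' implementation: the function name, the representation of clusters as a weight map, and the assumption that R is given are all assumptions made here.

```python
def select_clusters(cluster_weights, R):
    """Greedy cluster-selection sketch in the spirit of CBoS:
    repeatedly allocate to P2 the heaviest cluster whose weight
    still fits within the bound R. `cluster_weights` maps a
    cluster identifier to its task count (illustrative names)."""
    allocated = []
    remaining = dict(cluster_weights)
    while R > 0 and remaining:
        # scan candidate clusters in decreasing order of weight
        for cid in sorted(remaining, key=remaining.get, reverse=True):
            if remaining[cid] <= R:
                allocated.append(cid)
                R -= remaining.pop(cid)
                break
        else:
            break  # no remaining cluster fits within R
    return allocated
```

For instance, with cluster weights {a: 5, b: 4, c: 3, d: 1} and R = 8, the sketch allocates clusters a and c (total weight 8) to P2, mirroring how the Figure 1 example consumes R = 8 with whole subtrees.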
Figure 1 presents an example of scheduling an intree by the CBoS algorithm. R is initialized to 8. In this example, the CBoS algorithm assigns 3 subtrees to the processor P2. The sum of the tasks of the three subtrees is 8, and the rest is allocated to the processor P1.
The computed schedule is described by Figure 2.
2.3. Problem of Scheduling Tasks with Unavailabilities and Precedence Constraints
In the current literature, no optimal algorithm is known for scheduling precedence-constrained jobs on processors that are unavailable during certain time periods. In [5], the authors proposed a method to address this problem, specifically considering the scheduling of UECT intrees on two identical processors, called P1 and P2, both of which may experience multiple unavailability periods.
The new algorithm, named Adapted CBoS (ACBoS), is based on the original CBoS algorithm. The most important difference between the two algorithms is how ACBoS calculates the parameter R, the upper bound on the number of jobs that can be assigned to processor P2 in any optimal schedule. The algorithm computes the value of R as follows:
- a.
Initialization
Let N denote the total number of tasks in the intree.
Set R = 0 as the task count allocated to P2.
Set t = 0 as the discrete time slot index.
- b.
Processor availability scanning:
At each time slot t, the availability states of P1 and P2 are evaluated.
- c.
Task assignment when P1 is available:
If P1(t) is available, decrement N by 1, corresponding to the assignment of one task to P1.
- d.
Task assignment when P2 is available:
If P2(t) is available, decrement N by 1 and increment R by 1, representing the assignment of one task to P2.
- e.
Time progression
After processing the availability at time t, increment t by 1. If both processors are unavailable at t, increment t until at least one becomes available.
- f.
Iteration condition
Repeat steps b–e until N ≤ 3.
- g.
Boundary case N = 3
If both P1(t) and P2(t) are available, we simulate assigning two tasks (one to each processor): set N = N − 2 and R = R + 1.
If only P1(t) is available, we simulate assigning one task to P1: set N = N − 1 (no change to R).
If only P2(t) is available, we simulate assigning one task to P2: set N = N − 1 and R = R + 1. In each case, t is incremented after the simulated assignment to reflect time progression.
- h.
Final case N = 2
Increment R by 1 if P1 is unavailable at both t and t + 1, indicating that a task must be assigned to P2 due to consecutive unavailability of P1.
The algorithm terminates by returning R, which serves as an adaptive upper bound for task allocation to P2 under intermittent processor unavailability.
The two processors continue to be simulated until N = 0. At this point, the ACBoS algorithm continues in the same manner as the original CBoS algorithm.
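The steps a–h above can be sketched as a small simulation. This is a sketch under stated assumptions, not the authors' code: the availability predicates `p1_available(t)` and `p2_available(t)` are assumed inputs describing the unavailability periods, and step h is condensed into a single check.

```python
def compute_R(n_tasks, p1_available, p2_available):
    """Sketch of the ACBoS R computation (steps a-h above).
    `p1_available(t)` / `p2_available(t)` are assumed predicates
    returning True when the processor is free at slot t."""
    N, R, t = n_tasks, 0, 0
    # (b)-(f): scan slots, simulating one task per available processor
    while N > 3:
        if p1_available(t):
            N -= 1          # one task assigned to P1
        if p2_available(t):
            N -= 1          # one task assigned to P2
            R += 1
        t += 1
    # skip slots where both processors are down
    while not p1_available(t) and not p2_available(t):
        t += 1
    # (g): boundary case N == 3
    if N == 3:
        if p1_available(t) and p2_available(t):
            N, R = N - 2, R + 1
        elif p1_available(t):
            N -= 1
        else:               # only P2 is free
            N, R = N - 1, R + 1
        t += 1
    # (h): final case N == 2, a task is forced onto P2 when P1
    # is unavailable at both t and t + 1
    if N == 2 and not p1_available(t) and not p1_available(t + 1):
        R += 1
    return R
```

With both processors always available and 8 tasks, the simulation stops the main loop at N = 2 after three slots and returns R = 3, reflecting how the final tasks are reserved for the boundary cases.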
Steps of R computing (for the considered ACBoS algorithm) are illustrated by Algorithm 2:
| Algorithm 2 R computing algorithm |
- Input: v: root of the intree
- Output: R
- 1: N ← number of tasks in the intree
- 2: R ← 0, t ← 0
- 3: while N > 3 do
- 4: if P1(t) is available then
- 5: N ← N − 1
- 6: end if
- 7: if P2(t) is available then
- 8: N ← N − 1
- 9: R ← R + 1
- 10: end if
- 11: t ← t + 1
- 12: end while
- 13: while P1(t) is unavailable and P2(t) is unavailable do
- 14: t ← t + 1
- 15: end while
- 16: if N = 3 then
- 17: if P1(t) is available then
- 18: if P2(t) is available then
- 19: N ← N − 2
- 20: R ← R + 1
- 21: else
- 22: N ← N − 1
- 23: t ← t + 1
- 24: end if
- 25: else
- 26: N ← N − 1
- 27: R ← R + 1
- 28: t ← t + 1
- 29: end if
- 30: end if
- 31: if N = 2 then
- 32: while P1(t) is unavailable and P2(t) is unavailable do
- 33: t ← t + 1
- 34: end while
- 35: if P1(t) is unavailable and P1(t + 1) is unavailable then
- 36: R ← R + 1
- 37: end if
- 38: end if
- 39: return R
|
In order to study the behavior of this algorithm, a lower bound is proposed and a comparison between ACBoS and the lower bound is carried out. The proposed lower bound can be summarized as in Algorithm 3.
| Algorithm 3 Bound computation algorithm |
- 1: N ← number of tasks in the intree
- 2: t ← 0 ▷ current time slot
- 3: while N > 0 do
- 4: if P1(t) is available then
- 5: N ← N − 1
- 6: end if
- 7: if P2(t) is available then
- 8: N ← N − 1
- 9: end if
- 10: t ← t + 1
- 11: end while
- 12: return t
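The lower-bound idea can be sketched as follows: each available processor is allowed to consume one task per slot, ignoring precedence and communication delays, so the returned time cannot exceed any feasible makespan. The function and predicate names are illustrative assumptions.

```python
def lower_bound(n_tasks, p1_available, p2_available):
    """Makespan lower bound in the spirit of Algorithm 3: let each
    available processor consume one task per slot, ignoring
    precedence and communication constraints. The availability
    predicates are assumed inputs (illustrative names)."""
    N, t = n_tasks, 0
    while N > 0:
        if p1_available(t):
            N -= 1
        if p2_available(t):
            N -= 1
        t += 1
    return t  # earliest slot by which all N tasks can have finished
```

For example, 8 tasks on two always-available processors give a bound of 4 slots, while 5 tasks with P2 permanently down give a bound of 5 slots.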
The simulation of the heuristic [28] shows that it gives good results for instances of large trees (Figure 3) but mediocre results for chain instances (Figure 4).
Figure 4 illustrates that the disparity in makespan between the lower bound and the ACBoS algorithm widens as the number of tasks increases. Additionally, the computational complexity of this problem, specifically for schedules involving multiple unavailability periods on both machines, remains an open research question.
3. Problem Formulation
We are given the following:
Tasks: N tasks, numbered 1, 2, …, N.
Precedence relation: These tasks form an intree (also called a converging tree).
An intree:
Each task has at most one immediate successor (children point to a parent, while precedence arcs go from predecessor to successor in scheduling terms).
In precedence terms, each task can have multiple predecessors but only one successor (except the root task, which has no successor). The successor is the task that must wait until all its predecessors finish. Leaf tasks are the ones with no predecessors (no incoming arcs), and they all point toward one root. Thus, in scheduling, a predecessor must finish before its successor starts.
Execution time:
All tasks have unit execution time (UET).
Communication cost:
If a task Ti and its immediate successor Tj are scheduled on different processors, then after Ti finishes, one unit of communication delay must elapse before Tj can start.
If they are on the same processor, no communication delay occurs; otherwise, the communication time is one unit (UCT, Unit Communication Time).
Communication occurs only between predecessor–successor pairs.
Processors:
Two identical processors are available to schedule the tasks: the first processor, denoted P1, is always available, while the second processor, P2, is subject to unavailability periods.
Objective:
Minimize total schedule length (makespan) subject to precedence, communication, and resource constraints.
Notation:
Using the three-field notation introduced by Graham et al. [19], the problem is denoted as .
An example of scheduling under the UECT assumption and under the UET assumption is provided to explain the difference between them (Figure 5).
Figure 5 represents the N tasks related by intree-precedence constraints; the execution time of every task is considered to be unitary.
Figure 6 represents a schedule under the UET assumption, and Figure 7 represents a schedule under the UECT assumption.
The first schedule assumes negligible communication overhead between tasks, regardless of processor assignment, thus focusing solely on precedence constraints. In contrast, the second schedule imposes a unit communication cost for dependent tasks executed on different processors. This restriction prevents task 2 from starting at the second time unit because its predecessor, task 3, runs on P1 while it is assigned to P2.
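The UECT start-time rule used in this comparison can be written as a small helper. This is an illustrative sketch, not part of the SIwUC algorithm; the function name and list-based inputs are assumptions.

```python
def earliest_start(pred_finish_times, pred_procs, proc):
    """Earliest start time of a task on processor `proc` under the
    UECT rule: a predecessor finishing on the other processor adds
    one unit of communication delay. Names are illustrative."""
    t = 0
    for finish, p in zip(pred_finish_times, pred_procs):
        ready = finish if p == proc else finish + 1  # +1 = UCT delay
        t = max(t, ready)
    return t
```

For example, a task on P1 whose predecessors finish at times 2 (on P1) and 3 (on P2) cannot start before time 4, exactly the effect that delays task 2 in Figure 7.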
The objective of this work is to schedule N intree-structured tasks across two identical machines, respecting precedence relations to minimize the makespan. The schedule must also account for unavailability periods on one machine, with the constraint that each machine processes only one task at a time.
4. The Proposed Scheduling Algorithm: Scheduling Intrees with Unavailability Constraints (SIwUC)
In this section, we introduce a polynomial algorithm for minimizing the makespan of UECT intrees on two identical processors (P1 and P2), where one of them is subject to non-availability constraints. Without loss of generality, the processor subject to unavailabilities is denoted P2.
4.1. Principle of the Algorithm
The core strategy of the algorithm is to maximize the workload assigned to processor P2, subject to its scheduled downtime. Once P2 has been loaded to capacity, the remaining tasks are distributed to P1 or partitioned between both processors, based on the structural characteristics of the given task set.
Before presenting the complete algorithmic procedure, we introduce a set of dominance rules that govern task assignment and ordering.
Theorem 1. A schedule that eliminates idle time on processor P1 is considered dominant. This means that any optimal schedule exhibiting idle periods on P1 can be reconfigured without increasing the makespan into an equally optimal schedule where P1 operates continuously with no idle intervals.
Proof of Theorem 1. Consider an optimal schedule with idleness on P1 (Figure 8).
The idle time on P1 in this case originated from a communication delay introduced between task X and its single successor, task Y.
Specifically, X was processed on at time , while Y was scheduled on at time t. Although X may have had multiple predecessors, the last of these completed by time , leaving sufficient room to reassign X from at to at t without violating any communication constraints or increasing the makespan.
This transformation illustrates a general principle: schedules that eliminate idle time on P1 dominate those that do not, since any schedule with idle periods can be converted, without worsening the makespan, into one where P1 operates continuously. □
Theorem 2. Consider an optimal schedule represented by Figure 9 in which processor P2 exhibits idle time. This situation arises because task Y, a predecessor of task X, was only executed on P1 starting at time . Due to the inherent properties of the intree structure, Y can have at most one successor, which is task X. As a result, Y can be relocated from P1 at time to P2 at time t without violating any constraints, as all predecessors of Y are completed by time . Consequently, reassigning Y to P2 at time t eliminates the initial idle interval on P2. While this shift may introduce new idle time on P1, Theorem 1 guarantees that idle periods on P1 can be removed without increasing the overall makespan. Moreover, this removal does not reintroduce the original idle interval on P1. Therefore, by combining the reassignment of Y to P2 with the elimination of P1’s idle time, we obtain a revised schedule in which both processors operate without any idle intervals, preserving optimality.
4.2. Description of the Proposed Method
4.2.1. First Step: R Computing
The parameter R is designed to act as a tight upper bound on the number of tasks that can be assigned to processor P2 within the SIwUC (Scheduling Intrees with Unavailability Constraints) algorithm. The earlier bound of established in the CBoS algorithm no longer applies, since processors may now be subject to intermittent unavailability periods. To address this, we introduce a new iterative procedure to compute R.
We initialize R = 0 and then incrementally increase R by verifying whether any task in the intree can be feasibly assigned to P2 without inducing idle time on P1. This feasibility check is repeated iteratively; each time a valid task is found, R is increased by one. The procedure terminates when no additional task can be assigned to P2 without violating the idle-free condition on P1. The value of R at termination is adopted as the final bound. Before elaborating on the algorithm, we introduce two conditions that must be satisfied for any task to be eligible for assignment to P2.
First necessary condition: C1 In a scheduling without idleness, if a task T can be executed on the processor P2 at time t, then
Proof of C1. Consider a task T scheduled on P2 at time t.
In the intree, there are N − level(T) tasks that are not successors of T. For the schedule to be feasible, this number must be sufficient to occupy all time slots that require filling. In particular, because every successor of T incurs a communication delay at time t, the slot on P1 at time t must be taken by a task that is not a successor of T. Therefore, we obtain the necessary condition
□
Second necessary condition: C2
In any given schedule, if a task T can be executed on processor P2 at slot t, then
Proof of C2. The algorithmic step establishes the upper bound R, representing the maximum number of tasks that can be allocated to processor P2 without introducing idle time on processor P1. This bound is determined via an iterative analysis of the intree structure. A candidate task T is considered eligible for assignment to P2 only if it satisfies two key conditions C1 and C2. Each time a task meets both criteria, R is incremented. The logical foundation for these assignment conditions rests on the observation that a task T placed on P2 faces two obstructions that limit the execution of its entire subtree at specific times:
A temporal blocking: the successor(s) of T can only be inserted after T has completed execution.
A communication blocking: when a successor of T is scheduled to execute on P1, a one-unit delay for message transmission applies once T completes on P2.
Here, we have calculated the maximum possible value of R. In the next scheduling stage, the algorithm will try to form a schedule for P2 containing R tasks and a schedule for P1 containing N − R tasks; if this does not yield a feasible schedule, the algorithm falls back to allocating all remaining tasks to P1. □
R computing algorithm can be summarized as in Algorithm 4.
| Algorithm 4 R Computation |
- Input: Intree , Integer N
- Output: Integer R
- 1: R ← 0 ▷ 1 operation
- 2: t ← 0 ▷ 1 operation
- 3: ▷ 1 operation
- 4: ▷ 1 operation
- 5: while do ▷ comparisons
- 6: if then ▷ evaluations
- 7: if then ▷ comparisons
- 8: ▷ assignments
- 9: else
- 10: ▷ assignments
- 11: end if
- 12: if there exists in the intree an unmarked task such that
- 13: and then ▷ operations
- 14: if then ▷ comparisons
- 15: mark all tasks in the tree except the successors of ▷ operations
- 16: end if
- 17: R ← R + 1 ▷ operations
- 18: mark ▷ operations
- 19: else
- 20: ▷ assignments
- 21: end if
- 22: end if
- 23: if then ▷ comparisons
- 24:
- 25: end if
- 26: t ← t + 1 ▷ operations
- 27: end while
- 28: remove task markings for all tasks in the intree ▷ N operations
- 29: return R
|
4.2.2. Second Step: Schedule Construction
Figure 10 introduces the core task-assignment algorithm. The fundamental principle guiding this algorithm is to distribute tasks between processors P1 and P2 so as to minimize the overall makespan.
To achieve this, we employ a multi-case heuristic tailored to reduce communication delays and, most critically, to evenly balance the computational workload across both processors. Maintaining a balanced load prevents either processor from becoming a bottleneck, which is essential for minimizing the total schedule length.
5. Complete Examples
In this section, three complete examples are described. In the first one, we consider an instance of an intree and a scheduling environment such that the optimal schedule assigns the root to the processor P1. In the second example, we present another instance of an intree such that the optimal schedule allocates the root to the processor P2. The third example illustrates the algorithm on a larger instance.
5.1. Example 1
In this example, we consider an instance with 16 tasks, as illustrated by Figure 11, and a scheduling environment as described by Figure 12.
Table 1 presents the calculation details of R, which represents the maximum number of tasks that can be assigned to the P2 processor.
Table 2 presents the details in the scheduling calculation, step by step.
Figure 13 presents the optimal schedule, where 5 tasks are assigned to processor P2 and 11 tasks are assigned to P1.
5.2. Example 2
In this example, we consider an instance with 7 tasks, as illustrated by Figure 14, and a scheduling environment as described by Figure 15.
- Step 1:
Initialization: The algorithm initializes , as determined by Algorithm 4. This parameter R quantifies the remaining communication capacity or available time slack for task scheduling.
- Step 2:
Level Selection: The current processing level is set to , establishing the baseline for subsequent scheduling decisions.
- Step 3:
Task List Identification: The scheduler identifies the task set for processing. Notably, all tasks in L have weights exceeding the current resource constraint R.
- Step 4:
Resource Verification: Based on the condition , the scheduler examines the next processing level to locate a task T whose corresponding cluster, , can be feasibly scheduled within the time interval .
- Step 5:
Cluster Assignment: Cluster 3 is determined to be schedulable within . This assignment satisfies the feasibility conditions since and processor maintains availability (experiencing no unavailability periods) during the subsequent interval .
- Step 6:
Resource Update: Following the successful allocation of cluster 3, the remaining resource R is decremented to zero, reflecting the complete utilization of available resources.
- Step 7:
Chain Scheduling: The residual task set undergoes sequential chain scheduling: tasks are assigned to processor during , then to processor during , ensuring efficient processor utilization and meeting timing constraints.
The optimal computed schedule is described by Figure 16.
5.3. Example 3
In this example, we consider an instance with 22 tasks, as illustrated by Figure 17, and a scheduling environment as described by Figure 18.
For a number of tasks equal to 22, the value of R calculated by Algorithm 3 is equal to 7, and the optimal schedule calculated by Algorithm 4 is represented by Figure 19. Indeed, the subtree rooted at task T3 is allocated to P2, and the remaining tasks are assigned to P1.
6. Optimality Proof
Lemma 1. An optimal schedule cannot assign more than R tasks to the processor P2.
Proof. The parameter R, derived from the initial algorithm, represents the upper bound on the number of tasks assignable to P2 while maintaining a schedule with no idle time on P1. Under the assumption that the root task of the intree is allocated to P1, the makespan achieved by such a schedule is , where N denotes the total number of tasks. This section aims to demonstrate that this makespan is optimal.
For contradiction, suppose a schedule exists that assigns more than R tasks to P2. By the definition of R, at least one of these tasks violates one or both conditions (C1 or C2) required for a non-idle schedule on P1. Consequently, P1 must experience idle time due to that task. Given the intree structure and the communication delays between tasks, this idle time will not be isolated; rather, it will propagate, leading to at least two distinct idle periods on P1. As a result, the makespan would increase to at least . This contradicts the optimal makespan of , thereby confirming that the maximum feasible R yields a makespan of , which is indeed optimal. □
Lemma 2. The schedule computed by the SIwUC algorithm is without idleness on the processor P1.
Proof. Constraints C1 and C2 form the foundational criteria for assigning tasks to processor P2. By adhering to these constraints during task allocation, it is guaranteed that processor P1 will not encounter any idle periods resulting from the distribution of work to P2. Consequently, any schedule constructed in compliance with C1 and C2 will maintain continuous execution on P1 without interruptions.
Moreover, Theorem 1 addresses schedules that deviate from C1 and C2 by providing a corrective mechanism: any idle interval on P1 can be resolved by transferring to P1 an appropriate task currently scheduled on P2. The steps explained by the flowchart in Figure 10 incorporate this principle by enforcing conditions C1 and C2 during the initial task assignment and, when idle time arises on P1, by applying the transformation prescribed in Theorem 1 to reassign work from P2 to P1, thereby restoring an idle-free schedule. □
Lemma 3. The schedule computed by the SIwUC algorithm is without idleness on the processor P2.
Proof. It is impossible that a time slot t violates either of the necessary conditions (C1 or C2) while its following time slot, , satisfies both conditions. In particular, if a necessary condition is not met at time t, it cannot be met at any subsequent time slot , where .
First case: At time t, the first necessary condition (C1) is not satisfied: there exists no task T in the tree such that . C1 remains not satisfied for all future times () because .
Second case: At time t, the second necessary condition (C2) is not satisfied: there exists no task T in the intree such that and . Observe that for , both and increase by at least 2 between t and . Therefore, at time t, C2 does not hold.
Suppose, for contradiction, that C2 becomes satisfied for some task T at a future time . Task T then meets the level requirement at time (); therefore, it also met the weaker level requirement at the earlier time t (). Since C2 was violated at time t, task T must have violated the weight requirement at t (). For C2 to hold at time , the weight requirement must now be satisfied (). Moreover, any newly available task at time , which would increase , automatically violates the level requirement (). Thus, C2 cannot become satisfied at , a contradiction. □
Lemma 4. If it is not possible to assign a whole cluster to (i.e., tasks of the cluster will be allocated to and ), or the root cannot be assigned to and the rest of the tasks can be allocated without idleness, then the makespan of the schedule is optimal.
Proof. Once all tasks are assigned to their respective processors, the resulting schedule achieves an optimal makespan of . □
Lemma 5. If it is not possible to obtain a schedule without idleness when the root is allocated to the processor and the root cannot be assigned to , then a decrease of R is the only solution.
Proof. Reducing R is necessary when processor has idle time and assigning the root task to would increase the makespan.
For this reason, in the definition of R, we speak about the maximum number of tasks that can be allocated to and not the exact number of tasks that can be allocated to . □
7. Conclusions and Future Works
The problem of scheduling intrees with unit execution time on two processors, where one of them is subject to unavailability periods, is studied in this paper, and a new optimal algorithm, entitled SIwUC, is proposed for it.
A proof of optimality of the proposed SIwUC algorithm is presented. Depending on the instance of the graph and the scheduling environment, the optimal schedule assigns the root either to processor P1 or to processor P2.
Both cases are treated in this paper. The obtained results emphasize how the scheduling solutions exhibit a form of symmetry in balancing tasks between the two processors despite unavailability constraints.
As future work, this problem can be extended to the case of unavailability on both processors or a number of machines greater than two.
Author Contributions
Conceptualization, K.B.A.; formal analysis, K.B.A. and K.Z.; methodology, K.B.A. and W.G.; software, K.B.A.; validation, K.B.A. and K.Z.; writing—original draft preparation, K.B.A.; writing—review and editing, K.B.A., K.Z., and W.G.; visualization, K.B.A. and W.G.; supervision, K.Z. and W.G.; project administration, K.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data is contained within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Abdellafou, K.B.; Sanlaville, E.; Mahjoub, A.; Korbaa, O. Scheduling UECT trees with communication delays on two processors with unavailabilities. IFAC-PapersOnLine 2015, 48, 1790–1795.
- Bal, P.K.; Mohapatra, S.K.; Das, T.K.; Srinivasan, K.; Hu, Y.-C. A joint resource allocation, security with efficient task scheduling in cloud computing using hybrid machine learning techniques. Sensors 2022, 22, 1242.
- Garg, H.; Rani, M.; Sharma, S.P. Preventive maintenance scheduling of the pulping unit in a paper plant. Jpn. J. Ind. Appl. Math. 2013, 30, 397–414.
- Liao, C.-J.; Chen, W.-J. Scheduling under machine breakdown in a continuous process industry. Comput. Oper. Res. 2004, 31, 415–428.
- Sanlaville, E.; Mahjoub, A.; Guinand, F. Scheduling problems on parallel machines with communicating tasks and unavailability. In Proceedings of the 15th Congress of the French Society of Operations Research and Decision Support (ROADEF), Marseille, France, 26–28 February 2014.
- Bamatraf, K.; Gharbi, A. Variable Neighborhood Search for Minimizing the Makespan in a Uniform Parallel Machine Scheduling. Systems 2024, 12, 221.
- Munier Kordon, A.; Kacem, F.; de Dinechin, B.D.; Finta, L. Scheduling an interval ordered precedence graph with communication delays and a limited number of processors. RAIRO-Oper. Res. 2013, 47, 73–87.
- Kaabi, J.; Harrath, Y. Scheduling on uniform parallel machines with periodic unavailability constraints. Int. J. Prod. Res. 2019, 57, 216–227.
- He, S.; Wu, J.; Wei, B.; Wu, J. Algorithms for tree-shaped task partition and allocation on heterogeneous multiprocessors. J. Supercomput. 2023, 79, 13210–13240.
- Shukur, H.; Zeebaree, S.R.M.; Ahmed, A.J.; Zebari, R.R.; Ahmed, O.; Tahir, B.S.A.; Sadeeq, M.A.M. A state of art: Survey for concurrent computation and clustering of parallel computing for distributed systems. J. Appl. Sci. Technol. Trends 2020, 1, 148–154.
- Ben Abdellafou, K.; Hadda, H.; Korbaa, O. An improved tabu search meta-heuristic approach for solving scheduling problem with non-availability constraints. Arab. J. Sci. Eng. 2019, 44, 3369–3379.
- Fuentes, Y.O.; Kim, S. Parallel computational microhydrodynamics: Communication scheduling strategies. AIChE J. 1992, 38, 1059–1078.
- Amoura, A.K.; Bampis, E.; Konig, J.-C. Scheduling algorithms for parallel Gaussian elimination with communication costs. IEEE Trans. Parallel Distrib. Syst. 1998, 9, 679–686.
- Zinder, Y.; Su, B.; Singh, G.; Sorli, R. Scheduling UET-UCT tasks: Branch-and-bound search in the priority space. Optim. Eng. 2010, 11, 627–646.
- Tang, N. Calculation of Latency of Real-Time System and Fixed-Parameter Tractability of UET-UCT Scheduling Problems. Ph.D. Thesis, Sorbonne Université, Paris, France, 2022.
- Giroudeau, R.; König, J.-C.; Valery, B. Scheduling UET-tasks on a star network: Complexity and approximation. 4OR 2011, 9, 29–48.
- Su, Y.; Vardi, S.; Ren, X.; Wierman, A. Communication-aware scheduling of precedence-constrained tasks on related machines. Oper. Res. Lett. 2023, 51, 709–716.
- Maiti, B.; Rajaraman, R.; Stalfa, D.; Svitkina, Z.; Vijayaraghavan, A. Scheduling precedence-constrained jobs on related machines with communication delay. In Proceedings of the IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), Durham, NC, USA, 16–19 November 2020; pp. 834–845.
- Graham, R.L. The combinatorial mathematics of scheduling. Sci. Am. 1978, 238, 124–133.
- Lee, C.-Y. Machine scheduling with an availability constraint. J. Glob. Optim. 1996, 9, 395–416.
- Lee, C.-Y. Two-machine flowshop scheduling with availability constraints. Eur. J. Oper. Res. 1999, 114, 420–429.
- Canon, L.-C.; Essafi, A.; Trystram, D. A proactive approach for coping with uncertain resource availabilities on desktop grids. In Proceedings of the 21st International Conference on High Performance Computing (HiPC), Goa, India, 17–20 December 2014; pp. 1–9.
- Jaykrishnan, G.; Levin, A. Scheduling with cardinality dependent unavailability periods. Eur. J. Oper. Res. 2024, 316, 443–458.
- Shabtay, D. Single-machine scheduling with machine unavailability periods and resource dependent processing times. Eur. J. Oper. Res. 2022, 296, 423–439.
- Ait Aba, M.; Zaourar, L.; Munier, A. Efficient algorithm for scheduling parallel applications on hybrid multicore machines with communications delays and energy constraint. Concurr. Comput. Pract. Exp. 2020, 32, e5573.
- Amiri, M.M.; Gündüz, D. Computation scheduling for distributed machine learning with straggling workers. IEEE Trans. Signal Process. 2019, 67, 6270–6284.
- Trystram, D.; Guinand, F. Scheduling UET Trees with communication delays on two processors. RAIRO-Oper. Res. 2000, 34, 131–144.
- Ben Abdellafou, K.; Hadda, H.; Korbaa, O. Heuristic algorithms for scheduling intrees on m machines with non-availability constraints. Oper. Res. 2021, 21, 55–71.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.