Article

Scheduling Intrees with Unavailability Constraints on Two Parallel Machines

1
Higher Institute of Applied Mathematics and Computer Science, University of Kairouan, Kairouan 3100, Tunisia
2
Research Lab LR17ES05, Modeling of Automated Reasoning Systems (MARS), Higher Institute of Computer Sciences and Communication Technologies (ISITCom), University of Sousse, Sousse 4011, Tunisia
3
Applied College, University of Tabuk, Tabuk 71491, Saudi Arabia
*
Author to whom correspondence should be addressed.
Symmetry 2026, 18(1), 103; https://doi.org/10.3390/sym18010103
Submission received: 12 November 2025 / Revised: 23 December 2025 / Accepted: 29 December 2025 / Published: 6 January 2026
(This article belongs to the Special Issue Symmetry in Process Optimization)

Abstract

This paper considers the two-parallel-machine scheduling problem with intree-precedence constraints, where the machines are subject to non-availability constraints. In the literature, this problem is considered an open problem of unknown complexity; this paper proves that it is solvable in polynomial time. Periods of machine unavailability are predetermined, and both task execution and inter-task communication are modeled as requiring one unit of time. The optimization criterion central to this study is the minimization of the makespan. Such a scheduling challenge is directly applicable to manufacturing environments, where production equipment can be intermittently offline for reasons such as unscheduled repairs or planned preventive maintenance. Adopting a unit-time task model also offers a valuable framework for subsequently scheduling larger, preemptable jobs. This work presents a new method, called Scheduling Intrees with Unavailability Constraints (SIwUC), which operates by aggregating tasks into distinct groups. The analysis establishes that the SIwUC algorithm produces optimal schedules and reveals how the problem structure and its solutions exhibit a form of symmetry in balancing task allocation between the two parallel machines.

1. Introduction

In classical scheduling, it is commonly assumed that a fixed set of machines or processors remains continuously available throughout the entire planning horizon [1,2]. However, this assumption is often unrealistic in practical environments, where machine availability may be affected by maintenance operations [3], unexpected breakdowns [4], or other operational constraints [5,6]. As a result, scheduling problems with machine non-availability have attracted considerable attention in the literature where different problems were treated such as scheduling where only a limited number of identical processors are available [7], scheduling on uniform parallel machines with periodic unavailability constraints [8,9], scheduling on distributed systems [10,11], and scheduling with communication costs [12,13].
In parallel, the scheduling of unit execution time (UET) tasks [14] has been extensively investigated due to its relevance for modeling preemptable and fine-grained tasks [15,16]. Moreover, precedence constraints and, in particular, intree structures naturally arise in many applications such as parallel computing and workflow scheduling, where communication delays between tasks cannot be neglected such as in the problem of scheduling precedence-constrained tasks [17] and the problem of scheduling-related jobs [18].
Despite these advances, the combined consideration of unitary tasks with intree-precedence constraints, communication costs, and machine non-availability remains largely unexplored.
Motivated by this gap, this paper addresses the problem of scheduling N unit execution time tasks related by intree-precedence constraints with unit communication costs on two identical parallel machines, where one machine is subject to non-availability periods. Using the three-field notation introduced by Graham et al. [19], the problem is denoted as P 2 | i n t r e e , p i = 1 , c = 1 , n r a | C m a x . This problem has not been previously investigated, and neither its computational complexity nor an optimal solution method has been reported in the literature.
The contributions in this paper are as follows:
  • Prove that the treated problem, whose complexity was previously open, is solvable in polynomial time.
  • Propose an optimal algorithm with polynomial complexity.
  • Prove the optimality of the proposed algorithm by a set of theorems and lemmas.
The paper is structured as follows: A review of related work is detailed in Section 2, while Section 3 provides a formal description of the problem under study. The proposed SIwUC solution is introduced and explained in Section 4, followed by its evaluation on illustrative examples in Section 5. Section 6 provides the optimality proof for the algorithm. The paper concludes in Section 7 with a summary and suggestions for future research directions.

2. Related Works

This research examines a scheduling problem that combines communication delays with precedence constraints and non-availability periods. Although these constraints have been studied separately in the past, no established method handles them jointly. After each constraint is reviewed separately, this section presents the Adapted CBoS (ACBoS) heuristic. To study the behavior of this heuristic, a lower bound is proposed and used as a reference against which ACBoS is compared.

2.1. Scheduling with Unavailabilities

The study of scheduling with non-availability constraints was first conducted by Lee et al. [20,21]. They proved that scheduling independent tasks with non-unitary execution times on identical machines, while taking unavailability periods into account, is NP-Hard [22,23,24].
They also proved that no single, universal approximation algorithm exists for this problem: instances can be constructed in which only an optimal schedule produces an acceptable makespan, so any non-optimal algorithm would perform arbitrarily poorly. Lee therefore made the simplifying assumption that one machine is always available. Under this assumption, and when machine j has a single unavailability period [s_j, t_j] (where 0 ≤ s_j ≤ t_j) within the scheduling horizon, Lee showed that the classical Longest Processing Time (LPT) algorithm achieves a worst-case bound of (m + 1)/2.
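For reference, LPT is the classical list-scheduling rule: jobs are sorted in non-increasing processing time, and each job is assigned to the currently least-loaded machine. A minimal Python sketch of the plain rule (ignoring the unavailability periods analyzed in [20,21]; names are illustrative):

```python
def lpt_schedule(processing_times, m):
    """Longest Processing Time list scheduling on m identical machines.

    Sorts jobs in non-increasing processing time and greedily assigns
    each one to the currently least-loaded machine. Returns the makespan
    and the per-machine job lists. (Availability periods are ignored in
    this sketch.)
    """
    loads = [0.0] * m
    assignment = [[] for _ in range(m)]
    for job, p in sorted(enumerate(processing_times),
                         key=lambda jp: -jp[1]):
        k = loads.index(min(loads))   # least-loaded machine
        loads[k] += p
        assignment[k].append(job)
    return max(loads), assignment
```

On the instance [3, 3, 2, 2, 2] with two machines, LPT yields makespan 7 while the optimum is 6, illustrating why only worst-case ratios (not optimality) can be guaranteed.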
In [25], the authors study a scheduling scenario that does not allow more than half of the machines to be down at any given time. Given this setting, the authors prove that the LPT heuristic has a performance ratio of at most 2. In [26], the same results are extended to a situation where up to λ machines can be down at once, that is, 1 λ m 1 .
In this generalized setting, the authors show that the performance ratio of LPT remains bounded from above [26].
In [8], the authors address the scheduling problem involving two processors, each of which experiences a single interval of unavailability. They proposed exponential-time algorithms that give optimal solutions and showed, through experiments, that their algorithms generate good results, especially in practical cases.

2.2. Scheduling Task Problems with Precedence Constraints and Communications

The CBoS algorithm [27] (Cluster-Based Scheduling) is a polynomial-time optimal algorithm for scheduling intree-structured tasks on two identical processors under UECT constraints.
Characteristics:
  • Always assign the root to processor P1.
  • Identify clusters (subtrees) that can be entirely assigned to P2.
  • Balance load between P1 and P2 while minimizing communication delays (by keeping predecessor–successor pairs together when beneficial).
Advantages:
  • Polynomial time for two processors.
  • Minimizes communication by clustering related tasks.
  • Load balancing through the R parameter.
  • Optimal for many intree structures under UECT constraints.
  • Simple to implement.
Limits:
  • Only for two processors.
  • Assumes unit execution and communication times.
  • May not be optimal for all intree structures.
  • Does not extend easily to m > 2 processors.
Algorithm steps:
  • R computing, which is the number of tasks that will be assigned to the processor P2.
  • Cluster selection for P2.
  • Processor P1 scheduling.
  • Processor P2 scheduling.
CBoS algorithm can be summarized as in Algorithm 1:
Algorithm 1 Clusters determination algorithm
Input: Intree Tree, Integer N
Output: Tasks allocated to P2
 1: Begin
 2:   Integer R ← (N − 2)/2
 3:   Integer currentLevel ← 2
 4:   while R > 0 do
 5:     L ← {T ∈ Tree | T not marked and level(T) = currentLevel}
 6:     Sort L in decreasing order of weight
 7:     if |L| > 1 then
 8:       Let Ti be the first task in L such that w(Ti) ≤ R
 9:       if such a Ti exists then
10:         cluster(Ti) is allocated to P2
11:         cluster(Ti) is marked
12:         R ← R − w(Ti)
13:         Ti is removed from L
14:       else
15:         currentLevel ← currentLevel + 1
16:       end if
17:     else
18:       Let Ti be the only task of L
19:       R ← min(R, (w(Ti) − 1)/2)
20:       currentLevel ← currentLevel + 1
21:     end if
22:   end while
23: End
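The cluster-selection loop of Algorithm 1 can be sketched in Python. This is a minimal illustration, not the authors' implementation: the task container, a `weight` dictionary (subtree size of each task), and a `level` dictionary (root at level 1) are assumed stand-ins for the intree bookkeeping.

```python
def select_clusters(tasks, weight, level, n):
    """Sketch of CBoS cluster selection (Algorithm 1): pick subtree
    roots whose clusters go to P2 until the budget R is exhausted.

    weight[t]: size of cluster(t); level[t]: level of t (root = 1).
    Returns the list of cluster roots allocated to P2.
    """
    R = (n - 2) // 2
    current_level = 2
    max_level = max(level.values())      # safety bound for the sketch
    marked = set()
    allocated = []
    while R > 0 and current_level <= max_level:
        # unmarked tasks at the current level, heaviest first
        L = sorted((t for t in tasks
                    if t not in marked and level[t] == current_level),
                   key=lambda t: -weight[t])
        if len(L) > 1:
            fit = next((t for t in L if weight[t] <= R), None)
            if fit is not None:
                allocated.append(fit)    # cluster(fit) goes to P2
                marked.add(fit)
                R -= weight[fit]
                continue                 # re-examine the same level
        elif len(L) == 1:
            R = min(R, (weight[L[0]] - 1) // 2)
        current_level += 1
    return allocated
```

For a 6-task intree whose root has two subtrees of sizes 2 and 3, R = (6 − 2)/2 = 2, and the sketch allocates the size-2 subtree to P2.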
In the following Figure 1, an example of scheduling an intree by the CBoS algorithm is presented.
The number of tasks is 18, and (18 − 2)/2 = 8, so R is initialized to 8. In this example, the CBoS algorithm assigns 3 subtrees to the processor P2. The total number of tasks in the three subtrees is 8, and the rest are allocated to the processor P1.
The computed schedule is described by Figure 2.

2.3. Problem of Scheduling Tasks with Unavailabilities and Precedence Constraints

In the current literature, no optimal algorithm is known for scheduling precedence-constrained jobs on processors that are unavailable for certain time periods. In [5], the authors proposed a method to address this problem, specifically considering scheduling UECT intrees on two identical processors, called P1 and P2, both of which may experience unavailability periods multiple times.
The algorithm, named Adapted CBoS (ACBoS), is based on the original CBoS algorithm. The most important difference between the two algorithms is how ACBoS calculates the parameter R, which is the upper bound on the number of jobs that can be assigned to processor P2 in any optimal schedule. The algorithm computes the value of R as follows:
a. Initialization:
Let N denote the total number of tasks in the intree.
Set R = 0 as the task count allocated to P2.
Set t = 0 as the discrete time slot index.
b. Processor availability scanning:
At each time slot t, the availability states of P1 and P2 are evaluated.
c. Task assignment when P1 is available:
If P1(t) is available, decrement N by 1, corresponding to the assignment of one task to P1.
d. Task assignment when P2 is available:
If P2(t) is available, decrement N by 1 and increment R by 1, representing the assignment of one task to P2.
e. Time progression:
After processing the availability at time t, increment t by 1. If both processors are unavailable at t, increment t until at least one becomes available.
f. Iteration condition:
Repeat steps b–e until N ≤ 3.
g. Boundary case N = 3:
  • If both P1(t) and P2(t) are available, we simulate assigning two tasks (one to each processor): set N = N − 2 and R = R + 1.
  • If only P1(t) is available, we simulate assigning one task to P1: set N = N − 1 (no change to R).
  • If only P2(t) is available, we simulate assigning one task to P2: set N = N − 1 and R = R + 1. In each case, t is incremented after the simulated assignment to reflect time progression.
h. Final case N = 2:
Increment R by 1 if P1 is unavailable at both t and t + 1, indicating that a task must be assigned to P2 due to consecutive unavailability of P1.
The algorithm terminates by returning R, which serves as an adaptive upper bound for task allocation to P2 under intermittent processor unavailability.
The two processors continue to be simulated until N = 0. At this point, the ACBoS algorithm continues in the same manner as the original CBoS algorithm.
Steps of R computing (for the considered ACBoS algorithm) are illustrated by Algorithm 2:
Algorithm 2 R computing algorithm
Input: v: root of the intree
Output: R
 1: N ← number of tasks in the intree
 2: t ← 0, R ← 0
 3: while N > 3 do
 4:   if P1(t) then
 5:     N ← N − 1
 6:   end if
 7:   if P2(t) then
 8:     N ← N − 1
 9:     R ← R + 1
10:   end if
11:   t ← t + 1
12: end while
13: while not P1(t) and not P2(t) do      ▷ skip slots where both processors are unavailable
14:   t ← t + 1
15: end while
16: if N = 3 then
17:   if P1(t) then
18:     if P2(t) then
19:       R ← R + 1
20:       N ← 1
21:     else
22:       N ← 2
23:       t ← t + 1
24:     end if
25:   else
26:     N ← 2
27:     R ← R + 1
28:     t ← t + 1
29:   end if
30: end if
31: if N = 2 then
32:   while not P1(t) and not P2(t) do
33:     t ← t + 1
34:   end while
35:   if not P1(t) and not P1(t + 1) then  ▷ P1 unavailable at both t and t + 1
36:     R ← R + 1
37:   end if
38: end if
39: return R
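Read this way, the R computation of Algorithm 2 admits a short Python transcription. This is a sketch under the stated assumptions: `p1` and `p2` are illustrative availability predicates for the two processors, and the skip loops advance past slots where both processors are down, as described in steps e and h above.

```python
def compute_R_acbos(n, p1, p2):
    """Sketch of the ACBoS R computation (Algorithm 2).

    n: number of tasks; p1(t), p2(t): availability predicates for
    processors P1 and P2 at slot t. Returns the upper bound R on the
    number of tasks assignable to P2.
    """
    t, R = 0, 0
    while n > 3:                          # steps b-e
        if p1(t):
            n -= 1
        if p2(t):
            n -= 1
            R += 1
        t += 1
    while not p1(t) and not p2(t):        # skip slots with both down
        t += 1
    if n == 3:                            # boundary case (step g)
        if p1(t):
            if p2(t):                     # both up: one task to each
                R += 1
                n = 1
            else:                         # only P1 up
                n = 2
                t += 1
        else:                             # only P2 up
            n = 2
            R += 1
            t += 1
    if n == 2:                            # final case (step h)
        while not p1(t) and not p2(t):
            t += 1
        if not p1(t) and not p1(t + 1):   # P1 down twice in a row
            R += 1
    return R
```

With both processors always available, the simulation assigns two tasks per slot, so, e.g., 10 tasks yield R = 4 and 7 tasks yield R = 3.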
In order to study the behavior of this algorithm, a lower bound is proposed and a comparison between ACBoS and the lower bound is carried out. The proposed lower bound can be summarized as in Algorithm 3.
Algorithm 3 Bound computation algorithm
 1: N ← number of tasks in the intree
 2: bound ← 0, t ← 0      ▷ t: current time slot
 3: while N > 0 do
 4:   if P1(t) then
 5:     N ← N − 1
 6:   end if
 7:   if P2(t) then
 8:     N ← N − 1
 9:   end if
10:   bound ← bound + 1
11:   t ← t + 1
12: end while
13: return bound
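A direct Python transcription of this bound (with the time index advanced each slot, and `p1`/`p2` as illustrative availability predicates) might read as follows; it counts the slots needed to place the N unit tasks greedily, ignoring precedence and communication constraints, which is why it lower-bounds any feasible makespan.

```python
def makespan_lower_bound(n, p1, p2):
    """Sketch of the lower bound of Algorithm 3: count the time slots
    needed to place n unit tasks, one per available processor per slot,
    ignoring precedence and communication constraints."""
    bound, t = 0, 0
    while n > 0:
        if p1(t):
            n -= 1
        if p2(t):
            n -= 1
        bound += 1
        t += 1
    return bound
```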
The simulation of the heuristic [28] shows that it gives good results for instances of large trees (Figure 3) but mediocre results for chain instances (Figure 4).
Figure 4 illustrates that the disparity in makespan between the lower bound and the ACBoS algorithm widens as the number of tasks increases. Additionally, the computational complexity of this problem specifically for schedules involving multiple unavailability periods on both machines remains an open research question.

3. Problem Formulation

We are given the following:
  • Tasks: N tasks, numbered 1, 2, …, N.
  • Precedence relation: These tasks form an intree (also called a converging tree).
In an intree:
  • Each task has at most one immediate successor; precedence arcs go from predecessor to successor and converge toward the root.
  • In precedence terms, each task can have multiple predecessors but only one successor (except the root task, which has no successor). A successor must therefore wait until all of its predecessors finish. Leaf tasks are those with no predecessors (no incoming arcs), and every path leads toward the single root. In scheduling terms, a predecessor must finish before its successor starts.
Execution time:
  • Every task takes unit time to execute (UET = Unit Execution Time).
Communication cost:
  • If a task Ti and its immediate successor Tj are scheduled on different processors, then after Ti finishes, one unit of communication delay must elapse before Tj can start (UCT = Unit Communication Time).
  • If they are on the same processor, no communication delay occurs.
  • Communication is only between predecessor–successor pairs.
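The communication rule above amounts to a one-line earliest-start computation. The helper below is a hypothetical illustration (the name and signature are not from the paper):

```python
def earliest_start(pred_finish, same_processor):
    """Earliest start time of a successor under the UECT model: a
    predecessor finishing at time pred_finish delays its successor by
    one extra unit of communication only when the two tasks run on
    different processors."""
    return pred_finish if same_processor else pred_finish + 1
```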
Processors:
  • Two identical processors are available to schedule the tasks: the first, denoted P1, is always available, while the second, P2, is subject to unavailability periods.
    Objective:
  • Minimize total schedule length (makespan) subject to precedence, communication, and resource constraints.
    Notation:
  • Using the three-field notation introduced by Graham et al. [19], the problem is denoted as P 2 | i n t r e e , p i = 1 , c = 1 , n r a | C m a x .
An example of scheduling under the UECT assumption and under the UET assumption is provided to explain the difference between the two (Figure 5).
Figure 5 represents the N tasks related by intree-precedence constraints; all task execution times are unitary.
Figure 6 represents a scheduling under UET assumption and Figure 7 represents a scheduling under UECT assumption.
The first schedule assumes negligible communication overhead between tasks, regardless of processor assignment, thus focusing solely on precedence constraints. In contrast, the second schedule imposes a unit communication cost for dependent tasks executed on different processors. This restriction prevents task 2 from starting at the second time unit because its predecessor, task 3, runs on P1 while it is assigned to P2.
The objective of this work is to schedule N intree-structured tasks across two identical machines, respecting precedence relations to minimize the makespan. The schedule must also account for unavailability periods on one machine, with the constraint that each machine processes only one task at a time.

4. The Proposed Scheduling Algorithm: Scheduling Intrees with Unavailability Constraints (SIwUC)

In this section, we introduce a polynomial algorithm for minimizing the makespan of UECT intrees on two identical processors (P1 and P2), where one of them is subject to non-availability constraints. Without loss of generality, the processor subject to unavailabilities is denoted P2.

4.1. Principle of the Algorithm

The core strategy of the algorithm is to maximize the workload assigned to processor P2, subject to its scheduled downtime. Once P2 has been loaded to capacity, the remaining tasks are distributed to P1 or partitioned between both processors, based on the structural characteristics of the given task set.
Before presenting the complete algorithmic procedure, we introduce a set of dominance rules that govern task assignment and ordering.
Theorem 1.
A schedule that eliminates idle time on processor P1 is considered dominant. This means that any optimal schedule exhibiting idle periods on P1 can be reconfigured without increasing the makespan into an equally optimal schedule where P1 operates continuously with no idle intervals.
Proof of Theorem 1.
Consider an optimal schedule with idleness on P1 (Figure 8).
The idle time on P1 in this case originated from a communication delay introduced between task X and its single successor, task Y.
Specifically, X was processed on P2 at time t₁, while Y was scheduled on P1 at time t. Although X may have had multiple predecessors, the last of these completed by time t₃, leaving sufficient room to reassign X from P2 at t₁ to P1 at t without violating any communication constraints or increasing the makespan.
This transformation illustrates a general principle: schedules that eliminate idle time on P1 dominate those that do not, since any schedule with idle periods can be converted, without worsening the makespan, into one where P1 operates continuously. □
Theorem 2.
Consider an optimal schedule, represented by Figure 9, in which processor P2 exhibits idle time.
Proof of Theorem 2.
This situation arises because task Y, a predecessor of task X, was only executed on P1 starting at time t₁. Due to the inherent properties of the intree structure, Y can have at most one successor, which is task X. As a result, Y can be relocated from P1 at time t₁ to P2 at time t without violating any constraints, as all predecessors of Y are completed by time t₂. Consequently, reassigning Y to P2 at time t eliminates the initial idle interval on P2. While this shift may introduce new idle time on P1, Theorem 1 guarantees that idle periods on P1 can be removed without increasing the overall makespan. Moreover, this removal does not reintroduce the original idle interval on P2. Therefore, by combining the reassignment of Y to P2 with the elimination of P1's idle time, we obtain a revised schedule in which both processors operate without any idle intervals, preserving optimality. □

4.2. Description of the Proposed Method

4.2.1. First Step: R Computing

The parameter R is designed to act as a tight upper bound on the number of tasks that can be assigned to processor P2 within the SIwUC (Scheduling Intrees with Unavailability Constraints) algorithm. The earlier bound of (n − 2)/2 established for the CBoS algorithm no longer applies, since processors may now be subject to intermittent unavailability periods. To address this, we introduce a new iterative procedure to compute R.
We initialize R = 0 and then incrementally increase R by verifying whether any task in the intree can be feasibly assigned to P2 without inducing idle time on P1. This feasibility check is repeated iteratively; each time a valid task is found, R is increased by one. The procedure terminates when no additional task can be assigned to P2 without violating the idle-free condition on P1. The value of R at termination is adopted as the final bound. Before elaborating on the algorithm, we introduce two conditions that must be satisfied for any task to be eligible for assignment to P2.
First necessary condition (C1): In a schedule without idleness, if a task T can be executed on the processor P2 at time t, then
P2(t) is true and level(T) ≤ N − NB(t).
Proof of C1.
Consider a task T scheduled on P2 at time t.
In the intree, there are N − level(T) tasks that are not successors of T. For the schedule to be feasible, this number must be sufficient to occupy all time slots that require filling. Specifically, these slots include the following:
  • NB(t) − 1 slots on P2 before time t (excluding the slot occupied by T itself);
  • The slot on P1 at time t.
Because all successors of T incur a communication delay at time t, the slot on P1 at t must be taken by a task that is not a successor of T. Therefore, we obtain the following necessary condition
N − level(T) ≥ (NB(t) − 1) + 1 = NB(t).
   □
Second necessary condition (C2): In any given schedule, if a task T can be executed on the processor P2 at slot t, then P2(t) is true and
weight(T) ≤ NB(t) − 2 if t > 1,
weight(T) = 1 if t = 1.
Proof of C2.
The algorithmic step establishes the upper bound R, representing the maximum number of tasks that can be allocated to processor P2 without introducing idle time on processor P1. This bound is determined via an iterative analysis of the intree structure. A candidate task T is considered eligible for assignment to P2 only if it satisfies two key conditions C1 and C2. Each time a task meets both criteria, R is incremented. The logical foundation for these assignment conditions rests on the observation that a task T placed on P2 faces two obstructions that limit the execution of its entire subtree at specific times:
  • Precedence blocking: the successor(s) of task T can only start after T has completed execution.
  • Communication blocking: when a successor of task T is scheduled to execute on P1, there is a one-unit delay for message transmission once T completes on P2, so that successor cannot start in the slot immediately after T.
Here, we have calculated the maximum possible value of R. At the next scheduling stage, the algorithm tries to form a schedule for P2 containing R tasks and a schedule for P1 containing N − R tasks; if this does not succeed in creating a feasible schedule, the algorithm falls back to allocating all remaining tasks to P1.    □
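As a quick illustration, conditions C1 and C2 can be folded into a single eligibility test. In this hypothetical sketch, `level`, `weight`, `p2`, and `nb` are assumed callables, with NB(t) counting the P2 slots up to slot t as in the conditions above:

```python
def eligible_for_p2(task, t, n, level, weight, p2, nb):
    """Check the necessary conditions C1 and C2 for scheduling `task`
    on P2 at slot t (sketch; lookups are assumed callables).

    C1: level(T) <= N - NB(t)
    C2: weight(T) <= NB(t) - 2 if t > 1; weight(T) == 1 if t == 1
    """
    if not p2(t):                        # P2 must be available at t
        return False
    if level(task) > n - nb(task and t): # C1 violated
        return False
    if t == 1:
        return weight(task) == 1         # C2 at the first slot
    return weight(task) <= nb(t) - 2     # C2 for t > 1
```

For instance, a leaf (weight 1, level 2) is eligible at t = 1, while a subtree of weight 3 is rejected at a slot where NB(t) = 4.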
R computing algorithm can be summarized as in Algorithm 4.
Algorithm 4 R Computation
Input: Intree Tree, Integer N
Output: Integer R
 1: B ← true                                    ▷ 1 operation
 2: t ← 1                                       ▷ 1 operation
 3: R ← 0                                       ▷ 1 operation
 4: p ← 0                                       ▷ 1 operation
 5: while B = true do                           ▷ N − 5 comparisons
 6:   if P2(t) then                             ▷ N − 5 evaluations
 7:     if t = 1 then                           ▷ N − 5 comparisons
 8:       p ← 1                                 ▷ N − 5 assignments
 9:     else
10:       p ← R + (t − 1)                       ▷ N − 5 assignments
11:     end if
12:     if in Tree there exists an unmarked task Ti such that
        w(Ti) ≤ p and level(Ti) ≤ N − (t + R + 1) then   ▷ 7N(N − 5) operations
13:       if level(Ti) = N − (t + R + 1) then   ▷ 5(N − 5) comparisons
14:         mark all tasks in the tree except the successors of Ti   ▷ 2(N − 1)(N − 5) operations
15:       end if
16:       R ← R + 1                             ▷ 2(N − 5) operations
17:       mark Ti                               ▷ 2(N − 5) operations
18:     else
19:       B ← false                             ▷ N − 5 assignments
20:     end if
21:   end if
22:   if t + R > N − 5 then                     ▷ 3(N − 5) comparisons
23:     B ← false
24:   end if
25:   t ← t + 1                                 ▷ 2(N − 5) operations
26: end while
27: remove task markings for all tasks in the intree                 ▷ N operations
28: return R
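Putting the pieces together, Algorithm 4 admits the following Python sketch. This is an illustration under stated assumptions, not the authors' implementation: `weight`, `level` (root at level 1), and `succs` (the set of successors of each task) are assumed precomputed lookups on the intree, and `p2(t)` is an availability predicate.

```python
def compute_R_siwuc(tasks, weight, level, succs, n, p2):
    """Sketch of the SIwUC R computation (Algorithm 4).

    weight[T]: subtree (cluster) size of T; level[T]: level of T
    (root = 1); succs[T]: set of successors of T; p2(t): True iff P2
    is available at slot t. Returns the bound R on the number of
    tasks assignable to P2 without idling P1.
    """
    B, t, R = True, 1, 0
    marked = set()
    while B:
        if p2(t):
            p = 1 if t == 1 else R + (t - 1)
            # an unmarked task satisfying the weight and level tests
            cand = next((T for T in tasks
                         if T not in marked
                         and weight[T] <= p
                         and level[T] <= n - (t + R + 1)), None)
            if cand is None:
                B = False
            else:
                if level[cand] == n - (t + R + 1):
                    # only the successors of cand remain eligible
                    marked.update(T for T in tasks
                                  if T not in succs[cand])
                R += 1
                marked.add(cand)
        if t + R > n - 5:
            B = False
        t += 1
    return R
```

On a 7-task star (root plus six leaves) with P2 always available, the sketch returns R = 2, matching the CBoS bound (7 − 2)/2; on a chain, where no task can go to P2 without idling P1, it returns 0.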

4.2.2. Second Step: Schedule Construction

Figure 10 presents the core task assignment algorithm. The fundamental principle guiding this algorithm is to distribute tasks between processors P1 and P2 so as to minimize the overall makespan.
To achieve this, we employ a multi-case heuristic tailored to reduce communication delays and, most critically, to evenly balance the computational workload across both processors. Maintaining a balanced load prevents either processor from becoming a bottleneck, which is essential for minimizing the total schedule length.

5. Complete Examples

In this section, two complete examples are described. In the first one, we consider an instance of intree and scheduling environment such that the optimal schedule assigns the root to the processor P1. Then, in the second example, we present another instance of intree such that the optimal schedule allocates the root to the processor P2.

5.1. Example 1

In this example, we consider an instance with 16 tasks as illustrated by Figure 11 and a scheduling environment as described by Figure 12.
  • R computing steps: Table 1 presents the calculation details of R, which represents the maximum number of tasks that can be assigned to the processor P2.
  • Schedule computing steps: Table 2 presents the details of the scheduling calculation, step by step.
  • Computed schedule: Figure 13 presents the optimal schedule, where 5 tasks are assigned to the processor P2 and 11 tasks are assigned to P1.

5.2. Example 2

In this example, we consider an instance with 7 tasks as illustrated by Figure 14 and a scheduling environment as described by Figure 15.
Step 1:
Initialization: The algorithm initializes R : = 2 , as determined by Algorithm 4. This parameter R quantifies the remaining communication capacity or available time slack for task scheduling.
Step 2:
Level Selection: The current processing level is set to c l : = 2 , establishing the baseline for subsequent scheduling decisions.
Step 3:
Task List Identification: The scheduler identifies the task set L = { 5 , 2 } for processing. Notably, all tasks in L have weights exceeding the current resource constraint R.
Step 4:
Resource Verification: Based on the condition N B ( t ( R ) ) = N c l , the scheduler examines the next processing level to locate a task T whose corresponding cluster, cluster ( T ) , can be feasibly scheduled within the time interval [ 0 , 3 ] .
Step 5:
Cluster Assignment: Cluster 3 is determined to be schedulable within [ 0 , 3 ] . This assignment satisfies the feasibility conditions since w ( 2 ) < 6 and processor P 2 maintains availability (experiencing no unavailability periods) during the subsequent interval [ 3 , 5 ] .
Step 6:
Resource Update: Following the successful allocation of cluster 3, the remaining resource R is decremented to zero, reflecting the complete utilization of available resources.
Step 7:
Chain Scheduling: The residual task set { 7 , 6 , 5 , 2 , 1 } undergoes sequential chain scheduling: tasks are assigned to processor P 1 during [ 0 , 3 ] , then to processor P 2 during [ 3 , 5 ] , ensuring efficient processor utilization and meeting timing constraints.
The optimal computed schedule is described by Figure 16:

5.3. Example 3

In this example, we consider an instance with 22 tasks as illustrated by Figure 17 and a scheduling environment as described by Figure 18.
For a number of tasks equal to 22, the value of R calculated by Algorithm 4 is equal to 7, and the optimal schedule computed by the SIwUC algorithm is represented by Figure 19. Indeed, the subtree rooted at task T3 is allocated to P2, and the remaining tasks are assigned to P1.

6. Optimality Proof

Lemma 1.
An optimal schedule cannot initially assign more than R tasks to the processor P 2 .
Proof. 
The parameter R, derived from the initial algorithm, represents the upper bound on tasks assignable to P 2 while maintaining a schedule with no idle time on P 1 . Under the assumption that the root task of the intree is allocated to P 1 , the makespan achieved by such a schedule is N R , where N denotes the total number of tasks. This section aims to demonstrate that this makespan is optimal.
For contradiction, suppose a schedule exists that assigns R + 1 tasks to P2. By the definition of R, at least one of these R + 1 tasks violates one or both conditions (C1 or C2) required for a schedule with no idle time on P1. Consequently, P1 must experience idle time due to that task. Given the intree structure and the communication delays between tasks, this idle time will not be isolated; rather, it will propagate, leading to at least two distinct idle periods on P1. As a result, the makespan would increase to at least (N − R) + 1. This contradicts the optimal makespan of N − R, thereby confirming that the maximum feasible R yields C_max = N − R, which is indeed optimal. □
Lemma 2.
The schedule computed by SIwUC algorithm is without idleness on the processor P 1 .
Proof. 
Constraints C1 and C2 form the foundational criteria for assigning tasks to processor P 2 . By adhering to these constraints during task allocation, it is guaranteed that processor P 1 will not encounter any idle periods resulting from the distribution of work to P 2 . Consequently, any schedule constructed in compliance with C1 and C2 will maintain continuous execution on P 1 without interruptions.
Moreover, Theorem 1 addresses schedules that deviate from C1 and C2 by providing a corrective mechanism: any idle interval on P 1 can be resolved by transferring an appropriate task currently scheduled on P 2 to P 1 . The steps described by the flowchart in Figure 10 incorporate this principle by enforcing conditions C1 and C2 during initial task assignment and, when idle time arises on P 1 , by applying the transformation prescribed in Theorem 1 to reassign work from P 2 to P 1 , thereby restoring an idle-free schedule. □
Lemma 3.
The schedule computed by the SIwUC algorithm has no idle time on the processor P 2 .
Proof. 
A time slot t cannot violate one of the necessary conditions (C1 or C2) while its following time slot, t + 1, satisfies both. More generally, if a necessary condition is not met at time t, it cannot be met at any subsequent time slot t + k, where k ≥ 1.
First case: At time t, the first necessary condition (C1) is not satisfied: there exists no task T in the intree such that level(T) ≤ N − NB(t). Since NB(t + k) > NB(t) for every k ≥ 1, the bound N − NB(t + k) is even smaller, so C1 remains violated at all later times.
Second case: At time t, the second necessary condition (C2) is not satisfied: there exists no task T in the intree such that weight(T) ≤ NB(t) − 2 and level(T) ≤ N − NB(t). Suppose, for contradiction, that some task T satisfies both requirements at a later time t + k. Since level(T) ≤ N − NB(t + k) and NB(t + k) > NB(t), task T already satisfied the more relaxed level requirement level(T) ≤ N − NB(t) at time t. For C2 to have failed at time t, T must therefore have violated the weight requirement, that is, weight(T) > NB(t) − 2. For T to satisfy the weight requirement at time t + k, we would need weight(T) ≤ NB(t + k) − 2, which is only possible because NB(t + k) > NB(t). However, any task that becomes available by time t + k and causes NB to increase automatically violates the level requirement. Hence C2 cannot become satisfied at time t + k, which is a contradiction. □
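The persistence of a violated condition can be made concrete with a minimal sketch. The form of C1 used below (there exists a task T with level(T) ≤ N − NB(t)) is our reading of the proof, and the levels and NB values are hypothetical; the point is only that, with fixed task levels and NB strictly increasing, once C1 fails it never holds again:

```python
def c1_holds(levels, n_total, nb):
    # C1, as reconstructed from the proof: there exists a task T
    # with level(T) <= N - NB(t).
    return any(lv <= n_total - nb for lv in levels)

# Hypothetical data: fixed task levels in a tree of N = 10 tasks,
# and NB(t) strictly increasing over the time slots.
levels = [7, 8, 9]
N = 10
nb_over_time = [3, 4, 5, 6, 7]  # NB(t), NB(t+1), ...

history = [c1_holds(levels, N, nb) for nb in nb_over_time]
print(history)  # -> [True, False, False, False, False]

# Once C1 fails, it stays failed for every later slot.
first_fail = history.index(False)
assert not any(history[first_fail:])
```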
Lemma 4.
If it is not possible to assign a whole cluster to P 2 (i.e., the tasks of the cluster must be split between P 1 and P 2 ), or the root cannot be assigned to P 1 , and the remaining tasks can be allocated without idleness, then the makespan of the schedule is optimal.
Proof. 
Once all tasks are assigned to their respective processors, the resulting schedule achieves the optimal makespan N − R. □
Lemma 5.
If it is not possible to obtain a schedule without idleness when the root is allocated to the processor P 1 , and the root cannot be assigned to P 2 , then decreasing R is the only remaining option.
Proof. 
Reducing R is necessary when processor P 1 has idle time and assigning the root task to P 2 would increase the makespan.
This is why the definition of R refers to the maximum number of tasks that can be allocated to P 2 , rather than the exact number of tasks allocated to P 2 . □

7. Conclusions and Future Works

This paper studies the problem of scheduling intrees with unit execution times on two processors where one of the processors is subject to unavailability periods, and proposes a new optimal algorithm, called SIwUC, for this problem.
An optimality proof of the proposed SIwUC algorithm is presented. Depending on the instance of the graph and the scheduling environment, the optimal schedule assigns the root either to the processor P1 or to the processor P2; both cases are treated in this paper. The obtained results emphasize how the scheduling solutions exhibit a form of symmetry in balancing tasks between the two processors despite the unavailability constraints.
As future work, this problem can be extended to the case of unavailability on both processors or to a number of machines greater than two.

Author Contributions

Conceptualization, K.B.A.; formal analysis, K.B.A. and K.Z.; methodology, K.B.A. and W.G.; software, K.B.A.; validation, K.B.A. and K.Z.; writing—original draft preparation, K.B.A.; writing—review and editing, K.B.A., K.Z., and W.G.; visualization, K.B.A. and W.G.; supervision, K.Z. and W.G.; project administration, K.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Task graph of the instance.
Figure 2. Optimal schedule.
Figure 3. Simulation of ACBoS heuristic.
Figure 4. Simulation of ACBoS heuristic.
Figure 5. Task graph.
Figure 6. Scheduling under UET assumption.
Figure 7. Scheduling under UECT assumption.
Figure 8. Optimal schedule.
Figure 9. Optimal schedule.
Figure 10. Flowchart of the proposed algorithm. L: the list of candidate tasks. R: the current upper bound on the number of tasks on P2. w(T): the weight or processing time of task T. cluster(T): the set of tasks in the subtree rooted at T. t(R): a function that maps the value R to a specific time point on P2. N: the total number of tasks. Cmax: the makespan (total schedule length). level cl + 1: a specific level in the intree graph. The complexity of the SIwUC algorithm is O(N²).
Figure 11. Instance of the intree.
Figure 12. Machine profile.
Figure 13. Optimal schedule.
Figure 14. Instance of the intree.
Figure 15. Machine profile.
Figure 16. Optimal schedule.
Figure 17. Instance of the intree.
Figure 18. Machine profile.
Figure 19. Optimal schedule.
Table 1. Iteration values of the algorithm parameters.

Iteration (t) | P2(t) | p Value | R Value
1 | true | 1 | 1
2 | false | 1 | 1
3 | true | 3 | 2
4 | true | 5 | 3
5 | true | 7 | 4
6 | true | 9 | 5
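The trace in Table 1 is consistent with a simple update rule: whenever P2 is available in a slot, p increases by 2 and R by 1; otherwise both stay unchanged. The following sketch reproduces the table under that inferred rule (the rule and the initial values are our reading of the printed trace, not a quotation of Algorithm 3):

```python
def trace_p_r(availability):
    # Reproduce the p and R columns of Table 1 from the availability
    # of P2 in each iteration. Rule inferred from the trace: an
    # available slot adds 2 to p and 1 to R; an unavailable slot
    # leaves both unchanged.
    p, r = -1, 0
    rows = []
    for t, available in enumerate(availability, start=1):
        if available:
            p, r = p + 2, r + 1
        rows.append((t, available, p, r))
    return rows

rows = trace_p_r([True, False, True, True, True, True])
for row in rows:
    print(row)
# -> (1, True, 1, 1), (2, False, 1, 1), (3, True, 3, 2),
#    (4, True, 5, 3), (5, True, 7, 4), (6, True, 9, 5)
```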
Table 2. Execution steps of the algorithm.

Level | List L | Satisfied Condition | Chosen Task | R | Marked Tasks
2 | {2} | 1. All tasks in L have weights higher than R. 2. NB(t(R)) < N − cl. 3. cl := cl + 1. | — | 5 | —
3 | {3} | 1. All tasks in L have weights higher than R. 2. NB(t(R)) < N − cl. 3. cl := cl + 1. | — | 5 | —
4 | {4, 11} | 1. All tasks in L have weights higher than R. 2. NB(t(R)) < N − cl. 3. cl := cl + 1. | — | 5 | —
5 | {5, 6, 7, 8, 9, 10, 12} | 1. In L there exists a task Ti such that w(Ti) ≤ R. | 12 | 0 | {12, 13, 14, 15, 16}
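The level-by-level scan traced in Table 2 can be sketched as follows. The candidate lists, the chosen task, and the marked cluster are taken from the table; the subtree weights are hypothetical values chosen so that only task 12 fits the budget R = 5, and the function and variable names are ours:

```python
def scan_levels(level_lists, weight, cluster, r):
    # Level-by-level scan (our reading of Table 2): if every candidate
    # task on the current level is heavier than the remaining budget R,
    # move to the next level; otherwise assign the fitting task, mark
    # its cluster, and subtract its weight from R.
    marked, rows = set(), []
    for level, candidates in level_lists:
        fitting = [t for t in candidates if weight[t] <= r]
        if not fitting:
            rows.append((level, None, r))
            continue
        chosen = fitting[0]
        marked |= cluster[chosen]
        r -= weight[chosen]
        rows.append((level, chosen, r))
    return rows, marked

# Hypothetical subtree weights; only task 12 fits R = 5.
level_lists = [(2, [2]), (3, [3]), (4, [4, 11]),
               (5, [5, 6, 7, 8, 9, 10, 12])]
weight = {2: 15, 3: 12, 4: 8, 11: 7,
          5: 6, 6: 6, 7: 6, 8: 6, 9: 6, 10: 6, 12: 5}
cluster = {12: {12, 13, 14, 15, 16}}  # only the chosen task's cluster is needed here

rows, marked = scan_levels(level_lists, weight, cluster, r=5)
print(rows)    # -> [(2, None, 5), (3, None, 5), (4, None, 5), (5, 12, 0)]
print(marked)  # -> {12, 13, 14, 15, 16}
```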

Share and Cite

MDPI and ACS Style

Ben Abdellafou, K.; Zidi, K.; Ghaban, W. Scheduling Intrees with Unavailability Constraints on Two Parallel Machines. Symmetry 2026, 18, 103. https://doi.org/10.3390/sym18010103
