1. Introduction
Sustainability has become a critical design consideration in modern information and communication technologies (ICTs), as the energy footprint of digital infrastructure continues to grow rapidly. With the increasing deployment of cloud data centers, edge computing devices, and embedded systems, ICT systems are estimated to contribute significantly to global energy consumption and carbon emissions [1,2]. This concern has led to widespread efforts in energy-aware computing, where task scheduling has emerged as a core mechanism for reducing energy usage without sacrificing system performance. However, the growing complexity of computing and industrial systems has introduced a variety of scheduling challenges, stemming from hardware limitations and diverse job characteristics [3,4]. In embedded real-time platforms, energy-aware control loops often involve bursts of computation mixed with lightweight sensing or communication jobs [5]. In cloud computing clusters, tasks of varying computational intensity and deadline sensitivity arrive in a fixed order determined by upstream systems. In both cases, hardware units such as CPU cores, memory banks, or pipeline stages exhibit performance deterioration over time, resulting in increasing per-job energy costs with position or usage [6,7]. Meanwhile, due to strict timing or dependency constraints, jobs must often be processed in their arrival sequence, and their energy profiles are far from uniform. These asymmetries—across jobs, time, and resource cost—necessitate new scheduling models that go beyond traditional speed-scaling or homogeneous job assumptions.
Energy-aware scheduling plays a pivotal role in the design of modern computational and manufacturing systems, particularly amid rising energy costs and growing environmental concerns [8]. In applications such as cloud data centers, embedded control systems, industrial production pipelines, and autonomous devices, energy consumption typically increases with processing time, job position, or accumulated system workload [5,9,10]. This pattern is especially pronounced in contexts involving hardware degradation over time, such as battery-powered platforms and mission-critical real-time systems [11]. Consequently, there is a pressing need for scheduling strategies that minimize total energy consumption or operational costs under time-sensitive and resource-constrained environments.
Over the past decades, extensive research has focused on energy-efficient scheduling. A foundational model is the speed-scaling framework introduced by Yao et al. [9], where processor speeds are dynamically adjusted to balance energy consumption and execution time. This model has been extended to incorporate various energy constraints, as summarized in the survey by Albers [12]. However, these models often assume homogeneous jobs and do not account for cost escalation due to job position or system degradation.
An alternative line of research focuses on deteriorating jobs, where processing times or costs increase with job position, as introduced by Mosheiov [13]. Subsequent studies have explored time-dependent processing costs and interval scheduling problems, introducing complexities such as job release times and position-based penalties [14,15,16,17]. While these studies offer rich insights into temporal degradation, many assume jobs can be reordered to minimize costs, making them unsuitable for streaming or pipelined environments where job arrival order is fixed. Several recent studies have explored learning-based approaches for energy-aware scheduling. For example, Wang et al. [18] proposed a deep reinforcement learning (DRL) scheduler for fog and edge computing systems, targeting trade-offs between system load and response time. A recent survey by Hou et al. [19] reviewed DRL-based strategies for energy-efficient task scheduling in cloud systems. Furthermore, López et al. [20] introduced an intelligent energy-pairing scheduler for heterogeneous HPC clusters, and metaheuristic methods have also been applied to multi-robot swarm scheduling problems [21]. These studies reflect a growing interest in adaptive scheduling methods, which complement our structural and provably optimal approach. While such methods offer flexibility and empirical effectiveness, they typically yield approximate solutions without structural optimality guarantees. By contrast, our work focuses on deriving exact or provably near-optimal schedules under a formally defined position-dependent cost model.
More recently, researchers have begun exploring energy-aware scheduling in heterogeneous and constrained systems. For instance, Mahmood et al. [22] and Khaitan and McCalley [23] investigated scheduling in cyber–physical and embedded real-time environments, emphasizing the impact of system heterogeneity and energy constraints. In high-performance computing (HPC) contexts, Kocot et al. [8] reviewed energy-aware scheduling techniques in modern HPC platforms, highlighting mechanisms such as DVFS, job allocation heuristics, and power capping. Mei et al. [5] developed a deadline-aware scheduler for DVFS-enabled heterogeneous machines. Similarly, Esmaili and Pedram [24] applied deep reinforcement learning to minimize energy in cluster systems, and Zhang and Chen [25] surveyed scheduling methods in mixed-criticality systems, stressing the importance of balancing energy consumption and job reliability under strict timing constraints. Meanwhile, algorithmic studies have tackled time-dependent and cost-aware scheduling problems, offering complexity classifications and algorithmic solutions under various structural assumptions [26,27,28,29]. In particular, Im et al. [27] and Chen et al. [28] analyzed models with temporally evolving cost functions, while additional frameworks such as bipartite matching with decomposable weights [30,31], interval-constrained job assignments [32,33], and weighted matching under cost constraints [34] have enriched the algorithmic toolbox for cost-sensitive scheduling.
Despite these advancements, many existing models fall short in capturing key characteristics of real-world systems. First, most assume flexible job orderings or job independence, whereas many practical applications impose strict sequential constraints due to job arrival patterns or dependency relations, such as fixed workflow pipelines in cloud platforms [35,36]. Second, job heterogeneity is often overlooked, despite substantial differences in energy profiles (e.g., light vs. heavy jobs) [5,8,37,38]. Third, models with variable slot costs frequently assume piecewise or non-monotonic functions, failing to represent realistic scenarios with strictly increasing cost structures [39]. Collectively, these limitations restrict the applicability of prior approaches in energy-sensitive, real-time scheduling environments.
In this paper, we address these challenges by investigating a scheduling model with the following four key features: (i) each of the m identical machines processes exactly n sequentially arriving jobs; (ii) the job sequence is fixed and must be scheduled in order without rearrangement; (iii) each job is categorized as either a light job (with zero cost) or a heavy job (with positive cost); and (iv) each machine’s slot cost is a fixed monotonically increasing sequence, independent of the job type. To minimize the total energy cost incurred by assigning heavy jobs to higher-cost slots, we formally define this scheduling problem as an integer linear program (ILP) and normalize it to a simplified version where only the placement of heavy jobs affects the objective.
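To make the normalization concrete, the following display sketches one plausible ILP of this type; the exact formulation is developed in Section 2, and the notation here ($x_{j,i,s}$ for placing job $J_j$ in slot $s$ of machine $M_i$, $w_j$ for the per-job energy weight, $c_s$ for the slot cost) is illustrative rather than the paper's own:

$$
\min \; \sum_{j=1}^{mn}\sum_{i=1}^{m}\sum_{s=1}^{n} c_s\, w_j\, x_{j,i,s}
\quad \text{s.t.} \quad
\sum_{i=1}^{m}\sum_{s=1}^{n} x_{j,i,s} = 1 \;\;\forall j, \qquad
\sum_{j=1}^{mn} x_{j,i,s} = 1 \;\;\forall i,s,
$$
$$
x_{j,i,s} + x_{j',i,s'} \le 1 \;\;\forall\, j < j',\; s' \le s,\; \forall i,
\qquad x_{j,i,s} \in \{0,1\},
$$

where the pairwise constraint enforces that jobs assigned to the same machine occupy slots in their arrival order. Setting $w_j = 0$ for light jobs yields the normalized objective in which only the placement of heavy jobs matters.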
To illustrate the real-world relevance of this model, consider a battery-powered drone control system where lightweight sensing tasks and heavy navigation computations arrive in a fixed order and must be assigned to a limited set of processors. Such systems often exhibit energy asymmetry: heavy tasks consume more power, and later slots incur higher energy due to thermal or resource degradation—conditions that naturally align with our model assumptions.
Our contributions are summarized as follows. First, we formalize an energy-aware scheduling problem in which a fixed sequence of heterogeneous jobs must be assigned to identical machines with strictly increasing slot costs. By reformulating the objective function through cost simplification, we reduce the problem to a position-based assignment model focused solely on heavy jobs and provide an equivalent integer linear programming (ILP) formulation under positional constraints. Second, we introduce the concept of monotonic machine assignment, a structured scheduling framework that guarantees both feasibility and optimality preservation. We prove that any feasible assignment can be transformed into a monotonic one without increasing the total energy cost, and characterize the reduced solution space via a vector of heavy job counts per machine. This transformation effectively restores a form of functional symmetry in job allocation under asymmetric cost constraints. Third, building on this structure, we develop two algorithms for computing optimal monotonic schedules. The first is a dynamic programming algorithm with time complexity $O(m^2 n^2)$. More importantly, we propose a misalignment elimination algorithm that incrementally refines job distributions and provably converges to the global optimum in only $O(m^2 n)$ time, offering significant scalability for large-scale scheduling applications. Collectively, our work delivers both theoretical insight and provably optimal algorithms for position-sensitive, energy-aware scheduling, with practical relevance to streaming computation, embedded systems, and real-time industrial platforms.
The remainder of this paper is structured as follows.
Section 2 presents the formal problem definition, introduces the integer linear programming (ILP) formulation, and performs a cost simplification that reduces the objective to the placement of heavy jobs.
Section 3 introduces the concept of monotonic machine assignments, establishes key structural properties, and develops a dynamic programming algorithm to compute the optimal schedule within this restricted space.
Section 4 proposes an iterative misalignment elimination algorithm that incrementally refines any feasible schedule and guarantees convergence to the global optimum in polynomial time. Finally,
Section 5 concludes this paper with a summary of the findings and a discussion of potential applications in energy-aware and real-time systems.
3. Monotonic Machine Assignment
This section introduces a structured assignment model, referred to as the monotonic machine assignment, to facilitate the theoretical analysis of the energy-aware scheduling problem. We prove that any feasible assignment can be transformed into a monotonic one without increasing the total energy cost, and we further present a dynamic programming algorithm to compute an optimal assignment under this structure.
3.1. Definition
A monotonic machine assignment is defined as a job-to-machine allocation that satisfies the following three properties:
Job Number Monotonicity: Let
denote the number of jobs assigned to machine
after the first
j jobs have been scheduled. Then, for all
and all
, we require
That is, machines with smaller indices must have at least as many jobs as those with larger indices at any moment.
Light Job Monotonicity: Let $\mu(J_j)$ be the machine to which job $J_j$ is assigned. If job $J_j$ is a light job (i.e., its cost is zero), then it must be assigned to the machine with the smallest index $i$ such that $N_i(j-1) < n$. Formally, for all light jobs $J_j$, we require
$$\mu(J_j) = \min\{\, i : N_i(j-1) < n \,\}.$$
Heavy Job Monotonicity: For any two heavy jobs $J_j$ and $J_{j'}$, if $j < j'$, we require their machine indices to be in non-decreasing order:
$$\mu(J_j) \;\le\; \mu(J_{j'}).$$
That is, heavy jobs must be assigned to machines in a non-decreasing order of machine indices with respect to their arrival order.
To illustrate these properties, we provide a simple numerical example. Suppose there are $m = 3$ machines, each with $n = 3$ slots, and thus 9 jobs in total. Let the arrival sequence have heavy jobs at positions 2, 5, and 8 (i.e., jobs $J_2$, $J_5$, and $J_8$ are heavy, and the others are light). Under a monotonic assignment, the following hold:
Machine $M_1$ handles jobs 1–3: job $J_1$ (light) in slot 1, job $J_2$ (heavy) in slot 2, and job $J_3$ (light) in slot 3.
Machine $M_2$ handles jobs 4–6: job $J_4$ (light) in slot 1, job $J_5$ (heavy) in slot 2, and job $J_6$ (light) in slot 3.
Machine $M_3$ handles jobs 7–9: job $J_7$ (light) in slot 1, job $J_8$ (heavy) in slot 2, and job $J_9$ (light) in slot 3.
This simple example shows how job number monotonicity ensures machines fill in order, and heavy job monotonicity places each heavy job in the earliest available slot on its machine while preserving non-decreasing machine indices. Light jobs occupy the remaining slots by light job monotonicity. Such a distribution concretely illustrates the abstract rules before proceeding to the formal algorithm.
Theorem 1. For any feasible job assignment $\sigma$, there exists a monotonic machine assignment $\sigma'$ that satisfies the above three monotonicity properties such that the total energy cost under $\sigma'$ is no greater than that under $\sigma$. That is, $\mathrm{Cost}(\sigma') \le \mathrm{Cost}(\sigma)$. Proof. We show that any feasible job assignment $\sigma$ can be transformed into a monotonic machine assignment without increasing the total energy cost, by applying the following three steps:
Step 1: Enforcing Job Number Monotonicity. For each job $J_j$, $j = 1, \dots, mn$, if there exists a pair of adjacent machines $(M_i, M_{i+1})$ such that $N_i(j) < N_{i+1}(j)$, we select the earliest such violation. Then we have $N_{i+1}(j) = N_i(j) + 1$, and we swap the next job assigned to machine $M_{i+1}$ (starting from $J_{j+1}$) with a job on machine $M_i$. Because all machines have the same slot cost structure, and only the indices of the machines are affected—not the relative positions of the slots within each machine—the total cost remains unchanged. We repeat this process until $N_i(j) \ge N_{i+1}(j)$ holds for all $j$ and all $i$. This ensures job number monotonicity.
Step 2: Enforcing Light Job Monotonicity. Suppose that $J_j$ is a light job that is not assigned to the available machine $M_i$ with the smallest index, but is instead scheduled on a later machine $M_{i'}$ with $i' > i$. Let $J_{j'}$ be the earliest subsequent job ($j' > j$) that is assigned to machine $M_i$. We divide the job sequence into three consecutive phases: Phase I includes jobs scheduled before $J_j$; Phase II consists of jobs from $J_j$ to $J_{j'}$; and Phase III includes jobs scheduled after $J_{j'}$, as shown on the left side of Figure 1. We construct a new schedule $\sigma'$ by swapping the assignments of $J_j$ and $J_{j'}$, reassigning $J_j$ to machine $M_i$ and $J_{j'}$ to machine $M_{i'}$, as shown on the right side of Figure 1. Since the original schedule $\sigma$ satisfies job number monotonicity, the slot position of $J_{j'}$ in $\sigma'$ does not increase. Moreover, because $J_j$ is a light job with negligible cost, this reassignment does not increase the total energy cost. The updated schedule $\sigma'$ continues to maintain job number monotonicity. Therefore, this correction process can be applied iteratively until light job monotonicity is fully achieved.
Step 3: Enforcing Heavy Job Monotonicity. Assume that the first violation of heavy job monotonicity occurs when a heavy job $J_b$ is assigned to machine $M_p$, despite the existence of an earlier heavy job ($a < b$) that was previously assigned to a later machine $M_q$ with $q > p$. In other words, $J_b$ is scheduled on a machine with a smaller index than a preceding heavy job, violating the required non-decreasing order of machine indices for heavy job assignments. We focus on this first violation, as depicted on the left side of Figure 2. Since machine $M_p$ was still available at the time the earlier heavy job was scheduled (i.e., $N_p < n$ at that moment), light job monotonicity ensures that no light jobs could have been assigned to $M_q$ before $J_b$. Therefore, all jobs on $M_q$ scheduled before $J_b$ must be heavy jobs, labeled as set $A$ in the figure. Let $J_a$ be the earliest such heavy job on $M_q$.
Meanwhile, because $J_b$ represents the first violation of heavy job monotonicity, all jobs scheduled on $M_p$ between $J_a$ and $J_b$ must be light jobs. These are labeled as set $B$ in Figure 2. The job sequence is thus divided into three phases: Phase I consists of all jobs prior to $J_a$; Phase II spans from $J_a$ to $J_b$, and includes both the heavy jobs on $M_q$ (set $A$) and the light jobs on $M_p$ (set $B$); and Phase III contains all jobs after $J_b$. To restore monotonicity, we construct a new schedule $\sigma'$ by swapping the assignments of $J_a$ and $J_b$, reassigning $J_a$ to machine $M_p$ and $J_b$ to machine $M_q$, as illustrated on the right side of Figure 2. Since $M_p$ has only received light jobs between $J_a$ and $J_b$, and because light jobs incur negligible energy cost, this swap does not affect the overall cost contribution of the light jobs. Furthermore, the slot index that $J_a$ occupies on $M_p$ in $\sigma'$ is no greater than the slot index that $J_b$ occupied in the original schedule $\sigma$, and the number of heavy jobs assigned to $M_q$ remains unchanged. As the slot cost function is fixed and strictly increasing, the total energy cost does not increase.
Importantly, the reassignment does not violate light job monotonicity, and the updated schedule continues to satisfy job number monotonicity. Therefore, this correction procedure can be applied iteratively until all violations of heavy job monotonicity are eliminated.
Termination and Feasibility: Each of the above transformations resolves one violation without introducing new ones, and there is a finite number of jobs. Therefore, the process terminates in a finite number of steps. The resulting schedule $\sigma'$ is feasible, satisfies all three monotonicity conditions, and achieves $\mathrm{Cost}(\sigma') \le \mathrm{Cost}(\sigma)$. □
3.2. Constructing Monotonic Assignments via Heavy Job Distribution
A monotonic machine assignment can be completely specified by a vector of heavy job quotas $(h_1, h_2, \dots, h_m)$, where $h_i$ denotes the number of heavy jobs assigned to machine $M_i$, and the total number of heavy jobs satisfies
$$\sum_{i=1}^{m} h_i = H,$$
with $H$ denoting the total number of heavy jobs in the system.
Given a fixed job arrival sequence and a heavy job distribution vector, the monotonic machine assignment can be constructed by assigning each job dynamically based on its type, as outlined in Algorithm 1.
Assignment Rules.
Heavy jobs: Assign each heavy job to the machine with the smallest index $i$ such that $r_i > 0$ and $u_i < n$;
Light jobs: Assign each light job to the machine with the smallest index $i$ that has an available unreserved slot, i.e., $u_i + r_i < n$.
Here, $r_i$ is the number of heavy jobs remaining to be assigned to machine $M_i$, and $u_i$ tracks the number of total jobs (heavy or light) assigned to $M_i$ thus far.
Algorithm 1 Monotonic assignment construction from heavy quotas.
1: for each machine $M_i$, $i = 1$ to $m$ do
2:   Initialize $r_i \leftarrow h_i$ {heavy jobs remaining for $M_i$}
3:   Initialize $u_i \leftarrow 0$ {current number of jobs assigned to $M_i$}
4: end for
5: for each job $J_j$ in arrival order do
6:   if $J_j$ is a heavy job then
7:     Find the smallest $i$ such that $r_i > 0$ and $u_i < n$
8:     Assign $J_j$ to $M_i$
9:     Update $r_i \leftarrow r_i - 1$, $u_i \leftarrow u_i + 1$
10:  else
11:    Find the smallest $i$ such that $u_i + r_i < n$
12:    Assign $J_j$ to $M_i$
13:    Update $u_i \leftarrow u_i + 1$
14:  end if
15: end for
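For reference, the following C++17 sketch implements Algorithm 1 directly; the function and variable names are ours rather than from the paper's implementation, and the light-job condition $u_i + r_i < n$ reserves capacity for each machine's remaining heavy quota as described above.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Returns assign[j] = machine index (0-based) of job j; empty on failure.
std::vector<int> buildMonotonic(const std::vector<bool>& heavy,
                                const std::vector<int>& h, int m, int n) {
    std::vector<int> r = h;                     // heavy quota remaining per machine
    std::vector<int> u(m, 0);                   // jobs placed so far per machine
    std::vector<int> assign(heavy.size(), -1);
    for (std::size_t j = 0; j < heavy.size(); ++j) {
        int pick = -1;
        for (int i = 0; i < m; ++i) {
            if (heavy[j] ? (r[i] > 0 && u[i] < n)  // heavy: first machine with quota left
                         : (u[i] + r[i] < n)) {    // light: first machine with an unreserved slot
                pick = i; break;
            }
        }
        if (pick < 0) return {};                // quota vector infeasible
        assign[j] = pick;
        ++u[pick];
        if (heavy[j]) --r[pick];
    }
    return assign;
}

int main() {
    // Example from Section 3.1: m = 3, n = 3, heavy jobs at positions 2, 5, 8.
    std::vector<bool> heavy = {false,true,false, false,true,false, false,true,false};
    std::vector<int> h = {1, 1, 1};
    for (int a : buildMonotonic(heavy, h, 3, 3)) std::printf("%d ", a);
    std::printf("\n");  // expected output: 0 0 0 1 1 1 2 2 2
}
```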
Feasibility Condition: Not all heavy job vectors satisfying $\sum_{i=1}^{m} h_i = H$ correspond to feasible schedules. To ensure feasibility, each machine $M_i$ must always have sufficient capacity to accommodate its remaining heavy job quota at every stage of the sequential assignment. That is, the following constraint must be satisfied throughout the process:
$$u_i + r_i \;\le\; n \quad \text{for all } i \in \{1, \dots, m\}.$$
We define $H_i$ as the number of heavy jobs in the $i$-th block of the job sequence $(J_{(i-1)n+1}, \dots, J_{in})$, given by
$$H_i = \big|\{\, (i-1)n < j \le in \;:\; J_j \text{ is heavy} \,\}\big|.$$
Then, a heavy job allocation vector $(h_1, \dots, h_m)$ is feasible if and only if
$$\sum_{k=1}^{i} h_k \;\le\; \sum_{k=1}^{i} H_k \quad \text{for all } i \in \{1, \dots, m-1\}, \qquad \sum_{k=1}^{m} h_k = H.$$
Intuitively, a heavy job may be deferred to a machine later than the block in which it arrives, but never advanced to an earlier one.
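As a consequence, feasibility can be checked in $O(mn)$ time with a single prefix-sum pass. The sketch below is minimal and assumes the prefix-sum characterization stated above; the identifiers are illustrative.

```cpp
#include <vector>

// Checks the prefix condition: machines 1..i may take no more heavy jobs
// than arrive in the first i blocks of n jobs each (assumed characterization).
bool isFeasible(const std::vector<bool>& heavy,   // job types, size m*n
                const std::vector<int>& h, int m, int n) {
    long long total = 0, sumQuota = 0;
    for (bool b : heavy) total += b;
    for (int i = 0; i < m; ++i) sumQuota += h[i];
    if (sumQuota != total) return false;          // quotas must cover all heavy jobs
    long long quota = 0, arrived = 0;
    for (int i = 0; i < m; ++i) {
        quota += h[i];
        for (int j = i * n; j < (i + 1) * n; ++j) arrived += heavy[j];
        if (quota > arrived) return false;        // prefix condition violated
    }
    return true;
}
```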
3.3. Dynamic Programming for Optimal Monotonic Assignment
We present a dynamic programming algorithm to compute an optimal monotonic machine assignment, minimizing total energy cost under a fixed job arrival order and slot-based cost structure.
DP State Definition: Let DP[i][h] denote the minimum total cost of assigning the first i machines with exactly h heavy jobs scheduled in total.
Transition Function: Let $g_i(h, k)$ represent the minimum cost of assigning exactly $k$ heavy jobs and $n - k$ light jobs to machine $M_i$, assuming that $h$ heavy jobs have already been scheduled across the first $i - 1$ machines. Then, the recurrence is
$$\mathrm{DP}[i][h + k] = \min_{0 \le k \le n} \big\{ \mathrm{DP}[i-1][h] + g_i(h, k) \big\},$$
with the base cases $\mathrm{DP}[0][0] = 0$ and $\mathrm{DP}[0][h] = \infty$ for $h > 0$. The final solution is given by $\mathrm{DP}[m][H]$.
Feasibility Condition: If assigning $k$ heavy jobs to machine $M_i$ is infeasible—e.g., there are insufficient remaining heavy jobs or too few available slots due to prior light jobs—then $g_i(h, k)$ is set to $\infty$ and excluded from the transition.
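The recurrence translates directly into a table-filling procedure. The following C++17 sketch treats the transition cost $g_i(h, k)$ as a caller-supplied function (computed by Algorithm 2 below, with 0-based machine indices); this interface is an assumption made for illustration.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <vector>

constexpr std::int64_t INF = INT64_MAX / 4;

// g(i, hPrev, k): cost of giving k heavy jobs to machine i (0-based) when
// hPrev heavy jobs are already placed on machines 0..i-1 (INF if infeasible).
std::int64_t solveDP(int m, int n, int H,
                     const std::function<std::int64_t(int,int,int)>& g) {
    std::vector<std::vector<std::int64_t>> dp(m + 1,
        std::vector<std::int64_t>(H + 1, INF));
    dp[0][0] = 0;                               // base case: nothing scheduled
    for (int i = 0; i < m; ++i)
        for (int h = 0; h <= H; ++h) {
            if (dp[i][h] == INF) continue;      // unreachable state
            for (int k = 0; k <= n && h + k <= H; ++k) {
                std::int64_t c = g(i, h, k);
                if (c >= INF) continue;         // infeasible transition
                dp[i + 1][h + k] = std::min(dp[i + 1][h + k], dp[i][h] + c);
            }
        }
    return dp[m][H];                            // optimal total heavy-job cost
}
```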
Time Complexity: The total number of DP states is $O(mH) \subseteq O(m^2 n)$. Each state enumerates up to $n$ values of $k$, and the corresponding $g_i(h, k)$ values are retrieved in $O(1)$ time via preprocessing. Hence, the total time complexity of the dynamic programming procedure is $O(m^2 n^2)$.
Preprocessing of Transition Costs: Each $g_i(h, k)$ value is computed by selecting the appropriate subsequence of jobs to be assigned to machine $M_i$, and calculating the total slot cost of the top-$k$ heavy jobs within that segment. This preprocessing step is efficiently performed using a prefix sum or scan-based technique, as detailed in Algorithm 2. For any fixed machine index $i$ and heavy job prefix count $h$, the full cost vector $g_i(h, \cdot)$ can be computed in $O(n)$ time. Since there are at most $O(mn)$ distinct values of $h$ (i.e., the number of ways to distribute heavy jobs across the first $i - 1$ machines), and $m$ choices of $i$, the total number of $(i, h)$ pairs is $O(m^2 n)$. Therefore, the total time required to compute all $g_i(h, k)$ values for all machines and heavy job prefix states is $O(m^2 n^2)$. Thus, the overall time complexity of the proposed algorithm—including both dynamic programming and preprocessing of all $g_i(h, k)$ values—is $O(m^2 n^2)$.
Algorithm 2 ComputeAllGi: Preprocessing for machine $M_i$.
Require: job sequence $J_1, \dots, J_{mn}$; machine index $i$; total number of heavy jobs scheduled so far: $h$
Ensure: Array $g$, where $g[k]$ is the cost of assigning $k$ heavy jobs to $M_i$
1: Filter the job sequence to select the next $n$ jobs assigned to $M_i$: skip the first $h$ heavy jobs, skip the first $(i-1)n - h$ light jobs, and select the next $n$ jobs to form the assignment block for $M_i$
2: Let $t_1 < t_2 < \dots < t_q$ be the indices of heavy jobs in the selected $n$ jobs
3: for $k = 0$ to $n$ do
4:   if $i = m$ and $k \ne H - h$ then
5:     $g[k] \leftarrow \infty$ {Final machine must take remaining heavy jobs}
6:   else if $k > q$ then
7:     $g[k] \leftarrow \infty$ {Not enough heavy jobs available}
8:   else
9:     $g[k] \leftarrow \sum_{j=1}^{k} c_{t_j}$
10:  end if
11: end for
12: return $g$
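A C++17 rendering of Algorithm 2 is sketched below; the names are illustrative, and cost is assumed to hold the slot costs $c_1, \dots, c_n$ (1-based).

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr std::int64_t G_INF = INT64_MAX / 4;

// g[k] = cost of assigning k heavy jobs to machine i (0-based), given that
// hPrev heavy jobs are already placed on machines 0..i-1.
std::vector<std::int64_t> computeAllGi(const std::vector<bool>& heavy,
                                       const std::vector<std::int64_t>& cost,
                                       int m, int n, int H, int i, int hPrev) {
    // Step 1: skip the first hPrev heavy and (i*n - hPrev) light jobs, then
    // collect the next n surviving jobs as machine i's assignment block.
    int skipHeavy = hPrev, skipLight = i * n - hPrev;
    std::vector<int> heavySlot;                 // 1-based slot of each block heavy job
    int taken = 0;
    for (std::size_t j = 0; j < heavy.size() && taken < n; ++j) {
        if (heavy[j])  { if (skipHeavy > 0) { --skipHeavy; continue; } }
        else           { if (skipLight > 0) { --skipLight; continue; } }
        ++taken;
        if (heavy[j]) heavySlot.push_back(taken);
    }
    int q = (int)heavySlot.size();              // heavy jobs available in the block
    std::vector<std::int64_t> g(n + 1, G_INF);
    std::int64_t running = 0;                   // prefix sum of heavy slot costs
    for (int k = 0; k <= n; ++k) {
        if (k > 0) {
            if (k > q) break;                   // not enough heavy jobs available
            running += cost[heavySlot[k - 1]];
        }
        if (i == m - 1 && k != H - hPrev) continue; // final machine takes the rest
        g[k] = running;
    }
    return g;
}
```

The prefix sum over the heavy-slot costs is what yields the full vector $g_i(h, \cdot)$ in a single $O(n)$ scan, as claimed above.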
4. Misalignment Elimination Algorithm
In this section, we develop an iterative refinement algorithm that transforms an initial feasible assignment into an optimal monotonic one. Building on the structural properties of the assignment, we identify local inconsistencies—referred to as misaligned assignments—and propose a targeted adjustment strategy to eliminate them. The algorithm operates by incrementally resolving these misalignments, one at a time, while ensuring that feasibility is preserved and the total cost does not increase. Through repeated application of this refinement step, the algorithm progressively improves the assignment structure until all machines are properly aligned. We provide both the algorithmic procedure and a formal proof of its correctness and computational efficiency.
4.1. Misaligned Assignment
Definition 1 (Misaligned Assignment). Given a monotonic machine assignment, we say that a misaligned assignment occurs on adjacent machines $M_i$ and $M_{i+1}$ if the slot index of the last heavy job on $M_i$ is greater than or equal to that of the first light job on $M_{i+1}$. This implies a sub-optimal assignment of heavy and light jobs across adjacent machines, which may degrade scheduling efficiency. Formally, let the following hold:
$p^H_i$ denotes the slot index of the last heavy job assigned to $M_i$ (with $p^H_i = 0$ if no heavy job exists);
$p^L_{i+1}$ denotes the slot index of the first light job assigned to $M_{i+1}$ (with $p^L_{i+1} = n + 1$ if no light job exists).
A misaligned assignment on $(M_i, M_{i+1})$ then corresponds to the condition $p^H_i \ge p^L_{i+1}$.
Consider two consecutive machines $M_i$ and $M_{i+1}$ that exhibit a misaligned assignment. Let $J_a$ denote the last heavy job assigned to $M_i$, and let $J_b$ denote the first light job assigned to $M_{i+1}$, as illustrated on the left side of Figure 3. To eliminate this misalignment, we construct a new assignment $\sigma'$ by swapping these two jobs: $J_a$ is reassigned to $M_{i+1}$, and $J_b$ is reassigned to $M_i$, as shown on the right side of Figure 3. Since the slot index of job $J_a$ on machine $M_i$ is no smaller than that of job $J_b$ on machine $M_{i+1}$ (i.e., $p^H_i \ge p^L_{i+1}$), the reassignment does not increase the total energy cost. Furthermore, the reassignment updates the number of heavy jobs as follows:
$$h_i \leftarrow h_i - 1, \qquad h_{i+1} \leftarrow h_{i+1} + 1.$$
The swap affects only the slot configurations of machines $M_i$ and $M_{i+1}$. Specifically, the following properties hold in the updated assignment $\sigma'$:
$p^H_i$ does not increase, since the last heavy job of $M_i$ is removed;
$p^L_i$ does not increase, since $M_i$ gains an additional light job;
$p^H_{i+1}$ does not decrease, since $M_{i+1}$ gains an additional heavy job;
$p^L_{i+1}$ strictly increases, since its first light job is replaced by a heavy job.
These local updates imply that the misalignment condition may no longer hold in the updated assignment $\sigma'$, thereby resolving the inconsistency between machines $M_i$ and $M_{i+1}$. However, this swap may introduce new misalignments involving adjacent machine pairs, specifically $(M_{i-1}, M_i)$ or $(M_{i+1}, M_{i+2})$. Nevertheless, each swap strictly improves the structural alignment by moving a heavy job forward along the machine sequence. Since there are at most $H$ heavy jobs in total, and each job can traverse at most $m$ machines (from $M_1$ to $M_m$ in the worst case), the total number of swaps is bounded by $O(mH)$.
4.2. Efficient Computation of Slot Positions
Let $\hat{L}(j)$ and $\hat{H}(j)$ denote the cumulative number of light and heavy jobs, respectively, among the first $j$ jobs in the input sequence:
$$\hat{L}(j) = \big|\{\, j' \le j : J_{j'} \text{ is light} \,\}\big|, \qquad \hat{H}(j) = j - \hat{L}(j).$$
Let $h_i$ be the number of heavy jobs assigned to machine $M_i$, and let $y = \sum_{k=1}^{i-1} h_k$ denote the total number of heavy jobs assigned to machines $M_1$ through $M_{i-1}$.
(1) Computation of $p^H_i$: The total number of heavy jobs assigned to the first $i$ machines is $y + h_i$. The last heavy job assigned to machine $M_i$ corresponds to the $(y + h_i)$-th heavy job in the global sequence. Let $j$ be the smallest index such that
$$\hat{H}(j) = y + h_i.$$
The number of light jobs among the first $j$ jobs is $\hat{L}(j) = j - \hat{H}(j)$, and the number of light jobs assigned to machines $M_1$ through $M_{i-1}$ is $(i-1)n - y$. Let
$$\ell_i = \hat{L}(j) - \big((i-1)n - y\big)$$
denote the number of light jobs already assigned to machine $M_i$ before job $J_j$. Then, the slot position of the last heavy job $J_j$ on $M_i$ is
$$p^H_i = h_i + \ell_i.$$
(2) Computation of $p^L_i$: The number of light jobs assigned to machines $M_1$ through $M_{i-1}$ is $(i-1)n - y$. Hence, the first light job assigned to machine $M_i$ corresponds to the $\big((i-1)n - y + 1\big)$-th light job in the global sequence. Let $j$ be the smallest index such that
$$\hat{L}(j) = (i-1)n - y + 1.$$
The number of heavy jobs among the first $j$ jobs is $\hat{H}(j) = j - \hat{L}(j)$, and the number of heavy jobs already assigned to machines $M_1$ through $M_{i-1}$ is $y$. Let
$$\kappa_i = \hat{H}(j) - y$$
denote the number of heavy jobs already assigned to machine $M_i$ before job $J_j$. Then, the slot position of this first light job $J_j$ on $M_i$ is
$$p^L_i = \kappa_i + 1.$$
Both prefix arrays can be preprocessed in $O(mn)$ time. By recording the first occurrence of each prefix value, the queries for $p^H_i$ and $p^L_i$ can be answered in constant time using hash maps or direct indexing. Consequently, both slot positions can be computed in $O(1)$ time per query.
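The two queries can be packaged as a small preprocessing structure. The sketch below follows the formulas above and assumes a feasible monotonic configuration; all identifiers are ours.

```cpp
#include <vector>

// Precomputed prefix counts for O(1) slot-position queries (Section 4.2).
struct SlotIndex {
    std::vector<int> cumH, cumL;      // cumulative heavy/light counts, index 1..N
    std::vector<int> firstH, firstL;  // firstH[t] = smallest j with cumH[j] = t
    int n;
    SlotIndex(const std::vector<bool>& heavy, int n_) : n(n_) {
        int N = (int)heavy.size(), h = 0, l = 0;
        cumH.assign(N + 1, 0); cumL.assign(N + 1, 0);
        firstH.assign(N + 1, 0); firstL.assign(N + 1, 0);
        for (int j = 1; j <= N; ++j) {
            if (heavy[j - 1]) { ++h; firstH[h] = j; }   // each count reached once
            else              { ++l; firstL[l] = j; }
            cumH[j] = h; cumL[j] = l;
        }
    }
    // Slot of the last heavy job on machine i (1-based), y = h_1 + ... + h_{i-1}.
    int lastHeavySlot(int i, int y, int hi) const {
        if (hi == 0) return 0;                     // no heavy job on machine i
        int j = firstH[y + hi];                    // arrival of the (y+hi)-th heavy job
        int ell = cumL[j] - ((i - 1) * n - y);     // light jobs already on machine i
        return hi + ell;                           // p^H_i = h_i + ell_i
    }
    // Slot of the first light job on machine i (1-based).
    int firstLightSlot(int i, int y, int hi) const {
        if (hi == n) return n + 1;                 // machine i holds no light job
        int j = firstL[(i - 1) * n - y + 1];       // arrival of its first light job
        int kap = cumH[j] - y;                     // heavy jobs already on machine i
        return kap + 1;                            // p^L_i = kappa_i + 1
    }
};
```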
4.3. Misalignment Elimination Algorithm
We now present the complete iterative refinement procedure in Algorithm 3, which incrementally eliminates misaligned assignments until none remain, ultimately yielding an optimal monotonic assignment. The algorithm begins with the initial heavy job distribution $h_i = H_i$, computed directly from the input job sequence. It then maintains a dynamic misalignment set $C$, which stores the indices of all machines currently involved in misaligned assignments, based on the slot positions $p^H_i$ and $p^L_{i+1}$.
This reassignment strategy preserves feasibility at every step and ensures that the total assignment cost is non-increasing. Importantly, due to the monotonic behavior of $p^H$ and $p^L$ during each swap, the misalignment set $C$ is only locally affected. Specifically, machine pairs that do not involve indices $i$ or $i \pm 1$ remain unaffected. This locality property guarantees that each iteration modifies only a small portion of the assignment and simplifies the update of $C$.
Algorithm 3 Misalignment elimination algorithm.
Require: Job type sequence $J_1, \dots, J_{mn}$; number of machines $m$; slot size $n$
Ensure: Optimized heavy job counts $(h_1, \dots, h_m)$
1: Initialize misalignment set $C \leftarrow \emptyset$
2: for each machine $i = 1$ to $m$ do
3:   Set $h_i \leftarrow H_i$ {Initial heavy job count on machine $i$}
4:   Compute $p^H_i$ and $p^L_i$
5:   if $i > 1$ and $p^H_{i-1} \ge p^L_i$ then
6:     $C \leftarrow C \cup \{i - 1\}$
7:   end if
8: end for
9: while $C \ne \emptyset$ do
10:  Select any $i \in C$
11:  Update $h_i \leftarrow h_i - 1$, $h_{i+1} \leftarrow h_{i+1} + 1$
12:  Update $p^H_i$, $p^L_i$, $p^H_{i+1}$, $p^L_{i+1}$
13:  for each $k \in \{i - 1, i, i + 1\}$ such that $1 \le k \le m - 1$ do
14:    if $p^H_k \ge p^L_{k+1}$ then
15:      $C \leftarrow C \cup \{k\}$
16:    else
17:      $C \leftarrow C \setminus \{k\}$
18:    end if
19:  end for
20: end while
21: return $(h_1, \dots, h_m)$
Moreover, the slot positions $p^H$ and $p^L$ can be maintained and updated in constant time per iteration, i.e., $O(1)$, as previously discussed. Since each iteration performs a single resolution step by relocating one heavy job between adjacent machines, and each heavy job can be moved across at most $m$ machines, the total number of iterations is bounded by $O(mH)$. As each iteration requires constant time, the overall time complexity of the algorithm is $O(mn + mH) \subseteq O(m^2 n)$, including the preprocessing of the prefix arrays. The memory usage is $O(mn)$.
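Putting the pieces together, a condensed C++17 version of Algorithm 3, built on the SlotIndex helper sketched in Section 4.2, might look as follows; machines are 0-based here and all names are illustrative.

```cpp
#include <set>
#include <vector>

// Condensed sketch of Algorithm 3; pair index i denotes machines (i, i+1).
std::vector<int> eliminateMisalignments(const std::vector<bool>& heavy, int m, int n) {
    SlotIndex idx(heavy, n);                      // prefix-count structure (Section 4.2)
    std::vector<int> h(m, 0), y(m, 0);
    for (int i = 0; i < m; ++i)                   // initial quotas: per-block heavy counts
        for (int j = i * n; j < (i + 1) * n; ++j) h[i] += heavy[j];
    for (int i = 1; i < m; ++i) y[i] = y[i - 1] + h[i - 1];

    auto misaligned = [&](int i) {                // p^H of machine i vs p^L of machine i+1
        return idx.lastHeavySlot(i + 1, y[i], h[i]) >=
               idx.firstLightSlot(i + 2, y[i + 1], h[i + 1]);
    };
    std::set<int> C;
    for (int i = 0; i + 1 < m; ++i)
        if (misaligned(i)) C.insert(i);

    while (!C.empty()) {
        int i = *C.begin();                       // select any misaligned pair
        --h[i]; ++h[i + 1]; ++y[i + 1];           // move one heavy job forward
        for (int k : {i - 1, i, i + 1}) {         // only neighboring pairs can change
            if (k < 0 || k + 1 >= m) continue;
            if (misaligned(k)) C.insert(k); else C.erase(k);
        }
    }
    return h;                                     // optimized heavy job counts
}
```

Note that when $h_i = 0$ the query returns slot $0$, so the pair can never test as misaligned, which matches the boundary conventions of Definition 1.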
We summarize the correctness and computational complexity of the proposed method in the following theorem.
Theorem 2. The misalignment elimination algorithm computes an optimal monotonic assignment in $O(m^2 n)$ time.
Proof. This proof is primarily devoted to establishing the optimality of the output produced by the misalignment elimination algorithm. The time complexity of the method has already been analyzed earlier and is not the focus here. We denote by $A(h)$ the assignment produced by the misalignment elimination algorithm when starting from the initial assignment $h = (h_1, \dots, h_m)$.
We first prove that the algorithm always converges to a unique assignment. Assume, for the sake of contradiction, that the algorithm terminates at two distinct monotonic assignment vectors, denoted by $h^{(1)}$ and $h^{(2)}$, such that $h^{(1)} \ne h^{(2)}$. Without loss of generality, let $i$ be the smallest index such that $h^{(1)}_i \ne h^{(2)}_i$, with $\sum_{k=1}^{i} h^{(1)}_k > \sum_{k=1}^{i} h^{(2)}_k$. Then there exists an intermediate assignment $\tilde{h}$, which occurs during the execution of the algorithm that leads to $h^{(2)}$, such that the partial sum up to index $i$ satisfies
$$\sum_{k=1}^{i} \tilde{h}_k = \sum_{k=1}^{i} h^{(1)}_k,$$
and yet $\tilde{h}$ contains a misalignment at index $i$, that is,
$$p^H_i(\tilde{h}) \;\ge\; p^L_{i+1}(\tilde{h}).$$
By the monotonicity of the assignment process, we have
$$p^H_i(h^{(1)}) \;\ge\; p^H_i(\tilde{h}) \;\ge\; p^L_{i+1}(\tilde{h}) \;\ge\; p^L_{i+1}(h^{(1)}),$$
which implies $p^H_i(h^{(1)}) \ge p^L_{i+1}(h^{(1)})$, contradicting the assumption that $h^{(1)}$ is misalignment-free. Therefore, the algorithm must converge to a unique final assignment.
Next, we show that the algorithm's final assignment is optimal. Let $h^{*}$ denote the optimal initial assignment, i.e., the one that minimizes total cost among all monotonic assignments. Then $A(h^{*})$ is the corresponding optimal misalignment-free assignment produced by the algorithm $A$. Let the assignment generated from the natural initial input $h^{0} = (H_1, \dots, H_m)$ be $A(h^{0})$.
We prove by induction on $m$ that $A(h^{0}) = A(h^{*})$. That is, we show that the final assignments match coordinate by coordinate:
$$A(h^{0})_i = A(h^{*})_i \quad \text{for all } i = 1, \dots, m.$$
Base case: When $m = 1$, the result holds trivially since all heavy jobs must be assigned to the single machine.
Inductive step: Assume the claim holds for $m - 1$ machines. For the case of $m$ machines, we compare the first coordinate of the final assignment.
Suppose $A(h^{0})_1 > A(h^{*})_1$. Consider the modified input
$$h' = \big(A(h^{*})_1,\; h^{0}_1 + h^{0}_2 - A(h^{*})_1,\; h^{0}_3,\; \dots,\; h^{0}_m\big),$$
which preserves the total number of heavy jobs. By the monotonicity of $A$, we obtain
$$A(h')_1 = A(h^{*})_1.$$
Now, by the induction hypothesis, we know that $A(h')_k = A(h^{*})_k$ for all $k \ge 2$, so the entire assignment equals $A(h^{*})$, contradicting the assumption $A(h^{0})_1 > A(h^{*})_1$.
Now suppose $A(h^{0})_1 < A(h^{*})_1$. Because the algorithm moves heavy jobs forward to eliminate misalignments, once the assignment becomes misalignment-free, any further movement of heavy jobs would violate the alignment constraint and increase the total cost due to the monotonic increase in slot costs. Thus, no additional improvement is possible beyond the optimal misalignment-free configuration.
Applying this logic repeatedly, we obtain
$$\mathrm{Cost}\big(A(h^{0})\big) \;<\; \mathrm{Cost}\big(A(h^{*})\big) \;=\; \mathrm{Cost}(h^{*}),$$
where the last equality follows from the induction hypothesis. This contradicts the optimality of the corresponding optimal misalignment-free assignment $A(h^{*})$ produced by the algorithm.
Therefore, we must have $A(h^{0})_1 = A(h^{*})_1$. By the induction hypothesis again, the assignments on the remaining $m - 1$ machines also match. Thus, the entire final assignment produced by the algorithm matches the optimal one:
$$A(h^{0}) = A(h^{*}). \qquad \square$$
4.4. Experimental Evaluation
To evaluate the computational efficiency of the proposed misalignment elimination algorithm, a comprehensive runtime comparison was conducted against a baseline dynamic programming approach. The comparison focuses on two dimensions: the number of machines $m$ and the ratio of heavy jobs $\alpha$, under realistic scheduling configurations.
Implementation Details: All algorithms were implemented in C++17 and compiled with g++ using the -O2 optimization flag. Experiments were executed on a Linux machine with an Intel i7-12700H CPU and 32 GB RAM. High-precision timing was measured using std::chrono. For each data point, 100 random instances were generated with fixed parameters, and the average runtime was recorded to reduce variance.
Each test case consists of $mn$ jobs, with exactly $\lceil \alpha \cdot mn \rceil$ heavy jobs randomly mixed with light jobs. The binary job sequence is randomly shuffled using std::shuffle to ensure unbiased sampling.
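For reproducibility, the instance generation described above can be sketched as follows; the rounding of the heavy-job count is our assumption.

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Generates a random job-type sequence with ceil(alpha*m*n) heavy jobs
// (rounding assumed); true = heavy, false = light.
std::vector<bool> makeInstance(int m, int n, double alpha, std::mt19937& rng) {
    int N = m * n;
    int H = (int)std::ceil(alpha * N);
    std::vector<int> tmp(N, 0);
    std::fill(tmp.begin(), tmp.begin() + H, 1);  // first H entries heavy
    std::shuffle(tmp.begin(), tmp.end(), rng);   // unbiased random order
    return std::vector<bool>(tmp.begin(), tmp.end());
}
```

Each of the 100 instances per data point would then be produced by a fresh call against a seeded std::mt19937.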
Scaling with Machine Number: The number of machines $m$ was varied while the slot number $n$ and the heavy job ratio $\alpha$ were held fixed. As illustrated in the left panel of Figure 4, the runtime of the dynamic programming method increases rapidly with $m$, exhibiting a clear quadratic trend. This is consistent with its theoretical complexity of $O(m^2 n^2)$, where both the number of machines and the slot count contribute multiplicatively to the computational cost. Since $n$ is fixed in this experiment, the observed runtime growth reflects the $m^2$ term in the complexity, confirming that the method becomes substantially more expensive as the number of machines increases.
In contrast, the misalignment elimination method achieves significantly faster runtime and better scalability. By targeting only local conflicts and incrementally refining job assignments, it avoids the combinatorial overhead of global enumeration. The observed runtime follows a near-quadratic pattern, consistent with its theoretical bound of $O(m^2 n)$, yet with substantially smaller constant factors. This efficiency enables it to handle large-scale instances with minimal delay, demonstrating clear practical advantages over dynamic programming.
Scaling with Heavy Job Ratio: With the machine and slot counts $m$ and $n$ held fixed, the heavy job ratio $\alpha$ was varied. The right panel of Figure 4 shows that the dynamic programming runtime increases with $\alpha$, peaking at intermediate ratios due to a denser and more complex state space. Slight reductions beyond this point are attributed to more uniform job types reducing branching complexity.
Meanwhile, misalignment elimination remains consistently fast across all values of $\alpha$, showing only minor variation. This robustness arises from its conflict-driven logic, which depends primarily on local slot-level constraints rather than the global distribution of job types.
Across both scaling dimensions, misalignment elimination demonstrates significantly better runtime performance than dynamic programming. It achieves reliable low-latency execution consistent with the $O(m^2 n)$ bound, making it more suitable for large-scale or real-time scheduling scenarios where computational efficiency is critical.
5. Conclusions
This paper investigates an energy-aware scheduling problem in which a fixed sequence of heterogeneous jobs must be assigned to multiple identical machines with monotonically increasing slot costs. A key observation is that the contribution of light jobs remains invariant across all feasible assignments. This enables the problem to be reformulated as minimizing the cumulative cost associated with heavy job placements. To address the cost asymmetry under structural constraints, we propose the concept of monotonic machine assignment, which imposes index-based rules to guide job placement and restore regularity. This structure-aware reformulation narrows the solution space without sacrificing optimality, thereby facilitating efficient algorithmic design.
Based on this framework, we develop two optimal algorithms: a dynamic programming method with time complexity $O(m^2 n^2)$ and a more scalable misalignment elimination algorithm that achieves global optimality in $O(m^2 n)$ time. Although the machines are structurally symmetric, inherent asymmetry in job costs and arrival order can lead to imbalanced naive assignments. In contrast, optimal solutions restore regularity through structured job allocation. Beyond algorithmic efficiency, the monotonic assignment model offers structural insight into how order-based constraints can be exploited to reduce complexity in asymmetric scheduling environments. This contributes to the broader literature on structure-aware optimization and highlights the role of latent regularities in enabling efficient scheduling under combinatorial constraints. Moreover, the simplicity and modularity of the proposed algorithms make them suitable for deployment in decentralized controllers, energy-sensitive embedded processors, and edge computing environments with real-time constraints.
While our algorithms are designed under a fixed job sequence and deterministic slot costs, this structural assumption reflects many real-world systems such as embedded controllers, pipelined compute chains, and energy-aware edge processors. In such environments, jobs typically arrive in a fixed order due to sensing or control dependencies, and energy or delay costs increase with position as a result of thermal buildup or battery degradation. Compared to heuristic or learning-based scheduling strategies, our model-driven approach provides structural transparency and provable optimality, without relying on extensive parameter tuning or large training datasets. Future research may explore online extensions or hybrid models that integrate structure-aware optimization with adaptive learning mechanisms. In addition, further directions include handling dynamically arriving jobs, accommodating heterogeneous machine architectures, and formulating multi-objective strategies that balance trade-offs between energy efficiency, latency, and fairness. It would also be valuable to investigate learning-based scheduling frameworks that adapt to workload patterns over time, as well as stochastic models that incorporate uncertainty in job characteristics and energy estimation.