Energy-Aware Online Non-Clairvoyant Scheduling Using Speed Scaling with Arbitrary Power Function

Efficient job scheduling reduces energy consumption and enhances the performance of machines in data centers and battery-based computing devices. Online non-clairvoyant job scheduling, despite its practical importance, has been studied less extensively than its clairvoyant counterpart. In this paper, an online non-clairvoyant scheduling algorithm, Highest Scaled Importance First (HSIF), is proposed; HSIF always selects the active job with the highest scaled importance. The objective considered is to minimize the scaled importance-based flow time plus energy. The processor's speed is proportional to the total scaled importance of all active jobs. The performance of HSIF is evaluated by potential analysis against an optimal offline adversary and by simulating the execution of a set of jobs under the traditional power function. HSIF is 2-competitive under the arbitrary power function and dynamic speed scaling. This competitive ratio is the lowest obtained to date among non-clairvoyant scheduling algorithms. The simulation analysis shows that HSIF performs best among the online non-clairvoyant job scheduling algorithms compared.


Introduction
In the current era, reducing the energy consumption of data centers and battery-based computing devices is of growing importance. Energy consumption has become a prime concern in the design of modern microprocessors, especially for battery-based devices and data centers. Modern microprocessors [1,2] use dynamic speed scaling to save energy: the processors are designed so that they can vary their speed to conserve energy, and system software assists the operating system in adjusting the processor speed. According to the United States Environmental Protection Agency [3], data centers account for about 1.5% of total US electricity consumption, and the estimated total electricity use of US data center workloads for 2020 is about 135 billion kWh. Data center workloads continue to grow exponentially; comparable increases in electricity demand have been avoided through the adoption of key energy efficiency measures [4]. Energy consumption can be reduced by scheduling jobs in an appropriate order. In the last few years, many job scheduling algorithms have been proposed with dual objectives [5,6]: first, to optimize some scheduling quality criterion (e.g., flow time, weighted flow time), and second, to minimize energy consumption. Scheduling algorithms with dual objectives have two components [7]:
Job Selection: determining which of the active jobs to execute first on the processor.
Speed Scaling: determining the speed of the processor at any time t.
The traditional power function (power P = s^α, where s is the speed of the processor and α > 1 is a constant [8,9]) is widely used for the analysis of scheduling algorithms. In this paper, the arbitrary power function [10] is considered instead; it has certain advantages over the traditional power function, and the motivation for using it is explained comprehensively by Bansal et al. [10]. Different types of job scheduling models are available in the literature. A job is a unit of work/task that an operating system performs, such as the applications executed on a computer (an email client, word processing, web browsing, printing, information transfer over the Internet, or a specific action accomplished by the computer). Any user or system activity on a computer is handled through some job. The size of a job is the set of operations and micro-operations required to complete some course of action on a computer. In offline job scheduling, the complete job sequence is known in advance, whereas jobs arrive arbitrarily in online job scheduling. To minimize the flow time, big jobs execute at high speed relative to their actual importance and small jobs execute at low speed relative to their actual importance. In non-clairvoyant job scheduling, there is no information regarding the size of a job at its arrival time, whereas in clairvoyant job scheduling, the size of any job is known at its arrival time. The practical importance of online non-clairvoyant job scheduling is higher than that of clairvoyant scheduling [11]. Most processes do not have natural deadlines associated with them, for example in Linux and Microsoft Windows [12]. The non-clairvoyant scheduling problem is faced by the operating system in a time-sharing environment [13], and there are several situations where the scheduler has to schedule jobs without knowing their sizes [14]. The Shortest Elapsed Time First (SETF) algorithm, a variant of which is used in the Windows NT and Unix operating system scheduling policies, is a non-clairvoyant algorithm for minimizing mean slowdown [14].
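As a point of reference, the job-selection rule of SETF can be sketched in a few lines. The job representation used here is an illustrative assumption for this sketch, not an operating system API.

```python
def setf_pick(active_jobs):
    """SETF job selection: run the active job with the least elapsed time.

    `active_jobs` is a list of (job_id, elapsed_time) pairs; this
    structure is an assumption made for the sketch.
    """
    return min(active_jobs, key=lambda job: job[1])[0]

# Job "b" has accumulated the least execution time, so SETF runs it next.
print(setf_pick([("a", 5.0), ("b", 1.5), ("c", 3.2)]))
```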
The theoretical study of speed scaling was initiated by Yao et al. [15], and Motwani et al. [13] introduced the analysis of non-clairvoyant scheduling algorithms. Early research [16-21] considered the objective of minimizing the flow time, i.e., only the quality-of-service criterion. Later, new algorithms were proposed with the objective of minimizing the weighted/prioritized flow time [22-24], i.e., addressing not only the quality of service but also the reduction of energy consumption by the machines. Albers and Fujiwara [25] studied the scheduling problem with the objective of minimizing flow time plus energy under the dynamic speed scaling approach. Online non-clairvoyant job scheduling algorithms are studied less extensively than online clairvoyant ones. Highest Density First (HDF), which always runs the job of highest density (a job's density being its importance divided by its size), is optimal [10] in the online clairvoyant setting for the objective of fractional weighted/importance-based flow time plus energy. In the non-clairvoyant setting, the complete size of a job is known only at its completion, so HDF cannot be used directly. Azar et al. [11] proposed the algorithm NC (Non-Clairvoyant) for known job densities in the online non-clairvoyant setting on a uniprocessor, using the traditional power function; in NC, the density (i.e., importance/size) is known at arrival time. The speed scaling and job assignment policy used in the non-clairvoyant algorithm NC-PAR (Non-Clairvoyant on Parallel identical machines) is based on a clairvoyant algorithmic approach, which shows that NC-PAR is not a pure non-clairvoyant algorithm. WLAPS (Weighted Latest Arrival Processor Sharing) [26] gives high priority to some of the latest-arriving jobs, which increases the average response time. WLAPS does not schedule a fixed portion of the active jobs; rather, it selects jobs whose total importance equals a fixed fraction of the total importance of all active jobs. It needs to update the importance of some jobs to avoid under-scheduling or over-scheduling, does not treat the importance of jobs in an appropriate manner, and suffers from high average response time. These deficiencies motivated us to continue the study in this field for the objective of minimizing importance-based flow time plus energy.
In this paper, an online non-clairvoyant scheduling algorithm, Highest Scaled Importance First (HSIF), is proposed with the objective of minimizing the scaled importance-based flow time plus energy. In HSIF, the scaled importance of a job is considered rather than its complete importance. The scaled importance of a job increases if the job is new and has not yet had a chance to execute; consequently, starvation is avoided. If a job executes, its scaled importance decreases. In HSIF, the importance of any job is computed as a scaled value of that job's fixed importance; since this importance is time-dependent, it can be termed the dynamic or scaled importance. This balances the speed and the energy consumption. The speed of the processor is a function of the total scaled importance of all active jobs. The competitive ratio of HSIF is analysed using the arbitrary power function and an amortized potential function analysis.
The remainder of the paper is organised as follows: the next section reviews related scheduling algorithms and their results. Section 3 provides the notations and definitions necessary for the discussion. In Section 4, we present the 2-competitive scheduling algorithm Highest Scaled Importance First (HSIF), including the algorithm itself and its comparison with the optimal algorithm using amortized analysis (a potential function). In Section 5, a set of jobs and the traditional power function are used to examine the performance of HSIF. Section 6 draws concluding remarks and outlines the future scope of this study.

Related Work
In this section, a review of related work on online non-clairvoyant job scheduling algorithms using the traditional power function is presented. Im et al. [27] proposed a concept of job migration and gave an online non-clairvoyant algorithm, Selfish Migrate (SelMig). SelMig is O(α²)-competitive under the traditional power function for the objective of minimizing the total weighted flow time plus energy on unrelated machines. Azar et al. [11] presented an online non-clairvoyant uniprocessor algorithm, NC, wherein all jobs arrive with uniform density (i.e., weight/size = 1). NC is (2 + 1/(α − 1))-competitive under the traditional power function for the objective of minimizing the fractional flow time plus energy, and it uses the unbounded speed model. Most studies using the arbitrary power function have been conducted in the clairvoyant setting. Bansal et al. [12] showed that an online clairvoyant algorithm (ALG) is γ-competitive for the objective of minimizing the fractional weighted/importance-based flow time plus energy; ALG uses Highest Density First (HDF) for job selection, and for large α the competitive ratio satisfies γ ≈ 2α/ln α. Bansal et al. [10] introduced the concept of the arbitrary power function and proved that an online clairvoyant algorithm (OCA) is (2 + ε)-competitive for the objective of minimizing the fractional weighted flow time plus energy. The authors of [28] presented an expert and intelligent system that applies various energy policies to maximize the energy efficiency of data-center resources; they claim that around 20% of energy consumption can be saved without any noticeable impact on data-center performance. Duy et al. [29] described the design, implementation, and evaluation of a green scheduling algorithm using a neural network predictor to forecast future load demand from historical demand, in order to optimize server power consumption in cloud computing. The algorithm turns off unused servers (and restarts them whenever required) to minimize the number of running servers and thus the energy consumption. The authors of [30] defined an architectural framework and principles for energy-efficient cloud computing, and presented energy-aware resource provisioning heuristics that improve the energy efficiency of the data center while delivering the negotiated quality of service. Sohrabi et al. [31] introduced a Bayesian belief network that learns over time which of the overloaded virtual machines is best removed from a host; the probabilistic choice is made among virtual machines grouped by their degree of central processing unit (CPU) usage. Juarez et al. [32] proposed a real-time dynamic scheduling system to execute task-based applications efficiently on distributed computing platforms so as to minimize energy consumption; they presented a polynomial-time algorithm that combines a set of heuristic rules with a resource allocation technique to obtain good solutions on an affordable time scale. In OCA, the work and weights/importance are arbitrary; it uses HDF for job selection, and the power consumed is computed from the speed of the processor, which is a function of the fractional weights of all active jobs. Chan et al. [26] showed that the online non-clairvoyant job scheduling algorithm Weighted Latest Arrival Processor Sharing (WLAPS) is 16(1 + 1/ε)²-competitive under the arbitrary power model for the objective of minimizing the weighted flow time plus energy, where ε > 0. The value of α is commonly believed to be 2 or 3 [26]. HDF is optimal [10] in the online clairvoyant setting for the objective of fractional weighted/importance-based flow time plus energy; however, in clairvoyant scheduling the size of a job is known at arrival time, which is not the case in the non-clairvoyant setting, so HDF cannot be applied there. In this paper, a variant of the HDF strategy is considered in the online non-clairvoyant setting for the objective of minimizing the scaled importance-based flow time plus energy. We propose a new strategy, Highest Scaled Importance First (HSIF), in which the scaled importance of a job is considered rather than its complete importance. The scaled importance of a job increases if the job is new and has not yet had a chance to execute; consequently, starvation is avoided. If a job executes, its scaled importance decreases. This balances the speed and the energy consumption. The speed of the processor is a function of the total scaled importance of all active jobs. HSIF is shown to be 2-competitive using an amortized potential function against an offline adversary under the arbitrary power function. The results of HSIF and other related online non-clairvoyant job scheduling algorithms are provided in Table 1.

Definitions and Notations
The necessary definitions, explanations of the terms used in the study, the concept of the arbitrary power function, and the amortized potential function analysis are as follows:

Scheduling Basics
An online non-clairvoyant uniprocessor job scheduling algorithm, HSIF, is proposed, where jobs arrive over time and there is no information about their sizes. The importance/weight/priority (generated by the system) imp(j) of any job j is known at the job's arrival, and the size is known only at the job's completion. Jobs are sequential in nature and preemption is permitted with no penalty. The speed s of the processor is the rate at which work is completed. At any time t, a job j is active if its arrival time ar(j) ≤ t and its remaining work rem(j, t) > 0. At time t, the scaled importance of a job j is pr(j, t). The executed time ext(j, t) of a job j is the current time t minus the arrival time ar(j), i.e., ext(j, t) = t − ar(j).
The scaled importance-based flow of a job is the integral, over the times between the job's release and its completion, of its scaled importance at each time. The ascending inverse density a(j) of a job j is its executed time divided by its importance, i.e., a(j) = ext(j, t)/imp(j). The ascending inverse density is recalculated discretely either on the arrival of a new job or on the completion of any job. The response time of a job is the interval between its arrival time and the start of its execution. The turnaround time is the duration between the completion time and the arrival time of a job. The weight, importance, and significance of a job are used as synonyms for its priority.
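The ascending inverse density follows directly from the quantities just defined; the function below is a minimal sketch of the computation.

```python
def ascending_inverse_density(executed_time, importance):
    """a(j) = ext(j, t) / imp(j): executed time divided by importance."""
    return executed_time / importance

# A job that has executed for 6 time units with importance 3 has a(j) = 2.
print(ascending_inverse_density(6.0, 3.0))
```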

Power Function
The power function P(s) specifies the power used when the processor executes at speed s. Any reasonable power function satisfying the following conditions is permitted [33]:
• All the intervals, excluding possibly the rightmost, are closed on both ends.
• The rightmost interval may be open on the right if the power P(s) approaches infinity as the speed s approaches the rightmost endpoint of that interval.
• P(s) is non-negative, and continuous and differentiable at all but countably many points.
• Either there is a maximum allowable speed T, or the limit inferior of P(s)/s as s approaches infinity is not zero.
Without loss of generality, it can be assumed that [24]:
• P is strictly convex and increasing.
• P is unbounded, continuous and differentiable.
Let Q be P⁻¹, i.e., Q(x) gives the speed at which the processor can run if the power limit x is specified.
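For the traditional power function P(s) = s^α, the inverse Q is available in closed form, which makes the speed-setting rule concrete. This is a sketch for that specific power function, not the general arbitrary case the paper analyses.

```python
def P(s, alpha=3.0):
    """Traditional power function: power drawn when running at speed s."""
    return s ** alpha

def Q(x, alpha=3.0):
    """Q = P^-1: the speed the processor may run at under power limit x."""
    return x ** (1.0 / alpha)

# Round trip: running at speed Q(x) draws exactly power x.
print(P(Q(8.0)))
```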

Amortized Local Competitive Analysis
The objective considered (G) is the scaled importance-based flow time plus energy. Let G_A(t) and G_o(t) be the increase in the objective at time t in the schedules of an algorithm A and of the offline adversary Opt, respectively; Opt optimizes G. At any time t, for algorithm A, G_A(t) = P(s_A^t) + pr_A^t, where s_A^t, P(s_A^t) and pr_A^t are the speed of the processor, the power at speed s_A^t, and the total scaled importance of all active jobs, respectively. To prove that A is c-competitive, a potential function Φ(t) is required which satisfies the following conditions:
Boundary Condition: Φ = 0 initially, before any job is released, and at the end, after all jobs are completed.
Job Arrival and Completion Condition: Φ does not increase when any job arrives or completes.
Running Condition: at any other time, when no job arrives or completes, G_A(t) plus the rate of change of Φ is at most c times G_o(t), i.e., G_A(t) + dΦ/dt ≤ c · G_o(t).
Lemma 1 (Young's Inequality [34]). Let f be any real-valued, continuous and strictly increasing function with f(0) = 0. Then, for all m, n ≥ 0, mn ≤ ∫₀^m f(x) dx + ∫₀^n f⁻¹(x) dx, where f⁻¹ is the inverse function of f.
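Instantiating Lemma 1 with f(x) = x^{α−1} (so f⁻¹(x) = x^{1/(α−1)}) recovers the classical form of Young's inequality often used alongside the traditional power function P(s) = s^α:

```latex
mn \;\le\; \int_0^m x^{\alpha-1}\,dx \;+\; \int_0^n x^{1/(\alpha-1)}\,dx
   \;=\; \frac{m^{\alpha}}{\alpha} \;+\; \frac{\alpha-1}{\alpha}\, n^{\alpha/(\alpha-1)} .
```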

Scaled Importance-Based Flow Plus Energy
An online non-clairvoyant uniprocessor scheduling algorithm, Highest Scaled Importance First (HSIF), is proposed. In HSIF, all jobs arrive arbitrarily along with their importance and without information about their sizes; the sizes of jobs become known only on their completion. The possible speeds of the processor form a countable collection of disjoint subintervals of [0, ∞). The behaviour of HSIF is analysed using amortized potential analysis. HSIF is 2-competitive for the objective of minimizing the scaled importance-based flow time plus energy.

Algorithm HSIF
The algorithm HSIF always selects an active job with the highest scaled importance at any time, where the scaled importance pr(j_i, t) of a job j_i is computed as follows:
pr(j_i, t) = imp(j_i)/(1/2 + log(ext(j_i, t))), if the job is executing;
pr(j_i, t) = imp(j_i) · (1 + log(ext(j_i, t))), if the job is not executing.
The executed time ext(j_i, t) of a job j_i is ext(j_i, t) = t − ar(j_i). At any time t, the processor executes at speed s_h^t = Q(pr_h^t), where Q = P⁻¹ and pr_h^t is the total scaled importance of all active jobs under HSIF. As the algorithm is non-clairvoyant, the executed time of a job is taken as its current size. The intention is that the instantaneous importance/priority of a job must depend on both its (system-generated) importance and its size. If a job is not executing (i.e., it is waiting), its scaled importance increases; once the job starts executing, its scaled importance decreases as its execution progresses.
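The two branches can be written as a small function. The constants follow the fragment visible in the algorithm listing and should be read as a reconstruction; the clamp of the elapsed time to at least one time unit, so that the logarithm stays non-negative, is an added assumption.

```python
import math

def scaled_importance(imp, executed_time, executing):
    """Scaled importance pr(j, t), reconstructed from the paper's listing.

    A waiting job's importance is inflated with its elapsed time (avoiding
    starvation); an executing job's importance is damped as it runs.
    """
    ext = max(executed_time, 1.0)  # assumption: clamp so log(ext) >= 0
    if executing:
        return imp / (0.5 + math.log(ext))
    return imp * (1.0 + math.log(ext))

# A waiting job's scaled importance rises above its fixed importance...
print(scaled_importance(2.0, math.e, False))
# ...while an executing job's scaled importance falls below it.
print(scaled_importance(2.0, math.e, True))
```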
1. On arrival of a job j_i
2. If the CPU is idle, allocate the job to the CPU
3. pr(j_k, t) = imp(j_k)/(1/2 + log(ext(j_k, t)))
15. else set the speed of the CPU s_h^t = 0

Theorem 1. An online non-clairvoyant uniprocessor scheduling algorithm, Highest Scaled Importance First (HSIF), selects the job with the highest scaled importance and consumes power equal to the total scaled importance of all active jobs under dynamic speed scaling. HSIF is 2-competitive for the objective of minimizing the scaled importance-based flow time plus energy with arbitrary-work and arbitrary-importance jobs.
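Putting the pieces together, one HSIF scheduling decision can be sketched as follows. The job dictionaries and the use of the traditional power function P(s) = s^α (so Q(x) = x^{1/α}) are assumptions made for this sketch; the paper's analysis allows an arbitrary power function, and the scaled-importance constants are a reconstruction from the listing.

```python
import math

ALPHA = 3.0  # assumption: traditional power function P(s) = s**ALPHA

def pr(job, t):
    """Scaled importance of a job at time t (constants reconstructed)."""
    ext = max(t - job["arrival"], 1.0)  # clamp so log(ext) >= 0
    if job["executing"]:
        return job["imp"] / (0.5 + math.log(ext))
    return job["imp"] * (1.0 + math.log(ext))

def hsif_step(active, t):
    """One HSIF decision: run the job of highest scaled importance at
    speed Q(total scaled importance of all active jobs)."""
    if not active:
        return None, 0.0
    chosen = max(active, key=lambda j: pr(j, t))
    total = sum(pr(j, t) for j in active)
    return chosen["id"], total ** (1.0 / ALPHA)

jobs = [
    {"id": 1, "arrival": 0.0, "imp": 2.0, "executing": True},
    {"id": 2, "arrival": 1.0, "imp": 1.0, "executing": False},
]
# The waiting job's inflated importance wins, so HSIF switches to it.
print(hsif_step(jobs, 4.0))
```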
In the rest of this section, Theorem 1 is proven. For the amortized local competitive analysis of HSIF, a potential function is provided in the next subsection.

Potential Function Φ(t)
Let Opt be the optimal offline adversary that minimizes the scaled importance-based flow time plus energy. At any time t, let pr_o^t and pr_h^t be the total scaled importance of all active jobs under Opt and HSIF, respectively, and let pr_o^t(a) and pr_h^t(a) be the total scaled importance of all active jobs with ascending inverse density at least a under Opt and HSIF, respectively. Let pr^t(a) = (pr_h^t(a) − pr_o^t(a))^+, where (·)^+ = max{0, ·}. The potential function Φ(t) is defined as an integral over a of a function of pr^t(a) involving P′(Q(·)). Since P′(x) and Q(x) are increasing, P′(Q(x)) is an increasing function of x. To establish the effectiveness of the algorithm, it is required to verify the boundary condition, the job arrival and completion condition, and the running condition.
For the boundary condition, observe that before the arrival of any job and after the completion of all jobs, pr^t(a) = 0 for all a; therefore, Φ(t) = 0. On the arrival of any job, the value of pr^t(a) remains the same for all a, so Φ(t) is unchanged. The scaled importance of a job decreases continuously while it is executed by HSIF or Opt, hence Φ(t) does not increase on the completion of a job. At any other time t, when no job arrives or completes, it must be shown that the running condition holds. Since t is fixed, the superscript t is omitted from the parameters in the rest of the analysis. Let a_o and a_h be the minimum ascending inverse densities of an active job under Opt and HSIF, respectively; let a_h (or a_o) be ∞ if HSIF (or Opt) has no active job. HSIF executes jobs on the basis of the highest scaled importance first at speed s_h. Therefore, pr_h(a) decreases at the rate s_h/a_h for all a ∈ [0, a_h], and pr_h(a) remains the same for a > a_h. Similarly, pr_o(a) changes at the rate s_o/a_o for all a ∈ [0, a_o]. Hence the running condition is satisfied for pr_o > pr_h.
Case 2: If pr_o = pr_h, then for all a ∈ [0, a_o] there is a decrement in pr_o(a) at the rate s_o/a_o, which bounds the maximum possible rate of increase of Φ. Substituting f(x) = P′(x), m = s_o and n = P′(Q(0)) in Equation (1), and using Equations (3) and (4) in (2), the running condition is satisfied for pr_o = pr_h.
Case 3: If pr_o < pr_h, then a decrement in pr_h(a) causes a decrement in Φ and a decrement in pr_o(a) causes an increment in Φ. For all a ∈ [0, a_h], pr_h(a) decreases at the rate s_h/a_h, which bounds the resulting rate of change of Φ; for all a ∈ [0, a_o], pr_o(a) decreases at the rate s_o/a_o, which bounds the resulting rate of change of Φ by dΦ/dt ≤ 2 G_o(t). Adding Equations (6) and (8), and noting that P′ is strictly increasing and convex with P′(0) ≥ 0, substituting f(x) = P′(x), m = s_o and n = P′(Q(pr_h − pr_o)) in Equation (1) and then substituting Equation (11) into (9) shows that the running condition is satisfied for pr_o < pr_h.

Illustrative Example
To examine the performance of HSIF, a set of seven jobs and the traditional power model are considered, where power = speed^α and 2 ≤ α ≤ 3. The jobs arrive along with their importance, but their sizes are known only at completion. The jobs are executed using the algorithms HSIF and NC (the best known to date [11]) and their executions are simulated. To demonstrate the effectiveness of the proposed scheduling, a simulator developed using the Linux kernel is used. The simulator isolates the scheduling algorithm and deliberately excludes the effects of other activity present in a real kernel implementation. The jobs are considered independent, and the proposed algorithm targets identical homogeneous machines. To evaluate the performance of the algorithm, the (average) turnaround time and (average) response time are considered. A lower average response time reflects a prompt response to job requests, which helps avoid starvation. A lower turnaround time indicates that the algorithm can fulfil the resource requirements of all jobs in minimum time, which is a measure of better resource utilization. The hardware specifications are given in Table 2, and the details of the jobs and the computed results are shown in Table 3 and Figures 1-4. As per the results in Table 3, the response time (and turnaround time) of most of the jobs, as well as the average response time (and average turnaround time) over all jobs, are lower with HSIF than with NC. This shows that HSIF outperforms NC with respect to the scheduling criteria. From the graphs of Figure 1a,b, it is clear that HSIF adjusts the sum of the importance of active jobs frequently (count of maxima), but the change in value is small (difference between consecutive maxima). This shows that HSIF maintains consistent performance. The speed of the processor depends on this sum of importance, so the speed shows the same behaviour: frequent but small changes in speed keep HSIF consistent. In NC, the sum of the importance of active jobs changes less frequently, but the magnitude of each change is very high. The speed of the processor under NC depends on the sum of the executed sizes of active jobs; therefore, the speed varies widely, which makes NC less consistent in performance.
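The two quality metrics used in the simulation reduce to simple differences of time stamps; the sketch below computes them for three hypothetical (arrival, start, completion) triples, not the paper's actual job set.

```python
def response_time(arrival, start):
    """Interval between a job's arrival and the start of its execution."""
    return start - arrival

def turnaround_time(arrival, completion):
    """Interval between a job's arrival and its completion."""
    return completion - arrival

# Hypothetical (arrival, start, completion) triples for three jobs.
schedule = [(0.0, 0.0, 4.0), (1.0, 2.0, 6.0), (2.0, 5.0, 7.0)]
avg_response = sum(response_time(a, s) for a, s, c in schedule) / len(schedule)
avg_turnaround = sum(turnaround_time(a, c) for a, s, c in schedule) / len(schedule)
print(avg_response, avg_turnaround)
```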
In Figure 1c, the number of large speed changes (local minima) is six when the processor executes jobs using NC, which is due to the completion of jobs and the start of execution of new jobs. There is a big change in the speed of the processor when the executing job changes, and the arrival of a new job (i.e., the accumulation of importance-based flow) has no effect on the execution speed of the currently executing job. In the speed-growth graph of the processor using HSIF, more than six maxima and minima are present; this shows that the speed of the processor increases on the arrival of a new job, i.e., on the accumulation of scaled importance-based flow time. This eliminates the possibility of starvation and improves performance, showing that HSIF can adjust the speed to maintain and improve performance. Figure 2a shows that initially the total energy consumed by the processor using HSIF is higher than with NC, but at a later stage the total energy consumed by the processor using NC increases beyond it.
The total flow time of all active jobs when executed using NC is higher than with HSIF; consequently, the energy consumed by the processor when using NC exceeds that of HSIF. The energy consumed by most of the individual jobs when executed by HSIF is higher than with NC, as shown in Figure 2b. The total values of importance-based flow time and importance-based flow time plus energy are shown in Figures 3a and 4a, respectively; in the later stage, the totals for HSIF are lower than for NC. The total importance-based flow time and importance-based flow time plus energy for the processor using HSIF are lower than for NC, and the value of the considered objective is lower most of the time when using HSIF. From these observations, it is concluded that the performance of HSIF is better and more consistent than that of the best-known algorithm NC.
To extend the analysis of the performance of HSIF, a second set of ten jobs and the traditional power model are considered. The jobs arrive along with their importance, but their sizes are known only at completion. This case is designed by assuming that the jobs arrive in increasing order of size. The jobs are executed using the algorithms HSIF and NC (the best known to date [11]) and their executions are simulated. The analysed data are given in Tables 5 and 6. Table 5 lists each job's arrival time and importance; the size is computed and observed at the completion of the job. On the basis of the arrival time, the starting time of execution, and the computed completion time, the quality metrics are computed. In this analysis, the quality metrics considered are turnaround time, response time, power consumed, and importance-based flow time. The computed results are given in Tables 5 and 6, with the lower of the values computed using HSIF and NC marked in bold. The job details, such as arrival time, completion time, importance, and size, are the same for Table 5 as well as Table 6.
Table 5. Details and execution information of jobs with increasing order of size using HSIF and NC.
In Table 5, the turnaround time of nine jobs (out of ten) is lower using HSIF than NC, and the data also show that nine jobs have a lower response time using HSIF than NC. Likewise, the average turnaround time and average response time are lower using HSIF than NC. On the basis of these observations, one can conclude that HSIF outperforms the best-known algorithm NC even in this special case, where the jobs arrive in increasing order of size. Table 6 lists the values of three objectives: energy consumed, importance-based flow time, and importance-based flow time plus energy. Six out of ten jobs consume less energy using HSIF than NC, although the total energy consumed by all ten jobs is higher using HSIF than NC. The importance is one of the main factors driving the execution schedule of the jobs. The importance-based flow times of eight jobs (out of ten) are lower using HSIF than NC, and the total importance-based flow time of all ten jobs is also lower using HSIF than NC; this lower value of the metric reflects the better performance of HSIF. The third metric, importance-based flow time plus energy (the main objective of the proposed algorithm), is lower for eight jobs (out of ten) using HSIF than NC, and the average importance-based flow time plus energy over all ten jobs is also lower using HSIF than NC. It can be concluded from the above observations that the objective is better fulfilled by HSIF than by NC.

To extend the analysis and strengthen the performance evaluation, a set of fifty arbitrary jobs with arbitrary arrival times is considered. The size of each job is computed only at its completion time. Five sets of objective values are computed: turnaround time, completion time, response time, importance-based flow time, and importance-based flow time plus energy. The simulation results are stated in Tables 7 and 8. A statistical analysis is conducted on the simulation data provided in Tables 7 and 8. The Independent Samples t Test is used to compare the means of the two independent groups in order to determine whether there is statistical evidence that the associated objective means are significantly different.
The first part, Group Statistics (Table 9), provides basic information about the group comparisons, including the sample size (n), mean, standard deviation, and standard error of each objective by group.
The second part, Independent Samples Test (Table 10), displays the results most relevant to the Independent Samples t Test. It provides two pieces of information: Levene's Test for Equality of Variances and the t-test for Equality of Means. If the Levene p-value is less than or equal to 0.05, one should use the lower row of the output (the row labeled "Equal variances not assumed"). If the p-value is greater than 0.05, one should use the upper row of the output (the row labeled "Equal variances assumed"). Based on the results provided in Tables 9 and 10, the following conclusive remarks are made. It is clearly evident from the statistical analysis and the deduced results that HSIF performs better than the best available scheduling algorithm NC.
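The decision rule just described, using Levene's test to choose between the pooled row and the "Equal variances not assumed" row, corresponds to choosing between Student's and Welch's t-test. Below is a standard-library sketch of the Welch statistic (the lower-row case); the sample values are illustrative, not the paper's data.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom (equal variances not assumed)."""
    m1, m2 = statistics.mean(a), statistics.mean(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)  # sample variances
    n1, n2 = len(a), len(b)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

hsif = [13.0, 9.5, 21.0, 7.2, 15.8, 11.1, 18.4, 10.0]
nc = [20.6, 14.3, 33.9, 12.8, 26.0, 17.5, 29.2, 16.1]
t, df = welch_t(hsif, nc)  # t < 0 here: the first group's mean is lower
```

The Welch degrees of freedom always lie between min(n1, n2) − 1 and n1 + n2 − 2, which is why the "not assumed" row of the output reports a fractional df.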
To further refine the comparison of HSIF and NC, the normalized Z-values of the Energy Consumed by each individual job (ECiJ) and the importance-based flow time of each individual job (IbFTiJ) are computed and provided in Tables 11 and 12. For all jobs, the total and the average of the normalized values of ECiJ + IbFTiJ are provided (in Tables 11 and 12) to reflect the difference between the two algorithms. Both the normalized total and the normalized average of ECiJ + IbFTiJ are lower for HSIF than for NC. This reflects that the normalized value of the dual objective (i.e., the sum of energy consumed and importance-based flow time) is lower, and therefore better, for HSIF than for NC. It is concluded from this analysis that HSIF performs better than NC.
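The normalization just described can be sketched in two steps: standardize each metric to Z-values, sum the two Z-values per job, and map the sums into [0, 1]. The per-job values below are illustrative only, and the helper names `z_scores` and `to_unit_range` are ours, not the paper's.

```python
import statistics

def z_scores(xs):
    """Standardize values to zero mean and unit (sample) standard deviation."""
    mu, sigma = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sigma for x in xs]

def to_unit_range(xs):
    """Rescale values linearly into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

ec = [5.0, 8.2, 3.1, 9.7, 6.4]      # energy consumed per job (ECiJ), illustrative
ibft = [12.0, 4.5, 20.3, 7.8, 9.9]  # importance-based flow time per job (IbFTiJ)
combined = [a + b for a, b in zip(z_scores(ec), z_scores(ibft))]
normalized = to_unit_range(combined)  # one [0, 1] value per job
```

Summing the `normalized` list for each algorithm's schedule gives the per-algorithm totals compared in Tables 11 and 12.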

$a_o$], and $pr_o(a)$ remains the same for $a > a_o$. The rest of the analysis is based on three cases, depending on whether $pr_o > pr_h$, $pr_o < pr_h$, or $pr_o = pr_h$.

Case 1: If $pr_o > pr_h$, then one can observe that:
(a) $\forall a \in [0, a_o]$, $pr_o(a) = pr_o > pr_h \ge pr_h(a)$, which implies $\forall a \in [0, a_o]$, $pr(a) = \left(pr^t_h(a) - pr^t_o(a)\right)^+ = 0$. Therefore $pr(a)$ remains the same; hence, for $a \le a_o$, the rate of change of $pr(a)$ is zero, i.e., $\frac{d}{dt}\,pr(a) = 0$.
(b) If $a > a_o$, $pr_o(a)$ remains the same; therefore the rate of change of $pr(a)$ is non-positive, i.e., $\frac{d}{dt}\,pr(a) \le 0$.

Considering both sub-cases, it is observed that $\forall a \in [0, a_o]$, $pr(a) = \left(pr_h(a) - pr_o(a)\right)^+ \le (pr_h - pr_o)$,

Figure 1. The execution results of jobs and processor speed with respect to time.

Figure 2. Energy consumption of the processor and jobs.

The importance-based flow time and the importance-based flow time plus energy of individual jobs are shown in Figures 3 and 4, respectively. Most of the individual jobs have lower values of both metrics when executed using HSIF than using NC. HSIF and NC both compete to reduce the totals of importance-based flow time and importance-based flow time plus energy, as shown in Figures 3a and 4a, respectively. In the later stage, the totals for HSIF are lower than for NC. The total importance-based flow time and the total importance-based flow time plus energy for the processor are lower using HSIF than NC, and the value of the considered objective is lower most of the time when using HSIF. From these observations, it is concluded that the performance of HSIF is better and more consistent than that of the best-known algorithm NC.

Figure 3. Importance-based flow time of jobs.

Figure 4. Importance-based flow time with energy of jobs.
The Z values of the Energy Consumed by each individual job (ECiJ) and the importance-based flow time of each individual job (IbFTiJ) are summed and converted into the range [0, 1] for each job, as shown in Tables 11 and 12.

Table 3. Job details and execution information using HSIF and NC.

Table 4. Three objective values for jobs using HSIF and NC.

Table 6. Three objective values for jobs arriving in increasing order of size using HSIF and NC.

Table 7. Details and execution information of jobs with random order of size and importance using HSIF and NC.

Table 8. Three objective values for jobs arriving in random order of size using HSIF and NC.

• For Turnaround Time, the p-value in Levene's Test for Equality of Variances is less than 0.05; therefore, the null hypothesis (that the variability of the two groups is equal) is rejected, and the lower row of the output (labeled "Equal variances not assumed") is considered. The t test revealed a statistically reliable difference between the mean Turnaround Time of HSIF (M = 13.42, s = 12.511366261) and NC (M = 20.6, s = 19.786616792), with t(82.782) = 2.17, p = 0.033.
• The total Turnaround Time for HSIF is 359 time units less than the total for NC; the average Turnaround Time for HSIF is 7.18 time units less than the average for NC.
• For Response Time, the p-value in Levene's Test for Equality of Variances is less than 0.05; therefore, the null hypothesis is rejected, and the lower row of the output ("Equal variances not assumed") is considered. The t test revealed a statistically reliable difference between the mean Response Time of HSIF (M = 11.72, s = 12.748813) and NC (M = 18.32, s = 19.976966), with t(83.23) = 2.17, p = 0.05.
• The total Response Time for HSIF is 330 time units less than the total for NC; the average Response Time for HSIF is 6.6 time units less than the average for NC.
• For Completion Time, the p-value in Levene's Test for Equality of Variances is greater than 0.05; therefore, the null hypothesis (that the variability of the two groups is equal) is retained. Although the statistical test failed to identify a difference between HSIF and NC on the basis of energy consumed, the total Energy Consumed by HSIF is 3909.937747 units less than that of NC, and the average Energy Consumed by HSIF is 78.07875 units less than that of NC.
• For Importance-based Flow Time, the p-value in Levene's Test for Equality of Variances is less than 0.05; therefore, the null hypothesis is rejected, and the lower row of the output ("Equal variances not assumed") is considered. The t test revealed a statistically reliable difference between the mean Importance-based Flow Time of HSIF (M = 2479.15, s = 3625.2051) and NC (M = 15373.3, s = 21122.893), with t(63.08) = 1.95, p = 0.05.
• The total Importance-based Flow Time for HSIF is 139381.7662 units less than the total for NC; the average Importance-based Flow Time for HSIF is 2787.635324 units less than the average for NC.
• For Importance-based Flow Time plus Energy, the p-value in Levene's Test for Equality of Variances is less than 0.05; therefore, the null hypothesis is rejected, and the lower row of the output ("Equal variances not assumed") is considered. The t test revealed a statistically reliable difference between the mean Importance-based Flow Time plus Energy of HSIF (M = 2675.83, s = 3774.8105) and NC (M = 5541.57, s = 9740.346), with t(63.39) = 1.94, p = 0.05.
• The total Importance-based Flow Time plus Energy for HSIF is 143286.703 units less than the total for NC; the average Importance-based Flow Time plus Energy for HSIF is 2865.73406 units less than the average for NC.

Table 9. Group statistics of objective values for HSIF and NC.

Table 10. Statistics of objective values for HSIF and NC using the Independent Samples t Test.