Article

Managing Energy Plus Performance in Data Centers and Battery-Based Devices Using an Online Non-Clairvoyant Speed-Bounded Multiprocessor Scheduling

1 Department of Computer Science and Engineering, Amity School of Engineering and Technology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow 226010, India
2 Department of Electrical and Computer Engineering, Hawassa University, Hawassa P.O. Box 05, Ethiopia
3 Power System Planning Division, Rajasthan Rajya Vidhyut Prasaran Nigam Ltd., Jaipur 302005, India
4 Department of Electrical Power Engineering, Faculty of Mechanical and Electrical Engineering, Tishreen University, 2230 Lattakia, Syria
5 Department of Electrical Power Engineering, Dresden University, 01069 Dresden, Germany
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(7), 2459; https://doi.org/10.3390/app10072459
Submission received: 5 March 2020 / Revised: 22 March 2020 / Accepted: 30 March 2020 / Published: 3 April 2020

Abstract: Efficient scheduling reduces the time required to process jobs, and energy management lowers the service cost and extends the lifetime of a battery. A balanced trade-off between the energy consumed and the processing time is therefore a natural objective for scheduling jobs in data centers and battery-based devices. An online multiprocessor scheduling algorithm, multiprocessor with bounded speed (MBS), is proposed in this paper. The objective of MBS is to minimize the importance-based flow time plus energy (IbFt+E), where jobs arrive over time and a job's size is known only at its completion. Every processor can run at a different speed in order to reduce energy consumption. MBS uses the traditional power function and the bounded speed model. The performance of MBS is evaluated by a potential function analysis against an offline adversary. For m ≥ 2 processors, MBS is O(1)-competitive. The processing of a set of jobs is simulated to compare MBS with the best known non-clairvoyant scheduling algorithms, and the comparative analysis shows that MBS outperforms them. The competitiveness of MBS is the lowest reported to date.

1. Introduction

There are numerous server farms equipped with hundreds of processors. The cost of the energy used for cooling and running a machine for around three years surpasses the hardware cost of the machine [1]. Consequently, major integrated-circuit manufacturers such as Intel and AMD produce dynamic speed scaling (DSS) enabled multiprocessor/multi-core machines and software such as Intel's SpeedStep [2], which support the operating system in managing energy by varying the execution speed of processors. The chip maker Tilera forecast that the number of processors/cores will double every eighteen months [3], which will increase the energy demand to a great extent. Data centers consume 1.5% of the total electricity usage in the United States [4]. To avoid such critical circumstances, the current issue in scheduling is to attain a good quality of service by generating an optimal schedule of jobs while also limiting the energy consumption, which is a conflicting and complicated problem [5].
The power P consumed by a processor running at speed s is proportional to $sV^2$, where V is the voltage [6]. The traditional power function is $P = s^{\alpha}$ ($\alpha \ge 2$ for CMOS-based chips [7,8]). There are two speed models: the unbounded speed model, in which the processor's speed can take any value in $[0, \infty)$, and the bounded speed model, in which the speed of a processor can range from zero to some maximum speed $\eta$, i.e., $[0, \eta]$. DSS plays a vital role in energy management, since a processor can regulate its speed to save energy. A few quality-of-service metrics are slowdown, throughput, makespan, flow time and weighted flow time. At low speed the processor finishes jobs more slowly and saves energy, whereas at high speed the processor finishes jobs faster but consumes more energy, as shown in Figure 1. To obtain a better quality of service and low energy consumption, the objective should be to minimize the sum of flow time and energy; if an importance or priority is attached to jobs, the objective should be to minimize the sum of importance-based flow time and energy. The objective of minimizing the IbFt+E has a natural explanation, as it can be considered in monetary terms [9].
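To make this trade-off concrete, the following minimal Python sketch (illustrative values only, not taken from the paper) evaluates the flow time, the energy and their sum for a single job of size w run at a constant speed s under the traditional power function $P = s^{\alpha}$: a higher speed shortens the flow time but increases the energy.

```python
# Illustrative sketch (not from the paper): trade-off between flow time and
# energy for ONE job of size w run at a constant speed s, assuming the
# traditional power function P = s**alpha.
def flow_plus_energy(w, s, alpha=2.0):
    flow_time = w / s                    # time needed to finish the job
    energy = (s ** alpha) * flow_time    # power * running time = w * s**(alpha-1)
    return flow_time, energy, flow_time + energy

for s in (0.5, 1.0, 2.0, 4.0):
    f, e, total = flow_plus_energy(w=8.0, s=s)
    print(f"s={s:3.1f}  flow={f:5.2f}  energy={e:6.2f}  flow+energy={total:6.2f}")
```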
In multiprocessor systems, three different policies are required: the first is job selection, which decides the next job to be executed on every processor; the second is speed scaling, which decides every processor's execution speed at all times; the third is job assignment, which indicates to which processor a new job should be assigned. In a c-competitive online scheduling algorithm, for each input the cost incurred is at most c times the cost of the optimal offline algorithm [9]. In non-clairvoyant scheduling, the size of a job is unknown at its arrival time, as in the UNIX operating system, where jobs arrive with no information about their processing requirement. Unlike the online mode, in the offline mode the whole job progression is known in advance. No online algorithm can attain a constant competitiveness when given the same maximum speed as the optimal offline algorithm [10].
Motwani et al. [10] commenced the study of non-clairvoyant scheduling algorithms. Yao et al. initiated the theoretical study of speed scaling scheduling algorithms [11] and proposed the average rate heuristic (AVR) with a competitive ratio of at most $2^{\alpha-1}\alpha^{\alpha}$ under the traditional power function. Koren et al. [12] presented an optimal online scheduling algorithm $D^{over}$ for an overloaded uniprocessor system with competitive ratio $\frac{1}{(1+\sqrt{k})^{2}}$ for the objective of maximizing the throughput, where k is the importance ratio. The competitiveness of shortest remaining processing time (SRPT) for a multiprocessor system is $O(\min(\log(n/m), \log \sigma))$, where m is the number of processors, n is the total number of jobs and $\sigma$ is the ratio of the maximum to the minimum job size [13]. Kalyanasundaram et al. [14] presented the idea of resource augmentation. If the resources are augmented and $(2+\Delta)$-speed processors are used, then the competitive ratio of Equi-partition lies between $\frac{2}{3}(1+\Delta)$ and $(2+\frac{4}{\Delta})$ [15]. Multilevel feedback queue, a randomized algorithm, is $O(\log n)$-competitive for n jobs [16,17]. The first algorithm with a non-trivial guarantee is $O(\log^{2}\sigma)$-competitive [18], where $\sigma$ is the ratio of the maximum to the minimum job size. Different algorithms have been proposed with different objectives over a span of time [19,20,21,22,23,24,25,26,27].
Chen et al. [19] proposed algorithms with different approximation bounds for processors with and without constraints on the maximum processor speed. The concept of merging the dual objectives of energy used and total flow time into the single objective of energy used plus total flow time was proposed by Albers et al. [20]. Bansal et al. [21] proposed an algorithm that uses highest density first (HDF) for job selection with the traditional power function. Lam et al. [22] proposed a multiprocessor algorithm for homogeneous processors in which the job assignment policy is a variant of round robin. Random dispatching can provide a $(1+\Delta)$-speed $O(\frac{1}{\Delta^{3}})$-competitive non-migratory algorithm [23]. Chan et al. [24] proposed an $O(1)$-competitive algorithm using sleep management for the objective of minimizing the flow time plus energy. Albers et al. [25] studied an offline problem solvable in polynomial time and proposed a fully combinatorial algorithm that relies on repeated maximum flow computation. Gupta et al. [26] proved that highest density first, weighted shortest elapsed time first and weighted late arrival processor sharing are not $O(1)$-speed $O(1)$-competitive for the objective of minimizing the weighted flow time, even for fixed-speed processors in a heterogeneous multiprocessor setting. Chan et al. [27] studied an online clairvoyant sleep management scheduling algorithm with arrival-time-alignment (SATA), which is $(1+\Delta)$-speed $O(\frac{1}{\Delta^{2}})$-competitive for the objective of minimizing the flow time plus energy. For a detailed survey, refer to [28,29,30,31,32,33,34].
In this paper, the problem of online non-clairvoyant (ON-C) DSS scheduling is studied and an algorithm, multiprocessor with bounded speed (MBS), is proposed with the objective of minimizing the IbFt+E. On the basis of a potential function analysis, MBS is O(1)-competitive. The notations used in this paper are listed in Table 1.
The organization of the paper is as follows. In Section 2, related non-clairvoyant algorithms are explained and their competitive values are compared with the proposed algorithm MBS. Section 3 presents the preliminary definitions and information for the proposed work. Section 4 describes the methodology. Section 5 presents the proposed algorithm, its flow chart and its potential function analysis; the processing of a set of jobs is simulated using MBS and the best identified algorithm to observe the working of MBS. Section 6 provides the conclusion and future scope of the work.

2. Related Work

Gupta et al. [35] gave an online clairvoyant scheduling algorithm GKP (proposed by Gupta, Krishnaswamy and Pruhs) for the objective of minimizing the weighted flow time plus energy. Under the traditional power function, GKP is $O(\alpha^{2})$-competitive without resource augmentation for power-heterogeneous processors. GKP uses highest density first (HDF) for the selection of jobs on each processor; the speed of any processor is scaled such that the power of the processor equals the fractional weight of its unfinished jobs; jobs are assigned in such a way that the assignment gives the least increase in the projected future weighted flow time. Gupta et al. [35] used a local competitiveness analysis to prove their result. Fox et al. [36] considered the problem of scheduling parallelizable jobs in the non-clairvoyant speed scaling setting for the objective of minimizing the weighted flow time plus energy, and they used a potential function analysis to prove their result. Fox et al. presented weighted latest arrival processor sharing with energy (WLAPS+E), which schedules the late-arriving jobs, with every job using a number of machines proportional to its weight. WLAPS+E spares some machines to save energy. WLAPS+E is $(1+6\Delta)$-speed $(\frac{5}{\Delta^{2}})$-competitive, where $0 < \Delta \le \frac{1}{6}$. Thang [37] studied the online clairvoyant scheduling problem for the objective of minimizing the weighted flow time plus energy in the unbounded speed model using the traditional power function. Thang gave an algorithm (ALGThang) for unrelated machines and proved that ALGThang is $8(1+\alpha\ln\alpha)$-competitive. In ALGThang, the speed of any processor depends on the total weight of pending jobs on that machine, and any new job is assigned to a processor that minimizes the total weighted flow time.
Im et al. [38] proposed an ON-C scheduling algorithm, SelfishMigrate-Energy (SM-E), for the objective of minimizing the weighted flow time plus energy on unrelated machines. Using the traditional power function, SM-E is $O(\alpha^{2})$-competitive. In SM-E, a virtual queue is maintained on every processor, where new or migrated jobs are added at the tail; jobs migrate selfishly until an equilibrium is reached. Im et al. simulate sequential best response (SBR) dynamics and migrate each job to the machine given by the Nash equilibrium. The scheduling policy applied on every processor is a variant of weighted round robin (WRR), wherein a larger speed is allotted to jobs residing at the tail of the queue (like latest arrival processor sharing (LAPS) and weighted latest arrival processor sharing (WLAPS)). Bell et al. [39] proposed an online deterministic clairvoyant algorithm, dual-classified round robin (DCRR), for the multiprocessor system using the traditional power function. The motive of the $(2^{4\alpha}(\log^{\alpha}P + \alpha^{\alpha}2^{\alpha-1}))$-competitive DCRR is to schedule jobs so that they are completed within their deadlines using minimum energy, i.e., the objective is to maximize the throughput while minimizing the energy consumption. In DCRR, the sizes and the maximum densities (= size/(deadline − release time)) of jobs are known, and the classification of jobs depends on both the size and the maximum density. The competitive ratio of DCRR is high, as it considers jobs with deadlines and uses a variation of round robin with speed scaling.
Azar et al. [40] gave an ON-C scheduling algorithm NC-PAR (non-clairvoyant for parallel machines) for identical parallel machines, wherein job migration is not permitted. Using the traditional power function, NC-PAR is $(\alpha + \frac{1}{\alpha-1})$-competitive for the objective of minimizing the weighted flow time plus energy in the unbounded speed model. In NC-PAR, a global queue of unassigned jobs is maintained in first in first out (FIFO) order, and a new job is assigned to a machine when a machine becomes free. In NC-PAR, jobs have uniform density (i.e., weight/size = 1) and jobs are not immediately allotted to processors at their release time. The speed of a processor using NC-PAR is based on the total remaining weight of the active jobs. In the non-clairvoyant model with arbitrary known weights, no results are known [40].
An ON-C multiprocessor speed scaling scheduling algorithm, MBS, is proposed and studied against an offline adversary with the objective of minimizing IbFt+E. The speed of a processor using MBS is proportional to the sum of the importance of all active jobs on that processor. In MBS, a processor's maximum speed can be $(1+\frac{\Delta}{3m})\eta$ (i.e., the speed ranges from zero to $(1+\frac{\Delta}{3m})\eta$), whereas the processor's maximum speed using Opt (the optimal algorithm) is $\eta$, where m is the number of processors and $0 < \Delta \le \frac{1}{3\alpha}$ is a constant. In MBS, a new job is assigned to an idle processor (if available) or to the processor having the minimum sum of the ratios of importance to executed size over all jobs on that processor; the policy for job selection is weighted/importance-based round robin, and each active job receives a share of the processor speed equal to the ratio of its importance to the total importance of jobs on that processor. In this paper, the performance of MBS is analysed using a competitive analysis, i.e., a worst-case comparison of MBS with the optimal offline scheduling algorithm. MBS is $(1+\frac{\Delta}{3m})$-speed, $(\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha}) = O(1)$-competitive; e.g., the value of the competitive ratio c for m = 2, α = 2 is 2.442 and for m = 2, α = 3 is 2.399. The detailed results for different values of m, with $\Delta = \frac{1}{3\alpha}$ and $\alpha = 2$ and 3, are shown in Table 2. The comparison of results is given along with the summary of results in Table 3.
On the basis of the values in Table 2, it can be observed that, in the proposed algorithm MBS, as the number of processors increases the speed ratio and the competitive ratio decrease. The data in Table 3 list the competitive values of different scheduling algorithms. The competitive ratios of some clairvoyant and non-clairvoyant algorithms are considered at α = 2 and α = 3. A lower competitive value represents a better algorithm. The value of competitiveness is the least for the proposed algorithm MBS.
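The closed-form competitive ratio quoted above can be checked directly; the short sketch below (an illustrative helper, not part of the paper's simulation code) evaluates $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$ with $\Delta = \frac{1}{3\alpha}$, reproducing the quoted values 2.442 (m = 2, α = 2) and 2.399 (m = 2, α = 3) and showing how the ratio behaves as m grows.

```python
# Sketch reproducing the competitive ratio c of MBS from the closed form stated
# above, with Delta = 1/(3*alpha); not the authors' code.
def mbs_competitive_ratio(m, alpha):
    delta = 1.0 / (3 * alpha)
    return (9 / 8 + 3 * delta / 8) * (1 + (1 + delta / (3 * m)) ** alpha)

for m in (2, 3, 4):
    for alpha in (2, 3):
        print(f"m={m} alpha={alpha}  c={mbs_competitive_ratio(m, alpha):.3f}")
```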

3. Definitions and Notations

An ON-C job scheduling problem on a multiprocessor in the speed-bounded setting is considered, where jobs arrive over time, a job's importance/weight is known at its release time, and the size of a job is revealed only after the job's completion. A processor's speed using Opt can vary dynamically from 0 to the maximum speed $\eta$, i.e., within $[0, \eta]$. The jobs are sequential in nature, and unrestricted pre-emption is permitted without penalty. The traditional power function $Power\ P = speed^{\alpha}$ is considered, where $\alpha > 1$ is a fixed constant. If s is the processor's speed, then the processor executes s units of work per unit time. An active job j has a release time not later than the current time t and is not yet completely executed. The flow time $F(j)$ of a job j is the time duration from the release of j until it is completed. The total importance-based flow time is $F = \sum_{j \in I} imp(j)\cdot F(j)$. Amortized analysis is used for algorithms in which an occasional operation is very slow but most other operations are faster. In amortized analysis, we analyse a sequence of operations and guarantee a worst-case average time that is lower than the worst-case time of a particularly expensive operation.

4. Methodology

In this study, an amortized potential function analysis of the objective is used to examine the performance of the proposed algorithm. Amortized analysis is a worst-case analysis of a sequence of operations, used to obtain a tighter bound on the overall or average cost per operation in the sequence than is obtained by analysing each operation separately. In the amortized potential method, we derive a potential function characterizing the amount of extra work we can do in each step; this potential either increases or decreases with each successive operation, but it cannot be negative. The objective of the study is to minimize the total IbFt+E, denoted by G = F + E. It reflects that the target is to minimize the importance-based flow time, a quality-of-service measure, together with the energy consumed. The input to the problem is a set of jobs I. A scheduler generates the schedule S of the jobs in I. The total energy consumption E of the schedule is $\int_{0}^{\infty} s(t)^{\alpha}\,dt$. Let Opt be an optimal offline algorithm such that, for any job sequence I, the IbFt+E $F_{Opt}(I) + E_{Opt}(I)$ of Opt is minimized among all schedules of I. The notations used in MBS are listed in Table 1. Any online algorithm ALG is said to be c-competitive for c ≥ 1 if, for all job sequences I and any input, the cost incurred is never greater than c times the cost of the optimal offline algorithm Opt, and the following inequality is satisfied:
$F_{ALG}(I) + E_{ALG}(I) \le c \cdot (F_{Opt}(I) + E_{Opt}(I))$
The traditional power function is utilized to simulate the working of the proposed algorithm and to examine its effectiveness by comparing it with the best known available algorithm. Jobs of different sizes are taken, and the arrival of jobs is considered in different scenarios to critically examine the performance of the proposed algorithm. Different parameters (such as IbFt, IbFt+E, processor speed and speed growth) are considered to evaluate the algorithm.
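As a minimal illustration of how the objective G = F + E is evaluated (with a hypothetical toy schedule, not the instances used in the reported simulation), the sketch below sums $imp(j)\cdot F(j)$ over the jobs and integrates $s(t)^{\alpha}$ over a piecewise-constant speed profile.

```python
# Minimal sketch (assumed toy instance, not the paper's simulation): computing
# the objective G = IbFt + E for a single processor whose speed is piecewise
# constant.  F(j) = completion(j) - release(j); E = integral of s(t)**alpha dt.
jobs = [(0.0, 4.0, 2.0), (1.0, 6.0, 1.0)]            # (release, completion, importance) -- hypothetical
speed_profile = [(0.0, 2.0, 1.5), (2.0, 6.0, 1.0)]   # (t_start, t_end, speed)           -- hypothetical
alpha = 2

ibft = sum(imp * (c - r) for (r, c, imp) in jobs)                 # sum of imp(j) * F(j)
energy = sum((t1 - t0) * s ** alpha for (t0, t1, s) in speed_profile)
print("IbFt =", ibft, " E =", energy, " IbFt+E =", ibft + energy)
```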

5. An O(1)-Competitive Algorithm

An ON-C multiprocessor scheduling algorithm, multiprocessor with bounded speed (MBS), is explained in this section. The performance of MBS is analysed using a potential function analysis, i.e., a worst-case comparison of MBS with an offline adversary Opt. The competitiveness of MBS is $O(1)$ for the objective of minimizing the IbFt+E on m processors with maximum speed $(1+\frac{\Delta}{3m})\eta$.

5.1. Multiprocessor with Bounded Speed Algorithm: MBS

At time t, the processing speed of u is adjusted to $s_{ua}(t) = (1+\frac{\Delta}{3m})\cdot\min\left((imp_{ua}(t)\,Ϯ)^{\frac{1}{\alpha}},\ \eta\right)$, where $0 < \Delta \le \frac{1}{3\alpha}$, $Ϯ \ge 1$ and $\alpha \ge 2$ are constants. The importance $imp(j)$ of a job is not known in advance and is revealed only at its release time $r(j)$. The policies of the multiprocessor scheduling algorithm MBS are as follows:
Job selection policy: The importance-based/weighted round robin is used on every processor.
Job assignment policy: A newly arrived job is allotted to an idle processor (if available) or to the processor having the minimum sum of the ratios of importance to executed size over all jobs on that processor (i.e., $\min \sum_{f=1}^{n_{ua}} \frac{imp_{u}(j_{f})}{exs_{u}(j_{f})}$).
Speed scaling policy: The speed of every processor is scaled on the basis of the total importance of the active jobs on that processor. Every active job $j_i$ on u obtains the fraction of speed:
processor's speed × (importance of $j_i$ / total importance of all active jobs on that processor),
i.e., $s_{ua}\cdot\frac{imp_{u}(j_{i})}{\sum_{k=1}^{n_{ua}} imp_{u}(j_{k})}$ or $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$. The speed of any processor is re-evaluated on any alteration in the total importance of the active jobs on that processor. MBS is compared against an optimal offline algorithm Opt using a potential function analysis. The principal result of this study is stated in Theorem 1. Algorithm 1 for MBS is given next, and the flow chart of MBS is given in Figure 2.
Algorithm 1: MBS (Multiprocessor with Bounded Speed)
Input: total m processors $\{u_1, \ldots, u_k, \ldots, u_m\}$, $n_a$ active jobs (NoAJ) $\{j_1, \ldots, j_i, \ldots, j_{n_a}\}$ and the importance of all $n_a$ active jobs $\{imp(j_1), \ldots, imp(j_i), \ldots, imp(j_{n_a})\}$.
Output: the number of jobs allocated to every processor, the speed of every processor at any time, and the execution speed share of each active job.
Repeat until all processors become idle:
1. If any job $j_i$ arrives
2.  if $m \ge n_a$
3.   allocate job $j_i$ to an idle processor u
4.  otherwise, when $m < n_a$
5.   allocate job $j_i$ to the processor u with $\min \sum_{f=1}^{n_{ua}} \frac{imp_{u}(j_{f})}{exs_{u}(j_{f})}$
6.  $imp_{ua} = imp_{ua} + imp_{u}(j_{i})$
7.  $s_{ua} = (1+\frac{\Delta}{3m})\cdot\min\left((imp_{ua}\,Ϯ)^{\frac{1}{\alpha}},\ \eta\right)$, where $0 < \Delta \le \frac{1}{3\alpha}$ and $Ϯ \ge 1$ is a constant
8. Otherwise, if any job $j_i$ completes on a processor u and other active jobs are available for execution on that processor, then
9.  $imp_{ua} = imp_{ua} - imp_{u}(j_{i})$
10. $s_{ua} = (1+\frac{\Delta}{3m})\cdot\min\left((imp_{ua}\,Ϯ)^{\frac{1}{\alpha}},\ \eta\right)$, where $0 < \Delta \le \frac{1}{3\alpha}$ and $Ϯ \ge 1$ is a constant
11. the speed received by any job $j_i$ executing on a processor u is $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$
12. Otherwise, processors continue to execute the remaining jobs
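A compact, discrete-time Python sketch of the three MBS policies is given below; it is an assumed simulation harness rather than the authors' implementation, the constant Ϯ is taken as 1, and executed sizes are clamped to a small ε in the assignment metric to avoid division by zero for newly released jobs. Job sizes are used only to detect completion, never to drive scheduling decisions, so the scheduler remains non-clairvoyant.

```python
# Minimal discrete-time sketch of the MBS policies (assumed harness, not the
# authors' code).  Assignment: idle processor first, else min sum(imp/exs).
# Speed scaling: s_ua = (1 + Delta/(3m)) * min(imp_ua**(1/alpha), eta).
# Job selection: importance-based round robin, realized as processor sharing
# where each job receives a speed share proportional to its importance.
class Job:
    def __init__(self, name, release, size, imp):
        self.name, self.release, self.size, self.imp = name, release, size, imp
        self.done = 0.0          # executed size exs_u(j)
        self.finish = None

def mbs_simulate(jobs, m=2, alpha=2, eta=2.0, dt=0.01, eps=1e-9):
    delta = 1.0 / (3 * alpha)
    procs = [[] for _ in range(m)]                    # active jobs per processor
    t, pending, energy = 0.0, sorted(jobs, key=lambda j: j.release), 0.0
    while pending or any(procs):
        # job assignment policy
        while pending and pending[0].release <= t + eps:
            j = pending.pop(0)
            idle = [q for q in procs if not q]
            target = idle[0] if idle else min(
                procs, key=lambda q: sum(x.imp / max(x.done, eps) for x in q))
            target.append(j)
        # speed scaling + importance-based round robin (processor-sharing form)
        for q in procs:
            if not q:
                continue
            imp_ua = sum(x.imp for x in q)
            s = (1 + delta / (3 * m)) * min(imp_ua ** (1 / alpha), eta)
            energy += (s ** alpha) * dt
            for x in q:
                x.done += s * (x.imp / imp_ua) * dt
            for x in [x for x in q if x.done >= x.size - eps]:
                x.finish = t + dt
                q.remove(x)
        t += dt
    ibft = sum(j.imp * (j.finish - j.release) for j in jobs)
    return ibft, energy

jobs = [Job("j1", 0.0, 3.0, 2.0), Job("j2", 0.5, 2.0, 1.0), Job("j3", 1.0, 4.0, 3.0)]
ibft, energy = mbs_simulate(jobs)
print(f"IbFt = {ibft:.2f}, E = {energy:.2f}, IbFt+E = {ibft + energy:.2f}")
```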
Theorem 1.
When two or more processors are used (i.e., $m \ge 2$) and each processor has a permitted maximum speed of $(1+\frac{\Delta}{3m})\eta$, MBS is c-competitive for the objective of minimizing the IbFt+E, where $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha}) = O(1)$ and $0 < \Delta \le \frac{1}{3\alpha}$.

5.2. Necessary Conditions to be Fulfilled

A potential function is needed to establish the c-competitiveness of an algorithm. An algorithm is c-competitive if, at any time t, the sum of the increase in the objective cost of the algorithm and the change in the value of the potential is at most c times the increase in the objective cost of the optimal adversary algorithm. A potential function $\Phi(t)$ is required to demonstrate that MBS is c-competitive. A c-competitive algorithm should satisfy the following conditions:
Boundary Condition: The value of the potential function is zero before the release of any job and after the completion of all jobs.
Job Arrival and Completion Condition: The value of the potential function does not increase on the arrival or completion of a job.
Running Condition: At any time when the above conditions do not apply, the sum of the rate of change (RoC) of $G_a$ and the RoC of $\Phi$ is at most c times the RoC of $G_o$:
$\frac{dG_{a}(t)}{dt} + \gamma\cdot\frac{d\Phi}{dt} \le c\cdot\frac{dG_{o}(t)}{dt}$, where $\gamma > 0$.

5.3. Potential Function Φ(t)

An active job j is lagging if $(pwk_{a}(j, t) - pwk_{o}(j, t)) > 0$. Since t is the instantaneous time, this argument is dropped from the rest of the analysis. For any processor u, let $LG_{u} = \{j_1, j_2, \ldots, j_{lg_u}\}$ be the group of lagging jobs under MBS, maintained in ascending order of the latest time at which each job became lagging. $LG = \bigcup_{u=1}^{m} LG_{u}$ is the set of all lagging jobs on all m processors. Further, $imp_{lgu} = \sum_{i=1}^{lg_u} imp_{u}(j_{i})$ is the sum of the importance of the lagging jobs on a processor u, and $imp_{lg} = \sum_{u=1}^{m} imp_{lgu}$ is the sum of the importance of the lagging jobs on all m processors. Our potential function $\Phi(t)$ for IbFt+E is the sum of the potential values of the m processors:
$\Phi(t) = \sum_{u=1}^{m} \Phi_{u}(t)$
$\Phi_{u}(t) = \begin{cases} \sum_{i=1}^{lg_u} \left(\sum_{k=1}^{i} imp_{u}(j_{k})\right)^{1-2\delta}\cdot\omega_{i} & \text{if } \sum_{k=1}^{i} imp_{u}(j_{k}) \le \eta^{\frac{1}{2\delta}} \\ \sum_{i=1}^{lg_u} \left(\frac{1}{1-\delta}\right)\cdot\left(\sum_{k=1}^{i} imp_{u}(j_{k})\cdot\eta^{-1}\right)\cdot\omega_{i} & \text{otherwise} \end{cases}$
where $\omega_{i} = \max\{0,\ (pwk_{a}(j_{i}, t) - pwk_{o}(j_{i}, t))\}$, $\delta = \frac{1}{2\alpha}$, and
$\left(\sum_{k=1}^{i} imp_{u}(j_{k})\right)^{1-2\delta}$ and $\left(\frac{1}{1-\delta}\right)\cdot\left(\sum_{k=1}^{i} imp_{u}(j_{k})\cdot\eta^{-1}\right)$
are the coefficients $c_{i}$ of $j_{i}$ on processor u.
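For readability, the following small Python helper (a sketch of the reconstructed definition above, with $\delta = \frac{1}{2\alpha}$ and hypothetical inputs) evaluates $\Phi_u(t)$ from the lagging jobs of one processor, switching between the two coefficient forms at the threshold $\eta^{\frac{1}{2\delta}} = \eta^{\alpha}$.

```python
# Sketch of the per-processor potential Phi_u(t) as defined above (assumed helper).
# lagging: list of (imp, pwk_a, pwk_o) for the lagging jobs on processor u,
# in ascending order of the time they became lagging.
def phi_u(lagging, eta, alpha):
    delta = 1.0 / (2 * alpha)
    total, prefix = 0.0, 0.0
    for imp, pwk_a, pwk_o in lagging:
        prefix += imp                                  # sum_{k<=i} imp_u(j_k)
        omega = max(0.0, pwk_a - pwk_o)
        if prefix <= eta ** (1.0 / (2 * delta)):       # i.e. prefix <= eta**alpha
            coeff = prefix ** (1 - 2 * delta)
        else:
            coeff = (1.0 / (1 - delta)) * (prefix / eta)
        total += coeff * omega
    return total

print(phi_u([(1.0, 3.0, 1.0), (2.0, 5.0, 2.0)], eta=2.0, alpha=2))
```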
MBS is analysed on a per-machine basis. First, the verification of the boundary condition: the value of $\Phi$ is zero after the completion of all jobs and prior to the release of any job on any processor, since in both situations there is no active job on any processor. Therefore, the boundary condition holds. Second, the verification of the arrival and completion condition: at time t, on the release of a new job $j_{i}$ in I, $j_{i}$, having received no execution, is appended at the end of the list; $\omega_{i}$ is zero because $pwk_{a}(j_{i}, t) - pwk_{o}(j_{i}, t) = 0$. The coefficients of all other jobs do not change and $\Phi$ remains unchanged. At the completion of a job $j_{i}$, $\omega_{i}$ becomes zero and the coefficients of the other lagging jobs either remain unchanged or decrease, so $\Phi$ does not increase. Thus, the arrival and completion condition holds. The third and last condition to confirm is the running condition, with no job arrival or completion.
Following the previous discussion, for any processor u, let $\frac{dG_{ua}}{dt} = imp_{ua} + s_{ua}^{\alpha}$ and $\frac{dG_{uo}}{dt} = imp_{uo} + s_{uo}^{\alpha}$ be the change of IbFt+E in an infinitesimal period of time $[t, t+dt]$ under MBS and Opt, respectively. The change of $\Phi$ due to Opt and MBS in $[t, t+dt]$ on u is $\frac{d\Phi_{uo}}{dt}$ and $\frac{d\Phi_{ua}}{dt}$, respectively, and the total change in $\Phi$ on u is $\frac{d\Phi_{u}}{dt} = \frac{d\Phi_{uo}}{dt} + \frac{d\Phi_{ua}}{dt}$. As this is a multiprocessor system, to bound the RoC of $\Phi$ by Opt and MBS the analysis is divided into two cases based on $n_{a}$ and m; each case is further divided into sub-cases depending on whether $imp_{ua} > \eta^{\alpha}$ and whether $imp_{lgu} > \eta^{\alpha}$; afterwards, each sub-case is further divided into two sub-cases depending on whether $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ or $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, where $0 < \Delta < 1$ and $\mu = \frac{3}{3+\Delta}$. The potential analysis is done on an individual-processor basis because all the processors need not face the same case at the same time; different processors may face the same or different cases.
Lemma 1.
For positive real numbers x, y, A and B, if $x^{-1} + y^{-1} = 1$ holds, then [2]:
$x^{-1}\cdot A^{x} + y^{-1}\cdot B^{y} \ge A\cdot B$
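Lemma 1 is Young's inequality with conjugate exponents; a quick numerical spot-check (illustrative only, with random sample points) is sketched below.

```python
# Numerical spot-check of Lemma 1 (Young's inequality) with 1/x + 1/y = 1.
import random
random.seed(0)
for _ in range(5):
    x = random.uniform(1.1, 5.0)
    y = x / (x - 1)                                   # conjugate exponent of x
    A, B = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    lhs = (A ** x) / x + (B ** y) / y
    assert lhs >= A * B - 1e-9
    print(f"x={x:.2f} y={y:.2f}  lhs={lhs:.3f} >= A*B={A * B:.3f}")
```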
Lemma 2.
If $n_{a} \le m$ and $imp_{lgu} \le \eta^{\alpha}$, then
(a) $\frac{d\Phi_{uo}}{dt} \le \frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}$; (b) $\frac{d\Phi_{ua}}{dt} = -\left(s_{ua}\cdot imp_{lgu}^{1-2\delta}\right)$
Proof.
If $n_{a} \le m$, then every processor executes not more than one job, i.e., every job is processed on an individual processor.
(a) It is required to upper-bound $\frac{d\Phi_{uo}}{dt}$ for a processor u. To calculate the upper bound, the worst case is considered, which occurs if Opt executes a job on u with the largest coefficient $c_{lgu} = imp_{lgu}^{1-2\delta}$. In this case, $\omega_{i}$ increases at the rate $s_{uo}$ (because of Opt on u). The number of lagging jobs on any such u can be at most one.
$\frac{d\Phi_{uo}}{dt} \le c_{lgu}\cdot s_{uo} = imp_{lgu}^{1-2\delta}\cdot s_{uo}$
Using Young's inequality, Lemma 1 (Equation (6)), in (7) with $A = s_{uo}$, $B = (imp_{lgu})^{1-2\delta}$, $x = \alpha$ and $y = \frac{1}{1-2\delta}$, we have:
$\frac{d\Phi_{uo}}{dt} \le \frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}$
(b) Next, it is required to upper-bound $\frac{d\Phi_{ua}}{dt}$ for a processor u. To compute the upper bound, consider that a lagging job $j_{i}$ on u is executed at the rate $s_{ua}\cdot\frac{imp_{u}(j_{i})}{\sum_{k=1}^{n_{ua}} imp_{u}(j_{k})}$, i.e., $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$; therefore, $\omega_{i}$ decreases at the rate $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$.
$\frac{d\Phi_{ua}}{dt} = -\sum_{i=1}^{lg_u}\left(\sum_{k=1}^{i} imp_{u}(j_{k})\right)^{1-2\delta}\cdot\left(s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}\right)$
As only one job executes on a processor, $\frac{imp_{u}(j_{i})}{imp_{ua}} = 1$ and $lg_{u} = i = 1$, so
$\frac{d\Phi_{ua}}{dt} = -\left(imp_{lgu}^{1-2\delta}\right)\cdot s_{ua} = -\left(s_{ua}\cdot imp_{lgu}^{1-2\delta}\right)$
Lemma 3.
If $n_{a} \le m$ and $imp_{lgu} > \eta^{\alpha}$, then
(a) $\frac{d\Phi_{uo}}{dt} \le \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}$; (b) $\frac{d\Phi_{ua}}{dt} = -\frac{(1+\frac{\Delta}{3m})}{(1-\delta)}\cdot imp_{lgu}$
Proof.
If $n_{a} \le m$, then every processor executes not more than one job, i.e., every job is processed on an individual processor.
(a) It is required to upper-bound $\frac{d\Phi_{uo}}{dt}$ for a processor u. To calculate the upper bound, the worst case is considered, which occurs if Opt executes a job on u with the largest coefficient $c_{lgu} = \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}\cdot\eta^{-1}$. In this case, $\omega_{i}$ increases at the rate $s_{uo}$ (because of Opt on u), where $s_{uo} \le \eta$. The number of lagging jobs on any such u can be at most one.
$\frac{d\Phi_{uo}}{dt} \le c_{lgu}\cdot s_{uo} \le c_{lgu}\cdot\eta = \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}\cdot\eta^{-1}\cdot\eta \ \Rightarrow\ \frac{d\Phi_{uo}}{dt} \le \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}$
(b) Next, it is required to upper-bound $\frac{d\Phi_{ua}}{dt}$ for a processor u. To compute the upper bound, consider that a lagging job $j_{i}$ on u is executed at the rate $s_{ua}\cdot\frac{imp_{u}(j_{i})}{\sum_{k=1}^{n_{ua}} imp_{u}(j_{k})}$, i.e., $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$; therefore, $\omega_{i}$ decreases at the rate $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$. Since $imp_{ua} \ge imp_{lgu} > \eta^{\alpha}$, we have $s_{ua} = (1+\frac{\Delta}{3m})\cdot\eta$.
$\frac{d\Phi_{ua}}{dt} = -\sum_{i=1}^{lg_u}\left(\frac{1}{1-\delta}\right)\cdot\left(\sum_{k=1}^{i} imp_{u}(j_{k})\cdot\eta^{-1}\right)\cdot\left(s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}\right)$
As only one job executes on a processor, $\frac{imp_{u}(j_{i})}{imp_{ua}} = 1$ and $lg_{u} = i = 1$, so
$\frac{d\Phi_{ua}}{dt} = -\left(\frac{1}{1-\delta}\right)\cdot\left(imp_{lgu}\cdot\eta^{-1}\right)\cdot s_{ua} = -\left(\frac{1}{1-\delta}\right)\cdot\left(s_{ua}\cdot imp_{lgu}\cdot\eta^{-1}\right) = -\left(\frac{1}{1-\delta}\right)\cdot\left(\left(1+\frac{\Delta}{3m}\right)\cdot\eta\cdot imp_{lgu}\cdot\eta^{-1}\right)$
$\frac{d\Phi_{ua}}{dt} = -\frac{(1+\frac{\Delta}{3m})}{(1-\delta)}\cdot imp_{lgu}$
Lemma 4.
If $n_{a} > m$ and $imp_{lgu} \le \eta^{\alpha}$, then
(a) $\frac{d\Phi_{uo}}{dt} \le \frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}$; (b) $\frac{d\Phi_{ua}}{dt} \le -\frac{s_{ua}}{(2-2\delta)}\cdot\left(\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}\right)$
Proof. If $n_{a} > m$, then:
(a) It is required to upper-bound $\frac{d\Phi_{uo}}{dt}$ for a processor u. To calculate the upper bound, the worst case is considered, which occurs if Opt is executing a job on u with the largest coefficient $c_{lgu} = imp_{lgu}^{1-2\delta}$. In this case, $\omega_{i}$ increases at the rate $s_{uo}$ (because of Opt on u).
$\frac{d\Phi_{uo}}{dt} \le c_{lgu}\cdot s_{uo} = imp_{lgu}^{1-2\delta}\cdot s_{uo}$
Using Young's inequality, Lemma 1 (Equation (6)), in (12) with $A = s_{uo}$, $B = imp_{lgu}^{1-2\delta}$, $x = \alpha$ and $y = \frac{1}{1-2\delta}$, we have:
$\frac{d\Phi_{uo}}{dt} \le \frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}$
(b) Next, it is required to upper-bound $\frac{d\Phi_{ua}}{dt}$ for a processor u. To compute the upper bound, consider that a lagging job $j_{i}$ on u is executed at the rate $s_{ua}\cdot\frac{imp_{u}(j_{i})}{\sum_{k=1}^{n_{ua}} imp_{u}(j_{k})}$, i.e., $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$; therefore, $\omega_{i}$ decreases at the rate $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$. To make the discussion straightforward, let $h_{ui} = \sum_{k=1}^{i} imp_{u}(j_{k})$, $h_{u0} = 0$, $h_{u,lg_u} = imp_{lgu}$ and $imp_{u}(j_{i}) = h_{ui} - h_{u,i-1}$. By using Equation (3):
$\frac{d\Phi_{ua}}{dt} = -\sum_{i=1}^{lg_u}\left(\sum_{k=1}^{i} imp_{u}(j_{k})\right)^{1-2\delta}\cdot\left(s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}\right) = -\frac{s_{ua}}{imp_{ua}}\sum_{i=1}^{lg_u}(h_{ui})^{1-2\delta}\cdot(h_{ui} - h_{u,i-1}) \le -\frac{s_{ua}}{imp_{ua}}\sum_{i=1}^{lg_u}\int_{h_{u,i-1}}^{h_{ui}} f^{1-2\delta}\,df = -\frac{s_{ua}}{imp_{ua}}\int_{0}^{h_{u,lg_u}} f^{1-2\delta}\,df = -\frac{s_{ua}}{imp_{ua}}\cdot\frac{h_{u,lg_u}^{2-2\delta}}{(2-2\delta)} = -\frac{s_{ua}}{imp_{ua}}\cdot\frac{imp_{lgu}^{2-2\delta}}{(2-2\delta)}$
$\frac{d\Phi_{ua}}{dt} \le -\frac{s_{ua}}{(2-2\delta)}\cdot\left(\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}\right)$
Lemma 5.
If $n_{a} > m$ and $imp_{lgu} > \eta^{\alpha}$, then
(a) $\frac{d\Phi_{uo}}{dt} \le \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}$; (b) $\frac{d\Phi_{ua}}{dt} \le -\frac{(1+\frac{\Delta}{3m})}{(2-2\delta)}\cdot\left(\frac{imp_{lgu}^{2}}{imp_{ua}}\right)$
Proof.
If $n_{a} > m$, then:
(a) It is required to upper-bound $\frac{d\Phi_{uo}}{dt}$ for a processor u. To calculate the upper bound, the worst case is considered, which occurs if Opt executes a job on u with the largest coefficient $c_{lgu} = \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}\cdot\eta^{-1}$ (as $imp_{ua} \ge imp_{lgu} > \eta^{\alpha}$). In this case, $\omega_{i}$ increases at the rate $s_{uo}$ (because of Opt on u).
$\frac{d\Phi_{uo}}{dt} \le c_{lgu}\cdot s_{uo} = \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}\cdot\eta^{-1}\cdot s_{uo} \le \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}\cdot\eta^{-1}\cdot\eta \quad \{s_{uo} \le \eta\}$
$\frac{d\Phi_{uo}}{dt} \le \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}$
(b) Next, it is required to upper-bound $\frac{d\Phi_{ua}}{dt}$ for a processor u. To compute the upper bound, consider that a lagging job $j_{i}$ on u is executed at the rate $s_{ua}\cdot\frac{imp_{u}(j_{i})}{\sum_{k=1}^{n_{ua}} imp_{u}(j_{k})}$, i.e., $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$; therefore, $\omega_{i}$ decreases at the rate $s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}$. To make the discussion uncomplicated, let $h_{ui} = \sum_{k=1}^{i} imp_{u}(j_{k})$, $h_{u0} = 0$, $h_{u,lg_u} = imp_{lgu} > \eta^{\alpha}$, $imp_{ua} \ge imp_{lgu} > \eta^{\alpha}$ and $imp_{u}(j_{i}) = h_{ui} - h_{u,i-1}$. Let $z < lg_{u}$ be the largest integer such that $h_{uz} \le \eta^{\alpha}$. Using Equation (3):
$\frac{d\Phi_{ua}}{dt} = -\sum_{i=1}^{lg_u} c_{i}\cdot\left(s_{ua}\cdot\frac{imp_{u}(j_{i})}{imp_{ua}}\right) = -\frac{s_{ua}}{imp_{ua}}\cdot\left(\sum_{i=1}^{z} imp_{u}(j_{i})\cdot(h_{ui})^{1-2\delta} + \sum_{i=z+1}^{lg_u}\left(\frac{1}{1-\delta}\right)\cdot imp_{u}(j_{i})\cdot h_{ui}\cdot\eta^{-1}\right) \le -\frac{s_{ua}}{imp_{ua}}\cdot\left(\int_{0}^{h_{uz}} f^{1-2\delta}\,df + \left(\frac{1}{1-\delta}\right)\cdot\eta^{-1}\int_{h_{uz}}^{h_{u,lg_u}} f\,df\right) = -\frac{s_{ua}}{imp_{ua}}\cdot\left(\frac{h_{uz}^{2-2\delta}}{(2-2\delta)} + \frac{h_{u,lg_u}^{2}-h_{uz}^{2}}{(2-2\delta)\cdot\eta}\right) = -\frac{s_{ua}}{imp_{ua}}\cdot\left(\frac{h_{uz}^{2}}{(2-2\delta)\,h_{uz}^{1/\alpha}} + \frac{h_{u,lg_u}^{2}-h_{uz}^{2}}{(2-2\delta)\cdot\eta}\right) \le -\frac{(1+\frac{\Delta}{3m})\cdot\eta}{imp_{ua}}\cdot\left(\frac{h_{uz}^{2}}{(2-2\delta)\,\eta} + \frac{h_{u,lg_u}^{2}-h_{uz}^{2}}{(2-2\delta)\cdot\eta}\right) \quad \{h_{uz} \le \eta^{\alpha}\} = -\frac{(1+\frac{\Delta}{3m})}{imp_{ua}}\cdot\frac{h_{u,lg_u}^{2}}{(2-2\delta)} = -\frac{(1+\frac{\Delta}{3m})}{(2-2\delta)}\cdot\left(\frac{imp_{lgu}^{2}}{imp_{ua}}\right)$
$\frac{d\Phi_{ua}}{dt} \le -\frac{(1+\frac{\Delta}{3m})}{(2-2\delta)}\cdot\left(\frac{imp_{lgu}^{2}}{imp_{ua}}\right)$
Lemma 6. At any time t at which $\Phi$ undergoes no discrete change, $\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\frac{dG_{uo}}{dt}$, where $c = \left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)$. Assume that $\gamma = \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)$.
Proof.
The analysis is divided into two cases based on $n_{a} > m$ or $n_{a} \le m$; each case is further divided into sub-cases depending on whether $imp_{ua} > \eta^{\alpha}$ or $imp_{ua} \le \eta^{\alpha}$ and whether $imp_{lgu} > \eta^{\alpha}$ or $imp_{lgu} \le \eta^{\alpha}$; afterwards, each sub-case is again divided into two sub-cases depending on whether $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ or $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, where $0 < \mu = \frac{3}{3+\Delta} < 1$ and $\Delta = \frac{1}{3\alpha}$. As a job in MBS which is not lagging must be an active job in Opt,
$imp_{uo} \ge imp_{ua} - imp_{lgu} \ge imp_{ua} - (imp_{ua} - \mu\cdot imp_{ua}) = \mu\cdot imp_{ua} \ \Rightarrow\ imp_{ua} \le \frac{imp_{uo}}{\mu}$
$\mu = \frac{3}{3+\Delta}$
$\gamma = \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)$
$c = \left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)$
Case I: When $n_{a} \le m$ and $imp_{ua} \le \eta^{\alpha}$; since $imp_{lgu} \le imp_{ua}$ we have $imp_{lgu} \le \eta^{\alpha}$, and $s_{ua}(t) = (1+\frac{\Delta}{3m})\cdot\min(imp_{ua}^{1/\alpha},\ \eta) = (1+\frac{\Delta}{3m})\cdot imp_{ua}^{1/\alpha}$.
(a) If $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ because of Opt and MBS is $\frac{d\Phi_{u}}{dt} = \frac{d\Phi_{uo}}{dt} + \frac{d\Phi_{ua}}{dt}$.
(using Equations (8) and (9))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right) - \left(s_{ua}\cdot imp_{lgu}^{1-2\delta}\right)$
(by using Equations (1) and (21))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt}$
$\le \left(imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right) - \left(s_{ua}\cdot imp_{lgu}^{1-2\delta}\right)\right)\right)$
$= \left(imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{lgu} - \gamma\cdot\left(1+\frac{\Delta}{3m}\right)\cdot imp_{ua}^{1/\alpha}\cdot imp_{lgu}^{1-2\delta}\right)$
$\le \left(\frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\cdot imp_{ua} + \gamma\cdot(1-2\delta)\cdot imp_{ua} - \gamma\cdot\left(1+\frac{\Delta}{3m}\right)\cdot imp_{ua}^{1/\alpha}\cdot\left(\left(1-\frac{3}{3+\Delta}\right)\cdot imp_{ua}\right)^{1-2\delta}\right)$
$= \left(\frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot(1-2\delta) - \gamma\cdot\left(1+\frac{\Delta}{3m}\right)\cdot\left(\frac{\Delta}{3+\Delta}\right)^{1-2\delta}\right)\right)$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma - \gamma\cdot\left(\frac{\Delta}{3+\Delta}\right)^{1-2\delta}\right)$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma - \gamma\cdot\left(\frac{\Delta}{3+\Delta}\right)\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(\frac{3}{3+\Delta}\right)\right)$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$ (by using Equation (19))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(\frac{17}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$ (by using Equation (17))
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{uo}\cdot\left(\frac{17}{16}\cdot\left(1+\frac{\Delta}{3}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$ (by using Equation (18))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{uo}\cdot\left(\left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$ (by using Equation (20))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{uo}\cdot c$
Since $c = \left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)$ and $\gamma = \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)$, we have
$c = \left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot 16\gamma \ \Rightarrow\ \frac{\gamma}{c} = \frac{1}{18+6\Delta} < 1 \ \Rightarrow\ \gamma < c$
Since $\gamma < c$ and $\alpha > 1 \ \Rightarrow\ 1 > \frac{1}{\alpha} \ \Rightarrow\ \frac{\gamma}{\alpha} < c$
(by using Equation (23) in Equation (22))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot s_{uo}^{\alpha} + c\cdot imp_{uo} = c\cdot\left(s_{uo}^{\alpha} + imp_{uo}\right) = c\cdot\frac{dG_{uo}}{dt}$
Hence the running condition is fulfilled for $n_{a} \le m$, $imp_{ua} \le \eta^{\alpha}$, $imp_{lgu} \le \eta^{\alpha}$, $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ and $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$.
(b) If $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ because of Opt and MBS depends on $\frac{d\Phi_{uo}}{dt}$, since $\frac{d\Phi_{ua}}{dt} \le 0$.
(by using Equation (8))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right)$
(by using Equations (1) and (24))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right)$
$= imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{lgu}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \gamma\cdot(1-2\delta)\cdot imp_{ua}$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot(1-2\delta)\right)$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot(1-2\delta)\right)$ (by using Equation (17))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\frac{1}{\mu}\cdot\left(\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right) + \gamma\right)\right)\cdot imp_{uo}$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\frac{1}{\mu}\cdot\left(\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right) + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\right)\cdot imp_{uo}$ (by using Equation (19))
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\frac{17}{16}\cdot\left(1+\frac{\Delta}{3}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$ (by using Equation (18))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$
$\le c\cdot s_{uo}^{\alpha} + c\cdot imp_{uo}$ (by using Equations (18) and (23))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\left(s_{uo}^{\alpha} + imp_{uo}\right)$
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\frac{dG_{uo}}{dt}$
Hence the running condition is satisfied for $n_{a} \le m$, $imp_{ua} \le \eta^{\alpha}$, $imp_{lgu} \le \eta^{\alpha}$, $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ and $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$.
Case II: When $n_{a} \le m$, $imp_{ua} > \eta^{\alpha}$, $imp_{lgu} \le \eta^{\alpha}$, and $s_{ua}(t) = (1+\frac{\Delta}{3m})\cdot\min(imp_{ua}^{1/\alpha},\ \eta) = (1+\frac{\Delta}{3m})\,\eta$.
(a) If $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ because of Opt and MBS is $\frac{d\Phi_{u}}{dt} = \frac{d\Phi_{uo}}{dt} + \frac{d\Phi_{ua}}{dt}$.
(by using Equations (8) and (9))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right) - \left(s_{ua}\cdot imp_{lgu}^{1-2\delta}\right)$
(by using Equations (1) and (25))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt}$
$\le imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right) - \left(s_{ua}\cdot imp_{lgu}^{1-2\delta}\right)\right)$
$= imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot\eta^{\alpha} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{lgu} - \gamma\cdot\left(1+\frac{\Delta}{3m}\right)\cdot\eta\cdot imp_{lgu}^{1-2\delta}$
$\le imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{ua} - \gamma\cdot\left(1+\frac{\Delta}{3m}\right)\cdot imp_{lgu}^{2\delta}\cdot imp_{lgu}^{1-2\delta}$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot(1-2\delta)\right)\cdot imp_{ua} - \gamma\cdot\left(1+\frac{\Delta}{3m}\right)\cdot imp_{lgu}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right)\cdot imp_{ua} - \gamma\cdot\left(1+\frac{\Delta}{3m}\right)\cdot\left(1-\frac{3}{3+\Delta}\right)\cdot imp_{ua}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma - \gamma\cdot\left(\frac{\Delta}{3+\Delta}\right)\right)\cdot imp_{ua}$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(\frac{3}{3+\Delta}\right)\right)\cdot imp_{ua}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right)\cdot imp_{ua}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot\frac{imp_{uo}}{\mu}$ (by using Equations (17) and (19))
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\frac{17}{16}\cdot\left(1+\frac{\Delta}{3}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$ (by using Equation (18))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$ (by using Equations (20) and (23))
$\le c\cdot s_{uo}^{\alpha} + c\cdot imp_{uo}$
$= c\cdot\left(s_{uo}^{\alpha} + imp_{uo}\right)$
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\frac{dG_{uo}}{dt}$
Hence the running condition is fulfilled for $n_{a} \le m$, $imp_{ua} > \eta^{\alpha}$, $imp_{lgu} \le \eta^{\alpha}$, $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ and $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$.
(b) If $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ because of Opt and MBS depends on $\frac{d\Phi_{uo}}{dt}$, since $\frac{d\Phi_{ua}}{dt} \le 0$. (by using Equation (7))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right)$
(by using Equations (1) and (26))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right)$
$= imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot\eta^{\alpha} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{lgu}$
$\le imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{lgu}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \gamma\cdot(1-2\delta)\cdot imp_{ua}$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot(1-2\delta)\right)$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right)$ (by using Equation (17))
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot\frac{imp_{uo}}{\mu}$ (by using Equation (19))
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\frac{17}{16}\cdot\left(1+\frac{\Delta}{3}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$ (by using Equation (18))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$
$\le c\cdot s_{uo}^{\alpha} + c\cdot imp_{uo}$ (by using Equations (20) and (23))
$= c\cdot\left(s_{uo}^{\alpha} + imp_{uo}\right)$
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\frac{dG_{uo}}{dt}$
Hence the running condition is satisfied for $n_{a} \le m$, $imp_{ua} > \eta^{\alpha}$, $imp_{lgu} \le \eta^{\alpha}$, $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ and $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$.
Case III: When $n_{a} \le m$, $imp_{ua} > \eta^{\alpha}$, $imp_{lgu} > \eta^{\alpha}$, and $s_{ua}(t) = (1+\frac{\Delta}{3m})\cdot\min(imp_{ua}^{1/\alpha},\ \eta) = (1+\frac{\Delta}{3m})\,\eta$.
(a) If $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ because of Opt and MBS is $\frac{d\Phi_{u}}{dt} = \frac{d\Phi_{uo}}{dt} + \frac{d\Phi_{ua}}{dt}$.
(by using Equations (10) and (11))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu} - \frac{(1+\frac{\Delta}{3m})}{(1-\delta)}\cdot imp_{lgu}$
(by using Equations (1) and (27))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt}$
$\le imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\left(\frac{1}{1-\delta}\right)\cdot imp_{lgu} - \frac{(1+\frac{\Delta}{3m})}{(1-\delta)}\cdot imp_{lgu}\right)$
$= imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot\eta^{\alpha} + \gamma\cdot\left(\frac{1}{1-\delta}\right)\cdot imp_{lgu} - \gamma\cdot\frac{(1+\frac{\Delta}{3m})}{(1-\delta)}\cdot imp_{lgu}$
$\le imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \gamma\cdot\left(\frac{1}{1-\delta}\right)\cdot imp_{ua} - \gamma\cdot\frac{(1+\frac{\Delta}{3m})}{(1-\delta)}\cdot\left(imp_{ua} - \left(\frac{3}{3+\Delta}\right)\cdot imp_{ua}\right)$
$= imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(\frac{1}{1-\delta}\right) - \gamma\cdot\left(\frac{1}{1-\delta}\right)\cdot\left(1+\frac{\Delta}{3m}\right)\cdot\left(\frac{\Delta}{3+\Delta}\right)\right)$
$\le imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(\frac{1}{1-\delta}\right) - \gamma\cdot\left(\frac{1}{1-\delta}\right)\cdot\left(\frac{\Delta}{3+\Delta}\right)\right)$
$= imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(\frac{1}{1-\delta}\right)\cdot\left(\frac{3}{3+\Delta}\right)\right)$
$\le \frac{imp_{uo}}{\mu}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\cdot\left(\frac{1}{1-\delta}\right)\right)$ (by using Equations (17) and (19))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{imp_{uo}}{\mu}\cdot\left(\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right) + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\cdot\left(\frac{2\alpha}{2\alpha-1}\right)\right)$
Since $\alpha > 1 \ \Rightarrow\ \frac{2\alpha}{2\alpha-1} = \frac{2\alpha-1+1}{2\alpha-1} = 1 + \frac{1}{2\alpha-1} < 2 \ \Rightarrow\ 1 < \frac{2\alpha}{2\alpha-1} < 2$
(by using Equations (29) and (28))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{imp_{uo}}{\mu}\cdot\left(\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right) + \frac{2}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right) = \left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\cdot imp_{uo}$ (by using Equation (18)) $= c\cdot imp_{uo}$ (by using Equation (20)) $\le c\cdot\left(s_{uo}^{\alpha} + imp_{uo}\right)$, hence $\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\frac{dG_{uo}}{dt}$
Hence the running condition is fulfilled for $n_{a} \le m$, $imp_{ua} > \eta^{\alpha}$, $imp_{lgu} > \eta^{\alpha}$, $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ and $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$.
(b) If $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ because of Opt and MBS depends on $\frac{d\Phi_{uo}}{dt}$, since $\frac{d\Phi_{ua}}{dt} \le 0$.
(by using Equation (10))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{1}{1-\delta}\right)\cdot imp_{lgu}$
(by using Equations (1) and (30))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\frac{1}{1-\delta}\right)\cdot imp_{lgu} = imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot\eta^{\alpha} + \gamma\cdot\left(\frac{1}{1-\delta}\right)\cdot imp_{lgu} \le imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \gamma\cdot\left(\frac{1}{1-\delta}\right)\cdot imp_{ua} = imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(\frac{2\alpha}{2\alpha-1}\right)\right) \le \frac{imp_{uo}}{\mu}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + 2\gamma\right)$ (by using Equations (17) and (29)) $= \frac{imp_{uo}}{\mu}\cdot\left(\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right) + \frac{2}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$ (by using Equation (19)) $= \left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\cdot imp_{uo}$ (by using Equation (18)) $= c\cdot imp_{uo}$ (by using Equation (20)) $\le c\cdot\left(s_{uo}^{\alpha} + imp_{uo}\right)$, hence $\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\frac{dG_{uo}}{dt}$
Hence the running condition is satisfied if $n_{a} \le m$, $imp_{ua} > \eta^{\alpha}$, $imp_{lgu} > \eta^{\alpha}$, $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, for $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$.
Case IV: When $n_{a} > m$ and $imp_{ua} \le \eta^{\alpha}$; since $imp_{lgu} \le imp_{ua}$ we have $imp_{lgu} \le \eta^{\alpha}$, and $s_{ua}(t) = (1+\frac{\Delta}{3m})\cdot\min(imp_{ua}^{1/\alpha},\ \eta) = (1+\frac{\Delta}{3m})\cdot imp_{ua}^{1/\alpha}$.
(a) If $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ because of Opt and MBS is $\frac{d\Phi_{u}}{dt} = \frac{d\Phi_{uo}}{dt} + \frac{d\Phi_{ua}}{dt}$.
(by using Equations (13) and (14))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right) - \left(\frac{s_{ua}}{(2-2\delta)}\cdot\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}\right)$
(by using Equations (1) and (31))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt}$
$\le imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right) - \left(\frac{s_{ua}}{(2-2\delta)}\cdot\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}\right)\right)$
$= imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{lgu} - \gamma\cdot\frac{(1+\frac{\Delta}{3m})\,imp_{ua}^{1/\alpha}}{(2-2\delta)}\cdot\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\cdot imp_{ua} + \gamma\cdot imp_{ua} - \gamma\cdot\frac{(1+\frac{\Delta}{3m})\,imp_{ua}^{1/\alpha}}{(2-2\delta)}\cdot\frac{\left(imp_{ua} - \left(\frac{3}{3+\Delta}\right)\cdot imp_{ua}\right)^{2-2\delta}}{imp_{ua}}$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma - \gamma\cdot\frac{(1+\frac{\Delta}{3m})}{(2-2\delta)}\cdot\left(\frac{\Delta}{3+\Delta}\right)^{2-2\delta}\right)$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma - \gamma\cdot\frac{\left(\frac{\Delta}{3+\Delta}\right)^{2-2\delta}}{(2-2\delta)}\right)$
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma - \gamma\cdot\frac{\left(\frac{\Delta}{3+\Delta}\right)^{2}}{(2-2\delta)}\right)$
Since $\alpha > 1 \ \Rightarrow\ \frac{2\alpha}{2\alpha-1} = \frac{2\alpha-1+1}{2\alpha-1} = 1 + \frac{1}{2\alpha-1} > 1 \ \Rightarrow\ \frac{1}{2-2\delta} = \frac{\alpha}{2\alpha-1} = \frac{1}{2}\cdot\left(\frac{2\alpha}{2\alpha-1}\right) > \frac{1}{2}$
(by using Equations (32) and (33))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma - \frac{\gamma}{2}\cdot\left(\frac{\Delta}{3+\Delta}\right)^{2}\right) = \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(1 - \frac{1}{2}\cdot\left(\frac{\Delta}{3+\Delta}\right)^{2}\right)\right) = \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(\frac{\Delta^{2}+12\Delta+18}{2\Delta^{2}+12\Delta+18}\right)\right) \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right)$
(by using Equations (17) and (19))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(\frac{17}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\frac{17}{16}\cdot\left(1+\frac{\Delta}{3}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$ (by using Equation (18))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$
$\le c\cdot s_{uo}^{\alpha} + c\cdot imp_{uo}$ (by using Equations (20) and (23))
$= c\cdot\left(s_{uo}^{\alpha} + imp_{uo}\right)$
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\frac{dG_{uo}}{dt}$
Hence the running condition is fulfilled for $n_{a} > m$, $imp_{ua} \le \eta^{\alpha}$, $imp_{lgu} \le \eta^{\alpha}$, $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ and $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$.
(b) If $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ because of Opt and MBS depends on $\frac{d\Phi_{uo}}{dt}$, since $\frac{d\Phi_{ua}}{dt} \le 0$.
(by using Equation (13))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right)$
(by using Equations (1) and (34))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right) = imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{lgu} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \gamma\cdot(1-2\delta)\cdot imp_{ua} = \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot(1-2\delta)\right) \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right)$
(by using Equations (17) and (19))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(\frac{17}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\frac{17}{16}\cdot\left(1+\frac{\Delta}{3}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$ (by using Equation (18))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$
$\le c\cdot s_{uo}^{\alpha} + c\cdot imp_{uo}$ (by using Equations (20) and (23))
$= c\cdot\left(s_{uo}^{\alpha} + imp_{uo}\right)$
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\frac{dG_{uo}}{dt}$
Hence the running condition is satisfied for $n_{a} > m$, $imp_{ua} \le \eta^{\alpha}$, $imp_{lgu} \le \eta^{\alpha}$, $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ and $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$.
Case V: When $n_{a} > m$, $imp_{ua} > \eta^{\alpha}$, $imp_{lgu} \le \eta^{\alpha}$, and $s_{ua}(t) = (1+\frac{\Delta}{3m})\cdot\min(imp_{ua}^{1/\alpha},\ \eta) = (1+\frac{\Delta}{3m})\,\eta$.
(a) If $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ because of Opt and MBS is $\frac{d\Phi_{u}}{dt} = \frac{d\Phi_{uo}}{dt} + \frac{d\Phi_{ua}}{dt}$.
(by using Equations (13) and (14))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right) - \left(\frac{s_{ua}}{(2-2\delta)}\cdot\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}\right)$
(by using Equations (1) and (35))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt}$
$\le imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right) - \left(\frac{s_{ua}}{(2-2\delta)}\cdot\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}\right)\right)$
$= imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot\eta^{\alpha} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{lgu} - \gamma\cdot\frac{(1+\frac{\Delta}{3m})\,\eta}{(2-2\delta)}\cdot\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}$
$\le imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{ua} - \gamma\cdot\frac{(1+\frac{\Delta}{3m})\,\eta}{(2-2\delta)}\cdot\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right) - \gamma\cdot\frac{(1+\frac{\Delta}{3m})\,imp_{lgu}^{1/\alpha}}{(2-2\delta)}\cdot\frac{imp_{lgu}^{2-2\delta}}{imp_{ua}}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right) - \frac{\gamma}{(2-2\delta)}\cdot\left(1-\frac{3}{3+\Delta}\right)^{2}\cdot\frac{imp_{ua}^{2}}{imp_{ua}}$
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma - \frac{\gamma}{(2-2\delta)}\cdot\left(\frac{\Delta}{3+\Delta}\right)^{2}\right)$
(by using Equations (36) and (33))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma - \frac{\gamma}{2}\cdot\left(\frac{\Delta}{3+\Delta}\right)^{2}\right) = \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(1 - \frac{1}{2}\cdot\left(\frac{\Delta}{3+\Delta}\right)^{2}\right)\right) = \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\cdot\left(\frac{\Delta^{2}+12\Delta+18}{2\Delta^{2}+12\Delta+18}\right)\right) \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right)$
(by using Equations (17) and (19))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(\frac{17}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\frac{17}{16}\cdot\left(1+\frac{\Delta}{3}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$ (by using Equation (18))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$
$\le c\cdot s_{uo}^{\alpha} + c\cdot imp_{uo}$ (by using Equations (20) and (23))
$= c\cdot\left(s_{uo}^{\alpha} + imp_{uo}\right)$
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le c\cdot\frac{dG_{uo}}{dt}$
Hence the running condition is fulfilled for $n_{a} > m$, $imp_{ua} > \eta^{\alpha}$, $imp_{lgu} \le \eta^{\alpha}$, $imp_{lgu} > (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$ and $c = (\frac{9}{8}+\frac{3\Delta}{8})\cdot(1+(1+\frac{\Delta}{3m})^{\alpha})$.
(b) If $imp_{lgu} \le (imp_{ua} - (\frac{3}{3+\Delta})\cdot imp_{ua})$, then the total RoC of $\Phi$ due to Opt and MBS depends on $\frac{d\Phi_{uo}}{dt}$, since $\frac{d\Phi_{ua}}{dt} \le 0$.
(by using Equation (13))
$\frac{d\Phi_{u}}{dt} \le \left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right)$
(by using Equations (1) and (37))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le imp_{ua} + s_{ua}^{\alpha} + \gamma\cdot\left(\frac{s_{uo}^{\alpha}}{\alpha} + (1-2\delta)\cdot imp_{lgu}\right)$
$= imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot\eta^{\alpha} + \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \gamma\cdot(1-2\delta)\cdot imp_{lgu}$
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua} + \left(1+\frac{\Delta}{3m}\right)^{\alpha}\cdot imp_{ua} + \gamma\cdot imp_{ua}$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + imp_{ua}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \gamma\right)$
(by using Equations (17) and (19))
$\frac{dG_{ua}}{dt} + \gamma\cdot\frac{d\Phi_{u}}{dt} \le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha} + \frac{1}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \frac{imp_{uo}}{\mu}\cdot\left(\frac{17}{16}\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)$
$= \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\frac{17}{16}\cdot\left(1+\frac{\Delta}{3}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$ (by using Equation (18))
$\le \frac{\gamma}{\alpha}\cdot s_{uo}^{\alpha} + \left(\left(\frac{9}{8}+\frac{3\Delta}{8}\right)\cdot\left(1+\left(1+\frac{\Delta}{3m}\right)^{\alpha}\right)\right)\cdot imp_{uo}$
$\le c\cdot s_{uo}^{\alpha} + c\cdot imp_{uo}$ (by using Equations (20) and (23))
d G u