ProMo: A Probabilistic Model for Dynamic Load-Balanced Scheduling of Data Flows in Cloud Systems

Abstract: An important issue in cloud computing is the balanced flow of data in big data centers, which typically transfer huge amounts of data. Thus, it is crucial to achieve dynamic, load-balanced data flow distributions that take into account possible changes of state in the network. A number of scheduling techniques for achieving load balancing have therefore been proposed. To the best of the author's knowledge, there is no tool that can be used independently of the algorithm at hand to model a proposed system (network topology, linking, and scheduling algorithm) and apply its own probability-based parameters to test it for good balancing and scheduling performance. In this paper, a new Probabilistic Model (ProMo) for data flows is proposed, which can be used independently with a number of techniques to test the most important parameters that determine good load balancing and scheduling performance in the network. In this work, ProMo is used to test two well-known dynamic data flow scheduling schemes, and the experimental results verify that it is indeed suitable for testing the performance of load-balanced scheduling algorithms.


Introduction
One of the main challenges in today's networking is the efficient scheduling of data flows among the numerous servers that constitute a data center. Contemporary big data networks [1][2][3] typically transfer vast amounts of data from machine to machine, so a load-balanced transfer scheme is necessary to implement these transfers as efficiently as possible [4][5][6][7]. Dynamic load-balanced scheduling is the task of distributing the traffic within a network (or among its links) as evenly as possible, while keeping in mind the overall network state at any time or, at least, at specified time intervals. Recently, the trend in scheduling load-balanced data flows is to use software-defined networking schemes; OpenFlow is one such example [8][9][10], where the load-balanced flow control is programmable.
While scheduling the flows, it is important to take the dynamic network changes into consideration; in other words, it is important to consider the state of the network. This is because, while new data flows keep arriving at different (or even similar) rates between data servers, some links may collapse and become unavailable for some time [11], meaning that some data flows may need to be re-transmitted to a different server or servers, thus changing the amount of load stored and processed by each machine. The distributed workload therefore fluctuates, and this necessitates time- or period-based scheduling [12]. In this regard, static flow scheduling approaches become rather unsuitable for big data flows. During each period, the current workload and the link states are considered, and new scheduling decisions may or may not be taken.
The problem of scheduling big data flows in a distributed-resource environment is NP-hard, so heuristics are employed, most of which aim at maximizing the network throughput. The main contributions of this work are the following:
1. The proposed model can be used independently with existing scheduling strategies for testing;
2. The proposed model is simple, so its use does not add any overhead to a scheduling scheme;
3. The model can be the basis for the development of a complete simulator of data flow scheduling algorithms, which can be used for testing different algorithms, topologies, and network parameters;
4. The model can also become the basis for the development of a new scheduling strategy;
5. It is a time-interval-based model, so it is particularly useful in testing dynamic scheduling algorithms.
The remainder of this work is organized as follows: Section 2 presents the most important related papers, focusing on dynamic algorithms. Section 3 first presents some preliminaries for the dynamic load-balanced scheduling of data flows in cloud systems and then describes the probabilistic model in detail. In Section 4, experimental results are presented to show that the model indeed fits existing scheduling algorithms. Section 5 concludes the paper and outlines directions for future work.

Related Work
Recently, the problem of flow scheduling has attracted considerable attention. The techniques available are mainly classified as static and dynamic. Unlike the dynamic approaches, the static ones schedule the flows based on their known characteristics, but scheduling cannot adapt to any change that may occur in the network. In this section, we will mainly focus our attention on dynamic strategies.
An interesting static approach was presented by Rodriguez et al. [13], who developed a scheduling technique based on Particle Swarm Optimization (PSO). This technique aims to minimize the overall flow execution cost while meeting deadline constraints (deadline based). PSO was inspired by the social behavior of bird flocks, and it is based on a swarm of particles that communicate as they move through the problem space to find an optimal search direction. A particle is an individual that moves through the defined problem space and represents a candidate solution to the optimization problem. Modeling is performed in two steps: (a) define how the problem is encoded, that is, the solution representation, and (b) define the fitness function, that is, how good a possible solution is. This function represents the objectives of the scheduling problem; it is minimized, and its value represents the total cost of execution. The results obtained showed improved performance compared to other strategies and, most importantly, that this technique meets deadlines as well as dynamic algorithms do, such as Scaling-Consolidation-Scheduling (SCS), a dynamic algorithm designed to schedule groups of flows, not just single ones.
In [9], the authors developed joint static and dynamic traffic scheduling approaches for data center networks, based on the observation that traffic in a data center is a mixture of static and rapidly-changing (dynamic) elements, so their scheduler combines both. Network dynamics have also been used for solving other important problems, like virus spreading and network attacks [14,15]. Another combination of static and dynamic algorithms was presented by Malawski et al. [16]. The authors aimed at maximizing the amount of work completed, defined as the number of executed data flows. Furthermore, their schemes try to meet QoS constraints like deadline and budget. The task execution time may vary based on a uniform distribution, and they employed a cost safety margin to avoid generating a schedule that goes over budget, which gives the scheme a degree of robustness.
As already stated in the Introduction, the static approaches are not an ideal solution in the big data era, and a good number of researchers have developed dynamic strategies. These strategies can be further categorized into two approaches: those that use traditional networking schemes and those that use software-defined networking schemes, where the overall network configuration and flow control are programmable. In the remainder of this section, strategies from both categories are presented. In the experimental results, ProMo was used to test two strategies, one from each category; the criteria behind this choice are explained in the Simulation Analysis section.

Traditional Networking Schemes
A number of interesting heuristics for dynamic data flow scheduling on traditional networks have been developed. Tsai et al. [17] presented a heuristic scheme called the Hyper-Heuristic Scheduling Algorithm (HHSA), which produces scheduling solutions for cloud computing systems. The proposed algorithm uses certain operators for diversity detection and improvement, to dynamically determine the appropriate heuristic in the search for better candidate solutions. By experimenting on CloudSim and Hadoop, the authors showed that the HHSA scheme significantly reduces the makespan of scheduling. Neely et al. [18] considered the problem where multiple devices make repeated decisions based on the events they observe. The strategy is time-interval based, and the observed events determine the values of a utility function and a set of penalty functions. The proposed strategy aims at making decisions at time intervals such that the utility function is maximized subject to the time constraints imposed by the penalties. Another interesting dynamic approach is Dynamic Randomized load-Balancing (DRB), suggested by Wang et al. [19]. The aim was to achieve network-wide low levels of path collisions through local-link adjustment, free of communication and cooperation between switches. First, a path selection scheme was used to allocate default paths to all source-destination pairs in a fat-tree network, and then, a Threshold-based Two-Choice (TTC) randomized technique was proposed to balance the traffic. Their simulation results showed that DRB in cooperation with the TTC technique achieved significant improvement over other randomized routing schemes designed for fat-tree networks. A fat-tree is a network topology where branches near the top of the hierarchy are "fatter" (or thicker) compared to the branches further down the hierarchy.
The branches represent the data links, and the thickness (bandwidth) of the data links is determined based on the technology-specific use of the network.

Software-Defined Networking Schemes
A number of researchers have focused on software-oriented data flow scheduling approaches, and the main example is OpenFlow. In OpenFlow, the control plane (decision, dissemination) is separated from the data plane (discovery, data), and this simplifies network management to a large extent. All the management and control functions (routing, flow identification, QoS, etc.) are taken over by the OpenFlow controller, which has a global view of the network and configures the switches to forward packets on the basis of a specified flow definition [20,21]. With OpenFlow, load-balancing scheduling techniques are programmable and have received considerable attention. In the remainder of this subsection, some of the important OpenFlow-based works are presented. Tang et al. [8] proposed a Dynamical Load-Balanced Scheduling (DLBS) approach for maximizing the network throughput while balancing the network load. They developed a set of efficient heuristic scheduling algorithms for the two typical OpenFlow network models, which balance data flows at regular time intervals (specifically, on a time-slot basis). Their experiments showed improvement over typical load-balanced scheduling algorithms like round-robin-based schemes, especially when the data flows are characterized by a large imbalance degree.
In [22], the authors addressed the inter-coflow scheduling problem with two different objectives: (a) to decrease the communication time of data-intensive jobs and (b) to achieve guaranteed predictable communication time. They introduced a system named Varys, which enables data-intensive frameworks to use co-flows with the proposed algorithms while the network is highly utilized and starvation is avoided. The simulations showed that the communication stages complete about three times faster. Furthermore, about twice as many co-flows manage to meet their deadlines when Varys is used, compared to other known co-flow management systems. SlickFlow [23] is another approach implemented with OpenFlow, which allows fast failure recovery by combining the source routing with alternative routing information carried in the packet header. The information (primary and alternative) regarding routing is encoded and placed in the packet header. When failures occur, the packets are rerouted towards the alternative routes by the switches, and the controller itself plays no role in this procedure.
Bolla et al. [24] proposed another OpenFlow system called OpenFlow in the Small (OFiS), designed to provide a hardware abstraction layer for heterogeneous multi-core systems, in order to meet application-specific requirements. The experimental results showed that OFiS manages to exploit hardware parallelism to a high degree. A more specialized application of OpenFlow was presented by Egilmez et al. [25], who developed a framework for dynamic rerouting of data flows, to enable dynamic QoS for scalable coded video streams. An OpenFlow v1.1 hardware-based forwarding plane was implemented and, in cooperation with a performance model, it helped choose a suitable mapping without the need to consider implementation details.
This work presents ProMo, a probabilistic mathematical model, which can be employed for testing existing schemes but can also become the basis for developing a network simulator and a new flow-scheduling strategy. Because of its timing characteristics, it can be used in conjunction with dynamic flow-scheduling schemes.

The ProMo Model for Dynamic Flow-Scheduling
This section describes the details of ProMo. First, some details regarding the three-layer fully populated network are presented, and then, the details of the model are described.

Three-Layer Fully-Populated Network
A Fully-Populated Network (FPN) topology has a number of switch layers and a host layer. The switches at the topmost layer are called the core switches; the switches at the second layer are the intermediate switches; and the switches at the third layer are the TOR (Top-Of-Rack) switches. An FPN example is shown in Figure 1a, where each TOR switch is directly connected to 4 end hosts. The switch ports can be divided into two classes [19]: up-ports and down-ports. The up-ports are used to connect to switches at a higher layer, while the down-ports are used for connections to switches or hosts at a lower layer. This means that the core switches have only down-ports. The up- and down-ports are shown in Figure 1b.
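As a concrete illustration of the fully-populated interconnection pattern, the topology can be sketched as follows; the switch and host counts are illustrative assumptions, not the ones of Figure 1.

```python
from itertools import product

def build_fpn(n_core=2, n_inter=4, n_tor=8, hosts_per_tor=4):
    """Links of a three-layer FPN: every switch connects to every switch
    of the adjacent layer, and each TOR switch serves its own hosts."""
    core = [f"c{i}" for i in range(n_core)]
    inter = [f"i{i}" for i in range(n_inter)]
    tor = [f"t{i}" for i in range(n_tor)]
    hosts = [f"h{i}" for i in range(n_tor * hosts_per_tor)]
    links = []
    links += list(product(core, inter))   # core down-ports to intermediates
    links += list(product(inter, tor))    # intermediate down-ports to TORs
    links += [(tor[i // hosts_per_tor], h) for i, h in enumerate(hosts)]
    return links

links = build_fpn()   # 2*4 + 4*8 + 8*4 = 72 links
```

Each pair `(u, d)` is a link from an up-port of the lower node `d` to a down-port of the upper node `u`; only the TOR-to-host links are not fully populated, since each host hangs off a single TOR switch.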

The ProMo Model
The ProMo model is based on the construction of a queuing system. Let L1−L3 represent the three switch layers and L4 represent the end-host layer. For the sake of simplicity, let each layer have a single queue of infinite size (or, at least, a large enough one). Another assumption necessary for the model is that each layer accepts data flows and that the service times at each layer (the time required to do some processing and forward the flow) are independent and follow the exponential distribution. This independence assumption is reasonable, since clouds are most of the time heterogeneous and integrate components from different vendors at different layers. The users are assumed to be an infinite source that produces large data flows following the Poisson distribution with rate λ. In this model, the somewhat abstract notion of a "data flow" is expressed as the number of jobs, N, submitted by the users over the network [26], and in this sense, λ can be considered a function of N (in the following, the terms "jobs" and "data flows" will be used interchangeably). Each layer Li is considered to have its own service rate µi (which includes all the necessary processing before forwarding the data). This service rate can be considered a function of the number of jobs, Ni, being serviced by layer Li. To link the reader to a broader area of interest, one can conceptually consider this model as related to multiplex opinion networks and the probability-based multiplex opinion dynamics model [27].
The interconnections between the four layers of the model reflect the actual structure of the FPN. As can be seen from Figure 2, a job enters the system after requests from the users' community, with arrival rate λ. The jobs enter the first queue, Q1, and spend some time in the switches of Layer 1. The mean service rate of Layer 1 is µ1, and the corresponding service time includes the time from the moment this layer accepts a data flow until the flow is ready for transmission to the next layer. Then, the jobs enter the next layer's queue, Q2, with probability p12, which means that with probability p12 there is no link collapse and no data corruption, so no re-transmission is needed. The communication between Layers 2 and 3 is modeled in a similar manner. Finally, the data arrive at the end hosts. Table 1 summarizes the main parameters required by the ProMo model. Two observations are necessary:
1. The probability values can be obtained after collecting information regarding the system's behavior. For example, to obtain the probability of no link collapse, information is required regarding the total time within a period (for example, one day or some hours) during which the links work properly. Furthermore, to obtain the probability of data corruption, it is necessary to keep track of how many times the links have received or sent corrupted data out of the total transmissions they have carried.
2. The probability values used in the model are, for each layer (that is, for all the layer's links), the average of the two probabilities mentioned in the first observation (their sum divided by two).
The state of the network can be modeled using the values of N1−N3. If a linear vector S is used, then the state is described by S = (N1, N2, N3), where N = N1 + N2 + N3 is the sum of all the user jobs at a given time. Assume that the system is in state S_t at time t, where S_t = (N1, N2, N3). The (conditional) probability that the system transitions to state S_{t+δ} after δ time is denoted by p(S_{t+δ} | S_t), where δ is the time required for one change of the system to take place. In simulation terminology, these small changes are called "events", and this is the term that will be used from now on. In other words, in the interval (t, t + δ), only one event takes place. To determine this event, there are two basic cases:
(C) Either a new job arrives from the users, with probability δλ, or a layer completes the processing of a job and forwards it to another layer, with probability δµi. These two basic cases can be mathematically expressed through four sub-cases: (C.i) no event occurs, that is, S_{t+δ} = S_t; (C.ii) one new job arrives at Layer 1; (C.iii) one job is completed and transferred from L1 to L2, from L2 to L3, or from L3 to the end nodes; and (C.iv) one job is interrupted due to link collapse or data corruption. The probability p(S_{t+δ}) of state S_{t+δ} at time t + δ is the sum, over all possible states S_t, of the products of the probability that the system is in state S_t at time t and the probability that the system transitions to S_{t+δ} as a result of an event occurring during the interval (t, t + δ) (by L4, we denote the end-node layer). If the sums of Equation (6) are expanded using Equations (2)–(5), the resulting Equation (7) has eight terms, related to the four cases described above (C.i–C.iv). The first term, of the form p(S_t)(1 − δλ − δ ∑_{i=1}^{3} µi), corresponds to the case where no event occurs (C.i); the second term, of the form δλ p(N1 + 1, N2, N3)_t, corresponds to the case of a new job arrival (C.ii); the next three terms correspond to the case of a completed job transferred to the next lower layer (C.iii); and the last term corresponds to the case of a job interruption (C.iv). In this last case, one job is subtracted from the lower layer (in case it has been partially transferred or transferred in a corrupted form) and is added back to the upper layer. The t index in every term indicates the time t.
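The δ-interval event logic (cases C.i–C.iv) can be sketched as a small Monte Carlo state-transition routine; the rates, the interval length δ, and the treatment of an interrupted transfer (the job is kept at its current layer for re-service) are illustrative assumptions, not the paper's exact formulation.

```python
import random

def step(state, lam, mu, p, delta, rng):
    """Advance the state (N1, N2, N3) by one interval of length delta,
    in which at most one event occurs (cases C.i-C.iv)."""
    N = list(state)
    u = rng.random()
    if u < delta * lam:                    # C.ii: a new job arrives at Layer 1
        N[0] += 1
        return tuple(N)
    u -= delta * lam
    for i in range(3):                     # possible completion at layer i+1
        if u < delta * mu[i]:
            if N[i] > 0:
                if rng.random() < p[i]:    # C.iii: uninterrupted transfer
                    N[i] -= 1
                    if i < 2:
                        N[i + 1] += 1      # i == 2 delivers to the end hosts
                # else C.iv: interrupted, the job is re-serviced at its layer
            return tuple(N)
        u -= delta * mu[i]
    return tuple(N)                        # C.i: no event in this interval

rng = random.Random(0)
state = (0, 0, 0)
lam, mu, p = 70.0, [200.0, 125.0, 125.0], [0.9, 0.8, 0.9]
delta = 0.001   # small enough that delta*(lam + sum(mu)) < 1
for _ in range(50_000):
    state = step(state, lam, mu, p, delta, rng)
```

Running many such δ-steps and averaging the queue lengths gives a simulation-side estimate of the mean job counts that the equilibrium analysis below derives analytically.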
Equation (7) expresses the probability of a transition from a given state at time t to a new state at time t + δ, as a function of the mean job arrival rate λ, of the independent average service rates of each layer µi, of the probabilities of data transfers between layers i, i + 1, that is, p_{i,i+1}, and of the probabilities of transfer interruptions between layers i, i + 1, that is, 1 − p_{i,i+1}. Although Equation (7) has time-dependent characteristics, it has been proven (see Jackson [28]) that a unique and time-independent solution to such an equation exists. The solution proposed by Jackson is the equilibrium state probability distribution, and it exists if the average arrival rate to each queue in the network is less than the average service rate of that queue. The equilibrium state probability distribution has some very important properties: Property 1. Under equilibrium conditions, the average flow leaving a queue equals the average flow entering the queue. This can be considered a "per-layer" load balance.

Property 2. The equilibrium state probability distribution at each layer i is independent of those at the other layers. This means that each layer can be examined independently as an M/M/1 queue with mean arrival rate λi and mean service rate µi.

Property 3. Given the external arrival rates (from the users) to each layer of the FPN network and the routing probabilities from each queue to another (probabilities of normal or interrupted transmission), the job arrival rate to each layer (at equilibrium) can be found by solving the flow balance equations for the network.
Before the solution proposed by Jackson is described for ProMo, it is necessary to define mathematically the relationship between the mean arrival rates per layer and the transition probabilities. According to the model of Figure 2, the mean arrival rates per layer are determined as follows. λ1: The mean arrival rate of Layer 1 is λ, the arrival rate of the users' jobs, multiplied by a probability equal to one (assuming that the users always generate jobs), plus the mean arrival rate of the next Layer 2 multiplied by the probability of interrupted transmissions from Layer 1 to Layer 2 (due to corrupted data or collapsed links), 1 − p12. λ2: The mean arrival rate of Layer 2 is the mean arrival rate of Layer 1, λ1, multiplied by the probability of uninterrupted transmissions between Layers 1 and 2, p12, plus the mean arrival rate of the next Layer 3 multiplied by the probability of interrupted transmissions from Layer 2 to Layer 3 (due to corrupted data or collapsed links), 1 − p23. The rates λ3 and λ4 are defined analogously. The above considerations are expressed mathematically as the flow balance system (Equation (8)):

λ1 = λ + (1 − p12) λ2
λ2 = p12 λ1 + (1 − p23) λ3
λ3 = p23 λ2 + (1 − p34) λ4
λ4 = p34 λ3

According to Jackson, the solution to the equilibrium state probability distribution p(S) = p(N1, N2, N3) (Equation (9)) is given by the product of probabilities (Equation (10)):

p(N1, N2, N3) = p(N1) p(N2) p(N3)

where (Equation (11)):

p(Ni) = ρi^Ni (1 − ρi)

with the ρi's denoting the utilization of an entire layer i, given by (Equation (12)):

ρi = λi / µi

Finally, the mean number of jobs N̄i for layer i can be computed as (Equation (13)):

N̄i = ρi / (1 − ρi)

By using the system of Equation (8), the mean arrival rates per layer can be computed from the user arrival rate and the transfer probabilities. Table 1 summarizes the main parameters required by the ProMo model:

λ: mean arrival rate of the user jobs
λi: Layer i mean arrival rate, i = 1−4 (Layer 4 is for the end nodes)
µi: Layer i mean service rate
p12: probability that the data transfer between Layers 1 and 2 is uninterrupted (no collapsed links or corrupted data)
p23: similar to p12, for the communication between Layers 2 and 3
p34: similar to p12, for the communication between Layer 3 and the end hosts
N: number of user jobs
Ni: number of jobs in each of the layers L1−L3

An Illustrative Example
Suppose that, for the network of Figure 1, the users generate jobs with a mean arrival rate of λ = 70 jobs/s, the mean service times of the three switch layers are 1/µ1 = 5 ms and 1/µ2 = 1/µ3 = 8 ms, and the mean service time of the end nodes is 1/µ4 = 10 ms. The probabilities of uninterrupted transfers are p12 = 0.9, p23 = 0.8, and p34 = 0.9. For this example, the values were chosen at random, but for a simulation set, they would be obtained based on the network characteristics (the service time values) and through observation and statistics (the mean job generation rate of the users and the probabilities of uninterrupted transfers), as described in the Simulation Analysis section. The mean arrival rates at the network's layers are then found from the system of Equation (8): λ1 = 78.6 jobs/s, λ2 = 86.3 jobs/s, λ3 = 77.7 jobs/s, and λ4 = 70 jobs/s (values rounded to one decimal place). The utilization values for each layer are found from Equation (12), and the equilibrium state probability then follows from Equations (10) and (11); its value is 0.001044. This means that the probability of the state changing with time is very small, which virtually means that the system's state does not change with time. In other words, if the users generate 70 jobs/s for a period of time, given the mean service times and the probabilities of uninterrupted transfers per layer, we can estimate the number of jobs per layer that keeps the system in a steady (or equilibrium) state, in which the average flow leaving one layer equals the average flow entering that layer. This can be considered a "per-layer" load balance. In the next section, the probabilistic model is used to test parameters that reflect load balancing over the network.
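The computations of this example can be sketched as follows. The flow-balance system below follows the verbal definitions of λ1 and λ2, with the λ3 and λ4 equations reconstructed by the same pattern (an assumption), so the computed rates may deviate slightly from the rounded values reported above.

```python
def solve_arrival_rates(lam, p12, p23, p34, iters=200):
    """Fixed-point iteration on the flow-balance system of Equation (8).
    The lambda_3 and lambda_4 equations are reconstructed by analogy with
    the lambda_1 / lambda_2 definitions (an assumption)."""
    l1 = l2 = l3 = l4 = lam
    for _ in range(iters):
        l1 = lam + (1 - p12) * l2
        l2 = p12 * l1 + (1 - p23) * l3
        l3 = p23 * l2 + (1 - p34) * l4
        l4 = p34 * l3
    return l1, l2, l3, l4

# values from the illustrative example
lam, p12, p23, p34 = 70.0, 0.9, 0.8, 0.9
l1, l2, l3, l4 = solve_arrival_rates(lam, p12, p23, p34)

# utilizations and mean job counts, Equations (12) and (13)
mu = [1 / 0.005, 1 / 0.008, 1 / 0.008]    # 5 ms, 8 ms, 8 ms service times
rho = [l / m for l, m in zip((l1, l2, l3), mu)]
nbar = [r / (1 - r) for r in rho]
```

With these inputs, λ1 comes out at ≈78.6 jobs/s, matching the value reported above; λ2 and λ3 land close to, but not exactly at, the rounded values in the text, which reflects the reconstructed λ3/λ4 equations.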

Simulation Analysis
In this section, the proposed model is compared with existing strategies to test its validity and accuracy. It is important to note that no scheduling was implemented based on the ProMo model. Instead, existing scheduling algorithms were used, and the metrics, which are indicators of good balancing and performance, were computed using the strategy-defined parameters and the ProMo-defined probabilistic parameters. To determine if the ProMo model is indeed suitable to model existing dynamic data flow schedulers in the cloud, it was necessary to compare the results obtained by its probabilistic parameters against the results obtained by the corresponding strategy-defined parameters. This is the approach taken in this section.
For the analysis that follows, two scheduling schemes were selected: (1) Dynamic Load-Balanced Scheduling for FPNs (DLBS-FPN) [8] and (2) Dynamic Randomized load-Balancing (DRB) [19]. The reason behind this choice is two-fold: first, the schemes are quite simple to implement in a simulator, and second, they are designed for FPN or similar network topologies, so the model fits them well. A description of how these strategies operate was given in the Related Work section. For the simulation environment, an Intel Core i7-8559U processor system was used, with a clock speed of 2.7 GHz. In the following subsections, the use of ProMo to test these two schemes is described in detail.

Application of ProMo to the DLBS-FPN Scheduling Scheme
In the first set of experiments, the link bandwidth was set to 1 Mbps, and the value of λ was set to 12.5 user jobs/s. Each job was assumed to have an average size of 1 Mb, and the data flows were assumed to have the same size of 500 Mb (an average of 500 jobs). Furthermore, the FPN network had 200 core switches, 400 aggregation switches, and 400 TOR switches, to comply with the settings used in [8].

Using the ProMo Model to Study the System's Throughput
The DLBS-FPN strategy uses a parameter called the scheduling trigger, h*, to study the system's throughput. This parameter tests the network load balancing on different links, and the throughput is examined with respect to h*. The scheduling trigger depends on the flow balance degree and the network topology of the data centers, and it can be decided by experiments under a fixed network topology [19]. Under this policy, the flow that occupies the largest amount of bandwidth on the most congested link is moved to another available link whenever h* ≤ h(t), where h(t) is the variance between the network bandwidth utilization ratio and the real-time link bandwidth utilization ratio. Because h* is a threshold value, it is determined by executing the algorithm under a variety of h* values and examining the relationship between throughput and h*; h* is then chosen as the value that corresponds to the highest throughput.
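The trigger policy described above can be sketched as follows; the exact variance definition and the utilization values are illustrative assumptions, not the DLBS implementation.

```python
def h_of_t(link_utils):
    """h(t): variance of the per-link bandwidth utilization ratios
    around the network-wide mean utilization (assumed definition)."""
    mean = sum(link_utils) / len(link_utils)
    return sum((u - mean) ** 2 for u in link_utils) / len(link_utils)

def should_reschedule(link_utils, h_star):
    """DLBS-style trigger: move the largest flow on the most congested
    link whenever h* <= h(t)."""
    return h_star <= h_of_t(link_utils)

imbalanced = [0.9, 0.1, 0.2, 0.15]   # one hot link: high variance
balanced = [0.5, 0.5, 0.5, 0.5]      # even load: zero variance
```

An imbalanced utilization vector trips the trigger while a perfectly balanced one does not, which is exactly the threshold behavior that the h* tuning experiments exploit.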
In ProMo, a different parameter is introduced to examine the throughput, expressed as a relationship between the Mean Response Time (MRT) and the mean arrival rate of the jobs generated by the users, λ. For reference, this parameter is called MRT/λ.
To determine MRT/λ, observe that the maximum arrival rate of user jobs is the one that saturates the first layer (meaning that ρ1 = 1); this is expressed by Equation (14), and the maximum arrival rate is achieved when λ is increased by a factor of f = 1/ρ1. Then, the maximum throughput rate Tmax is given by Equation (15). In other words, the throughput reaches a local maximum when λ is multiplied by the factor f. The average throughput T is given by Equation (16). The MRT is determined by the well-known Little's formula (Equation (17)), which relates the total mean number of jobs in the system to λ and the MRT. From Equations (16) and (17), it is clear that there is a relationship between the MRT, the mean throughput rate T, and the mean arrival rate λ. To study the mean throughput behavior, the DLBS was simulated, and the MRT and T values were recorded for a variety of λ values, with mean equal to 12.5 jobs/s. The µ values were assumed to be equal to 1/(50 ms), while the probabilities of uninterrupted transfer between layers were set to p12 = 0.8 and p23 = p34 = 0.9. Table 2 shows the values obtained, and Figure 3 shows the relationship between the MRT and the mean throughput T. A similar behavior has been reported for DLBS when examining the relationship between the parameter h* and the achieved throughput (see Figure 6 in [8]): the throughput line had a similar form when plotted against h*, reaching its maximum for a particular h* value, beyond which there was no further improvement in the DLBS throughput. When applying ProMo during the DLBS simulation, the model recorded a maximum throughput for an MRT close to six. This corresponded to an arrival rate of λ = 16.5 user jobs/s, and the corresponding f value that saturated the first layer was ≈1.31. These values can be verified by applying the values of Table 2 to Equations (14)–(17). The mean throughput curve computed in [8] had an identical slope to the one computed by ProMo, and this was repeated for different sets of µ and probability values.
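The quantities of Equations (14)–(17) can be sketched as follows, assuming that the saturation factor is f = 1/ρ1 and that the MRT follows Little's formula; the per-layer values used are illustrative, not the ones of Table 2.

```python
def saturation_factor(lam1, mu1):
    """Factor f that scales the user rate lambda until Layer 1 saturates
    (rho_1 = 1); assumes lambda_1 scales linearly with lambda."""
    rho1 = lam1 / mu1
    return 1.0 / rho1

def mean_response_time(lam, nbar):
    """Little's formula: total mean number of jobs = lambda * MRT."""
    return sum(nbar) / lam

lam = 12.5                 # user job rate from the experiments (jobs/s)
lam1, mu1 = 9.5, 12.5      # illustrative Layer 1 arrival and service rates
f = saturation_factor(lam1, mu1)
lam_max = f * lam          # the user rate that saturates Layer 1
```

With these illustrative numbers, f ≈ 1.32 and λmax ≈ 16.4 jobs/s, in the same range as the f ≈ 1.31 and λ = 16.5 jobs/s reported above.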
Three observations are necessary here:
1. The ProMo model was designed to work on network layers, not on single links, so the values obtained were average link values. For example, an average of 10% interrupted transfers was recorded in total for all the links connecting Layers 2 and 3 or Layers 3 and 4; thus, p23 = p34 = 0.9.
2. Continuing the previous observation, the mean throughput values were average layer throughput values. To convert these values to bytes/s, it is necessary to consider the average job size, record the number of bytes/s per link, and take the average. This approach also leads to similar curves.
3. The maximum point was the Layer 1 saturation point, beyond which the throughput is dramatically reduced. In ProMo, this was shown by the fact that, for f values beyond 1.31, the value of ρ1 exceeded one, making N̄1 negative (see Equation (13)). This is the reason that the curve of Figure 3 was not plotted beyond the value MRT = 6.
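The saturation effect in the third observation can be illustrated numerically: for a single M/M/1 layer, Equation (13) combined with Little's formula gives an MRT equal to 1/(µ − λ), which diverges as λ approaches µ. The service rate below is an illustrative assumption.

```python
def layer_mrt(lam, mu):
    """MRT of one M/M/1 layer via Equation (13) and Little's formula;
    algebraically equal to 1/(mu - lam), so it diverges as lam -> mu."""
    rho = lam / mu
    assert rho < 1, "layer saturated: rho must stay below 1"
    nbar = rho / (1 - rho)
    return nbar / lam

mu1 = 17.0                      # illustrative Layer 1 service rate
light = layer_mrt(10.0, mu1)    # moderate load: small MRT
heavy = layer_mrt(16.5, mu1)    # near saturation: MRT explodes
```

Past the saturation point (λ ≥ µ), ρ reaches one and Equation (13) no longer yields a valid (non-negative) mean job count, which is why the curve of Figure 3 stops there.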
From the above analysis, it can be concluded that the proposed probabilistic model can be used to examine the mean throughput of a scheduling approach, in this case the DLBS.

Using the ProMo Model to Study the Bandwidth Utilization
In DLBS, the authors computed the network bandwidth utilization ratio of all the links of the network. A high and stable value of the network bandwidth is an indication of a good scheduling scheme. In this set of simulations, a uniform transfer pattern was assumed, where the flows are symmetrically distributed among all the hosts and the data are transmitted with equal probability among the links. For ProMo, the average bandwidth utilization was computed as the ratio of the data-in and data-out rates per layer, divided by the number of layers, that is: In this set of experiments, the parameter values used were similar to the previous set, when examining the system mean throughput. Some values that were recorded during time intervals are given in Table 3. The average bandwidth utilization rate computed by ProMo is plotted in Figure 4. Again, the line derived from the simulation observations was similar to the one presented in [8] (see Figure 8a of this work). Some points that need to be mentioned here are the following: 1.
The bandwidth utilization line presented in [8] was somewhat smoother than the one in Figure 4. The reason is that the results here were obtained on a per-layer basis and not as average values over the total number of links. Looking at the corresponding graph in [8], one can observe longer periods of stable bandwidth values compared to the line derived by the ProMo tester.

2.
The bandwidth utilization fell off as the transmitted load kept increasing with time, as also described for the DLBS scheme. ProMo modeled this case with a steadily decreasing value of λ, to avoid possible congestion in the links. In turn, this caused a reduction of the p_i values, and thus the average bandwidth utilization was reduced.
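The per-layer utilization computation described above can be sketched as follows. The helper name and the sample rates are hypothetical; the sketch only assumes, as the text states, that utilization is the data-out/data-in ratio of each layer, averaged over all layers.

```python
# Minimal sketch of ProMo's average bandwidth utilization as described
# in the text: the out/in rate ratio of each layer, averaged over the
# number of layers. Function name and rate values are illustrative.

def avg_bandwidth_utilization(in_rates, out_rates):
    """Average of the per-layer data-out/data-in ratios."""
    ratios = [out / inp for inp, out in zip(in_rates, out_rates)]
    return sum(ratios) / len(ratios)

# Example: three layers, each forwarding a fraction of what it receives.
print(avg_bandwidth_utilization([10.0, 9.0, 8.1], [9.0, 8.1, 7.3]))
```

As λ is decreased to avoid congestion, the per-layer out/in ratios shrink, and the average computed by this helper falls accordingly, matching the fall-off observed in Figure 4.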

Application of ProMo to the DRB Scheduling Scheme
DRB is another interesting scheme, designed for fat-tree networks with a topology similar to the one used in ProMo. DRB uses another metric to test a scheduling algorithm's performance, the average latency, that is, the longest delay of data sent into the network at the same time [19]. In DRB, the hosts were assumed to send data flows continuously into the network. Each host had a packet to send into the network with probability p, and one packet was transmitted over each link at a time. In this set of simulations, a DRB routine was implemented and simulated using the basic parameters mentioned at the beginning of Section 4.1. The aim here was to verify that the MRT parameter of ProMo displays behavior similar to the average latency over different traffic scenarios. It is interesting that the average latency in DRB is also related to a probability function, just like the MRT parameter of ProMo. During the simulation, the ProMo model was used to record average MRT values at time intervals. Table 4 shows a set of values recorded for different loads λ. The MRT increased as the load became heavier. Specifically, as seen in Figure 5, the MRT increased very smoothly until it approached λ values that saturated the initial layer; these values were close to 16. Above this threshold, the MRT increase was explosive. A similar behavior was described in DRB (see Figure 6 in [19]). A few observations are necessary here:

1.
The MRT values were on a "per user job" basis. To find the MRT values on a byte basis, it is necessary to use the average job size, but the behavior will be identical.

2.
The authors in [19] proposed a DRB probability-based threshold, which is used to improve the average latency. The λ value scaled by at most f is an analogous parameter in ProMo, in the sense that, above this traffic value, there is not much to be done to prevent the MRT from deteriorating.
Again, one can observe that the ProMo model can be used with satisfactory accuracy to test an algorithm's average latency, in this case the average latency of DRB.
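The smooth-then-explosive MRT behavior described above can be illustrated with a simple M/M/1-style response-time curve, 1/(μ − λ). This is an illustrative stand-in for ProMo's actual MRT computation, not the model's formula; the assumed service rate μ = 16 mirrors the reported saturation threshold of the initial layer.

```python
# Illustrative stand-in for the MRT-vs-load curve: an M/M/1-style mean
# response time 1/(mu - lam), which grows smoothly at low loads and
# explodes as the load approaches mu. mu = 16 matches the saturation
# threshold reported for the initial layer; it is an assumption here.

MU = 16.0  # assumed saturation load of Layer 1

def mrt(lam):
    """Mean response time; blows up as the load approaches MU."""
    if lam >= MU:
        raise ValueError("load at or beyond saturation")
    return 1.0 / (MU - lam)

for lam in (4, 8, 12, 15, 15.9):
    print(f"lambda={lam}: MRT={mrt(lam):.3f}")
```

The printed values stay nearly flat up to moderate loads and then grow steeply near λ = 16, reproducing qualitatively the smooth-then-explosive shape seen in Figure 5 and in Figure 6 of [19].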

Conclusions and Future Work
In this paper, the main goal was to use the ProMo model to test the accuracy of the results produced by some state-of-the-art dynamic data flow scheduling algorithms in cloud environments. In this regard, the model was used with two quite simple schemes, DLBS and DRB. The ProMo model was basically built to work on FPN networks, and since the aforementioned schemes also operate on similar structures, they were chosen for testing. The results showed that the parameters computed by the ProMo model exhibit similar or identical behavior to those used by DLBS and DRB to verify their load balancing and good performance. This suggests that ProMo can be used for testing with satisfactory accuracy.
Much research lies ahead, and many efforts remain to be made. First, more schemes have to be tested to confirm this accuracy. Then, the model itself can be used as the basis for a new scheduler. Finally, it can serve as the basis of a cloud simulator, in cooperation with the CPN (Colored Petri Nets) tool; this is necessary since each layer, link, and user job will be treated as a token with its own features, which will increase the accuracy of the simulator.