Delay Analysis for End-to-End Synchronous Communication in Monitoring Systems

With the rapid development of smart grid technologies, communication systems are increasingly integrated into existing power grids, and the real-time capability and reliability of power applications are receiving growing attention. It is therefore important to measure the end-to-end delay in communication systems. Network calculus theory has been widely applied to communication delay analysis. However, for better operation performance of power systems, most power applications require synchronous data communication, to which network calculus theory cannot be directly applied. In this paper, we extend network calculus theory so that it can be used to analyze the communication delay of power applications in smart grids. The problem of calculating the communication delay of the synchronization system is converted into a maximum path problem in graph theory. Finally, our theoretical results are compared with experimental ones obtained with the network simulation software EstiNet. The simulation results verify the feasibility and effectiveness of the proposed method.

Figure 1. Synchronous calculation and transmission service model for the monitoring system.
In Figure 1, R 1 (t) and R 2 (t) stand for the inputs of the two sets of monitoring data, and the corresponding arrival curves are α 1 and α 2 . Following the arrival-curve concept of network calculus theory [22], the data arrival curve is the characteristic curve that describes the monitoring data. S 1 and S 2 represent the equivalent transmission service nodes through which the monitoring data pass on the way to the control center, with the corresponding service curves denoted as β 1 and β 2 . We denote S 3 as an equivalent computing service node, with scaling function S(n) and calculation service curve C. S 4 is a follow-up service model, with its service curve denoted as β 4 . Before R 1 (t) and R 2 (t) enter S 3 , they pass through a synchronous link, which changes the equivalent transmission service curves. Let β 1 ′ and β 2 ′ denote the equivalent transmission service curves after the synchronization link. Here, β 1 ′ and β 2 ′ are functions of β 1 and β 2 , i.e., β 1 ′ = F 1 (β 1 , β 2 ) and β 2 ′ = F 2 (β 1 , β 2 ).
The smart grid's wide area measurement system has three components: the power monitor unit (PMU), the communication network and the controller. The operation parameters of the utility grid within different regions are measured by PMUs. Based on the time scale from the global positioning system, the data is sent to the control center for analysis and processing. Let G 1 (t) and G 2 (t) be the amounts of data generated by Sensor 1 and Sensor 2 at time scale t, respectively. Then, we have: Next, the flow ratio is defined. Here, we assume that the sensors have synchronized clocks and that the time scales of the data are marked at the same time, i.e., if G 2 (t) ≠ 0, then G 1 (t) ≠ 0. Assuming G 2 (t) ≠ 0, we define ρ max ≜ max G 1 (t)/G 2 (t) and ρ min ≜ min G 1 (t)/G 2 (t). The data arriving at S 3 is synchronized, and the aggregate calculation service curve for the received data is C. Then, the minimal computational service curve for R 1 is ρ min /(1 + ρ min ) C; similarly, the minimal computational service curve for R 2 is 1/(1 + ρ max ) C. According to the above assumptions and the calculation model of unified service transmission, the equivalent end-to-end service model for monitoring data R 1 is obtained as follows: where the notation ⊗ stands for the (min-plus) convolution operator. Similarly, the equivalent end-to-end service model for monitoring data R 2 can be obtained as follows: In real-world scenarios, the two sets of monitoring data normally have the same size; hence, ρ max = ρ min = 1. In the above model, the data of sensors R 1 and R 2 reach the service node S 3 simultaneously due to the synchronization process, so their processing times are also the same. After the calculation processing, the data of sensors R 1 and R 2 go through the same service node S 4 with the same time delay. This is the so-called synchronization property; see [7] and the references therein.
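The splitting of the aggregate service curve C between the two synchronized flows can be sketched numerically. The following Python snippet (an illustrative sketch, not from the paper; the sample data volumes are assumptions) computes the flow ratios ρ max and ρ min from sampled data amounts and the resulting minimal shares of C.

```python
# Sketch: flow ratios rho_max, rho_min from sampled data volumes, and the
# resulting minimal split of the aggregate computing service curve C
# among the two synchronized flows R1 and R2.

def flow_ratios(g1, g2):
    """g1[k], g2[k]: data generated by Sensor 1 and 2 at time scale T_k (g2[k] != 0)."""
    ratios = [a / b for a, b in zip(g1, g2)]
    return max(ratios), min(ratios)

def service_shares(rho_max, rho_min):
    """Minimal fractions of C guaranteed to R1 and R2 under synchronization."""
    share_r1 = rho_min / (1 + rho_min)   # R1 gets at least this fraction of C
    share_r2 = 1 / (1 + rho_max)         # R2 gets at least this fraction of C
    return share_r1, share_r2

# Equal data volumes (the common case in the paper): rho_max = rho_min = 1,
# so each flow is guaranteed half of the computing service curve C.
g1 = [100, 100, 100]    # assumed sample volumes
g2 = [100, 100, 100]
rho_max, rho_min = flow_ratios(g1, g2)
s1, s2 = service_shares(rho_max, rho_min)
print(rho_max, rho_min, s1, s2)  # 1.0 1.0 0.5 0.5
```

With unequal volumes the two shares differ, but they always sum to at most one, consistent with C being shared.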
Based on the synchronization property, the end-to-end delay of the synchronization system can be analyzed. We assume that the data of sensor R 1 always arrives earlier than that of R 2 during a given time period [0, t]. This means that the data of R 1 always waits for that of R 2 in the synchronization section. As soon as the data of R 2 arrives, it can be input into the service node S 3 . Hence, regarding R 2 , the equivalent service curve is not changed by the synchronized transmission link, and we have β 2 ′ = F 2 (β 1 , β 2 ) = β 2 . According to [7], the end-to-end service curve of R 2 can be expressed as: According to the service theorems in network calculus theory [22] and the obtained public network flow model, β 2 and β 4 can be obtained directly. We have the upper bound of the end-to-end delay of R 2 being: However, the equivalent transmission service curve of R 1 has been changed, since the data of R 1 cannot go through service node S 3 until the arrival of R 2 's data. Then, we have β 1 ′ = F 1 (β 1 , β 2 ) ≠ β 1 . In fact, due to the waiting time, the delay of R 1 's data may increase, so β 1 ′ ≤ β 1 . According to the synchronization property, the end-to-end delays of R 1 and R 2 are the same, even if β 1 ′ cannot be obtained explicitly. The upper bound of the end-to-end delay of R 1 can therefore also be expressed by (3).
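The delay bound h(α, β) used above is the maximum horizontal deviation between the arrival curve and the service curve, a standard network-calculus construction. The sketch below approximates it numerically for a token-bucket arrival α(t) = b + rt and a rate-latency service β(t) = R·max(t − T, 0), for which the closed form is h = T + b/R when r ≤ R. All parameter values are illustrative, not from the paper.

```python
# Sketch: h(alpha, beta) = sup_t inf{ d >= 0 : beta(t + d) >= alpha(t) },
# approximated on a discrete grid, checked against the known closed form
# for token-bucket arrival and rate-latency service.

def horizontal_deviation(alpha, beta, horizon, step=1e-3):
    """Numerically approximate the maximum horizontal deviation h(alpha, beta)."""
    h = 0.0
    t = 0.0
    while t <= horizon:
        a = alpha(t)
        d = 0.0
        while beta(t + d) < a and d < 10 * horizon:
            d += step
        h = max(h, d)
        t += step * 100          # coarse outer grid keeps the sketch fast
    return h

b, r = 2.0, 1.0                  # burst and rate of the arrival curve (assumed)
R, T = 4.0, 0.5                  # rate and latency of the service curve (assumed)
alpha = lambda t: b + r * t
beta = lambda t: R * max(t - T, 0.0)

h_numeric = horizontal_deviation(alpha, beta, horizon=5.0)
h_closed = T + b / R             # closed form: T + b/R = 1.0
print(round(h_numeric, 2), h_closed)
```

Since r ≤ R, the deviation is maximal at t = 0, and the numeric value matches T + b/R.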
The problem is that the conclusion (3) holds only under the assumption that R 1 's data always arrives earlier than R 2 's. If R 1 's data arrives later than R 2 's after some time t * , the end-to-end service curve of R 1 takes the form (4), which is investigated in Section 3: Although the data of R 1 arrives later after t * , most of the equivalent transmission curve of R 1 has already changed before t * . Hence, the service curve of R 1 cannot simply be β 1 , i.e., (4) cannot serve as the equivalent transmission service curve of R 1 . To calculate the end-to-end delay in this case, the delay theory of the suspension service system is developed in Section 3.

Calculation of Equivalent Delay of Monitoring System
The main results of the equivalent delay calculation are provided as three theorems in this section.

Delay Theorem of Suspension Service System
Theorem 1. Consider an input R(t) through a service node with strict service curve β(t). The system does not provide any service during t 1 < t < t 2 . We assume that R * (t 1 ) is known and that the time delay of the original system is d(t). For t > t 1 , the delay of the suspended system is denoted as d′(t). Then, d′(t) satisfies the following inequality: where '∨' denotes taking the maximum.
The proof of Theorem 1 is given in Appendix A.1. Before time t * , the data of R 1 always arrives earlier than that of R 2 ; after time t * , on the contrary, R 2 's data arrives earlier. The service received by R 1 is then equivalent to a suspended service system, with suspension period d 1 (t * ) < t < d 2 (t * ), where d 1 (t * ) and d 2 (t * ) are obtained from (2) and (4), respectively. Because of the synchronization, the data of R 1 that was supposed to be processed by d 1 (t * ) will not be completed until d 2 (t * ), which is equivalent to the system being suspended for R 1 .
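The effect of a service suspension can be illustrated with a small discrete-time simulation. The sketch below (an assumed FIFO constant-rate model, not the paper's exact bound) computes per-slot delays with and without a suspension window and shows that the suspension only increases delays, and by at most the suspension length t 2 − t 1.

```python
# Sketch: FIFO server with constant service rate, optionally suspended
# during slots (t1, t2]. Delays are read off the cumulative arrival and
# service curves: data arriving at slot k departs at the first slot m
# with S[m] >= A[k].

def delays(arrivals, rate, t1=None, t2=None):
    """Per-slot delay of the data arriving at slot k (discrete time, FIFO)."""
    A, S = [0.0], [0.0]                  # cumulative arrivals / service
    for k, a in enumerate(arrivals, start=1):
        A.append(A[-1] + a)
        r = 0.0 if (t1 is not None and t1 < k <= t2) else rate
        S.append(min(A[-1], S[-1] + r))
    d = []
    for k in range(1, len(A)):
        m = k
        while S[m] < A[k] and m < len(S) - 1:
            m += 1
        d.append(m - k)
    return d

arr = [2, 2, 2, 0, 0, 0, 0, 0, 0, 0]     # a burst of 6 units (assumed)
base = delays(arr, rate=1)               # no suspension
susp = delays(arr, rate=1, t1=2, t2=4)   # suspended during slots 3 and 4
print(base)                              # [1, 2, 3, 2, 1, 0, 0, 0, 0, 0]
print(susp)                              # [1, 4, 5, 4, 3, 2, 1, 0, 0, 0]
assert all(s >= b for b, s in zip(base, susp))
assert max(s - b for b, s in zip(base, susp)) <= 4 - 2
```

Here the per-slot delay grows by at most 2 slots, exactly the suspension length, consistent with the intuition behind Theorem 1.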

Synchronization System Delay Analysis
Since the data sent by the sensors follows a fixed sampling interval, we denote the time scales of the data as T 1 < T 2 < T 3 < · · · < T n . Then, we have: Let us introduce the following definitions. Definition 1. Denote β 1 R1 , β 2 R1 , β 1 R2 , β 2 R2 as the service curves of flows R 1 and R 2 before and after the synchronous link, respectively; then β 1 R1 = β 1 , β 1 R2 = β 2 , and:

Definition 2.
Denote d x i (t) as the delay upper bound calculated from the equivalent service model of flow i after the data arrival sequence has changed for the x-th time. Denote D i (t) as the upper bound of the system delay before time t, when the data arrival order has changed i times. Then, we have: We assume that the flow arrival curves of R 1 and R 2 are α 1 and α 2 , respectively, and that the data of R 1 always arrives earlier than that of R 2 at the synchronization service node up to time T x 1 . The flow delay of R 2 is d 0 2 (t); if there were no synchronization mechanism, the flow delay of R 1 would be d 0 1 (t). Since R 1 's data always arrives earlier than R 2 's, the system delay is R 2 's delay, i.e., d 0 2 (t). According to network calculus theory, d 0 2 (t) = h(α 2 , β R2 ). After time T x 1 , R 2 's data arrives earlier than R 1 's at the synchronization service node, and thereafter the delay of R 1 should be taken as the system delay. However, due to the waiting of flow R 1 , the original service β R1 has changed. According to the analysis in Section 3.1, the equivalent suspension period of flow R 1 is obtained as follows: Flow R 1 can be seen as the output of the suspended service β 2 R1 , which it enters after first passing through service β 1 R1 . The suspension time of service β 2 R1 is the length of the time period given in (6). Assume that the data from flow R 1 passes through β 1 R1 , the output is R 1 1 (t) at time t, and the data from flow R 1 reaches the synchronization link at time t′. Obviously, R 1 1 (t′) = R 1 (t). Since the considered system is a periodic sampling monitoring system, data can only be transmitted at fixed time instants, and the transmission delay needs to be considered only when data is actually transmitted. Let us define t′ = T m . Thus, t′ stands for the time when the data with time scale T m reaches the synchronization link.
Assume that flow R 1 is output through service β 1 R1 , the synchronization link and service β 2 R1 . A discussion of the value of m in T m is given in Appendix A.2. Let us assume that after time scale T x 2 , the data of R 1 arrives at the synchronization node before the data of R 2 with the same time scale. R 2 's traffic can then be seen as the output of the suspended service β 1 R2 , which gets through service β 2 R2 first. The suspension time is: Therefore, for T m > T x 2 , the upper bound of the data delay can be expressed as: In accordance with the discussion above, when T ω ≥ T x 1 , the equivalent service model is not suspended, so D 0 (T m ) = d 0 2 (T m ). Otherwise, we can obtain: The problem is that the times x 1 and x 2 cannot be obtained with the existing network calculus theory [22]. The maximum of: is the bound of the system, which is: Furthermore, for T m > T x 2 : Given θ 1 = θ 2 , according to (A1), we have: So, for T m ≥ T x 2 , we have: Therefore, we have: and: So:

Synchronization System Delay Upper Bound Theorem
Theorem 2. The system's delay upper bound after the n-th exchange of the data arrival sequence, before time T m , can be expressed as: where d n 1 (T m ) and d n 2 (T m ) have different expressions, determined by the parity of n. If n is odd and the equivalent model of data stream 1 is used after the last change, then data stream 1 arrives earlier than data stream 2 before the first exchange; if the equivalent model of data stream 2 is used after the last change, then data stream 2 arrives earlier than data stream 1 before the first exchange. Therefore, we have: and: If n is even and the equivalent model of data stream 1 is used after the last change, then data stream 2 arrives earlier than data stream 1 before the first exchange; if the equivalent model of data stream 2 is used after the last change, then data stream 1 arrives earlier than data stream 2 before the first exchange. Therefore, we have: and: The proof of Theorem 2 is given in Appendix A.3.
In the above theorem, the system's delay upper bound after the n-th exchange of the data arrival sequence before time T m is obtained. Next, for any given time t, we derive the expression of the system's delay upper bound.
Theorem 3. At any given time t, the system's delay upper bound can be expressed as d(t) ≤ h(α, β, t).
The proof of Theorem 3 is given in Appendix A.4. According to Theorem 3, we have h(α, β, t 1 ) ≤ h(α, β, t 2 ) for t 1 ≤ t 2 . Hence, D n (t) in Theorem 2 can be written as follows (for odd n):

Method for Calculating the Delay Upper Bound of the Equivalent Synchronization System
Furthermore, consider the case of p-channel data that needs to be synchronized. The flows are denoted 1, 2, · · · , p. Following the analysis in Sections 3.2 and 3.3, the upper bound of the delays can be represented with the following model: We define: Therefore: where f (t) corresponds to a path from node 0 to node dest in Figure 2, and maximizing f (t) is equivalent to finding the maximum-weight path. There are pm + 2 nodes in this graph. Nodes in the same row are not connected to each other, and neither are nodes in the same column, but any two nodes in different rows and different columns are connected. In addition, node 0 is connected to every other node. Node dest is connected only to node km, with distance 0. The distance between two nodes is denoted l(node 1 , node 2 ). So, we have: In Figure 2, every path from node 0 to node dest yields a value of f (t). For example, one such path can be expressed as the path distance l(0, 1) + l(1, 2m + 2) + l(2m + 2, pm).
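The maximum-path computation on the layered graph can be sketched with a standard dynamic program over a DAG. The toy graph below is an illustrative stand-in, not the exact graph of Figure 2; node names and edge weights are assumptions.

```python
# Sketch: maximum-weight path from node 0 to node dest in a DAG, solved by
# memoized recursion (equivalent to DP over a topological order).
from collections import defaultdict
from functools import lru_cache

def longest_path(edges, source, dest):
    """edges: {(u, v): weight} on a DAG; returns the maximum total path weight."""
    succ = defaultdict(list)
    for (u, v), w in edges.items():
        succ[u].append((v, w))

    @lru_cache(maxsize=None)
    def best(u):
        if u == dest:
            return 0
        cands = [w + best(v) for v, w in succ[u]]
        return max(cands) if cands else float("-inf")

    return best(source)

# Toy layered graph: 0 -> {a1, a2} -> {b1, b2} -> dest (weights assumed)
edges = {
    (0, "a1"): 3, (0, "a2"): 1,
    ("a1", "b2"): 2, ("a2", "b1"): 5,
    ("b1", "dest"): 0, ("b2", "dest"): 0,
}
print(longest_path(edges, 0, "dest"))  # 6 (path 0 -> a2 -> b1 -> dest)
```

Because the layered graph is acyclic, this DP runs in time linear in the number of edges, so the bound is cheap to evaluate even for many channels.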

Network Topology Simulation and Experimental Design
The network simulation software EstiNet is used for the experiments. For simplicity and generality, this paper adopts the network topology shown in Figure 3. To simulate the time delay of the synchronization system, the data transmission described in Algorithm 1 is carried out in the network of Figure 3:
1. Nodes 1 and 3 send packets of size 100 kb at intervals of one second to node 42, with arrival curves denoted α 1 (t) and α 2 (t).
2. Node 42 sends the packets to node 11 after synchronization.
3. Nodes 5, 12, 14, 17, 18, 20, 21, 22, 23 and 29 send competing data packets to node 8, constituting competing flow R 3 (t) with arrival curve α 3 (t).
4. Nodes 4, 13, 15, 16, 19, 24, 25, 26, 27 and 28 send competing data packets to node 8, constituting competing flow R 4 (t) with arrival curve α 4 (t).
5. Nodes 31, 32, 33, 34, 35, 36, 37, 38, 39 and 40 send competing data packets to node 8, constituting competing flow R 5 (t) with arrival curve α 5 (t).
6. All links have a bandwidth of 10 Mb. The same configuration is used for all routers.
To assess the feasibility and effectiveness of the proposed method, we design different α 3 (t), α 4 (t) and α 5 (t) in the experiments, as follows. A data-generating program is deployed such that the data packets sent by R 3 are as shown in Figure 4, those sent by R 4 as shown in Figure 5, and those sent by R 5 as shown in Figure 6. The competing flows R 3 (t), R 4 (t) and R 5 (t) are also made consistent with the BC-pAug89 data set [23]; the stg-trace file command in EstiNet can read network flows generated from the specified file.

Computation Service Curve and Scaling Function of Node 42 of the Synchronous Link
For simplicity, the synchronization process adds the numbers of packets sent by node 1 and node 3, so the scaling function can be expressed as S(a) = a/2. We further test the processing time of a single char-type operation: the total time of one million operations is less than 10 milliseconds, i.e., the average time of a single operation is less than 0.01 microseconds. Assuming a time complexity of O(n), the equivalent calculation service curve can be expressed as C(t) ≥ 100t × 8 = 800t, where t is in units of microseconds and C(t) is in units of bits.
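The per-operation timing can be reproduced with a simple benchmark. The sketch below (illustrative; exact figures depend heavily on the machine, and an interpreted language like Python will report larger per-operation times than the native measurement cited in the text) times one million single-character operations.

```python
# Sketch: estimating the per-operation processing time by timing one
# million one-character operations, as in the text. The chosen operation
# (a char XOR) is an assumption standing in for the paper's char processing.
import timeit

def char_op(c="a"):
    return chr(ord(c) ^ 1)   # a representative single-character operation

n = 1_000_000
total_s = timeit.timeit("char_op()", globals=globals(), number=n)
per_op_us = total_s / n * 1e6
print(f"total = {total_s * 1e3:.1f} ms, per op = {per_op_us:.4f} us")
```

Dividing the total by the number of iterations gives the per-operation time from which the service rate in C(t) is derived.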

When There Are No Competing Flows α 3 (t), α 4 (t) and α 5 (t)
Because the propagation delay of each link is set relatively small (1 microsecond, which is negligible), the delay of the system mainly consists of processing delay, transmission delay and queuing delay. The equivalent service model using common links and routers is as follows: Since the propagation delay is small and the router processing delay is assumed negligible, the equivalent service model of each link and router is determined by the link bandwidth. Therefore, the equivalent transport service curve from node 1 to node 42 is β 1,42 = β 1,2 ⊗ β 2,42 , where β 1,2 = 10t. Based on the remaining service theorem [7], we have: The equivalent transport service curve from node 42 to node 11 is β 42,11 = 10t. Since nodes 1 and 3 carry the same amount of data, and using the previously obtained computing service curve C(t) of node 42 and scaling function S, we obtain the equivalent service curves from node 1 and node 3 to node 11 as follows:
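The tandem composition β 1,42 = β 1,2 ⊗ β 2,42 uses the min-plus convolution, (β 1 ⊗ β 2)(t) = inf over 0 ≤ s ≤ t of [β 1(s) + β 2(t − s)]. The sketch below evaluates it on a discrete grid for two rate-latency curves (parameter values are illustrative, not the paper's) and checks the known closed form: the convolution of rate-latency curves has the smaller rate and the sum of the latencies.

```python
# Sketch: discrete-grid min-plus convolution of two rate-latency service
# curves, compared against the closed-form result.

def rate_latency(R, T):
    return lambda t: R * max(t - T, 0.0)

def min_plus_conv(b1, b2, t, step=0.01):
    """(b1 (x) b2)(t) = min over s in [0, t] of b1(s) + b2(t - s)."""
    k = int(round(t / step))
    return min(b1(i * step) + b2(t - i * step) for i in range(k + 1))

b1 = rate_latency(R=10.0, T=1.0)   # e.g. a 10 Mb/s element, 1 us latency (assumed)
b2 = rate_latency(R=8.0, T=0.5)

# Closed form: rate min(10, 8) = 8, latency 1.0 + 0.5 = 1.5
conv = rate_latency(R=8.0, T=1.5)
for t in [1.0, 2.0, 4.0]:
    assert abs(min_plus_conv(b1, b2, t) - conv(t)) < 1e-6
print("min-plus convolution matches the closed form")
```

The same composition rule is what makes β 1,42 a single end-to-end service curve for the link chain.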

When Adding Competing Flows of α 3 (t), α 4 (t) and α 5 (t)
According to the residual service curve theorem [7], the service curves from node 1 and node 3 to node 2 are as follows: The service curve from node 42 to node 11 is: Thus, the equivalent service curves from node 1 and node 3 to node 11 are as follows:
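The residual (leftover) service construction used here can be sketched numerically: a node with strict service curve β(t), shared with cross traffic bounded by α c(t), offers the flow of interest at least [β(t) − α c(t)]⁺, kept nondecreasing. The parameter values below are illustrative assumptions.

```python
# Sketch: pointwise leftover service curve [beta(t) - alpha_c(t)]^+,
# made nondecreasing by taking a running maximum.

def leftover(beta, alpha_c, ts):
    """Leftover service sampled at the times ts, nondecreasing and >= 0."""
    vals, best = [], 0.0
    for t in ts:
        best = max(best, beta(t) - alpha_c(t))
        vals.append(max(best, 0.0))
    return vals

beta = lambda t: 10.0 * t            # 10 Mb/s link modeled as beta(t) = 10 t
alpha_c = lambda t: 3.0 + 4.0 * t    # token-bucket cross traffic (assumed b=3, r=4)

ts = [0.0, 0.5, 1.0, 2.0]
print(leftover(beta, alpha_c, ts))   # [0.0, 0.0, 3.0, 9.0]
```

The resulting curve is again of rate-latency type (rate 10 − 4 = 6 after the burst is absorbed), which is why residual curves compose cleanly with the convolution step above.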

Time Delay Increasing due to Forwarding
We adopt sockets for packet forwarding in the implementation, which means that a packet is not sent on to node 11 until it has completely arrived at node 42, whereas under normal conditions each arriving packet would be forwarded immediately. Thus, the additional time delay is the transmission delay of a packet of the same size from node 2 to node 11. Because the link bandwidth is 10 Mbps, the increase in delay is packet size/bandwidth; we denote this delay by ∆t. The system delay bound is therefore obtained by superimposing ∆t on the originally calculated bound, namely D n (t) = D n (t) + ∆t. Verifying the above analysis experimentally, we measure a delay of 22,625.8 microseconds for a 100 Kb packet sent from node 1 to node 11 via node 42, while the delay of a packet sent directly from node 1 to node 11 is 12,896.8 µs, where ∆t = 9996.9 µs ≈ 100 Kb/10 Mbps. Furthermore, when data is forwarded over multiple links, the total delay must include the delay of the last packet forwarded through an intermediate transmission. The intermediate delay δ can be calculated by: δ = MTU size × (m − 1)/10 Mbps.

Arrival Flow Curve
(a) Arrival flow curve of the monitoring sensors. Since 100 kb monitoring data packets are sent, the time required for data transmission can be omitted. The arrival flow curves at node 1 and node 3 can be represented by the following step function [14]: where the time t is in units of microseconds and T = 10 6 .
(b) Competing arrival flow curves according to Figures 4-6. For the competing flows R 3 and R 4 , the maximum flow within any one second is 10 Mb, the maximum flow within any two seconds is 13 Mb, and the maximum flow within any three seconds is 14 Mb. So, we can obtain the curve as: For the competing flow R 5 , the largest flow within any one second is 1 Mb; therefore, it can be expressed as: Thus, the equivalent service curves from node 1 and node 3 to node 11 are as follows:
(c) Competing flow using the BC-pAug89 data set.
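The forwarding-delay arithmetic above can be checked directly. The sketch below reproduces ∆t = packet size/bandwidth and δ = MTU size × (m − 1)/bandwidth; the packet size and bandwidth are the paper's, while the MTU and fragment count m are assumed example values.

```python
# Sketch: store-and-forward delay figures. Delta_t is the transmission
# delay of one full packet; delta accounts for the (m - 1) extra MTU-sized
# transmissions on an intermediate hop. Units: bits in, microseconds out.

def transmission_delay_us(size_bits, bandwidth_bps):
    return size_bits / bandwidth_bps * 1e6

def intermediate_delay_us(mtu_bits, m, bandwidth_bps):
    """delta = MTU_size * (m - 1) / bandwidth, for m MTU-sized fragments."""
    return mtu_bits * (m - 1) / bandwidth_bps * 1e6

packet = 100_000          # 100 Kb packet (from the paper)
bw = 10_000_000           # 10 Mbps link (from the paper)

delta_t = transmission_delay_us(packet, bw)
print(delta_t)            # 10000.0 us, close to the measured 9996.9 us

# e.g. a 12000-bit MTU split into m = 9 fragments (assumed values)
print(round(intermediate_delay_us(12_000, 9, bw), 1))  # 9600.0 us
```

The computed 10,000 µs agrees with the measured ∆t ≈ 9996.9 µs to within the timestamping noise of the experiment.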

Simulation Results of Monitoring System Delay
In Figure 7, the simulated network delay is compared with the case where competing flow is not superimposed. In Figure 8, the simulated network delay is compared with the case where competing flows in Figures 4-6 are superimposed. In Figure 9, the simulated network delay is compared with the case where competing flow in BC-pAug89 data set is superimposed.

Since delays caused by operations such as packetization and depacketization in the network communication, as well as the processing delay of the routers, are not considered in the theoretical calculation, the theoretical delay upper bounds in Figures 7 and 9 are smaller than the measured values. However, the deviation is within 5 milliseconds, which is quite small. As can be seen from Figures 7-9, the theoretical results are close to those obtained by simulation, which indicates the feasibility and effectiveness of the proposed method.