Effect of AQM-Based RLC Buffer Management on the eNB Scheduling Algorithm in LTE Network

With the advancement of Long-Term Evolution (LTE) networks and smartphones, most of today's internet content is delivered via cellular links. Due to the nature of wireless signal propagation, the capacity of the last-hop link can vary within a short period of time. Unfortunately, the Transmission Control Protocol (TCP) does not perform well in such scenarios, potentially leading to poor Quality of Service (QoS) (e.g., end-to-end throughput and delay) for the end user. In this work, we study, via ns3 simulation, the effect of Active Queue Management (AQM) based congestion control and intra-LTE handover on the performance of different Medium Access Control (MAC) schedulers under TCP traffic. A proper AQM design in the Radio Link Control (RLC) buffer of the eNB in the LTE network avoids forced drops and link under-utilization while remaining robust to a variety of network traffic loads. We first demonstrate that the original Random Early Detection (RED) linear dropping function cannot cope well with different traffic-load scenarios. We then establish a heuristic approach in which different non-linear dropping functions are proposed, with one free parameter. Our simulations demonstrate that the performance of different schedulers can be enhanced via a properly chosen dropping function.


Introduction
To meet the key requirements for the next-generation wireless network, cellular operators around the world provide seamless Quality of Service (QoS) for converged mobile and multimedia services. This is possible due to the deployment of 4G networks based on the Long-Term Evolution (LTE) standard. The growing demand for network services such as video streaming, video telephony, and Voice over Internet Protocol (VoIP), with their constraints on delay and bandwidth, poses new challenges for researchers in the design of future-generation cellular networks. LTE and System Architecture Evolution (SAE) were formulated by the 3rd Generation Partnership Project (3GPP) in 2004 [1]. LTE introduces a new air interface and radio access network, which provide much higher throughput, lower latency, and greatly improved system capacity and coverage compared to Wideband Code Division Multiple Access (WCDMA) systems. These improvements raise expectations on the end-user Quality of Experience (QoE) over LTE compared to existing 2G/3G systems [2]. For this reason, both the research and industrial communities are making a considerable effort in the study of LTE systems, proposing new and innovative solutions to analyze and improve their performance.
To date, there has been much research on LTE, not only on the physical layer but also on higher layers, e.g., the Packet Data Convergence Protocol (PDCP) and Radio Link Control (RLC) layers. The main data buffers for user traffic are located at the RLC layer, where data is held before its transmission to the destination User Equipment (UE). However, if the buffers of the RLC entities become congested or overflow, low latency cannot be guaranteed [3]. As a high-speed broadband network, LTE promises uninterrupted connectivity to the packet data network, yet congestion may take place at the radio interface for various reasons. One such reason is that the communication quality of the channel varies with the distance of the UE from the base station (Evolved Node B, eNB); under poor channel quality, the eNB selects a low-bit-rate modulation scheme and may segment packets. Therefore, if the channel condition is not good enough to transmit RLC Service Data Units (SDUs) in time, they wait in the buffers of the RLC entities for the segmentation or concatenation instruction from the Medium Access Control (MAC) layer.
Handover occurs when a user moves from one cell to another. During the handover, the source eNB forwards both the incoming data and the data already in the RLC buffer for that user to the target eNB to alleviate the service disruption. However, forwarding the user data may increase the delay of the forwarded packets. Moreover, there is a time interval immediately after the handover when data on the direct path and the forwarded path may arrive at the target eNB in parallel [4]. Hence, there is a high probability of congestion or overflow in the RLC buffers due to the large volume of traffic arriving in a short period of time, leading to high delay and poor end-to-end application performance [5]. To guarantee high throughput and low delay when congestion occurs, numerous Active Queue Management (AQM) based congestion control schemes have been proposed in the past, especially for transport-layer protocols. Among them, the Random Early Detection (RED) [6] algorithm is one of the best known.
The work presented in this paper is an improvement and extension of our preliminary version [7] with the following new contributions:
• We extend our idea Smart RED (SmRED) [7] to SmRED-i, in which the packet dropping probability function differs according to the value of i = 2, 3, 4, . . . Increasing the value of i leads to a lower dropping probability under low traffic load and a higher dropping probability under high traffic load. The parameter i can be tuned to obtain different dropping functions.
• We examine the effect of AQM-based buffer management on the performance of different scheduling algorithms with and without handover, measured in terms of end-to-end average Transmission Control Protocol (TCP) throughput and delay. While there is significant work on the performance of different schedulers in the absence of AQM-based buffer management [4] and handover [8], it is very difficult to capture the effect of both on the scheduling algorithms using analytical models.
• Finally, we use detailed and extensive ns3 [9] simulations to study the effect of AQM-based buffer management on the scheduling algorithms, both in a single-cell topology without handover and in a multi-cell topology with handover, across a wide range of traffic loads.
The remainder of this paper is organized as follows. Related work is presented in Section 2. A brief introduction to the LTE network architecture is given in Section 3. The AQM mechanism and the proposed approach are described in Section 4. The network model and the implementation of different schedulers in ns3 are described in Section 5. Section 6 evaluates the performance of the proposed scheme by simulation. Finally, Section 7 concludes the paper.

Related Work
Many AQM schemes have been designed based on heuristic, control-theoretic and optimization approaches [10]. Examples of heuristic schemes are RED [6], gentle RED (GRED) [11], adaptive RED (ARED) [12], parabola RED [13], hyperbola RED (HRED) [14], dynamic RED [15], flow RED [16], stabilized RED [17], balanced RED [18], BLUE [19], Yellow [20], etc. These algorithms aim to improve some or all of the following features: fairness, stability, network utilization, packet loss and adaptability to different traffic loads. In GRED, the jump in packet dropping probability from the maximum packet dropping probability (P_max) to 1 is replaced by a gentle slope; the author suggested that doing so increases stability. However, the main problem of GRED is parameter tuning: different parameters must be tuned for different network conditions. ARED adaptively adjusts the maximum packet dropping probability P_max using a multiplicative increase-multiplicative decrease approach. However, ARED is also sensitive to parameter configuration, and its performance is not superior to RED when the network environment is complex [10]. BLUE uses link idle events and packet loss to model congestion control in a simple way, and its packet loss is low; however, its long delay causes congestion in certain situations [10].
In the control-theoretic approach, the main focus is to model TCP together with RED dynamics, which then yields a stable and faster-responding system [15]. The main thrust of this approach is to interpret the structural problems and analytically determine the various RED parameters [21]. The Proportional Integral (PI) controller [22], Proportional Derivative (PD) controller [23], PI-Derivative (PID) controller [24] and fuzzy logic controller [25] are examples of control-theoretic schemes. From a control standpoint, the low-pass filter design of RED leads to low-frequency oscillations in the regulated output. To overcome this limitation, a PI controller was proposed that uses instantaneous queue-length samples instead of the averaged queue size. A PID controller was proposed to provide a faster response time than the PI controller. A PD controller, on the other hand, adapts the RED parameter P_max to further improve queue-length stabilization compared to ARED. A fuzzy logic controller can provide better control of the non-linear, time-varying system in which traffic loads, round-trip times, etc., vary over time; such a scheme dynamically adapts its parameters to cope with the situation.
Optimization schemes such as [26] treat the congestion control problem as maximizing the aggregate source utility via an approximate gradient algorithm [27]. For example, an adaptive virtual queue [28] attempts to maintain the queue length by utilizing the arrival rate of the traffic. A stabilized virtual buffer [29], on the other hand, utilizes both the arrival rate of the traffic and the queue length as the target value to achieve a stabilized output.
The traditional AQM research discussed above focused on wired networks. Recently, much AQM research has focused on wireless networks. In an infrastructure-type wireless network, the base station acts as the router between the wired part and the wireless access part and becomes congested because of the speed mismatch between the two [30]. A RED-based discard strategy [3], a PDCP discard-timer-based strategy for the LTE network [2], efficient and fair AQM [31], slope-based discard [32] and the wireless delay-based queue [33] are examples of this category. In an infrastructure-less wireless network, on the other hand, the mobile nodes themselves act as routers and can become congested. Examples of such schemes are predictive AQM [34], ad hoc hazard RED [35], etc.
Although the many studies described above have contributed numerous insights into AQM-based congestion control, they have their own technical drawbacks. For example, the control-theoretic and optimization approaches work well in analysis, but the complexity of their implementation makes them impractical to use in a router [10]. Any new AQM implementation drastically impacts the router architecture design and software [30].
The primary objective of this paper is to design a simple and practical RED variant that needs minimal adjustment to the original RED algorithm and is able to keep the RLC buffer in the LTE network at a predictable level under a wide variety of traffic loads. Figure 1 shows the architectural view of the RLC entity in the LTE network [3]. The congestion information measured by our approach can be fed as cross-layer information to the PDCP layer to adjust the discard timer accordingly. In the LTE network, there is a discard function in the PDCP layer, which sits above the RLC layer. Each Service Data Unit (SDU) that arrives from the higher layer is associated with a discard timer in the PDCP layer, and the timer starts when the PDCP layer receives the SDU. If the timer expires and the SDU is still in the buffer, the SDU is discarded. The discard timer value can be set according to the congestion level of the RLC buffer. However, this adjustment also depends on the specific QoS requirements of the corresponding bearer and is an open research problem [2,3].

Background
This section presents a brief introduction to the LTE network architecture and the resource block allocation in the eNB in order to better understand the later sections.

Overview of LTE Networks
The LTE system is based on a flat architecture, known as the SAE, in contrast to 3G systems [36]. LTE provides high-speed delivery of data and signaling and seamless mobility support. As depicted in Figure 2, it is mainly composed of two parts: the evolved packet core, also called the core network, and the Evolved Universal Terrestrial Radio Access Network (E-UTRAN). The evolved packet core is connected to the IP Multimedia Subsystem (IMS). The Home Subscriber Server (HSS) is the main IMS database, and it also acts as a database in the Evolved Packet Core (EPC). The EPC has three main functional units: the Mobility Management Entity (MME), the Serving Gateway (SGW), and the Packet Data Network Gateway (PGW). The MME is responsible for handling user mobility, intra-LTE handover, and the tracking and paging procedures of User Equipment upon connection establishment. The main function of the SGW is to route and forward user data packets among LTE nodes, and to manage handover between LTE and other 3GPP technologies. The PGW interconnects the LTE network with the rest of the Internet. The LTE access network hosts only two kinds of node: the UE (that is, the end user) and the eNB. Note that eNB nodes are directly connected to each other through the X2 interface and to the MME gateway. In contrast to other cellular network architectures such as 3G, the eNB is the only device responsible for radio resource management and control procedures at the radio interface.

Radio Resource Management
Radio Resource Management (RRM) is an eNB application-level function that ensures the efficient use of the available radio resources, which are allocated in the time/frequency domain (see Figure 3). In the frequency domain, the total bandwidth is divided into sub-channels of 180 kHz, each with 12 consecutive, equally spaced OFDM sub-carriers. In the time domain, resources are distributed every Transmission Time Interval (TTI), each lasting 1 ms. Time is split into frames, each composed of 10 consecutive TTIs. Furthermore, each TTI is made of two time slots of length 0.5 ms, corresponding to 7 Orthogonal Frequency Division Multiplexing (OFDM) symbols in the default configuration with a short cyclic prefix. A time/frequency radio resource spanning two time slots in the time domain and one sub-channel in the frequency domain is called a Resource Block (RB) and is the smallest radio resource unit that can be assigned to a UE for data transmission. As the sub-channel size is fixed, the number of RBs varies with the system bandwidth configuration (e.g., 25 and 50 RBs for system bandwidths of 5 and 10 MHz, respectively). Readers interested in RRM can refer to [37].

Active Queue Management and Proposed Approach
RED [6], proposed by Floyd and Jacobson, is the prominent solution to global synchronization and opened a vast research area in AQM. The RED algorithm manages the queue using four parameters: the queue length, the minimum threshold (Min_th), the maximum threshold (Max_th), and the maximum probability (P_max). The queue length is the maximum buffer size of the queue. Briefly, the algorithm maintains an average queue size. The packet dropping probability (P_d) changes linearly between zero and P_max as the average queue size moves between Min_th and Max_th. If the average queue size exceeds Max_th, all arriving packets are dropped. RED can control transient congestion (due to bursty traffic) by absorbing arrival-rate fluctuations, because the packet dropping mechanism is based on a moving average of past queue lengths. Although RED is a prominent technique to properly maintain the queue size, compared to simple Droptail [38], which drops a packet only when the queue reaches its limit, RED is particularly sensitive to its own parameters and to the dynamic traffic load [39].
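The RED mechanism just described can be sketched in a few lines of Python. This is a minimal illustration, not the ns3 implementation used later in the paper; the class and method names and the EWMA weight value are illustrative assumptions, while the thresholds match the example values used in Figure 4 (Min_th = 20, Max_th = 80, P_max = 0.5).

```python
import random

class RedQueue:
    """Minimal sketch of the classic RED dropping decision (illustrative)."""

    def __init__(self, min_th=20, max_th=80, p_max=0.5, weight=0.002):
        self.min_th = min_th
        self.max_th = max_th
        self.p_max = p_max
        self.weight = weight   # EWMA weight (illustrative value)
        self.avg = 0.0         # moving average of the queue length

    def drop_probability(self, queue_len):
        # Update the exponentially weighted moving average; averaging is
        # what lets RED absorb short bursts without dropping packets.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return 0.0        # no early drops below the minimum threshold
        if self.avg >= self.max_th:
            return 1.0        # forced-drop region above the maximum threshold
        # Linear ramp from 0 to p_max between the two thresholds.
        return self.p_max * (self.avg - self.min_th) / (self.max_th - self.min_th)

    def should_drop(self, queue_len):
        return random.random() < self.drop_probability(queue_len)
```

Note how the dropping probability depends on the smoothed average rather than the instantaneous queue length; this is precisely the property that makes RED robust to bursts but, as discussed below, also insensitive to how far the load is from the target operating point.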
The goal of a proper AQM scheme is to keep the average queue size between Min_th and Max_th with low oscillations, which in turn helps avoid forced drops. It is inappropriate for the average queue size and the packet dropping probability of the original RED to be linearly related [10]. It has been found in [40] that with a small average delay in the low-load scenario, the link bandwidth is not fully utilized; thus, in order to improve link utilization, a smaller packet dropping probability should be used. In the high-load scenario, the link bandwidth is fully utilized but the average delay is large; thus, a larger packet dropping probability should be used in order to reduce the average delay.
Considering the above requirements, we modified the packet dropping probability of RED according to the traffic load; we call the result Smart RED (SmRED). We divided the packet dropping probability function into two regions, according to the settings of Min_th and Max_th, to distinguish between low and high traffic-load conditions. By doing this, we aim to achieve a trade-off between throughput and delay.
RED's original packet dropping probability can be defined as P_d = P_max (avg − Min_th) / (Max_th − Min_th) for Min_th ≤ avg < Max_th (1), where avg is the average queue length, Min_th is the minimum threshold that the average queue length must exceed before any packet marking or dropping is done, and Max_th is the maximum threshold beyond which all packets are marked or dropped. We first decided on one target value of the eNB buffer occupancy, below which we treat the traffic volume as low and above which we treat it as high. We define the target value as the middle of the minimum and maximum thresholds, which can be expressed mathematically as Target = (Min_th + Max_th) / 2 (2). If avg is below the target value, we set the packet dropping probability as P_d = P_max ((avg − Min_th) / (Max_th − Min_th))^i for Min_th ≤ avg < Target (3). On the other hand, when the traffic volume starts to become high, i.e., if avg is between Target and Max_th, the packet dropping probability function is defined as P_d = P_max (1 − ((Max_th − avg) / (Max_th − Min_th))^i) for Target ≤ avg < Max_th (4), where i = 2, 3, 4, 5, . . . Increasing the value of i leads to a lower dropping probability in the low traffic-load region and a higher dropping probability in the high traffic-load region. We can tune the parameter i to obtain different dropping functions, as shown in Figure 4, where Min_th = 20, Max_th = 80, and P_max = 0.5. Depending on the value of the parameter i, we refer to SmRED's different versions as SmRED-i. In summary, the expressions for SmRED-i's packet dropping probability as a function of the average queue length are combined in Equation (5).
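The two-branch dropping function described above can be sketched as follows. This is an illustrative reconstruction from the stated behavior (a convex branch below the target giving a lower probability than RED, and a concave branch above it giving a higher one, both meeting 0 at Min_th and P_max at Max_th); the exact curve shapes are an assumption, and the default parameter values are the ones used for Figure 4.

```python
def smred_drop_probability(avg, i=4, min_th=20, max_th=80, p_max=0.5):
    """SmRED-i packet dropping probability as a function of the average
    queue length `avg` (sketch; branch shapes are assumptions consistent
    with the description in the text)."""
    target = (min_th + max_th) / 2.0
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    # Normalized fill level between the thresholds, in [0, 1).
    x = (avg - min_th) / (max_th - min_th)
    if avg < target:
        # Low load: convex curve, so a larger i gives a smaller
        # dropping probability than RED's linear ramp.
        return p_max * x ** i
    # High load: concave curve, so a larger i gives a larger
    # dropping probability, warning TCP sources earlier.
    return p_max * (1.0 - (1.0 - x) ** i)
```

For example, at an average queue length of 30 packets (low load) the probability shrinks as i grows, while at 70 packets (high load) it grows with i, which is exactly the behavior the tuning parameter is meant to provide.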
The detailed proposed discard strategy of the algorithm is shown in Figure 5. Essentially, one cannot simultaneously have high link utilization and low queuing delay; a reasonable trade-off is required between these two performance measures [41]. Tuning the value of the parameter i trades off high throughput against low delay among the different versions of SmRED. The effectiveness of our proposed scheme is demonstrated via simulation in Section 6.

Network Model
In this section, we describe the network model that we used for simulation in ns3 and the system parameters, and provide a brief description of different scheduling algorithms.

Network Topology and System Parameters
In order to systematically explore the interactions between AQM-based buffer management and the scheduling algorithm, we have to consider the mobile network elements and protocols in detail. We implemented our proposed strategy using the LTE module of the ns3 [9] simulator, configured with the topology depicted in Figure 6 for the single-cell scenario and in Figure 7 for the handover scenario in the multi-cell topology. We used the LTE/EPC Network Simulator and Analysis (LENA) module [42] to create an end-to-end LTE network. The LENA module has all the major elements of a real LTE system, including the Evolved Packet Core (EPC) and the Evolved UMTS Terrestrial Radio Access (E-UTRA) air interface. The LTE model in ns3 provides a detailed implementation of various aspects of the LTE standard, such as adaptive modulation and coding, Orthogonal Frequency Division Multiple Access (OFDMA), hybrid Automatic Repeat reQuest (ARQ), etc. The ns3 implementation follows the detailed 3GPP LTE specification and supports various versions of TCP; hence, the results obtained in simulation can be representative of what happens in a real system. Guaranteed bit rate video traffic is simulated: the remote host on the right side acts as the source, and the UEs on the left side act as sinks. The TCP data senders are of the TCP-Cubic type; we used TCP-Cubic because it is the default TCP congestion control algorithm in the Linux OS in real networks. The TCP packet size is 1000 bytes. We used the Single Input Single Output (SISO) transmission mode for both the UE and the eNB. The remote host is connected to the LTE core network via a wired link with a capacity of 10 Mbps (1250 packets per second) and a propagation delay of 50 ms. The application data rate is 100 Mbps. The thresholds of the packet dropping function are set as Min_th = 20 packets and Max_th = 3 × Min_th packets, and the maximum packet dropping probability is set to 0.1. The eNB RLC buffer size is 100 packets. The total simulation time is 75 s. Table 1 shows the details of some of the important simulation parameters and their values.

Scheduling Algorithms
One of the important features of the LTE system for meeting QoS requirements is multi-user scheduling, which is responsible for distributing the available resources among active users. Both the downlink and uplink packet schedulers are deployed at the eNB and, since OFDMA introduces no inter-channel interference, they work with a granularity of one TTI and one RB in the time and frequency domains, respectively. Resource allocation for each UE is usually done by comparing per-RB metrics. These metrics can capture the status of the transmission queues, channel quality, resource allocation history, buffer state and QoS requirements. The main differences among resource allocation strategies stem from the trade-off between computational complexity and decision optimality. The main design factors that should always be taken into account when defining a resource allocation policy for the LTE system are spectral efficiency, scalability and complexity, fairness, and QoS provisioning. A good overview of different allocation strategies introduced for LTE systems, such as Maximum Throughput (MT), Blind Equal Throughput (BET), Proportional Fair (PF), Throughput to Average (TTA), Round Robin (RR), Priority Set Scheduler (PSS), and Token Bank Fair Queue (TBFQ), highlighting the pros and cons of each solution, can be found in [37].
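To illustrate per-RB, metric-based allocation, the following sketch implements the textbook Proportional Fair rule, which assigns an RB to the user maximizing the ratio of the instantaneously achievable rate to the smoothed past throughput. This is a generic sketch of PF, not the ns3 scheduler code; the function names and the smoothing factor `beta` are illustrative assumptions.

```python
def pf_schedule(achievable_rate, avg_throughput):
    """For one resource block, pick the user index maximizing the
    Proportional Fair metric r_u / R_u, where r_u is the rate user u
    could achieve on this RB and R_u its smoothed past throughput."""
    return max(range(len(achievable_rate)),
               key=lambda u: achievable_rate[u] / avg_throughput[u])

def update_avg(avg_throughput, served_user, served_rate, beta=0.05):
    """Exponentially smooth each user's average throughput after one TTI;
    only the served user contributes its instantaneous rate."""
    return [(1 - beta) * R + beta * (served_rate if u == served_user else 0.0)
            for u, R in enumerate(avg_throughput)]
```

A Maximum Throughput scheduler would instead pick `max(achievable_rate)` directly; PF's division by the past average is what lets a user with a poor channel but little accumulated service eventually win an RB, giving the fairness behavior discussed in Section 6.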

Simulation Result and Performance Evaluation
In this section, we first demonstrate the effectiveness of different versions of SmRED compared to the original RED algorithm. With the topology in Figure 6, we simulated different versions of SmRED, where the Guaranteed Bit Rate (GBR) video traffic is downloaded from the source to the target UEs. The number of UEs is varied from 2 to 10 to produce different traffic loads. We set 15 resource blocks in the eNB to serve the UEs, i.e., a channel bandwidth of 3 MHz (other possible resource block values are 6, 25, 50, 75, and 100; we chose 15 in order to create a bottleneck in the LTE wireless access part, so that even with 10 parallel TCP connections, i.e., 10 UEs, the traffic load is high when the application sends packets at 100 Mbps). The results are shown in Figures 8 and 9. From the results, we can see that, in the low traffic-load condition, a higher value of i gives higher throughput and lower delay; in fact, i = 5 behaves like Droptail. This is because, when the traffic load is low, higher values of i result in a lower dropping probability, and in some cases no dropping occurs at all. However, as the traffic load increases, higher values of i such as i = 5 fail to provide optimal performance. This is because, in the high traffic-load condition, if the dropping probability is set too low (before the Target), the AQM algorithm fails to inform the TCP source about the impending congestion early enough. As a result, the queue becomes full, packets are dropped, TCP timeouts occur, and the TCP window enters the slow-start phase, which results in lower throughput and higher delay. On the other hand, the dropping probability should not be set too high (after the Target), so that unnecessary drops are avoided. From the results, we can see that SmRED with i = 4 achieves a better balance between throughput and delay than the other versions.
To compare the performance of RED and SmRED-4, we varied the RLC buffer size as the input parameter and calculated the average packet loss rate (in percent) for a high traffic-load scenario (10 parallel TCP connections) and a moderate traffic-load scenario (4 parallel TCP connections). The buffer size was varied from 100 packets to 200 packets with a granularity of 25 packets. The simulation results are shown in Tables 2 and 3 for high load and moderate load, respectively. As can be seen from the tables, the packet loss rate of SmRED-4 is lower than that of RED throughout the whole range of buffer sizes. In the high traffic-load scenario, the reduction of the packet loss rate of SmRED-4 compared with RED increases from 3.05 to 14.05. When the traffic load is moderate, the reduction increases from 17.99 to 34.19. This is because, when the traffic load is low, the dropping probability of SmRED-4 is low, so unnecessary packet drops are avoided; thus, the gain of SmRED over RED in packet loss rate is higher. To validate the improvement achieved by the different scheduling algorithms together with SmRED-4, we compared the performance of the different scheduling algorithms with SmRED-4 and RED in a single cell without handover (HO) and in multiple cells with HO. We kept the number of UEs at 10 to simulate a high traffic-load condition. In order to experience different channel conditions, the UEs were placed randomly in the single cell at distances of 500 m to 5000 m from the eNB. All UEs downloaded GBR video traffic from a single server. For the multi-cell scenario, all UEs were initially located in the first cell and moved towards the other cells at a speed of 20 m/s. We used the 3GPP-specified A3-RSRP handover algorithm, which utilizes Reference Signal Received Power (RSRP) measurements and event A3, as designed in [43]. Event A3 is a reporting triggering event that is activated when the neighboring cell's measured RSRP is better than that of the serving cell by a certain offset.
We compared the download performance in terms of end-to-end average throughput and end-to-end average delay. The results are shown in Figures 10 and 11 for both the single-cell and multi-cell cases. In the multi-cell network scenario, we explored the effect of handover on TCP performance. Due to the handover, TCP throughput suffers from packet duplicates and the extra latency caused by buffering in the eNB. Thus, in the multi-cell network, the reduction in total TCP throughput compared to the single-cell network can be attributed to these TCP effects.
When the traffic load is high (UE = 10), the different scheduling algorithms with SmRED-4 outperform those with RED. This is because SmRED-4 has a higher dropping probability than RED under high traffic load, which informs the source about the impending congestion earlier than RED does. Thus, the TCP source reduces its window size in advance, avoiding the retransmission of lost packets, which helps to achieve a lower end-to-end average delay (Figure 11) without sacrificing much throughput. An interesting observation is that, in the single-cell topology without handover, MT achieves the highest throughput, as expected, but in the multi-cell topology with handover, its performance degradation is the highest among all the schedulers. This is because MT assigns RBs to the users with good signal quality in order to utilize the full bandwidth. However, as the UEs move and handovers occur, different UEs face different signal conditions, and MT assigns RBs to one UE for a short period of time and then to others. Thus, the overall throughput degrades considerably, while fairness increases, as expected and as can be seen in Figure 12. Figure 12 shows the fairness of the different schedulers in terms of Jain's fairness index [44]. From the performance of MT, TBFQ and TTA, we can see that they blindly enhance the overall cell throughput by exploiting the channel in terms of spectral efficiency, but provide unfair resource sharing among users. Thus, in order to ensure a minimum performance even for cell-edge users, or in general for users suffering from bad channel conditions, fairness is a criterion of the utmost importance that should be taken into consideration by a good scheduling algorithm. Fairness can be ensured by considering the past service level experienced by each user, as is done by many schedulers such as PF, RR, PSS and BET.
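For reference, Jain's fairness index used in Figure 12 is (Σ x_u)² / (n · Σ x_u²) over the per-user throughputs x_u; it equals 1 when all users receive equal throughput and approaches 1/n when one user takes everything. A one-function sketch (function name is illustrative):

```python
def jain_index(throughputs):
    """Jain's fairness index over per-user throughputs: 1.0 for a
    perfectly fair allocation, 1/n when a single user gets all of it."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))
```

For instance, four users with equal throughput score 1.0, while one user monopolizing the cell among four scores 0.25, which is the kind of gap Figure 12 exposes between the fairness-aware schedulers (PF, RR, PSS, BET) and the throughput-maximizing ones (MT, TBFQ, TTA).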

Conclusions
This paper presented a performance evaluation of different scheduling algorithms in the presence of AQM-based congestion control and handover. From the point of view of maximum utilization of the channel capacity, the best solution is to allocate the RBs to those users that experience good channel conditions. However, doing so leads to unfair resource allocation among the other users. Thus, providing equal service to all users, i.e., fairness, QoS provisioning, computational simplicity, and energy savings, can only be achieved at the cost of lower cell capacity. Accordingly, the design of an RB allocation strategy is a trade-off between the goals that the network operator wants to achieve and the spectral efficiency.
On the other hand, a good AQM algorithm should require very low computational cost and be easy to implement in a real network. SmRED-4, a smart congestion control mechanism based on RED, aims to solve the link under-utilization and large-delay problems in the low and high traffic-load scenarios, respectively, for the eNB RLC buffer in the LTE network. Furthermore, migrating from RED to SmRED-4 in a real network needs very little work because of its simplicity: only the packet dropping profile is modified. To achieve gains in user application performance, an optimal RLC buffer occupancy at the eNB is required. SmRED-4 effectively remedies the drawbacks of RED and achieves a better balance between high throughput and low delay. Thus, a good combination of an AQM algorithm with the scheduling techniques can provide optimal performance to network operators.

Figure 1. Model of the Radio Link Control (RLC) entity in the Long-Term Evolution (LTE) network.

Figure 8. End-to-end average throughput of different versions of Smart RED (SmRED).

Figure 9. End-to-end average delay of different versions of SmRED.

Figure 10. End-to-end average throughput of different schedulers.

Figure 11. End-to-end average delay of different schedulers.

Table 1. Different Simulation Parameters in ns3.

Table 2. Packet Loss Rate in the High Traffic Load Scenario.

Table 3. Packet Loss Rate in the Moderate Traffic Load Scenario.