A Resource Allocation Scheme for Packet Delay Minimization in Multi-Tier Cellular-Based IoT Networks

With advances in Internet of Things (IoT) technologies, billions of devices are becoming connected, enabling unprecedented sensing and control of physical environments. IoT devices have diverse quality of service (QoS) requirements, including data rate, latency, reliability, and energy consumption. Meeting these diverse QoS requirements presents great challenges to existing fifth-generation (5G) cellular networks, especially in emerging scenarios such as connected vehicle networks, where strict data packet latency may be required. In such scenarios, IoT devices place higher requirements on packet latency, which is essential to the effective utilization of 5G networks. In this paper, we propose a multi-tier cellular-based IoT network to address this challenge, with a particular focus on meeting application latency requirements. In the multi-tier network, access points (APs) can relay and forward packets from IoT devices or other APs, which can support higher data rates with multiple hops between IoT devices and cellular base stations. However, since multi-hop relaying may cause additional delay, which is critical for delay-sensitive applications, we develop new schemes to mitigate this adverse impact. Firstly, we design a traffic-prioritization scheduling scheme to classify packets with different priorities in each AP based on the age of information (AoI). Then, we design different channel-access protocols for the transmission of packets according to their priorities to ensure QoS and the effective utilization of the limited network resources. A queuing-theory-based theoretical model is proposed to analyze the packet delay for each type of packet at each tier of the multi-tier IoT network. An optimal algorithm for the distribution of spectrum and power resources is developed to reduce the overall packet delay across tiers.
The numerical results achieved in a two-tier cellular-based IoT network show that the target packet delay for delay-sensitive applications can be achieved without a large cost in terms of traffic fairness.


Introduction
With the rapid growth of wireless services beyond fifth-generation (5G) technology and the proliferation of innovative applications, a massive number of devices have been deployed to collect and share data to achieve their full potential; this is known as the Internet of Things (IoT) [1,2]. IoT technology plays a key role in modern wireless communication systems. IoT networks are expected to realize features such as local decision making and remote monitoring and control by utilizing sensory mechanisms in real-time applications across massive device deployments [3]. IoT applications such as smart cities, vehicular communication, autonomous driving, and remote health care demand large amounts of data traffic from wireless networks and are growing explosively [2]. In this context, providing reliable wireless access for massive numbers of IoT devices and diverse data traffic is essential to the realization of 5G IoT networks and has attracted considerable attention from academia and industry [2,4]. Cellular-based IoT networks are considered the most promising solution for last-mile connectivity of IoT devices due to their cost-effective deployment and guaranteed services, such as high scalability, diversity, and security, which require neither massive additional infrastructure deployments nor significant changes to current cellular networks [2,5].
In cellular-based IoT networks, massive numbers of IoT devices initially access the cellular base station (BS) for network connections, directly or through IoT gateways, by performing a random access (RA) procedure [5]. However, when a massive number of IoT devices simultaneously access the network to transmit small data packets on the uplink, often to frequently forward real-time status updates, preamble collisions among IoT devices are inevitable. As such, improving the access mechanism of existing cellular systems for better quality of service (QoS) with lower power consumption of IoT devices is one of the key challenges for cellular-based IoT networks [2,5,6].
In IoT networks, a broad range of IoT services is enabled by massive numbers of IoT devices that monitor environmental factors such as temperature and humidity [4,7]. In this case, IoT devices need to process different data traffic streams for various receivers, where each traffic stream has different performance metrics, such as delay, throughput, or AoI, to characterize timeliness [1,2,4]. There is thus an unavoidable need to design innovative data-collection mechanisms for cellular-based IoT networks to enhance efficiency and scalability [7]. Due to the delay-tolerant and uplink-dominant characteristics of IoT traffic with small data packets, the main access technology for requesting channel resources in uplink transmission is contention-based RA [2]. Meanwhile, different IoT services have different QoS requirements based on the properties of their traffic data and functionality. Providing access mechanisms with satisfactory network performance for a massive number of IoT devices and the corresponding variety of network services therefore remains a key challenge. Moreover, IoT devices repeatedly generate updated information about the environmental factors being observed and transmit it to the BSs [3]. The purpose of frequently updating the information status is to keep the information as fresh as possible, i.e., to keep the delay as small as possible. In [8], the authors introduced the idea of the Age of Information (AoI), which measures the freshness of information, and discussed its use in the efficient design of freshness-aware IoT networks. A series of works then built on the results of [8] by characterizing the temporal mean of AoI or other freshness-related metrics for various queuing-model variants. In [4], the authors applied the concept of AoI to measure the freshness of information at the BSs regarding the random processes monitored by IoT devices, while [1] studied the AoI of K-tier cellular-based IoT networks under a weighted path-loss association policy and a fractional power control strategy. However, artificial designs, such as utilizing timing advance (TA) information to avoid collisions, can result in unfairness, which is not considered in most existing studies.
In this paper, we propose a multi-tier cellular-based IoT network with a queueing-theory-based analytical model and utilize a traffic-prioritization scheduling scheme to minimize packet transmission delay and meet particular application latency requirements based on AoI. Firstly, a multi-tier hierarchical cellular-based IoT network is modeled by utilizing Voronoi tessellation. Secondly, data packets from each IoT device are assigned different priorities based on their AoI in real time. A multiple-access protocol is designed for data packets to access the uplink channel: the data packet with the highest priority transmits without the listen-before-talk (LBT) requirement, while the others follow a carrier-sense multiple access with collision avoidance (CSMA/CA) mechanism. Thirdly, the mean packet-transmission delay is obtained for different traffic packets by analyzing the M/G/1 queue in each AP. Fourthly, a mean total packet delay minimization problem via optimal allocation of network resources in a two-tier cellular-based IoT network is formulated and solved by utilizing the gradient descent method together with the bisection method. The numerical results demonstrate the effectiveness of the proposed mechanism and algorithm, which not only achieve the minimum mean packet delay but also give consideration to delay-sensitive traffic based on the AoI metric.
The remaining parts of this paper are structured as follows. In Section 2, we review the related literature. In Section 3, we model the system for a cellular-based IoT network with multi-tier APs and investigate the queueing model in each AP to obtain the mean packet delay for packets with different priorities. In Section 4, we formulate and solve the problem of minimizing the mean total packet delay in a multi-tier cellular-based IoT network. In Section 5, we assess the effectiveness of the proposed mechanism via the mean packet delay in a two-tier cellular-based IoT network through numerical results, followed by conclusions in Section 6.

Related Literature
In [9], a queuing-theory-based model that allows cross-layer optimization was proposed to investigate the possibility of leveraging multi-RAT to reduce transmission delay in multi-hop IoT networks without losing the requisite QoS, while maintaining the freshness of the received information via the AoI metric. Meanwhile, previous research [10] proposed a three-dimensional resource allocation algorithm for maximizing the total throughput in a cognitive radio IoT network with simultaneous wireless information and power transfer. In [11], the authors maximized the energy efficiency and spectral efficiency of space-air-ground IoT networks by optimizing subchannel selection, power control, and UAV position deployment. Refs. [12-14] investigated resource allocation in cellular IoT networks, revealing the rising attention researchers are giving to next-generation cellular networks and IoT techniques. The authors in [15] maximized the energy efficiency of a UAV-based network that collects information from smart devices, sensor devices, and IoT devices by optimizing UAV trajectory, power allocation, and time slot assignment. However, as the most distinguishing feature, diverse QoS requirements, especially strict data packet latency in difficult scenarios with multi-hop transmissions, are not comprehensively investigated in the existing literature; they require a more elaborately designed optimization mechanism based on the AoI metric, and this need motivates this paper. In [16], the authors studied a resource allocation optimization mechanism to minimize mean packet transmission delay in a three-dimensional cellular network with multi-layer UAV networks. Furthermore, in [17], data packets were classified into incumbent packets and relayed packets based on the data source locations. Packet delays for various packet classes at each layer of a multi-layer UAV network were investigated, and the minimum total packet delay was achieved by optimally allocating spectrum and power resources among the layers of the UAV network. However, all the relayed packets in each UAV destined for the macrocell base station were served with the same priority in the queuing model, which is less efficient in utilizing limited radio resources with guaranteed network performance, especially for packets that have already accumulated large delays at previous relaying UAV layers. Meanwhile, multi-hop relaying imposes a delay penalty on packets, which is fatal to delay-sensitive packets in lifetime-limited UAV networks. It is therefore essential to investigate and guarantee packet delay performance for all relayed packets in hierarchical multi-tier networks based on the AoI metric, which is the main contribution and innovation of this paper. Furthermore, we extend these techniques to a different but more general scenario, cellular-based IoT networks, which can be further extended to scenarios with heterogeneous data having different delay requirements based on the preference of the network designer.

System Model
In this paper, we investigate a cellular-based IoT network with multi-tier APs, denoted by a graph G({M, A}, D). In G, M = {m_1, . . ., m_M} is the set of M MBSs,

Channel Model
In cellular-based IoT networks, the communication environment is complex, and it is impossible to achieve ideal LOS communication channels for all IoT devices, even when multi-tier APs are utilized to extend network coverage and improve network connectivity. To achieve reliable analysis results for general communication scenarios, this paper models propagation channels by considering LOS and non-LOS components along with their occurrence probabilities separately in (1). The LOS connection probability is determined by the communication environment, the height and density of surrounding buildings, and the elevation angle between the IoT devices and APs, along with their locations [18]. Depending on the LOS or non-LOS connection between the IoT device and AP, the received power p^r_{jj'} at node j' from node j is shown in (1), where η is an additional attenuation factor due to the non-LOS connection and ∀j, j' ∈ M ∪ A ∪ D, j ≠ j'. This paper utilizes the Rayleigh fading model on non-LOS paths for both the desired signal and interference to achieve reliable analysis results in general communication scenarios, which leads to a random channel power gain following an exponential distribution. Then the received power at node j' from node j is given by [16]
p^r_{jj'} = p^t_{jj'} g_{jj'} d_{jj'}^{-α},
where p^t_{jj'} is the power transmitted from node j to node j', g_{jj'} is a random variable following an exponential distribution with mean 1/h to account for multi-path fading, α is the path loss exponent, and
d_{jj'} = √((x_j − x_{j'})² + (y_j − y_{j'})² + (z_j − z_{j'})²)
is the distance between node j and node j', where (x_j, y_j, z_j) and (x_{j'}, y_{j'}, z_{j'}) denote the locations of node j and node j' in the Voronoi tessellation [19].
The transmission rate C_{jj'} from node j to node j' is
C_{jj'} = b_{jj'} log_2(1 + p^r_{jj'} / (I_c + N_0)),
where b_{jj'} is the channel bandwidth between j and j', and I_c and N_0 are the co-channel interference and white Gaussian noise, respectively.
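As a quick sanity check, the Shannon-rate expression above can be sketched in code. The parameter names, and the assumption that I_c and N_0 are both expressed as powers in watts over the allocated bandwidth, are ours rather than the paper's exact notation:

```python
import math

def transmission_rate(b, p_rx, i_c, n0):
    """Shannon rate C = b * log2(1 + p_rx / (i_c + n0)).

    b: bandwidth (Hz); p_rx: received power (W);
    i_c: co-channel interference power (W); n0: noise power (W).
    Names are illustrative, not the paper's notation.
    """
    return b * math.log2(1.0 + p_rx / (i_c + n0))
```

For example, with a 1 MHz channel and an SNR of exactly 1 (0 dB), the rate is b · log2(2) = 1 Mbit/s, which is a convenient consistency check on the units.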

Multi-Tier Cellular-Based IoT Network Model
In a cellular-based IoT network, the expected number of APs covered by one MBS in the Poisson-Voronoi (PV) system is E[N_a^m] = δ_a/δ_m, as shown in [19,20], where l_m is the radius of MBS cell coverage. By the same methodology, the expected number of IoT devices within one AP's coverage is E[N_d^a] = δ_d/δ_a. With modifications to (4), the expected total length of connections between APs and the MBS and the expected total length of connections from devices to APs can be obtained.
The expected average length of connections between APs and the MBS can be related to the radius of MBS cell coverage by utilizing the Voronoi properties mentioned above. Then, the mean number of tiers of APs covered by one MBS follows, where r_min is the distance between neighboring tiers of APs, determined by the minimum transmission range needed to avoid co-channel interference. Following the same procedure as in (4), the mean number of APs at the i-th tier of the AP network can be represented as shown in [20], where i ∈ {1, . . ., (I − 1)}.
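To make the cell-population bookkeeping concrete, the expected-count relations under the Poisson-Voronoi model can be sketched as below. The helper names are ours, and the tier-count helper is a deliberate simplification (mean link length divided by r_min) of the paper's exact expression:

```python
def expected_children(child_density, parent_density):
    """E[N] = delta_child / delta_parent nodes per Poisson-Voronoi cell,
    e.g. expected APs per MBS cell or devices per AP cell."""
    return child_density / parent_density

def mean_num_tiers(mean_link_length, r_min):
    """Rough mean number of AP tiers per MBS cell, approximated here as
    E[link length] / r_min (a simplification, not the paper's formula)."""
    return mean_link_length / r_min
```

With the densities used later in the numerical section (δ_m = 1, δ_a = 1000, δ_d = 10,000 per km²), this gives 1000 APs per MBS cell and 10 devices per AP, consistent with the simulation setup.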

Queueing Model in Multi-Tier Cellular-Based IoT Network
As shown in the multi-tier cellular-based IoT network model, each AP at the i-th tier receives packets not only directly from the IoT devices it serves but also from APs at the (i + 1)-th tier for relaying. To utilize the limited resources efficiently while considering the QoS requirements of delay-sensitive IoT devices and the corresponding applications, we utilize different access protocols for different data traffic streams in the queueing model. The priority of each data packet is defined such that q ∈ {1, . . ., Q} indexes the data packets in the waiting list to access the channel. The data packet with the highest priority is named q_1, and the others carry sequence numbers from {2, . . ., Q} to indicate their priorities. Data packet q_1 with the highest priority in the queueing model immediately occupies the channel as soon as the channel is sensed to be idle, without the LBT procedure, while the other packets in the same batch follow the CSMA/CA mechanism. After packet q_1 is successfully transmitted, the data packet with the highest priority among all remaining packets that have not yet been successfully transmitted follows the CSMA/CA mechanism in the next sensing process. In this paper, we refer to the data packet q_1 with the highest priority as the packet with priority and to the other packets as packets without priority.
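A minimal sketch of the AoI-based ranking described above follows. The dict representation and field names are our illustrative assumptions; the paper only specifies that the oldest information (largest AoI) gets the highest priority:

```python
def assign_priorities(packets, now):
    """Rank queued packets by age of information (now - generation time),
    oldest first. Priority 1 (the paper's q_1) transmits without LBT;
    the rest contend via CSMA/CA.

    `packets` is a list of {"id": ..., "t_gen": ...} dicts (illustrative).
    Returns a list of (priority, packet_id) pairs.
    """
    ranked = sorted(packets, key=lambda p: now - p["t_gen"], reverse=True)
    return [(q + 1, p["id"]) for q, p in enumerate(ranked)]
```

For instance, three packets generated at times 5, 1, and 3 ranked at time 10 have AoIs 5, 9, and 7, so the one generated at time 1 becomes q_1.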
For the data received from APs at the (i + 1)-th tier, the priority of each packet is decided by the location of the AP to which the generating device is attached. This is because a packet generated by a device attached to an AP far from the i-th tier has already experienced delay, which makes it more urgent to prioritize it to satisfy the QoS requirement; such packets are therefore given a shorter contention window size than packets generated near the i-th tier. The packet service time of packets with priority is determined by the status of the packets without priority. During the average fraction of time that packets without priority do not occupy the channel, packets with priority in the queue instantly occupy the channel, and the packet service time in this case is as shown in [17], where no packet with priority occupies the channel at that moment. S_{s,o} is the channel occupancy time in service of packets with priority, and that of packets without priority is S_{r,o}. S_{s,o} and S_{r,o} follow exponential distributions with E[S_{s,o}] = 1/µ_s and E[S_{r,o}] = 1/µ_r, respectively. The packet arrival rates of packets with priority and packets without priority are λ_s and λ_r, respectively, both following Poisson processes.
On the other hand, the packet service time of packets with priority when other packets with priority already occupy the channel is given accordingly, where R_{s,busy} is the residual busy period of packets with priority, whose Laplace-Stieltjes transform (LST) follows. The LST of B_s based on S_{s,case1,1} is expressed such that B_s(s*) is a root of a quadratic equation. The mean packet service time of packets with priority in this case then follows.

With Packets in the Channel
In addition, during the average fraction of time that packets without priority occupy the channel, packets with priority wait until the channel is released and occupy it as soon as the channel is sensed to be idle. The packet service time in this case is given accordingly, where R_{r,o} is the residual service time of S_{r,o}. It is worth mentioning that this case relies on the fact that no packets with priority can be occupying the channel under the assumptions of this case.
Then, based on the law of total probability, the packet service time of packets with priority follows, where step (a) is obtained by utilizing the memoryless property of the exponential distribution.
The LST of S_s can be given accordingly. By following the analysis procedure shown in the previous section for packets with priority, the packet service time of packets without priority at the i-th tier of the multi-tier cellular-based IoT network can be obtained based on the status of packets with priority. In the case that packets with priority do not occupy the channel, the packet service time of packets without priority follows, where S_DIFS is the time for successful completion of the distributed coordination function (DCF) inter-frame space (DIFS) without any packets with priority arriving. We also assume that no packets without priority occupy the channel at that moment in this case. However, if the channel becomes occupied during the DIFS, a new DIFS restarts after the channel is idle again; this can be expressed as the sum of the stopped time T_{stop,DIFS} between the start of the first DIFS and the last DIFS duration and the busy period B_s of packets with priority. Then the S_DIFS analyzed above for packets without priority under the CSMA/CA mechanism is given in (19) [22].
In (19), Pr_{k,arr,DIFS} denotes the probability that there are arriving packets within the k-th DIFS attempt duration.
The LSTs of T_{stop,DIFS} and B_s follow, where B_s(s*) is a root of a cubic equation. Then, the LST of S_DIFS can be obtained. S_back is the time for the backoff procedure in the CSMA/CA mechanism. After detecting that the channel is not occupied for the DIFS duration, the station initiates transmission only if the channel remains idle for an additional random duration S_back, which is determined by the contention window size and is given accordingly, where T_back is the backoff interval, N_{back,s} is the number of packets with priority arriving during T_back, and N_{back,i+1} is the number of arriving packets without priority transmitted from devices served by APs at the (i + 1)-th tier. Then, the LST of S_back can be given. On the other hand, the packet service time of packets without priority when other packets without priority occupy the channel is given accordingly, where R_{r_o,busy} is the residual busy period of the packets without priority occupying the channel, whose LST follows. The LSTs of B_{r_o} are expressed such that r_o denotes the packets without priority occupying the channel and B_{r_o}(s*) is a root of a cubic equation.
Then, the mean packet service time of packets without priority in this case follows.

With Packets with Priority in the Channel
The packet service time of packets without priority in the case that packets with priority occupy the channel is presented accordingly. It is also worth mentioning that this case relies on the fact that no packets without priority can be occupying the channel under the assumptions of this case.
Then, based on the law of total probability, the packet service time of packets without priority follows, and its LST S_r(s) can be obtained. In this paper, the successful transmission probability between a device and its AP is β; the mean packet arrival rate λ_s of packets with priority from a device to its serving AP then follows, where C_{uv} is the transmission rate from a device to its serving AP. The mean packet arrival rate in any AP at the i-th tier of the multi-tier cellular-based IoT network is then given in terms of the mean service rate for packets in any AP at the (i + 1)-th tier, which is also the packet transmission rate C_{vv(i+1)} of packets without priority from any AP at the (i + 1)-th tier to APs at the i-th tier.

Mean Packet Transmission Delay
If we utilize the M/G/1 queue in each AP, the mean waiting time of packets in a random AP at the i-th tier, E[W_{v_i} | N_{v_i}, b, p], with given N_{v_i}, b, and p, is shown in the following [23], where E[L_{v_i}] is the number of packets in any AP at the i-th tier, and E[S_{v_i} | N_{v_i}, b, p] and E[R_{v_i} | N_{v_i}, b, p] are the mean service time and residual service time, respectively. Here, b and p are the spectrum and power allocation ratios for each tier in the multi-tier cellular-based IoT network. At each tier, spectrum and power are uniformly allocated to APs. The condition ρ_{v_i} ≤ 1 must be satisfied to ensure the stability of the queue system. For convenience, the superscript v is dropped from all parameters where there is no ambiguity.
With Little's law and the relationship between the service time and the residual service time, the mean waiting time in (35) can also be represented in an alternative form. Then, the mean packet transmission delay in a random AP at the i-th tier, E[D_i | N_i, b, p], follows, where E[S_i | N_i, b, p] and E[S_i^2 | N_i, b, p] are the first and second moments of the conditional packet service time and can be obtained by applying the properties of the LST.
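The M/G/1 mean waiting time used here is the standard Pollaczek-Khinchine result, which depends only on the arrival rate and the first two moments of the service time. A sketch (helper names are ours):

```python
def mg1_mean_wait(lam, es, es2):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue:
    E[W] = lam * E[S^2] / (2 * (1 - rho)), with rho = lam * E[S].
    Requires rho < 1, matching the paper's stability condition on rho."""
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("unstable queue: rho >= 1")
    return lam * es2 / (2.0 * (1.0 - rho))

def mg1_mean_delay(lam, es, es2):
    """Mean packet transmission delay = mean waiting time + mean service time."""
    return mg1_mean_wait(lam, es, es2) + es
```

As a check, with exponential service (E[S] = 1/2, E[S²] = 1/2) and λ = 1, the formula reduces to the M/M/1 result E[W] = 0.5 and mean delay 1.0.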

Channel Occupancy Time in Service
In this paper, the packet transmission time from the i-th tier to the (i − 1)-th tier is defined as the packet service time S_i, with E[S_i] = 1/C_i per unit packet length. The achievable packet transmission rate C_i in any AP at the i-th tier follows from the channel model, where b_i and p_i are the total spectrum and power resources allocated to APs at the i-th tier. Then the CDF of S_i can be represented accordingly. Using the fact that g ∼ exp(h) [24], we can obtain the closed-form expression, where the proof of step (b) is given in Appendix A.
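Under the exponential-gain assumption g ∼ exp(h), the CDF of the per-unit-length service time S_i = 1/C_i admits the closed form sketched below. The notation is ours, and the interference-plus-noise term is lumped into a single `noise` parameter for illustration:

```python
import math

def service_time_cdf(s, b_i, p_i, noise, h=1.0):
    """P(S_i <= s) = P(C_i >= 1/s)
                  = exp(-h * (2**(1/(s*b_i)) - 1) * noise / p_i),
    with C_i = b_i * log2(1 + p_i * g / noise) and P(g >= x) = exp(-h*x).
    Symbol names are assumptions, not necessarily the paper's notation."""
    if s <= 0:
        return 0.0
    g_thresh = (2.0 ** (1.0 / (s * b_i)) - 1.0) * noise / p_i
    return math.exp(-h * g_thresh)
```

With s·b_i = 1 and noise/p_i = 1 the fading-gain threshold is exactly 1, so the CDF equals e⁻¹, a convenient spot check; the function is increasing in s and tends to 1, as a CDF must.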
Then, the PDF of the service time S_i at the i-th tier can be obtained by differentiation.
With a conditional PDF of the service time, we can obtain the mean packet transmission delay in a random AP at the ith tier.

Total Packet Delay Minimization Problem
In multi-tier cellular-based IoT networks, it is essential to investigate the mean total packet delay due to the delay penalty imposed on packets by multi-hop transmission; this delay is the summation of the served delay and the relayed delay experienced at each tier. We present a cellular-based IoT network with APs in two tiers to demonstrate the effectiveness of the analysis results and formulate a mean total packet delay minimization problem for packets transmitted from devices served by APs at the second tier of the IoT network. The optimization problem with its objective function and corresponding constraints follows, where φ = b_1/B and ξ = p_1/P are the ratios of spectrum and power allocated to APs at the first tier, and B and P are the total spectrum and power for the two tiers of APs, respectively.
The proposed algorithm for total packet transmission delay minimization via resources optimal allocation in the cellular-based IoT networks with multi-tier APs is exhibited in Algorithm 1 [25].
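Algorithm 1 itself is not reproduced here; the following is a hedged sketch of how such a two-ratio delay minimization could be run with numerical gradients. `delay(phi, xi)` stands in for E[D_t], the clipping interval enforces 0 < φ, ξ < 1, and all names are illustrative rather than the paper's:

```python
def minimize_total_delay(delay, phi0=0.5, xi0=0.5, step=0.05, iters=200, eps=1e-4):
    """Projected numerical-gradient descent over the spectrum ratio
    phi = b1/B and power ratio xi = p1/P, each clipped to (eps, 1-eps).
    This sketch is NOT the paper's Algorithm 1 (which also uses bisection);
    it only illustrates the same alternating descent idea."""
    clip = lambda v: min(max(v, eps), 1.0 - eps)
    phi, xi = clip(phi0), clip(xi0)
    h = 1e-5  # finite-difference half-width
    for _ in range(iters):
        g_phi = (delay(clip(phi + h), xi) - delay(clip(phi - h), xi)) / (2 * h)
        g_xi = (delay(phi, clip(xi + h)) - delay(phi, clip(xi - h))) / (2 * h)
        phi, xi = clip(phi - step * g_phi), clip(xi - step * g_xi)
    return phi, xi
```

On a toy convex surrogate with a unique interior minimum, the sketch converges to that minimizer, which is the qualitative behavior expected of the real algorithm on the delay surface.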

Numerical Results
In this paper, we present a cellular-based IoT network with two tiers of APs in one MBS cell with coverage dimensions 1 km × 1 km × 1 km. The distance between the two tiers of APs is 50 m. In the simulation, δ_m, δ_a, and δ_d are 1, 1000, and 10,000 per km², respectively. The total spectrum and power for APs in the network are 1000 MHz and 100 W, respectively. The step size θ used to update φ and ξ is 0.05. The system parameter T_DIFS is 36 × 10⁻⁶ s, and the maximum T_back is 72 × 10⁻⁶ s, taking the slot time as 9 × 10⁻⁶ s with a contention window size of 8.
Figure 2 presents the variation of the total packet transmission delay E[D_t] with the spectrum allocation ratio φ for different fixed power allocation ratios ξ. The minimum E[D_t] for each ξ reveals the optimal value of φ, denoted φ*, as shown with the vertical dotted lines. We observe that φ* decreases as ξ increases, because more transmission power can compensate for the shortage of spectrum resources in achieving the same minimum total packet delay. Figure 3 shows the variation of E[D_t] with the power allocation ratio ξ for various fixed values of φ. Likewise, the minimum E[D_t] for each φ reveals the optimal value of ξ, denoted ξ*, shown with the vertical dotted lines. The value of ξ* decreases as φ increases, which reflects the same conclusion as Figure 2 and cross-validates the correctness of the proposed scheme and algorithm. Figure 3 also compares the proposed mechanism against spectrum resource allocation obtained via Monte Carlo simulation: the proposed mechanism achieves a smaller minimum mean total packet delay at φ* than at any other value of φ obtained by the Monte Carlo simulation. Meanwhile, Figure 4 compares the total packet transmission delay of the proposed scheme with spectrum allocation based on a round-robin scheduling mechanism with φ = φ_rr. In the round-robin mechanism, each tier in the multi-tier cellular-based IoT network alternately utilizes all the network resources, which results in a single-hop transmission network and accumulates all the network traffic. The proposed scheme also shows superior performance to round-robin scheduling in the figure. Based on the results in Figures 2 and 3, Figure 5 depicts the optimal ξ and optimal φ, which jointly reveal the minimum E[D_t] shown in the third dimension of the figure. In the figure, ξ* decreases as φ increases, and φ* decreases as ξ increases, consistent with Figures 2 and 3. Under the condition ρ_i ≤ 1 mentioned above, ξ cannot be less than 0.5 in Figures 3 and 5. With Figure 5 as a basis, Figure 6 depicts the mean delay E[D_{s,1}] at the first tier experienced by packets transmitted from devices served by APs at the first tier of the multi-tier cellular-based IoT network. The optimal values φ* and ξ* are constant at their maximum values to achieve the minimum E[D_{s,1}], as shown in Figure 6, because more spectrum and power resources lead to smaller delays for packets from devices attached to first-tier APs, given a constant packet arrival rate and higher priority than the other packets. On the other hand, Figure 7 exhibits the mean delay E[D_{s,2}] at the second tier experienced by packets transmitted from devices served by APs at the second tier. The optimal values φ* and ξ* are constant at their minimum values to obtain the minimum E[D_{s,2}], as shown in Figure 7, since this leaves more spectrum and power resources for the second tier. Figure 8 shows the mean delay E[D_{r,1}] at the first tier experienced by packets transmitted from devices served by second-tier APs and relayed through the first tier. The optimal value φ* is constant at the maximum value because the relaying delay is the dominant factor in the mean total packet delay and requires sufficient resources to guarantee QoS performance due to the large AoI. The optimal value ξ* decreases from the maximum to the minimum value as φ increases: the maximum power resource is needed to obtain the minimum mean relaying delay when the spectrum resource cannot be guaranteed, and once the spectrum increases to the point where the minimum mean total packet delay can be guaranteed with any ratio of power allocation, ξ* drops sharply to the minimum value and the remaining power is allocated to the second tier of the IoT network. This also shows that the mean total packet delay is more sensitive to ξ than to φ, because the mean relaying delay is more sensitive to ξ.

Conclusions
In this paper, we proposed a multi-tier cellular-based IoT network with a traffic-prioritization scheduling scheme based on the AoI metric to satisfy strict data packet latency requirements in 5G IoT networks. Our mathematical framework, based on queueing theory, models the time-domain behavior of data packets under different channel-access protocols to overcome delay penalties and ensure fairness for packets with large AoI. We then minimized the mean total packet delay by optimally distributing spectrum and power resources among the multiple tiers of the IoT network. Numerical results demonstrated the effectiveness of the proposed mechanism and algorithm in a two-tier cellular-based IoT network, which achieved the minimum total packet transmission delay to meet strict latency requirements. Moreover, the optimal allocation of spectrum and power resources minimized the delays experienced by data packets at each tier of the multi-tier IoT network, enabling the packet latency to be monitored and guaranteed as a requirement in real time. Future studies will consider a more general and practical case in which forwarded packets have different priorities based on the location of the transmitting device. Our work contributes to the development of IoT networks by addressing packet delay penalties during multi-hop transmission and improving fairness among different kinds of data packets.

The total packet delay, E[D_t], includes the delay experienced by a served packet at the second tier, E[D_{s,2}], and by a relayed packet at the first tier, E[D_{r,1}]. The ratios of spectrum and power allocated to APs at the first tier are φ = b_1/B and ξ = p_1/P.

Figure 4. Performance comparison for the proposed scheme.

Figure 5. The optimal φ and ξ for the minimum E[D_t].

Figure 6. The optimal φ and ξ for the minimum E[D_{s,1}].

Figure 7. The optimal φ and ξ for the minimum E[D_{s,2}].

Figure 8. The optimal φ and ξ for the minimum E[D_{r,1}].