
SCRAS Server-Based Crosslayer Rate-Adaptive Video Streaming over 4G-LTE for UAV-Based Surveillance Applications

1 College of Computing & Information Sciences, PAF Karachi Institute of Economics & Technology, Karachi 75190, Pakistan
2 College of Engineering, PAF Karachi Institute of Economics & Technology, Karachi 75190, Pakistan
3 Department of Electronic and Power Engineering, PN-Engineering College (PNEC), National University of Sciences and Technology (NUST), Karachi 74500, Pakistan
4 Department of Electrical Engineering, Faculty of Engineering, Islamic University of Madinah, Madinah 41411, Saudi Arabia
* Author to whom correspondence should be addressed.
Electronics 2019, 8(8), 910; https://doi.org/10.3390/electronics8080910
Submission received: 18 July 2019 / Revised: 7 August 2019 / Accepted: 13 August 2019 / Published: 18 August 2019
(This article belongs to the Section Networks)

Abstract
This research focuses on intelligent unmanned aerial vehicle (UAV)-based, real-time video surveillance to ensure better monitoring and security of remote locations over 4G-LTE cellular networks by maximizing end-user quality of experience (QoE). We propose a novel server-based crosslayer rate-adaptive scheme (SCRAS) for real-time video surveillance over 4G-LTE networks using UAVs. Our key contributions are: (1) In SCRAS, mobile UAVs having preprogrammed flight coordinates act as servers, streaming real-time video towards a remote client; (2) server-side video rate adaptation occurs in 4G-LTE based on the physical characteristics of the received signal conditions due to variations in the wireless channel and handovers; (3) SCRAS is fully automated and independent of client assistance for rate adaptation, as it is intended for real-time, mission-critical surveillance applications; (4) SCRAS ensures that the current video frame is not damaged during rate adaptation, by completing the current group of pictures (GoP) before adapting. Our simulations in NS-3 provide credible evidence that SCRAS outperforms recently proposed schemes in providing better QoE for real-time, rate-adaptive video surveillance over 4G-LTE under the varying channel quality and frequent handovers that occur during UAV flight.

1. Introduction

Today, we are living in a hi-tech era where technology affects every aspect of life. Likewise, it plays a significant role in guaranteeing high standards of surveillance. Recently, unmanned aerial vehicles (UAVs) have earned ample interest for this purpose. Their applications can be found in highway traffic and flood monitoring, in the oil and gas industries for safety and security measures, and in preventing natural disasters and terrorist attacks [1,2,3]. UAVs used for monitoring purposes are equipped with cameras; hence, they provide essential knowledge through video streaming at a low operational cost and with less hazard to life. In our research, 4G-LTE is preferred over other wireless technologies because UAVs with preprogrammed flight coordinates cover larger areas and ranges compared to remote-controlled aeroplanes. The 4G-LTE network is an optimized solution for remote-location surveillance, as it offers a cellular infrastructure that is vast and scalable. Moreover, 4G-LTE offers higher data rates and lower network latency than previous cellular network generations [4,5,6,7].
As far as UAV deployment strategy is concerned, there is no inter-UAV communication in our architecture, unlike in flying ad hoc networks (FANETs) [8,9,10,11]. In this work, all UAVs fly using realistic flight patterns and wireless channel models. All UAVs transmit real-time captured video to the nearest base stations, and from there, the video streams towards a command and control center at a remote site where all activities are monitored. We thoroughly examined the effect of channel conditions and handovers on the video adaptation rate. On the basis of the obtained results, a novel rate-adaptive scheme for surveillance is proposed, referred to as the server-based crosslayer rate-adaptive scheme (SCRAS). The newly proposed scheme ensures better quality of experience (QoE) for video monitoring in an environment suffering from sudden channel quality variations and frequent handovers. As mentioned in [7], upper layer protocols cannot extract the full benefits of 4G-LTE on their own, so it is good practice to expose those benefits through lower layer information and thereby indirectly improve the performance of upper layer protocols. Our proposed video adaptation scheme is intelligent enough to wait for the completion of the current frame before adaptation to avoid unnecessary performance degradation. We compared our work with the non-rate-adaptive scheme (NRAS), the Fair Efficient Stable adaptIVE algorithm (FESTIVE) [12], and Physical layer Informed adaptive video Streaming (piStream) [13], which is designed for 4G-LTE networks. Our controlled experimental results show that SCRAS not only outperforms the other schemes under frequent channel variations and handovers, but also performs video adaptation without damaging frames.
To the best of our knowledge, the following combination of contributions has not yet been explored in the literature.
  • We considered a scenario where the servers are mobile, i.e., UAVs. They support a variety of codec standards, such as H.264 and H.265, for video streaming to a remote client. With preprogrammed flight coordinates, they perform video adaptation among the available video resolutions to best suit the underlying surveillance application in the current 4G-LTE conditions. Hence, video viewing QoE is ensured.
  • Mobile servers perform server-side push-based rate adaptation after evaluating the received signal strengths and rate of handovers.
  • Our proposed rate-adaptive scheme uses a server-based proactive approach entirely independent of client assistance. This ensures fast streaming; hence, the client will never experience degraded video because of poor signal and handovers.
  • Our proposed scheme ensures that the video does not flicker due to frames damaged during the adaptation process. For this reason, we took groups of pictures (GoPs) into account and deferred our adaptation process to the end of the current GoP.
Our paper is organized as follows:
Section 2 presents a thorough review of research work in the field of video surveillance and adaptive streaming. Section 3 discusses the architecture required for simulation. The details regarding the QoE metrics used in our paper for performance comparison and evaluation are discussed in Section 4. Section 5 discusses the proposed rate-adaptive scheme, SCRAS. Section 6 describes the experimental environment and simulation settings; in this section, we also demonstrate the relationship between varying channel conditions and data rate, which provides the foundation for our scheme. Section 7 provides a comparative analysis of SCRAS with other schemes. Section 8 shows the analysis of captured frame-grabs. Section 9 highlights the outstanding characteristics of our proposed scheme, SCRAS, and Section 10 concludes this paper.

2. Related Work

In the modern era, the importance of surveillance cannot be denied due to high security and privacy concerns. In this context, UAVs play a vital role in remote monitoring because of their video streaming capabilities [14]. These UAVs monitor and stream any important activity in real-time at remote sites with low operational cost and minimal setup time [15]. It is a useful strategy to replace fixed cameras with UAVs for surveillance purposes. In the recent past, several surveillance frameworks were proposed, e.g., [16,17,18,19]. They offer basic tuning for variation in the target region during the flying mode; hence, the region under surveillance can be adjusted by relocating UAVs. Interested readers may refer to [20,21] for a basic understanding of FANETs and their operational behavior in a specific environment. Challenges faced by FANETs during communication were also discussed in these papers using a solution-oriented approach. Mustaqim et al. [22] examined UAV-to-UAV and UAV-to-ground communication in a FANET environment using wideband and high-gain antenna arrays. Qazi et al. [23] proposed an architecture which is well-suited for UAV-based surveillance. Further, they examined line of sight (LoS) and nonline of sight (NLoS) propagation in aerial ad hoc networks, especially when UAVs are flying at very low speeds and altitudes. In [24], Qazi et al. proposed a UAV-based framework for surveillance applications over 4G-LTE networks, especially in remote urban areas. Different attributes, such as throughput and delay, were examined under models of multipath propagation loss, wireless propagation, fading, shadowing, etc.
Transmitting good quality video is a vital challenge in the dynamic wireless medium. UAVs stream video at a fixed rate, whereas users may require up to 4K resolution. To maintain video viewing QoE, UAVs must therefore dynamically adapt their video streaming rates. Wireless link uncertainties are a big hurdle in this context, and optimization between UAVs and the streamed video is required to achieve this objective [25]. At present, cellular network technologies are a vital means of communication, and among them, 4G-LTE is the most advanced: compared to others, it offers high data rates and low network latency. However, existing upper layer protocols are not capable of taking full advantage of the features offered by 4G-LTE. Hence, it is a good idea to leverage information provided by 4G-LTE at the lower layers to indirectly enhance the performance of upper layer protocols [7]. There are several wireless channel measurements used for this purpose [26]. Afroz et al. [27] highlighted the physical layer parameters used in 4G-LTE, like signal-to-interference-plus-noise ratio (SINR), reference signal received power (RSRP), received signal strength indicator (RSSI), and reference signal received quality (RSRQ) measurements. Moreover, they also discussed several handover scenarios. Such physical layer parameters are highly beneficial for improving video viewing QoE via rate adaptation [28]. Ramamurthi et al. [29] considered the link conditions for rate adaptation of video streaming; they categorized a link (channel) as good or bad on the basis of 4G-LTE physical layer parameters. Unlike the universal mobile telecommunications system (UMTS), LTE supports only hard handover. This means that the user breaks its connection to the current serving node before setting up a new connection to another serving node; hence, the user equipment (UE) is disconnected for an instant from all nearby cells [30].
Dynamic adaptive streaming over HTTP (DASH) is the standard protocol for streaming over HTTP in a wireless environment. It chops video into small chunks for transmission, and the quality of each chunk is chosen during rate adaptation according to the end-user's channel conditions. Kua et al. [31] provided a good survey of recent developments in different video rate-adaptive mechanisms, such as client-based, server-based, and network-based mechanisms, to assist DASH-based applications. Atawia et al. [32] presented a DASH scheme that takes faulty rate predictions over upcoming wireless networks into consideration. By considering the association between client and server, Marai et al. [33] presented a DASH-based approach which guarantees fairness, stability, optimal consumption of network resources, and fast convergence, while avoiding the buffer-based approach. Poojary et al. [34] studied QoE parameters in adaptive video streaming over a wireless network and demonstrated that, in a buffer-based approach, rate adaptation works best when the average video bit-rate equals the average channel rate. Kumar et al. [35] proposed a framework that ensures proper allocation of resources over 4G-LTE for adaptive video streaming. Su et al. [36] and Fan et al. [37], in their respective research, not only identified the challenges in video transmission over a wireless network but also proposed solutions. Ong et al. [38] proposed a queue management technique for enhancing video viewing QoE under network congestion scenarios; for this purpose, their technique exploits different GoP characteristics. Readers may compare our proposed solution, SCRAS, with existing techniques at a glance by referring to Table 1.

3. Proposed Architecture

In this paper, we propose a UAV-based architecture for monitoring and surveillance. Strategically important sites have target areas that need surveillance; hence, our proposed architecture is beneficial for preventing crimes and disasters by taking appropriate proactive measures on the basis of the obtained surveillance results. Our solution is specially designed for 4G-LTE cellular networks. Figure 1 illustrates a typical communication scenario for our proposed architecture. Terminologies used here are borrowed from the 3GPP R4-092042 standard. In our scenario, the terms ENB and UE denote the evolved Node B and user equipment, respectively. Cameras are attached to UAVs, called macroUEs, whereas our target area has several outdoor cells, termed macrocells. UAVs are continuously flying and may move from one macrocell to another (handover). MacroUEs, i.e., UAVs, transmit real-time video to their respective base stations, or macroENBs. Using an internet connection, these macroENBs forward the streams to a particular command and control center. From here onward, we use the term UAVs for macroUEs for ease of understanding.
To mimic the flight patterns of UAVs over 4G-LTE standards, certain models were introduced in the architecture for assistance. As we are interested in the video surveillance of remote locations, progressive downloading is out of the question: in progressive download, a client downloads the entire video before playing it, which is a useless strategy for true live streaming. For real-time video streaming, we used the User Datagram Protocol (UDP) to stream video from the server (UAVs) to the remote client. The motive is that the remote static client should always experience better video viewing QoE for better surveillance. To protect the client from degraded QoE, the server (UAV) makes its best effort and takes an instant decision for rate-adaptive video streaming whenever it detects lower signal strength. This facilitates better video viewing QoE for the client, who will never experience degraded video because of poor signal strength. Therefore, in this case, server-side, dynamic, link-aware adaptation occurs. We also took handover into consideration: there is no soft handover in LTE, which only supports hard handover, and this drastically decreases the bandwidth [39,40,41]. At the time of handover, server-side adaptation switches from high-resolution to low-resolution video streaming to achieve better QoE of video monitoring. Thus, we planned for rate-adaptive video streaming that maintains a better QoE of video monitoring while automated UAVs capture video in flight.
At the start, low-rate video is transmitted from the server to the client to avoid startup latency, so that the client can promptly watch the requested video without any delay. To keep the video quality smooth, the algorithm gradually switches the video quality from a lower to a higher bit-rate, and vice versa, based on the current channel strength.
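The startup strategy above can be sketched as follows; `LEVELS` and `next_level` are hypothetical names, and the one-step policy is a deliberate simplification of the full SCRAS logic described later:

```python
# Hypothetical sketch of the startup strategy: streaming begins at the
# lowest bitrate to minimize startup latency, then moves one quality
# level at a time depending on channel strength. Names and the one-step
# policy are illustrative, not the full SCRAS algorithm.
LEVELS = ["low", "medium", "high"]

def next_level(current, channel_good):
    """Step one level up when the channel is good, one level down otherwise."""
    i = LEVELS.index(current)
    if channel_good:
        return LEVELS[min(i + 1, len(LEVELS) - 1)]
    return LEVELS[max(i - 1, 0)]

level = LEVELS[0]  # always start low to avoid startup delay
for good in [True, True, False, True]:
    level = next_level(level, good)  # low -> medium -> high -> medium -> high
```

The clamping at both ends of `LEVELS` mirrors the fact that streaming can never drop below the lowest or rise above the highest available resolution.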

3.1. Wireless Signal Propagation Model Used for UAVs

For LoS propagation, we used the well-known Friis propagation model, given in Equation (1):

$P_r(d) = \frac{P_t G_t G_r \lambda^2}{(4 \pi d)^2}$,    (1)

where $P_t$ is the transmitted power, $G_t$ and $G_r$ are the transmitter and receiver antenna gains, $\lambda$ is the carrier wavelength, and $d$ is the distance between transmitter and receiver. The LoS-to-NLoS transition distance was set to 50 m.

3.2. Mobility Models Used for UAVs

(i)
Random Flight Pattern
We used the Gauss–Markov mobility model for outdoor UAVs to mimic real-world flight patterns [42]. Three variables are used in this model: speed, pitch, and direction. Movement along the z-axis is governed by the pitch variable, while the speed and direction variables determine the new speed and direction in the x–y plane, as shown in Equations (2)–(4):

$s_n = \alpha s_{n-1} + (1-\alpha)\bar{s} + \sqrt{1-\alpha^2}\, s_{x_{n-1}}$,    (2)

$\theta_n = \alpha \theta_{n-1} + (1-\alpha)\bar{\theta} + \sqrt{1-\alpha^2}\, \theta_{x_{n-1}}$,    (3)

$p_n = \alpha p_{n-1} + (1-\alpha)\bar{p} + \sqrt{1-\alpha^2}\, p_{x_{n-1}}$,    (4)

where $s_n$, $\theta_n$, and $p_n$ are the new speed, direction, and pitch, respectively; $\bar{s}$, $\bar{\theta}$, and $\bar{p}$ are their mean values; $\alpha$ is the tuning parameter; and $s_{x_{n-1}}$, $\theta_{x_{n-1}}$, and $p_{x_{n-1}}$ are Gaussian random variables. The random model mimics the natural flight pattern of UAVs, providing a realistic scenario.
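A minimal Python sketch of the Gauss–Markov update in Equations (2)–(4); the function name, mean values, and noise variance are illustrative assumptions, not the parameters used in our NS-3 simulation:

```python
import math
import random

# Illustrative Gauss-Markov update step. alpha tunes the degree of
# randomness: alpha = 1 gives linear (memory-only) motion, alpha = 0
# gives memoryless Brownian-like motion. sigma is an assumed noise scale.
def gauss_markov_step(s, theta, p, alpha, s_mean, theta_mean, p_mean, sigma=1.0):
    g = lambda: random.gauss(0.0, sigma)  # Gaussian random term
    k = math.sqrt(1.0 - alpha ** 2)
    s_new = alpha * s + (1 - alpha) * s_mean + k * g()
    theta_new = alpha * theta + (1 - alpha) * theta_mean + k * g()
    p_new = alpha * p + (1 - alpha) * p_mean + k * g()
    return s_new, theta_new, p_new
```

With `alpha = 1.0` the noise and mean terms vanish and the UAV keeps its previous speed, direction, and pitch, which matches the equations' limiting behavior.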
(ii)
Fixed Flight Pattern
The fixed model is required to restrict the UAVs to follow a certain path which supports the simulation to understand channel variation and handovers. With the help of the fixed flight pattern, we try to present our idea with proof of its functionality.

4. Measuring QoE over Proposed Architecture

We used two important metrics for measuring QoE: the well-known peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise that alters it. PSNR is expressed in decibels (dB), as its range fluctuates widely. PSNR can be used as a rough approximation of comparative quality if the video content and the kinds of distortion remain the same and only the level of distortion varies [43]. However, the correlation between subjective quality and PSNR can become weak, depending on the content and type of impairment [44]. For this reason, PSNR is not considered a reliable approach for comparing video quality across dissimilar video contents. Despite these issues, PSNR is still widely used as a quality metric. Another reason for its popularity is its very low computational complexity. PSNR is taken as a reference standard when developing perceptual video quality metrics, and it is accepted as a fidelity metric [45].
PSNR is formulated by setting the mean square error (MSE) in relation to the maximum possible value of the luminance ($2^8 - 1 = 255$ for a typical 8-bit representation):

$\mathrm{MSE} = \frac{1}{M \cdot N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ f(i,j) - F(i,j) \right]^2$,    (5)

where:
$f(i,j)$ is the original signal at pixel $(i,j)$,
$F(i,j)$ is the reconstructed signal, and
$M \cdot N$ is the picture size.
MSE is the cumulative squared error between the original and the distorted videos.

$\mathrm{PSNR} = 20 \log_{10} \frac{255}{\sqrt{\mathrm{MSE}}}\ \mathrm{dB}$    (6)
The outcome is a single number in dB, typically ranging from 30 dB to 40 dB for medium- to high-quality video [46]. PSNR and MSE are inversely related, as Equation (6) shows: a good-quality video has a high PSNR and a low MSE, while a bad-quality video has a low PSNR and a high MSE. To measure the similarity between two pictures, the well-known objective metric SSIM is used [46,47]. SSIM measures picture quality by treating one picture as being of faultless quality while the other may contain errors, which gives a clearer indicator of video viewing QoE than PSNR. The conventional PSNR metric cannot account for the irregularities of human eye perception; the SSIM technique has been suggested for this reason. SSIM is computed between two windows of equal size and ranges from −1, indicating 0% similarity, to +1, indicating 100% similarity between frames. To calculate the quality of a distorted image, correlations in luminance, contrast, and structure are computed locally between the reference and distorted images and averaged over the entire image. The design of SSIM is inspired by the functionality of the human visual system (HVS) [48]. To gauge the structural similarity between two signals, consider Equation (7):
$\mathrm{SSIM}(x, y) = \left( \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1} \right)^{\alpha} \left( \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2} \right)^{\beta} \left( \frac{2\sigma_{xy} + C_3}{\sigma_x^2 + \sigma_y^2 + C_2} \right)^{\gamma}$,    (7)

where:
$x = (x_i)$, $i = 1, 2, 3, \ldots, N$,
$y = (y_i)$, $i = 1, 2, 3, \ldots, N$,
the first factor compares the luminance of the signals,
the second factor compares the contrast of the signals, and
the third factor gauges the structural correlation of the signals.
If the video is of bad quality, its SSIM value approaches −1. This value shows a strong negative correlation and, thus, a strong deviation between the frame(s) of interest and the original one(s). Though the above formula for SSIM is more complex than that of MSE, it remains computationally tractable. Such features make the objective SSIM metric appealing to work with.
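For concreteness, the two metrics can be sketched as below. This is an illustration, not the EvalVid implementation: it assumes 8-bit grayscale frames as NumPy arrays, and the SSIM shown is the simplified single-window (global) form with $\alpha = \beta = \gamma = 1$ and the structure term folded into the contrast term, rather than the windowed, locally averaged version described above.

```python
import numpy as np

# Rough sketches of the two QoE metrics from Equations (5)-(7) for 8-bit
# grayscale frames. C1 and C2 are the conventional stabilizing constants.
def psnr(f, F):
    mse = np.mean((f.astype(np.float64) - F.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 20 * np.log10(255.0 / np.sqrt(mse))

def ssim(x, y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    # Global (single-window) SSIM: luminance and contrast/structure terms.
    return ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```

For identical frames PSNR is unbounded (infinite) and SSIM equals 1, matching the metric definitions above.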

5. SCRAS-Server-Based Crosslayer Rate-Adaptive Scheme

We analyzed different rate adaptation techniques. To optimize upper layer performance, we took the physical layer characteristic RSRP into consideration, as our experiments show that RSRP and throughput are directly proportional to each other. We performed rate adaptation on the basis of RSRP. The UAVs convert captured video into MP4 video chunks in three resolutions: high, medium, and low. These video chunks are then sent towards the remote host via the LTE cellular infrastructure. Streaming starts with low resolution video and gradually moves towards high resolution video. High resolution video continues until a decrease in RSRP is detected, at which point the video switches towards lower resolution video. When good RSRP is observed again, the video shifts back to high resolution video. Thus, the overall mechanism is based on RSRP: the video switches from high resolution to low resolution on a bad channel, and when the RSRP increases (indicating a good channel), rate adaptation switches the low resolution video back to high resolution video to ensure better QoE of the streamed video.
Channel reciprocity is the concept that the receiver observes the same channel values as the transmitter. The concept is used in channel-aware rate adaptation (CHARM) [49], which leverages channel reciprocity to obtain channel information at the transmitter without incurring request-to-send/clear-to-send (RTS/CTS) overhead. This combination of techniques allows CHARM to respond quickly to dynamic channel changes. We adopted this phenomenon for the UAVs in our architecture. Through channel reciprocity, the UAVs know the channel information, and by examining the signal strength, they make decisions by themselves without remote assistance from the client. Hence, the overall rate adaptation decision is made by the UAVs instead of the remote client, which makes our scheme more robust, lower in delay, and suitable for a surveillance environment.
As the UAVs are mobile in our case, signal strength, as well as channel quality, varies [50,51,52,53,54,55,56]. Sometimes the signal is the best, sometimes average, and sometimes the worst. We categorized the channel on the basis of signal strength as the best channel (BC), good channel (GC), and poor channel (PC). To measure the signal strength on the mobile UAVs with preprogrammed flight coordinates, we considered RSRP, and we allocated three ranges of RSRP values to the three channel categories, as shown in Table 2. MP4 video is streamed by the mobile UAV servers towards the remote client. Our suggested algorithm, SCRAS, ensures a better quality of experience of video viewing. Considering RSRP as the indicator of channel quality, SCRAS switches the video: for the best channel, it streams high resolution video (HRV); for the good channel, medium resolution video (MRV); and for the poor channel, low resolution video (LRV). The results are satisfactory and verify that SCRAS outperforms other schemes under varying channel quality. SCRAS is designed for servers, meaning the mobile UAVs make the rate adaptation decision by themselves, so there is no need for client assistance. The real benefit of this scheme is that the remote client never experiences degradation in video quality because of varying channel quality or handovers. SCRAS takes care of all these issues at the server end; hence, the scheme is not only delay-tolerant but also ensures video that is viewable at all times at the client end. The motive is to provide good monitoring in such real-time surveillance systems.
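The channel classification can be sketched as below; the two thresholds are illustrative placeholders, not the actual values from Table 2, and the averaging weight is an assumption:

```python
# Channel classification from a weighted running average of RSRP.
# Thresholds are hypothetical placeholders, NOT the Table 2 values.
BC_THRESHOLD = -80.0   # dBm: at or above this, "best channel" (assumed)
GC_THRESHOLD = -100.0  # dBm: at or above this, "good channel" (assumed)

def update_ravg(ravg, sample, weight=0.8):
    """Exponentially weighted running average of RSRP samples (RSRP_ravg)."""
    return weight * ravg + (1 - weight) * sample

def classify(ravg):
    if ravg >= BC_THRESHOLD:
        return "BC"  # best channel -> high resolution video
    if ravg >= GC_THRESHOLD:
        return "GC"  # good channel -> medium resolution video
    return "PC"      # poor channel -> low resolution video
```

The weighted average smooths out momentary RSRP dips so that a single noisy sample does not trigger an unnecessary rate switch.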
In Figure 2, SCRAS operations are represented as a state transition diagram, with the corresponding legend given in Table 3. SCRAS constantly measures the channel strength; for this purpose, it continuously calculates the weighted running average of RSRP, denoted $RSRP_{ravg}$. Whenever it detects that the channel has moved between the three categories, it makes an adaptation decision. Before each adaptation, it ensures that the currently ongoing frame is completed: it waits for the current GoP to finish so that important frames are not damaged during the adaptation process. Soon after, it adapts to the video resolution suitable for the current channel strength.
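The GoP-boundary rule can be sketched as follows; the frame types and the `stream` helper are simplified illustrations (the real scheme operates on encoded MP4 chunks, not a Python list of frame labels):

```python
# Sketch of the GoP-boundary rule: a pending rate switch is applied only
# when the next I-frame begins, so no frames of the current GoP are
# damaged mid-adaptation. 'I'/'P'/'B' labels stand in for real frames.
def stream(frames, switch_at):
    """frames: list of 'I'/'P'/'B' labels; switch_at: index where a
    rate switch is requested. Returns the ordered list of events."""
    pending = False
    events = []
    for i, ftype in enumerate(frames):
        if i == switch_at:
            pending = True  # adaptation requested, but deferred
        if pending and ftype == "I":
            events.append(("switch", i))  # GoP complete: safe to adapt
            pending = False
        events.append((ftype, i))
    return events
```

In the example below, a switch requested during frame 1 is held back until the I-frame at index 3, i.e., until the current GoP completes.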
Low resolution video is streamed first, irrespective of whether the channel is poor, good, or the best, because SCRAS aims to reduce startup latency so that the client does not wait for the video to start; we compromise on initial resolution but resolve the start-up delay issue. If a handover occurs at this stage, or the channel is found to be poor, low resolution video continues streaming; otherwise, SCRAS moves towards medium resolution video. During the streaming of medium resolution video, if a handover happens or the channel signal becomes weaker, adaptation occurs and the video is switched back to low resolution video after the current GoP completes. If $RSRP_{ravg}$ remains in the range of the good channel, the video continues at this stage. On the other hand, if the current channel signal becomes stronger than the good channel range, rate adaptation to high resolution video occurs. Again, it is ensured that the current video chunks are not damaged: SCRAS waits for the next I-frame, i.e., for the current GoP to be completed, since otherwise useful frames could be lost in the adaptation from medium to high resolution video, corrupting important video clips. High resolution video continues as long as $RSRP_{ravg}$ remains in the best channel range. If a handover happens at this stage, the bandwidth suddenly collapses, since LTE supports hard handover only, and it is not suitable to continue streaming high resolution video, as this would degrade the video quality; thus, SCRAS waits for the current GoP to complete and then adapts from high resolution video directly to low resolution video.
The interesting point to note here is that whenever a handover occurs, at any stage, rate adaptation always moves to low resolution video, because low resolution video is the only match for the very low bandwidth that results from a hard handover; it provides at least a viewable video stream in this problematic condition. On the other hand, if, instead of a handover, the channel signal weakens enough to fall into the good channel range, rate adaptation occurs from high resolution video to medium resolution video, again after the current GoP completes. Moreover, when the channel is the best and high resolution video is streaming, if the channel suddenly deteriorates and $RSRP_{ravg}$ falls into the poor channel range, adaptation does not jump directly from high resolution video to low resolution video, because such a large shift in resolution, from the heaviest to the lightest video, would introduce fluctuation in the video stream. To avoid this, we shift gradually from high resolution video to medium resolution video, and then from medium resolution video to low resolution video, ensuring smooth shifting with the minimum available difference in video resolution. At any stage, whether low, medium, or high resolution video is being streamed, SCRAS operations end once the streamed video chunks are completed. It should also be noted that this gradual, one-step shifting of video in the forward direction (low, medium, high resolution) or the backward direction (high, medium, low resolution) is proposed only for transitions between the best, good, and poor channels based on the $RSRP_{ravg}$ values.
For handover, we propose direct shifting from high resolution video to low resolution video; the reason is the sudden collapse of bandwidth, since LTE supports only hard handover, after which the available bandwidth suits only low resolution video. Otherwise, video frames may be lost if high resolution video runs over a very low bandwidth.
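Under these rules, the transition logic reads roughly as below; the names `ORDER`, `TARGET`, and `next_resolution` are ours, and the channel labels follow the BC/GC/PC categories defined earlier:

```python
# Compact sketch of the SCRAS transitions: gradual one-step shifts on
# channel-quality changes, and a direct drop to LRV on any hard handover.
# Resolution names follow the paper's LRV/MRV/HRV abbreviations.
ORDER = ["LRV", "MRV", "HRV"]
TARGET = {"PC": "LRV", "GC": "MRV", "BC": "HRV"}

def next_resolution(current, channel, handover=False):
    if handover:
        return "LRV"  # hard handover: bandwidth collapses, drop directly
    cur = ORDER.index(current)
    tgt = ORDER.index(TARGET[channel])
    if tgt > cur:
        return ORDER[cur + 1]  # step up one level (LRV -> MRV -> HRV)
    if tgt < cur:
        return ORDER[cur - 1]  # step down one level (HRV -> MRV -> LRV)
    return current
```

Note that a BC-to-PC deterioration while streaming HRV yields MRV first and LRV on the next decision, reproducing the gradual two-step descent; only a handover bypasses the intermediate level.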
The corresponding algorithm for SCRAS scheme is represented in Algorithm 1, which describes the overall functionality of SCRAS.
Algorithm 1: Algorithm SCRAS
(The SCRAS pseudocode is presented as an image in the original article.)

6. Experimental Environment and Simulation Settings

For all the experiments mentioned in this paper, we used an HP EliteBook 8440p with 2 GB RAM and a 1 TB hard drive. We installed NS-3 version 3.25 on a 64-bit Ubuntu 14.04 LTS operating system and utilized the NS-3 package lena-dual-stripe to simulate 4G-LTE. As an IDE for NS-3, we used Eclipse Neon-3 on the Ubuntu-based system [57]. For converting video between encodings, we used FFmpeg [58], which provides strong tools for video conversion, even for real-time video streaming.
To simulate the client/server architecture in NS-3, we used the client–server application by the GERCOM group [59] known as EvalVid. EvalVid is a toolset for analyzing the QoE of video streamed over a simulated communication network. It provides a suitable platform for researchers interested in analyzing their network architectures or frameworks in terms of user-perceived video quality. This toolset transmits real MP4-encoded video between communicating nodes. We modified the original EvalVid code to meet the requirements of our surveillance architecture. To mimic the flight pattern of UAVs, we used the Gauss–Markov mobility model, whereas the original EvalVid code uses the random-waypoint mobility model. Another modification is that in the original EvalVid code, the video streams from client to server; we changed this setting so that video streams from the server to a remote static client through UDP. To calculate the PSNR and SSIM of the streamed video, we followed the step-by-step procedure available online at [60]. The required EvalVid binaries are also available online [61].
We used a real video, highway-cif.264, which is available online in multiple resolutions [62,63]. The duration of the video is about 1 min and 6 s. Three video resolutions were used in our experiment, and these multiple-resolution videos were converted into MP4 for streaming in the simulation using FFmpeg. For the rest of this paper, we refer to high resolution video as HRV, medium resolution video as MRV, and low resolution video as LRV for ease of understanding in tables and graphs. The parameter settings used to convert these videos to MP4 are given in Table 4.
To measure the signal received at the UAVs, we used the physical layer parameter reference signal received power (RSRP). RSRP is the average power received over the resource elements that carry the reference signal within any symbol. Typical RSRP values range from −44 dBm to −140 dBm [64]. We adjusted the LoS-to-NLoS ratio up to 50 m for the experiments and for the analysis of throughput with respect to RSRP.
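As a concrete illustration of this definition, RSRP can be computed as the linear average of the per-resource-element received powers, converted to dBm. This is a minimal sketch of the standard definition; the function name and input representation are our own assumptions, not part of the simulation code.

```python
import math

def rsrp_dbm(re_powers_mw):
    """RSRP: linear average of the received powers (in mW) of the
    resource elements carrying the reference signal, expressed in dBm."""
    avg_mw = sum(re_powers_mw) / len(re_powers_mw)
    return 10 * math.log10(avg_mw)
```

For instance, four reference-signal resource elements each received at 1e-9 mW average to an RSRP of −90 dBm, a value in the middle of the typical reporting range.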
Our experiments show that, under varying channel conditions, the average throughput decreases as RSRP decreases and vice versa, as shown in Figure 3. Based on these experimental results, we devised an optimized rate-adaptive algorithm that automatically switches from high to low resolution when RSRP decreases and from low to high resolution when RSRP increases, hence enhancing end-user QoE.
For experimental purposes, we first applied a constant-position mobility model in our simulation (Table 5), with the video streamed from a single UAV, used only for experimental purposes, towards the macroENBs. A good channel is one where good RSRP values are received and, vice versa, a bad channel is one where bad RSRP values are received. The UAV was placed first on the good channel and then on the bad channel for the same experiments. High and low resolution videos were streamed on both channels. We analyzed the quality of the received video and found results that support our approach.
We took the physical layer performance characteristic RSRP as the key indicator of channel quality. Figure 3 shows that the average throughput increases with RSRP, i.e., the two are directly proportional. Here, RSRP is given in dBm, while the average throughput is in kbps. The RSRP values range from −75 dBm to −120 dBm, from best to worst, respectively.
Figure 3 supports our idea of rate adaptation on the basis of RSRP values: the video switches from high to low resolution on poor RSRP values (bad channel), and when the RSRP values increase (indicating a good channel), adaptation occurs and the video switches from low to high resolution for a better QoE of video viewing.
To understand the complete simulation process, consider Figure 4, where all steps are shown in a sequence labeled from 1 to 15 over the directed arrows. Arrows 1 to 5 represent the preprocessing tasks required before the simulation starts. In preprocessing, the UAV captures video of the targeted surveillance area (we used highway-cif.264 as the captured video). The captured video is then converted to a YUV sequence, to MP4, to M4V, and again to MP4 containing hint-tracks in the video samples; MP4Box [65] is the tool used to introduce the hint-tracks. To create a trace file from the hinted MP4 file, the mp4trace tool from EvalVid is used; mp4trace can transmit a hinted MP4 file per UDP to a particular destination host. This trace file is streamed over the network via UDP. Arrows 6 to 12 depict the simulation process, in which UAVs fly and stream the captured video in real time towards a remote client. For realistic UAV flight, a Gauss–Markov mobility model is applied in the simulation, and since the medium is wireless, a wireless-propagation model is applied to mimic a real wireless scenario. During video streaming, the rate-adaptive approach SCRAS is applied to enhance the QoE of video viewing at the remote client, which receives a corrupted trace file due to channel variations and handovers in the simulation. Arrows 13 to 15 show the post-processing tasks after the simulation. The first post-processing step is the reconstruction of the transmitted video as seen by the receiver: the MP4 video file and the trace files at the receiver end are processed by the etmp4 tool from EvalVid, which generates a possibly corrupt video file from which the lost frames have been removed. This corrupt video file is then decoded into a YUV sequence.
Finally, the original and corrupt YUV sequences are used to calculate the PSNR and SSIM using the psnr binary offered by EvalVid, which quantifies the difference between the original video before streaming and the corrupt video received by the client. The PSNR and SSIM statistics provide a rich source for QoE analysis of the received video after streaming.
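The PSNR figure produced by EvalVid's psnr binary follows the standard definition from the mean squared error; a minimal Python equivalent for a pair of 8-bit luma frames (the function name and flat-list input are our own simplifications) is:

```python
import math

def frame_psnr(original, received, max_val=255):
    """PSNR (dB) between two equal-length sequences of 8-bit pixel values.
    Identical frames give infinity; lower values mean more distortion."""
    mse = sum((o - r) ** 2 for o, r in zip(original, received)) / len(original)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_val ** 2 / mse)
```

For example, two frames that differ by 10 at every pixel have MSE = 100 and a PSNR of about 28.13 dB, which is in the range the later figures treat as acceptable quality.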
For the parameters of the fixed flight pattern, consider Figure 5. In the simulation, the bounding box extends from +50 to +350 on the x-axis, 0 to +50 on the y-axis, and 1.5 to 12 on the z-axis. The intersite distance between two neighboring macroENBs is 100 m. The motion of the UAV is marked by the numbers 1 to 7, and we labeled the different events, such as channel variations (on account of signal strength) and handovers, along with the UAV coordinates at each step. All events occur within the fixed flight pattern. At label 1, the UAV found the best channel, being very close to macroENB1; at this stage, SCRAS transmitted low resolution video to reduce start-up latency. At number 2, an intracellular handover occurred within the range of the best channel. Since hard handover in LTE crashes the bandwidth drastically, SCRAS immediately switched to low resolution video, which suits this low bandwidth, and then gradually shifted from low to medium to high resolution video, as the channel was the best. At number 3, an intercellular handover occurred along with some weakening of the signal strength measured by RSRP, still within the best channel; the video again switched to low resolution because of the handover and then gradually shifted to medium resolution. At number 4, an intracellular handover occurred at macroENB2 under the good channel; rate adaptation again moved from medium to low resolution video to preserve video quality within the available bandwidth. At number 5, an intercellular handover occurred within the good channel, while at number 6, the channel weakened again into the range of a poor channel and an intracellular handover also occurred; as the channel was already poor, the handover did not trigger a rate change, and the same video continued after the handover. The flight finally ended at number 7 with an extremely poor channel.
In the simulation environment, for the first experiment, we used the fixed flight pattern of Figure 5 to analyze the movement of a single UAV for high, medium, and low resolution videos, so that we could examine what happens when the channel changes (best, good, or bad) and at the time of handover. We calculated PSNR and SSIM for high, medium, and low resolution videos, compared all of the graphs, and analyzed which resolution performs better in which case; SCRAS is proposed on the basis of this analysis and its proven results.
Consider Figure 6a,b. The simulation started in the best channel; one handover occurred in this period, which shows a slight drop of PSNR in the medium and high resolution videos, while the low resolution video remained stable. In the good channel, three handovers occurred. At the first handover in the good channel, the videos showed a drop in PSNR and SSIM; in the subsequent two handovers, the high and medium resolution videos again suffered drops in PSNR and SSIM, while the PSNR of the low resolution video seemed unaffected. The poor channel saw one handover, which severely degraded the PSNR and SSIM of the high resolution video. The medium resolution video also dropped but improved later, while the low resolution video outperformed the rest here despite the poor channel and the handover, demonstrating its stability. As depicted in Figure 6a,b, while the channel is the best, it is better to use HRV, which utilizes the bandwidth well and gives good QoE at the user end; but when the channel degrades, it is better to shift towards medium or low resolution video to enhance the end-user QoE of video monitoring.
Consider Figure 6c,d, where HR, MR, and LR represent high, medium, and low resolution, respectively. At the start, the best channel was found, but SCRAS enforced transmission of low resolution video to reduce the start-up delay. Before switching, SCRAS checks whether the current GoP is complete, to ensure the current video frame is not lost; the scheme waits for the next I-frame to appear and only then switches towards medium resolution video. This waiting time is indicated by the label W. It again waits for the current GoP to complete before switching to high resolution video; this gradual shift ensures a smooth transition between videos that enhances the end-user QoE of video monitoring. When the first handover occurs, the decision is to switch towards low resolution video, but SCRAS waits for the current GoP to finish and switches afterwards, as indicated by the W/LR tag. This scheme is applied at each handover, because after a handover the video must switch to low resolution video due to the sharp drop in bandwidth caused by the hard handover. It then switches to medium resolution video, indicated by MR, again ensuring the completion of the current GoP, and finally returns to high resolution video, as the channel is the best.
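The GoP-aware switching just described can be sketched as a small state machine: a requested tier change is held pending until the next I-frame, the start of a new GoP, arrives. The class and method names are illustrative assumptions, not the paper's implementation:

```python
class GopAwareSwitcher:
    """Defers a resolution switch until the current GoP has finished,
    i.e. until the next I-frame, so no frame mid-GoP is damaged."""

    def __init__(self, tier):
        self.tier = tier          # tier currently being streamed
        self.pending = None       # tier requested but not yet applied

    def request(self, tier):
        if tier != self.tier:
            self.pending = tier   # hold until the next GoP boundary

    def on_frame(self, frame_type):
        # An I-frame opens a new GoP: apply any pending switch here.
        if frame_type == "I" and self.pending is not None:
            self.tier = self.pending
            self.pending = None
        return self.tier
```

In this sketch, requesting a lower tier mid-GoP keeps the old tier through the remaining P- and B-frames, and the switch takes effect at the next I-frame, corresponding to the W and W/LR intervals in Figure 6c,d.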

7. Comparative Analysis of SCRAS with Other Schemes

We compared our proposed technique (SCRAS) with three related works: a nonrate-adaptive scheme, which we name NRAS; FESTIVE [12], a rate-adaptive scheme from the literature; and piStream [13], a rate-adaptive scheme designed for mobile applications over 4G-LTE networks. piStream accounts for congestion at the bottleneck as well as measuring the received signal strength at the UAV to gauge varying channel quality; we compared our work with this scheme because it is the closest to ours. In our experiments, all schemes transmit low resolution video at the start except NRAS, which, being nonrate-adaptive, transmits high resolution video throughout. This phenomenon is called Slow Start: transmission begins with low resolution video, and the resolution gradually increases afterwards.
In Figure 7, Figure 8, Figure 9 and Figure 10, PSNR and SSIM are plotted against their corresponding frames: PSNR and SSIM on the vertical axes and frame numbers on the horizontal axes.

7.1. Impact of UAV Speed

We used the Gauss–Markov mobility model for this experiment, in which the UAV follows a random flight pattern to mimic realistic flight. We varied the speed of the UAV over 4 m/s, 2 m/s, and 0.8 m/s using the parameters shown in Table 6.
For the 4 m/s experiment, refer to Figure 7a,b. After completing the slow start, at event 1 (E1) in the poor channel, NRAS and FESTIVE were near 0 dB in PSNR because both schemes transmitted high resolution video, while SCRAS continued its low resolution transmission, having detected the poor channel by measuring RSRP at the UAV. Looking at piStream at E1, its PSNR dropped for a short period and then converged to a stable level equal to that of SCRAS. This is because piStream maps the physical layer resource allocation to an estimate of available bandwidth, while SCRAS focuses entirely on channel quality and quickly selects a suitable video for the current channel; this is why SCRAS outperformed piStream at E1. As piStream constantly monitors channel quality as well as congestion, it outperformed the other techniques and came second after SCRAS at the time of handover. At event 2 (E2), SCRAS quickly recognized the good channel and adapted towards medium resolution video, while piStream and FESTIVE performed the adaptation after some delay, and NRAS continued its high resolution video, which is why it remains at the bottom in the good channel. At event 3 (E3), adaptation occurs from the good channel towards the best channel; this time, SCRAS and NRAS outperformed the rest by transmitting high resolution video, while piStream and FESTIVE took some time to measure the bandwidth before adapting to high resolution video. At Ho1, SCRAS outperformed the rest, as it quickly adapted towards low resolution video, the most suitable video in a handover condition, where bandwidth degrades drastically because of the hard handover. In this handover, the other schemes could not match their video transmission to the available bandwidth; hence, their PSNR drops in the handover event in comparison to SCRAS.
We observed the same pattern in almost every handover: in each experiment, SCRAS comes first, piStream second, FESTIVE third, and NRAS last. After the best channel, the UAV again observes a good channel, and adaptation occurs from high to medium resolution video, marked as event 4 (E4). Here, some initial differences in PSNR can be observed between SCRAS and piStream, after which both converge to almost the same PSNR. After the good channel, the UAV experiences a poor channel, marked as event 5 (E5). An interesting trend can be observed here: the PSNR of the NRAS and FESTIVE schemes degraded to near 0 dB, as at the start of this experiment, while SCRAS outperformed them by switching to low resolution video; meanwhile, piStream adapted towards low resolution video after some frames and caught up with SCRAS.
For the 2 m/s experiment, refer to Figure 7c,d. At the start, the best channel was found at E1, with a slow start performed by SCRAS, FESTIVE, and piStream, all transmitting low resolution video; NRAS, on the other hand, outperformed them because it started with high resolution video, the optimal video for the best channel. The first handover occurred in the best channel, indicated by Ho1; SCRAS outperformed the rest, with piStream matching SCRAS after some frames. The shift from high to medium resolution video can be seen at E2. In the good channel, SCRAS and piStream outperformed the rest, while FESTIVE and NRAS showed comparatively lower PSNR. E3 marks the adaptation from the good channel to the poor channel. Again, in the poor channel, the PSNR of FESTIVE and NRAS degraded, as these schemes could not recognize the poor channel, while SCRAS outperformed the rest by immediately shifting towards low resolution video, with piStream joining SCRAS after some frames. The second handover, indicated by Ho2, shows degradation in the PSNR of SCRAS and piStream, while FESTIVE and NRAS were unaffected, as they were already experiencing very low PSNR. Figure 7d shows the corresponding degradation in the SSIM of FESTIVE and NRAS.
Refer to Figure 7e,f for a UAV at a pedestrian speed of 0.8 m/s. In the beginning, the best channel was observed by the UAV, and E1 shows a slow start; NRAS is in a better position here, as it starts with high resolution video, which suits this channel. Afterwards, in the good channel, SCRAS was in a better position than the other schemes, as depicted at E2. After handover Ho1 in the best channel, SCRAS came first and NRAS last. Interestingly, the PSNR values are very close to each other, since the UAV moves at pedestrian speed here, and at this slow speed, the overall PSNR and SSIM values increase.

7.2. Impact of Increasing Handovers

For this experiment, we used the fixed flight pattern shown previously in Figure 5. By increasing the number of macroENB sites from 1 to 2 to 3, we observed 2, 4, and 7 handovers, respectively. This increasing number of handovers makes SCRAS more prominent, as SCRAS outperformed the other schemes in each handover; in particular, with 3 macroENB sites and 7 handovers, SCRAS shows clear superiority over the other schemes in terms of PSNR and SSIM, as shown in Figure 8e,f, respectively.

7.3. Impact of Congestion in Core Network

For this experiment, we limited the bottleneck bandwidth to three levels: no congestion at 10 Gbps, mild congestion at 10 Mbps, and severe congestion at 1 Mbps.
For the no-congestion case, we again used the fixed flight pattern, with the bottleneck bandwidth set to 10 Gbps; refer to Figure 9a,b. Only channel variations and handovers occur here, and SCRAS outperformed the rest in this scenario.
We set a 10 Mbps bottleneck bandwidth for mild congestion; refer to Figure 9c,d. We introduced congestion at different points in the simulation: C1, C2, C3, C4, and C5 are the congestion areas, with C1 and C2 in the best channel, C3 in the good channel, and C4 and C5 in the poor channel. As SCRAS does not account for congestion, its performance degrades at all congestion points in the simulation. At C1, C2, and C3, both SCRAS and NRAS degrade, while piStream and FESTIVE perform well. At C4 and C5, SCRAS is in a much better position: since these congestion events occur in the poor channel, its PSNR degrades only to around 30 dB and its SSIM to around 0.5. The reason is that in the poor channel, SCRAS transmits low resolution video, which suffers less under congestion. In the poor channel at C4 and C5, FESTIVE suddenly improves its PSNR and SSIM, as it is well aware of congestion events, but before and after the congestion points in the poor channel, FESTIVE's performance drops sharply, as it cannot recognize the poor channel quality at that time.
We set a 1 Mbps bottleneck bandwidth for the severe congestion experiment; refer to Figure 9e,f. As in the mild congestion experiment, we introduced congestion at the same points, C1 through C5. We observed greater degradation in PSNR and SSIM for SCRAS and NRAS, while no such effect was found for piStream and FESTIVE, which confirms their congestion awareness. In this experiment, SCRAS does not perform better than FESTIVE and piStream; SCRAS can overcome this problem using improved video coding, as shown in Section 7.4.

7.4. Impact of Video Encoding Schemes

During heavy congestion at the network bottleneck, SCRAS performance degrades in comparison with piStream and FESTIVE, as depicted in Figure 9e,f. This is because of the encoding scheme used for SCRAS, i.e., H.264; as SCRAS is compatible with different codecs, this problem can be overcome. As discussed in Section 3.2, a fixed flight pattern was applied for this experiment so that an exact comparison could be made between H.264 and H.265. With High Efficiency Video Coding (HEVC, H.265) encoded video, SCRAS improved its performance during congestion, because HEVC performs well compared with H.264, especially during network congestion [66,67]. For the results of this experiment, see Figure 10a,b. H.265 shows better PSNR and SSIM during the congestion periods marked C1 through C6. At C1, only very slight degradation in PSNR and SSIM was observed, because this congestion period falls within the slow-start session: SCRAS was streaming low resolution video in the best channel, and for low resolution video, HEVC performs extremely well through the congestion period. We also noticed that during handover, H.265 performs well in comparison to H.264. However, the HEVC codec requires more processing power because of its computational complexity, so UAVs with a higher configuration are required, which might not be a cost-effective solution but provides better results to a remote client monitoring a targeted region [68].

8. Analysis of Frame-Grabs of Streamed Video

In this section, some frame-grabs of the streamed video are shown. We took frame-grabs of the streaming video with SCRAS, FESTIVE, and piStream as the rate-adaptive schemes. The FESTIVE frame-grabs were taken as the channel became poor, since FESTIVE is not immediately aware of link quality, depending instead on client feedback for rate adaptation. The frame-grabs demonstrate that SCRAS reacts better than FESTIVE, especially in a poor channel. Figure 11a,c,e shows the frame-grabs of video using FESTIVE as the rate-adaptive scheme, while Figure 11b,d,f shows the frame-grabs of video using SCRAS as the rate-adaptive scheme during surveillance of the monitored zone.
The frame-grabs of piStream and SCRAS were taken immediately after the occurrence of a handover; in the handover condition, SCRAS outperformed piStream, since bandwidth crashes suddenly at the time of handover in 4G-LTE. Figure 12a,c,e shows the frame-grabs of piStream, and Figure 12b,d,f shows the frame-grabs of SCRAS after handovers, taken at the same instants. All frame-grabs demonstrate that SCRAS outperforms FESTIVE and piStream rate-adaptive streaming.

9. Outstanding Characteristics of SCRAS

9.1. SCRAS Takes Care of GoP in Rate Adaptation

SCRAS takes care of the GoP in rate adaptation. Video compression employs several frame (picture) types, produced by different compression algorithms. The three major frame types are I-frames, P-frames, and B-frames. I-frames are the least compressible and carry the complete picture, so they need no further information to be reconstructed. P-frames (predicted pictures) are more compressible than I-frames, as they can use data from previous frames for decompression. For instance, in a film scene where a bus travels across a stationary background, only the bus's movement needs to be encoded; the unchanged background pixels need not be stored in the P-frame. This technique saves storage space.
B-frames (bidirectionally predicted pictures) can reference both preceding and succeeding frames for their data, and as a result they achieve the highest compression. A B-frame saves even more space by encoding only the differences between the current frame and both the previous and the succeeding frames.
The arrangement of intraframes and interframes is specified by the GoP structure in video coding. As represented in Figure 13, a group of successive pictures inside a coded video stream is referred to as a GoP; each coded video stream consists of successive GoPs, from which the viewable frames are generated. When a new GoP begins in a compressed video stream, the decoder needs no past frames to decode the upcoming ones, which permits rapid seeking through the video. Usually, the encoder uses GoP structures in which every I-frame is a clean random access point, so that decoding can start cleanly at an I-frame, and any errors within the GoP structure are corrected once a correct I-frame is received. Generally, the more I-frames a video stream contains, the more editable it is; conversely, the bit-rate rises considerably when more I-frames are used to encode the video. In recent schemes such as H.264 or HEVC, encoders have ample freedom in their referencing structures.
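For illustration, a display-order GoP of the kind Figure 13 depicts can be generated programmatically. The parameters n (GoP length) and m (anchor-frame spacing) and their defaults are hypothetical values for this sketch, not the encoder settings used in the paper:

```python
def gop_sequence(n=12, m=3):
    """Display-order frame types for one GoP: an I-frame first, then a
    P-frame every m positions, with (m - 1) B-frames between anchors."""
    types = []
    for i in range(n):
        if i == 0:
            types.append("I")      # clean random access point
        elif i % m == 0:
            types.append("P")      # predicted from the previous anchor
        else:
            types.append("B")      # bidirectionally predicted
    return "".join(types)
```

With the default n = 12 and m = 3, this yields the classic IBBPBBPBBPBB pattern; each new I-frame marks the GoP boundary at which a scheme like SCRAS can safely act.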
Thus, SCRAS waits for the current GoP to finish and then adapts to the other video. This ensures that the video is not damaged during the adaptation process.

9.2. SCRAS Outperforms Other Schemes during Handovers in 4G-LTE

Our infrastructure is based on UAVs that are constantly in flight; as a result, UAVs with preprogrammed flight coordinates are frequently disconnected from one base station (macroENB) and connected to another, a phenomenon known as handover. Such handovers occur routinely in a cellular architecture, and handover decisions are governed by the strongest signal received from the macroENBs.
Since 4G-LTE has only hard handovers, SCRAS takes care of this kind of handover: rate adaptation to low resolution video occurs whenever any handover (intercellular or intracellular) takes place. As bandwidth crashes suddenly in a hard handover, streaming high resolution video would be wasteful and would degrade the QoE of video viewing for the end-user. Therefore, on each handover, SCRAS adapts towards low resolution video, which gives better viewing results and is the best choice under such conditions. The other works discussed in Section 7 define no such strategy in their rate-adaptive processes; hence, SCRAS has the edge over these schemes in handover scenarios.

9.3. SCRAS Performs Better in Surveillance over 4G-LTE

Surveillance video transmission should have minimal delay. SCRAS uses UDP and is therefore less delay-prone than schemes working over TCP, which require an acknowledgment for every packet sent and thus introduce delay into the transmission.

9.4. SCRAS Impact on Battery Life

Our proposed rate-adaptive scheme SCRAS efficiently preserves the battery life of flying UAVs. In a poor channel, where the UAV receives the lowest power, the UAV's uplink power control consumes extra power, which shortens battery life [70,71,72,73]. At such times, SCRAS streams low resolution video, the optimal lightweight video requiring minimum computational power, hence saving battery life. Likewise, in the best or good channels, the appropriate high or medium resolution video is transmitted, which is the suitable choice at that instant and again saves battery effectively. Since 4G-LTE supports only hard handover, in which the bandwidth crashes drastically, low resolution video is transmitted during handover, again preserving battery life. Therefore, our scheme saves battery life under both channel variations and handover situations during UAV flight.
SCRAS does not require a special kind of UAV; rather, commercial camera-mounted UAVs can be used for surveillance over 4G-LTE, as these UAVs have longer flight times and cover a wider range [74].
H.264, the encoding scheme used in our simulation, is lightweight; because of its low computational demands, the UAV consumes little battery power. If battery life can be compromised, a more optimized video compression algorithm such as H.265 could also be considered [75] for better quality of video monitoring, but this requires UAVs with a higher configuration.

10. Conclusions

In this paper, a 4G-LTE UAV-based surveillance architecture is proposed that ensures the QoE of video viewing during real-time streaming at a remote site. We introduced a novel rate-adaptive video streaming scheme (SCRAS) that supports this architecture to achieve the desired objective. As upper layer protocols do not take full advantage of what 4G-LTE offers, our proposed scheme SCRAS is a crosslayer rate-adaptive scheme that leverages information provided by lower layer protocols to maximize the productivity of upper layer protocols. SCRAS ensures that the QoE of video viewing remains stable during streaming instead of degrading under varying channel quality and frequent handovers. We validated our experiments on fixed and random flight patterns. SCRAS also ensures that the current GoP is not damaged during the adaptation process, so that the video does not flicker frequently. We compared SCRAS with a nonrate-adaptive scheme (NRAS), a client-based rate-adaptive scheme (FESTIVE), and a recent 4G-LTE-based rate-adaptive scheme (piStream). As SCRAS also supports different encoding schemes, it handles congestion well when streaming HEVC-encoded video. All simulations were performed on the NS3 platform, and the objective metrics PSNR and SSIM were used to measure the quality of the streamed video. The simulation results show that SCRAS outperformed the other schemes, especially under handovers and varying channel quality.

Author Contributions

Conceptualization, M.N.; Data curation, S.M.A.; Formal analysis, M.N.; Investigation, M.N.; Methodology, M.N.; Project administration, S.Q.; Resources, B.A.K.; Supervision, S.Q.; Visualization, S.M.A.; Writing—original draft, M.N.; Writing—review and editing, S.Q., B.A.K., S.M.A., and M.M.

Funding

This research received no external funding.

Acknowledgments

We are very thankful to the higher authorities of PAF Karachi Institute of Economics & Technology, Karachi, Pakistan for providing an excellent research environment and motivational support for this work.

Conflicts of Interest

The authors declare no conflict of interest.

References and Note

  1. Wang, H.; Huo, D.; Alidaee, B. Position Unmanned Aerial Vehicles in the Mobile Ad Hoc Network. J. Intell. Robot. Syst. 2013, 74, 455–464. [Google Scholar] [CrossRef]
  2. Merwaday, A.; Guvenc, I. UAV assisted heterogeneous networks for public safety communications. In Proceedings of the 2015 IEEE Wireless Communications and Networking Conference Workshops (WCNCW), New Orleans, LA, USA, 9–12 March 2015; pp. 329–334. [Google Scholar]
  3. Cho, J.; Lim, G.; Biobaku, T.; Kim, S.; Parsaei, H. Safety and Security Management with Unmanned Aerial Vehicle (UAV) in Oil and Gas Industry. Procedia Manuf. 2015, 3, 1343–1349. [Google Scholar] [CrossRef] [Green Version]
  4. Kumar, S.; Hamed, E.; Katabi, D.; Erran Li, L. LTE radio analytics made easy and accessible. In ACM SIGCOMM Computer Communication Review; ACM: New York, NY, USA, 2014; Volume 44, pp. 211–222. [Google Scholar]
  5. Becker, N.; Rizk, A.; Fidler, M. A measurement study on the application-level performance of LTE. In Proceedings of the 2014 IFIP Networking Conference, Trondheim, Norway, 2–4 June 2014. [Google Scholar] [CrossRef]
  6. Nguyen, B.; Banerjee, A.; Gopalakrishnan, V.; Kasera, S.; Lee, S.; Shaikh, A.; der Merwe, J.V. Towards understanding TCP performance on LTE/EPC mobile networks. In Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications, & Challenges, Chicago, IL, USA, 22 August 2014. [Google Scholar] [CrossRef]
  7. Jin, R. Enhancing Upper-Level Performance from Below: Performance Measurement and Optimization in LTE Networks. Ph.D. Thesis, University of Connecticut, Mansfield, CT, USA, 2015. [Google Scholar]
  8. Aljehani, M.; Inoue, M. Performance evaluation of multi-UAV system in post-disaster application: Validated by HITL simulator. IEEE Access 2019, 7, 64386–64400. [Google Scholar] [CrossRef]
  9. Huo, Y.; Dong, X.; Lu, T.; Xu, W.; Yuen, M. Distributed and Multi-layer UAV Networks for Next-generation Wireless Communication and Power Transfer: A Feasibility Study. IEEE Internet Things J. 2019, 6, 7103–7115. [Google Scholar] [CrossRef]
  10. Huo, Y.; Dong, X. Millimeter-wave for unmanned aerial vehicles networks: Enabling multi-beam multi-stream communications. arXiv 2018, arXiv:1810.06923. [Google Scholar]
11. Lai, C.C.; Chen, C.T.; Wang, L.C. On-demand density-aware UAV base station 3D placement for arbitrarily distributed users with guaranteed data rates. IEEE Wirel. Commun. Lett. 2019, 8, 913–916. [Google Scholar] [CrossRef]
  12. Jiang, J.; Sekar, V.; Zhang, H. Improving fairness, efficiency, and stability in http-based adaptive video streaming with festive. IEEE/ACM Trans. Netw. (ToN) 2014, 22, 326–340. [Google Scholar] [CrossRef]
13. Xie, X.; Zhang, X.; Kumar, S.; Li, L.E. piStream: Physical layer informed adaptive video streaming over LTE. In Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, Paris, France, 7–11 September 2015; ACM: New York, NY, USA, 2015; pp. 413–425. [Google Scholar]
14. Najiya, K.; Archana, M. UAV Video Processing for Traffic Surveillance with Enhanced Vehicle Detection. In Proceedings of the 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Coimbatore, India, 20–21 April 2018; pp. 662–668. [Google Scholar]
  15. Angelov, P.; Sadeghi-Tehran, P.; Clarke, C. AURORA: Autonomous real-time on-board video analytics. Neural Comput. Appl. 2017, 28, 855–865. [Google Scholar] [CrossRef]
  16. Alsmirat, M.A.; Jararweh, Y.; Obaidat, I.; Gupta, B.B. Automated wireless video surveillance: An evaluation framework. J. Real-Time Image Process. 2017, 13, 527–546. [Google Scholar] [CrossRef]
  17. Jung, J.; Yoo, S.; La, W.; Lee, D.; Bae, M.; Kim, H. AVSS: Airborne Video Surveillance System. Sensors 2018, 18, 1939. [Google Scholar] [CrossRef]
  18. Karaki, H.S.A.; Alomari, S.A.; Refai, M.H. A Comprehensive Survey of the Vehicle Motion Detection and Tracking Methods for Aerial Surveillance Videos. IJCSNS 2019, 19, 93. [Google Scholar]
  19. Shin, S.Y. UAV Based Search and Rescue with Honeybee Flight Behavior in Forest. In Proceedings of the 5th International Conference on Mechatronics and Robotics Engineering, Rome, Italy, 16–19 February 2019; ACM: New York, NY, USA, 2019; pp. 182–187. [Google Scholar]
  20. Mukherjee, A.; Keshary, V.; Pandya, K.; Dey, N.; Satapathy, S.C. Flying Ad hoc Networks: A Comprehensive Survey. In Information and Decision Sciences; Springer: Singapore, 2018; pp. 569–580. [Google Scholar] [CrossRef]
  21. Bekmezci, I.; Sahingoz, O.K.; Temel, Ş. Flying ad-hoc networks (FANETs): A survey. Ad Hoc Netw. 2013, 11, 1254–1270. [Google Scholar] [CrossRef]
  22. Mustaqim, M.; Khawaja, B.A.; Razzaqi, A.A.; Zaidi, S.S.H.; Jawed, S.A.; Qazi, S.H. Wideband and high gain antenna arrays for UAV-to-UAV and UAV-to-ground communication in flying ad-hoc networks (FANETs). Microw. Opt. Technol. Lett. 2018, 60, 1164–1170. [Google Scholar] [CrossRef]
  23. Qazi, S.; Alvi, A.; Qureshi, A.M.; Khawaja, B.A.; Mustaqim, M. An Architecture for Real Time Monitoring Aerial Adhoc Network. In Proceedings of the 2015 13th International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 14–16 December 2015; pp. 154–159. [Google Scholar]
  24. Qazi, S.; Siddiqui, A.S.; Wagan, A.I. UAV based real time video surveillance over 4G LTE. In Proceedings of the 2015 International Conference on Open Source Systems & Technologies (ICOSST), Lahore, Pakistan, 17–19 December 2015; pp. 141–145. [Google Scholar]
  25. Wang, X. Optimizing Networked Drones and Video Delivery in Wireless Network. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 2017. [Google Scholar]
  26. Halperin, D.; Hu, W.; Sheth, A.; Wetherall, D. Predictable 802.11 packet delivery from wireless channel measurements. ACM SIGCOMM Comput. Commun. Rev. 2010, 40, 159–170. [Google Scholar] [CrossRef]
  27. Afroz, F.; Subramanian, R.; Heidary, R.; Sandrasegaran, K.; Ahmed, S. SINR, RSRP, RSSI and RSRQ Measurements in Long Term Evolution Networks. Int. J. Wirel. Mob. Netw. 2015, 7, 113–123. [Google Scholar] [CrossRef]
  28. Nasrabadi, A.T.; Prakash, R. Layer-Assisted Adaptive Video Streaming. In Proceedings of the 28th ACM SIGMM Workshop on Network and Operating Systems Support for Digital Audio and Video—NOSSDAV, Amsterdam, The Netherlands, 12–15 June 2018; pp. 31–36. [Google Scholar] [CrossRef]
  29. Ramamurthi, V.; Oyman, O. Link aware HTTP Adaptive Streaming for enhanced quality of experience. In Proceedings of the 2013 IEEE Global Communications Conference (GLOBECOM), Atlanta, GA, USA, 9–13 December 2013. [Google Scholar] [CrossRef]
  30. Marwat, S.N.K.; Meyer, S.; Weerawardane, T.; Goerg, C. Congestion-Aware Handover in LTE Systems for Load Balancing in Transport Network. ETRI J. 2014, 36, 761–771. [Google Scholar] [CrossRef]
  31. Kua, J.; Armitage, G.; Branch, P. A Survey of Rate Adaptation Techniques for Dynamic Adaptive Streaming Over HTTP. IEEE Commun. Surv. Tutor. 2017, 19, 1842–1866. [Google Scholar] [CrossRef]
  32. Atawia, R.; Hassanein, H.S.; Noureldin, A. Robust Long-Term Predictive Adaptive Video Streaming Under Wireless Network Uncertainties. IEEE Trans. Wirel. Commun. 2018, 17, 1374–1388. [Google Scholar] [CrossRef]
  33. Marai, O.E.; Taleb, T.; Menacer, M.; Koudil, M. On Improving Video Streaming Efficiency, Fairness, Stability, and Convergence Time Through Client–Server Cooperation. IEEE Trans. Broadcast. 2018, 64, 11–25. [Google Scholar] [CrossRef]
  34. Poojary, S.; El-Azouzi, R.; Altman, E.; Sunny, A.; Triki, I.; Haddad, M.; Jimenez, T.; Valentin, S.; Tsilimantos, D. Analysis of QoE for adaptive video streaming over wireless networks. In Proceedings of the 2018 16th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), Shanghai, China, 7–11 May 2018; pp. 1–8. [Google Scholar]
  35. Kumar, S.; Sarkar, A.; Sur, A. A resource allocation framework for adaptive video streaming over LTE. J. Netw. Comput. Appl. 2017, 97, 126–139. [Google Scholar] [CrossRef]
  36. Su, G.M.; Su, X.; Bai, Y.; Wang, M.; Vasilakos, A.V.; Wang, H. QoE in video streaming over wireless networks: Perspectives and research challenges. Wirel. Netw. 2015, 22, 1571–1593. [Google Scholar] [CrossRef]
  37. Fan, Q.; Yin, H.; Min, G.; Yang, P.; Luo, Y.; Lyu, Y.; Huang, H.; Jiao, L. Video delivery networks: Challenges, solutions and future directions. Comput. Electr. Eng. 2018, 66, 332–341. [Google Scholar] [CrossRef]
  38. Ong, D.; Moors, T. Deferred discard for improving the quality of video sent across congested networks. In Proceedings of the 38th Annual IEEE Conference on Local Computer Networks, Sydney, NSW, Australia, 21–24 October 2013. [Google Scholar] [CrossRef]
  39. Narváez, E.A.T.; Bonilla, C.M.H. Handover algorithms in LTE networks for massive means of transport. Sist. Telemát. 2018, 16, 21–36. [Google Scholar] [CrossRef]
  40. Abdelmohsen, A.; Abdelwahab, M.; Adel, M.; Darweesh, M.S.; Mostafa, H. LTE Handover Parameters Optimization Using Q-Learning Technique. In Proceedings of the 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS), Windsor, ON, Canada, 5–8 August 2018; pp. 194–197. [Google Scholar]
  41. Ahmad, R.; Sundararajan, E.A.; Othman, N.E.; Ismail, M. Efficient Handover in LTE-A by Using Mobility Pattern History and User Trajectory Prediction. Arab. J. Sci. Eng. 2018, 43, 2995–3009. [Google Scholar] [CrossRef]
42. Broyles, D.; Jabbar, A.; Sterbenz, J.P. Design and analysis of a 3-D Gauss–Markov mobility model for highly-dynamic airborne networks. In Proceedings of the International Telemetering Conference (ITC), San Diego, CA, USA, 25–28 October 2010; pp. 25–28. [Google Scholar]
43. Vijaykumar, M.; Rao, S. A cross-layer framework for adaptive video streaming over wireless networks. In Proceedings of the 2010 International Conference on Computer and Communication Technology (ICCCT), Allahabad, Uttar Pradesh, India, 17–19 September 2010. [Google Scholar] [CrossRef]
  44. Korhonen, J.; You, J. Improving objective video quality assessment with content analysis. In Proceedings of the International Workshop on Video Processing and Quality Metrics for Consumer Electronics, Scottsdale, AZ, USA, 13–15 January 2010; pp. 1–6. [Google Scholar]
  45. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
  46. Li, S.; Ngan, K.N. Influence of the Smooth Region on the Structural Similarity Index. In Pacific-Rim Conference on Multimedia; Springer: Berlin/Heidelberg, Germany, 2009; pp. 836–846. [Google Scholar] [CrossRef]
  47. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  48. Channappayya, S.S.; Bovik, A.C.; Heath, R.W. A Linear Estimator Optimized for the Structural Similarity Index and its Application to Image Denoising. In Proceedings of the 2006 International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006. [Google Scholar] [CrossRef]
  49. Judd, G.; Wang, X.; Steenkiste, P. Efficient channel-aware rate adaptation in dynamic environments. In Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, Breckenridge, CO, USA, 17–20 June 2008. [Google Scholar] [CrossRef]
  50. Zeng, Y.; Zhang, R.; Lim, T.J. Throughput maximization for UAV-enabled mobile relaying systems. IEEE Trans. Commun. 2016, 64, 4983–4996. [Google Scholar] [CrossRef]
  51. Lyu, J.; Zeng, Y.; Zhang, R. Cyclical multiple access in UAV-aided communications: A throughput-delay tradeoff. IEEE Wirel. Commun. Lett. 2016, 5, 600–603. [Google Scholar] [CrossRef]
  52. Zhang, C.; Zhang, W.; Wang, W.; Yang, L.; Zhang, W. Research Challenges and Opportunities of UAV Millimeter-Wave Communications. IEEE Wirel. Commun. 2019, 26, 58–62. [Google Scholar] [CrossRef]
  53. Khawaja, W.; Guvenc, I.; Matolak, D.W.; Fiebig, U.C.; Schneckenberger, N. A survey of air-to-ground propagation channel modeling for unmanned aerial vehicles. IEEE Commun. Surv. Tutor. 2019. [Google Scholar] [CrossRef]
  54. Zeng, Y.; Lyu, J.; Zhang, R. Cellular-Connected UAV: Potential, Challenges, and Promising Technologies. IEEE Wirel. Commun. 2019, 26, 120–127. [Google Scholar] [CrossRef]
  55. Mozaffari, M.; Saad, W.; Bennis, M.; Nam, Y.H.; Debbah, M. A tutorial on UAVs for wireless networks: Applications, challenges, and open problems. IEEE Commun. Surv. Tutor. 2019. [Google Scholar] [CrossRef]
  56. Zhang, G.; Wu, Q.; Cui, M.; Zhang, R. Securing UAV communications via joint trajectory and power control. IEEE Trans. Wirel. Commun. 2019, 18, 1376–1389. [Google Scholar] [CrossRef]
  57. Eclipse. Available online: https://www.eclipse.org/downloads/packages/release/neon/3 (accessed on 15 July 2019).
  58. FFmpeg Website. Available online: https://ffmpeg.org/ (accessed on 15 July 2019).
  59. GERCOM Group. Available online: http://www.gercom.ufpa.br (accessed on 15 July 2019).
  60. PSNR and SSIM Computation. Available online: http://totalgeekout.blogspot.com/2013/04/evalvid-on-ns-3-on-ubuntu-1204.html (accessed on 15 July 2019).
  61. Evalvid Binaries. Available online: http://www2.tkn.tu-berlin.de/research/evalvid/fw.html (accessed on 15 July 2019).
  62. YUV CIF Reference Videos (Lossless H.264 Encoded). Available online: http://www2.tkn.tu-berlin.de/research/evalvid/cif.html (accessed on 15 July 2019).
  63. YUV Video Sequences.
  64. Raffelsberger, C.; Muzaffar, R.; Bettstetter, C. A Performance Evaluation Tool for Drone Communications in 4G Cellular Networks. arXiv 2019, arXiv:1905.00115. [Google Scholar]
  65. MP4Box Online Resource. Available online: https://gpac.wp.imt.fr/mp4box/ (accessed on 15 July 2019).
66. Pongsapan, F.P.; Hendrawan. Evaluation of HEVC vs. H.264/AVC video compression transmission on LTE network. In Proceedings of the 2017 11th International Conference on Telecommunication Systems Services and Applications (TSSA), Lombok, Indonesia, 26–27 October 2017. [Google Scholar] [CrossRef]
67. Uhrina, M.; Frnda, J.; Ševčík, L.; Vaculik, M. Impact of H.264/AVC and H.265/HEVC compression standards on the video quality for 4K resolution. Open J. Syst. 2014. [Google Scholar] [CrossRef]
68. Lin, Y.C.; Wu, S.C. An Accelerated H.264/AVC Encoder on Graphic Processing Unit for UAV Videos. In International Symposium Computational Modeling of Objects Represented in Images; Springer: Cham, Switzerland, 2016; pp. 251–258. [Google Scholar]
  69. De Rango, F.; Tropea, M.; Fazio, P. Multimedia Traffic over Wireless and Satellite Networks. In Digital Video; IntechOpen: London, UK, 2010. [Google Scholar] [Green Version]
  70. Sun, K.; Yan, Y.; Zhang, W.; Wei, Y. An Interference-Aware Uplink Power Control in LTE Heterogeneous Networks. In Proceedings of the TENCON 2018-2018 IEEE Region 10 Conference, Jeju, Korea, 28–31 October 2018; pp. 0937–0941. [Google Scholar]
  71. Abreu, R.; Jacobsen, T.; Berardinelli, G.; Pedersen, K.; Kovács, I.Z.; Mogensen, P. Power control optimization for uplink grant-free URLLC. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference (WCNC), Barcelona, Spain, 15–18 April 2018; pp. 1–6. [Google Scholar]
  72. Gora, J.; Pedersen, K.I.; Szufarska, A.; Strzyz, S. Cell-specific uplink power control for heterogeneous networks in LTE. In Proceedings of the 2010 IEEE 72nd Vehicular Technology Conference-Fall, Ottawa, ON, Canada, 6–9 September 2010; pp. 1–5. [Google Scholar]
  73. Deb, S.; Monogioudis, P. Learning-based uplink interference management in 4G LTE cellular systems. IEEE/ACM Trans. Netw. (TON) 2015, 23, 398–411. [Google Scholar] [CrossRef]
  74. Recommended Drones for 4G-LTE. Available online: http://g-uav.com/en/index.html (accessed on 15 July 2019).
  75. Ohm, J.R.; Sullivan, G.J.; Schwarz, H.; Tan, T.K.; Wiegand, T. Comparison of the coding efficiency of video coding standards—including high efficiency video coding (HEVC). IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1669–1684. [Google Scholar] [CrossRef]
Figure 1. Surveillance system comprising macrocells.
Figure 2. State transition diagram of SCRAS.
Figure 3. Resolution vs. bit-rate.
Figure 4. Complete simulation process over NS3 containing preprocessing and post-processing tasks.
Figure 5. Coordinates indicating waypoints on the fixed flight pattern.
Figure 6. PSNR and SSIM on a fixed flight pattern.
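Figure 6's PSNR values follow the standard definition over the mean squared error between the sent and received frames [45]. A minimal sketch in pure Python; the frame data below is illustrative, not from the paper's traces:

```python
import math

def psnr(reference, received, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences,
    PSNR = 10 * log10(peak^2 / MSE), cf. [45]."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, received)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

# A uniform error of one gray level per pixel gives MSE = 1:
ref = [0] * (352 * 288)          # one CIF-sized luma plane
rec = [1] * (352 * 288)
print(round(psnr(ref, rec), 2))  # 48.13
```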
Figure 7. Varying velocity of unmanned aerial vehicles (UAVs).
Figure 8. Impact of increasing handovers.
Figure 9. Impact of congestion in the core network.
Figure 10. Impact of video encoding schemes in SCRAS.
Figure 11. Frame-grabs of streaming video showing SCRAS outperforming FESTIVE before FESTIVE reacts to the poor channel.
Figure 12. Frame-grabs of streaming video showing SCRAS outperforming piStream after handover.
Figure 13. Typical structure of a group of packets (GoP) [69].
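The GoP structure in Figure 13 motivates SCRAS's rule that a rate switch is deferred until the current group of packets has been transmitted in full, so no GoP mixes two resolutions. A small stateful sketch; the class and method names are illustrative and the GoP size of 12 frames is an assumption:

```python
class GopAwareSender:
    """Sketch of SCRAS's GoP-safe rate switching: a requested rate change
    is held back until the next GoP boundary, so the GoP in flight is
    never split across two resolutions. Names are illustrative."""

    def __init__(self, gop_size: int = 12, rate: str = "HRV"):
        self.gop_size = gop_size
        self.rate = rate
        self.pending = None

    def request(self, new_rate: str):
        self.pending = new_rate  # applied only at the next GoP boundary

    def rate_for_frame(self, frame_index: int) -> str:
        # An I-frame (GoP boundary) is the only place a switch takes effect.
        if self.pending and frame_index % self.gop_size == 0:
            self.rate, self.pending = self.pending, None
        return self.rate

sender = GopAwareSender()
sender.request("LRV")                      # poor channel detected mid-GoP
trace = [sender.rate_for_frame(i) for i in range(20, 26)]
print(trace)  # ['HRV', 'HRV', 'HRV', 'HRV', 'LRV', 'LRV']
```

The switch requested at frame 20 only lands at frame 24, the next GoP boundary.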
Table 1. Comparative analysis of the server-based crosslayer rate-adaptive scheme (SCRAS) with other related work.
| Feature | SCRAS | FESTIVE [12] | piStream [13] | AVSS [17] | [16] | [38] | [35] | [24] | [29] | [34] | [33] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Year | 2019 | 2012 | 2015 | 2018 | 2017 | 2013 | 2017 | 2015 | 2013 | 2018 | 2018 |
| Real-time surveillance | Yes | No | No | Yes | Yes | No | No | Yes | No | No | No |
| UAV-based communication | Yes | No | No | Yes | No | No | No | Yes | No | No | No |
| Rate adaptation (client/server based) | Server | Client | Client | Client | Client | Client | Client | Client | Client | Client | Client/Server |
| Transport layer | UDP | TCP | TCP | UDP | UDP | TCP | TCP | UDP | TCP | TCP | TCP |
| Crosslayer adaptation | Yes | No | Yes | No | No | No | No | No | Yes | No | No |
| Considers GoP for video QoE | Yes | No | No | No | No | Yes | No | No | No | No | No |
| 4G-LTE support | Yes | No | Yes | No | No | No | Yes | Yes | Yes | No | No |
| Link awareness | Yes | No | Yes | No | Yes | No | Yes | No | Yes | Yes | No |
| Wireless link for communication | Yes | No | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | No |
| Congestion awareness | No | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
| Buffer-based approach | No | Yes | Yes | No | No | No | Yes | No | Yes | Yes | Yes |
| Considers effects of handovers | Yes | No | No | No | No | No | No | No | No | No | No |
Table 2. Ranges of specified channels.
| Channel | Unit | Range |
| --- | --- | --- |
| BC | dBm | −75 to −90 |
| GC | dBm | −90 to −100 |
| PC | dBm | −100 to −120 |
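The BC/GC/PC classes of Table 2 can be derived from the measured RSRP with simple threshold tests. A minimal sketch; the function name is ours, and we assume a boundary value belongs to the better class, which the table leaves unspecified:

```python
def classify_channel(rsrp_dbm: float) -> str:
    """Map a measured RSRP value (dBm) to the channel classes of Table 2.
    Boundary values are assigned to the better class (an assumption)."""
    if rsrp_dbm >= -90:
        return "BC"   # best channel: -75 to -90 dBm
    if rsrp_dbm >= -100:
        return "GC"   # good channel: -90 to -100 dBm
    return "PC"       # poor channel: -100 to -120 dBm

print(classify_channel(-80), classify_channel(-95), classify_channel(-110))
# BC GC PC
```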
Table 3. Legends used in Figure 2.
| Parameter | Description |
| --- | --- |
| Ho | Inter- or Intracellular Handover |
| LRV | Low Resolution Video |
| HRV | High Resolution Video |
| MRV | Medium Resolution Video |
| PC | Poor Channel |
| GC | Good Channel |
| BC | Best Channel |
| EoV | End of Video |
Table 4. Parameters set for H.264 highway-cif.mp4.
| Parameter | HRV | MRV | LRV |
| --- | --- | --- | --- |
| Video Format | H.264 | H.264 | H.264 |
| Resolution | 1920 × 1080 | 352 × 288 | 176 × 144 |
| Frame Rate | 30 | 30 | 30 |
| Frame Rate Type | CFR | CFR | CFR |
| Average Bit-Rate | 5 Mbps | 1 Mbps | 500 Kbps |
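Table 4's three encodings line up with the channel classes of Table 2 through the SCRAS state transitions (Figure 2): BC streams HRV, GC streams MRV, and PC streams LRV. A minimal sketch of that mapping; the names are illustrative, and the BC/GC/PC-to-profile pairing is our reading of the state diagram:

```python
# Encoding profiles from Table 4; the BC/GC/PC -> HRV/MRV/LRV mapping
# follows our reading of the SCRAS state diagram (Figure 2).
PROFILES = {
    "HRV": {"resolution": (1920, 1080), "fps": 30, "bitrate_kbps": 5000},
    "MRV": {"resolution": (352, 288),   "fps": 30, "bitrate_kbps": 1000},
    "LRV": {"resolution": (176, 144),   "fps": 30, "bitrate_kbps": 500},
}
CHANNEL_TO_PROFILE = {"BC": "HRV", "GC": "MRV", "PC": "LRV"}

def profile_for_channel(channel: str) -> dict:
    """Return the Table 4 encoding parameters for a Table 2 channel class."""
    return PROFILES[CHANNEL_TO_PROFILE[channel]]

print(profile_for_channel("GC")["bitrate_kbps"])  # 1000
```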
Table 5. Parameters with their units and values adopted from 3GPP R4-092042 specification for simulation.
| Parameter | Units | Values |
| --- | --- | --- |
| macroEnbSites | numb | 1–4 |
| Area Margin Factor | numb | 0.5 |
| macroUE Density | numb/sq m | 0.00002 |
| macroUEs | numb | 20 |
| macroEnb Tx Power | dBm | 46 |
| macroEnb DL EARFCN | numb | 100 |
| macroEnb UL EARFCN | numb | 18,100 |
| macroEnb Bandwidth | Resource Blocks | 100 |
| Bearers per UE | numb | 1 |
| SRS Periodicity | ms | 80 |
| Scheduler | – | Proportional Fair |
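The table's EARFCN values map to carrier frequencies through the 3GPP TS 36.101 rule F = F_low + 0.1 (N − N_offs); DL EARFCN 100 and UL EARFCN 18,100 both fall in band 1. A small sketch with band-1 constants only; the function and dictionary names are ours:

```python
# EARFCN-to-carrier-frequency conversion (3GPP TS 36.101):
# F = F_low + 0.1 * (N - N_offs). Band-1 constants only; the table's
# DL EARFCN 100 and UL EARFCN 18,100 both lie in band 1.
BAND1 = {
    "dl": {"f_low_mhz": 2110.0, "n_offs": 0},
    "ul": {"f_low_mhz": 1920.0, "n_offs": 18000},
}

def earfcn_to_mhz(earfcn: int, direction: str) -> float:
    """Carrier frequency in MHz for a band-1 EARFCN ('dl' or 'ul')."""
    band = BAND1[direction]
    return band["f_low_mhz"] + (earfcn - band["n_offs"]) / 10.0

print(earfcn_to_mhz(100, "dl"), earfcn_to_mhz(18100, "ul"))  # 2120.0 1930.0
```

So the simulated cell operates at 2120 MHz downlink and 1930 MHz uplink.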
Table 6. Parameters set in the Gauss–Markov model for a random flight pattern of a UAV.
| Parameter | Unit/Type | Values |
| --- | --- | --- |
| Time Step | sec | 0.5 |
| Alpha | numb | 0.85 |
| Mean Velocity | m/s | Variable, 1–10 |
| Mean Direction | URV | Min = 0, Max = 6.28 |
| Mean Pitch | URV | Min = 0.05, Max = 0.05 |
| Normal Velocity | GRV | Mean = 0, Var = 0, Bound = 0 |
| Normal Direction | GRV | Mean = 0, Var = 0.2, Bound = 0.4 |
| Normal Pitch | GRV | Mean = 0, Var = 0.02, Bound = 0.04 |
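Each parameter above feeds the memory-level update of the 3-D Gauss–Markov model [42], v_n = α v_{n−1} + (1 − α) v̄ + √(1 − α²) x_{n−1}, applied once per Time Step. A one-dimensional sketch with our own variable names; with the table's zero velocity variance the Gaussian term vanishes and the speed simply converges to the mean:

```python
import math, random

ALPHA, TIME_STEP = 0.85, 0.5  # from Table 6

def gauss_markov_step(value, mean, alpha, sigma, rng):
    """One Gauss-Markov update as used by the 3-D mobility model [42]:
    v_n = a*v_{n-1} + (1-a)*v_mean + sqrt(1-a^2)*N(0, sigma)."""
    return (alpha * value
            + (1.0 - alpha) * mean
            + math.sqrt(1.0 - alpha ** 2) * rng.gauss(0.0, sigma))

rng = random.Random(1)
velocity = 0.0
for _ in range(50):  # Normal Velocity has Var = 0 in Table 6 ...
    velocity = gauss_markov_step(velocity, 5.0, ALPHA, 0.0, rng)
print(round(velocity, 2))  # 5.0  ... so speed converges to the mean
```

With Var = 0.2 on direction, by contrast, the heading keeps jittering around its mean, producing the random flight pattern used in the simulations.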


MDPI and ACS Style

Naveed, M.; Qazi, S.; Atif, S.M.; Khawaja, B.A.; Mustaqim, M. SCRAS Server-Based Crosslayer Rate-Adaptive Video Streaming over 4G-LTE for UAV-Based Surveillance Applications. Electronics 2019, 8, 910. https://doi.org/10.3390/electronics8080910
