Electronics
  • Article
  • Open Access

12 October 2020

A Machine Learning Approach for 5G SINR Prediction

1. Department of Computer Systems Engineering, University of Engineering and Technology Peshawar, Peshawar 25120, Pakistan
2. Department of Computer Science and Information Technology, Jalozai Campus, University of Engineering and Technology Peshawar, Nowshera 24240, Pakistan
3. Department of Mechatronics Engineering, University of Engineering and Technology Peshawar, Peshawar 25100, Pakistan
* Authors to whom correspondence should be addressed.
This article belongs to the Special Issue Applications for Smart Cyber Physical Systems

Abstract

Artificial Intelligence (AI) and Machine Learning (ML) are envisaged to play key roles in 5G networks. Efficient radio resource management is of paramount importance for network operators. With the advent of newer technologies, infrastructure, and plans, spending significant radio resources on estimating channel conditions in mobile networks poses a challenge. Automating the prediction of channel conditions can make more efficient use of these resources. To this end, we propose an ML-based technique, an Artificial Neural Network (ANN), for predicting the Signal-to-Interference-and-Noise-Ratio (SINR) in order to reduce radio resource usage in mobile networks. Radio resource scheduling is generally performed on the basis of estimated channel conditions, i.e., the SINR, with the help of Sounding Reference Signals (SRS). The proposed Non-Linear Auto Regressive External/Exogenous (NARX)-based ANN aims to minimize the rate at which SRS are sent and achieves an accuracy of R = 0.87. This can vacate up to 4% of the spectrum, improving bandwidth efficiency and decreasing uplink power consumption.

1. Introduction

The use of technology in general, and of portable devices in particular, has skyrocketed in the past two decades. As technology continues to evolve, various requirements of newly developed standards have emerged, and these requisites have been fulfilled largely through research and innovation. Wireless and cellular communication technologies have played a vital role in this evolution. Mobile communication marks its birth in the 1970s with the advent of 1G mobile communication systems, which incorporated circuit-switching technologies such as Frequency Division Multiple Access (FDMA) along with analog communication techniques. The 2G standards then evolved by combining FDMA with Time Division Multiple Access (TDMA); 2G was the first digital communication system [1]. 3G used Code Division Multiple Access (CDMA) and High Speed Packet Access (HSPA) technologies to provide IP-based services in an attempt to meet the demand for higher data rates. Long Term Evolution (LTE), also termed 3.9G, and LTE-Advanced (LTE-A), also referred to as 4G, incorporate Orthogonal Frequency Division Multiple Access (OFDMA) and Single Carrier Frequency Division Multiple Access (SC-FDMA) in order to achieve much higher rates compared to 3G. Figure 1 shows the evolution from 1G to 5G in terms of services offered.
Figure 1. Evolution from 1G to 5G.
Technological innovation is prospering at an unprecedented rate in the form of the Internet-of-Things (IoT), where connectivity is required ubiquitously and pervasively. From household fans to vehicles, and from CEOs to office staff, everything and everyone is expected to stay connected in the ever-changing global technological scenario. Smart homes, cities, cars and industries are envisaged to further raise the spectrum, speed, QoS (Quality-of-Service) and QoE (Quality-of-Experience) requirements for networks. As per Gartner statistics from 2016, 20.4 billion IoT devices were envisaged to be connected by 2020 [2]. The total number of connected devices, including IoT and mobile devices, will exceed 50 billion by 2021 [3]. IoT devices, scientific research and social media have also led to a proliferation of internet traffic, with an estimated 2.5 exabytes per day overall and 40 petabytes per year from CERN's Large Hadron Collider (LHC) alone [4]. Considering internet streaming data, approximately 400 h of video are uploaded to YouTube every minute and 2.5 million photos and videos are posted on Instagram. Consequently, a total of 2.5 quintillion bytes of data are uploaded every day [5]. To cope with these challenges, the research community is gradually heading towards the next-generation 5G network.
Figure 2 shows the journey towards 5G deployment since LTE was introduced. Substantial research is underway in the area of mobile communication, as is evident from the number of papers being published: from January 2014 to January 2018, 389 papers were published in IEEE Xplore and 588 in ScienceDirect [6].
Figure 2. Plan execution timeline [7].
5G is still undergoing standardization; New Radio (NR) was standardized in 3GPP Release 15 [8]. Its requirements include speeds of up to 10 Gbps and different operating frequency bands (from below 1 GHz up to 100 GHz) [1,9]. 5G will inherit several features of the legacy 4G system, such as sub-carrier spacing (with a few additional options), Carrier Aggregation (CA) and most of the Reference Signals (RS) [10].
Nevertheless, it is anticipated that 5G will add some novel and versatile dynamics to networks, such as Big Data (BD), Artificial Intelligence (AI) and Machine Learning (ML). AI and ML have evolved into irrefutably important disciplines over the last decade. In the case of 5G, the quality of an operator's network can be assessed by the quality of its deployment of AI techniques [11]. AI makes appliances automatic and hence decreases human intervention, which is also a demand of 5G [12]. In a nutshell, it can be inferred that the introduction of AI and its sub-categories into 5G is inevitable [13].
"Spectrum is our invisible infrastructure; it's the oxygen that sustains our mobile communication," stated US Federal Communications Commission Chairman Julius Genachowski. Spectrum is clearly one of the most precious resources in mobile communication; therefore, handling it with due care is indispensable [14]. In cellular communication, a considerable amount of the spectrum is used for reference signals; this part of the spectrum aggregates up to 25%. Availability of additional spectrum helps smooth communication, while transmission of these signals adds overhead and consumes a considerable amount of uplink power. In this research, an attempt has been made to save a fraction of the above-mentioned 25% of spectrum utilized by the reference signals, which will help mitigate the aforementioned problems.
Recent research advances in Mobile Networks (MN) demonstrate several attempts to save spectrum resources as well as power. Some researchers are focusing on ML and AI techniques to achieve this goal. However, the individual solutions proposed in the literature each address a single problem at a time. As discussed in detail in the literature survey section, the proposed solutions are not multifarious and do not present a single solution for the stated problems. Hence, an efficient technique is required that addresses the spectrum and power usage problems simultaneously.
In this paper, we present a novel method for predicting the Signal-to-Interference-and-Noise-Ratio (SINR) based on the location of a Cyber Physical System (CPS). An Artificial Neural Network (ANN)-based model predicts the channel condition, i.e., SINR, on the basis of the current location of the CPS. The Base Station (BS) obtains the location with the help of different mechanisms, including the Positioning Reference Signal (PRS) and Observed Time Difference of Arrival (OTDoA). In ML terms, the model has shown good accuracy (R = 0.87, MSE = 12.88). Furthermore, employing such an approach in the cellular network has contributed:
  • Increased throughput of up to 1.4 Mbps in the case of 16 QAM (Quadrature Amplitude Modulation) and 2.1 Mbps in the case of 64 QAM.
  • Saved power of 6.2 dBm at the CPS end.
  • Increased bandwidth (BW) efficiency i.e., approximately equal to 4% by saving the fraction of spectrum that is used for the frequent transmission of Sounding Reference Signals (SRS).
The rest of the paper is organized as follows: Section 2 covers the background, Section 3 discusses the literature survey on the use of AI in wireless and cellular networks, and Section 4 is about data collection. In Section 5, simulation setup for data acquisition has been discussed, Section 6 discusses training and testing process for ANN, Section 7 demonstrates the way ANN-based model will be incorporated into MN, Section 8 shows results of the ML model, Section 9 discusses contributions of the proposed scheme and Section 10 provides a conclusion of the discussion.

2. Background

As already discussed, this paper combines two diverse fields of computing, i.e., MN and ML. The framework is conceptually divided into two sections for a better understanding of the proposed scheme: (a) the network module and (b) the ML module.

2.1. Network Module

Mobile communication systems have made many things easier for people but have also posed some tough research questions. QoS provision and efficient spectrum utilization are extremely important in mobile communication systems. QoS provision usually relies on proper resource allocation. The spectrum is generally allocated on the basis of the channel conditions of CPS users, and the channel conditions depend on the SINR of the CPS.
For resource scheduling in LTE, an uplink SRS (Sounding Reference Signal) is sent, which is intended to estimate channel conditions at different frequencies. 5G will follow the same pattern, except that its SRS will be one, two or four symbols long in the time domain [15]. These SRS are generated on the basis of the Zadoff–Chu sequence [16]. An SRS received at the BS is used to estimate SINR using different state-of-the-art techniques; one such novel method is discussed in [17]. Mathematically, the SRS sequence is defined as:
r^{(\alpha)}_{u,v}(n) = e^{j\alpha n}\,\bar{r}_{u,v}(n), \qquad 0 \le n \le M^{RS}_{sc} - 1
In the above equation, M^{RS}_{sc} represents the length of the reference signal sequence, and \bar{r}_{u,v}(n) is generated from the Zadoff–Chu sequence. It has Constant Amplitude Zero Autocorrelation (CAZAC) characteristics and hence allows orthogonal code multiplexing through the cyclic shift \alpha, which is given by:
\alpha = \frac{2\pi\, n^{CS}_{SRS}}{8}, \qquad n^{CS}_{SRS} = 0, 1, \ldots, 7
Here, n^{CS}_{SRS} is the cyclic shift index applied to the uplink SC-FDMA symbols; the indices CS and SRS refer to Cyclic Shift and Sounding Reference Signal, respectively.
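To make the construction concrete, here is a small Python sketch (illustrative root index and lengths, not the exact 3GPP parameter tables): a prime-length Zadoff–Chu base sequence is cyclically extended and the cyclic shift \alpha = 2\pi n^{CS}_{SRS}/8 is applied as a phase ramp, which preserves the CAZAC property and yields mutually orthogonal sequences.

```python
import numpy as np

def zadoff_chu(root, n_zc):
    # Zadoff-Chu base sequence of odd length n_zc
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * root * n * (n + 1) / n_zc)

def srs_sequence(m_sc, root, n_cs):
    # Cyclically extend a prime-length ZC sequence to m_sc samples,
    # then apply the cyclic shift alpha = 2*pi*n_cs/8 as a phase ramp.
    n_zc = 23  # largest prime below the illustrative length m_sc = 24
    base = zadoff_chu(root, n_zc)
    ext = base[np.arange(m_sc) % n_zc]
    alpha = 2 * np.pi * n_cs / 8
    return np.exp(1j * alpha * np.arange(m_sc)) * ext

s0 = srs_sequence(24, root=5, n_cs=0)
s3 = srs_sequence(24, root=5, n_cs=3)
# |s0| is constant (CAZAC), and s0, s3 are orthogonal because the
# sequence length (24) is a multiple of the 8 available cyclic shifts.
```

Orthogonality of the shifted copies is what lets several CPSs sound the channel on the same resources.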
Radio resource scheduling is then performed on the basis of this calculated SINR. 5G will use the same configurations and features as the LTE SRS [16] and is also envisaged to give SRS a further role in MIMO configuration [18]. SRS is sent at intervals of milliseconds: as frequently as every 2 ms and as seldom as every 160 ms [16]. Figure 3 shows the resource grid of LTE-A. LTE-A and 5G have the same numerology, except that the subcarrier spacing in 5G can be up to 30, 60 or 120 kHz and 5G can have up to 275 PRBs (Physical Resource Blocks) in its resource grid. Different terminologies are explained in Figure 3. Figure 4 shows the SRS configuration on a resource grid. It is evident from these figures that SRS uses a large bandwidth and occupies almost 4.1% of the spectrum. However, in some cases, such as high Path Loss (PL) and fast fading, SRS may perform poorly on consecutive frequency resources [16]; frequency hopping is used to overcome this problem.
Figure 3. Resource grid [19].
Figure 4. Sounding Reference Signals (SRS) configuration on a subframe [16].
A Base Station (BS) also acquires a CPS's location information using Observed Time Difference of Arrival (OTDOA), the Positioning Reference Signal (PRS) or the Global Positioning System (GPS) [20]. For simulating our network and acquiring the required parameters, the Vienna simulator [21] has been used, while for the mobility of CPSs, a Random Waypoint model has been used [22].

2.2. ML Module

As already discussed, ML has revolutionized several research areas and has left hardly any area of the digital world untouched. ML is proving to be of great significance in numerous fields and is gaining popularity in areas such as image processing, medical diagnostics and self-driving cars. With the advent of ML, devices are becoming automated, requiring less labor and less manual control. In some cases, ML is also proving to be more efficient and reliable than statistical methods [13].
Several techniques with a lot of innovations and revamps have so far been employed for accomplishing AI and ML tasks. However, there is a generic categorization of ML algorithms:

2.2.1. Supervised Learning

This category of ML algorithms works on pre-labeled data: there are pre-defined inputs, called features, and pre-defined output(s), called label(s). When the output is continuous and does not belong to a few classes, the problem falls under regression; when the output label belongs to discrete class(es), it is classification.

2.2.2. Unsupervised Learning

These algorithms use unlabeled data. The model first learns features from the fed data, a validation step follows, and the data are then tested against the learned features.

2.2.3. Semi-Supervised Learning

This is a hybrid of supervised and unsupervised algorithms, where the data are a mix of labeled and unlabeled samples.

2.2.4. Reinforcement Learning

In reinforcement learning, the self-learning behavior of humans and animals is adopted. It works on the principle of reward and punishment.

2.2.5. Artificial Neural Networks

Artificial Neural Networks (ANNs) work on the principles of biological neural networks. Inputs are multiplied by weights and passed through an activation function across one or more hidden layers to reach the output layer. At the output layer, results are compared with targets and the error is corrected through back-propagation over a defined number of epochs; finally, the accuracy with the best-adjusted weights is calculated. A simple ANN is shown in Figure 5.
Figure 5. A simple Artificial Neural Networks (ANN).
Commonly used algorithms in wireless networks are Deep Learning, Ensemble Learning and K-Nearest Neighbors [13,23,24].
However, neural networks have gained far more popularity than other ML algorithms. An MIT review traced research papers published on arXiv.org since 1993 and found that the number published in this decade has more than doubled compared to the last, with ANNs gaining particular popularity in the second half of the current decade [25].
For this research work, an ANN with the Levenberg–Marquardt algorithm (LBM) has been considered. It was selected on the basis of the non-linearity of the data, as an ANN with LBM processes and optimizes non-linear data; LBM is the training/optimization algorithm that minimizes the ANN's error. An ANN has an input layer that takes the input, multiplies it by weights, adds a bias, and processes it through neurons. The result acts as input to the next layer of neurons, and the output layer feeds the values back for optimization. Mathematically:
z = \sum_{i=1}^{n} x_i w_i + b
where x_i is the input, w_i the weight and b the bias.
z is then passed through a squashing/activation function; for the hidden layers we have used the Sigmoid function, also known as the logistic function:
y = \frac{1}{1 + e^{-z}}
For the output layer, a linear function has been used. Mathematically, it is shown as follows:
y = c\,x
In the above equation, x is the input from the preceding layer and c is a constant.
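The forward pass described above (weighted sum plus bias, sigmoid hidden activation, linear output) can be sketched in a few lines of Python; the weights and inputs below are arbitrary illustrative values, not trained parameters.

```python
import numpy as np

def sigmoid(z):
    # Logistic squashing function used in the hidden layers
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: weighted sum of inputs plus bias, then sigmoid
    h = sigmoid(w_hidden @ x + b_hidden)
    # Output layer: linear activation (y = c*x in the text)
    return w_out @ h + b_out

# Arbitrary illustrative weights; in this paper the inputs would be
# the (X, Y) position features and the output the predicted SINR.
x = np.array([0.5, -1.2])
w_h = np.array([[0.1, 0.4], [-0.3, 0.2]])
b_h = np.zeros(2)
w_o = np.array([0.7, -0.5])
y = forward(x, w_h, b_h, w_o, 0.0)
```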
Optimization is followed with LBM as follows:
\left(J^{T} J + \lambda I\right)\delta = J^{T} E,
where J is the Jacobian matrix, \lambda is Levenberg's damping factor, \delta is the weight update and E is the error matrix.
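A minimal sketch of one update step, solving the normal equations above for \delta. The one-parameter toy fit below only illustrates the mechanics; with \lambda = 0 and a linear model, a single step reaches the least-squares solution (a positive \lambda damps the step for non-linear problems).

```python
import numpy as np

def lm_step(J, E, lam):
    # Solve (J^T J + lambda*I) * delta = J^T E for the update delta
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ E)

# Toy fit of y = w*x: J holds d(y_hat)/dw, E holds the residuals
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w = 0.0
J = x.reshape(-1, 1)
E = y - w * x
w += lm_step(J, E, lam=0.0)[0]  # lam = 0 makes this a Gauss-Newton step
```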

4. Data Acquisition

As a rule of thumb, ML algorithms need data for training, which include features and labels (in the case of supervised learning). In our case, the features are the geographical positions of the user, i.e., the X and Y coordinates, and the labels are the SINR values. To acquire the SINR, MATLAB-based traces from the Vienna simulator are obtained, and a Random Waypoint mobility model is simulated for 10,000 s to get the mobility patterns, returning almost 200,000 instances of SINR against X, Y coordinates. These traces were acquired for a single node. The speed interval of the node was set to 0.02 m/s, the pause interval was 0 s (no pause), and the time step for trace collection was 0.001 s. Figure 6 shows the simulation process of how the data are collected: a CPS sends the SRS and a position signal to a BS, the BS estimates the SINR from them, and then allocates the scheduling scheme accordingly.
Figure 6. Data acquisition.
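The mobility side of this acquisition can be sketched as follows: a minimal Random Waypoint generator using the settings quoted above (0.02 m/s speed, zero pause, 1 ms time step). The SINR labels themselves come from the Vienna simulator traces, so only the (t, x, y) part is reproduced here; the area size is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_waypoint(area=100.0, speed=0.02, dt=0.001, n_steps=10):
    # Pick a waypoint, move toward it at constant speed, choose the
    # next waypoint on arrival (pause interval = 0 s, as in the text).
    pos = rng.uniform(0, area, 2)
    target = rng.uniform(0, area, 2)
    for k in range(n_steps):
        step = target - pos
        dist = np.hypot(*step)
        if dist < speed * dt:
            pos, target = target, rng.uniform(0, area, 2)
        else:
            pos = pos + step / dist * speed * dt
        yield k * dt, pos[0], pos[1]

trace = list(random_waypoint())  # rows of (time, x, y)
```

Running the generator for 10,000 s at a 1 ms step (n_steps = 10,000,000, subsampled) would yield position traces of the scale described in the text.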

5. Simulation Setup

The footprints of the channel conditions were obtained using the Vienna simulator, a MATLAB-based system-level simulator developed for simulating advanced networks. In order to obtain the link quality (i.e., SINR), the model uses three parameters. The first is the macro path-loss (L), which models both the loss due to distance and the antenna gain. The distance-based loss is calculated using the following formula.
L = 128.1 + 37.6 \log_{10} R, \qquad R \text{ in km}
In the above equation, R represents the distance between transmitter and receiver in kilometers, 128.1 is a constant and 37.6 is the coefficient of the logarithmic term.
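The formula transcribes directly:

```python
import math

def macro_path_loss_db(r_km):
    # L = 128.1 + 37.6 * log10(R), with the distance R in kilometres
    return 128.1 + 37.6 * math.log10(r_km)
```

At 1 km the logarithmic term vanishes and the loss equals the 128.1 dB constant; each decade of distance adds 37.6 dB.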
The second parameter is shadow fading, which estimates the loss of signal due to obstacles in the path from transmitter to receiver; it is modeled as the variation introduced by geographical properties on top of the macro path-loss. The third parameter is channel modeling. In this part, J. C. Ikuno et al. [41] have considered small-scale fading as a time-varying function. Channels have been modeled for both multiple-input multiple-output (MIMO) and single-input single-output (SISO) modes of transmission. Finally, the link-quality traces are collected from the Vienna simulator. Different scenarios have different SINR estimation methods; our SINR of interest uses SISO in the uplink, as in legacy LTE uplinks, but the model could be extended to MIMO as well. The SISO-based SINR is estimated using the following formula.
\mathrm{SINR} = \frac{P_{tx}}{\frac{1}{|h_0|^2}\,\sigma^2 + \sum_{i=1}^{N_{int}} \frac{|h_1|^2}{|h_0|^2}\, P_{tx,i}}
where P_{tx} is the transmission power, \frac{1}{|h_0|^2} and \frac{|h_1|^2}{|h_0|^2} are the noise and interference traces, \sigma^2 is the noise power, and P_{tx,i} is the transmission power of the i-th interfering transmitter.
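A small sketch of the SISO SINR computation as reconstructed above; the argument names follow the symbols in the text, and single complex scalars stand in for the simulator's noise and interference traces.

```python
def siso_sinr(p_tx, h0, h1, noise_var, interferer_powers):
    # SINR = P_tx / ( sigma^2/|h0|^2 + sum_i (|h1|^2/|h0|^2) * P_tx_i )
    g0 = abs(h0) ** 2
    interference = sum(abs(h1) ** 2 / g0 * p for p in interferer_powers)
    return p_tx / (noise_var / g0 + interference)
```

With a unit channel, no interferers, and noise power 0.1, the expression reduces to a plain SNR of 10 (10 dB), which is a useful sanity check.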

6. Training and Testing

As already discussed, the acquired dataset has two input features, i.e., the X and Y coordinates of the user location, with the SINR as the output label. In light of the available data, our problem lies within the scope of supervised learning. As the output label is a vector of continuous SINR values, a regression technique has been used for curve fitting. The MATLAB environment is used for the experiment. The ANN model used for this experiment is a Non-Linear Auto Regressive External/Exogenous (NARX) model. The NARX model has two architectures: a series-parallel architecture, also called open loop, and a parallel architecture, also referred to as closed loop. We preferred the open-loop architecture for our simulations mainly because the true output is provided to the network, unlike a closed loop that feeds back an estimated output. Another advantage of the open loop is that it uses a simple feed-forward network, which allows efficient training algorithms to be incorporated [42]. The two architectures of NARX are depicted in Figure 7.
Figure 7. Non-Linear Auto Regressive External/Exogenous (NARX) [42].
Mathematically, open loop NARX is shown by the following formula:
\hat{y}(t) = f\big(y(t-1), y(t-2), \ldots, y(t-n_y),\; x(t-1), x(t-2), \ldots, x(t-n_x)\big),
In the above equation, \hat{y}(t) is the predicted value based on previous values of the input vector, x(t-1), \ldots, as well as of the output vector, y(t-1), \ldots. f(\cdot) is the mapping function, n_y represents the number of output delays and n_x the number of input delays.
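A sketch of how the open-loop NARX regressors are formed: each row holds the delayed outputs y(t-1), ..., y(t-n_y) and delayed inputs x(t-1), ..., x(t-n_x), and the target is y(t). For brevity, a linear least-squares fit stands in for the feed-forward network f(.) trained in the paper; the toy series is an assumption.

```python
import numpy as np

def narx_design(x, y, nx=2, ny=2):
    # Open-loop NARX regressors: [y(t-1..t-ny), x(t-1..t-nx)] -> y(t)
    start = max(nx, ny)
    rows, targets = [], []
    for t in range(start, len(y)):
        feats = [y[t - k] for k in range(1, ny + 1)]
        feats += [x[t - k] for k in range(1, nx + 1)]
        rows.append(feats)
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Toy series: y depends on its own past plus the exogenous input x
x = np.linspace(0.0, 1.0, 50)
y = np.zeros(50)
for t in range(1, 50):
    y[t] = 0.5 * y[t - 1] + 0.3 * x[t - 1]

X, T = narx_design(x, y)
# Linear least squares as a stand-in for the feed-forward network f(.)
w, *_ = np.linalg.lstsq(X, T, rcond=None)
```

Because the toy series is exactly linear in its delayed terms, the stand-in fit recovers it perfectly; the paper's data require the non-linear ANN instead.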
Results show that the best performance/accuracy ( R = 0.87112 ) is achieved when the training data are spread across the time domain. The spreading of data in the time domain means that the input data are not fed at once along with the delays.
The dataset is split into training, validation and testing sets in the proportions shown in Table 2. The configuration of the proposed neural network comprises 15 hidden layers with 5 delays and 20 epochs. The training process stops if the generalization (cross-validation) performance does not improve for six consecutive iterations.
Table 2. Distribution of the dataset.

7. Working

Initially, the goal is not to eliminate the SRS entirely but to halve its transmission rate. As already discussed, SRS is normally transmitted every 2 ms; in this experiment it is transmitted every 4 ms, and for the intervening 2 ms the BS predicts the SRS-based SINR values with the trained ML algorithm.
Normally, there is regular communication of SRS and geo-position signals between the user and the BS. Here, it is proposed to halve the rate of SRS communication: in deployed systems, SRS is transmitted roughly every 2 ms, and it is proposed to send SRS every 4 ms in order to save two subframes that are specifically configured for SRS. The CPS regularly communicates its position to the BS in the form of PRS.
In the intervening 2 ms, where no SRS is communicated, ML helps predict the channel conditions. In this case, only the position signal needs to be configured for transmission, on the basis of the geo-location of the CPS. Since the machine has been trained to predict SINR, the BS schedules DL resources for the CPS without SRS being transmitted. Figure 8 shows this concept as a flowchart.
Figure 8. Flow chart for implementation of the proposed scheme.
It can be inferred from Figure 8 that the CPS is first configured to decide whether to use ML or the classical statistical method. If it is ML's turn, the CPS sends only its position signal rather than SRS, and at the BS the position coordinates are fed to the ML model to predict the SINR. DL scheduling is then decided on the basis of the predicted SINR.
Alternatively, if it is not ML's turn, the CPS sends both SRS and the position signal to the BS. The BS estimates the SINR on the basis of the received SRS (as discussed in Section 2), and scheduling is performed based on this estimated SINR.
To decide when the ML model or the classical method is invoked, we propose the following Algorithm 1.
Algorithm 1: Method decision
start
    for t = 0 to t_max step 2 ms
    {
        CPS.Classical_Method
    }
    for t = 1 to t_max step 2 ms
    {
        CPS.ML_Method
    }
end
The BS configures a CPS on the basis of the above algorithm. For the time slot (t = 0), the classical method is called, where both SRS and position signals are transmitted. For the next slot, the ML method is invoked: only a position signal is sent and the ML model is applied.
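One plausible reading of Algorithm 1 as runnable code: even 2 ms slots use the classical SRS-based method and odd slots the ML prediction, so SRS goes out every 4 ms. The slot-indexing convention is an assumption, since the paper gives the algorithm only in pseudocode.

```python
def method_for_slot(t_ms):
    # Even 2 ms slots -> classical SRS-based estimation,
    # odd 2 ms slots  -> ML prediction from the position signal,
    # so SRS is transmitted every 4 ms instead of every 2 ms.
    return "classical" if (t_ms // 2) % 2 == 0 else "ml"

# Methods chosen over the first sixteen milliseconds (eight slots)
schedule = [method_for_slot(t) for t in range(0, 16, 2)]
```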
Here, only the time domain is taken into account because the bandwidth already allocated for SRS is not being decreased; rather, its transmission rate in the time domain is. SRS will still stretch over a large bandwidth, but not as frequently in time as before.

8. Results

After completing the training, validation and testing tasks, different performance metrics are acquired. The parameters considered here are the loss function and the accuracy measure obtained at the end of the simulation. The loss function preferred for our experiment is the Mean Squared Error (MSE) and the accuracy measure is the regression value (R). The MSE achieves a value of 12.88 and the accuracy of the experiment is 0.87, as shown in Figure 9. Mathematically, the MSE is defined as:
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \tilde{y}_i\right)^2
Here, y_i is the target (real) output, while \tilde{y}_i is the predicted (estimated) output.
Figure 9. Machine Learning (ML) model accuracy.
The mathematical formula of R is as follows.
R = \frac{\sum_{i=1}^{n} \left(\tilde{y}_i - \bar{y}\right)^2}{\sum_{i=1}^{n} \left(y_i - \bar{y}\right)^2}
In the above equation, \bar{y} is the mean value of \tilde{y}_i.
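The two metrics transcribe directly; note that `r_value` below follows the printed equation for R (a ratio of explained to total variation), with Table 4's r² then being the square of the regression value.

```python
import numpy as np

def mse(y, y_hat):
    # Mean Squared Error between targets y and predictions y_hat
    return float(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2))

def r_value(y, y_hat):
    # Ratio of explained to total variation, per the printed equation
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    y_bar = y.mean()
    return float(np.sum((y_hat - y_bar) ** 2) / np.sum((y - y_bar) ** 2))
```

A perfect predictor gives MSE = 0 and R = 1, which bounds the paper's reported MSE = 12.88 and R = 0.87.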
Figure 9 contains four results obtained at different stages of the experiment. Overall, the figure compares the real output (target) with the predicted values (output). The training results were obtained during the training phase, where the model achieves an accuracy of 0.87. The validation results were obtained during the validation step (ensuring there is no overfitting) and yield an accuracy of 0.86. The testing results were obtained after feeding in the test part of the dataset; the accuracy of the test phase is 0.86. The last result shows the overall model obtained after all of the above steps. The final model is Output = 0.77 × Target + 1.7, which shows that the input (f(y, x)) is multiplied by the coefficient 0.77 (weight) and then added to 1.7 (bias). The Fit and Y = T lines show the convergence of output and target, and the Data legend shows the distribution of data instances against the fitness function.
Table 3 compares the accuracy of NARX with a Non-Linear Input-Output technique and an SVM with 10-fold cross validation, under the R accuracy measure and on the same data. It can be inferred that NARX outperforms both.
Table 3. Comparison of accuracy of NARX with Non-Linear Input Output and Support Vector Machines (SVM).
Equation (12) shows how the Non-Linear Input Output technique works mathematically.
\hat{y}(t) = f\big(x(t-1), \ldots, x(t-d)\big)
Table 4 shows the performance parameters at a glance. The RMSE (Root MSE) is the square root of the MSE, and r² is the R-squared value, obtained by squaring the regression value R.
Table 4. Performance measures.

9. Contribution

The increase in maximum throughput/goodput capacity for 16 QAM is depicted in Figure 10. Payload throughput is obtained after excluding the reference-signal overhead; reference signals take up to 25% of the total capacity of a channel BW. A 10 MHz BW may not seem adequate for some of the higher frequencies in 5G because it will not satisfy the higher data-rate requirements; however, it is still considered by standardizing bodies and operators for Non-Standalone (NSA) deployments. NSA is the initial phase of 5G, in which 5G works as an extension of 4G, using the LTE packet core and lower frequencies. Standalone (SA) will be the full-fledged 5G system requiring no LTE or LTE-A support [43,44]. For this reason, this paper also considers a 10 MHz BW. Besides that, 15 and 20 MHz BWs are stronger candidates for the uplink in 5G and are therefore considered in this work [45]. 256 QAM and MIMO have not been considered because they are not supported at the CPS.
Figure 10. Throughput comparison for 16 QAM (Quadrature Amplitude Modulation).
Figure 11 depicts the impact of the proposed mechanism on throughput with the 64 QAM scheme, where 6 bits/symbol are allowed.
Figure 11. Throughput comparison for 64 QAM.
As seen in the results section, the ML model has been trained; incorporating it into the network allows us to reduce the number of SRS transmissions, which increases the throughput capacity.
Throughput is the measure of real user data (bits) that a BW can transfer in one second, obtained by subtracting the overhead from the total capacity of the BW. The increased throughput is obtained by adding the saved SRS share to the payload/throughput of the BW.
S_{old} = C_{ch} - C_{oh}
Here, C_{ch} is the capacity of a channel in bits/s for a given BW, C_{oh} is the reference-signal overhead capacity in bits/s, and S_{old} is the throughput in bits/s.
S_{new} = S_{old} + C_{SRS}/2
S_{new} is the enhanced throughput, obtained by adding C_{SRS}/2 to S_{old}, where C_{SRS} is the channel capacity dedicated to SRS in bits/s. The divisor 2 reflects that SRS transmission has been reduced by half.
Excluding overhead increases the efficiency of BW utilization. Normally, almost 25% of the channel is used for reference-signal overhead, for example SRS and the Demodulation Reference Signal (DMRS), and the remaining 75% is usable for real user data transfer. Of the 25% of channel BW occupied by reference signals, SRS alone occupies 4.1%.
In the proposed solution, the BW efficiency for real user data throughput therefore increases by about 2% when SRS transmission is halved, and by approximately 4% when SRS transmission is completely replaced by the ML model; the usable part of the BW for user data becomes 77% and 79%, respectively. Thus, a notable BW efficiency improvement can be realized. Figure 12 shows a graphical representation of BW efficiency.
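The bandwidth-efficiency arithmetic above is simple enough to check directly, using the 25% overhead and 4.1% SRS share quoted in the text:

```python
# Reference signals consume ~25% of channel capacity; SRS alone ~4.1%.
RS_OVERHEAD = 25.0
SRS_SHARE = 4.1

usable_baseline = 100.0 - RS_OVERHEAD              # 75% for user data
usable_half_srs = usable_baseline + SRS_SHARE / 2  # SRS rate halved -> ~77%
usable_no_srs = usable_baseline + SRS_SHARE        # SRS fully predicted -> ~79%
```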
Figure 12. Bandwidth (BW) efficiency.
The typical power consumption of a CPS in the uplink is 23 dBm per physical resource block, as per 3GPP [46], which is about 199 mW on a linear scale. Since the number of SRS transmissions has been significantly reduced, power consumption decreases in the same proportion, saving almost 4.1 mW, i.e., roughly 6.2 dBm on a log scale. Hence, useful power can be conserved by reducing consumption.
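The unit conversions can be verified quickly (the exact dBm figure for the saving depends on rounding):

```python
import math

def dbm_to_mw(p_dbm):
    # Convert a dBm power figure to milliwatts
    return 10 ** (p_dbm / 10)

def mw_to_dbm(p_mw):
    # Convert a milliwatt power figure to dBm
    return 10 * math.log10(p_mw)

full_power_mw = dbm_to_mw(23)  # ~199.5 mW, the 3GPP uplink figure
saved_dbm = mw_to_dbm(4.1)     # a 4.1 mW saving expressed in dBm
```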

10. Conclusions

In this paper, a different paradigm is investigated: introducing artificial intelligence to reduce the physical-layer resource utilization of SRS. We proposed a novel method for predicting the uplink SINR, conventionally estimated from SRS, using an ANN-based scheme. This research holds potential value for enhancing the most precious aspects of RRM. We attempted to close the research gap by increasing throughput, saving uplink power and increasing BW efficiency for 5G networks through prediction rather than estimation via pilot signals. This work also opens future directions, such as the prediction of PL, fading, QoS and power control.

Author Contributions

Conceptualization, methodology, software, writing—original draft, R.U.; supervision, validation, S.N.K.M.; formal analysis, A.M.A.; investigation, S.A.; writing—review and editing, A.H.; resources, T.K.; project administration, M.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research is part of a project funded by the Higher Education Commission of Pakistan, grant number No.9-5(Ph-IV-MG-2)/Peridot/R&D/HEC/2017.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kelechi, G.E.; Matthew, N.O.S.; Sarhan, M.M. 5G Wireless Technology: A Primer. Int. J. Sci. Eng. Technol. 2018, 7, 62–64. [Google Scholar]
  2. Egham, G. Gartner Says 8.4 Billion Connected "Things" Will Be in Use in 2017, up 31 Percent from 2016, 14 January 2018. Available online: https://www.gartner.com/newsroom/id/3598917 (accessed on 8 June 2020).
  3. Zhang, N.; Yang, P.; Ren, J.; Chen, D.; Yu, L.; Shen, X. Synergy of Big Data and 5G Wireless Networks: Opportunities, Challenges and Approaches. IEEE Wireless Commun. 2018, 25, 12–18. [Google Scholar] [CrossRef]
  4. Mohammed, S.H.; Ahmed, Q.L.; Taisir, E.H.E.-G.; Jaafar, M.H.E. Big Data Analytics for Wireless and Wired Network Design: A Survey. Comput. Netw. 2018, 132, 180–199. [Google Scholar]
  5. IBM. 10 Key Marketing Trends for 2017. Paul Writer. Available online: https://paulwriter.com/10-key-marketing-trends-2017 (accessed on 2 February 2020).
  6. Li, S.; Xu, L.D.; Zhao, S. 5G Internet of Things: A Survey. J. Ind. Inf. Integr. 2018, 10, 1–9. [Google Scholar] [CrossRef]
  7. Huawei. 5G: A Technology Vision; Huawei: Shenzhen, China, 2019. [Google Scholar]
  8. Uchino, T.; Kai, K.; Toeda, T.; Takahashi, H. Specifications of NR Higher Layer in 5G. NTT DOCOMO Tech. J. 2019, 20, 3. [Google Scholar]
  9. Marimuthu, S. 5G NR White Paper. In New Era for Enhanced Mobile Broadband and 5G. A Future beyond Imagination; Available online: http://www.u5gig.ae/MediaTek-5G-NR-White-Paper-PDF5GNRWP.pdf (accessed on 8 June 2020).
  10. Ghosh, A. 5G New Radio (NR): Physical Layer Overview and Performance. In Proceedings of the IEEE Communication Theory Workshop. Available online: http://ctw2018.ieee-ctw.org/files/2018/05/5G-NR-CTW-final.pdf (accessed on 8 June 2020).
  11. Jakobsson. The 5G Future Will Be Powered By AI. Network Computing, 14 March 2019. Available online: https://www.networkcomputing.com/wireless-infrastructure/5g-future-will-be-powered-ai (accessed on 14 February 2020).
  12. Cayamcela, M.E.M.; Lim, W. Artificial Intelligence in 5G Technology: A Survey. In Proceedings of the 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea, 17–19 October 2018. [Google Scholar]
  13. Zhang, C.; Patras, P.; Hadded, H. Deep Learning in Mobile and Wireless Networking: A Survey. IEEE Commun. Surv. Tutor. 2019, 21, 2224–2287. [Google Scholar] [CrossRef]
  14. GSMA. Spectrum Handbook; GSMA; Available online: https://www.gsma.com/spectrum/wp-content/uploads/2012/09/Spectrum-GSMA-Spectrum-Handbook-2012.pdf (accessed on 8 June 2020).
  15. Lin, X.; Li, J.; Baldemair, R.; Cheng, T.; Parkvall, S.; Larsson, D.; Koorapaty, H.; Frenne, M.; Falahati, S.; Grövlen, A.; et al. 5G New Radio: Unveiling the Essentials of the Next Generation Wireless Access Technology. IEEE Commun. Stand. Mag. 2019, 3, 30–37. [Google Scholar] [CrossRef]
  16. Mathworks. MathWorks. Available online: https://www.mathworks.com/help/lte/ug/sounding-reference-signal-srs.html (accessed on 9 July 2019).
  17. Tian, H.; Yang, L.; Li, S. SNR Estimation based on Sounding Reference Signal in LTE Uplink. In Proceedings of the 2013 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC 2013), Kunming, China, 5–8 August 2013. [Google Scholar]
  18. Giordano, L.G.; Campanalonga, L.; Pérez, D.L.; Rodriguez, A.G.; Geraci, G.; Baracca, P.; Magarini, M. Uplink Sounding Reference Signal Coordination to Combat Pilot Contamination in 5G Massive MIMO. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference (WCNC), Barcelona, Spain, 15–18 April 2018. [Google Scholar]
  19. Marwat, S.N.K. Future Machine-to-Machine Communications: LTE-A Optimization for M2M Applications; WiSa Jessica Haunschild/Christian Schön GbR; Ibidem-Verlag: Stuttgart, Germany, 2018; ISBN 9783955380304. [Google Scholar]
  20. Mahyuddin, M.F.M.; Isa, A.A.M.; Zin, M.S.I.M.; H, A.M.A.; Manap, Z.; Ismail, M.K. Overview of Positioning Techniques for LTE Technology. J. Telecommun. Electron. Comput. Eng. 2017, 9, 2–13. [Google Scholar]
  21. TU Wien, Vienna LTE-A Simulators. 2019. Available online: https://www.nt.tuwien.ac.at/research/mobile-communications/vccs/vienna-lte-a-simulators/ (accessed on 11 June 2020).
  22. Camp, T.; Boleng, J.; Davies, V. A survey of Mobility Models for Ad Hoc Network Research. WCMC Spec. Issue Mob. Ad. Hoc. Netw. Res. Trends Appl. 2002, 2, 483–502. [Google Scholar] [CrossRef]
  23. Chen, M.; Challita, U.; Saad, W.; Yin, C.; Debbah, M. Artificial Neural Networks-Based Machine Learning for Wireless Networks: A Tutorial. IEEE Commun. Surv. Tutor. 2019, 21, 3039–3071. [Google Scholar] [CrossRef]
  24. Casas, P. Machine Learning Models for Wireless Network Monitoring and Analysis. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference Workshops (WCNCW): International Workshop on Big Data with Computational, Barcelona, Spain, 15–18 April 2018. [Google Scholar]
  25. Hao, K. MIT Technology Review. 25 January 2019. Available online: https://www.technologyreview.com/s/612768/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next/ (accessed on 15 February 2019).
  26. Saija, K.; Nethi, S.; Chaudhuri, S.; M, K.R. A Machine Learning Approach for SNR Prediction in 5G Systems. In Proceedings of the 2019 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), Goa, India, 16–19 December 2019. [Google Scholar]
  27. Calabrese, F.D.; Wang, L.; Ghadimi, E.; Peters, G.; Hanzo, L.; Soldati, P. Learning Radio Resource Management in RANs: Framework, Opportunities and Challenges. IEEE Commun. Mag. 2018, 56, 138–145. [Google Scholar] [CrossRef]
  28. Dhananjay, K.; Logganathan, N.; Kafle, V.P. Double Sarsa Based Machine Learning to Improve Quality of Video Streaming over HTTP through Wireless Networks. In Proceedings of the ITU Kaleidoscope Academic Conference, Santa Fe, Argentina, 26–28 November 2018. [Google Scholar]
  29. Bhutani, G. Application of Machine-Learning Based Prediction Techniques in Wireless Networks. Int. J. Commun. Netw. Syst. Sci. 2014, 7, 131–140. [Google Scholar]
  30. Wang, J.; Jiang, C. Machine Learning Paradigms in Wireless Network Association; Shen, X., Lin, X., Zhang, K., Eds.; Encyclopedia of Wireless Networks; Springer: Cham, Switzerland, 2018. [Google Scholar]
  31. Ibarrola, E.; Davis, M.; Voisin, C.; Cristobo, L. A Machine Learning Management Model for QoE Enhancement in Next-Generation Wireless Ecosystems. In Proceedings of the ITU Academic Conference, Santa Fe, Argentina, 26–28 November 2018. [Google Scholar]
  32. Lee, G.M.; Um, T.W.; Choi, J.K. AI as a Microservice (AIMS) over 5G Networks. In Proceedings of the ITU Kaleidoscope Academic Conference, Santa Fe, Argentina, 26–28 November 2018. [Google Scholar]
  33. Mwanje, S.S.; Mannweiler, C. Towards Cognitive Autonomous Networks in 5G. In Proceedings of the ITU Academic Conference, Santa Fe, Argentina, 26–28 November 2018. [Google Scholar]
  34. Shi, Q.; Zhang, H. Fault diagnosis of an autonomous vehicle with an improved SVM algorithm subject to unbalanced datasets. IEEE Trans. Ind. Electron. 2020. [Google Scholar] [CrossRef]
  35. You, X.; Zhang, C.; Tan, X.; Jin, S.; Wu, H. AI for 5G: Research directions and paradigms. Sci. China Inf. Sci. 2019, 62, 21301. [Google Scholar] [CrossRef]
  36. Memon, M.L.; Maheshwari, M.K.; Saxena, N.; Roy, A.; Shin, D.R. Artificial Intelligence-Based Discontinuous Reception for Energy Saving in 5G Networks. Electronics 2019, 8, 778. [Google Scholar] [CrossRef]
  37. Sesto-Castilla, D.; Garcia-Villegas, E.; Lyberopoulos, G.; Theodoropoulou, E. Use of Machine Learning for energy efficiency in present and future mobile networks. In Proceedings of the IEEE Wireless Communications and Networking Conference, Marrakesh, Morocco, 15–18 April 2019; pp. 1–6. [Google Scholar]
  38. Bohli, A.; Bouallegue, R. How to Meet Increased Capacities by Future Green 5G Networks: A Survey. IEEE Access 2019, 7, 42220–42237. [Google Scholar] [CrossRef]
  39. Song, Z.; Ni, Q.; Sun, X. Spectrum and Energy Efficient Resource Allocation with QoS Requirements for Hybrid MC-NOMA 5G Systems. IEEE Access 2018, 6, 37055–37069. [Google Scholar] [CrossRef]
  40. Abrol, A.; Jha, R.K. Power Optimization in 5G Networks: A Step towards GrEEn Communication. IEEE Access 2016, 4, 1355–1374. [Google Scholar] [CrossRef]
  41. Ikuno, J.C.; Wrulich, M.; Rupp, M. System level simulation of LTE networks. In Proceedings of the 2010 IEEE 71st Vehicular Technology Conference, Taipei, Taiwan, 16–19 May 2010. [Google Scholar]
  42. Boussaada, Z.; Curea, O.; Remaci, A.; Camblong, H.; Bellaaj, N.M. A Nonlinear Autoregressive Exogenous (NARX) Neural Network Model for the Prediction of the Daily Direct Solar Radiation. Energies 2018, 11, 620. [Google Scholar] [CrossRef]
  43. Nokia. IEEE 5G Summit Rio; Nokia: Rio de Janeiro, Brazil, 2018. [Google Scholar]
  44. Tabbane, S. 5G Networks and 3GPP Release 15; ITU PITA Workshop on Mobile Network Planning and Security: Nadi, Fiji, 2019. [Google Scholar]
  45. ETSI. 3GPP TS 38.101-1 Version 15.2.0 Release 15. Available online: https://www.etsi.org/deliver/etsi_ts/138100_138199/13810101/15.02.00_60/ts_13810101v150200p.pdf (accessed on 13 June 2020).
  46. Haider, A.; Hwang, S.H. Maximum Transmit Power for UE in an LTE Small Cell Uplink. Electronics 2019, 8, 796. [Google Scholar] [CrossRef]
