5G/B5G Service Classification Using Supervised Learning

The classification of services in 5G/B5G (Beyond 5G) networks has become important for telecommunications service providers, who face the challenge of simultaneously offering a better Quality of Service (QoS) in their networks and a better Quality of Experience (QoE) to users. Service classification allows 5G service providers to accurately select the network slices for each service, thereby improving the QoS of the network and the QoE perceived by users, and ensuring compliance with the Service Level Agreement (SLA). Some projects have developed systems for classifying these services based on the Key Performance Indicators (KPIs) that characterize the different services. However, Key Quality Indicators (KQIs) are also significant in 5G networks, although these are generally not considered. We propose a service classifier that uses a Machine Learning (ML) approach based on Supervised Learning (SL) to improve classification and to support a better distribution of resources and traffic over 5G/B5G networks. We carry out simulations of our proposed scheme using different SL algorithms, first with KPIs alone and then incorporating KQIs, and show that the latter achieves better prediction, with an accuracy of 97% and a Matthews correlation coefficient of 96.6% with a Random Forest classifier.


Introduction
The complexity, flexibility, and dynamism of 5G/B5G networks mean that they need to be managed automatically [1]. Variations in behaviour patterns limit the identification of network activity. Furthermore, the traditional management model is insufficient, and due to the correlations between multiple variables and the extensive datasets handled in a single analysis, computational assistance is required [1]. Artificial Intelligence (AI) can be used to support the cognitive management of 5G/B5G [2], and ML is one of the most promising tools in this area.
5G is a network that focuses on services, and the classification of these services therefore requires an efficient scheme for network resource allocation. In [3], a variety of new services for 5G are described, some of which have very similar performance and quality requirements. The services implemented in 5G networks have grown rapidly in number and heterogeneity, each with its own particular characteristics and specifications. If services are classified without the help of ML, it is difficult to monitor and control network resources and to predict and avoid SLA violations, which can affect both the QoS performance and the QoE perceived by users, that is, the management of the network as a whole.
In the field of telecommunications, and particularly in the deployment of 5G/B5G networks, the correct classification of services offers a way of providing better network QoS and of optimizing the QoE [3], and is therefore important. The User Equipment (UE) requests services that require precise categorization, in order to allow network operators to select specific network slices for each service, to improve the QoS of the network and the QoE perceived by the users, and to define the SLAs for each network slice [3,4].
The critical requirements for 5G can be considered from two points of view: from the user's perspective, and from a network performance perspective [5]. Current 5G service classification systems use ML, and consider KPIs as the main factor when performing the classification. However, it is necessary to consider KQIs in order to achieve the best possible classification, since these allow us to consider both the performance of the network and the user's requests for a service, and our proposed scheme therefore takes them into account.
The main objective of this work was to prove the hypothesis that considering the quality parameters (KQIs) in addition to the traditional performance parameters (KPIs) leads to better identification (through classification) of the services. Consequently, providers can allocate resources optimally with proper QoS management.
The inclusion of KQIs, which reflect the performance and quality of End To End (E2E) services, makes it possible to achieve a better customer experience in practice [5,6]. Incorporating the KQIs reflects the customer's experience in terms of indicators that include their requirements, and can therefore provide them with better network performance and better QoE.
To improve the management of 5G networks in general, and the QoS and QoE in particular, we present a 5G/B5G service classifier system based on Supervised Machine Learning (SML) techniques. This system uses the KPIs and KQIs of the different services to offer a better classification. One contribution of our scheme is the feedback provided once a new service is classified: its KPI/KQI parameters are introduced into the database so that the model is dynamically retrained, regenerating a new predictive model that is more robust (from the structural point of view) than the previously generated one. The rest of this paper is organized as follows. Section 2 gives an overview of previous work on 5G services, and discusses some characteristics of the KPIs and KQIs involved in service classification. Section 3 describes our 5G/B5G service classifier system and the creation of our database. Section 4 presents the classifier simulations, carried out in the Jupyter Notebook Integrated Development Environment (IDE) on the Anaconda Navigator platform, and their results. Finally, Section 5 concludes the paper with some final remarks and suggestions for future work. We include Supplementary Files (dataset and simulation program) for those who want to experiment (see the Data Availability Statement section for details).

Related Work
The authors of [7] identified the different services implemented in 5G networks and related each service to a generic use case: Enhanced Mobile Broadband (eMBB), Ultra-reliable and Low Latency Communication (urLLC), and Massive Machine Type Communication (mMTC). Several services appear in the middle of the triangle (see page 12, Figure 2 in [7]), meaning that they may have characteristics of several use cases and different requirements in terms of KPIs and KQIs; for example, Augmented Reality (AR) appears between eMBB and urLLC. Other services belong to specific use cases; for instance, smart cities are related to mMTC.
The relevance of the specific essential requirements may vary significantly, depending on the use case or scenario in which a service is deployed. ITU in [7] (see page 15, Figure 4) shows the importance of some critical requirements with reference to the KPIs/KQIs for three generic use cases, based on a scale with three levels: low, medium and high. For example, for services associated with the urLLC use case, low latency is the most critical requirement, while the peak data rates and network energy efficiency are not key parameters. The KPI and KQI parameters affect the network performance so strongly that we consider it essential to incorporate them both, in an interrelated way, and this is the idea that underpins this work.
Several research groups have implemented smart solutions to address the need for mobile network services, to standardize certain methods, and to improve network performance, for example through the alliance of the Third Generation Partnership Project (3GPP), Next Generation Mobile Networks (NGMN), the European Telecommunications Standards Institute (ETSI), and many other research initiatives [8,9].
The authors of [9] highlighted the possibility of using SL and unsupervised learning algorithms to classify new services within use cases (eMBB, mMTC, and urLLC). They proposed the use of basic requirements or KPIs such as latency, bandwidth and data rate.
In [10], the author demonstrated the possibility of classifying the services demanded by users using technologies such as Software Defined Networks (SDN), Network Function Virtualization (NFV), and ML. The objective of this work was to predict demand and dynamically allocate network resources [10]. The classification was based on bandwidth, latency, jitter, and other KPIs.
In [4], a system was established and a database was created in which SLAs were defined for each network slice. The authors used unsupervised learning techniques to classify 5G services based on the primary system. Although little information was provided on the elements considered in this classification, these are expected to have been related to the KPI requirements.
The Network Machine Learning Research Group (NMLRG) has worked on novel methods of classifying 5G services using ML [11]. Some of the papers published by the NMLRG [11] present results obtained from the application of their models to the classification of 5G services. They focused on KPIs related to network traffic as the main factors when performing system classification. The authors applied several SL techniques and compared them, concluding that Decision Tree and Random Forest perform best on this kind of problem.
KQIs represent a shift from traditional network-based performance parameters (KPIs) to a subjective, quality-based view of performance as perceived by the end user, known as QoE. After they were defined, however, KQIs were not promoted or applied for some time [12].
The different variants used in the works described above were developed based on data collection and analysis to give a better classification of 5G services. These studies focused on KPI requirements, and none of them took into account the KQI parameters as elements for the classification of 5G services.
When KQIs are included, the ML task becomes even more complicated: due to the multiple interdependencies along the E2E route, the measurement of service quality is not a trivial issue, even when it is limited to objective quality rather than subjective experience. KQIs offer a framework that can reflect service performance and quality in an objective way from an E2E perspective, and these indicators can be obtained through direct testing and statistical analysis of the network [12].
In the literature [9][10][11], the classification was carried out considering only KPI parameters. We propose improving the classification (identification) of the services by considering KQI parameters in addition to KPIs.
We estimated that the incorporation of KQIs into the classification of 5G/B5G services would improve the QoS and QoE, and would make the determination of, and compliance with, SLAs more precise. The requirements for establishing SLAs in 5G, based on the services provided and the infrastructure of the provider, are known as Service Level Objectives (SLOs). It is therefore imperative to consider both the KPIs and KQIs, among others [3], since the QoS, which forms the basis for establishing the SLA, is a function of the applications, the network, and subjective factors such as user experience (QoE) [6].

Figure 1 shows a block-level diagram of the proposed system for the classification of services in 5G/B5G networks. The scheme first operates offline until the predictive model has been validated and can learn to classify services effectively with few mistakes. In the next phase, the system is implemented online by the network operators, and the predictive model then classifies new services requested by the UEs. The output of the system corresponds to the requested service classification, and is fed back to the ML algorithm, making the predictive model more efficient. Each of these phases, and each block, is explained later in this article. The proposed system can classify services in next-generation networks. However, it is essential to clarify that this forms only one part of a system that a network operator can use to offer services. Our system needs to connect with the rest of the operator's system, and we must therefore consider two options:

1. Reprogramming the proposed system in the language of the operator's system. This has the disadvantage of requiring the reprogramming of the systems used by each operator, including the cloud, which is not very feasible (due to future maintenance and update issues).

2. Incorporating into the proposed system an appropriate Application Programming Interface (API) to enable communication with the operator's system (accessible from a public or private server). The necessary security must be provided to ensure that the API is only used in an authorized way.
The second option is preferable, since 5G systems provide appropriate APIs to allow a trusted third party to create, modify, delete and monitor instances of the network segments used by the third party, and to manage a set of devices or capabilities including QoS functions [13].

Building the Dataset
The main limitation of this project was obtaining a real dataset containing parameters from operating 5G systems. We therefore built a synthetic dataset (manually) by analyzing KPI and KQI parameters extracted from ITU standards documents, various European projects, and analysis documents produced by telecommunications companies. The dataset was built by taking the threshold values found in the bibliography and randomly oscillating these values, producing diverse values for each KPI/KQI and each service individually.
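This random oscillation of threshold values can be sketched as follows; the parameter names and threshold numbers below are illustrative assumptions, not the values tabulated in Appendix A:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical threshold values for two KPIs of two services, used purely
# for illustration -- the real thresholds come from the standards documents
# listed in Appendix A.
thresholds = {
    "UHD video streaming": {"e2e_latency_ms": 10.0, "bit_rate_mbps": 100.0},
    "Vo5G":                {"e2e_latency_ms": 50.0, "bit_rate_mbps": 0.3},
}

def synthesize_rows(thresholds, rows_per_service=5, spread=0.1):
    """Oscillate each threshold randomly (here by up to +/-10%) so that the
    generated KPI/KQI values stay close to their limiting values."""
    records = []
    for service, params in thresholds.items():
        for _ in range(rows_per_service):
            row = {name: value * (1 + rng.uniform(-spread, spread))
                   for name, value in params.items()}
            row["service"] = service
            records.append(row)
    return pd.DataFrame(records)

df = synthesize_rows(thresholds)
print(df.shape)  # (10, 3): 5 synthetic samples per service
```

The same pattern scales to all 13 KPI/KQI columns and nine services of the full dataset.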
The first block of the scheme shown in Figure 1 corresponds to our database, which we used to train the ML algorithm to validate and verify the predictive model. We created the database manually, using parameter values corresponding to the KPIs and KQIs of the selected services in Comma Separated Values (CSV) format. The documents consulted in this stage were related to standards and various project reports on 5G networks, such as 3GPP [14][15][16][17], the 5G Public-Private Partnership (5G-PPP) [2], NGMN [18,19], Speed [20,21], 5G America [4], International Telecommunications Union (ITU) [7,22,23], Huawei [24], and others [12,[25][26][27]. The selected parameter values were the standard threshold values, and these were manipulated randomly until we obtained values that were sufficiently close to their limiting values. Appendix A shows the tables with the threshold parameters for the extracted KPI/KQI parameters.
The database contained 165 rows and 14 columns. The rows held the parameter values of the 5G services to be classified; the first 13 columns contained the KPI and KQI values, while the last column corresponded to the labels of the 5G services. Table 1 shows a fragment of the database, in which some values of the KPIs, KQIs, and 5G services can be seen. This project is a work in progress, and the database can grow over time; the proposal here is to test the combination of KPI + KQI, and as the database grows, we will continue to test our hypothesis. The dataset is available; see the Data Availability Statement section for the link.
In this project, we work on a classification problem in which different labels (5G services) are used. We need to assign a label to each element to be classified in order to distinguish them. The review and attribution of labels can be done manually or computationally, using a specifically designed program. When a set of labeled data, comprising both the characteristics of the services and their labels, is used with ML to solve a classification problem, we are dealing with an SL scheme for the classification of 5G/B5G services. It is therefore necessary to label the 5G services in the database: each label (represented by the variable y) corresponds to the parameters or characteristics of an arriving service (represented by the variable x; see Figure 1). The labeling of the data is carried out before creating the algorithm's predictive model, so it is possible to know which label (y) corresponds to the parameters (x) of each 5G service in the database (see Table 1).
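A minimal sketch of this labeled-data layout using pandas; the column names and values here are hypothetical stand-ins for the real CSV fragment shown in Table 1:

```python
import io
import pandas as pd

# Tiny stand-in for the real 165 x 14 CSV database; in practice this would
# be pd.read_csv("dataset.csv") on the supplementary file.
csv_fragment = io.StringIO(
    "e2e_latency_ms,bit_rate_mbps,service\n"
    "9.5,101.2,UHD video streaming\n"
    "48.7,0.31,Vo5G\n"
)
df = pd.read_csv(csv_fragment)

# Features (x) are every column except the last; the label (y) is the last
# column, holding the 5G service name.
x = df.iloc[:, :-1]
y = df.iloc[:, -1]
print(x.shape, list(y))
```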

ML Algorithm and Predictive Model
The classification of services carried out in this work is based on the following premise: the services are in the form of a series of parameters (x) determined by the KPIs and KQIs that define them, and the corresponding labels (y) must be assigned according to these parameters.
As mentioned above, the descriptive parameters of the services to be classified are held in a single database, and it is therefore necessary to partition it into two datasets, one of which is used to train the algorithm and the other to test it. Splitting the database generates four new variables: Xtrain, Xtest, Ytrain, and Ytest. The training variable Xtrain contains the input values for the 5G services selected to train the ML algorithm, and Ytrain contains their respective output labels. The other two variables, Xtest and Ytest, correspond to the input and output variables for the testing and validation phase of the predictive model. The training phase involves passing training data to the ML algorithm to allow it to learn. The ML algorithm develops a function based on the training data (Xtrain) that provides the correct answer (Ytrain). Using Xtrain and Ytrain, the algorithm learns, and a function f(Xtrain) = Ytrain is generated that identifies patterns in the training data, allows the attributes of the input data to be assigned to the target data (representing the answer to be predicted), and generates a model that captures these patterns.
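The split into Xtrain, Xtest, Ytrain, and Ytest can be sketched with scikit-learn; the data here are a synthetic stand-in with the same 165-row, 8-feature shape as the KPI-only dataset, not the real service records:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 165 x 8 KPI dataset (the real data come from
# the CSV database described in the text).
x, y = make_classification(n_samples=165, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)

# 80/20 split producing the four variables described above.
Xtrain, Xtest, Ytrain, Ytest = train_test_split(x, y, test_size=0.2,
                                                random_state=0)

clf = DecisionTreeClassifier(random_state=0)
clf.fit(Xtrain, Ytrain)                 # learn f(Xtrain) = Ytrain
print(Xtrain.shape, Xtest.shape)        # (132, 8) (33, 8)
print(clf.score(Xtest, Ytest))          # accuracy on the held-out 20%
```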
An ML algorithm is then applied whose objective is to create a function y = f(x) capable of predicting the label corresponding to any input object x (in the proposed system for the classification of services, x represents the KPIs and KQIs of a 5G service). The ML algorithm must be trained with a set of parameters or characteristics of the different services to be classified. This training allows new known input values (x) to be assigned new, previously unknown labels (y), since after the first training, the predictive model can provide results for new data. This means that once the ML algorithm is trained, the predictive model it generates is ready to predict or classify the requested services (see Figure 1).
The result is a predictive model that can classify 5G services, which is generated by training the ML algorithm with the Xtrain data. However, the Xtest data are not used in the training of the ML algorithm, which may mean that the generated predictive model cannot classify services correctly; there is a possibility that this model is prone to underfitting or overfitting, and it is therefore necessary to validate the ML algorithm to ensure that the predictive model is effective. If the predictive model shows overfitting, it is not very useful, since a model that repeats the labels of the samples it has just seen would achieve a perfect score but would not be able to predict unknown labels. To avoid this problem, the validation block of the ML algorithm applies a method based on the cross-validation technique.
If the result of the validation stage is similar to those of the evaluation and training stages, the trained model is correct, and there is no indication of overfitting. This validation indirectly affects the final evaluation of the predictive model. When the model is tested with the new Xtest data, the values of the metrics must be similar to those from the validation stage, as this indicates that the chosen algorithm works effectively.
When the algorithm has been trained and validated, the next step is to test the predictive model to determine whether it can predict new and future data. The test block of the model addresses this by carrying out a prediction test with Xtest; in Figure 1, Y = f (Xtest) represents this. The output Y from this block is a vector of the different 5G services generated by the predictive model, which corresponds to the test results of the prediction model with the variable Xtest.
A prediction Y can be compared with previously known data as the target response (Ytest) in order to determine the quality of the predictions from the model, and this test can therefore be used as a basis for predictive precision for future data. Hence, a comparison between the Y and Ytest vectors can describe the verification or validation ability of the model.

Validation of the Predictive Model
To validate the predictive model, it is necessary to determine whether the values obtained for Y are the expected ones. Performance metrics allow us to confirm the effectiveness of the model. The relationship between Ytest and Y is used to generate the performance measures and to construct the confusion matrix shown in Table 2. A confusion matrix is so named because it visualizes the performance of the predictive model and reveals where labels are confused with each other. The columns of the matrix represent the number of predictions for each label (Y) made by the predictive model, while each row represents the actual label for the test values (Ytest), as follows [28]:

• True Positives (TP): Values belonging to a particular class that the model correctly predicts as that class.
• False Positives (FP): Values that the model predicts as belonging to a class when they actually belong to a different class.
• False Negatives (FN): Values that belong to a particular class but that the model classifies as a different class (incorrect prediction).
• True Negatives (TN): Values that do not belong to a given class and are correctly classified as not belonging to it.
A series of metrics can be derived from the results in the confusion matrix of Table 2 and used to evaluate the performance of the predictive model, as follows [29]:

1. Accuracy: The ratio of the number of correct predictions (TP and TN results) made by the model to the total number of predictions. In other words, this reflects how often the predictive model's classification is correct:

Accuracy = (TP + TN) / (TP + TN + FP + FN) (1)

It is the most direct measure of the quality of the classification, although it is less appropriate when the labels of the output variables are not balanced (unbalanced data), i.e., when the labels do not occur in similar quantities.

2. Precision: The ratio of the number of correct positive predictions to the total number of positive predictions, i.e., how reliable the model's positive predictions are:

Precision = TP / (TP + FP) (2)

3. Recall: The ratio of the number of correct positive predictions to the total number of actual positive instances. In other words, it represents the sensitivity of the predictive model in terms of detecting positive instances:

Recall = TP / (TP + FN) (3)

4. F1 score: The harmonic mean of precision and recall. A higher score represents a better model. It provides a good indicator of the overall accuracy of the predictive model, while precision and recall provide information on specific areas:

F1 = 2 × (Precision × Recall) / (Precision + Recall) (4)

5. Matthews correlation coefficient (MCC): A measure that is unaffected by unbalanced datasets; MCC generates a high score only if the predictor correctly predicts the majority of positive data instances and the majority of negative data instances. It ranges over the interval [−1, +1], with the extreme values −1 and +1 reached in the case of perfect misclassification and perfect classification, respectively, while MCC = 0 is the expected value for a coin-tossing classifier [30]:

MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)) (5)
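These five metrics can be computed directly with scikit-learn. The label vectors below are hypothetical examples; note that for multi-class problems, precision, recall, and F1 must be averaged across the service classes:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)

# Illustrative true (Ytest) and predicted (Y) labels for three services.
Ytest = ["Vo5G", "Vo5G", "e-health", "ITS", "ITS", "e-health"]
Y     = ["Vo5G", "e-health", "e-health", "ITS", "ITS", "e-health"]

print("Accuracy :", accuracy_score(Ytest, Y))
# 'macro' averaging weights every service class equally, which matters
# when the classes are unbalanced.
print("Precision:", precision_score(Ytest, Y, average="macro"))
print("Recall   :", recall_score(Ytest, Y, average="macro"))
print("F1 score :", f1_score(Ytest, Y, average="macro"))
print("MCC      :", matthews_corrcoef(Ytest, Y))
```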
If the values of the metrics for the predictive model are satisfactory, the offline work phase is terminated, and the model is ready to be used online by a network operator to classify new services requested by the UEs. If the results achieved in terms of the metrics are not as expected, the entire cycle must be repeated, starting from the training of the ML algorithm, until a reasonable rate of success is observed, so that the model will generate fewer mistakes in the future. In the latter case, one or more of the following actions can be taken:

• Increasing the volume of data used to train the ML algorithm and test the predictive model.
• Choosing another ML algorithm.
• Making the ML algorithm used in the simulation simpler or more complex (from a structural point of view) to achieve better precision.
We now have a predictive model that is capable of classifying 5G services based on their KPIs and KQIs. When this is implemented online, UEs request new services (represented in the lower part of Figure 1), and the model takes as its input a vector formed of the KPIs and KQIs of the requested services. When classifying the services, our system also uses an output tag and the characteristics (KPI/KQI) of the service, and feeds them back into the system database. The objective of this approach is to take advantage of each requested service by incorporating it into the database and repeating the entire procedure of retraining the ML algorithm until a new, more robust predictive model is generated that provides a better classification.
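A minimal sketch of this feedback loop, with synthetic stand-in data (13 KPI/KQI columns, nine service labels); the function name is illustrative, not part of any published implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_db = rng.random((165, 13))            # 13 KPI/KQI columns
y_db = rng.integers(0, 9, size=165)     # 9 service labels (encoded 0-8)

model = RandomForestClassifier(n_estimators=5, random_state=0).fit(X_db, y_db)

def classify_and_feed_back(x_new, X_db, y_db, model):
    """Classify a requested service, append its KPI/KQI vector and label
    to the database, and retrain to regenerate the predictive model."""
    label = model.predict(x_new.reshape(1, -1))[0]
    X_db = np.vstack([X_db, x_new])
    y_db = np.append(y_db, label)
    model = RandomForestClassifier(n_estimators=5,
                                   random_state=0).fit(X_db, y_db)
    return label, X_db, y_db, model

label, X_db, y_db, model = classify_and_feed_back(rng.random(13),
                                                  X_db, y_db, model)
print(X_db.shape)  # (166, 13): the new service has been added
```

In practice, retraining from scratch on every request would be expensive; an operator might batch the feedback and retrain periodically instead.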

Simulation Results
To determine whether the inclusion of KQIs improves the predictive service classification model, we performed two simulations. The first considered only the KPIs, while the second also incorporated the KQIs.
We first explain and define the scenario and conditions used in the simulations. The necessary elements are the SML algorithms, a programming language, a development platform, the 5G services to be classified, and the parameters of their KPIs and KQIs.
For the validation scenario and to simulate the proposed system, we used the following SL algorithms: Decision Tree, Random Forest (with five trees), Support Vector Machine (SVM) with a linear kernel, K-Nearest Neighbors (KNN, K = 3), and a Multi-Layer Perceptron Classifier (MLPC), using the Python language and the Anaconda Navigator platform with Jupyter Notebook as the IDE. We considered nine essential 5G services to be classified: Ultra High Definition (UHD) video streaming, immersive experience, connected vehicles, e-health, industry automation, video surveillance, smart grid, Intelligent Transport Systems (ITS), and Voice over 5G (Vo5G). The selected KPI parameters were E2E latency, jitter, bit rate, packet loss rate, peak data rate DownLink (DL), peak data rate UpLink (UL), mobility, and service reliability. The KQI parameters were service availability, user experience data rate DL/UL, survival time, and interruption time.
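The five classifiers, with the hyperparameters stated above, can be instantiated in scikit-learn as follows (a sketch; any hyperparameters not stated in the text are left at their library defaults):

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# The five SL algorithms used in the simulations.
models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=5, random_state=0),
    "SVM":           SVC(kernel="linear", random_state=0),
    "KNN":           KNeighborsClassifier(n_neighbors=3),
    "MLPC":          MLPClassifier(random_state=0, max_iter=1000),
}
print(sorted(models))
```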
In the first simulation, we worked with the KPIs alone. The dataset had dimensions of 165 × 9, where the first eight columns represented the KPIs and the last contained the labels of the services. We divided the database into two parts: 80% (132 rows) of the data (Xtrain) were used to train the algorithms and, once trained, to generate the predictive models, while the remaining 20% (Xtest) were used to test them. A model may be prone to underfitting or overfitting, meaning that it works perfectly for the training data (Xtrain), which are already known, but its accuracy may be lower for new services (Xtest). According to [29,31], there are two possible approaches to avoiding overfitting: increasing the volume of the database, or reserving additional data by dividing the dataset into three parts (training, validation, and testing). Increasing the amount of data was difficult because there were insufficient known data for the 5G services; hence, additional data were reserved, and the K-Folds cross-validation technique was applied with K = 10 [32], giving the results shown in Table 3. It should be noted that all of these data formed part of the original dataset, and did not constitute three new datasets.

Table 3. Results for the accuracy in the cross-validation stage for the first simulation (KPIs).

We applied Equations (1)-(5) to the results obtained from the confusion matrices to evaluate the performance of the predictive models; the results are shown in Table 4.

In the second simulation, we incorporated the user quality parameters (KQIs) and repeated the procedure (with a few differences from the previous simulation). The KQI parameters considered here were service availability, user experience data rate DL/UL, survival time, and interruption time. The database of 165 rows was kept, with five additional columns corresponding to the KQI parameters.
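The K-Folds validation step (K = 10) can be sketched as follows, again with a synthetic stand-in for the 165 × 8 KPI dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 165-row KPI dataset.
x, y = make_classification(n_samples=165, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)

# 10-fold cross-validation: the mean accuracy across the folds is the kind
# of per-model figure reported in Table 3.
scores = cross_val_score(
    RandomForestClassifier(n_estimators=5, random_state=0), x, y, cv=10)
print(len(scores), round(scores.mean(), 3))
```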

We used the same functions to create and train the ML algorithms, resulting in the same set of SL algorithms. Again, we used the K-Folds cross-validation technique with K = 10 [32] to validate the ML algorithms, and obtained the results shown in Table 5. Figure 3 shows the confusion matrix obtained for each model in this simulation.

Table 5. Results for the accuracy in the cross-validation stage for the second simulation (KPIs + KQIs).

We obtained the performance metrics for the predictive models based on the newly generated confusion matrices; the results are shown in Table 6. From Table 6, we can see that the KNN model is not suitable for our problem, as its accuracy was inadequate. Furthermore, the other models improved their metrics in this second simulation, with the best metrics obtained by Decision Tree, Random Forest, and SVM.

To verify whether each predictive model was satisfactory, we created a function to compare the accuracy obtained in the cross-validation stage with the accuracy of the testing stage. We considered a model acceptable if the difference did not exceed 5%. The result obtained for the SVM showed a difference of 8.41%, so we conclude that this model may be overfitting. The differences for the Decision Tree and Random Forest were 0.69% and 1.62%, respectively, which means that the predictive models generated by these two algorithms are not overfitting. If a predictive model is overfitting, we apply the third option mentioned above, for example by limiting the maximum depth of the Random Forest.
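This acceptance check can be expressed as a small function; the accuracy figures in the example are hypothetical values chosen only so that the gaps match the 8.41% and 0.69% differences quoted above:

```python
def check_overfitting(cv_accuracy, test_accuracy, tolerance=0.05):
    """Accept the model if the cross-validation and testing accuracies
    differ by no more than 5 percentage points."""
    gap = abs(cv_accuracy - test_accuracy)
    return gap <= tolerance, gap

# Hypothetical accuracy pairs reproducing the quoted gaps.
ok_svm, gap_svm = check_overfitting(0.9700, 0.8859)  # gap 8.41% -> reject
ok_dt, gap_dt = check_overfitting(0.9700, 0.9631)    # gap 0.69% -> accept
print(ok_svm, ok_dt)  # False True
```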
We determined that either Decision Tree or Random Forest can be used to solve the service classification problem presented here, in agreement with the results obtained by the authors of [11]; we chose Random Forest as the predictive model in our proposal. Figures 4 and 5 show schematics of one of the trees in the Random Forest in each simulation. In the first simulation, we can see the KPIs that the tree uses to classify a 5G service; in the second simulation (Figure 5), the same tree incorporates KQIs to improve its classification and obtain better metrics.

As expected, when the KQI parameters were incorporated to classify 5G services, the predictive model learned to classify the services more effectively. Figure 6 shows the average results in terms of the evaluation metrics obtained by the Random Forest model for both simulations. We can see that there were increases of 3% in accuracy, 2.5% in precision, 1.6% in recall, 2.4% in F1 score, and 3.4% in MCC. The modest sizes of these improvements arise from the small amount of data available for the simulation: with more values in the database, the ML algorithm would have more data to learn from and more data to classify, and the percentages for the predictive model would also increase. All of these metrics represent aspects of the performance of the predictive model of the proposed 5G service classifier system. Since these metrics increased, the performance of the proposed system also increased, meaning that more effective service classification was achieved when both KPIs and KQIs were considered.

Conclusions
When selecting network slices in new generation networks (5G/B5G), the use of KPI and KQI parameters is crucial for identifying and characterizing each service requested by the UE. This procedure allows service providers to allocate resources ad hoc for each service and, implicitly, to maintain an appropriate QoS. A good classification scheme can improve network and service management and SLA compliance and, in consequence, the QoE perceived by users. This paper has proposed a system for classifying services in new generation networks based on ML. The predictive model, which is in charge of classifying 5G services, is a fundamental block of the proposed service classifier system. The main limitation of the project was the lack of a real operating 5G dataset; to achieve the best possible classification results in our system, it was necessary to create a dataset containing KPI/KQI parameters extracted from standards documents and project reports.
The SML algorithm generated a predictive model trained and validated using the KPI and KQI parameters to classify each service. We established two situations: classifying the services using only the KPI parameters, and applying both sets of parameters (KPI + KQI). We implemented simulations employing five different SL algorithms (Decision Tree, Random Forest, SVM, KNN, and MLPC), and validated the results with the K-Folds technique.
Analyzing the results produced by the confusion matrices and applying the performance equations makes it possible to compare the proposed ML algorithms. Comparing the simulation results, two approaches showed similar performance: classification by Decision Trees and by Random Forests are the best approaches. A direct comparison with other proposals is not easy when the characteristics or attributes considered differ.
Incorporating KQIs allowed for improvements of 3% in accuracy and 3.41% in MCC for the classification of services using a Random Forest algorithm, as shown in Figure 6. The aim was to prove that including KQIs in addition to KPIs in service classification improves the identification of the services; this hypothesis was supported by the results obtained using ML techniques applied to a new generation telecommunications network.
Future simulations will use a dataset of real operating values for the KPI and KQI parameters from 5G/B5G networks to better characterize the network's performance.

Conflicts of Interest:
The authors declare no conflict of interest.