In this section, the following performance metrics are used to report the results in Tables 3–6: True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN), denoted by $\alpha $, $\beta $, $\gamma $, and $\delta $, respectively. From these counts, the sensitivity, specificity, false negative rate (FNR), false positive rate (FPR), positive predictive value (PPV), and overall accuracy of the IDS are calculated.
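These metrics follow directly from the four counts. As a minimal sketch (the function name and dictionary keys are illustrative, not from the paper), using the paper's notation $\alpha$ = TP, $\beta$ = TN, $\gamma$ = FP, $\delta$ = FN:

```python
def ids_metrics(alpha, beta, gamma, delta):
    """Compute the IDS evaluation metrics used in Tables 3-6.

    alpha = true positives, beta = true negatives,
    gamma = false positives, delta = false negatives.
    """
    return {
        "sensitivity": alpha / (alpha + delta),   # TPR = TP / (TP + FN)
        "specificity": beta / (beta + gamma),     # TNR = TN / (TN + FP)
        "FNR": delta / (alpha + delta),           # 1 - sensitivity
        "FPR": gamma / (beta + gamma),            # 1 - specificity
        "PPV": alpha / (alpha + gamma),           # precision
        "accuracy": (alpha + beta) / (alpha + beta + gamma + delta),
    }
```

For example, `ids_metrics(90, 80, 10, 20)` yields an accuracy of 0.85, with sensitivity and FNR summing to 1, as do specificity and FPR.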

#### 4.6. Accuracy

Attack prediction of RNN-ABC is considered accurate only if it detects attacks with a low false positive rate and high precision.

This paper is an extended version of the publication [11], where different numbers of hidden and input layer neurons were used to design the intrusion detection system. Two training methods were adopted: RNN-IDS, trained with the gradient descent (GD) algorithm, and RNN-ABC, trained with the artificial bee colony (ABC) algorithm. Both methods were trained at learning rates of 0.4, 0.1, and 0.01, and additional comparisons were undertaken to discuss the behaviour of the mean square error (MSE) by analysing the mean of MSE (MMSE), standard deviation of MSE (SDMSE), best mean squared error (BMSE), and worst mean squared error (WMSE) [30] during the RNN training and testing phases.
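The four MSE statistics are simply the mean, standard deviation, minimum (best), and worst (maximum) of the MSE values collected over repeated training runs. A minimal sketch, with illustrative names not taken from the paper:

```python
import statistics

def mse_summary(mse_runs):
    """Summarise MSE values gathered from repeated training runs.

    Returns MMSE (mean), SDMSE (sample standard deviation),
    BMSE (best, i.e. lowest MSE) and WMSE (worst, i.e. highest MSE).
    """
    return {
        "MMSE": statistics.mean(mse_runs),
        "SDMSE": statistics.stdev(mse_runs),
        "BMSE": min(mse_runs),
        "WMSE": max(mse_runs),
    }
```

Applied to, say, `[0.034, 0.036, 0.040]`, this reports a BMSE of 0.034 and a WMSE of 0.040.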

Mathematically, a prediction is considered correct when

$${\psi}_{RNN} = {\psi}_{a},$$

where the actual intrusion in the network is denoted as ${\psi}_{a}$ and ${\psi}_{RNN}$ indicates the attack predicted by the RNN.

#### 4.10. Method-II ABC

In order to validate the performance of RNN-ABC against the results in [11], the complete feature space of NSL-KDD was utilized. The numbers of input and hidden layer neurons were increased from the previous method to 41 input, 41 hidden, and 1 output layer neuron. As mentioned in Table 2, the size of the bee colony remained the same, with 20 employed bees participating in the search for the optimal solution over a total of 100 iterations.
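The paper specifies only the colony size (20 employed bees) and the iteration budget (100); the details of the search loop are not given. The following is therefore a generic, illustrative sketch of an ABC minimizer over an arbitrary cost function (standing in for the RNN's training error), not the authors' implementation; all names and parameters such as `limit` are assumptions.

```python
import random

def abc_minimize(cost, dim, bounds, n_employed=20, iters=100, limit=10, seed=0):
    """Generic artificial bee colony sketch: employed bees refine their own
    food sources, onlookers revisit sources in proportion to fitness, and
    scouts replace sources that stagnate for more than `limit` trials."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_employed)]
    costs = [cost(f) for f in foods]
    trials = [0] * n_employed

    def neighbour(i):
        # Perturb one dimension of source i towards/away from a random partner k.
        k = rng.randrange(n_employed - 1)
        if k >= i:
            k += 1
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(max(cand[j], lo), hi)
        return cand

    def try_improve(i):
        cand = neighbour(i)
        c = cost(cand)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_employed):              # employed-bee phase
            try_improve(i)
        fits = [1.0 / (1.0 + c) for c in costs]  # higher fitness = lower cost
        total = sum(fits)
        for _ in range(n_employed):              # onlooker-bee phase (roulette)
            r, acc = rng.uniform(0, total), 0.0
            for i, f in enumerate(fits):
                acc += f
                if acc >= r:
                    break
            try_improve(i)
        for i in range(n_employed):              # scout-bee phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                costs[i], trials[i] = cost(foods[i]), 0

    best = min(range(n_employed), key=costs.__getitem__)
    return foods[best], costs[best]
```

In an RNN-ABC setting, each food source would encode a candidate weight vector and `cost` would evaluate the network's MSE on the training data, so the colony performs derivative-free training in place of gradient descent.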

After training, the RNN-IDS model was tested against unseen data points and results were collected for the different performance metrics, as shown in Table 3. The empirical results revealed that, in the case of 21 inputs, the accuracy of the proposed RNN-IDS was 91.4%, higher than that reported for artificial neural networks (ANN) in [16]. An IDS is effective if it is highly sensitive to intrusions happening in the network in real time. After training the RNN-IDS with the GD algorithm, the empirical results for Method-II suggest that it was 95.60% sensitive to intrusions, while specificity remained at 69.68%. The system also predicted attacks with a positive predictive value of 98.62%, while the overall accuracy remained at 94.5%.

To check the percentage of error in the RNN-IDS trained using the GD algorithm, Table 4 reports the fraction of error by calculating the MMSE, SDMSE, BMSE, and WMSE. Performance was verified at each learning rate for both the full and the extracted features of the NSL-KDD data set in binary classification. For Method I, with the reduced number of features, at learning rates of 0.4, 0.1, and 0.01 the MMSE remained at $3.44\times {10}^{-2}$, $3.85\times {10}^{-2}$, and $3.99\times {10}^{-2}$, respectively; whereas for Method II, where the complete feature space of the data set was utilized to train the RNN-IDS, the MMSE remained at $3.11\times {10}^{-2}$, $3.47\times {10}^{-2}$, and $3.9\times {10}^{-2}$ at the same learning rates. The empirical analysis supports the results gathered previously in [11], where it was concluded that decreasing the learning rate decreases the error and enhances the efficiency of an intrusion detection system. RNN-IDS with the larger feature space (i.e., Method II) outperformed Method I, with a BMSE of $3.30\times {10}^{-2}$ in contrast to $3.72\times {10}^{-2}$.

As an extension of the results from our previous contribution [11], we have developed a new intrusion detection system using a meta-heuristic algorithm inspired by the food search behavior of honey bees, known as the artificial bee colony; the resulting system is called RNN-ABC. Again, two methods were adopted, with the full and reduced feature spaces, respectively, of the NSL-KDD data set. To understand the learning behavior of the RNN-ABC, training was carried out at learning rates of 0.4, 0.1, and 0.01. After testing the trained RNN-ABC with unknown data points, the results in Table 5 reveal that, for Method I, the sensitivity was 93.98%, with a lowest false negative rate of 6.02% and an accuracy of 93.32%; furthermore, RNN-ABC classified unauthorized access with a high precision of 97.79%. For Method II, the analysis also establishes that increasing the number of neurons in the input and hidden layers of the RNN-ABC resulted in better performance, with a sensitivity of 98.84%. In contrast to Method I, the false positive rate (FPR) decreased from 35.22% to 22.12%. Method II also outperformed Method I in terms of accuracy (95.62%), and the results once again confirmed that decreasing the learning rate (to 0.01) increases the accuracy of intrusion detection systems.

To evaluate the percentage of error in RNN-ABC trained using the ABC algorithm, Table 6 reports the fraction of error by calculating the MMSE, SDMSE, BMSE, and WMSE. The MMSE for Method I was $5.21\times {10}^{-2}$, $5.89\times {10}^{-2}$, and $4.12\times {10}^{-2}$ for the learning rates of 0.4, 0.1, and 0.01, respectively. For Method II, with 42 input and 42 hidden layer neurons, the MMSE was reduced to $3.24\times {10}^{-2}$, $3.94\times {10}^{-2}$, and $3.87\times {10}^{-2}$, respectively, by training the system with the same learning rates. In total, 100 iterations were carried out, in which the employed bees searched for the optimal solution over the given feature space. Method II surpassed Method I, with a BMSE of $1.92\times {10}^{-2}$ and an SDMSE of $2.31\times {10}^{-2}$.

It is evident, from the results presented in Figure 3a,b and Table 3, Table 4, Table 5, and Table 6, that a swarm intelligence-based IDS (e.g., RNN-ABC) trained using the artificial bee colony algorithm outperforms an RNN-IDS trained using the gradient descent algorithm, as in the previous publication [11]. RNN-ABC proved to be more efficient, in terms of a higher true positive rate and a lower false positive rate. Additionally, the high sensitivity value in Method-II showed that, 95.02% of the time, the IDS successfully predicted an attack on the network, while the specificity value of 77.88% shows that RNN-ABC largely avoided falsely classifying normal patterns as intrusions. It is worth noting that there is an inverse relationship between the sensitivity and specificity of a detector, where an increase in one value decreases the other. The better performance of RNN-ABC is related to the learning rate, as the value of the error cost function decreases when the learning rate is decreased. It is also due to the size of the bee colony and the number of employed bees, which help to find the optimal solution for a given problem.