Article

The Use of Ensemble Models for Multiple Class and Binary Class Classification for Improving Intrusion Detection Systems

1 Department of Electronics, BCC of Central South University of Forestry and Tech, Changsha 410004, China
2 Department of Computer Science, Air University, Islamabad 44000, Pakistan
3 Department of Communication Engineering, Hohai University, Changzhou 211100, China
4 Department of Information Science and Engineering, Kyoto Sangyo University, Kyoto 603-8555, Japan
5 College of Computer and Information Sciences, Prince Sultan University, Riyadh 12435, Saudi Arabia
6 College of Engineering, IT and Environment, Charles Darwin University, Casuarina NT 0800, Australia
* Authors to whom correspondence should be addressed.
Sensors 2020, 20(9), 2559; https://doi.org/10.3390/s20092559
Submission received: 7 February 2020 / Revised: 22 April 2020 / Accepted: 27 April 2020 / Published: 30 April 2020
(This article belongs to the Special Issue Sensors for Societal Automation)

Abstract

The pursuit of detecting abnormal behavior inside and outside a network led to systems known as intrusion detection systems, and many researchers have applied soft computing and machine learning in this area. A single classifier alone, however, cannot adequately control network intruders. This limitation led us to perform dimensionality reduction by means of the correlation-based feature selection (CFS) approach, combined with a refined ensemble model. The paper aims to improve the Intrusion Detection System (IDS) by proposing CFS + Ensemble Classifiers (Bagging and Adaboost), which offer high accuracy, a high packet detection rate, and a low false alarm rate. Machine learning ensemble models with base classifiers (J48, Random Forest, and REPTree) were built. Binary as well as multiclass classification was performed on the KDD99 and NSLKDD datasets, with all attacks labeled as anomaly or normal traffic. Class labels consisted of five major classes, namely Denial of Service (DoS), Probe, User-to-Root (U2R), and Root-to-Local (R2L) attacks, plus the Normal class. Experimental results show that our proposed model produces a 0% false alarm rate (FAR) and a 99.90% detection rate (DR) for the KDD99 dataset, and a 0.5% FAR and 98.60% DR for the NSLKDD dataset, when working with 6 and 13 selected features, respectively.

1. Introduction

The growth in how people access and use the Internet has become both a blessing and a liability for everyday online activities. The demand for rapid data transmission on the Internet, together with the need for commensurate security, authentication, and confidentiality of web applications and cloud computing interfaces, has given rise to all kinds of advanced security attacks. Day-to-day Internet usage is becoming complicated because of threats to data security, industrial attacks, and sponsored attacks on social and engineering facilities [1,2]. The complex nature of these attacks demands security systems that are efficient, automated, fast to respond, and accurate, with effective prevention mechanisms in place.
Network intrusion detection systems (NIDS) have been developed by researchers over time that serve the purpose of detecting any suspicious action and intention that will lead to data theft or identity cloning. The fact that there has been a rapid response to security attacks on many web-based applications has not deterred the intruders from discovering loopholes to the networks and sending more sophisticated attacks.
An ExtraTrees classifier used to select applicable features for different types of intrusions, combined with extreme learning machines (ELMs), was proposed by [1]. During attack classification, the multi-class problem was divided into multiple binary classifications, and the authors used subjective extreme learning machines to address class imbalance. Lastly, they implemented the ELM ensemble in parallel on GPUs in order to perform intrusion detection in real time. Their results outperformed all the methods previously in use, achieving 98.24% and 99.76% precision on their datasets for multi-class classification. Their proposal incurred a small overhead, but lacks training on how to distinguish between normal traffic and potential attacks. Meanwhile, a multimodal biometric recognition system based on pattern recognition methods was used for personal identification by [2]. The fingerprint was modified by applying a Delaunay triangulation network. Although their system achieved high precision with a low error rate of 0.9%, it is limited and cannot function as an IDS because it is based on eyelash detection rather than on an Internet or online system.
Another multiclass classifier, using a heterogeneous ensemble model and outlier detection in a combination of numerous approaches and ensemble methods, was developed by [3]. Their study was based on pre-processing that filters global outliers and on the synthetic minority oversampling technique (SMOTE) algorithm to resample the data. They binarized the dataset using the one-vs-one decomposition technique. In addition, Adaboost, random subspace algorithms, and random forest were used as base classifiers in their model. Their proposed model performed better in terms of outlier detection and classification prediction for the multiclass problem, and also outperformed other classical algorithms in common use. The study did not, however, combine filter and wrapper selection methods to investigate the effect of partially removing point outliers from datasets prior to building the classifiers. DoS, Probe, U2R, and R2L were the four types of attacks used by [4] with a random forest model. They implemented ten-fold cross-validation for classification, and feature selection was applied to the dataset in order to reduce dimensionality and remove redundant and unrelated features. Comparing their random forest model with a J48 classifier, their experiments showed better accuracy and DR for the four types of attacks, but they did not use evolutionary computation as a feature selection measure, which could further improve classifier accuracy. The fact is that denial-of-service (DoS) attacks have created massive disruptions to private- and public-sector web applications, many of which never make the news because management fears customer panic and loss of shares. It remains a challenge to create a multiple-class IDS that can withstand multiple attacks while providing higher accuracy, a higher detection rate (DR), and a lower false alarm rate (FAR).
This paper's intention is to develop an intelligent intrusion detection system with high accuracy, a high packet detection rate, and a low false alarm rate. The objectives are to: 1. develop machine learning models for the intrusion detection system; 2. implement and evaluate the proposed solution on network security datasets; 3. propose a data-independent model; 4. achieve high accuracy; 5. achieve a high detection rate; and 6. achieve a low false alarm rate.
Our motivation is to reduce the False Positive Rate (FPR) by applying a dimensionality reduction method based on the Correlation-based Feature Selection (CFS) algorithm.
Our contribution includes:
  • The research performs dimensionality reduction using the correlation-based feature selection (CFS) approach.
  • Machine learning ensemble models with base classifiers (J48, Random Forest, and REPTree) were used to perform simulations.
  • Optimal feature subsets are proposed automatically for a new dataset.
  • FAR and detection rate have a great impact on an IDS, so we propose a novel solution based on machine learning ensemble models combined with the CFS algorithm.
  • Our proposed CFS + Ensemble Classifiers achieve a 0% false alarm rate and a 99.90% detection rate for the KDD99 dataset, and a 0.5% FAR and a 98.60% detection rate for the NSLKDD dataset.
  • Our proposed model was evaluated on two different datasets, and the experimental results are also compared with other recent and important papers in this area.
The remainder of the paper is structured as follows: Section 2 presents the literature review, Section 3 presents the proposed methodology, Section 4 describes the experiments and results, Section 5 discusses the findings, and Section 6 concludes the research and outlines future work.

2. Literature Review

A hybrid smart system with an enhanced decision tree was used by the authors in [5] to design a multiple-classifier system. This was done by applying Adaboost and naïve Bayes with decision trees (NBDT), non-nested generalized exemplar (NNge), and incremental pruning (JRip) rule-based classifiers (NNJR). The system was able to detect network intrusions efficiently; the only limitation of this research is that other data mining approaches were not explored in full. A hybrid IDS based on integrating the probabilistic predictions of a tree into a diverse set of classifiers was proposed by [6]. Their results illustrate a model that gives a much lower false alarm rate and a peak detection rate. Moreover, their proposed model shows better precision than recent IDS models, with precision of 96.27% on KDD'99 and 89.75% on NSL-KDD. In contrast, the authors in [7] used spectral clustering (SC) and a deep neural network (DNN) in their proposal for intrusion detection. Their results indicate that their classifier delivers a real tool for the study and analysis of intrusion detection in large networks and outperforms back-propagation neural network (BPNN), support vector machine (SVM), random forest (RF), and Bayes tree models in detection precision and in identifying the types of irregular attacks in the network.
The hybrid model of [8] is a system designed on network transactions that estimates the intrusion-scope threshold degree from the data's most significant features, which are readily accessible. Their results show that the hybrid approach achieves accuracies of 99.81% and 98.56% for the binary-class and multiclass NSL-KDD datasets, respectively. Nevertheless, further studies were advised to apply optimization techniques to the intrusion detection model, since these are likely to yield a better accuracy rate.
Gini-index-based feature selection can increase the detection accuracy of the ensemble technique by 10%, according to [9]. Other benefits include reducing the false positive rate to 0.05 and improving system performance in terms of the execution time needed to achieve a true positive rate. Nevertheless, reduced feature sets that require less processing time in a distributed setting still need to be applied to improve the detection rate.
An improved conditional variational autoencoder (ICVAE) was combined with a deep neural network (DNN) to design an intrusion detection model known as ICVAE-DNN by [10]. It learns and explores potential sparse representations between network data features and classes, and shows better overall accuracy, detection rate, and false positive rate than nine state-of-the-art intrusion detection methods. Nonetheless, the detection performance on minority and unknown attacks still needs improvement, and adversarial learning could be used to explore the spatial distribution of the ICVAE latent variables to better reconstruct input samples. The machine-learning-based IDS developed by the authors in [11] is based on deep learning. According to the authors, the performance of an IDS may suffer on large network datasets and unbalanced network traffic, which motivates an anomaly network-based IDS. A Deep Belief Networks (DBNs) approach, which positioned deep learning as a swift upsurge of machine learning (ML), was proposed in [12,13]. Following this proposal, deep learning has greatly advanced the extraction of high-level latent features from dataset models. Notwithstanding these successes, several problems related to IDS still exist. The first is the high dimensionality of network data; in many IDS models, feature selection is therefore treated as one of the preprocessing steps [14]. Another is the advancement of the Internet of Things (IoT) and prevalent cloud-based services, together with the emergence of several new attacks: many unidentified attacks do not appear in the training dataset. For instance, in the NSL-KDD dataset considered in [15,16], about 16.6% of the attack samples in the test set did not appear in the training set, which implies that most conventional IDSs achieve poor performance. For an anomaly network-based IDS (A-NIDS), the authors in [17,18] proposed a dependable hybrid approach that incorporates the Adaboost meta-algorithm and the artificial bee colony (ABC) algorithm, intended to achieve an optimal detection rate (DR) at a minimized false positive rate (FPR) [19]. In the study by [20], the ABC algorithm is implemented for feature selection, while the Adaboost meta-algorithm is used for feature classification and evaluation; Adaboost tackles the unbalanced data, while ABC handles the IDS problem optimization. Incorporating both the modified density peak clustering algorithm (MDPCA) and deep belief networks (DBNs) resulted in a novel fuzzy aggregation approach proposed in [21]. The MDPCA part of the algorithm splits the original training dataset into numerous smaller subsets based on the similarity of the training samples' features, and the results of all the sub-DBN classifiers are combined according to the weights of the fuzzy membership. The objective of [22] was to design a system with the capacity to accurately classify traffic into normal and attack classes, scale to huge datasets, and achieve a low false alarm rate. To do so, the authors leveraged the Extreme Learning Machine (ELM) algorithm, an advanced ML algorithm.
Although the ELM algorithm has proved more efficient than the Support Vector Machine (SVM) algorithm in terms of performance, it operates at high speed while sustaining adequate classification ability. The authors further attempted to enhance the performance of the ELM algorithm by including a redesigned kind of Huang's kernel-based ELM, combined with the Multiple Kernel Boost (MKBoost) framework introduced earlier by [3]. A novel approach based on the combination of discretization, filtering, and classification methods using the KDD Cup 99 dataset is presented in [23]. The focus of that research was to drastically minimize the number of features while classifier performance is fully maintained, or even improved. The approach makes use of filters because of their speed and their high suitability for large datasets, with deep learning models applied as classifiers. Bearing in mind the importance of temporal data classification of network attacks, the Long Short-Term Memory (LSTM) network, a variant of recurrent networks, was used to classify the KDD dataset attacks [24]. Several works in the literature [25,26] motivated the development of our proposed approach. A scheme of nested binary trees was used in [26]; the scheme achieved good performance when tested on small UCI datasets, but its computational difficulty grew swiftly with the number of instances. The recent study of [25] integrated both oversampling and binarization with boosting, and indicated that the proposed approach achieved better performance than multiclass learners and the one-versus-all (OVA) framework. Even though runtime information was omitted from the study, oversampling adds substantial computational cost; hence, this method fails to scale proficiently to IDS datasets, which encompass a large number of samples. On the other hand, the authors in [26] implemented random undersampling (RUS) in their method because it can achieve similar performance across all the datasets while mitigating class imbalance.
Several studies have applied sets of binary classifiers to intrusion detection, and a good number of them employed SVM-based classifiers. The authors of [27] presented a simple decision-tree-based OVA model which populates a decision tree structure using a set of class probabilities. An OVA method in [28] was also incorporated into a least-squares SVM technique and analyzed on the KDD dataset. The output showed that, for each of the five classes of traffic, the attack detection rate was approximately 99%; additionally, the authors observed that the best model achieved an average FPR of 0.28%. SVMs in a binary classification setting were employed by [29]. The authors in [30] proposed a composite scheme architecture in which specific classifiers were allocated the task of detecting specific classes. For example, an SVM was allocated to the detection of DoS attacks, while an RBF-based neural network was allocated to the detection of U2R-based attacks. The results of the hybrid classifier were transferred to a different ensemble allocated to the detection of R2L and probe attacks; for this scenario, a definite architecture was defined in advance. A weighting element was included in a scheme of binary SVMs in [31]. The binarization methods tested included one-versus-one (OVO), OVA, directed acyclic graphs, and ECOC, and the OVA model delivered the best performance. The authors observed that the weight, which measures a prediction's level of certainty, was targeted at the unclassifiable regions in which the group of binary classifiers cannot agree on a single class prediction. The model was assessed using a specific subset of the KDDTest+ dataset, and the outputs showed that employing a weighting scheme resulted in better overall performance than the model without one. Individual class performance of binarization approaches has been analyzed in all the above-mentioned works; however, the lowest FPR was achieved in the recent works [32,33,34,35,36], while many other algorithms and DoS attacks were considered by [37,38,39,40,41,42,43,44].

3. Proposed Methodology

This research has five phases, as shown in our proposed methodology in Figure 1. The first phase is data collection. The second phase is data pre-processing: duplicate values in the dataset are removed, inconsistent values are removed, and the dataset is checked for missing values. Data normalization is also performed to bring the whole dataset onto one standard scale, and non-numeric values are converted to numeric through encoding. The third phase is dimensionality reduction, carried out using the CFS method. In the fourth phase, the machine learning ensemble classifiers Bagging and Adaboost are applied. The fifth phase is evaluation, in which this research is compared with other state-of-the-art work that uses the same approach.
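The paper runs all five phases in Weka 3.7; purely as an illustrative sketch (not the authors' actual implementation), the same flow could be approximated in Python with scikit-learn as below. The file name kdd99.csv and the label column name are hypothetical placeholders, and a Bagging ensemble over a CART tree stands in for the Weka classifiers.

```python
# Sketch of the five-phase pipeline; assumes a CSV with a "label" column.
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

# Phases 1-2: collect data, drop duplicate/incomplete rows, encode nominals.
df = pd.read_csv("kdd99.csv").drop_duplicates().dropna()   # hypothetical file
X = pd.get_dummies(df.drop(columns="label"))               # one-hot nominal attrs
y = df["label"]

# Phase 2 (continued): min-max normalization onto one standard [0, 1] scale.
X = MinMaxScaler().fit_transform(X)

# Phase 3 (CFS dimensionality reduction) would be applied here; see Section 3.3.

# Phase 4: 70/30 split, then an ensemble classifier (Bagging over a tree).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10).fit(X_tr, y_tr)

# Phase 5: evaluation against the held-out 30%.
print("test accuracy:", model.score(X_te, y_te))
```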

3.1. Description

This research uses two datasets: the KDD99 dataset and the NSLKDD dataset.

3.1.1. KDD99 Dataset

KDD99 is one of the most famous and oldest datasets used in network security for intrusion detection systems. KDD99 is a derived version of the 1998 DARPA dataset. The KDD99 dataset was developed in an MIT research lab, and it is used by IDS designers as a benchmark to evaluate various methodologies and techniques [40]. KDD99 has 4,900,000 rows and 41 attributes plus one class label. Twenty-two network attacks are listed in the KDD99 dataset [41]. In this research, we performed binary classification as well as multiclass classification on the KDD99 and NSLKDD datasets: we labeled all the attacks as anomaly or normal traffic and then performed experiments. The class labels consist of four major attack categories (DoS, Probe, U2R, and R2L) plus the Normal class. We further classified DoS, Probe, U2R, and R2L in order to detect the sub-categories of these attacks.
Table 1 presents the total numbers of normal and anomaly packets of the KDD99 dataset used in this research: 97,277 normal and 396,731 anomaly packets were used to develop the ensemble machine learning classifiers on which training and testing were performed. In addition, 70% of the KDD99 dataset was used for training and validation, and the remaining 30% was used for testing. The sample counts for KDD99 training and testing are presented in Table 2.
Table 3 presents the attacks used in this research for prediction and their packet counts (sizes). DoS has five sub-attacks; similarly, Probe and R2L each have four sub-attacks.

3.1.2. NSLKDD Dataset

NSLKDD is a refined version of the KDD99 dataset: it contains none of the duplicate values that were in KDD99, and it also has no inconsistent values. NSL-KDD contains 148,517 instances overall for training and testing. The NSLKDD set has 41 features in total; some features are binary, some are numeric, and nominal features are also present. The NSLKDD dataset likewise covers the four major attack categories (DoS, Probe, U2R, and R2L) plus the Normal class.
Table 4 presents the total numbers of normal and anomaly packets in the NSLKDD dataset used in this research. The numbers of anomaly and normal packets used to train and test the machine learning ensemble models are 71,215 and 77,054, respectively. In addition, 70% of the NSLKDD dataset was used for training, and the remaining 30% was used for testing and validation.
Table 5 shows the numbers of packets used to train and test the machine learning ensemble models: 103,789 and 44,481, respectively. The attack counts for NSLKDD and the features of the KDD99 and NSLKDD datasets are presented in Table 6 and Table 7, respectively.

3.2. Pre-Processing

3.2.1. Normalization

After selection of the dataset, data cleaning operations are performed to remove noise and normalize the features. Different normalization techniques exist, but in this research the min-max normalization approach is used, which handles scaling and outlier issues better than z-score normalization. Min-max scaling normalizes values into the range [0, 1]. The equation for min-max normalization is given below:
Z_i = \frac{Y_i - \min(Y)}{\max(Y) - \min(Y)}
Here, Y = (Y_1, Y_2, Y_3, …, Y_n) is the set of features, Y_i is the feature we want to normalize, and Z_i is the normalized feature. After this step, all features carry the same weight and lie within one common range.
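As a minimal sketch of the rule above (assuming NumPy; the paper itself applied this step inside Weka), the normalization can be implemented column-wise as:

```python
import numpy as np

def min_max_normalize(Y):
    """Scale each feature column into [0, 1]: Z_i = (Y_i - min(Y)) / (max(Y) - min(Y))."""
    Y = np.asarray(Y, dtype=float)
    col_min, col_max = Y.min(axis=0), Y.max(axis=0)
    # Guard against constant columns, where max(Y) == min(Y) would divide by zero.
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    return (Y - col_min) / span

# Three samples of two differently scaled features, mapped onto [0, 1].
print(min_max_normalize([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]]))
```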

3.2.2. Data Encoding

Before data encoding, duplicate and inconsistent values had already been removed from the datasets. The next step was to convert the nominal attributes to numeric values, because machine learning algorithms' back-end calculations operate on numeric rather than nominal values. This data encoding step is vital before passing the data to the proposed model.
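As a minimal pandas sketch of this step (the paper performed the equivalent conversion in Weka, and the toy values below are illustrative only), the three nominal KDD-style features can be mapped to integer codes as follows:

```python
import pandas as pd

# Toy frame holding the three nominal features of the KDD-style datasets.
df = pd.DataFrame({
    "protocol_type": ["tcp", "udp", "tcp", "icmp"],
    "service": ["http", "domain_u", "http", "ecr_i"],
    "flag": ["SF", "SF", "REJ", "SF"],
})

# Drop exact duplicates first, then map each nominal value to an integer code.
df = df.drop_duplicates()
for col in ["protocol_type", "service", "flag"]:
    df[col] = df[col].astype("category").cat.codes
print(df)
```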

3.3. Feature Selection

Optimal features not only improve accuracy but also reduce computational cost in terms of time. The main focus of feature optimization is not only to decrease the computational cost but also to find feature subsets that work well with different classifiers and produce better results. In this research, we used the correlation-based feature selection (CFS) method for feature selection.

Correlation-Based Feature Selection (CFS)

Figure 2 illustrates the workflow of the CFS model. Feature selection algorithms not only reduce dimensionality but also select optimal features that produce high results in terms of accuracy, precision, recall, and F1-score. Dimensionality reduction also decreases the computational cost of algorithms. A heuristic evaluation function is used inside the correlation-based feature selection (CFS) algorithm, which is a dimensionality reduction algorithm [45,46,47]. CFS ranks features based on their correlation with the prediction class. CFS examines every feature-vector subset; the subsets retained are highly correlated with the prediction class but uncorrelated with each other. The CFS algorithm considers that features with a low correlation to the prediction class can be ignored, because they play no major role in prediction. On the other hand, it is important to evaluate redundant features, since they are generally strongly associated with each other or with other features. The following equation is used to score a subset of feature vectors:
M_s = \frac{A \, \overline{M}_{cf}}{\sqrt{A + A(A-1)\,\overline{M}_{ff}}}
If S is a feature subset with A attributes, then M_s is the evaluation (merit) of subset S, \overline{M}_{cf} is the average correlation between the class label and the attributes, and \overline{M}_{ff} is the average correlation between attributes, i.e., a measure of how strongly two features are associated with each other [37]. For a classification problem, CFS calculates the symmetric uncertainty shown in Equation (3):
SU = \frac{E(X) - E(X|Y)}{E(X) + E(X|Y)}
In Equation (3), E represents the entropy function, which is calculated using Equation (4) below. Entropy is a measure of the uncertainty of a random variable:
E(X) = -\sum_{y \in X} p(y) \log_2 p(y)
E(X|Y) = -\sum_{w \in Y} p(w) \sum_{y \in X} p(y|w) \log_2 p(y|w)
For all values of X, p(y) represents the prior probability, while p(y|w) is the posterior probability of y given w.
Six features were selected from the KDD99 dataset for the binary class, and 11 features were selected for the 21 attacks of the KDD99 dataset. Similarly, for the NSLKDD dataset, 13 features were selected for both the binary and multiple-attack cases, as shown in Table 8. The working algorithm of correlation-based feature selection, which describes the modalities of the CFS model, is presented below as Algorithm 1.
Algorithm 1: Correlation-based feature selection (CFS) working algorithm.
Input: S(A_1, A_2, A_3, A_4, C) // clean dataset
δ // benchmark threshold value
Output: S_opt // optimal feature vector
  • Start
  • For i = 1 to N do
  • Measure SU_{i,c} for every attribute A_i;
  • If (SU_{i,c} ≥ δ),
  • then append A_i to the list S_list;
  • End for; return S_opt = S_list
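To make Equations (3)–(5) and Algorithm 1 concrete, the following is a minimal Python sketch of the symmetric-uncertainty filter for discrete features; it illustrates the filtering idea under these assumptions rather than reproducing Weka's exact CFS subset search.

```python
import numpy as np
from collections import Counter

def entropy(values):
    """E(X) = -sum_y p(y) * log2 p(y), estimated from observed frequencies."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def conditional_entropy(x, y):
    """E(X|Y) = sum_w p(w) * E(X | Y = w), as in Equation (5)."""
    x, y = np.asarray(x), np.asarray(y)
    return sum((y == w).mean() * entropy(x[y == w]) for w in np.unique(y))

def symmetric_uncertainty(x, c):
    """SU = (E(X) - E(X|C)) / (E(X) + E(X|C)), mirroring Equation (3)."""
    ex, exc = entropy(x), conditional_entropy(x, c)
    return (ex - exc) / (ex + exc) if (ex + exc) > 0 else 0.0

def cfs_filter(columns, c, delta=0.1):
    """Algorithm 1: keep attribute indices whose SU with the class is >= delta."""
    return [i for i, col in enumerate(columns) if symmetric_uncertainty(col, c) >= delta]

# Toy demo: x1 predicts the class perfectly, x2 is noise, so only x1 survives.
c  = [0, 0, 1, 1, 0, 1]
x1 = [0, 0, 1, 1, 0, 1]
x2 = [0, 1, 0, 1, 1, 0]
print(cfs_filter([x1, x2], c))   # -> [0]
```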

3.4. Bagging Classifier

An ensemble method is a technique that combines the predictions from multiple machine learning algorithms to make predictions more reliable than any individual model (see Algorithm 2). Bootstrap aggregating, or Bagging, is a very effective and powerful ensemble approach. In ensembling, multiple classifiers are combined to obtain more accurate results than their individual performance allows. The working of Bagging is given below in the form of pseudocode.
Algorithm 2: Bagging classifier algorithm.
Input: KDD99 and NSLKDD datasets
Training:
  • Selection of the number of samples n for Bagging, and selection of the base classifier C (J48, Random Forest, and REPTree in our case).
  • Dividing the dataset into two subsets (training and testing). Produce further training datasets by sampling with replacement; these datasets are D_1, D_2, D_3, …, D_n.
  • Then, train a base classifier on each dataset D_i to build n classifiers C_1, C_2, C_3, …, C_n.
Testing:
  • Each data object X in the testing dataset is passed to the trained classifiers C_1, C_2, C_3, …, C_n.
  • A label is assigned to every new data object based on a majority vote. For a classification problem, the majority vote assigns the new label to data point X; for a regression problem, the average value is assigned to the new data object X_i.
  • We repeat these steps until every object in the dataset is classified.
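A minimal scikit-learn sketch of Algorithm 2 follows. The paper itself ran Bagging inside Weka; here synthetic data and a CART decision tree stand in for the KDD features and the J48/REPTree base classifiers, which have no exact scikit-learn counterparts.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; in the paper this would be the preprocessed KDD99/NSLKDD features.
X, y = make_classification(n_samples=2000, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# n bootstrap samples D_1..D_n drawn with replacement, one tree C_i per sample;
# each test object is labeled by the majority vote of C_1..C_n.
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                        bootstrap=True, random_state=0)
bag.fit(X_tr, y_tr)
print("bagging test accuracy:", bag.score(X_te, y_te))
```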

3.5. Adaboost Classifier

The goal of the Adaboost classifier is to convert weak classifiers into a strong classifier that produces better results:
H(X) = \operatorname{sign}\left(\sum_{n=1}^{N} \theta_n h_n(x)\right)
Here, h_n represents the n-th weak classifier and θ_n is the corresponding weight for that classifier. The Adaboost classifier is given in Algorithm 3.
Algorithm 3: Adaboost classifier algorithm.
Input: KDD99 and NSLKDD datasets
Training:
  • Selection of the base classifier C;
  • Set the initial weights w_{1,i} ∈ [0, 1] with ∑_{i=1}^{N} w_{1,i} = 1; commonly, w_{1,i} = 1/N;
  • For n = 1 → k, produce a training sample D_n from D using the distribution W_n;
  • Train the base classifier C on the data subset D_n to develop the classifier C_n;
  • Calculate the ensemble error e_n = ∑_i w_{n,i} over the i-th data points in D that C_n misclassifies;
  • If e_n ∈ (0, 0.5), then calculate β_n = e_n / (1 − e_n) and update the next weights as w_{n+1,i} = w_{n,i} × β_n;
  • Normalize the distribution W_{n+1};
  • For any other value of e_n, reset the weights to w_{1,i} = 1/N and continue the process;
  • Return the trained classifiers C_1, C_2, C_3, …, C_n and weights β_1, β_2, β_3, …, β_n.
Testing:
  • Each data object X in the testing dataset is classified by the classifiers C_1, C_2, C_3, …, C_n.
  • For each label y assigned to x by some C_n, calculate m_y(x) = ∑_{n: C_n(x)=y} ln(1/β_n). The class with the maximum value m_y(x) is decided as the class label of x.
  • Repeat step 2 for the testing data and return the output.
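An analogous sketch for Algorithm 3 uses scikit-learn's AdaBoostClassifier with the SAMME variant, which follows the same error-based reweighting idea; again, a shallow CART tree is only a stand-in for the Weka base classifiers.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Each round reweights misclassified samples and gives the weak learner a vote
# derived from its error, mirroring the beta_n update of Algorithm 3.
boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                           n_estimators=50, algorithm="SAMME", random_state=0)
boost.fit(X_tr, y_tr)
print("adaboost test accuracy:", boost.score(X_te, y_te))
```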

3.6. Evaluation Metrics

Various performance metrics are used to evaluate the proposed solution, including precision, recall, F1-measure [48], False Alarm Rate (FAR), Detection Rate (DR), and accuracy. These metrics are based on True Positive (TP), False Positive (FP), False Negative (FN), and True Negative (TN) counts.
The False Positive Rate is the proportion of instances that are actually normal but are classified as the attack class, relative to all truly normal instances:
FPR = \frac{F_p}{F_p + T_n}
Accuracy measures how many instances are correctly classified as the normal and attack classes. It is obtained by summing the correctly classified instances and dividing by the total number of instances, as shown in Equation (8):
Accuracy = \frac{T_p + T_n}{T_p + F_p + F_n + T_n}
The Detection Rate (DR) is the number of attacks detected correctly divided by the total number of attacks in the dataset:
TPR = \frac{T_p}{T_p + F_n}
Precision evaluates the True Positive (TP) entities in relation to the False Positive (FP) entities:
Precision = \frac{T_p}{T_p + F_p}
Recall evaluates the True Positive (TP) entities in relation to the False Negative (FN) entities that were not categorized at all. The mathematical form of recall is given in Equation (10):
Recall = \frac{T_p}{T_p + F_n}
Sometimes, performance assessment with accuracy and recall alone is not sufficient. For instance, if one algorithm has low recall but high precision and another the reverse, the question arises of which algorithm is better. This problem is solved by the F1-score, which averages recall and precision, as shown in Equation (11):
F_1 = \frac{2 \times Precision \times Recall}{Precision + Recall}
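As a short sketch, all of the metrics above can be computed directly from the confusion-matrix counts. The example call uses the binary KDD99 counts reported in Section 4.1 (28,934 normal packets correct, 271 normal packets misflagged, 118,238 anomaly packets correct, 759 anomaly packets missed), treating anomaly as the positive class.

```python
def ids_metrics(tp, fp, fn, tn):
    """Derive FAR, accuracy, DR, precision, recall, and F1 from raw counts."""
    far = fp / (fp + tn)                       # False Alarm Rate (FPR)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                    # also the Detection Rate (DR)
    f1 = 2 * precision * recall / (precision + recall)
    return {"FAR": far, "Accuracy": accuracy, "DR": recall,
            "Precision": precision, "Recall": recall, "F1": f1}

print(ids_metrics(tp=118238, fp=271, fn=759, tn=28934))
```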

4. Experiments

The simulation was performed using Weka 3.7 [49] on an Intel® Core™ i3-4010U CPU @ 1.70 GHz (4 CPUs) with 8 GB of RAM; a Haier laptop running a 64-bit operating system was used. Two datasets were used for the experiments: KDD99 [50] and NSLKDD [51]. The KDD99 dataset is an advanced version of the DARPA 1998 dataset; the main feature that separates KDD99 from DARPA 1998 is that the test data are not drawn from the same probability distribution as the training data, so the test set contains attacks that the training data do not. Similarly, NSLKDD is a refined version of the KDD99 dataset that resolves the duplicate- and inconsistent-value problems KDD99 had.

4.1. Binary Class Experiment Results for KDD99

Table 9 shows that the false positive rate and true positive rate were 0.6% and 99.10%, respectively, for the normal class. Similarly, for the anomaly class, the false positive and true positive rates were 0.9% and 99.40%, respectively. For the normal class, 28,934 packets were detected correctly, and 271 packets were incorrectly detected as anomaly packets. For the anomaly class, 118,238 packets were correctly detected, while 759 packets were incorrectly detected as normal packets. From Table 10, the precision for the normal class was 97.40%, the recall was 99.10%, and the F1-score was 98.30%. Likewise, for the anomaly class, precision and recall were 99.80% and 99.40%, respectively, and the F1-score was 99.60%. The ROC area for both the normal and anomaly classes was 99.90%.
Table 11 shows that, for the normal class, 28,934 packets were detected correctly, and 271 packets were incorrectly detected as anomaly packets. Similarly, for the anomaly class, 118,238 packets were correctly detected, while 759 packets were incorrectly detected as normal packets. From Table 12, the precision for the normal class was 97.40%, the recall was 99.10%, and the F1-score was 98.30%. For the anomaly class, precision and recall were 99.80% and 99.40%, respectively, and the F1-score was 99.60%. The ROC area for both classes was 99.90%.
Table 13 indicates that, out of 148,202 instances, 147,314 were classified correctly, for an accuracy of 99.80%. The false positive rate and true positive rate were 0.6% and 99.20%, respectively, for the normal class. Similarly, for the anomaly class, the false positive and true positive rates were 0.98% and 99.40%, respectively. For the normal class, 28,975 packets were detected correctly, and 230 packets were incorrectly detected as anomaly packets. Likewise, for the anomaly class, 118,339 packets were correctly detected, while 658 were incorrectly detected as normal packets. From Table 14, the precision for the normal class is 97.80%, the recall is 99.20%, and the F1-score is 98.50%. For the anomaly class, precision and recall were 99.80% and 99.40%, respectively, and the F1-score was 99.60%. The ROC areas for the normal and anomaly classes were 99.80% and 100%, respectively.
As shown in Table 15, the correctly detected normal and anomaly packets numbered 28,838 and 118,225, respectively. In addition, 367 packets were wrongly classified as anomalies when they were actually normal packets, and 772 anomaly packets were detected as normal packets.
According to Table 16, using the Bagging J48 classifier, the false positive rate and true positive rate were 0.6% and 98.70%, respectively, for the normal class. Similarly, for the anomaly class, the false positive and true positive rates were 1.30% and 99.40%, respectively.
Table 17 shows that the Bagging random forest classifier detects 28,994 packets correctly as normal and 118,318 packets as anomalies. In addition, 211 packets were detected as anomaly packets that were actually normal, and 679 packets were detected as normal that were actually anomalies.
For the Bagging random forest classifier, the precision, recall, and F1-score for the normal class are 97.70%, 99.30%, and 98.50%, respectively. Similarly, for the anomaly class, precision is 99.80%, recall is 99.40%, and F1-score is 99.60%. Using Bagging random forest, the false positive rate was 0.60% for the normal class and 0.90% for the anomaly class, while the true positive rate was 99.10% for the normal class and 99.40% for the anomaly class, as shown in Table 18.
As shown in Table 19, the correctly detected normal and anomaly packets numbered 29,010 and 118,299, respectively. In addition, 195 packets were wrongly classified as anomalies when they were actually normal, and 698 anomaly packets were detected as normal. The false positive and true positive rates for the normal class are 0.60% and 99.30%, respectively; for the anomaly class, they are 0.70% and 99.40%, respectively, as shown in Table 20. For the Bagging REPTree normal class, the precision is 97.70%, and the recall and F1-score are 99.30% and 98.50%, respectively. For the anomaly class, the precision is 99.80%, the recall is 99.40%, and the F1-score is 99.60%, as shown in Table 20.
Table 21 and Figure 3 indicate that the Perl, Neptune, Smurf, Guess_passwd, Pod, Teardrop, and Land attacks have a 100% TP rate. Only three attacks (Loadmodule, Ftp_write, and Phf) have a very low TP rate. The weighted average TP rate is 99.90% overall, and the FP rates for all attacks are very low. Normal packets achieve 99.80% precision, while Loadmodule, Neptune, Smurf, Teardrop, Portsweep, Imap, and Warezmaster achieved 100% precision. Guess_passwd achieved 93.80% precision, and Portsweep achieved 95.30% precision. Ipsweep and Land achieved 81.60% and 83.30% precision, respectively. Perl and Multihop each achieved 33.33% precision. Back, Satan, and Warezclient achieved 99.80%, 99.10%, and 97.10% precision, respectively. Perl, Neptune, Smurf, Guess_passwd, and Pod achieved 100% recall, as did Teardrop and Land. Normal, Guess_passwd, Portsweep, Ipsweep, Back, Satan, and Warezclient achieved more than 90% F1-score on average. Neptune, Smurf, Pod, and Teardrop each achieved a 100% F1-score. Buffer_overflow, Loadmodule, Ipsweep, Nmap, and Warezclient achieved more than 99% average ROC. Multihop and Warezclient achieved 81.70% and 71.70% ROC, respectively. All other attacks achieved 100% ROC.
The TP and FP rates for the normal class are 99.8% and 0%, respectively, and the precision, recall, and F1-score for the normal class were 99.90%. Similarly, Perl, Neptune, Smurf, Guess_passwd, Pod, Teardrop, Back, Imap, and Phf achieved 100% precision, recall, F1-score, TP rate, and ROC area. Buffer_overflow achieved a 61.50% TP rate, 88.90% precision, 61.50% recall, a 72.70% F1-measure, and a 96.10% ROC area. The Loadmodule attack achieved a 20% TP rate and 20% recall; its precision and F1-measure were 33.33% and 25.00%, respectively. Portsweep achieved a 96.90% TP rate and recall; its precision and F1-measure were 99.30% and 98.10%, respectively. Warezclient, Warezmaster, Multihop, Nmap, and Satan also performed very well in terms of precision, recall, and F1-measure, as shown in Table 22 and Figure 4.
From Table 23 and Figure 5, we can conclude that the Normal class achieved 99.80% precision, 99.70% recall, and 99.80% F1-measure. Loadmodule, Ftp_write, Phf, and Multihop achieved very low results. Perl, Neptune, Smurf, Guess_passwd, Pod, Teardrop, and Back achieved 100% TP rate, precision, recall, and F1-measure. Buffer_overflow, Portsweep, Ipsweep, Land, Imap, Satan, Nmap, Warezmaster, and Warezclient also performed well, achieving on average 90% precision, recall, and F1-measure. From Table 24 and Figure 6, we can conclude that the Normal class achieved 99.80% precision, 99.70% recall, and 99.80% F1-measure. Loadmodule, Ftp_write, and Phf achieved very low results. Perl, Neptune, Smurf, Guess_passwd, Pod, Teardrop, and Land achieved 100% TP rate, precision, recall, and F1-measure. Buffer_overflow, Portsweep, Ipsweep, Back, Imap, Satan, Nmap, Warezmaster, and Warezclient also performed well, achieving on average 90% precision, recall, and F1-measure. From Table 25 and Figure 7, we can conclude that the Normal class achieved 99.80% precision, recall, and F1-measure. Similarly, Buffer_overflow achieved a 61.50% recall and TP rate, 80% precision, and a 69.69% F1-measure. Loadmodule, Perl, Phf, and Multihop achieved very low results. Neptune, Smurf, Guess_passwd, Pod, Teardrop, and Imap achieved 100% TP rate, precision, recall, and F1-measure. Buffer_overflow, Portsweep, Ipsweep, Land, Imap, Satan, Nmap, Warezmaster, and Warezclient also performed well, achieving on average 90% precision, recall, and F1-measure.
Table 26 and Figure 8 show that the Normal class achieved 99.80% precision, 99.70% recall, and 99.80% F1-measure. Loadmodule, Ftp_write, and Phf achieved very low results. Perl, Neptune, Smurf, Guess_passwd, Pod, Teardrop, and Land achieved 100% TP rate, precision, recall, and F1-measure. Buffer_overflow, Portsweep, Ipsweep, Back, Imap, Satan, Nmap, Warezmaster, and Warezclient also performed well, achieving on average 90% precision, recall, and F1-measure.

4.2. Binary Class Experiment Results for NSLKDD

Table 27 indicates that 44,481 packets were used for testing; 44,026 packets were detected correctly as normal or anomaly packets and 455 packets were detected incorrectly, giving Adaboost J48 an accuracy of 98.97%.
In Table 28, the TP rates for the normal and anomaly classes were 99.10% and 98.90%, respectively, while the FP rate was 1.10% for normal packets and 0.90% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.00%, 99.10%, and 99.00%, respectively. Similarly, for anomaly packets, the precision was 99.00%, the recall 98.90%, and the F1-score 98.90%. The ROC area was 99.90% for both normal and anomaly packets.
Table 29 indicates that 44,481 packets were used for testing; 44,072 packets were detected correctly and 409 incorrectly, giving Adaboost random forest an accuracy of 99.08%. The TP rates for the normal and anomaly classes were 99.00% and 99.20%, respectively. The FP rate was 0.8% for normal packets and 1.00% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.30%, 99.00%, and 99.10%, respectively. Likewise, for anomaly packets, the precision was 98.90%, the recall 99.20%, and the F1-score 99.00%. The ROC area was 99.80% for both normal and anomaly packets, as shown in Table 30.
Table 31 indicates that 44,481 packets were used for testing; 44,028 packets were detected correctly and 453 incorrectly, giving Adaboost REPTree an accuracy of 98.98%. The TP rates for the normal and anomaly classes were 98.70% and 99.30%, respectively. The FP rate was 0.70% for normal packets and 1.30% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.40%, 98.70%, and 99.00%, respectively. For anomaly packets, the precision was 99.30%, the recall 99.30%, and the F1-score 98.90%. The ROC area was 99.90% for both normal and anomaly packets, as shown in Table 32.
Table 33 indicates that 44,481 packets were used for testing; 44,039 packets were detected correctly and 442 incorrectly, giving Bagging J48 an accuracy of 99.00%. The TP rates for the normal and anomaly classes were 99.10% and 98.90%, respectively. The FP rate was 1.10% for normal packets and 0.90% for anomaly packets. Precision, recall, and F1-score for normal packets were 99.00%, 99.10%, and 99.00%, respectively. Similarly, for anomaly packets, the precision was 99.00%, the recall 98.90%, and the F1-score 99.00%. The ROC area was 99.90% for both normal and anomaly packets, as shown in Table 34.
Table 35 indicates that 44,481 packets were used for testing; 44,072 packets were detected correctly and 409 incorrectly, giving Bagging random forest an accuracy of 99.08%. The TP rates for the normal and anomaly classes were 99.20% and 99.10%, respectively. The FP rate was 0.90% for normal packets and 0.80% for anomaly packets. Precision, recall, and F1-score were all 99.10% for normal packets, and likewise all 99.10% for anomaly packets. The ROC area was 99.90% for both normal and anomaly packets, as shown in Table 36.
Table 37 indicates that 44,481 packets were used for testing; 44,072 packets were detected correctly as normal or anomaly packets and 409 incorrectly, giving Bagging REPTree an accuracy of 99.08%. The TP rates for the normal and anomaly classes were 99.00% and 98.90%, respectively. The FP rate was 1.10% for normal packets and 1.00% for anomaly packets. Precision, recall, and F1-score for normal packets were all 99.00%. Similarly, for anomaly packets, the precision, recall, and F1-score were all 98.90%. The ROC area was 99.90% for both normal and anomaly packets, as shown in Table 38.
From Table 39 and Figure 9, we can conclude that the Normal class achieved 99.80% precision, recall, and F1-measure. The Neptune class achieved 99.90% precision, 100% recall, and 99.90% F1-measure. Similarly, Warezclient achieved 95.60% precision, 90.20% recall, and 92.80% F1-measure, while Ipsweep achieved 99.50% precision, 90.50% recall, and 94.80% F1-measure. Portsweep achieved above 97% precision, recall, and F1-measure. Teardrop achieved 96.30% precision, 100% recall, and 98.10% F1-measure. For Nmap, the precision, recall, and F1-measure were 78.20%, 96.20%, and 86.30%, respectively. Satan, Smurf, and Pod achieved on average 90% precision, recall, and F1-measure. The Back attack achieved 100% recall, with 99.80% precision and 99.90% F1-measure. Guess_passwd achieved 96.50% precision, 96.80% recall, and 96.70% F1-measure. The Saint, Snmpgetattack, and Snmpguess attacks did not perform well. Warezmaster, Mscan, Apache2, Processtable, Httptunnel, and Mailbomb also achieved promising results for precision, recall, F1-measure, and TP rate.
From Table 40 and Figure 10, we can conclude that the Normal class achieved 99.00% precision, 99.20% recall, and 99.10% F1-measure. The Neptune class achieved 99.70% precision, 100% recall, and 99.80% F1-measure. Similarly, Warezclient achieved 94.40% precision, 95.50% recall, and 95% F1-measure, and Ipsweep achieved 99.60% precision, 90.60% recall, and 94.90% F1-measure. Portsweep achieved above 97% precision, recall, and F1-measure. Teardrop achieved 95.20% precision, 100% recall, and 97.60% F1-measure. For Nmap, the precision, recall, and F1-measure were 77.90%, 96.20%, and 86.10%, respectively. Satan, Smurf, and Pod achieved on average 90% precision, recall, and F1-measure. The Back attack achieved 100% precision, recall, and F1-measure. Guess_passwd achieved 97% precision, 96.50% recall, and 96.80% F1-measure. Saint, Snmpgetattack, and Snmpguess performed well. Warezmaster, Mscan, Apache2, Processtable, Httptunnel, and Mailbomb also achieved promising results for precision, recall, F1-measure, and TP rate.
From Table 41 and Figure 11, we can conclude that the Normal class achieved 98.90% precision, 99.10% recall, and 99% F1-measure. The Neptune class achieved 99.40% precision, 99.90% recall, and 99.60% F1-measure. Similarly, Warezclient achieved 93% precision, 94.70% recall, and 93.90% F1-measure, and Ipsweep achieved 98.40% precision, 90% recall, and 94% F1-measure. Portsweep achieved 96.90% precision, 92% recall, and 94.40% F1-measure. Teardrop achieved 95.20% precision, 100% recall, and 97.60% F1-measure. For Nmap, the precision, recall, and F1-measure were 74.40%, 92.70%, and 84.40%, respectively. Satan, Smurf, and Pod achieved on average 93% precision, recall, and F1-measure. The Back attack achieved 100% recall, with 99.30% precision and 99.60% F1-measure. Guess_passwd achieved 97% precision, 96.50% recall, and 96.80% F1-measure. Saint, Snmpgetattack, and Snmpguess performed well. Warezmaster, Mscan, Apache2, Processtable, Httptunnel, and Mailbomb also achieved promising results for precision, recall, and F1-measure.
From Table 42 and Figure 12, we can conclude that the Normal class achieved 99% precision, 99.10% recall, and 99.10% F1-measure. The Neptune class achieved 99.90% precision, 100% recall, and 99.90% F1-measure. Similarly, Warezclient achieved 95% precision, 92% recall, and 93% F1-measure, while Ipsweep achieved 99% precision, 90% recall, and 94% F1-measure. Portsweep achieved 98.10% precision, 98.40% recall, and 98.20% F1-measure. Teardrop achieved 96.30% precision, 100% recall, and 98.60% F1-measure. For Nmap, the precision, recall, and F1-measure were 78%, 96%, and 86%, respectively. In addition, 91% precision, 97% recall, and 94% F1-measure were achieved for the Satan attack. Smurf and Pod achieved on average 96% precision, recall, and F1-measure. The Back attack achieved 100% recall, with 99.30% precision and 99.60% F1-measure. Guess_passwd achieved 97% precision, 96.50% recall, and 96.80% F1-measure. The Saint, Snmpgetattack, and Snmpguess attacks did not perform well. Warezmaster, Mscan, Apache2, Processtable, Httptunnel, and Mailbomb also achieved promising results for precision, recall, and F1-measure.
From Table 43 and Figure 13, we can conclude that the Normal class achieved 99.10% precision, 99.20% recall, and 99.20% F1-measure. The Neptune class achieved 99.80% precision, 100% recall, and 99.90% F1-measure. In addition, Warezclient achieved 93% precision, 98.90% recall, and 96% F1-measure, and Ipsweep achieved 99.70% precision, 90.90% recall, and 95.10% F1-measure. Portsweep achieved 99% precision, 96% recall, and 97% F1-measure. Teardrop achieved 96.30% precision, 99.60% recall, and 97.90% F1-measure. For Nmap, the precision, recall, and F1-measure were 78.60%, 95.30%, and 86.20%, respectively. In addition, 91.90% precision, 96.70% recall, and 94.20% F1-measure were achieved for the Satan attack. Smurf achieved 94% precision, 99% recall, and 97% F1-measure, while Pod achieved on average 96% across precision, recall, and F1-measure. The Back attack achieved 100% precision, recall, and F1-measure. Guess_passwd achieved 97% precision, 96.50% recall, and 96.80% F1-measure. Saint, Snmpgetattack, and Snmpguess performed well. Warezmaster, Mscan, Apache2, Processtable, Httptunnel, and Mailbomb also achieved promising results for precision, recall, and F1-measure. All the attacks achieved above 90% results for all the evaluation metrics.
From Table 44 and Figure 14, we observe that the Normal class achieved 98% precision, 99.20% recall, and 99.00% F1-measure. The Neptune class achieved 99.30% precision, 99.90% recall, and 99.60% F1-measure, while Warezclient achieved 98% precision, 92% recall, and 95% F1-measure. Similarly, Ipsweep achieved 98% precision, 94% recall, and 94% F1-measure. Portsweep achieved 96% precision, 91% recall, and 93% F1-measure. Teardrop achieved 95.60% precision, 100% recall, and 97.70% F1-measure. For Nmap, the precision, recall, and F1-measure were 77%, 92%, and 84%, respectively. In addition, 91% precision, 93% recall, and 92% F1-measure were achieved for the Satan attack. Smurf achieved 94% precision, 99% recall, and 97% F1-measure, while Pod achieved on average 97% across precision, recall, and F1-measure. The Back attack achieved 99.80% precision, 100% recall, and 99.90% F1-measure. Guess_passwd achieved 98% precision, 94% recall, and 96% F1-measure. Saint, Snmpgetattack, and Snmpguess performed well. Warezmaster, Mscan, Apache2, Processtable, Httptunnel, and Mailbomb also achieved promising results for precision, recall, and F1-measure. These attacks achieved above 95% results for all the evaluation metrics.

5. Discussion

In this section, we will discuss our key outcomes as well as comparison with previous work. Therefore, Table 45, Table 46 and Table 47 provide the detailed results of our whole work. Hence, we will discuss them one by one in detail.
From Table 45, we conclude that, with base machine learning classifier j48, random forest, and Reptree, we used Adaboost and Bagging to make predictions more accurate on KDD99 and NSLKDD datasets, for binary and multi classes. J48, Random Forest, and Reptree with Adaboost achieved 99.90 true positive (TP) rate and 00.00% false positive (FP) rate, respectively. Meanwhile, precision recall and F1-score were 99.90%, respectively, for all base classifiers with Adaboost and Bagging, respectively, using the KDD99 dataset. On the NSLKDD dataset, true positive and false positive scores were 98.40% and 00.60%, respectively, using Adaboost with a j48 classifier. Adaboost with random forest achieved a 98.50% TP rate and 00.60% FR rate, respectively. Precision was 98.30%, recall was 98.50%, and F1-score was 98.40%, respectively. ROC area for Adaboost J48 and random forest were 99.90% and 99.80%, respectively. TP and FR rate for Adaboost Reptree was 98.20% and 00.80%, respectively. Precision was 97.90%, recall was 98.20%, and F1-score was 98.00%, respectively. TR rate for Bagging j48 was 98.50%, for Bagging random forest was 98.60%, and for Bagging reptree was 98.20%, respectively. Precision, recall, and F1-score for Bagging J48 was 98.40%, 98.50%, and 98.30%, respectively. FR rate for Bagging J48, random forest, and reptree was 00.60%, 00.50%, and 00.70%, respectively. For Bagging random forest, precision was 98.40%, and recall and F1-score were 98.60% and 98.40%, respectively. Bagging reptree achieved 98% precision, and 98.20% and 98.10% recall and F1-score, respectively.
Similarly, from Table 46, we can conclude that Adaboost and Bagging with base classifiers j48, random forest, and reptree achieved high accuracy, TR rate, precision, recall, and F1-measure and improved FP rate for both KDD99 and NSLKDD datasets. Adaboost with J48, random forest, and reptree achieved 99.30%, 99.10%, and 99.40% TP rate and 00.90%, 00.90%, and 00.70% FP rate, respectively, on the KDD99 dataset. Precision and recall scores for Adaboost J48 were both 99.30%, respectively. F1-score was 98.30% for J48 with Adaboost for multiclass. Similarly, random forest with Adaboost achieved 99.10% precision, recall, and F1-Scores, respectively. For Adaboost with reptree, we achieved 99.40% precision, recall, and F1-Score, respectively. With Bagging j48, we achieved 99.20% TP rate; likewise, for random forest and reptree, the FP rate was 99.40%, respectively. FP rate for J48 was 01.10% and 00.70%for Bagging random forest and Bagging reptree. Precision, recall, and F1-score for Bagging J48 was 99.20%, respectively. In addition, 99.40% precision, recall, and F1-score was achieved with Bagging random forest. In addition, for the nslkdd dataset using Adaboost with j48, we achieved a 99.00% TP rate and 01.00% FP rate, respectively. Furthermore, 99.10% and 00.90% TP and FP rate were achieved using Adaboost random forest. using Adaboost with reptree, we achieved 99.00% TP rate and 01.00% FP rate, respectively. Precision, recall, and F1-Score using Adaboost j48 was 99.00%, respectively. random forest with Adaboost achieved 99.10% precision, recall, and F1-Score, respectively. reptree with Adaboost achieved 99.00% precision, recall, and F1-score, respectively. In addition, 99.00%, 99.10%, and 98.90% TP rate were achieved with Bagging j48, random forest, and reptree, respectively. Furthermore, a 01.00% FP rate was achieved using j48 Bagging, 00.90% was achieved using random forest, and 01.10% was achieved with reptree. j48 with Bagging achieved 99.00% precision, recall, and F1-score, respectively. In addition, 99.10% precision, recall, and F1-score were achieved with random forest. Thus, for reptree, precision and recall were both 98.90% respectively, and F1-score was 98.10%, respectively.
Table 47 compares our model with other ensemble approaches. The DAR Ensemble [52] achieved 78.88% accuracy. Naive Bayes with KNN [53] achieved 82.00% accuracy and a 5.43% FP rate. Feature selection with SVM [54] achieved an 82.37% detection rate and a 15.00% FP rate. GAR Forest with symmetrical uncertainty [55] achieved an 85.00% detection rate and a 12.20% FP rate. Bagging J48 [56] achieved an 84.25% detection rate and a 2.79% FP rate. PCA + PSO [57] achieved a 99.40% detection rate and a 0.60% FP rate. Our proposed model, Bagging with Random Forest, achieved a 99.90% detection rate and a 0.00% FP rate on the KDD99 dataset, and a 98.60% detection rate and a 0.50% FP rate on the NSLKDD dataset, an improvement over the other state-of-the-art approaches.

6. Conclusions and Future Work

In this paper, a machine learning based intrusion detection system has been proposed. During experimentation, various ensemble machine learning algorithms were implemented on the NSLKDD and KDD99 datasets. First, the NSLKDD and KDD99 datasets were collected. The collected data were then transformed into two binary classes, Attack and Normal, and were also kept as multiple attack classes (21 classes for both KDD99 and NSLKDD). At the initial stage of the experiment, the datasets were prepared through several steps, including pre-processing, min-max normalization, feature optimization, and dimensionality reduction. After selecting the best features, we applied different machine learning algorithms to both datasets. Ensemble Random Forest outperformed all other methods in terms of accuracy, training time, and false positive rate. The experimental results show that our method performs better in terms of detection rate, false alarm rate, and accuracy on both KDD99 and NSLKDD. We achieved an FPR of 0.0% on the KDD99 dataset and 0.5% on the NSLKDD dataset, and an average testing accuracy of about 99% on both datasets. A limitation of this work is that some attacks have 0% classification accuracy: their class sizes are smaller than 20 instances, whereas the other attack classes are large. In future work, we will address this problem with data balancing methods such as SMOTE, which balances the classes and improves the performance of the minority classes.
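As a concrete starting point for this future work, the following sketch attaches the proposed SMOTE balancing step to the training split, using the imbalanced-learn package. X_train and y_train are placeholders for the encoded training data described above, and the k_neighbors value is an assumption forced by the very small minority classes (e.g., Perl, with only 3 samples in KDD99).

```python
# Minimal sketch of the SMOTE balancing step proposed as future work.
# X_train / y_train are placeholders for the encoded training split.
from collections import Counter
from imblearn.over_sampling import SMOTE

# SMOTE interpolates between a sample and its k nearest same-class
# neighbours, so k_neighbors must stay below the rarest class size;
# classes with fewer than k_neighbors + 1 samples would need an even
# smaller k, or simple random oversampling instead.
smote = SMOTE(k_neighbors=2, random_state=42)
X_bal, y_bal = smote.fit_resample(X_train, y_train)

print("class sizes before:", Counter(y_train))
print("class sizes after: ", Counter(y_bal))
```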

Author Contributions

This research specifies below the individual contributions: Conceptualization, C.I. and S.K.; Data curation, J.H.A.; Formal analysis, M.M.; Funding acquisition, M.A. (Mamdouh Alenezi); Investigation, S.K.; Methodology, C.I.; Project administration, C.I.; Resources, M.A. (Mamdouh Alenezi); Software, J.H.A. and M.M.; Supervision, C.I. and M.A. (Mamoun Alazab); Validation, J.H.A. and M.M.; Visualization, S.K.; Writing—review and editing, C.I., J.H.A., and S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sharma, J.; Giri, C.; Granmo, O.C.; Goodwin, M. Multi-layer intrusion detection system with ExtraTrees feature selection, extreme learning machine ensemble, and softmax aggregation. EURASIP J. Inf. Secur. 2019, 2019, 15.
  2. Omran, S.S.; Salih, M.A. Design and Implementation of Multi-model Biomatrix Identification System. Int. J. Comput. Appl. 2014, 99, 14–21.
  3. Kaimuru, D.; Mwangi, W.; Nderu, L. A Hybrid Ensemble Method for Multi Class Classification and Outlier Detection. Int. J. Sci. Basic Appl. Res. 2019, 45, 192–213.
  4. Farnaaz, N.; Jabbar, M.A. Random Forest Modeling for Network Intrusion Detection System. Procedia Comput. Sci. 2016, 89, 213–217.
  5. Panda, M.; Abraham, A.; Patra, M.R. Hybrid intelligent systems for detecting network intrusions. Secur. Commun. Netw. 2015, 8, 2741–2749.
  6. Ahmim, A.; Derdour, M.; Ferrag, M.A. An intrusion detection system based on combining probability predictions of a tree of classifiers. Int. J. Commun. Syst. 2018, 31, e3547.
  7. Ma, T.; Wang, F.; Cheng, J.; Yu, Y.; Chen, X. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks. Sensors 2016, 16, 1701.
  8. Aljawarneh, S.; Aldwairi, M.; Yassein, M.B. Anomaly-based intrusion detection system through feature selection analysis and building hybrid efficient model. J. Comput. Sci. 2018, 25, 152–160.
  9. Khonde, S.R.; Ulagamuthalvi, V. Ensemble-based semi-supervised learning approach for a distributed intrusion detection system. J. Cyber Secur. Technol. 2019.
  10. Yang, Y.; Zheng, K.; Wu, C.; Yang, Y. Improving the Classification Effectiveness of Intrusion Detection by Using Improved Conditional Variational AutoEncoder and Deep Neural Network. Sensors 2019, 19, 2528.
  11. Thing, V.L.L. IEEE 802.11 Network Anomaly Detection and Attack Classification: A Deep Learning Approach. In Proceedings of the IEEE Wireless Communications and Networking Conference, San Francisco, CA, USA, 19–22 March 2017; pp. 1–6.
  12. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
  13. Hinton, G.E. Deep belief networks. Scholarpedia 2009, 4, 5947.
  14. Ambusaidi, M.A.; He, X.; Nanda, P.; Tan, Z. Building an intrusion detection system using a filter-based feature selection algorithm. IEEE Trans. Comput. 2016, 65, 2986–2998.
  15. UNB. NSL-KDD Dataset. Available online: https://www.unb.ca/cic/datasets/nsl.html (accessed on 10 December 2018).
  16. Dhanabal, L.; Shantharajah, S. A study on NSL-KDD dataset for intrusion detection system based on classification algorithms. Int. J. Adv. Res. Comput. Commun. Eng. 2015, 4, 446–452.
  17. Iwendi, C.; Khan, S.; Anajemba, J.H.; Bashir, A.K.; Noor, F. Realizing an Efficient IoMT-Assisted Patient Diet Recommendation System Through Machine Learning Model. IEEE Access 2020, 8, 28462–28474.
  18. Lopez-Martin, M.; Carro, B.; Sanchez-Esguevillas, A.; Lloret, J. Conditional Variational Autoencoder for Prediction and Feature Recovery Applied to Intrusion Detection in IoT. Sensors 2017, 17, 1967.
  19. Anajemba, J.H.; Yue, T.; Iwendi, C.; Alenezi, M.; Mittal, M. Optimal Cooperative Offloading Scheme for Energy Efficient Multi-Access Edge Computation. IEEE Access 2020, 8, 53931–53941.
  20. Mazini, M.; Shirazi, B.; Mahdavi, I. Anomaly network-based intrusion detection system using a reliable hybrid artificial bee colony and AdaBoost algorithms. J. King Saud Univ. Comput. Inf. Sci. 2019, 31, 541–553.
  21. Ren, J.; Guo, J.; Wang, Q.; Huang, Y.; Hao, X.; Hu, J. Building an Effective Intrusion Detection System by Using Hybrid Data Optimization Based on Machine Learning Algorithms. Secur. Commun. Netw. 2019.
  22. Fossaceca, J.M.; Mazzuchi, T.A.; Sarkani, S. MARK-ELM: Application of a novel Multiple Kernel Learning framework for improving the robustness of Network Intrusion Detection. Expert Syst. Appl. 2015, 42, 4062–4080.
  23. Bolón-Canedo, V.; Sánchez-Maroño, N.; Alonso-Betanzos, A. Feature selection and classification in multiple class datasets: An application to KDD Cup 99 dataset. Expert Syst. Appl. 2011, 38, 5947–5957.
  24. Kim, J.; Thu, H.L.T.; Kim, H. Long Short Term Memory Recurrent Neural Network Classifier for Intrusion Detection. In Proceedings of the International Conference on Platform Technology and Service (PlatCon 2016), Jeju, Korea, 15–17 February 2016; pp. 1–5.
  25. Sen, A.; Islam, M.M.; Murase, K.; Yao, X. Binarization with boosting and oversampling for multiclass classification. IEEE Trans. Cybern. 2016, 46, 1078–1091.
  26. Dong, L.; Frank, E.; Kramer, S. Ensembles of balanced nested dichotomies for multi-class problems. In Proceedings of the European Conference on Principles of Data Mining and Knowledge Discovery, Porto, Portugal, 3–7 October 2005; pp. 84–95.
  27. Hashemi, S.; Yang, Y.; Mirzamomen, Z.; Kangavari, M. Adapted one-versus-all decision trees for data stream classification. IEEE Trans. Knowl. Data Eng. 2009, 21, 624–637.
  28. Gaikwad, V.; Kulkarni, P.J. One versus all classification in network intrusion detection using decision tree. Int. J. Sci. Res. Publ. 2012, 2, 1–5.
  29. Govindarajan, M.; Chandrasekaran, R. Intrusion detection using an ensemble of classification methods. In Proceedings of the World Congress on Engineering and Computer Science, San Francisco, CA, USA, 24–26 October 2012; Volume 1, pp. 459–464.
  30. Horng, S.-J.; Su, M.-Y.; Chen, Y.-H.; Kao, T.-W.; Chen, R.-J.; Lai, J.-L.; Perkasa, C.D. A novel intrusion detection system based on hierarchical clustering and support vector machines. Expert Syst. Appl. 2011, 38, 306–313.
  31. Aburomman, A.A.; Reaz, M.B.I. A novel weighted support vector machines multiclass classifier based on differential evolution for intrusion detection systems. Inf. Sci. 2017, 414, 225–246.
  32. Thaseen, I.S.; Kumar, C.A. Intrusion detection model using fusion of chi-square feature selection and multi class SVM. J. King Saud Univ. Comput. Inf. Sci. 2017, 29, 462–472.
  33. Iwendi, C.; Alastair, A.; Offor, K. Smart Security Implementation for Wireless Sensor Network Nodes. J. Wirel. Sens. Netw. 2015, 1, 1–2.
  34. Mittal, M.; Saraswat, L.K.; Iwendi, C.; Anajemba, J.H. A Neuro-Fuzzy Approach for Intrusion Detection in Energy Efficient Sensor Routing. In Proceedings of the 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), Ghaziabad, India, 18–19 April 2019; pp. 1–5.
  35. Iwendi, C.O.; Allen, A.R. Enhanced security technique for wireless sensor network nodes. Wireless Sensor Systems (WSS 2012), IET Conf. 2012, 2, 1–5.
  36. Iwendi, C.; Uddin, M.; Ansere, J.A.; Nkurunziza, P.; Anajemba, J.H.; Bashir, A.K. On Detection of Sybil Attack in Large-Scale VANETs Using Spider-Monkey Technique. IEEE Access 2018, 6, 47258–47267.
  37. Iwendi, C.; Suresh, P.; Revathi, M.; Srinivasan, K.; Chang, C.-Y. An Efficient and Unique TF/IDF Algorithmic Model-Based Data Analysis for Handling Applications with Big Data Streaming. Electronics 2019, 8, 1331.
  38. Bashir, A.K.; Arul, R.; Jayaram, R.; Arulappan, A.; Prathiba, S.B. An Optimal Multi-tier Resource Allocation of Cloud RAN in 5G Using Machine Learning. Trans. Emerg. Telecommun. Technol. 2019, 30, e3627.
  39. Shafiq, M.; Yu, X.; Bashir, A.K.; Chuahdry, H.N.; Wang, D. A Machine Learning Approach for Feature Selection Traffic Classification Using Security Analysis. J. Supercomput. 2018, 76, 4867–4892.
  40. Kayacik, H.G.; Zincir-Heywood, A.N.; Heywood, M.I. Selecting features for intrusion detection: A feature relevance analysis on KDD 99 benchmark. In Proceedings of the Third Annual Conference on Privacy, Security and Trust, St. Andrews, NB, Canada, 12–14 October 2005.
  41. Saxena, H.; Richaariya, V. Intrusion Detection in KDD99 Dataset Using SVM-PSO and Feature Reduction with Information Gain. Int. J. Comput. Appl. 2014, 98, 25–29.
  42. Mittal, M.; Kumar, K. Data Clustering in Wireless Sensor Network Implemented on Self-Organization Feature Map (SOFM) Neural Network. In Proceedings of the IEEE International Conference on Computing, Communication and Automation (ICCCA), Noida, India, 29–30 April 2016; pp. 202–207.
  43. Mittal, M.; Kumar, K. Network Lifetime Enhancement of Homogeneous Sensor Network Using ART1 Neural Network. In Proceedings of the Sixth International Conference on Computational Intelligence and Communication Networks, Bhopal, India, 14–16 November 2014; pp. 472–475.
  44. Mittal, M.; Kumar, K. Quality of Services Provisioning in Wireless Sensor Networks Using Artificial Neural Network: A Survey. Int. J. Comput. Appl. 2015, 117, 28–40.
  45. Hall, M.A. Correlation-Based Feature Selection for Machine Learning; University of Waikato: Hamilton, New Zealand, 1999.
  46. Wosiak, A.; Zakrzewska, D. Integrating correlation-based feature selection and clustering for improved cardiovascular disease diagnosis. Complexity 2018.
  47. Sarumathiy, C.K.; Geetha, K.; Rajan, C. Improvement in Hadoop performance using integrated feature extraction and machine learning algorithms. Soft Comput. 2020, 24, 627–636.
  48. Accuracy, Precision, Recall & F1-Score: Interpretation of Performance Measures—Exsilio Blog. Available online: https://blog.exsilio.com/all/accuracy-precision-recall-F1-score-interpretation-of-performance-measures/ (accessed on 30 December 2019).
  49. Weka 3—Data Mining with Open Source Machine Learning Software in Java. Available online: https://www.cs.waikato.ac.nz/ml/weka/ (accessed on 24 November 2019).
  50. KDD Cup 1999 Data. Available online: http://kdd.ics.uci.edu/datasets/kddcup99/kddcup99.html (accessed on 26 December 2019).
  51. NSL-KDD | Datasets | Research | Canadian Institute for Cybersecurity | UNB. Available online: https://www.unb.ca/cic/datasets/nsl.html (accessed on 26 December 2019).
  52. Gaikwad, D.; Thool, R. DAREnsemble: Decision tree and rule learner based ensemble for network intrusion detection system. Smart Innov. Syst. Technol. 2016, 50, 185–193.
  53. Pajouh, H.H.; Dastghaibyfard, G.H.; Hashemi, S. Two-tier network anomaly detection model: A machine learning approach. J. Intell. Inf. Syst. 2017, 48, 61–74.
  54. Pervez, M.S.; Farid, D.M. Feature Selection and Intrusion Classification in NSL-KDD Cup 99 Dataset Employing SVMs. In Proceedings of the 8th International Conference on Software, Knowledge, Information Management and Applications (SKIMA 2014), Dhaka, Bangladesh, 18–20 December 2014; pp. 1–6.
  55. Kanakarajan, N.K.; Muniasamy, K. Improving the accuracy of intrusion detection using GAR-Forest with feature selection. Adv. Intell. Syst. Comput. 2016, 404, 539–547.
  56. Pham, N.T.; Foo, E.; Suriadi, S.; Jeffrey, H.; Lahza, H.F.M. Improving performance of intrusion detection system using ensemble methods and feature selection. ACM 2018.
  57. Ahmad, I. Feature Selection Using Particle Swarm Optimization in Intrusion Detection. Int. J. Distrib. Sens. Netw. 2015, 11, 806954.
Figure 1. Proposed methodology.
Figure 2. CFS work flow.
Figure 3. Classification report for the Adaboost J48 KDD99 dataset.
Figure 4. Classification report for the Adaboost Random Forest KDD99 dataset.
Figure 5. Classification report for the Adaboost Reptree KDD99 dataset.
Figure 6. Classification report for the Bagging J48 KDD99 dataset.
Figure 7. Classification report for the Bagging Random Forest KDD99 dataset.
Figure 8. Classification report for the Bagging Reptree KDD99 dataset.
Figure 9. Classification report for the Adaboost J48 NSLKDD dataset.
Figure 10. Classification report for the Adaboost Random Forest NSLKDD dataset.
Figure 11. Classification report for the Adaboost Reptree NSLKDD dataset.
Figure 12. Classification report for the Bagging J48 NSLKDD dataset.
Figure 13. Classification report for the Bagging Random Forest NSLKDD dataset.
Figure 14. Classification report for the Bagging Reptree NSLKDD dataset.
Table 1. KDD99 dataset binary classifications total packets.
Packets Details | Packets Count
Normal Packets | 97,277
Anomaly Packets | 396,731
Total Size | 494,008

Table 2. Training and testing samples for KDD99.
Training and Testing Packets | Packets Count
Training Data Size | 345,806
Testing Data Size | 148,202

Table 3. Number of attacks used in this research for KDD99.
Attack Name | Category | Count
Smurf | DoS | 280,790
Neptune | DoS | 107,200
Normal | Normal | 97,277
Back | DoS | 2203
Satan | Probe | 1589
Ipsweep | Probe | 1247
Portsweep | Probe | 1040
Warezclient | R2L | 1020
Teardrop | DoS | 979
Pod | DoS | 264
Nmap | Probe | 231
Guess passwd | R2L | 53
Buffer overflow | U2R | 30
Land | DoS | 21
Warezmaster | R2L | 20
Imap | R2L | 12
Loadmodule | U2R | 9
Ftp_write | R2L | 8
Multihop | R2L | 7
Phf | R2L | 4
Perl | U2R | 3

Table 4. NSLKDD dataset binary classifications total packets.
Packets Details | Packets Count
Normal Packets | 77,054
Anomaly Packets | 71,215
Total Size | 148,269

Table 5. Training and testing samples for NSLKDD.
Training and Testing Packets | Packets Count
Training Data Size | 103,789
Testing Data Size | 44,481

Table 6. Number of attacks used in this research for NSLKDD.
Attack Name | Count
Normal | 77,054
Neptune | 45,871
Satan | 4368
Ipsweep | 3740
Smurf | 3311
Portsweep | 3088
Nmap | 1566
Back | 1315
Guess_passwd | 1284
Mscan | 996
Warezmaster | 964
Teardrop | 904
Warezclient | 890
Apache2 | 737
Processtable | 685
Snmpguess | 331
Saint | 319
Mailbomb | 293
Pod | 242
Snmpgetattack | 178
Httptunnel | 133

Table 7. Total number of features for the KDD99 and NSLKDD datasets.
S.No. | Feature Name | Feature Type
1 | Duration | Number
2 | Protocol Type | Non-Numeric
3 | Service | Non-Numeric
4 | Flag | Non-Numeric
5 | Source Bytes | Number
6 | Destination Bytes | Number
7 | Land | Non-Numeric
8 | Wrong Fragment | Number
9 | Urgent | Number
10 | Hot | Number
11 | Number of Failed Logins | Number
12 | Logged In | Non-Numeric
13 | Number Access Files | Number
14 | Root Shell | Number
15 | Su_Attempted | Number
16 | Number Root | Number
17 | Number of File Creations | Number
18 | Number Shells | Number
19 | Number Access Files | Number
20 | Number Outbound Commands | Number
21 | Is Host Login | Non-Numeric
22 | Is Guest Login | Non-Numeric
23 | Count | Number
24 | Service Count | Number
25 | Serror Rate | Number
26 | Service Error Rate | Number
27 | Rerror Rate | Number
28 | Service Rerror Rate | Number
29 | Same Service Rate | Number
30 | Different Service Rate | Number
31 | Service Different Host Rate | Number
32 | Dst_host_count | Number
33 | Dst_host_srv_count | Number
34 | Dst_host_same_srv_rate | Number
35 | Dst_host_diff_srv_rate | Number
36 | Dst_host_same_src_port_rate | Number
37 | Dst_host_srv_diff_host_rate | Number
38 | Dst_host_serror_rate | Number
39 | Dst_host_srv_serror_rate | Number
40 | Dst_host_rerror_rate | Number
41 | Dst_host_srv_rerror_rate | Number
42 | Class Label Type | Non-Numeric

Table 8. Number of optimal features selected using CFS.
Dataset | Selected Features Using CFS
KDD99 (For 2 Attacks) | 6, 12, 23, 31, 32
KDD99 (For 21 Attacks) | 2, 3, 4, 5, 6, 7, 8, 14, 23, 30, 36
NSLKDD (For 2 Attacks) | 1, 3, 4, 5, 7, 8, 11, 12, 13, 30, 35, 36, 37
NSLKDD (For 21 Attacks) | 1, 3, 4, 5, 7, 8, 11, 12, 13, 30, 35, 36, 37
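Table 8 lists the subsets that the CFS search settled on. For readers who want the intuition behind those choices, the sketch below implements a simplified version of Hall's CFS merit-based forward search [45]. Using Pearson correlation on label-encoded numeric columns is a simplifying assumption of this sketch; the original formulation (and Weka's implementation) measures correlation with symmetrical uncertainty.

```python
# Simplified sketch of CFS [45]: greedily grow a feature subset S that
# maximizes Hall's merit score
#     merit(S) = k * r_cf / sqrt(k + k*(k-1) * r_ff),
# where r_cf is the mean feature-class correlation and r_ff the mean
# feature-feature correlation. Pearson correlation is an assumption here.
import numpy as np

def merit(X, y, subset):
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    r_ff = 1.0
    if k > 1:
        pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward(X, y):
    remaining, selected, best = list(range(X.shape[1])), [], -np.inf
    while remaining:
        score, f = max((merit(X, y, selected + [f]), f) for f in remaining)
        if score <= best:
            break                     # no remaining feature improves the merit
        best = score
        selected.append(f)
        remaining.remove(f)
    return selected                   # indices of the selected features
```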
Table 9. Confusion matrix for Adaboost J48.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 28,934 | 271
Anomaly | 759 | 118,238

Table 10. Classification report for Adaboost J48.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.10 | 0.60 | 97.40 | 99.10 | 98.30 | 99.90
Anomaly | 99.40 | 0.90 | 99.80 | 99.40 | 99.60 | 99.90

Table 11. Confusion matrix for the Adaboost Random Forest.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 28,934 | 271
Anomaly | 759 | 118,238

Table 12. Classification report for the Adaboost Random Forest.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.10 | 0.60 | 97.40 | 99.10 | 98.30 | 99.90
Anomaly | 99.40 | 0.90 | 99.80 | 99.40 | 99.60 | 99.90

Table 13. Confusion matrix for Adaboost Reptree.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 28,975 | 230
Anomaly | 658 | 118,339

Table 14. Classification report for Adaboost Reptree.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.20 | 0.60 | 97.80 | 99.20 | 98.50 | 99.80
Anomaly | 99.40 | 0.80 | 99.80 | 99.40 | 99.60 | 100.00

Table 15. Confusion matrix for Bagging J48.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 28,838 | 367
Anomaly | 772 | 118,225

Table 16. Classification report for Bagging J48.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 98.70 | 0.60 | 97.40 | 98.70 | 98.10 | 99.50
Anomaly | 99.40 | 1.30 | 99.70 | 99.40 | 99.50 | 100.00

Table 17. Confusion matrix for Bagging Random Forest.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 28,994 | 211
Anomaly | 679 | 118,318

Table 18. Classification report for Bagging Random Forest.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.30 | 0.60 | 97.70 | 99.30 | 98.50 | 99.70
Anomaly | 99.40 | 0.70 | 99.80 | 99.40 | 99.60 | 100.00

Table 19. Confusion matrix for Bagging Reptree.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 29,010 | 195
Anomaly | 698 | 118,299

Table 20. Classification report for Bagging Reptree.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.30 | 0.60 | 97.70 | 99.30 | 98.50 | 99.99
Anomaly | 99.40 | 0.70 | 99.80 | 99.40 | 99.60 | 100.00
Table 21. Multiclass classification report for KDD99 using Adaboost J48.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.70 | 0.00 | 99.80 | 99.70 | 99.80 | 100.00
2 | Buffer-overflow | 46.20 | 0.00 | 100.00 | 46.20 | 63.20 | 99.70
3 | Loadmodule | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 99.10
4 | Perl | 100.00 | 0.00 | 33.33 | 100.00 | 50.00 | 100.00
5 | Neptune | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
6 | Smurf | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
7 | Guess_passwd | 100.00 | 0.00 | 93.80 | 100.00 | 96.80 | 100.00
8 | Pod | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
9 | Teardrop | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
10 | Portsweep | 99.30 | 0.00 | 95.30 | 99.30 | 97.30 | 100.00
11 | Ipsweep | 97.90 | 0.10 | 81.60 | 99.90 | 89.00 | 99.20
12 | Land | 100.00 | 0.00 | 83.30 | 100.00 | 90.90 | 100.00
13 | Ftp_write | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00
14 | Back | 99.70 | 0.00 | 99.80 | 99.70 | 99.80 | 100.00
15 | Imap | 50.00 | 0.00 | 100.00 | 50.00 | 66.70 | 100.00
16 | Satan | 98.50 | 0.00 | 99.10 | 98.50 | 98.80 | 100.00
17 | Phf | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00
18 | Nmap | 55.60 | 0.00 | 97.20 | 55.60 | 70.70 | 99.80
19 | Multihop | 50.00 | 0.00 | 33.33 | 50.00 | 40.00 | 81.70
20 | Warezmaster | 60.00 | 0.00 | 100.00 | 60.00 | 75.00 | 71.70
21 | Warezclient | 93.00 | 0.00 | 97.10 | 93.00 | 95.00 | 99.10
22 | Weighted Avg | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00

Table 22. Multiclass classification report for KDD99 using Adaboost Random Forest.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.80 | 0.00 | 99.90 | 99.80 | 99.80 | 100.00
2 | Buffer-overflow | 61.50 | 0.00 | 88.90 | 61.50 | 72.70 | 96.10
3 | Loadmodule | 20.00 | 0.00 | 33.33 | 20.00 | 25.00 | 89.70
4 | Perl | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
5 | Neptune | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
6 | Smurf | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
7 | Guess_passwd | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
8 | Pod | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
9 | Teardrop | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
10 | Portsweep | 96.90 | 0.00 | 99.30 | 96.90 | 98.10 | 100.00
11 | Ipsweep | 97.90 | 0.10 | 81.60 | 97.90 | 89.00 | 99.40
12 | Land | 80.00 | 0.00 | 80.00 | 80.00 | 80.00 | 99.90
13 | Ftp_write | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00
14 | Back | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
15 | Imap | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
16 | Satan | 98.70 | 0.00 | 99.40 | 98.70 | 99.00 | 99.90
17 | Phf | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
18 | Nmap | 52.40 | 0.00 | 100.00 | 52.40 | 68.80 | 99.10
19 | Multihop | 50.00 | 0.00 | 100.00 | 50.00 | 66.70 | 100.00
20 | Warezmaster | 60.00 | 0.00 | 75.00 | 60.00 | 66.70 | 99.80
21 | Warezclient | 94.50 | 0.00 | 97.80 | 94.50 | 96.10 | 98.80
22 | Weighted Avg | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00

Table 23. Multiclass classification report for KDD99 using Adaboost Reptree.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.70 | 0.00 | 99.80 | 99.70 | 99.80 | 100.00
2 | Buffer-overflow | 53.80 | 0.00 | 77.80 | 53.80 | 63.60 | 99.90
3 | Loadmodule | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 99.90
4 | Perl | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00
5 | Neptune | 100.00 | 0.00 | 99.90 | 100.00 | 100.00 | 100.00
6 | Smurf | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
7 | Guess_passwd | 100.00 | 0.00 | 83.30 | 100.00 | 90.90 | 100.00
8 | Pod | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
9 | Teardrop | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
10 | Portsweep | 94.80 | 0.00 | 99.30 | 94.80 | 97.00 | 99.40
11 | Ipsweep | 97.10 | 0.10 | 81.30 | 97.10 | 88.50 | 99.90
12 | Land | 80.00 | 0.00 | 80.00 | 80.00 | 80.00 | 99.90
13 | Ftp_write | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00
14 | Back | 100.00 | 0.00 | 99.80 | 100.00 | 99.90 | 100.00
15 | Imap | 75.00 | 0.00 | 100.00 | 75.00 | 85.70 | 85.10
16 | Satan | 98.50 | 0.00 | 98.50 | 98.50 | 98.50 | 99.90
17 | Phf | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 99.30
18 | Nmap | 52.40 | 0.00 | 100.00 | 52.40 | 68.80 | 99.90
19 | Multihop | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 99.80
20 | Warezmaster | 60.00 | 0.00 | 100.00 | 60.00 | 75.00 | 88.00
21 | Warezclient | 93.30 | 0.00 | 97.50 | 93.30 | 95.30 | 100.00
22 | Weighted Avg | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00

Table 24. Multiclass classification report for KDD99 using Bagging J48.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.70 | 0.00 | 99.80 | 99.70 | 99.80 | 100.00
2 | Buffer-overflow | 69.20 | 0.00 | 81.80 | 69.20 | 75.00 | 92.30
3 | Loadmodule | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 59.80
4 | Perl | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
5 | Neptune | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
6 | Smurf | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
7 | Guess_passwd | 100.00 | 0.00 | 93.80 | 100.00 | 96.80 | 100.00
8 | Pod | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
9 | Teardrop | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
10 | Portsweep | 99.00 | 0.00 | 97.60 | 99.00 | 98.30 | 99.80
11 | Ipsweep | 97.60 | 0.10 | 81.40 | 97.60 | 88.80 | 99.30
12 | Land | 100.00 | 0.00 | 83.30 | 100.00 | 90.90 | 100.00
13 | Ftp_write | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 99.90
14 | Back | 99.80 | 0.00 | 100.00 | 99.80 | 99.90 | 100.00
15 | Imap | 50.00 | 0.00 | 100.00 | 50.00 | 66.70 | 87.50
16 | Satan | 98.50 | 0.00 | 99.30 | 98.50 | 98.90 | 99.90
17 | Phf | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 94.50
18 | Nmap | 55.60 | 0.00 | 97.20 | 55.60 | 70.70 | 99.10
19 | Multihop | 50.00 | 0.00 | 50.00 | 50.00 | 50.00 | 75.00
20 | Warezmaster | 60.00 | 0.00 | 100.00 | 60.00 | 75.00 | 80.00
21 | Warezclient | 93.60 | 0.00 | 97.20 | 93.60 | 95.40 | 99.80
22 | Weighted Avg | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00

Table 25. Multiclass classification report for KDD99 using the Bagging Random Forest.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.80 | 0.00 | 99.80 | 99.80 | 99.80 | 100.00
2 | Buffer-overflow | 61.50 | 0.00 | 80.00 | 61.50 | 69.69 | 100.00
3 | Loadmodule | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 90.00
4 | Perl | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00
5 | Neptune | 100.00 | 0.00 | 99.90 | 100.00 | 99.90 | 100.00
6 | Smurf | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
7 | Guess_passwd | 100.00 | 0.00 | 88.20 | 100.00 | 93.80 | 100.00
8 | Pod | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
9 | Teardrop | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
10 | Portsweep | 93.80 | 0.00 | 100.00 | 93.80 | 96.80 | 97.80
11 | Ipsweep | 97.60 | 0.10 | 81.60 | 97.60 | 88.90 | 99.30
12 | Land | 80.00 | 0.00 | 80.00 | 80.00 | 80.00 | 90.00
13 | Ftp_write | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00
14 | Back | 100.00 | 0.00 | 99.80 | 100.00 | 99.90 | 100.00
15 | Imap | 75.00 | 0.00 | 100.00 | 75.00 | 85.70 | 100.00
16 | Satan | 97.80 | 0.00 | 99.80 | 97.80 | 98.80 | 99.70
17 | Phf | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00
18 | Nmap | 54.00 | 0.00 | 100.00 | 54.00 | 70.10 | 98.30
19 | Multihop | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 75.00
20 | Warezmaster | 60.00 | 0.00 | 100.00 | 60.00 | 75.00 | 90.00
21 | Warezclient | 93.30 | 0.00 | 97.50 | 93.30 | 95.43 | 100.00
22 | Weighted Avg | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00

Table 26. Multiclass classification report for KDD99 using Bagging Reptree.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.70 | 0.00 | 99.80 | 99.70 | 99.80 | 100.00
2 | Buffer-overflow | 69.20 | 0.00 | 81.80 | 69.20 | 75.00 | 92.30
3 | Loadmodule | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 59.80
4 | Perl | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
5 | Neptune | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
6 | Smurf | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
7 | Guess_passwd | 100.00 | 0.00 | 93.80 | 100.00 | 96.80 | 100.00
8 | Pod | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
9 | Teardrop | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
10 | Portsweep | 99.00 | 0.00 | 97.60 | 99.00 | 98.30 | 99.80
11 | Ipsweep | 97.60 | 0.10 | 81.40 | 97.60 | 88.80 | 99.30
12 | Land | 100.00 | 0.00 | 83.30 | 100.00 | 90.90 | 100.00
13 | Ftp_write | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 99.90
14 | Back | 99.80 | 0.00 | 100.00 | 99.80 | 99.90 | 100.00
15 | Imap | 50.00 | 0.00 | 100.00 | 50.00 | 66.70 | 87.50
16 | Satan | 98.50 | 0.00 | 99.30 | 98.50 | 98.90 | 99.90
17 | Phf | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 94.50
18 | Nmap | 55.60 | 0.00 | 97.20 | 55.60 | 70.70 | 99.10
19 | Multihop | 50.00 | 0.00 | 50.00 | 50.00 | 50.00 | 75.00
20 | Warezmaster | 60.00 | 0.00 | 100.00 | 60.00 | 75.00 | 80.00
21 | Warezclient | 93.60 | 0.00 | 97.20 | 93.60 | 95.40 | 99.80
22 | Weighted Avg | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00
Table 27. Confusion matrix for Adaboost J48.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 22,944 | 219
Anomaly | 236 | 21,082

Table 28. Classification report for Adaboost J48.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.10 | 1.10 | 99.00 | 99.10 | 99.00 | 99.90
Anomaly | 98.90 | 0.90 | 99.00 | 98.90 | 98.90 | 99.90

Table 29. Confusion matrix for Adaboost Random Forest.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 22,920 | 243
Anomaly | 116 | 21,152

Table 30. Classification report for Adaboost Random Forest.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.00 | 0.80 | 99.30 | 99.00 | 99.10 | 99.80
Anomaly | 99.20 | 1.00 | 98.90 | 99.20 | 99.00 | 99.80

Table 31. Confusion matrix for Adaboost Reptree.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 22,854 | 309
Anomaly | 144 | 21,174

Table 32. Classification report for Adaboost Reptree.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 98.70 | 0.70 | 99.40 | 98.70 | 99.10 | 99.90
Anomaly | 99.30 | 1.30 | 98.60 | 99.30 | 98.90 | 99.90

Table 33. Confusion matrix for Bagging J48.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 22,949 | 214
Anomaly | 228 | 21,090

Table 34. Classification report for Bagging J48.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.10 | 1.10 | 99.00 | 99.10 | 99.00 | 99.90
Anomaly | 98.90 | 0.90 | 99.00 | 98.90 | 99.00 | 99.90

Table 35. Confusion matrix for Bagging Random Forest.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 22,972 | 191
Anomaly | 201 | 21,117

Table 36. Classification report for Bagging Random Forest.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.20 | 0.90 | 99.10 | 99.20 | 99.20 | 99.90
Anomaly | 99.10 | 0.80 | 99.10 | 99.10 | 99.10 | 99.90

Table 37. Confusion matrix for Bagging Reptree.
Actual Class | Predicted Normal | Predicted Anomaly
Normal | 22,925 | 238
Anomaly | 230 | 21,088

Table 38. Classification report for Bagging Reptree.
Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
Normal | 99.00 | 1.10 | 99.00 | 99.00 | 99.00 | 99.90
Anomaly | 98.90 | 1.00 | 98.90 | 98.90 | 98.90 | 99.90
Table 39. Multiclass classification report for NSLKDD using Adaboost J48.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.00 | 1.20 | 98.90 | 99.00 | 99.00 | 99.99
2 | Neptune | 100.00 | 0.00 | 99.90 | 100.00 | 99.90 | 100.00
3 | Warezclient | 90.20 | 0.00 | 95.60 | 90.20 | 92.80 | 99.80
4 | Ipsweep | 90.50 | 0.00 | 99.50 | 90.50 | 94.80 | 99.90
5 | Portsweep | 97.90 | 0.10 | 97.10 | 97.90 | 97.50 | 99.90
6 | Teardrop | 100.00 | 0.00 | 96.30 | 100.00 | 98.10 | 100.00
7 | Nmap | 96.20 | 0.30 | 78.20 | 96.20 | 86.30 | 99.90
8 | Satan | 97.20 | 0.30 | 91.40 | 97.20 | 94.20 | 99.80
9 | Smurf | 99.50 | 0.20 | 93.30 | 99.50 | 94.40 | 100.00
10 | Pod | 98.40 | 0.00 | 95.30 | 98.40 | 96.80 | 100.00
11 | Back | 100.00 | 0.00 | 99.80 | 100.00 | 99.90 | 100.00
12 | Guess_passwd | 96.80 | 0.00 | 96.50 | 96.80 | 96.70 | 99.50
13 | Warezmaster | 92.40 | 0.00 | 98.10 | 92.40 | 95.10 | 99.20
14 | Saint | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 95.40
15 | Mscan | 95.70 | 0.00 | 94.80 | 95.70 | 95.20 | 99.80
16 | Apache2 | 99.10 | 0.00 | 100.00 | 99.10 | 99.50 | 99.80
17 | Snmpgetattack | 1.80 | 0.00 | 100.00 | 1.80 | 3.40 | 98.80
18 | Processtable | 99.50 | 0.00 | 99.50 | 99.50 | 99.50 | 100.00
19 | Httptunnel | 95.00 | 0.00 | 90.50 | 95.00 | 92.70 | 97.50
20 | Snmpguess | 40.00 | 0.10 | 55.90 | 46.60 | 47.20 | 99.30
21 | Mailbomb | 88.00 | 0.00 | 95.70 | 88.00 | 91.70 | 99.30
22 | Weighted Avg | 98.40 | 0.60 | 98.30 | 98.40 | 98.30 | 99.90

Table 40. Multiclass classification report for NSLKDD using the Adaboost Random Forest.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.20 | 1.10 | 99.00 | 99.20 | 99.10 | 99.80
2 | Neptune | 100.00 | 0.10 | 99.70 | 100.00 | 99.80 | 100.00
3 | Warezclient | 95.50 | 0.00 | 94.40 | 95.50 | 95.00 | 99.80
4 | Ipsweep | 90.60 | 0.00 | 99.60 | 90.60 | 94.90 | 99.80
5 | Portsweep | 95.90 | 0.00 | 99.40 | 95.90 | 97.60 | 99.10
6 | Teardrop | 100.00 | 0.00 | 95.20 | 100.00 | 97.60 | 100.00
7 | Nmap | 96.20 | 0.30 | 77.90 | 96.20 | 86.10 | 99.90
8 | Satan | 94.90 | 0.30 | 92.20 | 94.90 | 93.20 | 99.30
9 | Smurf | 99.90 | 0.10 | 99.90 | 97.20 | 97.20 | 100.00
10 | Pod | 98.40 | 0.00 | 95.30 | 98.40 | 96.80 | 100.00
11 | Back | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
12 | Guess_passwd | 96.50 | 0.00 | 97.00 | 96.50 | 96.80 | 99.90
13 | Warezmaster | 94.90 | 0.00 | 97.00 | 94.90 | 96.00 | 98.10
14 | Saint | 2.00 | 0.00 | 40.00 | 2.20 | 3.90 | 91.70
15 | Mscan | 98.70 | 0.00 | 97.40 | 98.70 | 98.80 | 99.80
16 | Apache2 | 99.50 | 0.00 | 100.00 | 99.50 | 99.80 | 99.80
17 | Snmpgetattack | 0.70 | 0.00 | 33.30 | 7.00 | 11.60 | 97.10
18 | Processtable | 99.50 | 0.00 | 95.00 | 95.00 | 99.00 | 100.00
19 | Httptunnel | 95.00 | 0.00 | 95.00 | 95.00 | 95.00 | 97.50
20 | Snmpguess | 40.00 | 0.10 | 55.90 | 40.00 | 46.60 | 99.30
21 | Mailbomb | 98.70 | 0.00 | 98.70 | 98.70 | 98.70 | 99.30
22 | Weighted Avg | 98.50 | 0.60 | 98.30 | 98.50 | 98.40 | 99.80

Table 41. Multiclass classification report for NSLKDD using Adaboost Reptree.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.10 | 1.20 | 98.90 | 99.10 | 99.00 | 99.90
2 | Neptune | 99.90 | 0.30 | 99.40 | 99.90 | 99.60 | 100.00
3 | Warezclient | 94.70 | 0.00 | 93.00 | 94.70 | 93.90 | 100.00
4 | Ipsweep | 90.00 | 0.00 | 98.40 | 90.00 | 94.00 | 99.90
5 | Portsweep | 92.00 | 0.10 | 96.90 | 92.00 | 94.40 | 99.10
6 | Teardrop | 100.00 | 0.00 | 95.20 | 100.00 | 97.60 | 100.00
7 | Nmap | 92.70 | 0.30 | 77.40 | 92.70 | 84.40 | 99.40
8 | Satan | 93.10 | 0.30 | 90.50 | 93.10 | 91.80 | 99.60
9 | Smurf | 99.90 | 0.10 | 94.70 | 99.90 | 97.20 | 100.00
10 | Pod | 98.40 | 0.00 | 95.30 | 98.40 | 96.80 | 100.00
11 | Back | 100.00 | 0.00 | 99.30 | 100.00 | 99.60 | 100.00
12 | Guess_passwd | 96.50 | 0.00 | 97.00 | 96.50 | 96.80 | 99.80
13 | Warezmaster | 93.50 | 0.00 | 98.50 | 93.50 | 99.70 | 98.10
14 | Saint | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 98.50
15 | Mscan | 97.40 | 0.00 | 93.90 | 97.40 | 95.60 | 99.80
16 | Apache2 | 99.10 | 0.00 | 100.00 | 99.10 | 99.50 | 99.90
17 | Snmpgetattack | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 99.50
18 | Processtable | 99.50 | 0.00 | 100.00 | 95.50 | 99.80 | 100.00
19 | Httptunnel | 87.50 | 0.00 | 87.50 | 87.50 | 87.50 | 97.80
20 | Snmpguess | 40.00 | 0.10 | 55.90 | 40.00 | 46.60 | 99.80
21 | Mailbomb | 98.70 | 0.00 | 98.70 | 98.70 | 98.70 | 99.70
22 | Weighted Avg | 98.20 | 0.80 | 97.90 | 98.20 | 98.00 | 99.90

Table 42. Multiclass classification report for NSLKDD using Bagging J48.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.10 | 1.10 | 99.00 | 99.10 | 99.10 | 99.90
2 | Neptune | 100.00 | 0.00 | 99.90 | 100.00 | 99.90 | 100.00
3 | Warezclient | 92.90 | 0.00 | 95.00 | 92.90 | 93.90 | 99.70
4 | Ipsweep | 90.50 | 0.00 | 99.50 | 90.50 | 94.80 | 99.80
5 | Portsweep | 98.40 | 0.00 | 98.10 | 98.40 | 98.20 | 99.50
6 | Teardrop | 100.00 | 0.00 | 96.30 | 100.00 | 98.10 | 100.00
7 | Nmap | 96.00 | 0.30 | 78.20 | 96.00 | 86.20 | 99.90
8 | Satan | 97.20 | 0.30 | 91.90 | 97.20 | 94.40 | 99.90
9 | Smurf | 99.50 | 0.10 | 93.70 | 99.50 | 96.50 | 100.00
10 | Pod | 98.40 | 0.00 | 95.30 | 98.40 | 96.80 | 100.00
11 | Back | 99.80 | 0.00 | 99.80 | 99.80 | 99.80 | 100.00
12 | Guess_passwd | 95.70 | 0.00 | 95.70 | 96.70 | 96.70 | 99.70
13 | Warezmaster | 93.50 | 0.00 | 98.10 | 93.50 | 95.70 | 95.70
14 | Saint | 1.00 | 0.00 | 25.00 | 1.00 | 2.20 | 98.20
15 | Mscan | 96.00 | 0.00 | 97.00 | 96.00 | 96.50 | 96.50
16 | Apache2 | 99.10 | 0.00 | 100.00 | 99.10 | 99.50 | 99.90
17 | Snmpgetattack | 3.50 | 0.00 | 66.70 | 3.50 | 6.70 | 99.70
18 | Processtable | 99.50 | 0.00 | 99.10 | 99.50 | 99.30 | 100.00
19 | Httptunnel | 95.00 | 0.00 | 90.50 | 92.70 | 92.70 | 97.50
20 | Snmpguess | 40.00 | 0.10 | 55.90 | 40.00 | 46.60 | 99.80
21 | Mailbomb | 96.00 | 0.00 | 94.70 | 96.00 | 95.40 | 99.30
22 | Weighted Avg | 98.50 | 0.60 | 98.40 | 98.50 | 98.30 | 99.90

Table 43. Multiclass classification report for NSLKDD using Bagging Random Forest.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.20 | 1.10 | 99.10 | 99.20 | 99.20 | 99.90
2 | Neptune | 100.00 | 0.00 | 99.80 | 100.00 | 99.90 | 100.00
3 | Warezclient | 98.90 | 0.00 | 93.30 | 98.90 | 96.00 | 100.00
4 | Ipsweep | 90.90 | 0.00 | 99.70 | 90.90 | 95.10 | 100.00
5 | Portsweep | 95.50 | 0.00 | 99.20 | 96.50 | 97.90 | 99.80
6 | Teardrop | 99.60 | 0.00 | 96.30 | 99.60 | 97.90 | 100.00
7 | Nmap | 95.30 | 0.30 | 78.60 | 95.30 | 86.20 | 99.90
8 | Satan | 96.70 | 0.30 | 91.90 | 96.70 | 94.20 | 99.90
9 | Smurf | 99.90 | 0.10 | 94.50 | 99.90 | 97.70 | 100.00
10 | Pod | 98.40 | 0.00 | 95.30 | 98.40 | 96.80 | 100.00
11 | Back | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
12 | Guess_passwd | 96.80 | 0.00 | 97.30 | 96.80 | 97.00 | 99.60
13 | Warezmaster | 94.20 | 0.00 | 98.90 | 94.20 | 96.50 | 99.40
14 | Saint | 2.00 | 0.00 | 28.60 | 2.00 | 3.80 | 95.10
15 | Mscan | 99.30 | 0.00 | 95.90 | 99.30 | 97.60 | 100.00
16 | Apache2 | 99.50 | 0.00 | 100.00 | 99.50 | 99.80 | 100.00
17 | Snmpgetattack | 7.00 | 0.00 | 50.00 | 7.00 | 12.30 | 98.80
18 | Processtable | 100.00 | 0.00 | 100.00 | 100.00 | 100.00 | 100.00
19 | Httptunnel | 92.50 | 0.00 | 92.50 | 92.50 | 92.50 | 97.50
20 | Snmpguess | 40.00 | 0.10 | 55.90 | 40.00 | 46.60 | 99.30
21 | Mailbomb | 98.70 | 0.00 | 97.40 | 98.70 | 98.00 | 99.30
22 | Weighted Avg | 98.60 | 0.50 | 98.40 | 98.60 | 98.40 | 99.90

Table 44. Multiclass classification report for NSLKDD using Bagging Reptree.
S.No. | Class | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Normal | 99.20 | 1.20 | 98.90 | 99.20 | 99.00 | 99.90
2 | Neptune | 99.90 | 0.30 | 99.30 | 99.90 | 99.60 | 100.00
3 | Warezclient | 92.50 | 0.00 | 98.00 | 92.50 | 95.20 | 100.00
4 | Ipsweep | 89.70 | 0.00 | 98.80 | 94.10 | 94.00 | 99.80
5 | Portsweep | 91.50 | 0.10 | 96.10 | 91.50 | 93.80 | 98.80
6 | Teardrop | 100.00 | 0.00 | 95.60 | 100.00 | 97.70 | 100.00
7 | Nmap | 92.90 | 0.30 | 77.20 | 92.90 | 84.50 | 97.90
8 | Satan | 93.90 | 0.30 | 91.10 | 93.90 | 92.50 | 99.30
9 | Smurf | 98.40 | 0.00 | 94.60 | 99.90 | 97.20 | 100.00
10 | Pod | 98.40 | 0.00 | 95.30 | 98.40 | 96.80 | 100.00
11 | Back | 100.00 | 0.00 | 99.80 | 100.00 | 99.90 | 100.00
12 | Guess_passwd | 94.40 | 0.00 | 98.30 | 94.40 | 96.30 | 99.70
13 | Warezmaster | 92.70 | 0.00 | 98.80 | 92.70 | 95.70 | 100.00
14 | Saint | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 96.60
15 | Mscan | 98.70 | 0.00 | 95.20 | 98.70 | 96.90 | 100.00
16 | Apache2 | 99.10 | 0.00 | 100.00 | 99.10 | 99.50 | 99.90
17 | Snmpgetattack | 1.80 | 0.00 | 50.00 | 1.80 | 3.40 | 99.70
18 | Processtable | 99.50 | 0.00 | 100.00 | 99.50 | 99.80 | 100.00
19 | Httptunnel | 87.50 | 0.00 | 87.50 | 87.50 | 87.50 | 97.50
20 | Snmpguess | 40.00 | 0.10 | 55.90 | 40.00 | 46.60 | 99.90
21 | Mailbomb | 98.70 | 0.00 | 91.40 | 98.70 | 94.90 | 99.30
22 | Weighted Avg | 98.20 | 0.70 | 98.00 | 98.20 | 98.10 | 99.90
Table 45. Comparison of proposed models for multiclass classification.
KDD99 Experiment Average Results
S.No. | Proposed Models | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Adaboost J48 | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00
2 | Adaboost Random Forest | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00
3 | Adaboost Reptree | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00
4 | Bagging J48 | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00
5 | Bagging Random Forest | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00
6 | Bagging Reptree | 99.90 | 0.00 | 99.90 | 99.90 | 99.90 | 100.00
NSLKDD Experiment Average Results
S.No. | Proposed Models | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Adaboost J48 | 98.40 | 0.60 | 98.30 | 98.40 | 98.30 | 99.90
2 | Adaboost Random Forest | 98.50 | 0.60 | 98.30 | 98.50 | 98.40 | 99.80
3 | Adaboost Reptree | 98.20 | 0.80 | 97.90 | 98.20 | 98.00 | 99.90
4 | Bagging J48 | 98.50 | 0.60 | 98.40 | 98.50 | 98.30 | 99.90
5 | Bagging Random Forest | 98.60 | 0.50 | 98.40 | 98.60 | 98.40 | 99.90
6 | Bagging Reptree | 98.20 | 0.70 | 98.00 | 98.20 | 98.10 | 99.90

Table 46. Comparison of proposed models for binary class classification.
KDD99 Experiment Average Results
S.No. | Proposed Models | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Adaboost J48 | 99.30 | 0.90 | 99.30 | 99.30 | 99.30 | 99.90
2 | Adaboost Random Forest | 99.10 | 0.90 | 99.10 | 99.10 | 99.10 | 99.80
3 | Adaboost Reptree | 99.40 | 0.70 | 99.40 | 99.40 | 99.40 | 100.00
4 | Bagging J48 | 99.20 | 1.10 | 99.20 | 99.20 | 99.20 | 99.90
5 | Bagging Random Forest | 99.40 | 0.70 | 99.40 | 99.40 | 99.40 | 99.90
6 | Bagging Reptree | 99.40 | 0.70 | 99.40 | 99.40 | 99.40 | 100.00
NSLKDD Experiment Average Results
S.No. | Proposed Models | TP Rate | FP Rate | Precision | Recall | F1-Score | ROC Area
1 | Adaboost J48 | 99.00 | 1.00 | 99.00 | 99.00 | 99.00 | 99.90
2 | Adaboost Random Forest | 99.10 | 0.90 | 99.10 | 99.10 | 99.10 | 99.80
3 | Adaboost Reptree | 99.00 | 1.00 | 99.00 | 99.00 | 99.00 | 99.90
4 | Bagging J48 | 99.00 | 1.00 | 99.00 | 99.00 | 99.00 | 99.90
5 | Bagging Random Forest | 99.10 | 0.90 | 99.10 | 99.10 | 99.10 | 99.80
6 | Bagging Reptree | 98.90 | 1.10 | 98.90 | 98.90 | 98.10 | 99.90
Table 47. Comparison analysis of our proposed model with other ensemble models.
Method | Accuracy/Detection Rate (%) | FP Rate (%)
DAR Ensemble [52] | 78.88 | N/A
Naive Bayes-KNN-CF [53] | 82.00 | 5.43
Feature Selection + SVM [54] | 82.37 | 15.00
GAR Forest + Symmetrical Uncertainty [55] | 85.00 | 12.20
Bagging J48 [56] | 84.25 | 2.79
PCA + PSO [57] | 99.40 | 0.60
Proposed Model: Bagging Random Forest (KDD99 dataset) | 99.90 | 0.00
Proposed Model: Bagging Random Forest (NSLKDD dataset) | 98.60 | 0.50
