Electronics
  • Article
  • Open Access

27 September 2022

Feature Subset Selection Hybrid Deep Belief Network Based Cybersecurity Intrusion Detection Model

1 Saudi Aramco Cybersecurity Chair, Networks and Communications Department, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
2 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Department of Computer Sciences, College of Computing and Information System, Umm Al-Qura University, Mecca 24382, Saudi Arabia
4 Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 22254, Saudi Arabia
This article belongs to the Section Computer Science & Engineering

Abstract

Intrusion detection systems (IDSs) play a significant role in modern network security. A key component in constructing an effective IDS is the identification of essential features and the preprocessing of network traffic data to design an effective classification model. This paper presents a Feature Subset Selection Hybrid Deep Belief Network based Cybersecurity Intrusion Detection (FSHDBN-CID) model. The presented FSHDBN-CID model mainly concentrates on the recognition of intrusions to accomplish cybersecurity in the network. In the presented FSHDBN-CID model, different levels of data preprocessing are performed to transform the raw data into a compatible format. For feature selection purposes, the jaya optimization algorithm (JOA) is utilized, which in turn reduces the computational complexity. In addition, the presented FSHDBN-CID model exploits a hybrid deep belief network (HDBN) model for classification purposes. Finally, the chicken swarm optimization (CSO) technique is employed as a hyperparameter optimizer for the HDBN method. To investigate the enhanced performance of the presented FSHDBN-CID method, a wide range of experiments was performed. The comparative study pointed out the improvement of the FSHDBN-CID model over other models, with an accuracy of 99.57%.

1. Introduction

Cybersecurity incidents, a ubiquitous threat to enterprises, organizations, and governments, appear to be increasing in scale, severity, frequency, and sophistication []. Like natural disasters (earthquakes, hurricanes, and floods) and human-made disasters (e.g., financial crashes and military or nuclear accidents), extreme cybersecurity events result in unintended consequences or even disastrous damage []. A cybersecurity incident is described as an event in which an intruder uses a tool to implement an action that exploits a vulnerability on a target and produces an unauthorized result that satisfies the attacker's intention. A network security mechanism comprises a computer security system and a network security system []. These systems involve intrusion detection systems (IDSs), firewalls, and antivirus software. IDSs are helpful in identifying, determining, and discovering unauthorized system behavior such as modification, use, copying, and destruction. Security breaches involve both internal and external intrusions []. There are several predominant kinds of network analysis for IDSs: misuse-based (otherwise called signature-based), anomaly-based, and hybrid. Misuse-based detection approaches aim at detecting known attacks through the signatures of such attacks; they are utilized for known kinds of attacks without generating a great number of false alarms []. However, administrators must manually update the signatures and database rules, and new (zero-day) attacks cannot be detected by misuse-based technologies [].
In general, an IDS can be overwhelmed by a massive amount of data, and redundant and irrelevant features pose a long-standing issue in network traffic classification []. One main limitation of recent IDS technologies is the necessity to sort out false alarms while the system is overwhelmed with data; this is because additional and irrelevant features in the dataset diminish the speed of detection []. Feature selection is a preprocessing technique that can efficiently address this issue by selecting appropriate features and eradicating irrelevant and redundant ones []. The benefits of feature selection include reduced storage requirements, better data understanding, reduced processing cost, and data reduction. Advancing computational security methods to analyze various cyber incident paradigms and simultaneously forecast threats from cybersecurity data is key to framing a data-driven intelligent IDS []. Therefore, AI techniques, specifically ML approaches, can learn well from security data.
This paper presents a Feature Subset Selection Hybrid Deep Belief Network based Cybersecurity Intrusion Detection (FSHDBN-CID) model. The presented FSHDBN-CID model mainly concentrates on the recognition of intrusions to accomplish cybersecurity in the network. In the presented FSHDBN-CID model, different levels of data preprocessing are performed to transform the raw data into a compatible format. For feature selection purposes, the jaya optimization algorithm (JOA) is utilized, which in turn reduces the computational complexity. In addition, the presented FSHDBN-CID model exploits the HDBN model for classification purposes. Finally, the chicken swarm optimization (CSO) technique is applied as a hyperparameter optimizer for the HDBN method. To investigate the enhanced performance of the presented FSHDBN-CID method, a wide range of experiments was performed. In summary, the contributions of the paper are given as follows.
  • A new FSHDBN-CID technique is developed for the intrusion detection process. To the best of our knowledge, the proposed FSHDBN-CID technique has not previously appeared in the literature.
  • A new JOA based feature selection technique is designed to improve detection accuracy and reduce the high dimensionality problem.
  • A new CSO-HDBN model is developed in which the hyperparameter tuning of the HDBN takes place using the CSO algorithm.
The rest of the paper is organized as follows. Section 2 offers a brief survey of existing intrusion detection techniques and Section 3 introduces the proposed model. Next, Section 4 provides experimental validation and Section 5 concludes the work.

3. The Proposed Model

In this paper, an effective FSHDBN-CID technique has been developed for intrusion detection. The presented FSHDBN-CID model mainly concentrates on the recognition of intrusions to accomplish cybersecurity in the network. Figure 1 showcases the overall process flow of the FSHDBN-CID technique. The presented FSHDBN-CID model initially preprocesses the network traffic data to make it compatible for further processing. Next, the preprocessed data is passed into the JOA based feature subset selection technique to effectively choose the features. Then, the HDBN based intrusion detection technique is employed to detect intrusions, and the CSO based hyperparameter optimization process takes place.
Figure 1. Overall process of FSHDBN-CID approach.

3.1. Data Pre-Processing

In the presented FSHDBN-CID model, different levels of data preprocessing are performed to transform the raw data into a compatible format.
Data processing includes three major steps []:
  • Data transformation: the detection method requires every input record in the form of a vector of real numbers. Thus, symbolic features in the data are converted to numeric values.
  • Data discretization: the main purpose is to limit continuous values to finite sets, since discretized data leads to superior categorization. As many features in the dataset are continuous, the following min-max technique is utilized in this paper:
    Normalized(X) = (X − X_min) / (X_max − X_min)
  • Data normalization: every feature in the data has a distinct range of values; thus, the features are normalized into the particular range [0, 1].
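The min-max mapping above can be sketched as follows. This is an illustrative implementation, not the authors' code; the function name and the column-wise handling of constant features are assumptions.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each feature column of X into [0, 1] via (X - X_min) / (X_max - X_min)."""
    X = np.asarray(X, dtype=float)
    X_min = X.min(axis=0)
    X_max = X.max(axis=0)
    # Guard against constant columns to avoid division by zero.
    span = np.where(X_max == X_min, 1.0, X_max - X_min)
    return (X - X_min) / span
```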

3.2. Feature Selection Using JOA

For feature selection purposes, the JOA is utilized, which in turn reduces the computational complexity. Rao developed the JOA in [] to handle constrained and unconstrained optimization problems; it is considerably easy to implement since it has only a single phase. Jaya means "victory" in Sanskrit. The technique is a population-based metaheuristic with swarm intelligence and evolutionary features, founded on the "survival of the fittest" concept. The search procedure of the presented approach aims to get closer to success by moving toward the best global solution and to avoid failure by moving away from the worst one. In this algorithm, the properties of evolutionary algorithms and swarm-based intelligence are integrated.
Assume an objective function ψ(x) that should be maximized or minimized depending on the problem. Consider d design parameters and n candidate solutions (population size, k = 1, 2, …, n) at iteration t. Among the candidate solutions, the best candidate attains the best value of ψ(x) (ψ(x)_best) and the worst candidate attains the worst value of ψ(x) (ψ(x)_worst). If X(j, k, t) denotes the value of the j-th parameter for the k-th candidate during the t-th iteration, this value is updated as follows []:
X′(j, k, t) = X(j, k, t) + q(1, j, t) · (X(j, best, t) − |X(j, k, t)|) − q(2, j, t) · (X(j, worst, t) − |X(j, k, t)|)
In the above equation, X(j, best, t) denotes the value of parameter j for the best candidate, whereas X(j, worst, t) represents the value of parameter j for the worst candidate. X′(j, k, t) indicates the updated value of X(j, k, t), and q(1, j, t) and q(2, j, t) signify two random values within [0, 1] for the j-th parameter during the t-th iteration. The term q(1, j, t) · (X(j, best, t) − |X(j, k, t)|) reflects the solution's tendency to move closer to the best solution, while the term q(2, j, t) · (X(j, worst, t) − |X(j, k, t)|) reflects its tendency to avoid the worst one. If X′(j, k, t) produces a better function value, it is accepted. At the end of each iteration, every satisfactory function value is kept as input for the upcoming iterations.
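One Jaya iteration can be sketched as below. This is an illustrative sketch, not the authors' implementation: it assumes a minimization problem and vectorizes the random values q1 and q2 over the whole population.

```python
import numpy as np

def jaya_update(pop, fitness, rng):
    """One Jaya iteration: move every candidate toward the best solution
    and away from the worst one, per the update equation above."""
    best = pop[np.argmin(fitness)]    # minimization assumed
    worst = pop[np.argmax(fitness)]
    q1 = rng.random(pop.shape)        # random values in [0, 1]
    q2 = rng.random(pop.shape)
    return pop + q1 * (best - np.abs(pop)) - q2 * (worst - np.abs(pop))
```

Each updated candidate would then be evaluated and kept only if its fitness improves.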
The fitness function (FF) of the JOA considers both the number of selected features and the classifier accuracy; it maximizes the classifier accuracy and minimizes the size of the selected feature set. Thus, the following FF is utilized to evaluate individual solutions, as displayed in Equation (3).
Fitness = α · ErrorRate + (1 − α) · (#SF / #All_F)
where ErrorRate specifies the classifier error rate obtained using the selected features, computed as the ratio of incorrectly classified samples to the total number of classifications made, a value between 0 and 1 (ErrorRate is the complement of the classifier accuracy). #SF is the number of selected features and #All_F is the total number of features in the original dataset. α is utilized to control the trade-off between subset length and classification quality. In this experiment, α is set to 0.9.
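Equation (3) translates directly into code; the function and argument names below are hypothetical, and the default α = 0.9 follows the experiment described above.

```python
def feature_subset_fitness(error_rate, n_selected, n_total, alpha=0.9):
    """Equation (3): weighted sum of classification error and relative subset size."""
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)
```

For example, a subset of 10 out of 40 features with a 10% error rate scores 0.9 · 0.1 + 0.1 · 0.25 = 0.115; lower values indicate better solutions.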

3.3. Intrusion Detection

In the presented FSHDBN-CID model, the HDBN model is applied for classification purposes. The network comprises four layers of pretrained RBMs and an output layer (softmax regression) []. The parameters should be estimated by training the DBN beforehand to represent and classify attacks. DBN training can be divided into pretraining for representation and fine-tuning for classification. The output of the DBN, which comprises stacked RBMs, is fed to the input of the softmax regression layer. Initially, the DBN is trained to reconstruct the unlabeled training data and is therefore trained unsupervised. E_model[.] and E_data[.] are expectations of probabilities.
∂ log P(x) / ∂W_ij = E_data[h_j x_i] − E_model[h_j x_i]
∂ log P(x) / ∂a_i = E_data[x_i] − E_model[x_i]
∂ log P(x) / ∂b_j = E_data[h_j] − E_model[h_j]
In the three gradients above (Equations (4)–(6)), the second term cannot be obtained directly, since it is an expectation under the distribution learned by the DBN. Gibbs sampling can be used to estimate these probabilities; however, this technique is time-consuming and unsuitable for real-time use. To find a better solution, the contrastive divergence (CD) technique, a fast learning algorithm, is used. First, a training instance is utilized to initialize the Markov chain. Then, samples are obtained after k steps of Gibbs sampling. This technique is named CD-k. Note that the performance of CD is satisfactory even when k = 1. Figure 2 depicts the infrastructure of the DBN.
Figure 2. Architecture of DBN.
In this work, to train the stacked RBMs layer-wise to create the DBN, the W, a, and b parameters are updated based on CD-1.
W_(t+1) = W_t + ε · (P(h | x^(0)) [x^(0)]^T − P(h | x^(1)) [x^(1)]^T)
a_(t+1) = a_t + ε · (x^(0) − x^(1))
b_(t+1) = b_t + ε · (P(h | x^(0)) − P(h | x^(1)))
In the equation, ε signifies the learning rate and t denotes the time step. The hidden variables are denoted as h = {h_n} and the visible variables as x = {x_m}, where M and N are the numbers of nodes in the visible and hidden layers, respectively. The weights of the network are initialized randomly by sampling in the CD approach.
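The CD-1 updates above can be sketched for a binary RBM as follows. This is an illustrative implementation (function and variable names are assumptions), with W of shape (hidden, visible) and a single Gibbs pass from a training vector x0.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(W, a, b, x0, lr, rng):
    """One CD-1 step for a binary RBM: one Gibbs pass from training
    vector x0, then the parameter updates shown above."""
    p_h0 = sigmoid(W @ x0 + b)                   # P(h | x^(0))
    h0 = (rng.random(p_h0.shape) < p_h0) * 1.0   # sample hidden state
    p_x1 = sigmoid(W.T @ h0 + a)                 # reconstruction probabilities
    x1 = (rng.random(p_x1.shape) < p_x1) * 1.0   # sample x^(1)
    p_h1 = sigmoid(W @ x1 + b)                   # P(h | x^(1))
    W += lr * (np.outer(p_h0, x0) - np.outer(p_h1, x1))
    a += lr * (x0 - x1)
    b += lr * (p_h0 - p_h1)
    return W, a, b
```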
The steps for executing greedy layer-wise training mechanisms for all the layers of the DBN are given below.
During initial RBM training, the data corresponding to parameter W1 is considered as input x. W1 is then frozen, and the data required to train the next binary feature layer is obtained from the trained RBM as Q(h1 | v) = P(h1 | v, W1). W2, which determines the second-layer features, is frozen in turn, and the dataset essential for training the binary features at the third layer is obtained from h2 as Q(h2 | h1) = P(h2 | h1, W2). This procedure repeats continuously across each layer. Logistic regression is utilized in conventional binary classification; however, this study preferred softmax since there are multiple classes for the DBN.
Given the training set {(z^(1), y^(1)), (z^(2), y^(2)), …, (z^(m), y^(m))}, m indicates the number of samples in the training set and z^(i) denotes the hidden vector of the top RBM. With the softmax function ϕ in the output layer and classes j = 0, …, k, the conditional probability P(y = j | z^(i)) is evaluated by the following equation.
P(y = j | z^(i)) = ϕ_softmax(z^(i)) = e^(z_j^(i)) / Σ_k e^(z_k^(i))
In Equation (8), z^(i) ∈ R^(n+1) denotes the topmost hidden vector, given as follows:
z = w_0 x_0 + w_1 x_1 + ⋯ + w_m x_m = Σ_(l=0..m) w_l x_l = w^T x
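Equation (8) can be sketched as a small numerically stable softmax; subtracting the maximum before exponentiation is a standard stabilization trick not mentioned in the text, and the function name is an assumption.

```python
import numpy as np

def softmax(z):
    """Equation (8): class probabilities from the top-level hidden vector z."""
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()
```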

3.4. Hyperparameter Tuning

To optimally adjust the hyperparameters of the HDBN model, the CSO algorithm is utilized in this work. The CSO simulates the behavior and movement of a chicken swarm []. In CSO there exist several groups, each comprising a dominant rooster, some hens, and chicks. Roosters, hens, and chicks in a group are defined based on their fitness values: chicks are the chickens with the worst fitness values, whereas roosters (group heads) are the chickens with the best fitness values. Most of the chickens are hens, and they arbitrarily choose which group to stay in. The mother-child relationship between hens and chicks is likewise established randomly. The dominance and mother-child relationships in a group remain unchanged and are updated only every G time steps. The movement of the chickens is given as follows:
(1) The rooster position update equation can be given in the following:
X_(i,j)^(t+1) = X_(i,j)^t · (1 + randn(0, σ²))
where
σ² = 1 if f_i ≤ f_k, and σ² = exp((f_k − f_i) / (|f_i| + ε)) otherwise
where k ∈ [1, N_r], k ≠ i, and N_r indicates the number of roosters. X_(i,j) characterizes the location of the i-th rooster in the j-th dimension at iterations t and t + 1, randn(0, σ²) generates a Gaussian random value with mean 0 and variance σ², ε represents a small constant, and f_i shows the fitness value of the respective rooster i.
(2) The equation that is utilized for the hen location upgrade is shown below:
X_(i,j)^(t+1) = X_(i,j)^t + S1 · rand · (X_(r1,j)^t − X_(i,j)^t) + S2 · rand · (X_(r2,j)^t − X_(i,j)^t)
where
S1 = exp((f_i − f_r1) / (|f_i| + ε))
and
S2 = exp(f_r2 − f_i)
In the expression, r1, r2 ∈ [1, …, N] with r1 ≠ r2; r1 indicates the index of the rooster of the hen's group, whereas r2 indicates a randomly chosen chicken (hen or rooster) from the swarm, and rand is a uniform random value in [0, 1].
(3) The equation utilized for chick position update is shown below:
X_(i,j)^(t+1) = X_(i,j)^t + FL · (X_(m,j)^t − X_(i,j)^t),  FL ∈ [0, 2]
where X_(m,j)^t indicates the location of the i-th chick's mother. The CSO algorithm derives a fitness function (FF) to achieve enhanced classifier outcomes; it determines a positive value to denote the superior performance of candidate solutions. In this article, the classifier error rate to be minimized is taken as the FF, as specified in Equation (15).
fitness(x_i) = ClassifierErrorRate(x_i) = (number of misclassified samples / total number of samples) × 100
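The three movement rules can be sketched as follows. This is an illustrative reading of the equations above, not the authors' implementation: function names are assumptions, and fitness is treated as a value to minimize (lower is better, matching the error-rate FF).

```python
import numpy as np

def rooster_move(x_i, f_i, x_k, f_k, rng, eps=1e-9):
    """Rooster update: Gaussian perturbation whose variance shrinks when a
    randomly chosen rival rooster k is fitter (has lower f)."""
    sigma2 = 1.0 if f_i <= f_k else np.exp((f_k - f_i) / (abs(f_i) + eps))
    return x_i * (1 + rng.normal(0.0, np.sqrt(sigma2), size=x_i.shape))

def hen_move(x_i, f_i, x_r1, f_r1, x_r2, f_r2, rng, eps=1e-9):
    """Hen update: follow the group rooster r1 and a random chicken r2."""
    s1 = np.exp((f_i - f_r1) / (abs(f_i) + eps))
    s2 = np.exp(f_r2 - f_i)
    r = rng.random(x_i.shape)  # uniform random value in [0, 1]
    return x_i + s1 * r * (x_r1 - x_i) + s2 * r * (x_r2 - x_i)

def chick_move(x_i, x_mother, rng):
    """Chick update: follow the mother hen with FL drawn from [0, 2]."""
    fl = rng.uniform(0.0, 2.0)
    return x_i + fl * (x_mother - x_i)
```

In the hyperparameter-tuning context, each position vector would encode one HDBN hyperparameter configuration, scored by the classifier error rate above.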

4. Experimental Validation

The proposed model is simulated using Python 3.6.5 on a PC with an i5-8600K CPU, a GeForce 1050 Ti 4 GB GPU, 16 GB RAM, a 250 GB SSD, and a 1 TB HDD. The performance of the FSHDBN-CID method is validated using the IDS dataset. The dataset comprises 125,973 samples under five class labels, as depicted in Table 1.
Table 1. Dataset details.
The confusion matrices formed by the FSHDBN-CID method on cybersecurity intrusion detection are shown in Figure 3. The figure implies that the FSHDBN-CID model attains effective detection performance under distinct training (TR) and testing (TS) data. For instance, on 60% of TR data, the FSHDBN-CID model identified 27,320 samples as DoS class, 547 samples as R2L class, 6982 samples as Probe class, 0 samples as U2R class, and 39,913 samples as normal class. Moreover, on 40% of TS data, the FSHDBN-CID technique identified 18,111 samples as DoS class, 388 samples as R2L class, 4519 samples as Probe class, 0 samples as U2R class, and 26,815 samples as normal class. Furthermore, on 70% of TR data, the FSHDBN-CID approach identified 31,480 samples as DoS class, 455 samples as R2L class, 7712 samples as Probe class, 2 samples as U2R class, and 46,641 samples as normal class.
Figure 3. Confusion matrices of FSHDBN-CID approach (a) 60% of TR dataset, (b) 40% of TS dataset (c) 70% of TR dataset, and (d) 30% of TS dataset.
Table 2 provides the overall intrusion detection outcomes of the FSHDBN-CID model on 60% of TR data and 40% of TS data. Figure 4 reports the intrusion classification performance of the presented FSHDBN-CID method on 60% of TR data. The experimental outcomes infer that the FSHDBN-CID model shows enhanced results under both aspects. For instance, on the DoS class, the FSHDBN-CID model offered an accuracy of 99.35%, precision of 99.34%, recall of 98.88%, F-score of 99.11%, and MCC of 98.60%. Simultaneously, on the Probe class, the FSHDBN-CID technique presented an accuracy of 99.64%, precision of 97.46%, recall of 98.76%, F-score of 98.10%, and MCC of 97.91%. Concurrently, on the Normal class, the FSHDBN-CID technique rendered an accuracy of 99.28%, precision of 99.53%, recall of 99.12%, F-score of 99.32%, and MCC of 98.56%.
Table 2. Result analysis of FSHDBN-CID algorithm with distinct class labels under 60:40 of TR and TS datasets.
Figure 4. Average analysis of FSHDBN-CID approach under 60% of TR dataset.
Figure 5 reports the intrusion classification performance of the presented FSHDBN-CID technique on 40% of TS data. The experimental outcomes exhibit that the FSHDBN-CID approach depicts enhanced results under both aspects. For instance, on the DoS class, the FSHDBN-CID method presented an accuracy of 99.37%, precision of 99.29%, recall of 98.98%, F-score of 99.14%, and MCC of 98.64%. Concurrently, on the Probe class, the FSHDBN-CID technique granted an accuracy of 99.61%, precision of 97.16%, recall of 98.54%, F-score of 97.85%, and MCC of 97.63%. Likewise, on the Normal class, the FSHDBN-CID method presented an accuracy of 99.27%, precision of 99.60%, recall of 99.04%, F-score of 99.32%, and MCC of 98.53%.
Figure 5. Average analysis of FSHDBN-CID approach under 40% of TS dataset.
Table 3 portrays the complete intrusion detection outcomes of the FSHDBN-CID technique on 70% of TR data and 30% of TS data. The experimental results denote that the FSHDBN-CID approach displays enhanced results under both aspects. For example, on the DoS class, the FSHDBN-CID method presented an accuracy of 98.38%, precision of 97.55%, recall of 98%, F-score of 97.78%, and MCC of 96.50%. At the same time, on the Probe class, the FSHDBN-CID algorithm granted an accuracy of 99.19%, precision of 96.32%, recall of 94.87%, F-score of 95.59%, and MCC of 95.15%. In parallel, on the Normal class, the FSHDBN-CID approach exhibited an accuracy of 98.67%, precision of 98.69%, recall of 98.82%, F-score of 98.76%, and MCC of 97.32%.
Table 3. Result analysis of FSHDBN-CID algorithm with distinct class labels under 70:30 of TR and TS datasets.
Figure 6 signifies the average intrusion classification performance of the presented FSHDBN-CID method on 70% of TR data. The figure indicates that the FSHDBN-CID model reaches an enhanced average accuracy of 99.14%.
Figure 6. Average analysis of FSHDBN-CID approach under 70% of TR dataset.
Figure 7 reports the intrusion classification performance of the presented FSHDBN-CID algorithm on 30% of TS data. The experimental outcomes denote that the FSHDBN-CID approach exhibits enhanced results under both aspects. For example, on the DoS class, the FSHDBN-CID technique displayed an accuracy of 98.30%, precision of 97.52%, recall of 97.84%, F-score of 97.68%, and MCC of 96.35%. At the same time, on the Probe class, the FSHDBN-CID algorithm granted an accuracy of 99.16%, precision of 95.91%, recall of 95.07%, F-score of 95.49%, and MCC of 95.03%. In parallel, on the Normal class, the FSHDBN-CID approach showed an accuracy of 98.59%, precision of 98.59%, recall of 98.76%, F-score of 98.68%, and MCC of 97.16%.
Figure 7. Average analysis of FSHDBN-CID technique under 30% of TS dataset.
The training accuracy (TRA) and validation accuracy (VLA) gained by the FSHDBN-CID approach on the test dataset are exemplified in Figure 8. The experimental results specify that the FSHDBN-CID approach achieves maximal values of TRA and VLA. Notably, the VLA is greater than the TRA.
Figure 8. TRA and VLA analysis of FSHDBN-CID algorithm.
The training loss (TRL) and validation loss (VLL) obtained by the FSHDBN-CID method on the test dataset are shown in Figure 9. The experimental outcomes show that the FSHDBN-CID approach exhibits minimal values of TRL and VLL. In particular, the VLL is lower than the TRL.
Figure 9. TRL and VLL analysis of FSHDBN-CID algorithm.
A clear precision-recall study of the FSHDBN-CID technique on the test dataset is described in Figure 10. The figure specifies that the FSHDBN-CID algorithm results in enhanced precision-recall values for every class label.
Figure 10. Precision-recall analysis of FSHDBN-CID algorithm.
A brief ROC examination of the FSHDBN-CID technique on the test dataset is shown in Figure 11. The results exhibit the capability of the FSHDBN-CID method in classifying the different classes of the test dataset.
Figure 11. ROC curve analysis of FSHDBN-CID algorithm.
Finally, a comparison study of the FSHDBN-CID method with other IDS methods is given in Table 4 and Figure 12 []. The results indicate that the NB model shows poor performance with a minimal accuracy of 90.03%. Next, the KNN model attains a moderately improved accuracy of 94.73%, whereas the LR and SVM models reach reasonable accuracies of 95.03% and 96.56%, respectively. Although the IntruDTree method results in a near-optimal accuracy of 98.13%, the FSHDBN-CID model shows a maximum accuracy of 99.57%. Therefore, the presented FSHDBN-CID model is found to be effective in accomplishing cybersecurity.
Table 4. Comparative analysis of FSHDBN-CID approach with existing methodologies.
Figure 12. Accuracy analysis of FSHDBN-CID approach with existing methodologies.

5. Conclusions

In this paper, an effective FSHDBN-CID technique has been developed for intrusion detection. The presented FSHDBN-CID model mainly concentrates on the recognition of intrusions to accomplish cybersecurity in the network. In the presented FSHDBN-CID model, different levels of data preprocessing are performed to transform the raw data into a compatible format. For feature selection purposes, the JOA is utilized, which in turn reduces the computational complexity. In addition, the presented FSHDBN-CID model exploits the HDBN model for classification purposes. Lastly, the CSO algorithm is applied as a hyperparameter optimizer for the HDBN method. To investigate the enhanced performance of the presented FSHDBN-CID method, a wide range of experiments was performed. The comparison study pointed out the improvements of the FSHDBN-CID model over other models. Therefore, the proposed model can be employed for cyberattack detection in real-time environments such as finance, e-commerce, and education. As part of the future scope, the performance of the FSHDBN-CID model will be further improved using outlier detection and data clustering algorithms.

Author Contributions

Conceptualization, K.A.A. and H.S.; methodology, A.G.; software, A.S.A.A.; validation, H.S., K.A.A. and A.Y.; formal analysis, R.A.; investigation, O.A.; resources, A.Y.; data curation, M.A.D.; writing—original draft preparation, H.S., K.A.A. and A.G.; writing—review and editing, A.S.A.A., A.Y., R.A. and O.A.; visualization, M.A.D.; supervision, K.A.A.; project administration, M.A.D.; funding acquisition, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah Bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R135), Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4320484DSR04).

Data Availability Statement

Data sharing is not applicable to this article, as no datasets were generated during the current study.

Conflicts of Interest

The authors declare no conflict of interest. The manuscript was written through contributions of all authors.

References

  1. Sarker, I.H.; Abushark, Y.B.; Alsolami, F.; Khan, A.I. Intrudtree: A machine learning based cyber security intrusion detection model. Symmetry 2020, 12, 754. [Google Scholar] [CrossRef]
  2. Ferrag, M.A.; Maglaras, L.; Moschoyiannis, S.; Janicke, H. Deep learning for cyber security intrusion detection: Approaches, datasets, and comparative study. J. Inf. Secur. Appl. 2020, 50, 102419. [Google Scholar] [CrossRef]
  3. Abou El Houda, Z.; Brik, B.; Khoukhi, L. “Why Should I Trust Your IDS?”: An Explainable Deep Learning Framework for Intrusion Detection Systems in Internet of Things Networks. IEEE Open J. Commun. Soc. 2022, 3, 1164–1176. [Google Scholar] [CrossRef]
  4. McCarthy, A.; Ghadafi, E.; Andriotis, P.; Legg, P. Functionality-Preserving Adversarial Machine Learning for Robust Classification in Cybersecurity and Intrusion Detection Domains: A Survey. J. Cybersecur. Priv. 2022, 2, 154–190. [Google Scholar] [CrossRef]
  5. Yasin, A.; Fatima, R.; Liu, L.; Yasin, A.; Wang, J. Contemplating social engineering studies and attack scenarios: A review study. Secur. Priv. 2019, 2, e73. [Google Scholar] [CrossRef]
  6. Khan, A.W.; Khan, M.U.; Khan, J.A.; Ahmad, A.; Khan, K.; Zamir, M.; Kim, W.; Ijaz, M.F. Analyzing and evaluating critical challenges and practices for software vendor organizations to secure big data on cloud computing: An AHP-based systematic approach. IEEE Access 2021, 9, 107309–107332. [Google Scholar] [CrossRef]
  7. Ali, L.; Niamat, A.; Khan, J.A.; Golilarz, N.A.; Xingzhong, X.; Noor, A.; Nour, R.; Bukhari, S.A.C. An optimized stacked support vector machines based expert system for the effective prediction of heart failure. IEEE Access 2019, 7, 54007–54014. [Google Scholar] [CrossRef]
  8. Fatima, R.; Yasin, A.; Liu, L.; Jianmin, W. Strategies for counteracting social engineering attacks. Comput. Fraud. Secur. 2022, 2022, 70583. [Google Scholar] [CrossRef]
  9. Yasin, A.; Fatima, R.; Liu, L.; Wanga, J.; Ali, R.; Wei, Z. Counteracting social engineering attacks. Comput. Fraud. Secur. 2021, 2021, 15–19. [Google Scholar] [CrossRef]
  10. Ahmad, I.; Wang, X.; Zhu, M.; Wang, C.; Pi, Y.; Khan, J.A.; Khan, S.; Samuel, O.W.; Chen, S.; Li, G. EEG-Based Epileptic Seizure Detection via Machine/Deep Learning Approaches: A Systematic Review. Comput. Intell. Neurosci. 2022, 2022, 6486570. [Google Scholar] [CrossRef]
  11. Li, B.; Wu, Y.; Song, J.; Lu, R.; Li, T.; Zhao, L. DeepFed: Federated deep learning for intrusion detection in industrial cyber–physical systems. IEEE Trans. Ind. Inform. 2020, 17, 5615–5624. [Google Scholar] [CrossRef]
  12. Wu, K.; Chen, Z.; Li, W. A novel intrusion detection model for a massive network using convolutional neural networks. IEEE Access 2018, 6, 50850–50859. [Google Scholar] [CrossRef]
  13. Al-Abassi, A.; Karimipour, H.; Dehghantanha, A.; Parizi, R.M. An ensemble deep learning-based cyber-attack detection in industrial control system. IEEE Access 2020, 8, 83965–83973. [Google Scholar] [CrossRef]
  14. Akgun, D.; Hizal, S.; Cavusoglu, U. A new DDoS attacks intrusion detection model based on deep learning for cybersecurity. Comput. Secur. 2022, 118, 102748. [Google Scholar] [CrossRef]
  15. Haider, A.; Khan, M.A.; Rehman, A.; Ur, R.M.; Kim, H.S. A real-time sequential deep extreme learning machine cybersecurity intrusion detection system. CMC Comput. Mater. Contin. 2021, 66, 1785–1798. [Google Scholar] [CrossRef]
  16. Zhang, H.; Wu, C.Q.; Gao, S.; Wang, Z.; Xu, Y.; Liu, Y. An Effective Deep Learning Based Scheme for Network Intrusion Detection. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 682–687. [Google Scholar]
  17. Yang, K.; Liu, J.; Zhang, C.; Fang, Y. Adversarial Examples against the Deep Learning Based Network Intrusion Detection Systems. In Proceedings of the MILCOM 2018-2018 IEEE Military Communications Conference (MILCOM), Los Angeles, CA, USA, 29–31 October 2018; pp. 559–564. [Google Scholar]
  18. Di Mauro, M.; Galatro, G.; Fortino, G.; Liotta, A. Supervised feature selection techniques in network intrusion detection: A critical review. Eng. Appl. Artif. Intell. 2021, 101, 104216. [Google Scholar] [CrossRef]
  19. Nimbalkar, P.; Kshirsagar, D. Feature selection for intrusion detection system in Internet-of-Things (IoT). ICT Express 2021, 7, 177–181. [Google Scholar] [CrossRef]
  20. Li, X.; Yi, P.; Wei, W.; Jiang, Y.; Tian, L. LNNLS-KH: A feature selection method for network intrusion detection. Secur. Commun. Netw. 2021, 2021, 8830431. [Google Scholar] [CrossRef]
  21. Mohammadi, S.; Mirvaziri, H.; Ghazizadeh-Ahsaee, M.; Karimipour, H. Cyber intrusion detection by combined feature selection algorithm. J. Inf. Secur. Appl. 2019, 44, 80–88. [Google Scholar] [CrossRef]
  22. Rao, R. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34. [Google Scholar]
  23. Thirumoorthy, K.; Muneeswaran, K. A hybrid approach for text document clustering using Jaya optimization algorithm. Expert Syst. Appl. 2021, 178, 115040. [Google Scholar] [CrossRef]
  24. Fang, Z.; Roy, K.; Mares, J.; Sham, C.W.; Chen, B.; Lim, J.B. Deep Learning-Based Axial Capacity Prediction for Cold-Formed Steel Channel Sections using Deep Belief Network. In Structures; Elsevier: Amsterdam, The Netherlands, 2021; Volume 33, pp. 2792–2802. [Google Scholar]
  25. Othman, A.M.; El-Fergany, A.A. Adaptive virtual-inertia control and chicken swarm optimizer for frequency stability in power-grids penetrated by renewable energy sources. Neural Comput. Appl. 2021, 33, 2905–2918. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
