Optimization of Intrusion Detection Systems Determined by Ameliorated HNADAMSGD Algorithm
Abstract
1. Introduction
Gap Analysis
 RQ1: To find the combination of hyperparameters that maximizes model performance by diminishing generalization error and computational cost.
 RQ2: To find a hyperparameter response space, which depends on the methodology, the hyperparameters, the dataset and the metrics.
 RQ3: To deploy a methodology that samples candidate parameters under a cross-validation scheme.
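These research questions can be illustrated with a small, hedged sketch: candidate hyperparameters sampled at random and scored under a cross-validation scheme. The model, search space, metric and data below are illustrative placeholders, not the paper's experimental setup.

```python
# Illustrative sketch (not the paper's code): random hyperparameter search
# with k-fold cross-validation, mirroring RQ1-RQ3. Model, search space and
# scoring metric are placeholder assumptions.
import numpy as np
from scipy.stats import loguniform
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X = np.random.rand(200, 10)               # toy data standing in for the IDS dataset
y = np.random.randint(0, 2, 200)

# Hyperparameter response space (RQ2): candidates sampled from it (RQ3)
param_space = {"C": loguniform(1e-3, 1e3)}

search = RandomizedSearchCV(
    LogisticRegression(max_iter=500),
    param_space,
    n_iter=10,        # number of sampled candidate settings
    cv=5,             # cross-validation scheme (RQ3)
    scoring="f1",     # metric defining the response surface (RQ2)
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)                # best setting found (RQ1)
```

The cross-validated score plays the role of the response surface: each sampled setting is evaluated on held-out folds, trading generalization error against the computational cost of refitting.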
2. Related Work
Research Gap
 The performance of a machine learning algorithm depends on the selection of hyperparameters.
 Hyperparameter optimization performance depends on various factors, such as the number of hidden layers, the number of units per layer, the dropout amount, the regularizer, the learning rate and the weight decay.
 A non-optimal hyperparameter setting drastically affects performance, which can swing from an extremely low learning rate to a very large one.
 The hyper-tuning approach varies with the type, nature and size of the dataset, as there is no well-defined formula for finding hyperparameters.
3. Dataset Description
4. Traditional Regression Analysis
5. Proposed Hyperparameter Optimization Algorithm
Experimental Analysis
Algorithm 1: HNADAMSGD 
Initialize the slope “n” and intercept “b” at the start with random values. 
Calculate the gradient. 
Update the calculated gradient with respect to “n” and “b” using Equations (50)–(52) [31]. 
$$\frac{\partial s}{\partial n}=\frac{\partial}{\partial n}\sum_{j=1}^{M}\left((n q_{j}+b)-p_{j}\right)^{2},$$

$$\frac{\partial s}{\partial n}=2\sum_{j=1}^{M}\left((n q_{j}+b)-p_{j}\right)q_{j}.$$

Multiply the gradients with respect to “n” and “b” by the learning rate. 
Update the values of “n” and “b” at every step. 
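As a standalone illustration of these update steps (not the paper's implementation), the slope n and intercept b of a line can be fitted by repeatedly applying the two gradients; the data and learning rate below are invented for the example.

```python
# Minimal sketch of the update steps above: fit p ~ n*q + b by gradient
# descent. Variable names follow the text (n = slope, b = intercept).
import numpy as np

q = np.array([1.0, 2.0, 3.0, 4.0])
p = 2.0 * q + 1.0                      # targets generated with n = 2, b = 1

n, b = 0.0, 0.0                        # initial values
lr = 0.01                              # learning rate
for _ in range(2000):
    err = (n * q + b) - p              # residuals (n*q_j + b) - p_j
    grad_n = 2.0 * np.sum(err * q)     # ds/dn
    grad_b = 2.0 * np.sum(err)         # ds/db
    n -= lr * grad_n                   # update per step
    b -= lr * grad_b

print(round(n, 2), round(b, 2))        # approaches 2.0 and 1.0
```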
The class-conditional densities are not modeled in logistic discrimination: 
Model parameter β 
Parameter initialization: 
Normal distribution $\beta \sim \mathcal{N}\left(0,{\theta}^{2}\right)$ 
Initial moment vector m = 0, 
Initial moment vector u = 0, 
Initial step count T = 0, 
Initial convergence flag Boolean = F. 
While Boolean == F: 
Shuffle the training set; for each mini-batch $\mathrm{b}$ of the training set, do the update step T = T + 1. 
Compute the gradient vector $\mathrm{G}={\nabla}_{\beta}\mathcal{L}\left(\beta ;b\right)$ on the mini-batch b. 
Update vector $m={\alpha}_{1}\cdot m+\left(1-{\alpha}_{1}\right)\cdot G$ 
Update vector $u={\alpha}_{2}\cdot u+\left(1-{\alpha}_{2}\right)\cdot G\odot G$ 
Rescale vector $\hat{m}=m/\left(1-{\alpha}_{1}^{T}\right)$ 
Rescale vector $\hat{u}=u/\left(1-{\alpha}_{2}^{T}\right)$ 
Update the variable with Equation (52) [29]: 
$$\beta =\beta -\frac{N}{\sqrt{\hat{u}}+\epsilon}\odot \left({\alpha}_{1}\hat{m}+\frac{\left(1-{\alpha}_{1}\right)}{1-{\alpha}_{1}^{T}}\cdot G\right),$$

End for 
If convergence condition holds then 
Boolean = T 
End if 
End while 
Return model variable β 
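A minimal sketch of this moment-based update loop, assuming a simple quadratic loss in place of the paper's model; the decay parameters, learning rate and target vector are all illustrative.

```python
# Hedged sketch of the HNADAMSGD-style update loop above (a NADAM-flavoured
# step on gradients). The quadratic toy loss and constants are assumptions,
# not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
beta = rng.normal(0.0, 1.0, size=3)        # parameters drawn from N(0, theta^2)
target = np.array([1.0, -2.0, 0.5])        # optimum of the toy loss

m = np.zeros(3)                            # first-moment vector
u = np.zeros(3)                            # second-moment vector
a1, a2 = 0.9, 0.999                        # decay parameters alpha_1, alpha_2
lr, eps = 0.05, 1e-8                       # learning rate N and stabiliser
T = 0

for _ in range(500):
    T += 1
    G = 2.0 * (beta - target)              # gradient of ||beta - target||^2
    m = a1 * m + (1 - a1) * G              # update first moment
    u = a2 * u + (1 - a2) * G * G          # update second moment (elementwise)
    m_hat = m / (1 - a1**T)                # bias-corrected rescaling
    u_hat = u / (1 - a2**T)
    # Nesterov-style step combining m_hat with the current gradient
    beta -= lr / (np.sqrt(u_hat) + eps) * (
        a1 * m_hat + (1 - a1) * G / (1 - a1**T)
    )

print(np.round(beta, 2))                   # settles near the target vector
```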
The iterative loop: For i = 0, …, c 
${P}_{i}\leftarrow rand\left(-0.10,0.10\right)$, to compute a random value ranging between −0.10 and 0.10. 
The iteration is made: 
Repeat 
For i = 0, …, c 
$\Delta {P}_{i}\leftarrow 0$ 
For T = 1, …, M 
$O\leftarrow 0$ 
For i = 0, …, c 
$O\leftarrow O+{P}_{i}{A}_{i}^{T}$ 
$B\leftarrow sigmoid\left(O\right)$ 
For i = 0, …, c 
$\Delta {P}_{i}\leftarrow \Delta {P}_{i}+\left({R}^{T}-B\right){A}_{i}^{T}$ 
For i = 0, …, c 
${P}_{i}\leftarrow {P}_{i}+n\Delta {P}_{i}$ 
Until convergence, repeat the iteration. 
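The binary logistic loop above can be sketched as follows; the synthetic data, feature count and learning rate are assumptions for illustration.

```python
# Sketch of the binary logistic-regression loop above: weights P start in
# [-0.10, 0.10] and accumulate (r - b) * a updates. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))              # inputs a_i (c = 3 features)
true_w = np.array([1.5, -2.0, 0.5])
R = (A @ true_w > 0).astype(float)         # binary targets r

def sigmoid(z):
    z = np.clip(z, -30.0, 30.0)            # avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-z))

P = rng.uniform(-0.10, 0.10, size=3)       # random init in [-0.10, 0.10]
n = 0.01                                   # learning rate
for _ in range(500):
    dP = np.zeros(3)                       # accumulate the batch gradient
    for a, r in zip(A, R):
        o = P @ a                          # weighted sum O
        b = sigmoid(o)                     # predicted probability B
        dP += (r - b) * a                  # gradient of the log-likelihood
    P += n * dP                            # ascent step P <- P + n * dP

acc = float(np.mean((sigmoid(A @ P) > 0.5) == R))
print(acc)                                 # training accuracy
```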
The iterative loop: For j = 1, …, v 
For i = 0, …, c 
${P}_{ij}\leftarrow rand\left(-0.10,0.10\right)$ 
The iteration is made using: 
Repeat 
For j = 1, …, v 
For i = 0, …, c 
$\Delta {P}_{ij}\leftarrow 0$ 
For T = 1, …, M 
For j = 1, …, v 
${Q}_{j}\leftarrow 0$ 
For i = 0, …, c 
${Q}_{j}\leftarrow {Q}_{j}+{P}_{ij}{A}_{i}^{T}$ 
For j = 1, …, v 
${B}_{j}\leftarrow \exp\left({Q}_{j}\right)/{\sum}_{k=1}^{v}\exp\left({Q}_{k}\right)$ 
For j = 1, …, v 
For i = 0, …, c 
$\Delta {P}_{ij}\leftarrow \Delta {P}_{ij}+\left({R}_{j}^{T}-{B}_{j}\right){A}_{i}^{T}$ 
For j = 1, …, v 
For i = 0, …, c 
${P}_{ij}\leftarrow {P}_{ij}+n\Delta {P}_{ij}$ 
Repeat the iteration until convergence. 
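The multiclass (softmax) variant can likewise be sketched; the data, dimensions and learning rate are again invented for the example, and the batch gradient is applied directly as in the pseudocode.

```python
# Sketch of the multiclass (softmax) loop above: one weight row per class j,
# softmax outputs B, and updates along (r - b) * a. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
c, v = 4, 3                                 # features, classes
X = rng.normal(size=(150, c))
true_W = rng.normal(size=(v, c))
Y = np.argmax(X @ true_W.T, axis=1)         # class labels from a linear rule
R = np.eye(v)[Y]                            # one-hot targets r

W = rng.uniform(-0.10, 0.10, size=(v, c))   # random init in [-0.10, 0.10]
n = 0.005                                   # learning rate
for _ in range(2000):
    Q = X @ W.T                             # per-class scores Q_j
    B = np.exp(Q - Q.max(axis=1, keepdims=True))
    B /= B.sum(axis=1, keepdims=True)       # softmax exp(Q_j) / sum_k exp(Q_k)
    W += n * (R - B).T @ X                  # batch step: dP_ij = (r_j - b_j) a_i

acc = float(np.mean(np.argmax(X @ W.T, axis=1) == Y))
print(acc)                                  # training accuracy
```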
Inputs: training set T; learning rate N; normal distribution parameter θ; decay parameters α_{1}, α_{2}. 
Algorithm 2: Complexity Computation 
Initialize the weights as w_{1}. 
For m = 1 to M do 
Draw a sample index s uniformly at random and update with Equation (53) [29]: 
$${\mathrm{w}}_{\mathrm{m}+1}\leftarrow {\mathrm{w}}_{\mathrm{m}}-\mathbf{\beta}\nabla {\mathrm{f}}_{\mathrm{s}}\left({\mathrm{w}}_{\mathrm{m}}\right),$$

end for 
Return w_{M}. 
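Algorithm 2's single-sample update can be sketched on a toy least-squares problem; the data and step size are illustrative assumptions.

```python
# Sketch of Algorithm 2's loop: at each of M steps, pick one sample s
# uniformly at random and step against its gradient. Toy least-squares data.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 2))
w_true = np.array([3.0, -1.0])
y = X @ w_true                             # noiseless targets

w = np.zeros(2)                            # initialise the weights w_1
beta = 0.02                                # step size
for m in range(3000):
    s = rng.integers(0, len(X))            # sample s drawn uniformly at random
    grad = 2.0 * (X[s] @ w - y[s]) * X[s]  # gradient of (x_s . w - y_s)^2
    w = w - beta * grad                    # w_{m+1} <- w_m - beta * grad f_s(w_m)

print(np.round(w, 1))                      # approaches [3.0, -1.0]
```

Each iteration costs O(i) in the number of weights, and roughly O(1/ε) iterations reach ε-accuracy, matching the complexity table reported for the method.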
6. Results and Discussion
7. Research Limitations
 The hyperparameter optimization algorithm’s performance depends on various factors, such as the number of hidden layers, the number of units per layer, the dropout amount, the regularizer, the learning rate and the weight decay.
 A non-optimal hyperparameter setting drastically affects the algorithm’s performance, which can swing from an extremely low learning rate to a very large one.
 The hyper-tuning approach varies with the type, nature and size of the dataset, as there is no well-defined formula for finding hyperparameters.
 The critical issue is choosing how many parameter settings to test: with the wrong hyperparameters, performance suffers under an extremely low learning rate, such as $1 \times 10^{-5}$, or a very large learning rate, such as 10.
 There exists no well-defined formula for finding hyperparameters, as they depend on the algorithm type, the dataset and the dataset size.
 The performance of the algorithm varies with changes in the dataset’s parameters.
8. Conclusions and Future Scope
 IDS for the Internet of Things (IoT), a booming area for attackers; this includes breaching the security of automotive, wearable and connected devices.
 IDS for cyber insurance, an emerging idea that is receiving attention for mitigating the damage from data losses, sabotage and theft events.
 IDS for analyzing the effectiveness of nature-inspired optimization algorithms over the latest datasets, such as CCIDS and streaming datasets.
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
1. Mohd, N.; Singh, A.; Bhadauria, H.S. Intrusion Detection System Based on Hybrid Hierarchical Classifiers. Wirel. Pers. Commun. 2021, 121, 659–686.
2. Pokuri, S.R. A Hybrid Approach for Feature Selection Analysis on the Intrusion Detection System Using Naïve Bayes and Improved BAT Algorithm. Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 5078–5087.
3. Dahiya, N.; Bhatnagar, V.; Singh, M. Efficient Materialized View Selection for Multi-Dimensional Data Cube Models. Int. J. Inf. Retr. Res. 2016, 6, 52–74.
4. Derhab, A.; Bouras, A.; Senouci, M.R.; Imran, M. Fortifying intrusion detection systems in dynamic Ad Hoc and wireless sensor networks. Int. J. Distrib. Sens. Netw. 2014, 10, 608162.
5. Iman, A.N.; Ahmad, T. Data Reduction for Optimizing Feature Selection in Modeling Intrusion Detection System. Int. J. Intell. Eng. Syst. 2020, 13, 199–207.
6. Drewek-Ossowicka, A.; Pietrołaj, M.; Rumiński, J. A survey of neural networks usage for intrusion detection systems. J. Ambient Intell. Humaniz. Comput. 2021, 12, 497–514.
7. Talita, A.S.; Nataza, O.S.; Rustam, Z. Naïve Bayes Classifier and Particle Swarm Optimization Feature Selection Method for Classifying Intrusion Detection System Dataset. J. Phys. Conf. Ser. 2021, 1752, 12–21.
8. Benisha, R.B.; Ratna, S.R. Detection of data integrity attacks by constructing an effective intrusion detection system. J. Ambient Intell. Humaniz. Comput. 2021, 11, 5233–5244.
9. Singh, P.; Krishnamoorthy, S.; Nayyar, A.; Luhach, A.K.; Kaur, A. Soft-computing-based false alarm reduction for hierarchical data of intrusion detection system. Int. J. Distrib. Sens. Netw. 2021, 15, 1–12.
10. Halim, Z.; Yousaf, M.N.; Waqas, M.; Sulaiman, M.; Abbas, G.; Hussain, M.; Hanif, M. An effective genetic algorithm-based feature selection method for intrusion detection systems. Comput. Secur. 2021, 110, 102448.
11. Li, Y.; Ghoreishi, S.M.; Issakhov, A. Improving the Accuracy of Network Intrusion Detection System in Medical IoT Systems through Butterfly Optimization Algorithm. Wirel. Pers. Commun. 2021, 1–19.
12. Tu, S.; Waqas, M.; Rehman, S.U.; Mir, T.; Abbas, G.; Abbas, Z.H.; Ahmad, I. Reinforcement learning assisted impersonation attack detection in device-to-device communications. IEEE Trans. Veh. Technol. 2021, 70, 1474–1479.
13. Halim, Z.; Rehan, M. On identification of driving-induced stress using electroencephalogram signals: A framework based on wearable safety-critical scheme and machine learning. Inf. Fusion 2021, 53, 66–79.
14. Di Mauro, M.; Galatro, G.; Fortino, G.; Liotta, A. Supervised feature selection techniques in network intrusion detection: A critical review. Eng. Appl. Artif. Intell. 2021, 101, 104216.
15. Lee, J.; Pak, J.; Lee, M. Network Intrusion Detection System using Feature Extraction based on Deep Sparse Autoencoder. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea, 21–23 October 2020; pp. 1282–1287.
16. Injadat, M.; Moubayed, A.; Nassif, A.B.; Shami, A. Multi-stage optimized machine learning framework for network intrusion detection. IEEE Trans. Netw. Serv. Manag. 2020, 18, 1803–1816.
17. Di Mauro, M.; Galatro, G.; Liotta, A. Experimental review of neural-based approaches for network intrusion management. IEEE Trans. Netw. Serv. Manag. 2020, 17, 2480–2495.
18. Belgrana, F.Z.; Benamrane, N.; Hamaida, M.A.; Chaabani, A.M.; Taleb-Ahmed, A. Network Intrusion Detection System Using Neural Network and Condensed Nearest Neighbors with Selection of NSL-KDD Influencing Features. In Proceedings of the 2020 IEEE International Conference on Internet of Things and Intelligence System (IoTaIS), Bali, Indonesia, 27–28 January 2021; pp. 23–29.
19. Liu, Z.; Liu, Y.; Yang, X.; Li, X. Integrity Auditing for Multi-Copy in Cloud Storage Based on Red-Black Tree. IEEE Access 2021, 9, 75117–75131.
20. Yu, D.; Lee, H.; Park, S.H.; Hong, S.E. Deep Learning Methods for Joint Optimization of Beamforming and Fronthaul Quantization in Cloud Radio Access Networks. IEEE Wirel. Commun. Lett. 2021, 10, 2180–2184.
21. Latif, S.; e Huma, Z.; Jamal, S.S.; Ahmed, F.; Ahmad, J.; Zahid, A.; Abbasi, Q.H. Intrusion Detection Framework for the Internet of Things using a Dense Random Neural Network. IEEE Trans. Ind. Inform. 2021.
22. Huma, Z.E.; Latif, S.; Ahmad, J.; Idrees, Z.; Ibrar, A.; Zou, Z.; Baothman, F. A Hybrid Deep Random Neural Network for Cyberattack Detection in the Industrial Internet of Things. IEEE Access 2021, 9, 595–605.
23. Moustafa, N.; Slay, J. UNSW-NB15: A Comprehensive Data Set for Network Intrusion Detection Systems (UNSW-NB15 Network Data Set). In Proceedings of the 2015 Military Communications and Information Systems Conference (MilCIS), Canberra, Australia, 10–12 November 2015.
24. Moustafa, N.; Slay, J. The evaluation of Network Anomaly Detection Systems: Statistical analysis of the UNSW-NB15 data set and the comparison with the KDD99 data set. Inf. Secur. J. A Glob. Perspect. 2016, 25, 1–14.
25. Moustafa, N.; Slay, J.; Creech, G. Novel geometric area analysis technique for anomaly detection using trapezoidal area estimation on large-scale networks. IEEE Trans. Big Data 2017, 5, 481–494.
26. Bhattacharjee, P.S.; Fujail, A.K.M.; Begum, S.A. Intrusion detection system for NSL-KDD data set using vectorised fitness function in genetic algorithm. Adv. Comput. Sci. Technol. 2017, 10, 235–246.
27. Wu, J.; Liu, S.; Zhou, Z.; Zhan, M. Toward intelligent intrusion prediction for wireless sensor networks using three-layer brain-like learning. Int. J. Distrib. Sens. Netw. 2012, 8, 243–841.
28. Singh, D.K.; Shrivastava, M. Evolutionary Algorithm-based Feature Selection for an Intrusion Detection System. Eng. Technol. Appl. Sci. Res. 2021, 11, 7130–7134.
29. Al-Safi, A.H.S.; Hani, Z.I.R.; Zahra, M.M.A. Using a Hybrid Algorithm and Feature Selection for Network Anomaly Intrusion Detection. J. Mech. Eng. Res. Dev. 2021, 44, 253–262.
30. Alrajeh, N.A.; Lloret, J. Intrusion detection systems based on artificial intelligence techniques in wireless sensor networks. Int. J. Distrib. Sens. Netw. 2013, 9, 351047.
31. Khan, M.A. HCRNNIDS: Hybrid Convolutional Recurrent Neural Network-Based Network Intrusion Detection System. Processes 2021, 9, 834.
32. Moustafa, N.; Creech, G.; Slay, J. Big data analytics for intrusion detection system: Statistical decision-making using finite Dirichlet mixture models. In Data Analytics and Decision Support for Cybersecurity; Springer: Cham, Switzerland, 2017.
33. Sarhan, M.; Layeghy, S.; Moustafa, N.; Portmann, M. Netflow datasets for machine learning-based network intrusion detection systems. In Big Data Technologies and Applications; Springer: Cham, Switzerland, 2020.
| S.No. | Feature Name | Description |
|---|---|---|
| 1 | Proto | Protocol |
| 2 | Dur | Duration |
| 3 | State | State protocol |
| 4 | Service | Services by network |
| 5 | Spkts | Source-to-destination packet count |
| 6 | Dpkts | Destination-to-source packet count |
| 7 | Sbytes | Transaction bytes, source to destination |
| 8 | Dbytes | Transaction bytes, destination to source |
| 9 | Rate | Data rate |
| 10 | Sttl | Source-to-destination time-to-live value |
| 11 | Dttl | Destination-to-source time-to-live value |
| 12 | Sload | Source bits per second |
| 13 | Dload | Destination bits per second |
| 14 | Sloss | Source packets retransmitted or dropped |
| 15 | Dloss | Destination packets retransmitted or dropped |
| 16 | Sinpkt | Source inter-packet arrival time (ms) |
| 17 | Dinpkt | Destination inter-packet arrival time (ms) |
| 18 | Sjit | Source jitter (ms) |
| 19 | Djit | Destination jitter (ms) |
| 20 | Swin | Source TCP window advertisement value |
| 21 | Stcpb | Source TCP base sequence number |
| 22 | Dtcpb | Destination TCP base sequence number |
| 23 | Dwin | Destination TCP window advertisement value |
| 24 | Tcprtt | TCP connection setup round-trip time |
| Method | Iterations | Per-Iteration Cost | Total Cost |
|---|---|---|---|
| HNADAMSGD | $\mathrm{O}(1/\epsilon)$ | $\mathrm{O}(i)$ | $\mathrm{O}(i/\epsilon)$ |
Logistic Regression

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| 0 | 0.48 | 0.82 | 0.61 | 3971 |
| 1 | 0.99 | 0.96 | 0.98 | 78,361 |
| Accuracy | | | 0.96 | 82,332 |
| Macro Average | 0.74 | 0.89 | 0.80 | 82,332 |
| Weighted Average | 0.98 | 0.96 | 0.96 | 82,332 |

Ridge Classifier

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| 0 | 0.70 | 0.72 | 0.71 | 3971 |
| 1 | 0.99 | 0.98 | 0.98 | 78,361 |
| Accuracy | | | 0.98 | 82,332 |
| Macro Average | 0.85 | 0.86 | 0.85 | 82,332 |
| Weighted Average | 0.98 | 0.98 | 0.98 | 82,332 |

HNADAMSGD

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| 0 | 0.71 | 0.70 | 0.71 | 3971 |
| 1 | 0.99 | 0.98 | 0.99 | 78,361 |
| Accuracy | | | 0.99 | 82,332 |
| Macro Average | 0.87 | 0.88 | 0.86 | 82,332 |
| Weighted Average | 0.98 | 0.98 | 0.98 | 82,332 |

Ensemble

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| 0 | 0.49 | 0.80 | 0.60 | 3971 |
| 1 | 0.98 | 0.98 | 0.98 | 78,361 |
| Accuracy | | | 0.97 | 82,332 |
| Macro Average | 0.76 | 0.88 | 0.86 | 82,332 |
| Weighted Average | 0.98 | 0.96 | 0.97 | 82,332 |
Processing Time

| S.No. | Algorithm | Training Time (s) | Testing Time (s) |
|---|---|---|---|
| 1 | Logistic Regression | 500.23 | 15.23 |
| 2 | Ridge Classifier | 450.50 | 14.56 |
| 3 | HNADAMSGD | 300.30 | 12.42 |
| 4 | Ensemble | 780.48 | 20.31 |
Performance Analysis

| S.No. | Algorithm | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|
| 1 | Logistic Regression | 0.967 | 0.955 | 0.971 |
| 2 | Ridge Classifier | 0.986 | 0.961 | 0.989 |
| 3 | HNADAMSGD | 0.998 | 0.977 | 0.995 |
| 4 | Ensemble | 0.977 | 0.957 | 0.963 |
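For reference, accuracy, sensitivity and specificity all derive from confusion-matrix counts; the counts in this sketch are invented for illustration and do not reproduce the paper's results.

```python
# Hedged sketch relating the reported metrics to confusion-matrix counts.
# The tp/tn/fp/fn values are invented placeholders, not the paper's data.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # fraction of correct predictions
    sensitivity = tp / (tp + fn)                 # recall on the attack class
    specificity = tn / (tn + fp)                 # recall on the normal class
    return accuracy, sensitivity, specificity

acc, sens, spec = metrics(tp=950, tn=9000, fp=50, fn=30)
print(round(acc, 3), round(sens, 3), round(spec, 3))  # -> 0.992 0.969 0.994
```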
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Shyla, S.; Bhatnagar, V.; Bali, V.; Bali, S. Optimization of Intrusion Detection Systems Determined by Ameliorated HNADAMSGD Algorithm. Electronics 2022, 11, 507. https://doi.org/10.3390/electronics11040507