Method for Detecting Low-Intensity DDoS Attacks Based on a Combined Neural Network and Its Application in Law Enforcement Activities
Abstract
1. Introduction
2. Related Works Discussion
3. Materials and Methods
3.1. Theoretical Foundations and Development of an Algorithm for Analysing Low-Intensity DDoS Attacks
3.2. Development of a Combined Neural Network
- Confidence estimate conf = 1 − entropy(p);
- Agreement metric between the heads (a minimal sketch of both metrics follows below).
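A minimal NumPy sketch of these two auxiliary signals is given below. The normalised-entropy form of the confidence estimate and the particular head-agreement definition (one minus the absolute difference between the classifier probability and the regressor level) are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np

def confidence(p, eps=1e-12):
    # conf = 1 - entropy(p); entropy is normalised by log(K) so conf lies in [0, 1]
    # (the normalisation is an assumption).
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    p = p / p.sum()
    entropy = -(p * np.log(p)).sum() / np.log(len(p))
    return 1.0 - entropy

def head_agreement(p_attack, level):
    # Illustrative agreement between the classifier head (p_attack) and the
    # intensity regressor output (level in [0, 1]); both names are assumptions.
    return 1.0 - abs(float(p_attack) - float(level))

print(confidence([0.9, 0.1]))       # ~0.53: moderately confident prediction
print(head_agreement(0.92, 0.85))   # 0.93: the two heads largely agree
```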
3.3. Synthesis and Test Implementation of a Neural Network Method for Detecting Low-Intensity DDoS Attacks
- An autoencoder (or one-class module) for learning normal behaviour and generating a raw anomaly score;
- A discriminative head (classifier) for the binary attack/benign decision, taking class-imbalance countermeasures (focal loss, class weights) into account;
- An intensity regressor, producing an uncalibrated attack “level”. The regressor, AE-score, and classifier-probability outputs are then blended, followed by a calibration step (Platt or isotonic), producing the final calibrated score S ∈ [0, 1].
- An autoencoder (the encoder and decoder are implemented via a “fullyConnectedLayer” with a “custom MSE loss” in a “custom training loop”) for one-class anomaly scoring;
- A classifier (implemented via the sequence “fullyConnectedLayer → softmaxLayer → classificationLayer”, with focal loss via a custom loss function) for the attack/benign decision;
- An attack-level regressor (implemented via the sequence “fullyConnectedLayer → sigmoidLayer → custom regression loss”); an illustrative sketch of this three-head architecture follows this list.
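The modules listed above are described in MATLAB layer terms; as a language-neutral illustration, the sketch below reproduces the same three-head layout (1D-CNN front end, transformer encoder, per-position autoencoder, classifier, and regressor heads) in PyTorch. The layer sizes follow the hyperparameter table in Section 3.3, while the class name, input channel count, and pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class CombinedDetector(nn.Module):
    """Minimal sketch of the combined architecture: 1D-CNN front end ->
    transformer encoder -> autoencoder / classifier / regressor heads.
    Sizes follow the hyperparameter table (d_model=256, 4 layers, 8 heads,
    FFN=512, bottleneck C_e=64); everything else is illustrative."""
    def __init__(self, in_channels=12, d_model=256, n_layers=4, n_heads=8,
                 ffn_dim=512, bottleneck=64, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, d_model, kernel_size=3, padding=1), nn.ReLU(),
        )
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, ffn_dim,
                                               dropout=0.1, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # One-class branch: per-position autoencoder over the embeddings.
        self.ae = nn.Sequential(nn.Linear(d_model, bottleneck), nn.ReLU(),
                                nn.Linear(bottleneck, d_model))
        self.classifier = nn.Linear(d_model, n_classes)                  # attack/benign logits
        self.regressor = nn.Sequential(nn.Linear(d_model, 1), nn.Sigmoid())  # raw attack level

    def forward(self, x):                        # x: (batch, channels, N)
        z = self.cnn(x).transpose(1, 2)          # (batch, N, d_model)
        z = self.encoder(z)
        ae_err = ((self.ae(z) - z) ** 2).mean(dim=(1, 2))   # raw anomaly score
        pooled = z.mean(dim=1)                   # global average pooling
        logits = self.classifier(pooled)
        level = self.regressor(pooled).squeeze(-1)
        return logits, level, ae_err

# Example: two windows of N = 600 samples with 12 telemetry channels.
model = CombinedDetector()
logits, level, ae_err = model(torch.randn(2, 12, 600))
```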
3.4. Estimation of the Developed Methods’ Computational Cost
4. Case Study
4.1. Formation, Analysis, and Preprocessing of the Training Dataset
4.2. Results of the Developed Combined Neural Network Training
4.3. Results of Solving the Low-Intensity DDoS Detection Problem Using the Developed Method
4.4. Results of Forensic Analysis and the Developed Method Implementation in the Law Enforcement Agencies’ Operational Activities
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| DDoS | Distributed Denial of Service | 
| IDS/IPS | Intrusion Detection System/Intrusion Prevention System | 
| TCP | Transmission Control Protocol | 
| WAF | Web Application Firewall | 
| LDDM | Local Discriminative Distance Metrics | 
| HTTP | HyperText Transfer Protocol | 
| SVM | Support Vector Machine | 
| KNN | k-Nearest Neighbours | 
| CNN | Convolutional Neural Network | 
| RNN | Recurrent Neural Network | 
| LSTM | Long Short-Term Memory | 
| IPFIX | Internet Protocol Flow Information Export | 
| RF | Random Forest | 
| SDN | Software-Defined Networking | 
| MAD | Median Absolute Deviation | 
| EWMA | Exponentially Weighted Moving Average | 
| ReLU | Rectified Linear Unit | 
| AE | Autoencoder | 
| TPR | True Positive Rate | 
| FPR | False Positive Rate | 
| STFT | Short-Time Fourier Transform | 
| PCAP | Packet Capture | 
| MSE | Mean Squared Error | 
| FFN | Feed-Forward Network | 
| FLOPs | Floating-Point Operations | 
| RAM | Random-Access Memory | 
| ANOVA | Analysis of Variance | 
| AUC | Area Under the Curve | 
| ROC | Receiver Operating Characteristic | 
| KS-test | Kolmogorov–Smirnov test | 
| SLA | Service Level Agreement | 
| IAT | Inter-Arrival Time | 
| ECE | Expected Calibration Error | 
Appendix A



- For H0 (when Ak = 0 for all k), each term Xk²/σ² follows a central χ²(1) distribution, so the statistic T is centrally χ²-distributed under H0;
- For H1, the non-zero part of each term is equivalent to a shift in the mean by μk = Ak, and the total non-centrality parameter is λ = Σk∈KA Ak²/σ², so T follows a non-central χ² distribution;
- The IW and IQ(T) calculation;
- A test statistic T and p-value calculation;
- If T > tα (the analytically or empirically calibrated threshold), an alarm is generated and issued, with an interpretation consisting of a list of the most-contributing indices k and their comparison with flows/IPs;
- For operational work, exponentially weighted (EWMA) estimates of the energies and scores are used to dampen short-term bursts and accumulate low-and-slow signals;
- According to Theorem A2, asymptotic confidence intervals for the intensity estimate are constructed using the delta method: if the estimate is asymptotically normal with a known standard error, the confidence interval for I is formed as the estimate plus or minus the corresponding normal quantile times that standard error;
- The attention map is constructed as the coefficients’ normalised contributions, wk = Xk²/Σj Xj², and elements with the largest wk are associated with time windows via the inverse transformation ϕk ↦ “time support”. For flow-level attribution, a joint decomposition over spatial indices (IP × time) is used: that is, a basis of the form ϕk,i(t) and the corresponding coefficients Xk,i, which provide a direct link to the studied signals’ sources. A numerical sketch of these quantities is given below.
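The sketch below computes the appendix quantities (projection coefficients, the χ²-type statistic and its non-centrality, the p-value and analytic threshold, the attention weights wk, and EWMA accumulation) on synthetic coefficients; the toy values of K, σ², Ak, and the EWMA factor are assumptions used only for illustration.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Assumed toy setup: K orthonormal basis coefficients X_k of the detrended signal,
# noise variance sigma2, and an "attack" subset with small shifts A_k.
K, sigma2 = 32, 1.0
A = np.zeros(K); A[:4] = 0.8                   # low-intensity shifts on 4 coefficients
X = A + rng.normal(0.0, np.sqrt(sigma2), K)    # observed coefficients

T = np.sum(X**2 / sigma2)                      # chi-square-type test statistic
lam = np.sum(A**2) / sigma2                    # non-centrality parameter under H1
p_value = chi2.sf(T, df=K)                     # p-value under the central chi2 (H0)
t_alpha = chi2.ppf(0.99, df=K)                 # analytic threshold for alpha = 0.01

w = X**2 / np.sum(X**2)                        # attention map: normalised contributions
top_k = np.argsort(w)[::-1][:4]                # coefficients to map back to time/flows

# EWMA accumulation of the statistic for low-and-slow behaviour (eta is assumed).
eta, T_ewma = 0.9, 0.0
for T_t in (0.6 * T, 0.8 * T, 1.0 * T, 1.1 * T):
    T_ewma = eta * T_ewma + (1.0 - eta) * T_t

print(p_value, T > t_alpha, top_k, round(T_ewma, 2))
```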
References
- Alashhab, A.A.; Zahid, M.S.M.; Azim, M.A.; Daha, M.Y.; Isyaku, B.; Ali, S. A Survey of Low Rate DDoS Detection Techniques Based on Machine Learning in Software-Defined Networks. Symmetry 2022, 14, 1563. [Google Scholar] [CrossRef]
- Adedeji, K.B.; Abu-Mahfouz, A.M.; Kurien, A.M. DDoS Attack and Detection Methods in Internet-Enabled Networks: Concept, Research Perspectives, and Challenges. J. Sens. Actuator Netw. 2023, 12, 51. [Google Scholar] [CrossRef]
- Dimolianis, M.; Pavlidis, A.; Maglaris, V. Signature-Based Traffic Classification and Mitigation for DDoS Attacks Using Programmable Network Data Planes. IEEE Access 2021, 9, 113061–113076. [Google Scholar] [CrossRef]
- Lohachab, A.; Karambir, B. Critical Analysis of DDoS—An Emerging Security Threat over IoT Networks. J. Commun. Inf. Netw. 2018, 3, 57–78. [Google Scholar] [CrossRef]
- Sayed, M.S.E.; Le-Khac, N.-A.; Azer, M.A.; Jurcut, A.D. A Flow-Based Anomaly Detection Approach With Feature Selection Method Against DDoS Attacks in SDNs. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 1862–1880. [Google Scholar] [CrossRef]
- Haseeb-ur-rehman, R.M.A.; Aman, A.H.M.; Hasan, M.K.; Ariffin, K.A.Z.; Namoun, A.; Tufail, A.; Kim, K.-H. High-Speed Network DDoS Attack Detection: A Survey. Sensors 2023, 23, 6850. [Google Scholar] [CrossRef] [PubMed]
- Avcı, İ.; Koca, M. Predicting DDoS Attacks Using Machine Learning Algorithms in Building Management Systems. Electronics 2023, 12, 4142. [Google Scholar] [CrossRef]
- Ye, J.; Wang, Z.; Yang, J.; Wang, C.; Zhang, C. An LDDoS Attack Detection Method Based on Behavioral Characteristics and Stacking Mechanism. IoT 2025, 6, 7. [Google Scholar] [CrossRef]
- Ali, M.N.; Imran, M.; din, M.S.U.; Kim, B.-S. Low Rate DDoS Detection Using Weighted Federated Learning in SDN Control Plane in IoT Network. Appl. Sci. 2023, 13, 1431. [Google Scholar] [CrossRef]
- Meng, F.; Yan, X.; Zhang, Y.; Yang, J.; Cao, A.; Liu, R.; Zhao, Y. Mitigating DDoS Attacks in LEO Satellite Networks Through Bottleneck Minimize Routing. Electronics 2025, 14, 2376. [Google Scholar] [CrossRef]
- Badotra, S.; Tanwar, S.; Bharany, S.; Rehman, A.U.; Eldin, E.T.; Ghamry, N.A.; Shafiq, M. A DDoS Vulnerability Analysis System against Distributed SDN Controllers in a Cloud Computing Environment. Electronics 2022, 11, 3120. [Google Scholar] [CrossRef]
- Zaman, A.; Khan, S.A.; Mohammad, N.; Ateya, A.A.; Ahmad, S.; ElAffendi, M.A. Distributed Denial of Service Attack Detection in Software-Defined Networks Using Decision Tree Algorithms. Future Internet 2025, 17, 136. [Google Scholar] [CrossRef]
- Hernandez, D.V.; Lai, Y.-K.; Ignatius, H.T.N. Real-Time DDoS Detection in High-Speed Networks: A Deep Learning Approach with Multivariate Time Series. Electronics 2025, 14, 2673. [Google Scholar] [CrossRef]
- Li, M.; Zheng, L.; Ma, X.; Li, S. Real-Time Monitoring Model of DDoS Attacks Using Distance Thresholds in Edge Cooperation Networks. J. Inf. Secur. Appl. 2025, 89, 103972. [Google Scholar] [CrossRef]
- Liu, Y.; Han, Y.; Chen, H.; Zhao, B.; Wang, X.; Liu, X. IGED: Towards Intelligent DDoS Detection Model Using Improved Generalized Entropy and DNN. Comput. Mater. Contin. 2024, 80, 1851–1866. [Google Scholar] [CrossRef]
- Pandey, N.; Mishra, P.K. Conditional Entropy-Based Hybrid DDoS Detection Model for IoT Networks. Comput. Secur. 2025, 150, 104199. [Google Scholar] [CrossRef]
- Pandey, N.; Mishra, P.K. Performance Analysis of Entropy Variation-Based Detection of DDoS Attacks in IoT. Internet Things 2023, 23, 100812. [Google Scholar] [CrossRef]
- Xu, K.; Li, Z.; Liang, N.; Kong, F.; Lei, S.; Wang, S.; Paul, A.; Wu, Z. Research on Multi-Layer Defense against DDoS Attacks in Intelligent Distribution Networks. Electronics 2024, 13, 3583. [Google Scholar] [CrossRef]
- Mutar, M.H.; El Fawal, A.H.; Nasser, A.; Mansour, A. Predicting the Impact of Distributed Denial of Service (DDoS) Attacks in Long-Term Evolution for Machine (LTE-M) Networks Using a Continuous-Time Markov Chain (CTMC) Model. Electronics 2024, 13, 4145. [Google Scholar] [CrossRef]
- Hajtmanek, R.; Kontšek, M.; Smieško, J.; Uramová, J. One-Parameter Statistical Methods to Recognize DDoS Attacks. Symmetry 2022, 14, 2388. [Google Scholar] [CrossRef]
- Jing, X.; Yan, Z.; Jiang, X.; Pedrycz, W. Network Traffic Fusion and Analysis against DDoS Flooding Attacks with a Novel Reversible Sketch. Inf. Fusion 2019, 51, 100–113. [Google Scholar] [CrossRef]
- Han, H.; Yan, Z.; Jing, X.; Pedrycz, W. Applications of Sketches in Network Traffic Measurement: A Survey. Inf. Fusion 2022, 82, 58–85. [Google Scholar] [CrossRef]
- Liu, X.; Ren, J.; He, H.; Wang, Q.; Song, C. Low-Rate DDoS Attacks Detection Method Using Data Compression and Behavior Divergence Measurement. Comput. Secur. 2021, 100, 102107. [Google Scholar] [CrossRef]
- Salopek, D.; Mikuc, M. Enhancing Mitigation of Volumetric DDoS Attacks: A Hybrid FPGA/Software Filtering Datapath. Sensors 2023, 23, 7636. [Google Scholar] [CrossRef]
- Chovanec, M.; Hasin, M.; Havrilla, M.; Chovancová, E. Detection of HTTP DDoS Attacks Using NFStream and TensorFlow. Appl. Sci. 2023, 13, 6671. [Google Scholar] [CrossRef]
- Tariq, U. Optimized Feature Selection for DDoS Attack Recognition and Mitigation in SD-VANETs. World Electr. Veh. J. 2024, 15, 395. [Google Scholar] [CrossRef]
- Yang, B.; Arshad, M.H.; Zhao, Q. Packet-Level and Flow-Level Network Intrusion Detection Based on Reinforcement Learning and Adversarial Training. Algorithms 2022, 15, 453. [Google Scholar] [CrossRef]
- Chen, S.-R.; Chen, S.-J.; Hsieh, W.-B. Enhancing Machine Learning-Based DDoS Detection Through Hyperparameter Optimization. Electronics 2025, 14, 3319. [Google Scholar] [CrossRef]
- Shieh, C.-S.; Nguyen, T.-T.; Chen, C.-Y.; Horng, M.-F. Detection of Unknown DDoS Attack Using Reconstruct Error and One-Class SVM Featuring Stochastic Gradient Descent. Mathematics 2022, 11, 108. [Google Scholar] [CrossRef]
- Ma, R.; Wang, Q.; Bu, X.; Chen, X. Real-Time Detection of DDoS Attacks Based on Random Forest in SDN. Appl. Sci. 2023, 13, 7872. [Google Scholar] [CrossRef]
- Rizvi, F.; Sharma, R.; Sharma, N.; Rakhra, M.; Aledaily, A.N.; Viriyasitavat, W.; Yadav, K.; Dhiman, G.; Kaur, A. An Evolutionary KNN Model for DDoS Assault Detection Using Genetic Algorithm Based Optimization. Multimed. Tools Appl. 2024, 83, 83005–83028. [Google Scholar] [CrossRef]
- Shieh, C.-S.; Nguyen, T.-T.; Horng, M.-F. Detection of Unknown DDoS Attack Using Convolutional Neural Networks Featuring Geometrical Metric. Mathematics 2023, 11, 2145. [Google Scholar] [CrossRef]
- Setitra, M.A.; Fan, M.; Agbley, B.L.Y.; Bensalem, Z.E.A. Optimized MLP-CNN Model to Enhance Detecting DDoS Attacks in SDN Environment. Network 2023, 3, 538–562. [Google Scholar] [CrossRef]
- Yousuf, O.; Mir, R.N. DDoS Attack Detection in Internet of Things Using Recurrent Neural Network. Comput. Electr. Eng. 2022, 101, 108034. [Google Scholar] [CrossRef]
- Polat, H.; Türkoğlu, M.; Polat, O.; Şengür, A. A Novel Approach for Accurate Detection of the DDoS Attacks in SDN-Based SCADA Systems Based on Deep Recurrent Neural Networks. Expert Syst. Appl. 2022, 197, 116748. [Google Scholar] [CrossRef]
- Li, X.; Li, R.; Liu, Y. HP-LSTM: Hawkes Process–LSTM-Based Detection of DDoS Attack for In-Vehicle Network. Future Internet 2024, 16, 185. [Google Scholar] [CrossRef]
- Vladov, S.; Vysotska, V.; Sokurenko, V.; Muzychuk, O.; Nazarkevych, M.; Lytvyn, V. Neural Network System for Predicting Anomalous Data in Applied Sensor Systems. Appl. Syst. Innov. 2024, 7, 88. [Google Scholar] [CrossRef]
- Wang, H.; Li, W. DDosTC: A Transformer-Based Network Attack Detection Hybrid Mechanism in SDN. Sensors 2021, 21, 5047. [Google Scholar] [CrossRef] [PubMed]
- Junior, E.P.F.; de Neira, A.B.; Borges, L.F.; Nogueira, M. Transformers Model for DDoS Attack Detection: A Survey. Comput. Netw. 2025, 270, 111433. [Google Scholar] [CrossRef]
- Mousa, A.K.; Abdullah, M.N. An Improved Deep Learning Model for DDoS Detection Based on Hybrid Stacked Autoencoder and Checkpoint Network. Future Internet 2023, 15, 278. [Google Scholar] [CrossRef]
- Ma, J.; Su, W. Collaborative DDoS Defense for SDN-Based AIoT with Autoencoder-Enhanced Federated Learning. Inf. Fusion 2025, 117, 102820. [Google Scholar] [CrossRef]
- Paolini, D.; Dini, P.; Soldaini, E.; Saponara, S. One-Class Anomaly Detection for Industrial Applications: A Comparative Survey and Experimental Study. Computers 2025, 14, 281. [Google Scholar] [CrossRef]
- Reed, A.; Dooley, L.; Mostefaoui, S.K. The Guardian Node Slow DoS Detection Model for Real-Time Application in IoT Networks. Sensors 2024, 24, 5581. [Google Scholar] [CrossRef]
- Sikora, M.; Fujdiak, R.; Kuchar, K.; Holasova, E.; Misurec, J. Generator of Slow Denial-of-Service Cyber Attacks. Sensors 2021, 21, 5473. [Google Scholar] [CrossRef]
- Muraleedharan, N.; Janet, B. A Deep Learning Based HTTP Slow DoS Classification Approach Using Flow Data. ICT Express 2021, 7, 210–214. [Google Scholar] [CrossRef]
- Ahmed, S.; Khan, Z.A.; Mohsin, S.M.; Latif, S.; Aslam, S.; Mujlid, H.; Adil, M.; Najam, Z. Effective and Efficient DDoS Attack Detection Using Deep Learning Algorithm, Multi-Layer Perceptron. Future Internet 2023, 15, 76. [Google Scholar] [CrossRef]
- Mansoor, A.; Anbar, M.; Bahashwan, A.; Alabsi, B.; Rihan, S. Deep Learning-Based Approach for Detecting DDoS Attack on Software-Defined Networking Controller. Systems 2023, 11, 296. [Google Scholar] [CrossRef]
- Aslam, N.; Srivastava, S.; Gore, M.M. DDoS SourceTracer: An Intelligent Application for DDoS Attack Mitigation in SDN. Comput. Electr. Eng. 2024, 117, 109282. [Google Scholar] [CrossRef]
- Alshdadi, A.A.; Almazroi, A.A.; Ayub, N.; Lytras, M.D.; Alsolami, E.; Alsubaei, F.S.; Alharbey, R. Federated Deep Learning for Scalable and Privacy-Preserving Distributed Denial-of-Service Attack Detection in Internet of Things Networks. Future Internet 2025, 17, 88. [Google Scholar] [CrossRef]
- Orosz, P.; Nagy, B.; Varga, P. Real-Time Detection and Mitigation Strategies Newly Appearing for DDoS Profiles. Future Internet 2025, 17, 400. [Google Scholar] [CrossRef]
- Ain, N.U.; Sardaraz, M.; Tahir, M.; Abo Elsoud, M.W.; Alourani, A. Securing IoT Networks Against DDoS Attacks: A Hybrid Deep Learning Approach. Sensors 2025, 25, 1346. [Google Scholar] [CrossRef]
- Wahab, S.A.; Sultana, S.; Tariq, N.; Mujahid, M.; Khan, J.A.; Mylonas, A. A Multi-Class Intrusion Detection System for DDoS Attacks in IoT Networks Using Deep Learning and Transformers. Sensors 2025, 25, 4845. [Google Scholar] [CrossRef]
- Alghazzawi, D.; Bamasag, O.; Ullah, H.; Asghar, M.Z. Efficient Detection of DDoS Attacks Using a Hybrid Deep Learning Model with Improved Feature Selection. Appl. Sci. 2021, 11, 11634. [Google Scholar] [CrossRef]
- Khedr, W.I.; Gouda, A.E.; Mohamed, E.R. P4-HLDMC: A Novel Framework for DDoS and ARP Attack Detection and Mitigation in SD-IoT Networks Using Machine Learning, Stateful P4, and Distributed Multi-Controller Architecture. Mathematics 2023, 11, 3552. [Google Scholar] [CrossRef]
- Smiesko, J.; Segec, P.; Kontsek, M. Machine Recognition of DDoS Attacks Using Statistical Parameters. Mathematics 2023, 12, 142. [Google Scholar] [CrossRef]
- Berti, P.; Pratelli, L.; Rigo, P. A Central Limit Theorem for Predictive Distributions. Mathematics 2021, 9, 3211. [Google Scholar] [CrossRef]
- Polat, H.; Polat, O.; Cetin, A. Detecting DDoS Attacks in Software-Defined Networks Through Feature Selection Methods and Machine Learning Models. Sustainability 2020, 12, 1035. [Google Scholar] [CrossRef]
- Nikolić, M.; Nikolić, D.; Stefanović, M.; Koprivica, S.; Stefanović, D. Mitigating Algorithmic Bias Through Probability Calibration: A Case Study on Lead Generation Data. Mathematics 2025, 13, 2183. [Google Scholar] [CrossRef]
- Li, M.; Zhou, H.; Qin, Y. Two-Stage Intelligent Model for Detecting Malicious DDoS Behavior. Sensors 2022, 22, 2532. [Google Scholar] [CrossRef] [PubMed]
- Hu, G.; Sun, M.; Zhang, C. A High-Accuracy Advanced Persistent Threat Detection Model: Integrating Convolutional Neural Networks with Kepler-Optimized Bidirectional Gated Recurrent Units. Electronics 2025, 14, 1772. [Google Scholar] [CrossRef]
- Vladov, S.; Scislo, L.; Sokurenko, V.; Muzychuk, O.; Vysotska, V.; Osadchy, S.; Sachenko, A. Neural Network Signal Integration from Thermogas-Dynamic Parameter Sensors for Helicopters Turboshaft Engines at Flight Operation Conditions. Sensors 2024, 24, 4246. [Google Scholar] [CrossRef]
- Vladov, S.; Sachenko, A.; Sokurenko, V.; Muzychuk, O.; Vysotska, V. Helicopters Turboshaft Engines Neural Network Modeling under Sensor Failure. J. Sens. Actuator Netw. 2024, 13, 66. [Google Scholar] [CrossRef]
- Vladov, S.; Shmelov, Y.; Yakovliev, R. Method for Forecasting of Helicopters Aircraft Engines Technical State in Flight Modes Using Neural Networks. CEUR Workshop Proc. 2022, 3171, 974–985. Available online: https://ceur-ws.org/Vol-3171/paper70.pdf (accessed on 18 August 2025).
- Yan, H.; Li, J.; Du, L.; Fang, B.; Jia, Y.; Gu, Z. Adversarial Hierarchical-Aware Edge Attention Learning Method for Network Intrusion Detection. Appl. Sci. 2025, 15, 7915. [Google Scholar] [CrossRef]
- Radivilova, T.; Kirichenko, L.; Alghawli, A.S.; Ageyev, D.; Mulesa, O.; Baranovskyi, O.; Ilkov, A.; Kulbachnyi, V.; Bondarenko, O. Statistical and Signature Analysis Methods of Intrusion Detection. In Lecture Notes on Data Engineering and Communications Technologies; Springer: Cham, Switzerland, 2022; Volume 115, pp. 115–131. [Google Scholar] [CrossRef]
- Mulesa, O.; Povkhan, I.; Radivilova, T.; Baranovskyi, O. Devising a Method for Constructing the Optimal Model of Time Series Forecasting Based on the Principles of Competition. East.-Eur. J. Enterp. 2021, 5, 113. [Google Scholar] [CrossRef]
- Vladov, S.; Shmelov, Y.; Yakovliev, R. Optimization of Helicopters Aircraft Engine Working Process Using Neural Networks Technologies. CEUR Workshop Proc. 2022, 3171, 1639–1656. Available online: https://ceur-ws.org/Vol-3171/paper117.pdf (accessed on 18 August 2025).
- Vladov, S.; Shmelov, Y.; Yakovliev, R. Methodology for Control of Helicopters Aircraft Engines Technical State in Flight Modes Using Neural Networks. CEUR Workshop Proc. 2022, 3137, 108–125. [Google Scholar] [CrossRef]
- Estupiñán Cuesta, E.P.; Martínez Quintero, J.C.; Avilés Palma, J.D. DDoS Attacks Detection in SDN Through Network Traffic Feature Selection and Machine Learning Models. Telecom 2025, 6, 69. [Google Scholar] [CrossRef]
- Han, W.; Xue, J.; Wang, Y.; Liu, Z.; Kong, Z. MalInsight: A Systematic Profiling Based Malware Detection Framework. J. Netw. Comput. Appl. 2019, 125, 236–250. [Google Scholar] [CrossRef]
- Lytvyn, V.; Dudyk, D.; Peleshchak, I.; Peleshchak, R.; Pukach, P. Influence of the Number of Neighbours on the Clustering Metric by Oscillatory Chaotic Neural Network with Dipole Synaptic Connections. CEUR Workshop Proc. 2024, 3664, 24–34. Available online: https://ceur-ws.org/Vol-3664/paper3.pdf (accessed on 23 August 2025).
- Vladov, S.; Shmelov, Y.; Yakovliev, R.; Petchenko, M.; Drozdova, S. Neural Network Method for Helicopters Turboshaft Engines Working Process Parameters Identification at Flight Modes. In Proceedings of the 2022 IEEE 4th International Conference on Modern Electrical and Energy System (MEES), Kremenchuk, Ukraine, 20–23 October 2022; pp. 604–609. [Google Scholar] [CrossRef]
- Bodyanskiy, Y.; Shafronenko, A.; Pliss, I. Clusterization of Vector and Matrix Data Arrays Using the Combined Evolutionary Method of Fish Schools. Syst. Res. Inf. Technol. 2022, 4, 79–87. [Google Scholar] [CrossRef]
- Ablamskyi, S.; Tchobo, D.L.R.; Romaniuk, V.; Šimić, G.; Ilchyshyn, N. Assessing the Responsibilities of the International Criminal Court in the Investigation of War Crimes in Ukraine. Novum Jus 2023, 17, 353–374. [Google Scholar] [CrossRef]
- Ablamskyi, S.; Nenia, O.; Drozd, V.; Havryliuk, L. Substantial Violation of Human Rights and Freedoms as a Prerequisite for Inadmissibility of Evidence. Justicia 2021, 26, 47–56. [Google Scholar] [CrossRef]
- Geche, F.; Batyuk, A.; Mulesa, O.; Voloshchuk, V. The Combined Time Series Forecasting Model. In Proceedings of the 2020 IEEE Third International Conference on Data Stream Mining & Processing (DSMP), Lviv, Ukraine, 21–25 August 2020; pp. 272–275. [Google Scholar] [CrossRef]
- Vladov, S.; Chyrun, L.; Muzychuk, E.; Vysotska, V.; Lytvyn, V.; Rekunenko, T.; Basko, A. Intelligent Method for Generating Criminal Community Influence Risk Parameters Using Neural Networks and Regional Economic Analysis. Algorithms 2025, 18, 523. [Google Scholar] [CrossRef]
| Method (Class of Solutions) | Brief Description | Key Disadvantage | 
|---|---|---|
| Threshold (statistical) [14,15] | Simple metrics for amount, duration, and rate limit | High false positive rate; Does not detect “slow” attacks. | 
| Entropy and statistics [16,17,18,19,20] | Address (port) entropy, IP/port distribution | Sensitive to window selection; Poorly adaptable. | 
| Sketch and multimetric structures (LDDM, etc.) [21,22,23,24] | Compact flow aggregation for scalability | Hash approximations reduce sensitivity to rare events. | 
| Signature (WAF) [25,26] | Rules (patterns) for known vectors (Slowloris, R.U.D.Y.) | Easily bypassed by new variants; Limited applicability. | 
| Flow-level analytics (NetFlow, IPFIX) [27,28] | Aggregated flow features for large-scale monitoring | Smoothing of temporal patterns; Subtle anomalies are lost. | 
| Classical machine learning (SVM, RF, etc.) [29,30,31] | Feature engineering with a classifier | Feature dependence; Imbalance and drift issues. | 
| Deep neural networks (CNN, RNN, Transformer) [32,33,34,35,36,37,38,39,40,41,42,43,44,45] | Spatio-temporal model, anomaly scoring | Big data, low interpretability, risk of overfitting. | 
| Hybrid, ensembles (Canopy, etc.) [46,47,48] | A combination of statistics with machine learning, deep learning, and mitigation | Deployment complexity; Resource-intensive; Explainability. | 
| Online (federated) [49,50] | Streaming learning, privacy, adaptation | Network or latent constraints; Model synchronisation. | 
| SDN (P4 solutions) [51,52,53,54] | Data-plane detection for rapid response | Data-plane logic constraints; Event evidence. | 
| Stage Number | Stage Name | Description | 
|---|---|---|
| 1 | Preprocessing | It is assumed that raw packet counters (flows) x(t) with a Δt sampling frequency are available for analysis, and that a basis (wavelet scales) and a set KA of “attack”-scale indices (low-frequency, long-scale) are selected. At this stage, trends (seasonality) are removed, for example, using a median filter with window wb or a low-pass (LP) filter. | 
| 2 | Basis decomposition | The projection coefficients Xk of the detrended signal onto the selected basis functions ϕk are calculated for the selected k. | 
| 3 | Noise variance estimation | The noise variance σ² is estimated robustly, for example, through the median absolute deviation (MAD) over the reference period. | 
| 4 | Statistical test | The test statistic T and the calibrated energy intensity are calculated; the variance entering these quantities can be taken from the noise estimate. T is then normalised and mapped into [0, 1] using the logistic function, giving the score S. The decision rule is as follows: H1 if T > tα or S > τ. | 
| 5 | Explanation (forensics) | The coefficients’ contributions are sorted in descending order and mapped back to the time intervals (flows) with the most significant contributions. PCAP slices of the corresponding time frames are also returned. | 
| 6 | Practical calibration and adaptation | The threshold tα is set using the empirical 1 − α quantile of the statistic T on a “clean” validation dataset, or analytically if normality holds. To adapt to concept drift, an exponential sliding update of the estimates is applied (a sketch of Stages 1–4 and 6 follows this table). | 
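A minimal Python sketch of Stages 1–4 and the Stage 6 threshold calibration is given below, assuming an orthonormal basis matrix `phi` and a window `x` sampled at 1 Hz; the logistic-mapping constants, the median-filter kernel length, and the function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import medfilt
from scipy.stats import chi2

def detect_window(x, phi, k_attack, sigma=None, alpha=0.01, s0=0.0, s_scale=1.0):
    """Sketch of Stages 1-4: detrend, project onto the selected basis functions,
    estimate noise robustly via MAD, form the chi2-type statistic and logistic score.
    `phi` is a (K, N) matrix of orthonormal basis functions."""
    r = x - medfilt(x, kernel_size=31)                 # Stage 1: baseline removal
    Xk = phi[k_attack] @ r                             # Stage 2: coefficients X_k
    if sigma is None:                                  # Stage 3: robust sigma via MAD
        sigma = 1.4826 * np.median(np.abs(r - np.median(r)))
    T = np.sum(Xk**2) / sigma**2                       # Stage 4: test statistic
    S = 1.0 / (1.0 + np.exp(-(T - s0) / s_scale))      # logistic mapping to [0, 1]
    t_alpha = chi2.ppf(1.0 - alpha, df=len(k_attack))  # analytic threshold
    return T, S, (T > t_alpha)

def empirical_threshold(T_clean, alpha=0.01):
    """Stage 6: empirical 1 - alpha quantile of T on a "clean" validation set."""
    return np.quantile(np.asarray(T_clean), 1.0 - alpha)
```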
| Stage Number | Stage Name | Description | 
|---|---|---|
| 1 | Data preprocessing | Baseline removal and channel normalisation of the input windows. | 
| 2 | Batch construction and augmentation | Time-jitter, source-mix, packet-padding; Create mini batches of windows of length N. | 
| 3 | Forward pass | The window passes through the 1D CNN and the transformer encoder, producing the embeddings Ztr. | 
| 4 | Heads forward | p = softmax(Wc · Pool(Ztr)); The autoencoder reconstruction score and the regressor level are computed analogously from the embeddings. | 
| 5 | Compute losses | Weighted sum of the focal classification, regression, autoencoder, attention-sparsity, and adversarial losses (weights λcls, λreg, λae, λatt, λadv). | 
| 6 | Backprop and update | Adam update step with learning rate η (see the training-step sketch after this table). | 
| 7 | Periodic calibration | Refit the Platt (or isotonic) calibrator on validation data. | 
| 8 | Validation and early stopping | Monitor AUC, FPR and TPR, calibration error (ECE or MCE); Rollback if no improvement. | 
| 9 | Online adaptation | Exponential sliding update of the normalisation statistics and EWMA accumulators; Periodic fine-tune on recent data with a small LR. | 
| 10 | Explainability export | Attribution maps and attention matrices Cn; Export top-k flows (and time windows) and PCAP slices. | 
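The sketch below illustrates one training step over the combined objective (Stages 4–6), reusing the `CombinedDetector` sketch from Section 3.3; the focal-loss formulation and the omission of the attention-sparsity and adversarial terms are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, labels, gamma=2.0, pos_weight=10.0):
    """Binary focal loss with class weighting (gamma and w_pos from the
    hyperparameter table); a standard formulation, assumed for illustration."""
    ce = F.cross_entropy(logits, labels, reduction="none",
                         weight=torch.tensor([1.0, pos_weight]))
    pt = torch.exp(-ce)                         # approximate class probability
    return ((1.0 - pt) ** gamma * ce).mean()

def training_step(model, opt, x, y_cls, y_level, lambdas=(1.0, 1.0, 0.5)):
    """One Adam step over L = l_cls*focal + l_reg*MSE + l_ae*AE; the
    attention-sparsity and adversarial terms are omitted for brevity."""
    l_cls, l_reg, l_ae = lambdas
    logits, level, ae_err = model(x)
    loss = (l_cls * focal_loss(logits, y_cls)
            + l_reg * F.mse_loss(level, y_level)
            + l_ae * ae_err.mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```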
| Hyperparameter | Designation and Meaning | Comment | 
|---|---|---|
| Window length (samples) | N = 600 (≈10 min at 1 Hz) | A window long enough to accumulate low-and-slow patterns with acceptable latency | 
| Sampling rate | 1 Hz (or adaptive) | Time limit for network telemetry, aggregation (or resampling) allowed | 
| CNN layers | Lcnn = 3 | Channels are [64, 128, 256], kernel sizes are [5, 5, 3], and the stride is 1. | 
| Transformer layers | Ltr = 4 | dmodel = 256, h = 8, dk = 32, FFN = 512 | 
| Autoencoder bottleneck dim | Ce = 64 | Compression and recovery balance for modelling “normal” | 
| Pooling (heads) | GlobalAvgPool or MLP heads | For classification and regression | 
| Batch size | 32 | Gradient stability with a moderate data amount | 
| Optimiser and LR | Adam, η = 10⁻⁴ | Refined selection for transformers (and CNNs) | 
| Epochs (Early stop) | 50…150 with early stopping | AUC validation (calibration error) | 
| Dropout | 0.1 | Prevention of overfitting | 
| Weight decay | 1 · 10⁻⁵ | L2 regularisation of parameters | 
| Loss weights | λcls = 1.0, λreg = 1.0, λae = 0.5, λatt = 0.01, λadv = 0.1 | Balanced classification, regression, autoencoding, sparsity attention, and robustness tasks | 
| Focal loss params | γ = 2.0, wpos = 10 | Combating strong imbalance (rare attacks) | 
| EWMA factors | ηE = 0.90, ηS = 0.80 | A long-term low-rate effects accumulation, smoothing | 
| Attention sparsity reg | ηatt = 1 · 10⁻² (L1) | Incentive for sparse, interpretable attention weights | 
| Calibration method | Platt (logistic) or isotonic | Fit on validation (lr 10⁻³) | 
| Data augmentation | time-jitter 10%, source-mix up to 5, packet-pad p = 0.3. | Increases the low-rate scenarios’ variability | 
| Matched filter (optional) | kernel length 31 s | In an a priori attack form presence, increases SNR | 
| Alarm thresholds (initial) | τS = 0.5, τp = 0.5 | Calibrated during validation (FPR/TPR trade-off) | 
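For reference, the table above can be collected into a single configuration object. The sketch below is a plain Python dictionary whose key names are illustrative; the values are taken from the table.

```python
# Training configuration mirroring the hyperparameter table above
# (key names are assumptions; values come from the table).
CONFIG = {
    "window_len": 600,            # N, ~10 min at 1 Hz
    "sampling_rate_hz": 1,
    "cnn": {"layers": 3, "channels": [64, 128, 256], "kernels": [5, 5, 3], "stride": 1},
    "transformer": {"layers": 4, "d_model": 256, "heads": 8, "d_k": 32, "ffn": 512},
    "ae_bottleneck": 64,
    "batch_size": 32,
    "optimizer": {"name": "adam", "lr": 1e-4, "weight_decay": 1e-5},
    "epochs": (50, 150),          # with early stopping on validation AUC / ECE
    "dropout": 0.1,
    "loss_weights": {"cls": 1.0, "reg": 1.0, "ae": 0.5, "att": 0.01, "adv": 0.1},
    "focal": {"gamma": 2.0, "w_pos": 10.0},
    "ewma": {"eta_E": 0.90, "eta_S": 0.80},
    "attention_l1": 1e-2,
    "calibration": "platt",       # or "isotonic"
    "augmentation": {"time_jitter": 0.10, "source_mix": 5, "packet_pad_p": 0.3},
    "matched_filter_len": 31,     # seconds, optional
    "thresholds": {"tau_S": 0.5, "tau_p": 0.5},
}
```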
| Stage | Name | Description | 
|---|---|---|
| 1 | Ingest and normalise | Network telemetry collection, baseline removal, channel normalisation. | 
| 2 | Windowing and multiscale | Formation of the required length windows and calculation of the wavelet (or STFT) representation. | 
| 3 | Local feature extraction | Passing through a 1D CNN, obtaining local embeddings. | 
| 4 | Long-range modelling and attention | Transformer-encoder → attention matrices (for attribution). | 
| 5 | Multi-head inference | Obtaining AEscore, pattack, and raw level (regressor). | 
| 6 | Statistical check and accumulation | Calculation of projection energies, χ2 statistics, and EWMA accumulation. | 
| 7 | Blend and calibrate | Mixing head outputs and Platt (or Isotonic) calibration → final S. | 
| 8 | Decision and thresholds | Alarm rule: S > τS ∨ pattack > τp ∨ χ2 > tα, take EWMA (or timeout) into account. | 
| 9 | Explain and export | Formation of attention (or IG) attribution; Export top-k flows (or PCAP), timeline, confidence. | 
| 10 | Mitigation and forensic workflow | Mitigation automation (rate limit, blackhole) and transfer to law enforcement agencies with evidence. | 
| 11 | Online adaptation | Normalisation updates, EWMA, periodic fine-tuning, recalibration data. | 
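Stages 7 and 8 of the pipeline above can be sketched as follows. Treating the blend-plus-Platt step as a logistic regression over the three raw head outputs is an assumption, as are the default threshold values in `alarm`.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class BlendCalibrator:
    """Sketch of Stage 7: blend the head outputs (AE score, p_attack, raw level)
    and Platt-calibrate them into the final score S."""
    def __init__(self):
        self.platt = LogisticRegression()

    def fit(self, ae_score, p_attack, level, y):            # fitted on validation data
        Z = np.column_stack([ae_score, p_attack, level])
        self.platt.fit(Z, y)
        return self

    def score(self, ae_score, p_attack, level):
        Z = np.column_stack([ae_score, p_attack, level])
        return self.platt.predict_proba(Z)[:, 1]            # calibrated S in [0, 1]

def alarm(S, p_attack, chi2_stat, tau_S=0.5, tau_p=0.5, t_alpha=50.0):
    """Stage 8 decision rule: S > tau_S or p_attack > tau_p or chi2 > t_alpha
    (default thresholds are placeholders; t_alpha comes from calibration)."""
    return (S > tau_S) | (p_attack > tau_p) | (chi2_stat > t_alpha)
```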
| Component | Formula | FLOPs | Parameters | 
|---|---|---|---|
| Conv1D layer 1 (k = 5, Cin = 12, Cout = 64, L = 600) | 2 · k · Cin · Cout · N | 8,640,000 | 38,464 | 
| Conv1D layer 2 (k = 5, 64 → 128) | – | 122,880,000 | 40,768 | 
| Conv1D layer 3 (k = 3, 128 → 256) | – | 294,912,000 | 98,304 | 
| Sum of CNN (3 layers) | – | 426,432,000 | 177,536 | 
| Transformer (1 layer): Q, K, and V projections, attention, output projection, and FFN | see (73)–(77) | 997,785,600 | 458,752 | 
| Transformer (4 layers) | 4 × one layer | 3,991,142,400 | 1,835,008 | 
| Autoencoder (dmodel → Ce → dmodel by position) | see (78) | 115,200,000 | 32,768 | 
| Heads (classifier with regressor and pooling) | – | 1024 | 516 | 
| Total (per window N = 600) | – | 4.2 · 10⁹ | 2.01 · 10⁶ | 
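As a rough cross-check of the table above, the sketch below implements the 2 · k · Cin · Cout · N convolution rule from the first row and a coarse per-layer transformer estimate (projections, attention, FFN). Exact figures depend on the accounting convention (stride, padding, bias, and whether multiplies and adds are counted separately), and the autoencoder and heads are omitted, so the totals are indicative only.

```python
def conv1d_flops(k, c_in, c_out, length):
    """Multiply-add count for a 1D convolution, 2*k*C_in*C_out*L; exact totals
    depend on the accounting convention and may differ from the table above."""
    return 2 * k * c_in * c_out * length

def transformer_layer_flops(length, d_model, ffn):
    """Coarse per-layer estimate: Q/K/V/output projections, attention scores and
    weighting, and the two FFN matmuls, each doubled for multiply + add."""
    proj = 2 * 4 * length * d_model * d_model
    attn = 2 * 2 * length * length * d_model
    ffn_cost = 2 * 2 * length * d_model * ffn
    return proj + attn + ffn_cost

total = (conv1d_flops(5, 12, 64, 600) + conv1d_flops(5, 64, 128, 600)
         + conv1d_flops(3, 128, 256, 600)
         + 4 * transformer_layer_flops(600, 256, 512))
print(f"{total:.2e}")   # order-of-magnitude check against the ~4.2e9 total above
```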
| Row | Time (mm:ss) | Packets_per_sec | Smoothed pps (10 s MA) | Region Label | 
|---|---|---|---|---|
| 1 | 07:30 | 131.2 | 129.6 | none | 
| … | … | … | … | … | 
| 2 | 07:50 | 134.5 | 130.8 | none | 
| … | … | … | … | … | 
| 3 | 07:59 | 299.8 | 212.4 | spike | 
| … | … | … | … | … | 
| 4 | 08:01 | 110.7 | 110.2 | none | 
| … | … | … | … | … | 
| 5 | 12:05 | 116.2 | 118.1 | low-and-slow | 
| … | … | … | … | … | 
| 6 | 12:20 | 122.9 | 125.7 | low-and-slow | 
| … | … | … | … | … | 
| 7 | 16:00 | 130.3 | 131.0 | low-and-slow | 
| … | … | … | … | … | 
| 8 | 19:30 | 135.0 | 133.8 | low-and-slow | 
| … | … | … | … | … | 
| 9 | 24:00 | 126.8 | 124.9 | low-and-slow | 
| … | … | … | … | … | 
| 10 | 29:59 | 71.5 | 72.1 | none | 
| … | … | … | … | … | 
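The “Smoothed pps (10 s MA)” column can be reproduced with a 10-sample rolling mean over the 1 Hz packets_per_sec series, as in the short pandas sketch below; the column names are assumptions.

```python
import pandas as pd

# 10-sample moving average at a 1 Hz sampling rate (column names are illustrative).
df = pd.DataFrame({"packets_per_sec": [131.2, 134.5, 299.8, 110.7, 116.2]})
df["smoothed_pps_10s"] = df["packets_per_sec"].rolling(window=10, min_periods=1).mean()
print(df)
```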
| Group | n | Mean (pps) | Std | Skew | Kurtosis_Excess | cv | Median | 
|---|---|---|---|---|---|---|---|
| pre_spike | 400 | 121.9149 | 8.9670 | –0.2339 | –0.3050 | 0.0736 | 122.8513 | 
| spike | 40 | 139.6939 | 52.2940 | 1.9388 | 2.4041 | 0.3743 | 118.3247 | 
| low_and_slow | 720 | 124.1262 | 9.2589 | 0.0642 | –0.5403 | 0.0746 | 124.2028 | 
| post | 360 | 119.5545 | 8.7099 | 0.1442 | –0.3985 | 0.0729 | 118.9396 | 
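The per-group summary above can be reproduced with standard pandas and SciPy calls. The sketch below assumes a data frame holding packets_per_sec values and a region_label column; both names are assumptions.

```python
import pandas as pd
from scipy import stats

def group_summary(df, value_col="packets_per_sec", group_col="region_label"):
    """n, mean, std, skew, excess kurtosis, coefficient of variation, and median
    per traffic region (bias-corrected moments, sample std with ddof=1)."""
    rows = {}
    for name, g in df.groupby(group_col):
        x = g[value_col].to_numpy()
        rows[name] = {
            "n": len(x),
            "mean": x.mean(),
            "std": x.std(ddof=1),
            "skew": stats.skew(x, bias=False),
            "kurtosis_excess": stats.kurtosis(x, fisher=True, bias=False),
            "cv": x.std(ddof=1) / x.mean(),
            "median": float(pd.Series(x).median()),
        }
    return pd.DataFrame(rows).T
```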
| k | Inertia (Sum of Squared Distances) | Silhouette (Mean) | 
|---|---|---|
| 2 | 256,060.4639 | 0.500747 | 
| 3 | 123,939.9667 | 0.514603 | 
| 4 | 90,705.5858 | 0.404405 | 
| 5 | 74,916.0963 | 0.368623 | 
| 6 | 62,611.2520 | 0.370713 | 
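The inertia and mean-silhouette sweep over k can be reproduced with scikit-learn as sketched below; the synthetic feature matrix X is only a placeholder for the window-level traffic features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def sweep_k(X, k_values=range(2, 7), seed=0):
    """Inertia (sum of squared distances) and mean silhouette for each k."""
    results = []
    for k in k_values:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        results.append((k, km.inertia_, silhouette_score(X, km.labels_)))
    return results

# Example with synthetic stand-in features:
X = np.random.default_rng(0).normal(size=(500, 4))
for k, inertia, sil in sweep_k(X):
    print(k, round(inertia, 1), round(sil, 3))
```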
| Model | ROC-AUC | PR-AUC | Calibration Error | Interpretability | 
|---|---|---|---|---|
| Developed combined neural network | 0.80 | 0.866 | 0.04 | High (AE with attention) | 
| LSTM-based Detector | 0.74 | 0.790 | 0.09 | Medium (saliency) | 
| CNN-only Classifier | 0.70 | 0.750 | 0.11 | Low | 
| Transformer-only Detector | 0.77 | 0.820 | 0.07 | Medium (attention) | 
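The comparison metrics above can be computed as sketched below. ROC-AUC and PR-AUC use scikit-learn; the calibration-error column is assumed to be an ECE-style binned estimate, and the bin count is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Simple binned ECE: weighted average gap between mean predicted probability
    and observed frequency per bin."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    idx = np.minimum((y_prob * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            ece += mask.mean() * abs(y_true[mask].mean() - y_prob[mask].mean())
    return ece

def evaluate(y_true, y_prob):
    return {"roc_auc": roc_auc_score(y_true, y_prob),
            "pr_auc": average_precision_score(y_true, y_prob),
            "ece": expected_calibration_error(y_true, y_prob)}
```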
| Model | Processing Time (Seconds) | Memory Used (MB) | CPU Load (%) | 
|---|---|---|---|
| Developed combined neural network | 35 | 115 | 70 | 
| LSTM-based Detector | 45 | 105 | 75 | 
| CNN-only Classifier | 40 | 90 | 65 | 
| Transformer-only Detector | 50 | 100 | 80 | 
| Metric | Value | 
|---|---|
| Peak packets | 290.2182 | 
| Mean packets | 125.2943 | 
| Max energy ratio | 1.2120 | 
| Max AE error | 134.3532 | 
| Max χ2 | 6097.4596 | 
| p-attack peak | 0.9976 | 
| Alarm any | True | 
| IP Address | Total Packets | Average Baseline pps | Packets in a Burst | Packets in Low-and-Slow | Contribution Rate | First Occurrence | Last Occurrence | Number of Abnormal Windows | 
|---|---|---|---|---|---|---|---|---|
| 192.0.2.11 | 9924 | 5.0 | 91 | 799 | 7.41% | 07:55 | 08:15 | 2 | 
| 192.0.2.9 | 9356 | 4.0 | 0 | 2133 | 6.99% | 12:00 | 24:00 | 1 | 
| 192.0.2.1 | 8172 | 4.0 | 117 | 816 | 6.11% | 07:55 | 08:15 | 2 | 
| 192.0.2.16 | 7356 | 3.0 | 3 | 1943 | 5.50% | 12:00 | 24:00 | 1 | 
| 198.51.100.8 | 7246 | 4.0 | 1 | 0 | 5.41% | 03:13 | 27:42 | 0 | 
| 198.51.100.2 | 6827 | 3.0 | 4 | 1418 | 5.10% | 12:00 | 24:00 | 1 | 
| 198.51.100.10 | 6536 | 3.0 | 4 | 1125 | 4.88% | 12:00 | 24:00 | 1 | 
| 192.0.2.8 | 6348 | 3.0 | 4 | 927 | 4.74% | 12:00 | 24:00 | 1 | 
| 192.0.2.5 | 6160 | 3.0 | 1 | 758 | 4.60% | 12:00 | 24:00 | 1 | 
| 198.51.100.6 | 5497 | 3.0 | 66 | 0 | 4.11% | 07:55 | 08:15 | 1 | 
| 198.51.100.3 | 5421 | 3.0 | 2 | 0 | 4.05% | 04:11 | 27:26 | 0 | 
| 192.0.2.7 | 5340 | 2.0 | 1 | 1706 | 3.99% | 12:00 | 24:00 | 1 | 
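A sketch of the per-IP forensic aggregation behind the table above is given below; the flow-record column names, the window labels, and the definition of the contribution rate as the IP's share of all observed packets are assumptions.

```python
import pandas as pd

def per_ip_report(flows: pd.DataFrame) -> pd.DataFrame:
    """Per-IP totals, burst and low-and-slow packet counts, first/last occurrence,
    and contribution rate. `flows` is assumed to have columns src_ip, timestamp,
    packets, and label in {'none', 'spike', 'low-and-slow'}."""
    total_packets = flows["packets"].sum()
    g = flows.groupby("src_ip")
    report = pd.DataFrame({
        "total_packets": g["packets"].sum(),
        "packets_in_burst": flows[flows["label"] == "spike"]
            .groupby("src_ip")["packets"].sum(),
        "packets_in_low_and_slow": flows[flows["label"] == "low-and-slow"]
            .groupby("src_ip")["packets"].sum(),
        "first_occurrence": g["timestamp"].min(),
        "last_occurrence": g["timestamp"].max(),
    })
    count_cols = ["packets_in_burst", "packets_in_low_and_slow"]
    report[count_cols] = report[count_cols].fillna(0).astype(int)
    report["contribution_rate"] = report["total_packets"] / total_packets
    return report.sort_values("total_packets", ascending=False)
```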
| Limitation | Evidence | Impact | Suggested Research and Mitigation | 
|---|---|---|---|
| Sensitivity to class imbalance (rare attacks yield few training examples) | PR-AUC and precision–recall behaviour | A drop in precision with increasing recall leads to many false positives in real-world operating conditions. | Development of realistic data augmentation, synthetic low-and-slow scenario generation, few-shot or transfer-learning and weak-label techniques | 
| Difficulty interpreting deep learning outputs (requires forensic explanation) | Attribution requires attention maps, Integrated Gradients (IG), or “Gradient × Input” | Limited evidential value when transferred to LEA without reliable artefacts | Deepen explainability methods (sparse attention, formal confidence intervals for contributions, attribution validation for synthesised cases) | 
| High computational and memory-intensive load (attention matrices, large FLOPs) | FLOPs and memory evaluation | Difficulty deploying in edge or real-time environments, high latency | Explore local (subquadratic) attention, down-sampling, knowledge distillation, and model compression for edge inference | 
| Energy ratio dependence on baseline or drift (drift or seasonality causes false alarms) | Energy ratio, EWMA explanations, and χ2-dynamics | Increased FPR during long drifts, requiring frequent recalibration | Automatic threshold adaptation (online calibration), robust baseline removal (adaptive filters, robust statistics), and change-point detection for threshold control | 
| Limited generalisation to unknown attack vectors (adversarial vulnerabilities) | Deep learning is vulnerable to adaptive modifications | Loss of detection when attackers change tactics | Adversarial training, domain randomisation, matched-filter layers for known shapes, continuous learning | 
| Loss of temporal fine-grained patterns during flow aggregation (flow-level “coarse” representation) | Flow-level aggregation “smooths” fine temporal detail | Low-rate attacks “dissolve” into aggregates, meaning they become undetectable. | Combining fine-grained packet features with aggregates, multi-scale windowing, and sketch structures with loss control | 
| Forensic limitations: chain of custody and exported PCAP volume | Requirements for signed hashes and WORM storage in the pipeline | Difficulty in the legal use of materials if procedures are not followed | Develop standardised evidence formats (signed metadata, provenance), automation of PGP signatures and WORM archiving | 
| Threshold logic and trade-off TPR/FPR require empirical calibration | The χ2 test, tα thresholds (Theorem 3), and the α and β choice | The need for manual adjustments to different networks and policies | Explore adaptive thresholding, cost-sensitive decision rules, and optimisation based on operational costs (cost of false alarm vs. miss) | 