A Novel Wavelet Transform and Deep Learning-Based Algorithm for Low-Latency Internet Traffic Classification
Abstract
1. Introduction
2. Related Work
- They rely on full payload data, which is inaccessible in encrypted traffic;
- They require complete flow captures, making them unsuitable for real-time use;
- They do not specifically target low-latency traffic, which has distinct statistical and temporal patterns.
3. Methodology
- Temporal Feature Extraction: This stage extracts time-domain metrics from raw network traffic, including throughput (total data volume per interval), slope (rate of packet change over time), downlink-to-uplink ratio (asymmetry in traffic direction), and moving averages (smoothed trends of throughput). These features capture dynamic traffic behavior, such as burstiness and periodicity, which are critical for distinguishing low-latency traffic (e.g., rapid bidirectional packet exchanges in video conferencing) from bulk transfers (e.g., FTP). Equations (2)–(6) formalize these metrics.
- Wavelet Transform (Frequency-Domain Enhancement): The temporal features are processed using a continuous wavelet transform (CWT) with a Ricker (Mexican hat) wavelet kernel. The CWT decomposes each feature into multi-scale frequency components, isolating high-frequency patterns (e.g., microbursts in low-latency traffic) and low-frequency trends (e.g., steady streaming flows). This stage transforms raw time-series data into wavelet coefficients, effectively separating structured “noise” (low-latency traffic) from the broader network “signal”.
- ANN Classifier (Dual-Domain Analysis): An MLP artificial neural network receives concatenated time-domain and wavelet-transformed features as input. The MLP comprises an input layer (temporal + wavelet features), two hidden layers with ReLU activation for nonlinear pattern recognition, and a softmax output layer for probabilistic classification.
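The wavelet stage of this pipeline can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: the Ricker kernel and a simple convolution-based CWT are written out with NumPy (SciPy's former `scipy.signal.cwt`/`ricker` pair would serve equally), and the toy throughput series with a microburst is a hypothetical input.

```python
import numpy as np

def ricker(points, a):
    # Ricker ("Mexican hat") wavelet sampled at `points` positions, width `a`
    A = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    t = np.arange(points) - (points - 1) / 2
    return A * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def cwt_ricker(signal, widths):
    # One row of coefficients per scale: convolve the signal with the scaled wavelet
    out = np.empty((len(widths), len(signal)))
    for i, a in enumerate(widths):
        n = min(10 * int(a), len(signal))
        out[i] = np.convolve(signal, ricker(n, a), mode="same")
    return out

# Toy throughput series with a short "microburst" (hypothetical values)
throughput = np.zeros(64)
throughput[30:34] = 5.0
coeffs = cwt_ricker(throughput, widths=[1, 2, 4, 8])
# Small scales respond strongly to the burst; concatenating the raw
# time-domain series with the coefficients forms the dual-domain input
features = np.concatenate([throughput, coeffs.ravel()])
```

The small-scale rows isolate the burst (high-frequency behavior) while large-scale rows track the slow trend, which is exactly the separation the classifier exploits.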
3.1. Data Collection
3.2. Introducing Continuous Wavelet Transform (CWT) with Ricker Wavelet
3.3. Data Preparation
3.3.1. Throughput
3.3.2. Moving Averages
3.3.3. Ratio
3.3.4. Slope
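Since Equations (2)–(6) are not reproduced here, the following sketch uses plausible textbook forms of the four features (per-interval throughput, windowed moving average, downlink-to-uplink ratio, finite-difference slope). The function name, window length, and interval are assumptions, not the paper's exact definitions.

```python
import numpy as np

def temporal_features(bytes_down, bytes_up, interval=1.0, window=5):
    """Assumed forms of the time-domain features; the paper's Equations
    (2)-(6) define the exact versions used for training."""
    down = np.asarray(bytes_down, dtype=float)
    up = np.asarray(bytes_up, dtype=float)
    throughput = (down + up) / interval                 # data volume per interval
    kernel = np.ones(window) / window
    moving_avg = np.convolve(throughput, kernel, mode="valid")  # smoothed trend
    ratio = down / np.maximum(up, 1e-9)                 # downlink-to-uplink asymmetry
    slope = np.diff(throughput) / interval              # rate of change over time
    return throughput, moving_avg, ratio, slope
```

For example, a linearly increasing byte count yields a constant slope of one and a moving average that lags the raw series by half the window.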
3.4. Artificial Neural Network
- Input Layer: The input layer consists of $N$ neurons, where $N$ is the number of features in the dataset. These features include the throughput $T$, moving average $A$, downlink-to-uplink ratio $R$, and slope $S$, together with their wavelet-transformed counterparts $T_w$, $A_w$, $R_w$, $S_w$. The input vector is represented as $X = [T, A, R, S, T_w, A_w, R_w, S_w]$.
- First Hidden Layer: The first hidden layer has $N_1$ neurons. Each neuron applies the Rectified Linear Unit (ReLU) activation function to its weighted sum of inputs. Mathematically, for each neuron $j$ in the first hidden layer, $h_j^{(1)} = \mathrm{ReLU}\left(\sum_{i=1}^{N} w_{ji}^{(1)} x_i + b_j^{(1)}\right)$, where $w_{ji}^{(1)}$ are the weights and $b_j^{(1)}$ is the bias.
- Second Hidden Layer: The second hidden layer consists of $N_2$ neurons. As in the first hidden layer, each neuron applies the ReLU activation function to its weighted sum of inputs. Mathematically, for each neuron $k$ in the second hidden layer, $h_k^{(2)} = \mathrm{ReLU}\left(\sum_{j=1}^{N_1} w_{kj}^{(2)} h_j^{(1)} + b_k^{(2)}\right)$.
- Output Layer: The output layer comprises $C$ neurons, where $C$ is the number of distinct traffic classes in the dataset. It uses the softmax activation function to produce class probabilities: for each class $i$, $P(y = i \mid X) = \frac{e^{z_i}}{\sum_{c=1}^{C} e^{z_c}}$, where $z_i$ is the weighted input to output neuron $i$. The class with the highest probability is selected as the predicted class. This architecture enables multi-class classification, making it suitable for low-latency network traffic identification.
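The forward pass of this MLP can be sketched directly from the layer descriptions. The layer widths (32 and 16) and the eight-feature input are illustrative assumptions; only the structure — two ReLU hidden layers feeding a softmax output over $C$ classes — follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def mlp_forward(x, params):
    W1, b1, W2, b2, W3, b3 = params
    h1 = relu(W1 @ x + b1)           # first hidden layer (ReLU)
    h2 = relu(W2 @ h1 + b2)          # second hidden layer (ReLU)
    return softmax(W3 @ h2 + b3)     # output layer (softmax over C classes)

# Illustrative sizes only: 8 inputs (4 temporal + 4 wavelet features),
# hidden widths 32 and 16 are assumptions, C = 3 classes
N, N1, N2, C = 8, 32, 16, 3
params = (rng.standard_normal((N1, N)), np.zeros(N1),
          rng.standard_normal((N2, N1)), np.zeros(N2),
          rng.standard_normal((C, N2)), np.zeros(C))
probs = mlp_forward(rng.standard_normal(N), params)
pred = int(np.argmax(probs))         # class with the highest probability
```

In practice the weights come from training (the paper uses TensorFlow [46]); the random initialization here only demonstrates the data flow.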
4. Experiment Setup
4.1. Dataset
- FTP + Video Streaming (A + B);
- FTP + Low-Latency (A + C);
- Video Streaming + Low-Latency (B + C);
- Repeated instances of the same traffic type (A + A, B + B, C + C).
- Three instances of the same traffic type (3A, 3B, 3C);
- Two instances of one type combined with one instance of another (2A + B, A + 2B, 2A + C, A + 2C, 2B + C, B + 2C);
- One instance of each traffic type (A + B + C);
- Multiple instances of mixed traffic types (2A + B + C, A + 2B + C, A + B + 2C).
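The YES/NO labels in the two scenario tables follow one rule, restated here as code (a summary of the tables, not part of the classification algorithm): a mix is labeled low-latency exactly when it contains at least one instance of type C.

```python
def is_low_latency(scenario: str) -> bool:
    # Labeling rule implied by the scenario tables: a traffic mix is
    # marked low-latency ("YES") iff it includes at least one instance
    # of type C (the low-latency traffic)
    return "C" in scenario

mixes = ["A + B", "A + C", "3A", "2A + B + C", "A + 2B"]
labels = {m: is_low_latency(m) for m in mixes}
```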
4.2. Evaluation Metrics
4.3. Hyperparameter Tuning
- Number of Hidden Layers: Determines the depth of the neural network. We tested configurations ranging from shallow (1 layer) to deeper networks (up to 5 layers). A 2-layer model offered a good balance between learning complexity and computational efficiency.
- Activation Function: Controls the nonlinear transformations applied at each neuron. We compared sigmoid, tanh, and ReLU. ReLU was selected for its faster convergence and ability to mitigate vanishing gradient issues in deep networks.
- Learning Rate: Dictates how quickly the model updates its weights during training. We tested values of 0.1, 0.01, and 0.001. A learning rate of 0.001 provided stable convergence without overshooting minima.
- Batch Size: Specifies the number of training samples used in one iteration of model updates. We tested 16, 32, 64, and 128. A batch size of 32 provided a good trade-off between computational cost and convergence speed.
- Number of Epochs: Refers to the number of times the entire training dataset is passed through the model. We evaluated ranges from 10 to 150. We selected 100 epochs as the model showed consistent convergence without signs of overfitting.
- Optimizer: Defines the algorithm used to adjust model weights. We chose Adam for its adaptive learning rate and proven effectiveness in deep learning tasks.
- Dropout Rate: A regularization parameter that randomly deactivates a fraction of neurons during training to prevent overfitting. We tested rates from 0 to 0.5. A dropout rate of 0 was ultimately selected, as the model did not exhibit overfitting during training.
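The tuning procedure amounts to a grid search over the ranges listed above. The sketch below enumerates the stated grids (the epoch range and the fixed Adam optimizer are omitted for brevity); `train_and_score` is a hypothetical stand-in for training the network and measuring validation accuracy.

```python
from itertools import product

# Search space as listed above; the tuning loop itself is a sketch
grid = {
    "hidden_layers": [1, 2, 3, 4, 5],
    "activation": ["sigmoid", "tanh", "relu"],
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [16, 32, 64, 128],
    "dropout": [0.0, 0.1, 0.2, 0.3, 0.4, 0.5],
}

def train_and_score(cfg):
    # Hypothetical: would train the MLP with cfg and return validation accuracy
    raise NotImplementedError

candidates = [dict(zip(grid, values)) for values in product(*grid.values())]
# best = max(candidates, key=train_and_score)
# Selected configuration per the table below: 2 layers, ReLU, 1e-3, batch 32, dropout 0
```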
5. Experimental Results and Analysis
- Diverse Traffic Types: We included a variety of traffic types such as FTP, video streaming, and low-latency traffic. This selection ensures that our model is tested against different patterns of Internet traffic, reflecting real-world scenarios.
- Balanced Dataset: The dataset used for training and testing the classification algorithm was designed to maintain a balanced representation of each traffic type, with approximately one hour’s worth of sampling per category. This balance is critical for avoiding bias in the model’s performance.
- Comparison with Established Methods: Our approach was compared with state-of-the-art classification methods, including k-NN, CNN, and LSTM-based models, as highlighted in various studies [4,39,40,50,51]. This comparison not only validates the robustness of our model but also situates our results within the context of existing research.
- Evaluation Metrics: We measured the model’s performance using key metrics such as accuracy, precision, recall, and the F1 score. These metrics are standard in the field of traffic classification and provide a comprehensive assessment of the model’s effectiveness.
- Confusion Matrix Analysis: The use of confusion matrices allowed us to visualize the classification performance across different traffic types, providing insights into the strengths and weaknesses of our model.
- Mixed Traffic Scenarios: We evaluated our model under both simple and complex traffic scenarios to understand its performance in real-world conditions where multiple types of traffic coexist. This evaluation is crucial for demonstrating the practical applicability of our approach.
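All four metrics, together with the confusion-matrix view, can be derived from a single confusion matrix. The sketch below is generic; the matrix values are invented for illustration, not taken from the paper's results.

```python
import numpy as np

def metrics_from_confusion(cm):
    """Per-class precision/recall/F1 and overall accuracy from a confusion
    matrix whose rows are true classes and columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)        # correct / predicted per class
    recall = tp / cm.sum(axis=1)           # correct / actual per class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Toy 3-class matrix (FTP, video streaming, low-latency); values illustrative
cm = [[95, 3, 2],
      [4, 93, 3],
      [1, 2, 97]]
acc, prec, rec, f1 = metrics_from_confusion(cm)
```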
6. Discussion and Limitations
7. Conclusions and Future Works
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Cisco. What Is Low Latency? Available online: https://www.cisco.com/c/en/us/solutions/data-center/data-center-networking/what-is-low-latency.html (accessed on 2 May 2024).
- ETSI. Service Requirements for the 5G System. Available online: https://www.etsi.org/deliver/etsi_ts/122200_122299/122261/16.14.00_60/ts_122261v161400p.pdf (accessed on 20 July 2025).
- International Telecommunication Union (ITU-T). One-Way Transmission Time; Recommendation G.114; ITU: Geneva, Switzerland, 2003. Available online: https://www.itu.int/rec/T-REC-G.114 (accessed on 20 July 2025).
- Enisoglu, R.; Rakocevic, V. Low-Latency Internet Traffic Identification using Machine Learning with Trend-based Features. In Proceedings of the 2023 International Wireless Communications and Mobile Computing (IWCMC), Marrakesh, Morocco, 19–23 June 2023; pp. 394–399. [Google Scholar]
- Middleton, S.E.; Modafferi, S. Scalable classification of QoS for real-time interactive applications from IP traffic measurements. Comput. Netw. 2016, 107, 121–132. [Google Scholar] [CrossRef]
- Hirchoren, G.A.; Porrez, N.; La Sala, B.; Buraczewski, I. Quality of service in networks with self-similar traffic. In Proceedings of the 2017 XVII Workshop on Information Processing and Control (RPIC), Mar del Plata, Argentina, 20–22 September 2017; pp. 1–5. [Google Scholar]
- Bentaleb, A.; Taani, B.; Begen, A.C.; Timmerer, C.; Zimmermann, R. A survey on bitrate adaptation schemes for streaming media over HTTP. IEEE Commun. Surv. Tutor. 2018, 21, 562–585. [Google Scholar] [CrossRef]
- Osadchiy, A.; Kamenev, A.; Saharov, V.; Chernyi, S. Signal processing algorithm based on discrete wavelet transform. Designs 2021, 5, 41. [Google Scholar] [CrossRef]
- Sun, G.; Zhang, R.; Liu, Z.; Wu, L.; Yu, Q.; Tan, X. Application of EMD combined with wavelet algorithm for filtering slag noise in steel cord conveyor belt. J. Phys. Conf. Ser. 2023, 2638, 012014. [Google Scholar] [CrossRef]
- Habeeb, I.Q.; Fadhil, T.Z.; Jurn, Y.N.; Habeeb, Z.Q.; Abdulkhudhur, H.N. An ensemble technique for speech recognition in noisy environments. Indones. J. Electr. Eng. Comput. Sci. 2020, 18, 835–842. [Google Scholar] [CrossRef]
- Cisco. Cisco Annual Internet Report (2018–2023) White Paper; Cisco: San Jose, CA, USA, 2020; Volume 10, pp. 1–35. [Google Scholar]
- Drajic, D.; Krco, S.; Tomic, I.; Popovic, M.; Zeljkovic, N.; Nikaein, N.; Svoboda, P. Impact of online games and M2M applications traffic on performance of HSPA radio access networks. In Proceedings of the 2012 Sixth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, Palermo, Italy, 4–6 July 2012; pp. 880–885. [Google Scholar]
- Sander, C.; Kunze, I.; Wehrle, K.; Rüth, J. Video conferencing and flow-rate fairness: A first look at Zoom and the impact of flow-queuing AQM. In Proceedings of the Passive and Active Measurement: 22nd International Conference, PAM 2021, Virtual Event, 29 March–1 April 2021; pp. 3–19. [Google Scholar]
- Rajadurai, S.; Alazab, M.; Kumar, N.; Gadekallu, T.R. Latency evaluation of SDFGs on heterogeneous processors using timed automata. IEEE Access 2020, 8, 140171–140180. [Google Scholar] [CrossRef]
- Finsterbusch, M.; Richter, C.; Rocha, E.; Muller, J.A.; Hanssgen, K. A survey of payload-based traffic classification approaches. IEEE Commun. Surv. Tutor. 2013, 16, 1135–1156. [Google Scholar] [CrossRef]
- Nguyen, T.T.; Armitage, G. A survey of techniques for internet traffic classification using machine learning. IEEE Commun. Surv. Tutor. 2008, 10, 56–76. [Google Scholar] [CrossRef]
- Salman, O.; Elhajj, I.H.; Kayssi, A.; Chehab, A. A review on machine learning–based approaches for Internet traffic classification. Ann. Telecommun. 2020, 75, 673–710. [Google Scholar] [CrossRef]
- Adje, E.A.; Houndji, V.R.; Dossou, M. Features analysis of internet traffic classification using interpretable machine learning models. IAES Int. J. Artif. Intell. 2022, 11, 1175. [Google Scholar] [CrossRef]
- Deri, L.; Sartiano, D. Monitoring IoT Encrypted Traffic with Deep Packet Inspection and Statistical Analysis. In Proceedings of the 2020 15th International Conference for Internet Technology and Secured Transactions (ICITST), London, UK, 8–10 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Manju, N.; Harish, B.; Nagadarshan, N. Multilayer Feedforward Neural Network for Internet Traffic Classification. Int. J. Interact. Multim. Artif. Intell. 2020, 6, 117–122. [Google Scholar] [CrossRef]
- Khandait, P.; Hubballi, N.; Mazumdar, B. Efficient Keyword Matching for Deep Packet Inspection based Network Traffic Classification. In Proceedings of the 2020 International Conference on COMmunication Systems & NETworkS (COMSNETS), Bengaluru, India, 7–11 January 2020; pp. 567–570. [Google Scholar] [CrossRef]
- Oliveira, T.P.; Barbar, J.S.; Soares, A.S. Multilayer perceptron and stacked autoencoder for Internet traffic prediction. In Proceedings of the Network and Parallel Computing: 11th IFIP WG 10.3 International Conference, NPC 2014, Ilan, Taiwan, 18–20 September 2014; pp. 61–71. [Google Scholar]
- Aswad, S.A.; Sonuç, E. Classification of VPN network traffic flow using time related features on Apache Spark. In Proceedings of the 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Istanbul, Turkey, 22–24 October 2020; pp. 1–8. [Google Scholar]
- Ishak, S.; Alecsandru, C. Optimizing traffic prediction performance of neural networks under various topological, input, and traffic condition settings. J. Transp. Eng. 2004, 130, 452–465. [Google Scholar] [CrossRef]
- Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Umar, A.M.; Linus, O.U.; Arshad, H.; Kazaure, A.A.; Gana, U.; Kiru, M.U. Comprehensive review of artificial neural network applications to pattern recognition. IEEE Access 2019, 7, 158820–158846. [Google Scholar] [CrossRef]
- Roy, S.; Shapira, T.; Shavitt, Y. Fast and lean encrypted Internet traffic classification. Comput. Commun. 2022, 186, 166–173. [Google Scholar] [CrossRef]
- Ertam, F.; Avcı, E. A new approach for internet traffic classification: GA-WK-ELM. Measurement 2017, 95, 135–142. [Google Scholar] [CrossRef]
- Salagean, M.; Firoiu, I. Anomaly detection of network traffic based on analytical discrete wavelet transform. In Proceedings of the 2010 8th International Conference on Communications, Bucharest, Romania, 10–12 June 2010; pp. 49–52. [Google Scholar]
- Gál, Z.; Terdik, G. Wavelet analysis of QoS based network traffic. In Proceedings of the 2011 6th IEEE International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 19–21 May 2011; pp. 275–280. [Google Scholar]
- Shi, H.; Li, H.; Zhang, D.; Cheng, C.; Wu, W. Efficient and robust feature extraction and selection for traffic classification. Comput. Netw. 2017, 119, 1–16. [Google Scholar] [CrossRef]
- Liu, Z.; Liu, Q. Balanced feature selection method for Internet traffic classification. IET Netw. 2012, 1, 74–83. [Google Scholar] [CrossRef]
- Zhang, H.; Lu, G.; Qassrawi, M.T.; Zhang, Y.; Yu, X. Feature selection for optimizing traffic classification. Comput. Commun. 2012, 35, 1457–1471. [Google Scholar] [CrossRef]
- Sun, G.; Chen, T.; Su, Y.; Li, C. Internet traffic classification based on incremental support vector machines. Mob. Networks Appl. 2018, 23, 789–796. [Google Scholar] [CrossRef]
- Tong, D.; Qu, Y.R.; Prasanna, V.K. Accelerating decision tree based traffic classification on FPGA and multicore platforms. IEEE Trans. Parallel Distrib. Syst. 2017, 28, 3046–3059. [Google Scholar] [CrossRef]
- Schmidt, B.; Al-Fuqaha, A.; Gupta, A.; Kountanis, D. Optimizing an artificial immune system algorithm in support of flow-Based internet traffic classification. Appl. Soft Comput. 2017, 54, 1–22. [Google Scholar] [CrossRef]
- Crotti, M.; Dusi, M.; Gringoli, F.; Salgarelli, L. Traffic classification through simple statistical fingerprinting. ACM SIGCOMM Comput. Commun. Rev. 2007, 37, 5–16. [Google Scholar] [CrossRef]
- Wang, X.; Parish, D.J. Optimised multi-stage tcp traffic classifier based on packet size distributions. In Proceedings of the 2010 Third International Conference on Communication Theory, Reliability, and Quality of Service, Athens, Greece, 13–19 June 2010; pp. 98–103. [Google Scholar]
- Qin, T.; Wang, L.; Liu, Z.; Guan, X. Robust application identification methods for P2P and VoIP traffic classification in backbone networks. Knowl. Based Syst. 2015, 82, 152–162. [Google Scholar] [CrossRef]
- Lotfollahi, M.; Jafari Siavoshani, M.; Shirali Hossein Zade, R.; Saberian, M. Deep packet: A novel approach for encrypted traffic classification using deep learning. Soft Comput. 2020, 24, 1999–2012. [Google Scholar] [CrossRef]
- Shapira, T.; Shavitt, Y. Flowpic: Encrypted internet traffic classification is as easy as image recognition. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Paris, France, 29 April–2 May 2019; pp. 680–687. [Google Scholar]
- Draper-Gil, G.; Lashkari, A.H.; Mamun, M.S.I.; Ghorbani, A.A. Characterization of encrypted and vpn traffic using time-related. In Proceedings of the 2nd International Conference on Information Systems Security and Privacy (ICISSP), Rome, Italy, 19–21 February 2016; pp. 407–414. [Google Scholar]
- Lashkari, A.H.; Gil, G.D.; Mamun, M.S.I.; Ghorbani, A.A. Characterization of tor traffic using time based features. In Proceedings of the International Conference on Information Systems Security and Privacy, SciTePress, Porto, Portugal, 19–21 February 2017; Volume 2, pp. 253–262. [Google Scholar]
- Zou, Y.; Zhu, J.; Wang, X.; Hanzo, L. A survey on wireless security: Technical challenges, recent advances, and future trends. Proc. IEEE 2016, 104, 1727–1765. [Google Scholar] [CrossRef]
- Kontogeorgaki, S.; Sánchez-García, R.J.; Ewing, R.M.; Zygalakis, K.C.; MacArthur, B.D. Noise-processing by signaling networks. Sci. Rep. 2017, 7, 532. [Google Scholar] [CrossRef] [PubMed]
- Hammedi, R. A Deep Learning Based Traffic Classification in Software Defined Networking. In Proceedings of the 14th IADIS International Conference Information Systems, Virtual, 3–5 March 2021. [Google Scholar] [CrossRef]
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Available online: https://www.tensorflow.org/?hl=tr. (accessed on 25 October 2023).
- White, G.; Sundaresan, K.; Briscoe, B. Low latency docsis: Technology overview. Res. Dev. 2019, 1, 11–13. [Google Scholar]
- Yang, Y.; Theisen, R.; Hodgkinson, L.; Gonzalez, J.E.; Ramchandran, K.; Martin, C.H.; Mahoney, M.W. Test accuracy vs. generalization gap: Model selection in nlp without accessing training or testing data. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 3011–3021. [Google Scholar]
- Du, X.; Xu, H.; Zhu, F. Understanding the effect of hyperparameter optimization on machine learning models for structure design problems. Comput. Aided Des. 2021, 135, 103013. [Google Scholar] [CrossRef]
- Wang, W.; Zhu, M.; Wang, J.; Zeng, X.; Yang, Z. End-to-end encrypted traffic classification with one-dimensional convolution neural networks. In Proceedings of the 2017 IEEE International Conference on Intelligence and Security Informatics (ISI), Beijing, China, 22–24 July 2017; pp. 43–48. [Google Scholar]
- Chang, L.H.; Lee, T.H.; Chu, H.C.; Su, C.W. Application-based online traffic classification with deep learning models on SDN networks. Adv. Technol. Innov. 2020, 5, 216–229. [Google Scholar] [CrossRef]
Study | Traffic Type | Low-Latency | Granularity | Algorithms | Input | Dataset | Accuracy (%)
---|---|---|---|---|---|---|---
[20] | Encrypted | Yes | Application | MLP | Raw Traffic | Cambridge | 99.08
[21] | Encrypted | No | Protocol | Aho-Corasick | Raw Traffic | Private/Digital Corpora | 98
[22] | Encrypted | No | Application | MLP, SAE | Raw Traffic | Private | -
[23] | VPN/Non-VPN | Yes | Application | ANN | Packet Feature | Darknet2020 | 97
[24] | VPN/Non-VPN | Yes | Application/Service Group | ANN | Raw Traffic | ISCX | 98
[27] | VPN/Non-VPN | Yes | Application | ML | Raw Traffic | ISCX | 98.6
[39] | VPN/Non-VPN | Yes | Application/Service Group | CNN, SAE | Raw Traffic | ISCX | 98
[40] | VPN/Non-VPN | Yes | Application | CNN | Flow Features | ISCX | 99.7
Traffic Scenario | Low-Latency |
---|---|
A + B | NO |
A + C | YES |
B + C | YES |
A + A | NO |
B + B | NO |
C + C | YES |
Traffic Scenario | Low-Latency |
---|---|
3A | NO |
3B | NO |
3C | YES |
2A + B | NO |
A + 2B | NO |
2A + C | YES |
A + 2C | YES |
2B + C | YES |
B + 2C | YES |
A + B + C | YES |
2A + B + C | YES |
A + 2B + C | YES |
A + B + 2C | YES |
Hyperparameters | Range | Selection |
---|---|---|
Number of hidden layers | [1, 2, 3, 4, 5] | 2 |
Activation Function | [sigmoid, tanh, ReLU] | ReLU |
Learning Rate | [0.1, 0.01, 0.001] | 0.001 |
Batch Size | [16, 32, 64, 128] | 32 |
Number of Epochs | [10, …, 50, …, 150] | 100 |
Optimizer | [Adam] | Adam |
Dropout Rate | [0, 0.1, 0.2, 0.3, 0.4, 0.5] | 0 |
Mixed Traffic Scenarios (Basic)

Traffic Scenario | Low-Latency | Approx. Accuracy (%)
---|---|---
A + B | NO | 89.7 |
A + C | YES | 92.8 |
B + C | YES | 94.2 |
A + A | NO | 88.3 |
B + B | NO | 90.6 |
C + C | YES | 96.5 |
Mixed Traffic Scenarios (Complex)

No | Scenario | Low-Latency | Approx. Acc (%) | Approx. Acc (%) with WT
---|---|---|---|---
1 | 3A | NO | 82.9 | 86.8 |
2 | 3B | NO | 83.6 | 88.2 |
3 | 3C | YES | 88.2 | 93.2 |
4 | 2A + B | NO | 77.2 | 82.1 |
5 | A + 2B | NO | 71.1 | 77.4 |
6 | 2A + C | YES | 76.5 | 83.2 |
7 | A + 2C | YES | 79.6 | 83.8 |
8 | 2B + C | YES | 77.0 | 84.1 |
9 | B + 2C | YES | 80.7 | 84.4 |
10 | A + B + C | YES | 72.9 | 78.2 |
11 | 2A + B + C | YES | 68.8 | 74.2 |
12 | A + 2B + C | YES | 69.7 | 75.3 |
13 | A + B + 2C | YES | 72.5 | 79.2 |
Traffic Type | Duration | Total Samples |
---|---|---|
FTP | ∼1 h | 14,679 |
Video Streaming | ∼1 h | 14,287 |
Low-Latency | ∼1 h | 14,510 |
Traffic Type | Scaling Method | Approx. Classification Accuracy (%)
---|---|---
FTP | SS | 93.41
FTP | SS + Wavelet | 99.09
Video Streaming | SS | 92.76
Video Streaming | SS + Wavelet | 99.30
Low-Latency | SS | 92.12
Low-Latency | SS + Wavelet | 99.56
Approx. Classification Accuracy (%) per Traffic Type

Paper | Algorithm | FTP | Video | Low-Latency
---|---|---|---|---
Enisoglu et al. [4] | k-NN | 97.5 | 97.9 | 98.2
Wang et al. [50] | CNN | 94.5 | 96.5 | 84.5
Deep Packet [39] | CNN | 98.0 | 98.0 | 98.0
Chang et al. [51] | ANN | NA | 59.0 | 92.0
FlowPic [40] | LSTM | 98.8 | 99.9 | 99.6
This Paper | ANN | 99.1 | 99.3 | 99.6
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Enisoglu, R.; Rakocevic, V. A Novel Wavelet Transform and Deep Learning-Based Algorithm for Low-Latency Internet Traffic Classification. Algorithms 2025, 18, 457. https://doi.org/10.3390/a18080457