Search Results (153)

Search Parameters:
Keywords = IP traffic

18 pages, 4863 KiB  
Article
Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection
by José Ramón Trillo, Felipe González-López, Juan Antonio Morente-Molinera, Roberto Magán-Carrión and Pablo García-Sánchez
Electronics 2025, 14(15), 3073; https://doi.org/10.3390/electronics14153073 - 31 Jul 2025
Abstract
As anonymity-enabling technologies such as VPNs and proxies become increasingly exploited for malicious purposes, detecting traffic associated with such services is a critical first step in anticipating potential cyber threats. This study analyses a network traffic dataset focused on anonymised IP addresses—not direct attacks—to evaluate and compare explainable, interpretable, and opaque machine learning models. Through advanced preprocessing and feature engineering, we examine the trade-off between model performance and transparency in the early detection of suspicious connections. We evaluate explainable ML-based models such as k-nearest neighbours, fuzzy algorithms, decision trees, and random forests; interpretable models such as naïve Bayes and support vector machines; and non-interpretable algorithms such as neural networks. Results show that neural networks achieve the highest performance, with a macro F1-score of 0.8786, while explainable models such as HFER offer strong performance (macro F1-score = 0.6106) with greater interpretability. The choice of algorithm therefore depends on project-specific needs: neural networks excel in accuracy, whereas explainable algorithms are preferred where resource efficiency and transparency matter most. This work underscores the importance of aligning cybersecurity strategies with operational requirements, providing insights into balancing performance with interpretability. Full article
(This article belongs to the Special Issue Network Security and Cryptography Applications)
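
As a small aside on the headline metric above, the sketch below shows how a macro F1-score such as the reported 0.8786 is computed: per-class F1 values are averaged with equal weight per class, so minority classes count as much as majority ones. The label vectors are hypothetical and scikit-learn is assumed to be available.

# Hedged sketch: how a macro F1-score such as the 0.8786 reported above is computed.
# The label vectors are made up for illustration; only the metric is the point.
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]   # hypothetical ground-truth classes
y_pred = [0, 0, 1, 1, 1, 2, 2, 2, 0, 2]   # hypothetical model predictions

# Per-class F1, then an unweighted mean across classes ("macro" averaging),
# so small classes count as much as large ones.
per_class = f1_score(y_true, y_pred, average=None)
macro = f1_score(y_true, y_pred, average="macro")
print(per_class, macro)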

21 pages, 5080 KiB  
Article
Sustainable Dynamic Scheduling Optimization of Shared Batteries in Urban Electric Bicycles: An Integer Programming Approach
by Zongfeng Zou, Xin Yan, Pupu Liu, Weihao Yang and Chao Zhang
Sustainability 2025, 17(10), 4379; https://doi.org/10.3390/su17104379 - 12 May 2025
Viewed by 472
Abstract
With the proliferation of electric bicycle battery swapping models, spatial supply-demand imbalances of battery resources across swapping stations have become increasingly prominent. Existing studies predominantly focus on location optimization but struggle to address dynamic operational challenges in battery allocation efficiency. This paper proposes an integer programming (IP)-based dynamic scheduling optimization method for shared batteries, aiming to minimize transportation costs and balance battery distribution under multi-constraint conditions. A resource allocation model is constructed and solved via an interior-point method (IPM) combined with a branch-and-bound (B&B) strategy, optimizing the dispatch paths and quantities of fully charged batteries among stations. This study contributes to urban sustainability by enhancing resource utilization efficiency, reducing redundant production, and supporting low-carbon mobility infrastructure. Using the operational data from 729 battery swapping stations in Shanghai, the spatiotemporal heterogeneity of rider demand is analyzed to validate the model’s effectiveness. Results reveal that daily swapping demand in core commercial areas is 3–10 times higher than in peripheral regions. The optimal scheduling network exhibits a ‘centralized radial’ structure, with nearly 50% of batteries dispatched from low-demand peripheral stations to high-demand central zones, significantly reducing transportation costs and resource redundancy. This study shows that the proposed model effectively mitigates battery supply-demand mismatches and enhances scheduling efficiency. Future research may incorporate real-time traffic data to refine cost functions and introduce temporal factors to improve model adaptability. Full article
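
To make the integer-programming structure concrete, here is a minimal dispatch sketch under assumed data: surplus and deficit stations, a per-battery transport cost, and integer shipment variables, solved with PuLP (assumed available). It captures only the core transportation structure; the paper's model, constraints, and interior-point plus branch-and-bound solution method are richer.

# Hedged sketch of a battery-dispatch integer program (toy data, PuLP solver assumed).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

surplus = {"S1": 8, "S2": 5}            # hypothetical stations with spare charged batteries
deficit = {"D1": 6, "D2": 4, "D3": 3}   # hypothetical stations short of charged batteries
cost = {("S1", "D1"): 2, ("S1", "D2"): 4, ("S1", "D3"): 5,
        ("S2", "D1"): 3, ("S2", "D2"): 1, ("S2", "D3"): 2}   # transport cost per battery

prob = LpProblem("battery_dispatch", LpMinimize)
x = {(i, j): LpVariable(f"x_{i}_{j}", lowBound=0, cat="Integer")
     for i in surplus for j in deficit}

prob += lpSum(cost[i, j] * x[i, j] for i in surplus for j in deficit)   # total transport cost
for i in surplus:                                                       # cannot ship more than the surplus
    prob += lpSum(x[i, j] for j in deficit) <= surplus[i]
for j in deficit:                                                       # every deficit must be covered
    prob += lpSum(x[i, j] for i in surplus) >= deficit[j]

prob.solve()
print({k: int(value(v)) for k, v in x.items() if value(v)})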

22 pages, 4539 KiB  
Article
Resource-Efficient Design and Implementation of Real-Time Parking Monitoring System with Edge Device
by Jungyoon Kim, Incheol Jeong, Jungil Jung and Jinsoo Cho
Sensors 2025, 25(7), 2181; https://doi.org/10.3390/s25072181 - 29 Mar 2025
Viewed by 802
Abstract
Parking management systems play a crucial role in addressing parking shortages and operational challenges; however, high initial costs and infrastructure requirements often hinder their implementation. Edge computing offers a promising solution by reducing latency and network traffic, thus optimizing operational costs. Nonetheless, the limited computational resources of edge devices remain a significant challenge. This study developed a real-time vehicle occupancy detection system utilizing SSD-MobileNetv2 on edge devices to process video streams from multiple IP cameras. The system incorporates a dual-trigger mechanism, combining periodic triggers and parking space mask triggers, to optimize computational efficiency and resource usage while maintaining high accuracy and reliability. Experimental results demonstrated that the parking space mask trigger significantly reduced unnecessary AI model executions compared to periodic triggers, while the dual-trigger mechanism ensured consistent updates even under unstable network conditions. The SSD-MobileNetv2 model achieved a frame processing time of 0.32 s and maintained robust detection performance with an F1-score of 0.9848 during a four-month field validation. These findings validate the suitability of the system for real-time parking management in resource-constrained environments. Thus, the proposed smart parking system offers an economical, viable, and practical solution that can significantly contribute to developing smart cities. Full article
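
A rough sketch of the dual-trigger idea, assuming a periodic timer plus a pixel-change test restricted to the parking-space mask; the class, threshold values, and change measure are illustrative inventions, not the authors' implementation.

# Hedged sketch of the dual-trigger idea: run the (expensive) detector only when
# a periodic timer fires or when the masked parking regions change noticeably.
import time
import numpy as np

class DualTrigger:
    def __init__(self, period_s=60.0, change_threshold=0.08):
        self.period_s = period_s                  # periodic trigger (keeps state fresh)
        self.change_threshold = change_threshold  # mask-trigger sensitivity (assumed value)
        self.last_run = 0.0
        self.last_masked = None

    def should_run(self, frame_gray, space_mask):
        now = time.monotonic()
        masked = np.where(space_mask, frame_gray, 0).astype(np.float32)
        periodic = (now - self.last_run) >= self.period_s
        changed = False
        if self.last_masked is not None:
            # mean absolute change over the frame, with non-mask pixels zeroed out
            diff = np.abs(masked - self.last_masked).mean() / 255.0
            changed = diff > self.change_threshold
        self.last_masked = masked
        if periodic or changed:
            self.last_run = now
            return True            # caller would now invoke the detector, e.g. SSD-MobileNetv2
        return False

trigger = DualTrigger(period_s=60.0)
frame = np.zeros((480, 640), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool); mask[100:200, 150:300] = True
print(trigger.should_run(frame, mask))   # True on the first call (periodic timer has never fired)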

18 pages, 5999 KiB  
Article
Simulation and Modelling of C+L+S Multiband Optical Transmission for the OCATA Time Domain Digital Twin
by Prasunika Khare, Nelson Costa, Marc Ruiz, Antonio Napoli, Jaume Comellas, Joao Pedro and Luis Velasco
Sensors 2025, 25(6), 1948; https://doi.org/10.3390/s25061948 - 20 Mar 2025
Viewed by 419
Abstract
C+L+S multiband (MB) optical transmission has the potential to increase the capacity of optical transport networks, and thus, it is a possible solution to cope with the traffic increase expected in the years to come. However, the introduction of MB optical technology must be accompanied by tools that support network planning and operation. In particular, quality of transmission (QoT) estimation is needed for provisioning optical MB connections. In this paper, we concentrate on modelling MB optical transmission to provide fast and accurate QoT estimation and propose machine learning (ML) approaches based on neural networks, which can be easily integrated into an optical layer digital twin (DT) solution. We start by considering approaches that can be used for accurate signal propagation modelling. Even though solutions such as the split-step Fourier method (SSFM) for solving the nonlinear Schrödinger equation (NLSE) have limited application for QoT estimation during provisioning because of their very high complexity and time consumption, they could be used to generate datasets for ML model creation. However, even that can be hard to carry out on a fully loaded MB system with hundreds of channels. In addition, in MB optical transmission, interchannel stimulated Raman scattering (ISRS) becomes a major effect, which adds more complexity. In view of that, the fourth-order Runge–Kutta in the interaction picture (RK4IP) method, complemented with an adaptive step size algorithm to further reduce the computation time, is evaluated as an alternative to reduce time complexity. We show that RK4IP provided an accuracy comparable to that of the SSFM with reduced computation time, which enables its application for MB optical transmission simulation. Once datasets were generated using the adaptive step size RK4IP method, two ML modelling approaches were considered for integration into the OCATA DT, where models predict optical signal propagation in the time domain. Being able to predict the optical signal in the time domain, as it will be received after propagation, opens opportunities for automating network operation, including connection provisioning and failure management. In this paper, we focus on comparing the proposed ML modelling approaches in terms of the models’ general and QoT estimation accuracy. Full article
(This article belongs to the Section Communications)
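
Because the abstract contrasts SSFM and RK4IP solvers of the NLSE, a bare-bones symmetric split-step Fourier step is sketched below. It assumes a single channel, one common NLSE sign convention, and no loss, Raman, or ISRS terms, so it only hints at the kind of propagation model the paper uses to generate training data.

# Hedged sketch: one symmetric split-step Fourier (SSFM) step for the scalar NLSE
#   dA/dz = -i*(beta2/2)*d2A/dT2 + i*gamma*|A|^2*A   (one common sign convention).
# Single channel, no loss, no Raman/ISRS terms; purely illustrative toy units.
import numpy as np

def ssfm_step(A, dz, dt, beta2, gamma):
    n = A.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)              # angular-frequency grid
    half_linear = np.exp(0.5j * beta2 * omega**2 * dz / 2)   # dispersion over half a step
    A = np.fft.ifft(np.fft.fft(A) * half_linear)             # D/2
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)           # full nonlinear step N
    A = np.fft.ifft(np.fft.fft(A) * half_linear)             # D/2
    return A

# toy usage: a Gaussian pulse propagated over a few short steps
t = np.linspace(-10, 10, 1024)
A = np.exp(-t**2)
for _ in range(100):
    A = ssfm_step(A, dz=1e-3, dt=t[1] - t[0], beta2=-20e-3, gamma=1.3)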

30 pages, 2300 KiB  
Article
Lossless and High-Throughput Congestion Control in Satellite-Based Cloud Platforms
by Wenlan Diao, Jianping An, Tong Li, Yu Zhang and Zhoujie Liu
Electronics 2025, 14(6), 1206; https://doi.org/10.3390/electronics14061206 - 19 Mar 2025
Viewed by 489
Abstract
Low Earth Orbit (LEO) satellite networks are promising for satellite-based cloud platforms. Due to frequent link switching and long transmission distances in LEO satellite networks, applying the TCP/IP architecture introduces challenges such as packet loss and significant transmission delays. These issues can trigger excessive retransmissions, leading to link congestion and increased data acquisition delay. Deploying Named Data Networking (NDN) with connectionless communication and link-switching tolerance can help address these problems. However, the existing congestion control methods in NDN lack support for congestion avoidance, lossless forwarding, and tiered traffic scheduling, which are crucial for achieving low-delay operations in satellite-based cloud platforms. In this paper, we propose a Congestion Control method with Lossless Forwarding (CCLF). Addressing the time-varying nature of satellite networks, CCLF implements zero packet loss forwarding by monitoring output queues, aggregating packets, and prioritizing packet scheduling. This approach overcomes traditional end-to-end bottleneck bandwidth limitations, enhances network throughput, and achieves low-delay forwarding for critical Data packets. Compared with the Practical Congestion Control Scheme (PCON), the CCLF method achieves lossless forwarding at the network layer, reduces the average flow completion time by up to 41%, and increases bandwidth utilization by up to 57%. Full article
(This article belongs to the Section Networks)

20 pages, 2732 KiB  
Article
Throughput of Buffer with Dependent Service Times
by Andrzej Chydzinski
Appl. Syst. Innov. 2025, 8(2), 34; https://doi.org/10.3390/asi8020034 - 7 Mar 2025
Viewed by 640
Abstract
We study the throughput and losses of a buffer with stochastically dependent service times. Such dependence occurs not only in packet buffers within TCP/IP networks but also in many other queuing systems. We conduct a comprehensive, time-dependent analysis, which includes deriving formulae for the count of packets processed and lost over an arbitrary period, the temporary intensity of output traffic, the temporary intensity of packet losses, buffer throughput, and loss probability. The model considered enables mimicking any packet interarrival time distribution, service time distribution, and correlation between service times. The analytical findings are accompanied by numerical computations that demonstrate the influence of various factors on buffer throughput and losses. These results are also verified through simulations. Full article
(This article belongs to the Section Applied Mathematics)
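
The quantities analysed above can also be estimated by simulation; the following sketch simulates a hypothetical single-server finite buffer whose service times are correlated through an AR(1) process and estimates loss probability and throughput. It is meant only to make the "dependent service times" setting concrete, not to reproduce the paper's analytical model.

# Hedged sketch: simulate a finite FIFO buffer whose service times are positively
# correlated (an AR(1) process on the log of the service time), then estimate
# throughput and loss probability. The dependence structure and parameters are assumed.
from collections import deque
import numpy as np

rng = np.random.default_rng(1)
K, lam, n = 10, 0.9, 200_000          # buffer size (incl. packet in service), arrival rate, packets
phi, sigma = 0.7, 0.3                 # AR(1) coefficient and noise level (assumed)

arrivals = np.cumsum(rng.exponential(1.0 / lam, n))
x, lost, served = 0.0, 0, 0
in_system = deque()                   # departure times of accepted, not-yet-departed packets
last_departure = 0.0

for t in arrivals:
    while in_system and in_system[0] <= t:      # remove packets that already left
        in_system.popleft()
    x = phi * x + rng.normal(0.0, sigma)        # correlated log-service process
    s = np.exp(x)                               # service time of this packet
    if len(in_system) >= K:
        lost += 1                               # buffer full: packet dropped
        continue
    last_departure = max(t, last_departure) + s # FIFO: start after the previous departure
    in_system.append(last_departure)
    served += 1

print("loss probability ~", lost / n, " throughput ~", served / arrivals[-1])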

38 pages, 18446 KiB  
Article
Hybrid Machine Learning for IoT-Enabled Smart Buildings
by Robert-Alexandru Craciun, Simona Iuliana Caramihai, Ștefan Mocanu, Radu Nicolae Pietraru and Mihnea Alexandru Moisescu
Informatics 2025, 12(1), 17; https://doi.org/10.3390/informatics12010017 - 11 Feb 2025
Viewed by 1282
Abstract
This paper presents an intrusion detection system (IDS) leveraging a hybrid machine learning approach aimed at enhancing the security of IoT devices at the edge, specifically for those utilizing the TCP/IP protocol. Recognizing the critical security challenges posed by the rapid expansion of IoT networks, this work evaluates the proposed IDS model with a primary focus on optimizing training time without sacrificing detection accuracy. The paper begins with a comprehensive review of existing hybrid machine learning models for IDS, highlighting both their strengths and limitations. It then provides an overview of the technologies and methodologies implemented in this work, including the utilization of “Botnet IoT Traffic Dataset For Smart Buildings”, a newly released public dataset tailored for IoT threat detection. The hybrid IDS model is explained in detail, followed by a discussion of experimental results that assess the model’s performance in real-world conditions. Furthermore, the proposed IDS is evaluated for its effectiveness in enhancing IoT security within smart building environments, demonstrating how it can address unique challenges such as resource constraints and real-time threat detection at the edge. This work aims to contribute to the development of efficient, reliable, and scalable IDS solutions to protect IoT ecosystems from emerging security threats. Full article
(This article belongs to the Section Machine Learning)

29 pages, 8224 KiB  
Article
Detection of Domain Name Server Amplification Distributed Reflection Denial of Service Attacks Using Convolutional Neural Network-Based Image Deep Learning
by Hoon Shin, Jaeyeong Jeong, Kyumin Cho, Jaeil Lee, Ohjin Kwon and Dongkyoo Shin
Electronics 2025, 14(1), 76; https://doi.org/10.3390/electronics14010076 - 27 Dec 2024
Viewed by 1389
Abstract
Domain Name Server (DNS) amplification Distributed Reflection Denial of Service (DRDoS) attacks are a Distributed Denial of Service (DDoS) technique in which multiple IT systems forge the source IP as that of the target system and send requests to DNS servers, which then return a large number of response packets to the target system. In this attack, it is difficult to identify the attacker because the source is spoofed, and unlike TCP-based DDoS attacks, it usually uses the UDP protocol, which communicates quickly and amplifies network traffic by simply manipulating options, making it one of the most widely used DDoS techniques. In this study, we propose a simple convolutional neural network (CNN) model designed to detect DNS amplification DRDoS attack traffic, with hyperparameters adjusted through experiments. When evaluated on DNS amplification DRDoS attack detection, the proposed CNN model achieved an average accuracy of 0.9995, significantly outperforming several machine learning (ML) models. It also performed well compared to other deep learning (DL) models and, in particular, was experimentally confirmed to have the fastest execution time among them. Full article
(This article belongs to the Special Issue Machine Learning and Cybersecurity—Trends and Future Challenges)
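
A CNN of the kind described might look roughly like the Keras sketch below, applied to traffic rendered as fixed-size grayscale images; the input size, layer widths, and hyperparameters are placeholders rather than the tuned configuration reported in the paper.

# Hedged sketch: a simple CNN for classifying traffic rendered as 32x32 grayscale
# "images" into attack vs. benign. Architecture and hyperparameters are illustrative
# placeholders, not the tuned model from the paper.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # DRDoS attack vs. normal traffic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10, batch_size=64)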

26 pages, 559 KiB  
Article
A Petri Net and LSTM Hybrid Approach for Intrusion Detection Systems in Enterprise Networks
by Gaetano Volpe, Marco Fiore, Annabella la Grasta, Francesca Albano, Sergio Stefanizzi, Marina Mongiello and Agostino Marcello Mangini
Sensors 2024, 24(24), 7924; https://doi.org/10.3390/s24247924 - 11 Dec 2024
Cited by 1 | Viewed by 1472
Abstract
Intrusion Detection Systems (IDSs) are a crucial component of modern corporate firewalls. The ability of IDS to identify malicious traffic is a powerful tool to prevent potential attacks and keep a corporate network secure. In this context, Machine Learning (ML)-based methods have proven to be very effective for attack identification. However, traditional approaches are not always applicable in a real-time environment as they do not integrate concrete traffic management after a malicious packet pattern has been identified. In this paper, a novel combined approach to both identify and discard potential malicious traffic in a real-time fashion is proposed. In detail, a Long Short-Term Memory (LSTM) supervised artificial neural network model is provided in which consecutive packet groups are considered as they flow through the corporate network. Moreover, the whole IDS architecture is modeled by a Petri Net (PN) that either blocks or allows packet flow throughout the network based on the LSTM model output. The novel hybrid approach combining LSTM with Petri Nets achieves a 99.71% detection accuracy—a notable improvement over traditional LSTM-only methods, which averaged around 97%. The LSTM–Petri Net approach is an innovative solution combining machine learning with formal network modeling for enhanced threat detection, offering improved accuracy and real-time adaptability to meet the rapid security needs of virtual environments and CPS. Moreover, the approach emphasizes the innovative role of the Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) as a form of “virtual sensing technology” applied to advanced network security. An extensive case study with promising results is provided by training the model with the popular IDS 2018 dataset. Full article
(This article belongs to the Special Issue Virtual Reality and Sensing Techniques for Human)
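
A rough outline of such a pipeline: an LSTM scores consecutive packet groups and a simple two-state gate, standing in for the paper's Petri net model, forwards or blocks traffic based on that score. The window length, feature count, and threshold are illustrative assumptions.

# Hedged sketch: an LSTM scores windows of packet features; a two-state gate
# (a stand-in for the paper's Petri net model) forwards or blocks the flow.
# Window length, feature count and threshold are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 20, 8
lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # P(window is malicious)
])
lstm.compile(optimizer="adam", loss="binary_crossentropy")

class Gate:
    """Two states, FORWARD and BLOCK, toggled by the classifier score."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.state = "FORWARD"

    def update(self, window):
        score = float(lstm.predict(window[None, ...], verbose=0)[0, 0])
        self.state = "BLOCK" if score >= self.threshold else "FORWARD"
        return self.state

gate = Gate()
print(gate.update(np.random.rand(WINDOW, FEATURES).astype("float32")))   # untrained demo run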

28 pages, 26501 KiB  
Article
A Reordering Buffer Management Method at Edge Gateway in Hybrid IP-ICN Multipath Transmission System
by Yuqi Liu, Rui Han and Xu Wang
Future Internet 2024, 16(12), 464; https://doi.org/10.3390/fi16120464 - 11 Dec 2024
Viewed by 1082
Abstract
Multipath transmission in ICN provides high transmission efficiency and stability. In an IP-ICN compatible network environment, unmodified IP terminal devices can access ICN through gateways, benefiting from these performance enhancements. This paper proposes a gateway framework for hybrid IP-ICN multipath transmission systems, enabling protocol conversion and quality of service management. A packet reordering module is integrated at the egress gateway to address complex packet disorder issues caused by ICN multipath transmission, thereby enhancing the service quality provided to IP terminals. A Reordering Buffer Management Method (RBMM) is introduced, consisting of two key components. First, RBMM employs an improved dynamic threshold scheme for reserved buffer partitioning, efficiently identifying congestion and optimizing buffer resource utilization. Second, a flow-priority-based replacement strategy is designed to enhance fairness in resource allocation by evicting packets with lower delivery probability during congestion. Experimental results demonstrate that RBMM dynamically adapts to varying traffic conditions, maintaining high transmission performance while reducing buffer resource consumption. In comparison to existing methods, RBMM significantly reduces queuing delay and flow completion time, providing more balanced resource allocation when multiple flows compete for limited buffer capacity. Full article
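
The flow-priority-based replacement idea can be caricatured as follows: when the reordering buffer is full, the packet from the flow with the lowest priority (used here as a stand-in for the paper's delivery-probability estimate) is evicted first. Capacity and priorities are invented for illustration.

# Hedged sketch of priority-based eviction in a full reordering buffer: the
# packet from the lowest-priority flow is dropped first. Priorities stand in
# for the paper's delivery-probability estimate and are invented here.
import heapq

class ReorderBuffer:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.heap = []              # entries: (flow_priority, seq_no, packet)

    def insert(self, flow_priority, seq_no, packet):
        if len(self.heap) >= self.capacity:
            evicted = heapq.heappop(self.heap)     # lowest-priority flow loses a packet
            if evicted[0] >= flow_priority:
                heapq.heappush(self.heap, evicted) # newcomer is the lowest: drop it instead
                return False
        heapq.heappush(self.heap, (flow_priority, seq_no, packet))
        return True

buf = ReorderBuffer(capacity=2)
buf.insert(5, 1, "pkt-a"); buf.insert(1, 2, "pkt-b"); print(buf.insert(9, 3, "pkt-c"))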

29 pages, 1451 KiB  
Article
A Coloring-Based Packet Loss Rate Measurement Scheme on Network Nodes
by Shuhe Wang, Rui Han and Xu Wang
Electronics 2024, 13(23), 4692; https://doi.org/10.3390/electronics13234692 - 27 Nov 2024
Viewed by 962
Abstract
Network measurement is an efficient way to understand network behavior. Traditional measurement techniques focus on internet protocol (IP) networks, where the processing capacity of network nodes is limited and primarily dedicated to packet forwarding. As a result, these techniques typically rely on end hosts or external systems to analyze traffic and evaluate network performance. This reliance introduces several challenges, such as increased measurement latency and scalability limitations, particularly in large-scale networks. With the emergence of next-generation internet architectures, especially information-centric networking (ICN), network nodes have gained enhanced capabilities, enabling measurement tasks to be performed directly at these nodes. This paper proposes a distributed measurement scheme where network nodes collaborate to monitor the packet loss rate on the intermediate link. By setting an unused bit in the packet header, the upstream node “colors” the packets into different color blocks. The minimum duration of each block is determined by the degree of reordering on the link, and the number of packets in each block must be a power of two. The downstream node recognizes blocks, assigns packets to the right block, and deduces the original number of packets for each block to calculate packet loss. Moreover, the upstream node adjusts the number of packets in each block based on the packet transmission rate on the link, aiming to balance measurement accuracy and frequency. A P4-based implementation on a BMv2 software switch is presented to demonstrate the feasibility of the proposed scheme. Simulations show that this scheme improves measurement accuracy and is more robust against packet reordering. Additionally, the proposed scheme maintains relatively low network overhead and, at higher measurement frequencies, exhibits the lowest overhead compared to existing methods. Full article
(This article belongs to the Section Networks)
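
The coloring idea can be illustrated end to end in a few lines: the upstream node flips a one-bit color every B packets (B a power of two) and the downstream node counts packets per color run, so the per-block shortfall from B is the loss on the link. The simulation below is a simplified illustration that omits the adaptive block sizing and reordering tolerance described above.

# Hedged sketch of coloring-based loss measurement: the upstream node alternates a
# one-bit "color" every B packets (B a power of two); the downstream node counts
# packets per color run and infers the per-block loss as B minus what it observed.
# Reordering handling and adaptive block sizing from the paper are omitted.
import random

random.seed(7)
B = 8                                    # packets per color block (a power of two)
n_blocks = 1000
loss_rate = 0.03

sent = [(i // B) % 2 for i in range(B * n_blocks)]              # color bit of each packet
received = [c for c in sent if random.random() > loss_rate]     # independent random losses

# Downstream node: split the received stream into runs of equal color; each run
# corresponds to one block that originally held exactly B packets.
lost = 0
run_color, run_len = received[0], 0
for c in received:
    if c == run_color:
        run_len += 1
    else:
        lost += max(B - run_len, 0)      # previous block ended; shortfall = losses
        run_color, run_len = c, 1
lost += max(B - run_len, 0)              # account for the final block

print("estimated loss rate:", lost / (B * n_blocks), "(true rate:", loss_rate, ")")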

26 pages, 3237 KiB  
Article
QoS-Aware Power-Optimized Path Selection for Data Center Networks (Q-PoPS)
by Mohammed Nsaif, Gergely Kovásznai, Ali Malik and Ruairí de Fréin
Electronics 2024, 13(15), 2976; https://doi.org/10.3390/electronics13152976 - 28 Jul 2024
Viewed by 1233
Abstract
Data centers consume significant amounts of energy, contributing indirectly to environmental pollution through greenhouse gas emissions during electricity generation. According to the Natural Resources Defense Council, information and communication technologies and networks account for roughly 10% of global energy consumption. Reducing power consumption in Data Center Networks (DCNs) is crucial, especially given that many data center components operate at full capacity even under low traffic conditions, resulting in high costs for both service providers and consumers. Current solutions often prioritize power optimization without considering Quality of Service (QoS). Services such as video streaming and Voice over IP (VoIP) are particularly sensitive to loss or delay and require QoS to be maintained below certain thresholds. This paper introduces a novel framework called QoS-Aware Power-Optimized Path Selection (Q-PoPS) for software-defined DCNs. The objective of Q-PoPS is to minimize DCN power consumption while ensuring that an acceptable QoS is provided, meeting the requirements of DCN services. This paper describes the implementation of a prototype for the Q-PoPS framework that leverages the POX Software-Defined Networking (SDN) controller. The performance of the prototype is evaluated using the Mininet emulator. Our findings demonstrate the performance of the proposed Q-PoPS algorithm in three scenarios: a best case, enhancing real-time traffic protocol quality without increasing power consumption; a midrange case, replacing bottleneck links while preserving real-time traffic quality; and a worst case, identifying new paths that may increase power consumption but maintain real-time traffic quality. This paper underscores the need for a holistic approach to DCN management, optimizing both power consumption and QoS for critical real-time applications. We present the Q-PoPS framework as evidence that such an approach is achievable. Full article
(This article belongs to the Section Networks)
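
The selection logic (QoS first, then power) can be approximated by the toy function below: candidate paths violating the delay or loss thresholds are discarded, and among the survivors the path requiring the fewest additional powered-on links is chosen. The path data, thresholds, and power model are placeholders, not the Q-PoPS algorithm itself.

# Hedged sketch of QoS-aware, power-minimizing path selection: drop candidate paths
# that violate QoS thresholds, then pick the one needing the fewest extra powered-on
# links. Candidate paths, thresholds and the power model are illustrative placeholders.
def pick_path(candidates, powered_links, max_delay_ms, max_loss):
    feasible = [p for p in candidates
                if p["delay_ms"] <= max_delay_ms and p["loss"] <= max_loss]
    if not feasible:
        return None
    # Cost = number of links that would have to be switched on for this path.
    return min(feasible,
               key=lambda p: sum(1 for link in p["links"] if link not in powered_links))

candidates = [
    {"links": {"a-b", "b-c"},        "delay_ms": 4.0, "loss": 0.001},
    {"links": {"a-d", "d-c"},        "delay_ms": 2.5, "loss": 0.000},
    {"links": {"a-e", "e-f", "f-c"}, "delay_ms": 9.0, "loss": 0.020},
]
print(pick_path(candidates, powered_links={"a-b", "b-c", "a-d"},
                max_delay_ms=5.0, max_loss=0.01))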

31 pages, 4049 KiB  
Article
A Novel Deep Learning Framework for Intrusion Detection Systems in Wireless Network
by Khoa Dinh Nguyen Dang, Peppino Fazio and Miroslav Voznak
Future Internet 2024, 16(8), 264; https://doi.org/10.3390/fi16080264 - 25 Jul 2024
Cited by 1 | Viewed by 2457
Abstract
In modern network security setups, Intrusion Detection Systems (IDS) are crucial elements that play a key role in protecting against unauthorized access, malicious actions, and policy breaches. Despite significant progress in IDS technology, two major obstacles remain: how to avoid false alarms caused by imbalanced data, and how to accurately forecast the precise type of attack before it happens so as to minimize the damage caused. To address both problems, we propose a two-task regression and classification strategy called Hybrid Regression–Classification (HRC), a deep learning-based strategy for developing an intrusion detection system (IDS) that can minimize the false alarm rate and detect and predict potential cyber-attacks before they occur, helping current wireless networks deal with attacks more efficiently and precisely. The experimental results show that our HRC strategy accurately predicts the incoming behavior of the IP data traffic in two different datasets. This can help the IDS detect potential attacks sooner and with high accuracy, leaving enough reaction time to deal with the attack. Furthermore, our proposed strategy can also handle imbalanced data, even when the imbalance between categories is large. This helps significantly reduce the false alarm rate of the IDS in practice. These strengths combined make the IDS more proactive in defense and help address the intrusion detection problem more effectively. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
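
The two-task idea, forecasting near-term traffic behavior and classifying the attack type from a shared representation, might be structured as in the Keras sketch below; the dimensions, layer sizes, losses, and number of attack classes are assumptions for illustration only.

# Hedged sketch of a shared-backbone, two-head model: one regression head forecasting
# the next traffic-feature vector and one classification head predicting the attack
# type. Dimensions, losses and weights are illustrative assumptions.
import tensorflow as tf

WINDOW, FEATURES, N_CLASSES = 16, 12, 5

inputs = tf.keras.Input(shape=(WINDOW, FEATURES))
shared = tf.keras.layers.LSTM(64)(inputs)
reg_out = tf.keras.layers.Dense(FEATURES, name="forecast")(shared)           # regression task
cls_out = tf.keras.layers.Dense(N_CLASSES, activation="softmax",
                                name="attack_type")(shared)                  # classification task

model = tf.keras.Model(inputs, [reg_out, cls_out])
model.compile(optimizer="adam",
              loss={"forecast": "mse", "attack_type": "sparse_categorical_crossentropy"},
              loss_weights={"forecast": 1.0, "attack_type": 1.0})
model.summary()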

37 pages, 1437 KiB  
Article
Unveiling Malicious Network Flows Using Benford’s Law
by Pedro Fernandes, Séamus Ó Ciardhuáin and Mário Antunes
Mathematics 2024, 12(15), 2299; https://doi.org/10.3390/math12152299 - 23 Jul 2024
Cited by 1 | Viewed by 2412
Abstract
The increasing proliferation of cyber-attacks threatening the security of computer networks has driven the development of more effective methods for identifying malicious network flows. The inclusion of statistical laws, such as Benford’s Law, and distance functions, applied to the first digits of network flow metadata, such as IP addresses or packet sizes, facilitates the detection of abnormal patterns in the digits. These techniques also allow for quantifying discrepancies between expected and suspicious flows, significantly enhancing the accuracy and speed of threat detection. This paper introduces a novel method for identifying and analyzing anomalies within computer networks. It integrates Benford’s Law into the analysis process and incorporates a range of distance functions, namely the Mean Absolute Deviation (MAD), the Kolmogorov–Smirnov test (KS), and the Kullback–Leibler divergence (KL), which serve as dispersion measures for quantifying the extent of anomalies detected in network flows. Benford’s Law is recognized for its effectiveness in identifying anomalous patterns, especially in detecting irregularities in the first digit of the data. In addition, Bayes’ Theorem was implemented in conjunction with the distance functions to enhance the detection of malicious traffic flows. Bayes’ Theorem provides a probabilistic perspective on whether a traffic flow is malicious or benign. This approach is characterized by its flexibility in incorporating new evidence, allowing the model to adapt to emerging malicious behavior patterns as they arise. Meanwhile, the distance functions offer a quantitative assessment, measuring specific differences between traffic flows, such as frequency, packet size, time between packets, and other relevant metadata. Integrating these techniques has increased the model’s sensitivity in detecting malicious flows, reducing the number of false positives and negatives, and enhancing the resolution and effectiveness of traffic analysis. Furthermore, these techniques expedite decisions regarding the nature of traffic flows based on a solid statistical foundation and provide a better understanding of the characteristics that define these flows, contributing to the comprehension of attack vectors and aiding in preventing future intrusions. The effectiveness and applicability of this joint method have been demonstrated through experiments with the CICIDS2017 public dataset, which was explicitly designed to simulate real scenarios and provide valuable information to security professionals when analyzing computer networks. The proposed methodology opens up new perspectives in investigating and detecting anomalies and intrusions in computer networks, which are often attributed to cyber-attacks. This development culminates in creating a promising model that stands out for its effectiveness and speed, accurately identifying possible intrusions with an F1 of nearly 80%, a recall of 99.42%, and an accuracy of 65.84%. Full article
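
To make the combination of Benford's Law, distance functions, and Bayes' Theorem concrete, the sketch below computes the first-digit distribution of synthetic packet sizes, measures its deviation from Benford's expected distribution with MAD and KL divergence, and performs a single Bayes update on the hypothesis that the flow is malicious. The thresholds, priors, and likelihoods are invented; the paper calibrates its method on CICIDS2017.

# Hedged sketch: compare the first-digit distribution of a flow's packet sizes with
# Benford's Law using MAD and KL divergence, then apply one Bayes update on the
# hypothesis "flow is malicious". All thresholds, priors and likelihoods are invented.
import numpy as np

rng = np.random.default_rng(3)
packet_sizes = rng.lognormal(mean=6.0, sigma=1.0, size=5000).astype(int) + 1   # synthetic flow

first_digit = np.array([int(str(v)[0]) for v in packet_sizes])
observed = np.array([(first_digit == d).mean() for d in range(1, 10)])
benford = np.log10(1 + 1 / np.arange(1, 10))                    # expected Benford proportions

mad = np.mean(np.abs(observed - benford))                       # Mean Absolute Deviation
kl = np.sum(observed * np.log((observed + 1e-12) / benford))    # Kullback-Leibler divergence

# One Bayes update: P(malicious | deviation) from an assumed prior and likelihoods.
prior = 0.05
p_dev_given_mal, p_dev_given_benign = 0.7, 0.1                  # invented likelihoods of "large MAD"
deviates = mad > 0.015                                          # invented conformity threshold
if deviates:
    posterior = (p_dev_given_mal * prior) / (
        p_dev_given_mal * prior + p_dev_given_benign * (1 - prior))
else:
    posterior = ((1 - p_dev_given_mal) * prior) / (
        (1 - p_dev_given_mal) * prior + (1 - p_dev_given_benign) * (1 - prior))

print(f"MAD={mad:.4f}  KL={kl:.4f}  P(malicious|evidence)={posterior:.3f}")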

16 pages, 257 KiB  
Review
Optimization Algorithms in SDN: Routing, Load Balancing, and Delay Optimization
by Maria Daniela Tache (Ungureanu), Ovidiu Păscuțoiu and Eugen Borcoci
Appl. Sci. 2024, 14(14), 5967; https://doi.org/10.3390/app14145967 - 9 Jul 2024
Cited by 15 | Viewed by 6090
Abstract
Software-Defined Networking (SDN) is today a mature technology, deployed in many networks and also embedded in novel architectures such as 5G and 6G. The centralization of control in SDN brings significant advantages for management and control, together with the programmability of the data plane. SDN represents a paradigm shift towards agile, efficient, and secure network infrastructures, moving away from traditional, hardware-centric models to embrace dynamic, software-driven paradigms. SDN is also compliant with the virtualization architecture defined in the Network Function Virtualization framework. However, for some years to come, SDN will have to cooperate seamlessly with the distributed TCP/IP control that has been developed worldwide over decades. Among other things, the traditional tasks of routing, forwarding, load balancing, QoS assurance, security, and privacy must still be solved. SDN's native centralization also brings new challenges and problems that differ from those of traditional distributed-control IP networks. The algorithms and protocols usable in SDN should meet requirements such as scalability, convergence, redundancy assurance, sustainability, and good real-time response, and allow orchestrated automation in enhancing network resilience and adaptability. This work presents a theoretical review of state-of-the-art SDN optimization techniques, offering a critical and comparative discussion of various algorithms for tasks such as routing (including dynamic routing), forwarding, load balancing and traffic optimization, and forwarding delay minimization. Attention is paid to general algorithms that can offer pragmatic solutions for large systems or multiple-metric routing. Full article
(This article belongs to the Special Issue Emerging Technologies in Network Security and Cryptography)