Article

Application Layer Packet Processing Using PISA Switches

1
Department of Computer Engineering, KTH Royal University of Technology, SE-114 28 Stockholm, Sweden
2
Department of Computer Engineering, Konya Food and Agriculture University, Konya 42080, Turkey
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(23), 8010; https://doi.org/10.3390/s21238010
Submission received: 3 November 2021 / Revised: 24 November 2021 / Accepted: 25 November 2021 / Published: 30 November 2021
(This article belongs to the Section Internet of Things)

Abstract

This paper investigates and proposes a solution for Protocol Independent Switch Architecture (PISA) to process application layer data, enabling the inspection of application content. PISA is a novel approach in networking where the switch does not run embedded binary code but rather interpreted code written in a domain-specific language. The main motivation behind this approach is that telecommunication operators do not want to be locked in to a single vendor for any type of networking equipment and want to develop their own networking code in a hardware environment that is not governed by a single equipment manufacturer. This approach also eases the modeling of equipment in a simulation environment, as all of the components of a hardware switch run the same compatible code in a software-modeled switch. The novel techniques in this paper exploit the main functions of a programmable switch and combine them with a streaming data processor to create the desired effect from a telecommunication operator's perspective: lowering costs and governing the network in a comprehensive manner. The results indicate that the proposed solution using PISA switches enables application visibility with outstanding performance. This ability helps operators remove a fundamental gap between flexibility and scalability by making the best use of limited compute resources for application identification and the response to it. The experimental study indicates that, without any optimization, the proposed solution increases the performance of application identification systems by a factor of 5.5 to 47.0. This study promises that DPI, NGFW (Next-Generation Firewall), and similar application layer systems, which have quite high costs per unit of traffic volume and could not scale to the Tbps level, can be combined with PISA to overcome these cost and scalability issues.

1. Introduction

The telecommunication world is undergoing a great transformation. The most important aspect of this transformation is the switch from old hardware-dependent, vertical architectures to software-defined architectures. Although the use of Network Function Virtualization (NFV) was a key improvement in the data plane with improved flexibility, Protocol Independent Switch Architecture (PISA) is one of the key elements of this change, with accelerated performance and intelligent processing ability in the data plane. The change in architecture affects all stakeholders in a telecommunication operator's infrastructure, including applications. Applications written for legacy hardware are being transformed into software-defined architectures.
Independently of software-defined architectures, application identification has become critical in the last decade. It has positioned itself at the center of cyber-security, accounting, quality of service management, and similar services. One of the most important problems in application identification is its resource-hungry nature. Next-Generation Firewalls (NGFW) and Deep Packet Inspection (DPI) systems are two of the most popular usage areas of application identification. DPI, as the name implies, inspects every packet running through the network deeply and tries to classify it under a human-readable name. It relies not only on packet metadata and headers but also on the packet payload, hence the name "Deep".
While L4 (OSI Layer-4) provides valuable information about a packet, it cannot give us any clue about the payload. To do that, packets must be inspected while maintaining stateful information, and the payload must be reconstructed accordingly so that it can be classified correctly [1]. With the help of L4 information, network-side security, such as stateful firewalls, can be built. Like NGFWs, which process packets at L7, DPI must also inspect traffic at L7. With the emergence of the SDN architecture, DPI vendors switched from hardware-based to software-based L7 DPIs. As they switched from hardware-dependent architectures to SDN-based architectures, they lost the scalability needed to match the actual line speed of the switches. While the capacity of the data backbone increases, the systems depending on application identification have become the bottleneck of the infrastructure.
In this transformation, network applications become Virtual Network Functions (VNFs). Current state-of-the-art high-performance software-based DPI systems (DPI VNFs) can scale up to 160 Gbps in a virtual machine running on top of powerful hardware [2]. As demand increases, telecom operators will need application identification systems such as NGFWs and DPI systems that classify traffic in real time at speeds on the order of Tbps, ideally in a single instance. The performance gain arises from the fact that the classification operation starts in the switch-level data plane code and continues in the user plane.
In the meantime, several parallel works have concentrated on various aspects (machine and deep learning methods, cybersecurity, etc.) of SDN. For instance, Ref. [3] argued for the necessity of network operators providing quality of service (QoS) for each application on the network, which can be accomplished by classifying the network traffic associated with the applications. The authors showed that deep learning models can be employed for classifying network traffic and that a residual network (ResNet) model outperforms a convolutional neural network (CNN) model. In another work, cybersecurity-related issues on the network layer are investigated [4], for instance, detecting application-layer DoS attacks that utilize encrypted protocols by applying an anomaly-detection-based approach to statistics extracted from network packets.
In this paper, we aim to introduce the application layer processing capabilities of P4-based programmable switches and their usage in application layer processing. We investigated and propose a solution for Protocol Independent Switch Architecture to process application layer data, enabling the inspection of application content and the triggering of an appropriate response. Protocol Independent Switch Architecture is a novel approach in networking where the switch does not run embedded binary code but rather interpreted code written in a purpose-specific language. The main motivation behind this approach is that telecommunication operators do not want to be locked in to a single vendor for any type of networking equipment and want to develop their own networking code in a hardware environment that is not governed by a single equipment manufacturer. This approach also eases the modeling of equipment in a simulation environment, as all of the components of a hardware switch run the same compatible code in a software model. The novel techniques in this paper exploit the main functions of a programmable switch to create the desired effect from a telecommunication operator's perspective: lowering costs and governing the network in a comprehensive manner.
As stated before, the current growth in traffic demand puts a burden on the applications running in the application layer in the telecommunication world: the performance and capacity of DPI systems and Next-Generation Firewalls do not grow with the demand, and they cannot adapt to the current revolution migrating networks to SDN-based systems. This paper proposes a solution using PISA switches with a DPI, enabling application visibility (type identification) with outstanding performance. The proposed architecture processes the packets in a network switch while forwarding only the necessary ones to L7-based systems such as DPI and NGFW. The proposed solution thus distributes the load between PISA switches and DPI/NGFW systems. Using the proposed solution in a network allows users to grow to the Tbps scale as well as benefit from Network/Service Function Chaining, which also removes the overhead of passing unnecessary traffic through all inspection systems. The simulation studies demonstrate that this approach improves the performance of NGFW and DPI systems by a factor of roughly 40. Building such a flexible and scalable application visibility system is challenging. This study also tries to answer how network operators should design their networks to benefit from such a solution, processing packets with L7 knowledge at the performance of L4; in other words, how to scale out such a system for a high volume of data in real time.

2. Background

2.1. Protocol Independent Switch Architecture (PISA)

The research on programmable switches led to the definition of a re-configurable match-action table (RMT) [5] based hardware that can be programmed with a domain-specific language. Protocol Independent Switch Architecture (PISA) is a special case of RMT that supports the P4 language as the default domain-specific language [6].
A typical PISA switch consists of a programmable parser, an ingress match-action table, a queue, a set of registers to keep state, an egress match-action table, and a programmable deparser, as shown in Figure 1.
The parser and deparser are programmed for processing user-defined packet header formats. The ingress and egress pipelines are the actual packet processing units that go through match-action tables in stages. Match-action tables match the header based on a set of rules controlled by the control plane and perform the corresponding action on the packet. Actions use primitives to modify the non-persistent resources (headers or metadata) of each packet.

2.2. P4 Language

Although there are several studies developing and using programmable hardware [8,9,10,11], an early use of programmable hardware was to make telemetry data easy to use. Telemetry data are crucial for an automated future, but generating telemetry data is not a trivial task. Adding more hardware and software to routing and switching systems makes the current architecture more complex than ever. Since telemetry data are generated at the packet level, the most logical place to generate them is the packet-processing software at the hardware level, which leads us to P4, Programming Protocol-independent Packet Processors, as it is referred to in the original paper defining it [12]. P4 is a domain-specific programming language for packet-processing hardware such as routers, switches, network interface cards, and network-function-related appliances, whose data plane operates based on decisions from the control plane, as in Figure 2.
In a typical PISA switch, the execution of a P4 program proceeds as explained in Figure 3.
  • The user develops a P4 program, which can be any type of network function, such as router, firewall, load balancer, or packet inspection switch.
  • The P4 compiler compiles the program into a JSON file and sends it to the switch, which can be a physical switch or a software model of it.
  • The states of the parser, match-action tables, ingress and egress queues, and deparser are controlled by the P4 execution.
  • The states of the match-action tables are additionally controlled by the control plane, which can change the behavior of the P4 code at run-time.
When the P4 compiler is placed between the program and the API, it translates the domain-specific P4 code into a JSON file, which acts as an executable file for the PISA switch. The CLI commands required to configure the switch are also sent with this JSON file, which typically contains the newly added match-action table names and the ingress and egress queue names to be created on the PISA switch. This JSON file is effectively a series of match-action table entries that acts as an executable for the switch, changing the state of its tables based on the incoming packet.
The control plane commands contain the necessary table initialization based on the packet processing actions. The implementation of P4 control plane commands may differ depending on the switch type (physical or virtual), the vendor, and the version of P4 (P4-14, P4-16). The following commands are valid for the Simple Switch Behavioral Model V2, P4-16 [14]:
  • table_set_default <table_name> <action_name> <action_parameters>
    is used to set the default action (i.e., the action executed when no match is found) of a table.
  • table_add <table_name> <action_name> <match_fields> => <action_parameters>
    is used to set the action related to a specific match in a table.
  • mirror_add <source> <destination>
    is used to mirror a specific port internally.
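The effect of the first two commands on a match-action table can be sketched in Python. This is an illustrative model of the table semantics only, not the BMv2 implementation; the `ipv4_lpm` table, action names, and parameters are hypothetical examples.

```python
# Simplified model of a PISA match-action table and the CLI commands
# that populate it (table_set_default / table_add). Illustrative only.

class MatchActionTable:
    def __init__(self, name):
        self.name = name
        self.entries = {}          # match fields -> (action, params)
        self.default = (None, ())  # action executed when no match is found

    def table_set_default(self, action, *params):
        self.default = (action, params)

    def table_add(self, action, match_fields, *params):
        self.entries[match_fields] = (action, params)

    def lookup(self, match_fields):
        return self.entries.get(match_fields, self.default)

# Example: a forwarding table keyed on the destination IP address.
ipv4_lpm = MatchActionTable("ipv4_lpm")
ipv4_lpm.table_set_default("drop")
ipv4_lpm.table_add("ipv4_forward", ("10.0.0.1",), "00:00:00:00:01:01", 1)

print(ipv4_lpm.lookup(("10.0.0.1",)))  # matched entry: forward action
print(ipv4_lpm.lookup(("10.0.0.9",)))  # no match: default action (drop)
```

The control plane updates `entries` at run-time while the data plane only performs lookups, which mirrors the split described above.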
P4 programs ease the development of network equipment code to the point that only 128 lines are enough to build a simple IP switch with header validation [15]. Although the language itself is simple, there are also tools that emit P4 code from another high-level language, such as P4HDL [13], which generates P4 code from pseudo-code.

2.3. In-Band Telemetry with Programmable Switches

The requirements above for developing programmable hardware are not the only features addressed by P4. One of the most promising features of P4 arises in telemetry. In-band Network Telemetry (INT) is defined in the P4 language as one of its main applications [14]. Since P4 executes at the packet-processing level, it can rewrite every segment of the packet header, including custom headers. This type of modification cannot be done in traditional, statically programmed hardware-based network equipment. P4 helps set up a data plane that uses the packet headers to collect even more information on the network's status than what can be determined using conventional methods [16]. The idea behind INT is to collect telemetry metadata for each packet, including the routing path of the packet, entry and exit timestamps, the packet's latency, queue occupancy at a given node, egress port utilization, and the like. These measurements can be produced by each network node and sent as a report to the monitoring system. Alternatively, the metadata can be embedded in the packets themselves at each node the packet visits and extracted at a designated node that forwards them to the monitoring system. In a recent study, researchers experimentally validated P4 INT for telemetry-based monitoring applications on multi-layer optical network switches [17]. Using the telemetry data and the integrated software around it, semi-automatic congestion control over optical network switches can be achieved with currently available SDN/NFV systems.
Although telemetry data can be collected in any way defined by the P4 code, there are two types of telemetry defined in a standard P4 implementation [6]. As shown in Figure 4, telemetry data can be either embedded within a packet, which is called INT-MD, or extracted as a separate packet, called INT-XD. INT-MD is usually used by intermediate routers and switches to identify any type of problem that might occur along the path, while INT-XD is useful for external applications that do not need the payload of the original packet. In this experiment, INT metadata are used to help measure a switch's internal state, such as ingress/egress port ID, switch ID, queue occupancy, processing time, etc. These metrics are application agnostic and help in application-layer processing.
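The metadata fields listed above can be serialized into a compact binary report. The following sketch packs and unpacks such a report; the field layout (switch ID, ingress/egress port, queue depth, hop latency) is an illustrative assumption, not the exact INT wire format from the specification.

```python
# Sketch of serializing an INT-XD-style telemetry report.
# The layout below is illustrative, not the INT spec wire format.
import struct

# switch_id (u32), ingress_port (u16), egress_port (u16),
# queue_depth (u32), hop_latency_ns (u64)
INT_FMT = "!IHHIQ"

def pack_int_report(switch_id, in_port, out_port, queue_depth, latency_ns):
    return struct.pack(INT_FMT, switch_id, in_port, out_port,
                       queue_depth, latency_ns)

def unpack_int_report(data):
    sid, inp, outp, qd, lat = struct.unpack(INT_FMT, data)
    return {"switch_id": sid, "ingress_port": inp, "egress_port": outp,
            "queue_depth": qd, "hop_latency_ns": lat}

report = pack_int_report(7, 1, 2, 42, 1500)
print(unpack_int_report(report))
```

An INT-XD collector would receive many such fixed-size reports and feed them to the stream processor without ever touching the original packet payload.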

2.4. Real-Time Data Streaming

Real-time data streaming has been shown to be beneficial for safety-critical networks by removing possible bottleneck situations at data accumulation joints, such as the data aggregator switches in industrial networks [18]. In these networks, a possible delay in data would cause disastrous events, and data streaming is a very good candidate remedy for this problem [19]. In the context of programmable switches, real-time data streaming is combined with telemetry to add application analytics, visibility, and troubleshooting features to a network stream. Apache Spark [20] and Apache Flink [21] are two of the most prominent pieces of software used in streaming network telemetry data.

2.5. Deep Packet Inspection (DPI) and Application Layer Visibility

Deep Packet Inspection is important for telecommunication operators to gain more insight about the network and subscribers, for revenue generation as well as cyber-security. A series of studies [22,23] in this area by the same author showed that subscriber profiling based on application-level classification is critical for operators to increase revenue and generate insight about the network. As the name implies, DPI inspects every packet with respect to the source, destination, header information, payload, and any other layer wrapped into it. Application layer visibility enables operators to distinguish between their subscribers and offer them new subscription services accordingly. As video content is on the rise, operators can make offers to subscribers based on their use of online video services, such as Netflix, Amazon Prime, or Hulu. In addition, DPI is a supportive tool for Lawful Intercept or for applying appropriate filters to the Internet access of children.

3. Application Layer Processing with P4 Switches

The transformation from legacy systems into software-defined architectures triggered a change in hardware architectures. The demand for this change resulted in the development of PISA switches. The current state of the art in PISA switches can scale up to 12.8 Tbps, with a single ASIC/FPGA interface running at 400 Gbps. After the introduction of PISA switches into the production environment, applications running at L4, such as load balancers, volumetric DDoS attack detection and prevention systems, and port-based DNS applications, are being ported onto PISA switches. In this study, we aim to extend the use of PISA switches to L7 applications by designing a proper architecture. In the proposed architecture, using PISA switches and their primary programming language P4, an application-level traffic analyzing system is built in a software-based emulation environment. It essentially combines the L4 analytics of the P4 architecture with the L7 capabilities of the current state of the art in DPI and similar application layer inspection systems. The proposed architecture can be used to build a brand-new NGFW or DPI, eliminating the complexities arising from switch-dependent code.

3.1. Proposed System Architecture

The proposed system architecture in Figure 5 consists of five main components: PISA, Data Plane, Control Plane, Stream Processor, and Application-Level Visibility.
PISA: Programmable Switch that can run multiple instances of different P4 codes.
Data Plane: The generated P4 code for specific monitoring/telemetry/DPI/NGFW tasks. These P4 programs can be deployed according to specific task needs.
Control Plane: The control plane engine for the programmable switch. The control plane is aware of the data plane drivers and can communicate with the underlying switch according to the specific tasks. Although the proposed architecture supports any application-specific task, from now on the architecture will be coupled with the DPI use case to make it easier to understand. This module is DPI-aware and is fed from the specific packet stream, so that any decision to be made on the switch can be controlled by examining the specific packets.
Stream Processor: The stream processor to operate on the matching stream patterns based on the decisions taken from data plane configuration. Specific telemetry tasks can be offloaded to the stream processor to decrease the workload over the switch or vice versa. Workload trade-off between the stream processor and the switch is based on the number of streams that matches a specific monitoring task.
Application-Level Visibility: Application-level visibility is the component that actually identifies the types of applications based on their L4 to L7 properties; this is also called DPI. In a typical DPI system, a server with network interfaces runs the DPI application. There are two usage modes of DPI systems: active and passive. In the passive mode, the system is fed by a mirror of the traffic and processes it offline. On the other hand, active DPI systems sit in the traffic path and must process all of the traffic, piece by piece, in real time.
In the proposed architecture, the PISA switch processes the packets in the network layer, can even process the flows in the transport layer, and co-operates with the stream processor to identify the applications. This is the point where the aggregation–disaggregation between the high-performing PISA switch and the application identification engines takes place.
The PISA switch selects a minimal set of packets from the flows and forwards them to the stream processor/DPI engine to identify the applications and trigger actions based on predefined policies. The proposed architecture combines the power of PISA with L7 application inspection/classification/processing features by designing them together. The simulation results indicate that, in the near future, most systems using application awareness will be re-designed to run on top of PISA switches, with their applications redesigned as stream processors. The algorithm shown in Listing 1 explains our approach:
Listing 1. Pseudo Code proposed for the P4 switches.
While packet in ingress buffer
	Extract telemetry headers
	Put telemetry headers in Flow-Keys
	If Flow not in Flow-Table
		Create flow in Flow-Table
	Else if Flow-Packet-Count < 2
		Put payload in Flow-Packets ...
		    with Flow-Keys in Flow-Table
		Continue
	Else
		Create telemetry header with ...
		    INT-XD options
		Send Flow-Table entry in Flow-Keys ...
		    to external telemetry
Accurate accounting of the flows can also be done in the P4 language. Given the definition of a flow, the accounting should include, for every flow, the number of packets, the number of bytes, the flow start time, the flow end time, and, for TCP flows, the TCP flags.
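The per-flow bookkeeping described above can be sketched in a few lines of Python. This is a minimal illustration of the accounting logic, not the P4 register layout used on the switch; the flow-key tuple and flag encoding are assumptions for the example.

```python
# Sketch of per-flow accounting: packet and byte counts, first/last
# timestamps, and the OR of observed TCP flag bits. Illustrative only.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    packets: int = 0
    bytes: int = 0
    start_time: float = 0.0
    end_time: float = 0.0
    tcp_flags: int = 0  # bitwise OR of TCP flags seen on this flow

flow_table = {}

def account(flow_key, length, timestamp, tcp_flags=0):
    rec = flow_table.get(flow_key)
    if rec is None:
        rec = flow_table[flow_key] = FlowRecord(start_time=timestamp)
    rec.packets += 1
    rec.bytes += length
    rec.end_time = timestamp
    rec.tcp_flags |= tcp_flags

# Example: three packets of one TCP flow (0x02 = SYN, 0x10 = ACK).
key = ("10.0.0.1", "10.0.0.2", 43210, 443, "TCP")
account(key, 60, 1.00, 0x02)
account(key, 60, 1.01, 0x10)
account(key, 1500, 1.05, 0x10)
print(flow_table[key])
```

On the switch, the same counters would live in P4 registers indexed by a hash of the flow key, with the record exported to the aggregator when the flow ends.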
The P4 code on the switch combines the accounting information and sends the rest to the aggregator. Please see Appendix A for the details of the corresponding pseudo-code. This pseudo-code works as a preprocessor for the flow: it extracts the required fields and sends them to the stream processor for further processing.
Lastly, traditional DPI systems have two operating modes:
  • Inline
  • Out-of-Band
In the inline mode, DPI systems are placed between the edge and the core network, so that the traffic is processed as the flow continues. This operating mode enables the DPI to apply policies directly on the flow, without requiring any other hardware. The biggest disadvantage of this approach is that the DPI becomes the weakest link of the network; it must be scaled to at least the aggregated sum of the traffic received from the edges.
In the out-of-band mode, the DPI acts like a simple traffic analyzing tool; it receives the traffic passively from a mirror port of a network aggregation device, collecting all the traffic information and applying policies accordingly. In this mode, the biggest challenge is policy application, as the traffic does not pass directly through the DPI; it can only act on TCP traffic by sending TCP resets to the source addresses, for example, to apply a restricted access policy to a particular destination address within the scope of the network. Other types of policy applications, such as bandwidth restriction, quality-of-service changes, etc., require control plane integration with the underlying network device.
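The TCP-reset mechanism mentioned above amounts to crafting a minimal TCP header with the RST flag set and injecting it toward the endpoint. The sketch below builds such a header with the standard library; the IP layer and checksum are omitted for brevity, so this is an illustration of the header layout, not a complete injector.

```python
# Sketch of the out-of-band policy mechanism: a minimal 20-byte TCP
# header with the RST flag set, as a passive DPI would send to tear
# down a flow it cannot block inline. Checksum left unfilled (a real
# sender must compute it over the TCP pseudo-header).
import struct

TCP_RST = 0x04

def build_tcp_rst(src_port, dst_port, seq):
    offset_flags = (5 << 12) | TCP_RST  # data offset = 5 words, RST set
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,
                       seq, 0,            # sequence, acknowledgment
                       offset_flags,
                       0,                 # window = 0
                       0, 0)              # checksum (unfilled), urgent ptr

hdr = build_tcp_rst(443, 43210, 123456)
flags = struct.unpack("!HHIIHHHH", hdr)[4] & 0x3F
print(flags == TCP_RST)
```

The sequence number must fall inside the victim flow's receive window for the reset to be accepted, which is why the DPI tracks per-flow sequence numbers as described in Section 4.1.1.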
Our architecture combines the benefits of inline DPI devices with those of out-of-band ones: the traffic is actively received on the switch, counted, and reported to the aggregated external devices, and the policies are actively applied as event triggers occur.

3.2. Simulation Environment

To simulate the proposed architecture, the following components are built as a development and simulation environment:
P4 Simulation Environment: This is the default simulation target for BMV2 PISA switches, as shown in Figure 6. It includes Mininet by default and handles virtual NIC creation, switch port allocation, connecting the switch ports to the host processes, and running the rest of the packet flow.
Virtual Machine: This is the default virtual machine, built in a programmatic way with Vagrant, a developer friendly VM running environment, based on Ubuntu 14.04 (ubuntu/trusty64) and several other necessary components, as shown in Listing 2:
Listing 2. Virtual machine set-up.
Simple_switch_bmv2:
BMV2 software switch, based on Python2.7
	m-veth-1: Ingress Mininet switch port
	m-veth-2: Egress Mininet switch port
	out-veth-1: Ingress server host port
	out-veth-2: Egress server host port

4. Experimental Study

In this experimental study, we have used a P4 simulation environment which was discussed above and presented in Figure 6. The following items describe each component of our experimental simulation environment in detail:
Flow Generation: This is the controlled flow generation tool, written in Go. Synthetic flows are created with Python, while real flows are taken from the Canadian Institute for Cybersecurity [24].
DPI: Deep Packet Inspection module written in Go, based on nDPI [25].
Emitter: The flow emitter that reads from the mirroring port, extracts the metadata header information written by the data plane, and sends the rest of the packet to the stream processor. This module is also Apache Spark aware; the final result of the telemetry query is calculated by the Emitter module.
Stream Processor: The streaming processor for the rest of the flows that match the final criteria for the expected output. In this simulation, we used Apache Spark as the stream processor; it will later be upgraded to Apache Flink for better scalability.
Switch Script Control: This script controls the switch, updating the relevant switch tables as needed.
The parameters for running the simulation are adjusted according to the following criteria:
  • Session is TCP (Session has 3-way handshake);
  • Session is UDP (Session has no 3-way handshake);
  • Session is detected by nDPI;
  • Session is not detected by nDPI.
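The first two criteria above hinge on whether a captured session opens with a 3-way handshake. A minimal sketch of that check, assuming packets are given as (direction, flag-set) pairs rather than raw captures:

```python
# Sketch: classify a session as TCP-established if its first three
# packets form a SYN / SYN-ACK / ACK handshake. The (direction, flags)
# input representation is an assumption for illustration.
def has_three_way_handshake(packets):
    """packets: list of (direction, flags) tuples, in capture order."""
    if len(packets) < 3:
        return False
    d1, f1 = packets[0]
    d2, f2 = packets[1]
    d3, f3 = packets[2]
    return (f1 == {"SYN"}                      # client opens
            and d2 != d1 and f2 == {"SYN", "ACK"}  # server replies
            and d3 == d1 and "ACK" in f3 and "SYN" not in f3)

tcp_session = [("c2s", {"SYN"}), ("s2c", {"SYN", "ACK"}), ("c2s", {"ACK"})]
udp_like    = [("c2s", set()), ("s2c", set())]
print(has_three_way_handshake(tcp_session))
print(has_three_way_handshake(udp_like))
```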

4.1. Experiment-1: Application Identification Performance Improvement with DPI Application Classification on Mixed Flow Captures

Our hypothesis is that, to identify the application in a flow, a few bytes (one or two packets, depending on the application) should be enough to determine the type of application correctly. Keeping this in mind, we must first identify the session a packet belongs to. This use case demonstrates the performance improvement in DPI systems achieved by reducing the number of packets by some factor.
To adjust the parameters of this identification, we first analyzed the packet stream with nDPI, counting the number of identified protocols and the number of packets included in each stream. We then reduced the number of packets in each stream and ran the protocol identification with nDPI once again, comparing the identification results with the previous run. By reducing the number of packets each time, we calculated the number of identified protocols in each reduced packet stream.
Session identification in an IP flow is based on two different session types:

4.1.1. TCP Session

SrcIP, DstIP, SrcPort, DstPort, TCPSeqNum
TCP session identification is based on the Source IP, Destination IP, Source Port, Destination Port, and TCP Sequence Number. The TCP session is established after the 3-way handshake, as shown in Listing 3:
Listing 3. Pseudo Code proposed for the 3-way handshake.
---
Source -> Destination (SYN+Seq #)
Destination -> Source (SYN ACK+Seq #)
Source -> Destination (ACK+Seq #)
---
After the last ACK from the source, the sequence number is incremented for each packet in the TCP session. This follows from the nature of TCP: the sequence number starts at a random value and increments by the amount of data transferred in each packet. The same holds for the ACK number.
The packets to be reduced should be the packets after this 3-way handshake. To identify the flows, we will use the SYN-ACK packet and the response to the third packet; in other words, the first two packets from the server (destination to source).

4.1.2. UDP Session

SrcIP, DstIP, SrcPort, DstPort
UDP is a connectionless protocol; there is no clear definition of a UDP session. Every packet may create a flow independently. Basic identification of a UDP flow consists of the Source IP, Destination IP, Source Port, and Destination Port. Since the source port is randomly allocated by the OS (from so-called ephemeral ports), any flow using the same source port is considered part of the same UDP session.
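The two key definitions above can be condensed into a single flow-key function. The dict-based packet representation is an assumption for illustration; a real implementation would extract these fields from parsed headers.

```python
# Sketch of the flow keys defined above: a 5-tuple with the TCP
# sequence number for TCP sessions, and a 4-tuple for UDP sessions.
def flow_key(pkt):
    base = (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"])
    if pkt["proto"] == "TCP":
        return base + (pkt["seq"],)   # TCP: include sequence number
    return base                       # UDP: connectionless, 4-tuple only

tcp_pkt = {"proto": "TCP", "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
           "src_port": 43210, "dst_port": 443, "seq": 1000}
udp_pkt = {"proto": "UDP", "src_ip": "10.0.0.1", "dst_ip": "8.8.8.8",
           "src_port": 51000, "dst_port": 53}
print(flow_key(tcp_pkt))
print(flow_key(udp_pkt))
```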

4.2. Sample Packet Captures

To study the flow reduction, we used the sample captures from nDPI that are used for the verification of protocol identification. The captures consist of 183 files, some containing more than one protocol in a single capture file. Twenty-two files that are too small for reduction (having fewer than 2 packets) were excluded from the study. One capture especially crafted for testing an invalid packet type was also excluded, since we are interested in valid packets, leaving 160 packet captures.
To reduce the flow, the following pseudo-code is used as shown in Listing 4:
Listing 4. Pseudo Code proposed for the reduced algorithm.
---
# Scapy-based flow reduction: keep only the first two packets
# of each session and append them to the output capture.
from scapy.all import rdpcap, wrpcap

network_packets = rdpcap(infile)
sessions = network_packets.sessions()
for key in sessions:
        pkt_count = 0
        for pkt in sessions[key]:
                if pkt_count < 2:
                        wrpcap(outfile, pkt, append=True)
                        pkt_count += 1
---
In this code, sessions are extracted according to whether they are TCP or UDP sessions. As mentioned earlier, for a TCP session, the 3-way handshake packets are excluded from the session, whereas for a UDP session there is no precondition to exclude packets. We use the second packet of the 3-way handshake as the first packet of the flow.
After the extraction of sessions, an nDPI sample classifier is used to classify the application in each reduced capture by replaying the capture file on the switch.
The following Table 1 summarizes the results of the experiment:
The full data are available in Table A1.

4.3. Experiment-2: TCP-Based Application Identification Using Real-Life Data

In the second experiment, we used the real captures from [24], namely the files in the dataset named PCAP-01-12_0750-0818.
There are 69 files located in this dataset, each containing a real world data capture that contains data from a real DDoS attack along with different types of traffic.
To see the effect of TCP, we extracted the TCP streams and used the extracted streams to send to the simulation.
For the readers' convenience, the results in Table A3 are summarized in Table 2:

4.4. Experiment-3: Application Identification in Full Stream Using Real-Life Data

In the final experiment, using the same capture files as in Experiment-2, we treated the streams as-is and sent them directly to the switch. The results are shown in Table 3.
Full results are given in Table A2.

4.5. Results and Discussion of the Experiments

The experimental study on the packet captures showed that reducing a flow to its first two packets is accurate enough to identify the flow.
In Experiment 1, the decrease in the detection rate is mostly caused by TLS encryption, which shows that further study is needed to identify encrypted flows, as shown in Table 1. An ML-based approach could be implemented to achieve application identification for all flows. Based on the results in Table A1, 125 out of the 160 packet captures were identified correctly, 19 were partially identified, and 16 could not be identified at all; with full (unreduced) flows, all 160 would be identified correctly. The 16 unidentified captures are completely encrypted protocols, while the 19 partially identified captures, like many of the identified ones, mix TLS and plaintext traffic. The detection rate drops with the reduced flow: as we reduce the flow, we also lose flow information needed for identification, especially for short flows. The captures that could not be identified are mostly encrypted protocols, which require more than two packets to identify; we will expand the experiments accordingly. Note that, since the flows used in Experiment 1 are taken from nDPI's test captures, they consist of artificially selected short flows covering all of the applications that nDPI can identify.
In Experiment 2, the results in Table 2 showed that it is possible to increase the detection rate while also increasing the reduction rate. This is because only 17 protocols are detected in the TCP streams, as indicated in Table A3, and most of them either are not TLS-based or can be identified without deep inspection of the remaining payload.
In Experiment 3, the results in Table 3 indicate that including UDP streams raises the accuracy even higher but lowers the reduction rate. This behavior is expected since the number of applications detected in Table A2 is 84, more than the number detected in the TCP streams alone, while UDP streams contain fewer packets than TCP streams. This result is in line with the results obtained in [26].
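For concreteness, the two reduction metrics reported in Tables 1-3 can be computed as below. The definitions (ratio as the percentage of packets removed, factor as the original-to-reduced count) are our reading of the tables, and the sample numbers are hypothetical values chosen to reproduce the averages in Table 2:

```python
def reduction_stats(original_pkts: int, reduced_pkts: int) -> tuple:
    """Reduction ratio (% of packets removed) and reduction factor
    (original / reduced), as we interpret Tables 1-3 (assumed definitions)."""
    ratio = 100.0 * (original_pkts - reduced_pkts) / original_pkts
    factor = original_pkts / reduced_pkts
    return ratio, factor

# Hypothetical flow sized to reproduce Table 2's averages:
ratio, factor = reduction_stats(4716, 100)
# ratio ≈ 97.88 %, factor ≈ 47.16
```

Note how a high reduction factor (Experiment 2) and a high detection rate (Experiment 3) pull in opposite directions, which is the flexibility/scalability trade-off discussed above.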

5. Conclusions

The results of this study indicate that application layer data processing can be performed with PISA switches. We do not always need complex techniques to inspect packets at L7; a simple flow-based packet reduction can achieve significant accuracy in identifying flows and adds application-level visibility over the network. Stream processing combined with switch-level applications helps us build powerful networking applications. In-band Network Telemetry is the central capability of a programmable switch that distinguishes it from traditional switches. The proposed method constructs a Network Processor with a specific task from each PISA-stream processor pair. The simulation results indicate that placing such PISA switches at the center of all network traffic increases the performance of such systems by tens of times. The use of the proposed systems will solve the capacity problems experienced when applying full network service chaining. In other words, by using a single PISA switch and tens of stream processors with different features (DPI, NGFW, etc.) on different ports, one constructs a large traffic exchange fabric with dynamically attached Network Processors of different types at very low cost.
The results of this study demonstrate that the proposed system reduces the traffic load of such systems by a factor of 5.5 to 47.0 with acceptable application identification. Applying ML-based approaches could raise the identification success rate toward that of legacy systems that inspect all traffic, while retaining the performance advantage of the proposed system. In addition, real traffic scenarios indicate that the performance gain would reach up to a factor of 40 on average, based on the statistics in [26].
The studies in the literature and our experiments demonstrate that PISA switches are the glue for the SDN-NFV pair, increasing the performance of such systems. One of the major problems of NFV-based application layer processing systems has been the packet processing performance bottleneck; the proposed architecture avoids this bottleneck for both PNF (Physical Network Function) and VNF (Virtual Network Function) systems by decreasing the network packet load.

6. Future Study

DPI, NGFW (Next-Generation Firewall), and similar application layer systems, which have quite a high cost per unit of traffic volume and cannot scale to the Tbps level, can be combined with PISA to overcome the cost and scalability issues. Practical applications are expected to become available in the upcoming years, perhaps even months.
Encrypted network traffic identification with the P4 language is one of the main future areas of study for this work. In-band Telemetry seems to be a good starting point, as it describes the characteristics of a flow at the packet level. In this kind of analysis, AI/ML methods can help greatly in defining traffic features. As stated above, the use of PISA switches will allow operators to collect in-band telemetry information, which will also lay the groundwork for Zero-Touch Network and Service Management (ZSM) once networks are equipped with the proposed systems. Once ZSM features are injected into the infrastructure, operational costs and outage times will decrease dramatically.
Another area of interest based on this study is Digital Twins (DTs) in telecommunication networks. As PISA switches allow the hardware to be modeled in a software environment, it would be straightforward to build a DT of a telecom network and feed the actual data and commands forward to the live network. In particular, a data center network can be modeled completely using the DTs of core and edge network devices. Telcos can benefit by running different scenarios on their DT with different types of network flows, and these flows can be adjusted to plan the data center network topology according to customer SLAs.

Author Contributions

Conceptualization, K.O. and Y.K.T.; methodology, K.O. and Y.K.T.; software, Y.K.T.; validation, Y.K.T., K.O. and I.B.; formal analysis, K.O. and Y.K.T.; investigation, K.O., Y.K.T. and I.B.; resources, K.O., Y.K.T. and I.B.; data curation, K.O. and Y.K.T.; writing—original draft preparation, K.O., Y.K.T. and I.B.; writing—review and editing, K.O., Y.K.T. and I.B.; visualization, Y.K.T.; supervision, K.O.; project administration, K.O.; funding acquisition, I.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to acknowledge the helpful staff of MDPI Sensors for their support during the publication phase of our paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Code proposed for P4 switches is provided as follows in Listing A1:
Listing A1. Code proposed for P4 switches.
// Flow key registers
reg_src_ip = Register();
reg_dst_ip = Register();
reg_proto = Register();
reg_l4 = Register();

// Flow statistics registers
reg_pkt_count = Register();
reg_byte_count = Register();
reg_time_start = Register();
reg_time_end = Register();
reg_flags = Register();

initialize_registers(hdr: PacketHeader, index: HashIndex, md: Metadata):
    reg_src_ip[index] = hdr.src_ip;
    reg_dst_ip[index] = hdr.dst_ip;
    reg_proto[index] = hdr.proto;
    reg_l4[index] = hdr.l4;
    reg_pkt_count[index] = 1;
    reg_byte_count[index] = length(hdr.ethernet) + hdr.ip_len;
    reg_time_start[index] = md.timestamp;
    reg_time_end[index] = md.timestamp;
    reg_flags[index] = hdr.tcp_flags;

with pkt = ingress.next_packet():
    hdr = parse(pkt);
    md = pkt.metadata;
    index = hash({hdr.src_ip, hdr.dst_ip, hdr.proto, hdr.l4});
    collision = hdr.src_ip != reg_src_ip[index]
             || hdr.dst_ip != reg_dst_ip[index]
             || hdr.proto  != reg_proto[index]
             || hdr.l4     != reg_l4[index];
    if collision:
        // Export the completed flow record and start tracking the new flow
        flow_record = { reg_src_ip[index], reg_dst_ip[index],
                        reg_proto[index], reg_l4[index],
                        reg_pkt_count[index], reg_byte_count[index],
                        reg_time_start[index], reg_time_end[index],
                        reg_flags[index] };
        emit({hdr.ethernet, flow_record});
        initialize_registers(hdr, index, md);
    else:
        // Update statistics of the current flow
        reg_pkt_count[index] += 1;
        reg_byte_count[index] += length(hdr.ethernet) + hdr.ip_len;
        reg_time_end[index] = md.timestamp;
        reg_flags[index] ||= hdr.tcp_flags;
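The register logic of Listing A1 can be mirrored in plain Python for offline experimentation. The sketch below is a simplified software model under our own assumptions (a fixed table size and Python dicts for flow records); it is not the P4 implementation itself:

```python
# Simplified software model (our assumption) of the register-based flow table
# in Listing A1: a fixed-size array indexed by a flow-key hash; on a hash
# collision the resident flow record is exported and replaced by the new flow.
TABLE_SIZE = 1024

table = [None] * TABLE_SIZE   # each slot: {key, pkts, bytes, start, end, flags}
exported = []                 # flow records "emitted" toward the stream processor

def process(src_ip, dst_ip, proto, l4_ports, length, ts, tcp_flags=0):
    key = (src_ip, dst_ip, proto, l4_ports)
    idx = hash(key) % TABLE_SIZE
    slot = table[idx]
    if slot is None or slot["key"] != key:   # empty slot or hash collision
        if slot is not None:
            exported.append(slot)            # export the evicted flow record
        table[idx] = {"key": key, "pkts": 1, "bytes": length,
                      "start": ts, "end": ts, "flags": tcp_flags}
    else:                                    # same flow: update statistics
        slot["pkts"] += 1
        slot["bytes"] += length
        slot["end"] = ts
        slot["flags"] |= tcp_flags
```

Feeding two packets of the same 5-tuple through `process` increments the packet and byte counters and ORs the TCP flags, exactly as the `else` branch of Listing A1 does.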
Table A1. Detected protocols from the capture files.
Columns: PROTOCOL | BYTES (ORG, RDC) | PACKETS (ORG, RDC) | DET. STAT. (POS., NEG.) | REDUCE RATE (%)
anydesk2,962,572767696381099.97
exe_download734,33532870341099.96
exe_download_as542,26532853441099.94
tor3,106,09635243859424099.89
whatsappfiles467,11376062081099.84
wireguard791,7581576239941099.80
ps_vue2,242,71051841740153099.77
tls_long_cert121,96938018241099.69
ftp1,158,19638051192123099.67
quic-mvfst408,962141435321099.65
git76,1653769041099.51
netflix6,323,01732,77669992175099.48
coap_mqtt954,91755058516513099.42
dns-tunnel80,66852843881099.35
bitcoin596,3624816637241099.19
wa_video998,59385871567386099.14
ssh41,73840125841099.04
quic_t51589,126566464241099.04
quic-28252,865278225341098.90
bittorrent_ip519,514651247981098.75
skype-conf44,48761620041098.62
dns_exfiltr80,745114930041098.58
instagram3,009,24747,58034431227098.42
tls_verylong_ce23,3813804841098.37
check_mk_new22,5943919841098.27
quic-mvfst-22300,063523249041098.26
bad-dns-traffic108,5421934382121098.22
capwap108,0372113422212098.04
anyconnect-vpn1,088,92923,234300116617097.87
openvpn64,2631392298121097.83
webex902,82319,93715802236097.79
bittorrent_utp43,5539798641097.75
facebook31,9517526081097.65
nintendo357,05791561000663097.44
simple-dnscrypt47,3401344111161097.16
443-opvn12,6773804641097.00
Oscar11,0903527141096.83
google_ssl97803282841096.65
nest_log_sink137,03648061000603096.49
modbus912935810241096.08
quic04693,697372310041096.03
fix145,77858581261481095.98
weibo279,50711,2874981046095.96
tls_esni_sni_b16,8116963881095.86
pps2,307,979104,79925572434095.46
http-crash-3544168921095.26
smb_deletefile33,172166010141095.00
WebattackXSS4,946,124248,266937426411094.98
teams1,554,28778,248281726715094.97
dnp351,7862752543321094.69
wechat707,43843,775167228715093.81
s7comm65804085541093.80
telegram374,40925,197156611915093.27
youtube_quic198,57513,389289122093.26
1kxun664,36145,690143929716093.12
bittorrent312,90421,595299741093.10
ja3_lots_of176145282741093.07
ja3_lots_of253963801141092.96
wa_voice187,83213,2767367611092.93
viber157,31112,098424819092.31
youtubeupload130,32610,358137121092.05
dropbox110,8849056848481091.83
amqp27,3542284160121091.65
iphone232,61621,92250013812090.58
skype708,14071,068328463913089.96
WebattackSQLinj32,264338494361089.51
quic360,99837,893518344089.50
hangout32303401921089.47
ssdp-m-search16531741921089.47
BGP_Cisco_hdlc13051441421088.97
dos_win98_smb_10,055113022093088.76
skype_unknown537,72060,508214653713088.75
netbios30,9223546260242088.53
sip51,8475966112113088.49
whatsapp_l_call223,13026,502125318711088.12
rx29,6433641132181087.72
6in4tunnel43,3415326127265087.71
android143,35418,80950016714086.88
ajp7414102038102086.24
quic_q4621,72130282041086.06
quic_q5020,91430482041085.43
ethereum264,11139,31720002602085.11
malware8625134726104084.38
teamspeak322233541321084.08
quic_q3925,62541316041083.88
iec60780-5-10412,5612034147241083.81
whatsapp_login32,369596393197081.58
whatsapp_voice_34,3196492261523081.08
quic-mvfst-exp27,02952723041080.50
netflowv914,12828321021079.95
ftp_failed21324761841077.67
smpp_in_general15523471741077.64
EAQ26,5636732197822074.66
upnp10,24829281441071.43
fuzz-2020-02158,04346,4453661253070.61
quic-29974630111541069.11
quic-24836030291541063.77
zabbix9553761041060.63
4in4tunnel970388521060.00
quic-2713,36756642041057.63
quic-mvfst-2713,36756642041057.63
quic_q46_b750032392041056.81
fuzzing32,26815,422131813052.21
mongodb3388164827162051.36
mssql_tds17,172872838201049.17
malformed_dns60043096641048.43
quic-23767139562041048.43
fuzz-200699,98653,9306913999046.06
dnscrypt-v2-doh230,431132,9875771361042.29
skype_udp459278531039.43
teredo3150198024141037.14
quic_t50870856641241034.96
smbv11365895741034.43
diameter21241488641029.94
websocket561428541023.71
steam11,51610,218104971011.27
kerberos30,13929,4127775402.41
encrypted_sni2382238233100.00
tls-esni-fuzzed2382238233100.00
4in6tunnel2284228444100.00
mysql-846346344100.00
ubntac21928192888100.00
filtered21,59521,5957474100.00
dnscrypt-v1321,274321,274608564200.00
WebattackRCE210,131210,131797797200.00
Table A2. Application identification in full stream.
Columns: APP NAME | REDUCED BYTES | ORIGINAL BYTES | REDUCED PACKETS | ORIGINAL PACKETS | REDUCTION %
AFP75.888142.84813625646.88%
Amazon222.8103.539.2001.89210.95993.70%
AmongUs74.772187.48813433660.12%
Ayiya70.308167.40012630058.00%
BitTorrent264.492566.9284741.01653.35%
BJNP110.484223.20019840050.50%
CAPWAP110.484225.43219840450.99%
CiscoVPN90.636174.45616631848.05%
Cloudflare3.43257.1085229093.99%
COAP205.344429.66036877052.21%
Collectd94.860180.79217032447.53%
CPHA149.544305.78426854851.09%
DHCP188.802575.7303551.25967.21%
DHCPV66.178238.728421.62497.41%
DNS1.285.7981.528.16411.15012.35415.86%
Dropbox118.296232.12821241649.04%
EAQ162.936363.81629265255.21%
Facebook78.98083.8368048485.79%
FTP_CONTROL9.73627.94014843065.15%
Github8.5928.98692964.38%
GMail20.928704.5381924.45897.03%
Google2.274.50544.542.51023.970142.07194.89%
GoogleServices115.4722.215.5169649.06294.79%
GTP263.376565.8124721.01453.45%
H323159.588351.54028663054.60%
HTTP799.41617.123.43611.30059.25795.33%
HTTP_Proxy3.1323.25252543.69%
IAX118.296241.05621243250.93%
ICMP380.0646.251.6584.05248.53693.92%
ICMPV65.54888.9046295493.76%
Instagram74.94877.9504845123.85%
IPsec279.632590.9965001.05852.68%
IRC95.976213.15617238254.97%
iSCSI212.040449.74838080652.85%
Kerberos48.228124.1169022661.14%
LDAP94.860249.98417044862.05%
LinkedIn15.34617.77414416813.66%
LISP156.240330.33628059252.70%
LLMNR149.644304.96828258850.93%
MDNS213.722678.8914162.02368.52%
Megaco46.872103.7888418654.84%
Memcached8.05215.864183249.24%
Microsoft76.694784.1046402.49390.22%
Microsoft3655.064144.7764431496.50%
MsSQL-TDS2.6403.600446026.67%
NetBIOS134.656300.52424858255.19%
NFS111.600243.28820043654.13%
NTP54.684112.7169820251.49%
OpenVPN105.024224.43619040453.21%
OSPF21.368880.7422289.30797.57%
Playstation75.012167.76013830655.29%
Radius213.156444.16838279652.01%
RDP110.964221.56820640649.92%
Reddit9.33210.29288969.33%
RemoteScan190.836379.44034268049.71%
RTSP45.756100.4408218054.44%
RX8.928.18825.862.37216.00246.35065.48%
sFlow131.688280.11623650252.99%
SIP245.320512.04444692452.09%
SMBv11.45816.52466891.18%
SMBv239.19212.36015220425.63%
SOCKS64.092141.09612226054.58%
SOMEIP386.136850.3926921.52454.59%
SSDP169.968230.16041876626.15%
SSH254.0247.116.6083.40646.88896.43%
Starcraft90.396196.41616235253.98%
Syslog128.340262.26023047051.06%
TeamViewer116.064247.75220844453.15%
Telnet7.0808.52011814216.90%
Teredo107.136234.36019242054.29%
TFTP51.336109.3689219653.06%
TINC100.440223.20018040055.00%
TLS229.37016.020.5782.90035.10598.57%
Twitter12.50012.8281321362.56%
UBNTAC2106.020233.24419041854.55%
UbuntuONE7.1143.997.352803.25299.82%
VHUA80.352181.90814432655.83%
Viber686.3401.487.6281.2302.66653.86%
VMware217.620501.08439089856.57%
Wikipedia24.35226.8322802969.24%
WireGuard112.716255.56420245855.90%
Xbox229.896510.01241291454.92%
XDMCP100.440213.15618038252.88%
YouTube14.96015.36092962.60%
Table A3. TCP-based Detection Applications and Reduction Rates.
Columns: APP NAME | REDUCED BYTES | ORIGINAL BYTES | REDUCED PACKETS | ORIGINAL PACKETS | REDUCTION %
Amazon49.8143.358.9447529.75598.52%
CiscoVPN2403604633.33%
Cloudflare3.43257.1085229093.99%
FTP_CONTROL9.61227.81614642865.44%
Google556.51542.745.0867.630124.99198.70%
HTTP796.64417.120.66411.25659.21395.35%
HTTP_Proxy2403604633.33%
ICMP5321.02461248.05%
Microsoft36526420.03444098.68%
MsSQL-TDS2.6403.600446026.67%
Playstation2403604633.33%
RDP48060081020.00%
SMBv238.70011.86814419626.69%
SSH253.9007.116.4843.40446.88696.43%
Telnet6.6008.04011013417.91%
TLS223.75416.014.9622.80835.01398.60%
UbuntuONE1.5103.991.232203.18899.96%

References

  1. Yazici, M.A.; Oztoprak, K. Policy broker-centric traffic classifier architecture for deep packet inspection systems with route asymmetry. In Proceedings of the 2017 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom), Istanbul, Turkey, 5–8 June 2017; pp. 1–5. [Google Scholar] [CrossRef]
  2. Sandvine Inc. Virtual ActiveLogic—Hyperscale Data Plane for Next Generation Telco Networks. Available online: https://www.sandvine.com/hubfs/Sandvine_Redesign_2019/Downloads/2020/Datasheets/Network%20Optimization/Sandvine_DS_Virtual_ActiveLogic.pdf (accessed on 20 June 2021).
  3. Lim, H.K.; Kim, J.B.; Heo, J.S.; Kim, K.; Hong, Y.G.; Han, Y.H. Packet-based network traffic classification using deep learning. In Proceedings of the 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Okinawa, Japan, 11–13 February 2019; pp. 046–051. [Google Scholar]
  4. Zolotukhin, M.; Hämäläinen, T.; Kokkonen, T.; Siltanen, J. Increasing web service availability by detecting application-layer DDoS attacks in encrypted traffic. In Proceedings of the 2016 23rd International Conference on Telecommunications (ICT), Thessaloniki, Greece, 16–18 May 2016; pp. 1–6. [Google Scholar]
  5. Bosshart, P.; Gibb, G.; Kim, H.S.; Varghese, G.; McKeown, N.; Izzard, M.; Mujica, F.; Horowitz, M. Forwarding metamorphosis: Fast programmable match-action processing in hardware for SDN. ACM SIGCOMM Comput. Commun. Rev. 2013, 43, 99–110. [Google Scholar] [CrossRef]
  6. Kim, C. Programming the Network Dataplane; ACM SIGCOMM: Florianopolis, Brazil, 2016. [Google Scholar]
  7. Gupta, A.; Harrison, R.; Canini, M.; Feamster, N.; Rexford, J.; Willinger, W. Sonata: Query-driven streaming network telemetry. In Proceedings of the 2018 Conference of the ACM Special Interest Group on Data Communication (SIGCOMM), Budapest, Hungary, 20–25 August 2018; pp. 357–371. [Google Scholar]
  8. Wang, S.Y.; Hu, H.W.; Lin, Y.B. Design and Implementation of TCP-Friendly Meters in P4 Switches. IEEE/ACM Trans. Netw. 2020, 28, 1885–1898. [Google Scholar] [CrossRef]
  9. Yan, Y.; Beldachi, A.F.; Nejabati, R.; Simeonidou, D. P4-enabled Smart NIC: Enabling Sliceable and Service-Driven Optical Data Centres. J. Light. Technol. 2020, 38, 2688–2694. [Google Scholar] [CrossRef]
  10. Fernández, C.; Giménez, S.; Grasa, E.; Bunch, S. A P4-Enabled RINA Interior Router for Software-Defined Data Centers. Computers 2020, 9, 70. [Google Scholar] [CrossRef]
  11. Kundel, R.; Nobach, L.; Blendin, J.; Maas, W.; Zimber, A.; Kolbe, H.J.; Schyguda, G.; Gurevich, V.; Hark, R.; Koldehofe, B.; et al. OpenBNG: Central office network functions on programmable data plane hardware. Int. J. Netw. Manag. 2021, 31, e2134. [Google Scholar] [CrossRef]
  12. Bosshart, P.; Daly, D.; Gibb, G.; Izzard, M.; McKeown, N.; Rexford, J.; Schlesinger, C.; Talayco, D.; Vahdat, A.; Varghese, G.; et al. P4: Programming protocol-independent packet processors. ACM SIGCOMM Comput. Commun. Rev. 2014, 44, 87–95. [Google Scholar] [CrossRef]
  13. Hang, Z.; Wen, M.; Shi, Y.; Zhang, C. Programming protocol-independent packet processors high-level programming (P4HLP): Towards unified high-level programming for a commodity programmable switch. Electronics 2019, 8, 958. [Google Scholar] [CrossRef]
  14. The P4.org Applications Working Group. In-Band Network Telemetry (INT) Data Plane Specification. Available online: https://github.com/p4lang/p4-applications/blob/master/docs/INT_v2_1.pdf (accessed on 10 March 2021).
  15. The P4 Language Consortium. Getting Started with P4 Language. Available online: https://p4.org/p4/getting-started-with-p4.html (accessed on 15 March 2021).
  16. Parol, P. P4 Network Programming Language—What Is It All About? Available online: https://codilime.com/p4-network-programming-language-what-is-it-all-about/ (accessed on 21 March 2021).
  17. Sgambelluri, A.; Paolucci, F.; Giorgetti, A.; Scano, D.; Cugini, F. Exploiting telemetry in multi-layer networks. In Proceedings of the 2020 22nd International Conference on Transparent Optical Networks (ICTON), Bari, Italy, 19–23 July 2020; pp. 1–4. [Google Scholar]
  18. Sari, A.; Lekidis, A.; Butun, I. Industrial networks and IIoT: Now and future trends. In Industrial IoT; Springer: Cham, Switzerland, 2020; pp. 3–55. [Google Scholar]
  19. Butun, I.; Almgren, M.; Gulisano, V.; Papatriantafilou, M. Intrusion Detection in Industrial Networks via Data Streaming. In Industrial IoT; Springer: Cham, Switzerland, 2020; pp. 213–238. [Google Scholar]
  20. Zaharia, M.; Xin, R.S.; Wendell, P.; Das, T.; Armbrust, M.; Dave, A.; Meng, X.; Rosen, J.; Venkataraman, S.; Franklin, M.J.; et al. Apache Spark: A Unified Engine for Big Data Processing. Commun. ACM 2016, 59, 56–65. [Google Scholar] [CrossRef]
  21. Apache Foundation. Apache Flink - Stateful Computations over Data Streams. Available online: https://flink.apache.org/ (accessed on 13 February 2021).
  22. Oztoprak, K. Subscriber Profiling for Connection Service Providers by Considering Individuals and Different Timeframes. IEICE Trans. Commun. 2016, E99.B, 1353–1361. [Google Scholar] [CrossRef]
  23. Oztoprak, K. Profiling subscribers according to their internet usage characteristics and behaviors. In Proceedings of the 2015 IEEE International Conference on Big Data (Big Data), Santa Clara, CA, USA, 29 October–1 November 2015; pp. 1492–1499. [Google Scholar] [CrossRef]
  24. Sharafaldin, I.; Lashkari, A.H.; Hakak, S.; Ghorbani, A. Developing Realistic Distributed Denial of Service (DDoS) Attack Dataset and Taxonomy. In Proceedings of the 2019 International Carnahan Conference on Security Technology (ICCST), Chennai, India, 1–3 October 2019; pp. 1–8. [Google Scholar]
  25. Deri, L.; Martinelli, M.; Bujlow, T.; Cardigliano, A. nDPI: Open-source high-speed deep packet inspection. In Proceedings of the 2014 International Wireless Communications and Mobile Computing Conference (IWCMC), Nicosia, Cyprus, 4–8 August 2014; pp. 617–622. [Google Scholar]
  26. Jurkiewicz, P.; Rzym, G.; Boryło, P. Flow length and size distributions in campus Internet traffic. Comput. Commun. 2021, 167, 15–30. [Google Scholar] [CrossRef]
Figure 1. PISA match-action table processing pipeline (Source: Adapted from [7]).
Figure 2. P4 Architecture (Source: Adapted from [9]).
Figure 3. Pipeline execution in a P4-enabled switch (Source: Adapted from [13]).
Figure 4. In-band network telemetry.
Figure 5. Proposed system architecture.
Figure 6. Simulation environment.
Table 1. Rates for Test Captures.
μ REDUCTION RATIO: 82%
μ REDUCTION FACTOR: 5.5
μ DETECTION RATE: 84%
Table 2. Rates for real-life captures using only TCP streams.
μ REDUCTION RATIO: 97.88%
μ REDUCTION FACTOR: 47.16
μ DETECTION RATE: 95%
Table 3. Rates for real-life captures using full streams.
μ REDUCTION RATIO: 84.73%
μ REDUCTION FACTOR: 6.5
μ DETECTION RATE: 99.83%
Share and Cite

MDPI and ACS Style

Butun, I.; Tuncel, Y.K.; Oztoprak, K. Application Layer Packet Processing Using PISA Switches. Sensors 2021, 21, 8010. https://doi.org/10.3390/s21238010
