Network Slicing for mMTC and URLLC Using Software-Defined Networking with P4 Switches

Abstract: Massive machine-type communication (mMTC) and ultra-reliable low-latency communication (URLLC) are two key services in fifth-generation (5G) mobile wireless networks. These networks have been developed with extremely high service quality requirements: scalability for mMTC and reliability with low latency for URLLC. Fifth-generation network slicing will play a key role in supporting the distinct requirements of various services. Software-defined networking (SDN), a promising technology for network softwarization, physically separates the network control plane from the data plane by centrally controlling switches with an SDN controller. However, the control channel bottleneck and processing delays due to this centralized control may reduce the scalability, reliability, and security of SDN. This paper proposes an SDN framework with programming protocol-independent packet processor (P4) switches (SDNPS) and defines a packet format containing in-band network telemetry data to simultaneously support heavy Internet of Things and URLLC traffic in 5G network slices. The method both satisfies the requirements of mMTC and URLLC and alleviates the load on the SDN controller. P4 is an advanced switch interface technology that provides enhanced stateful forwarding and maintains persistent state on the SDN data plane. To demonstrate the superiority of SDNPS, simulations are performed on conventional SDNs and SDNPS. SDNPS outperforms the other schemes in terms of average throughput, packet loss ratio, and packet delay for both the mMTC and URLLC network slices.


Introduction
Numerous studies have investigated next-generation mobile communication network technologies, such as network architectures, communication protocols, information security, and smart applications [1,2]. The performance of the Internet of Things (IoT), in which connected devices transfer sensor data over the internet, is a key consideration in the development of fifth-generation (5G) networks. At the dawn of the era of 5G communication networks in 2020, the number of connected IoT devices had already reached 50 billion; each person had an average of 10.3 connected devices (Figure 1). By 2030, the number of IoT device nodes is expected to grow to 80 billion, and each person is expected to have an average of 20.5 connected devices [2].
The International Telecommunication Union Radiocommunication Sector divided 5G communication services into three categories: mMTC (massive machine-type communication), URLLC (ultra-reliable low-latency communication), and eMBB (enhanced mobile broadband) [3,4]. Figure 2 displays nine service quality requirements, such as latency, and the corresponding importance of these requirements for each of the three categories, as indicated by the length of the radial bars [5]. mMTC and URLLC have two extremely different service quality requirements: scalability and reliability with low latency, respectively. Fifth-generation and future communication networks should be able to integrate heterogeneous services with different latency requirements, reliability scores, and data transfer rates in an environment with shared network infrastructure instead of providing individual network solutions for each service type. In other words, a communication network designed for IoT devices alone is unsuitable for supporting heterogeneous services in a shared network environment. Fifth-generation network slicing (NS), a technology for integrating multiple heterogeneous services in a shared network infrastructure, is expected to be widely used in future networks.
In NS, virtualization is used to slice a physical 5G network into multiple isolated end-to-end (E2E) logical networks of varying sizes and structures that are dedicated to different use cases. The equipment, access, transmission, and core network of each virtual network (i.e., network slice) are independent [5,6].
By using network slices, companies could ask internet service providers to provide a customized 5G network. For example, hospitals, with high security and reliability requirements, could use a network slice that enables low-latency, high-reliability applications such as remote surgery. With network softwarization, including software-defined networking (SDN) and network function virtualization (NFV) [7,8], different virtual networks can serve different functions, optimizing network resource usage. Figure 3 illustrates the 5G NS architecture as left and right blocks (framed by dotted lines). The left block represents the actual slicing implementation, and the right block represents slice management and configuration. SDN is a promising technology for network softwarization for realizing 5G NS. Potential network slices can be defined in accordance with the relevant requirements of each SDN user. SDN can include artificial intelligence algorithms and provide flexible and programmable applications or services [10,11]. The Open Networking Foundation [12] defined SDN as 'the physical separation of the network control plane from the forwarding plane, and where a control plane controls several devices.' This separation of the control plane from the data plane enables SDN to flexibly and centrally control the entire network and to quickly respond to changes in network conditions, business, markets, and user demands. The major difference between contemporary network operations and SDN is the creation of a virtualized control plane that can execute intelligent management decisions for network functions, remediating the gap between service delivery and network management (Figure 4) [13]. With SDN, network control can be achieved directly and programmatically through a standardized southbound interface.
For instance, the commonly used OpenFlow protocol [14] defines a communication mechanism between the controller in the control plane and the forwarding components in the data plane. The state information of all packets in the network relies on the actions of the controller rather than those of the OpenFlow switches in the data plane, substantially increasing the burden on the SDN controller [15].
The processing delay between the SDN controller and switches and the control channel bottleneck [16,17] may reduce the scalability, reliability, and security of 5G NS. Several advanced interface protocols for the SDN controller and switches have been proposed to provide enhanced state forwarding rules and enable data plane switches to obtain persistent state information, such as OpenState [18], programming protocol-independent packet processors (P4) [19], protocol-oblivious forwarding [20], stateful data plane architecture [21], and stateful network-wide abstractions for packet processing [22]. Among these, P4 is a protocol-independent, high-level programming language that enables programmers to modify how SDN switches process data packets. By maintaining the status of forwarding packets with P4, 5G NS can both support a variety of communication protocols and data transmission mechanisms and achieve independent monitoring, control, and management. Moreover, P4 implements a top-down design concept; the programmer or user determines the function of the underlying switch and how it should process packets, and the underlying communication protocol does not limit the services that the network can support.
In order to provide the heterogeneous service quality requirements in a 5G network and to achieve easy management, scalability, and high responsiveness to variations in network conditions, this paper proposes a framework of software-defined networking with P4 switches (SDNPS) to support both mMTC and URLLC in 5G network slices. The main contributions of the proposed SDNPS are the design of an SDN framework with P4 switches and defining a packet format containing in-band network telemetry (INT) [23] data that simultaneously support massive IoT and URLLC traffic in the core of 5G network slices. Thus, the extremely different service requirements of mMTC and URLLC can be satisfied, and the control load on the SDN controller can be alleviated. We assume that all of the switches in the data plane would support P4 and divide the forwarding rules of the P4 switches into two parts: one for mMTC and one for URLLC services. In the SDNPS, INT enables data packets to carry network status information such that P4 switches on the forwarding path can update the included status information before forwarding packets, thereby reducing the control load on the SDN controller. To the best of our knowledge, this paper is the first to implement 5G NS in SDN with P4 switches to support both mMTC and URLLC. Finally, we perform simulations of 5G NS on SDNs with traditional or P4 switches to demonstrate the effectiveness and superiority of the proposed SDNPS. The performance measures include the average throughput, packet loss ratio, and packet delay for the mMTC and URLLC network slices.
The remainder of this paper is organized as follows. Section 2 explores the related literature. Section 3 describes the framework and operating procedure of our proposed SDNPS. Section 4 analyzes and compares the simulation results. Finally, concluding remarks and future works are given in Section 5.

Related Works
Most studies on IoT in modern communication networks have aimed to reduce the high collision rate of the random access channel (RACH) competition mechanism in 5G. If numerous mMTC devices appear in a radio access network (RAN), the number of mMTC devices that can successfully connect is drastically reduced, reducing the mMTC connection density. Studies on this phenomenon can be broadly divided into those on LTE-machine (LTE-M) [24,25] and narrowband IoT (NB-IoT) communications [26-28]. These two technologies use different cell coverage areas and reserve different numbers of resource blocks (RBs) during a frame period. By adopting simpler coding techniques or lower data transmission rates for dedicated RBs, the mMTC connection density can be increased. Table 1 presents a comparison of the LTE-M and NB-IoT parameters based on 3GPP Release 13 [29]. Several studies have applied artificial intelligence algorithms to RACH collision detection [30-33]; these algorithms can dynamically adjust the time at which each mMTC device accesses the RAN to reduce the probability of access collisions if numerous mMTC devices simultaneously attempt to connect. Key 5G network services include not only mMTC but also URLLC, and networks must support their heterogeneous requirements. Relevant research on supporting both mMTC and URLLC in 5G networks typically covers one of the following groups of topics:
1. Bandwidth resource allocation, differentiated access times, or RAN frequency reuse.
2. Dividing the entire network into multiple virtual E2E networks (i.e., network slices) by using virtualization technology. Network slices are independent of each other in their equipment, access, transport, and core networks.
Many authors have proposed solutions for 5G networks with numerous coexisting IoT devices and URLLC services [3,13,34-44]. The authors of [3,39-42] proposed mechanisms for bandwidth resource allocation, differentiation of access time, or frequency reuse in the RAN of 5G networks to enable the provision of multiple service types by reducing the probability of access collisions and increasing the efficiency of bandwidth use. Pokhrel et al. [3] indicated that even a network dominated by IoT services may require handling short-term emergencies, such as those at large-scale disaster sites or the wireless monitoring of factory automation, in which quality requirements are similar to those of URLLC. Therefore, 5G network designs must be effective for both mMTC and URLLC services.
Fifth-generation NS can simultaneously meet the heterogeneous quality requirements of mMTC and URLLC [13,34-38]. Fifth-generation NS separates the network into multiple virtual E2E networks (i.e., network slices); the equipment, access, transport, and core networks belonging to a network slice are independent of each other. Different network slices correspond to different services to ensure that each service's distinct quality requirements can be satisfied. The authors of [34] observed that NS in a RAN can support delay-sensitive applications. To satisfy the transmission rate and delay requirements of different services, the authors proposed a RAN slicing mechanism for deterministic periodic, deterministic nonperiodic, and nondeterministic traffic. Yang et al. [35] established an analytical model to determine the optimal RAN bandwidth resource slices to maximize the connection density of IoT devices with an energy-saving method while supporting URLLC services. However, E2E coordination and management are necessary for NS. RAN slicing alone cannot guarantee that quality requirements are met E2E, especially if the network equipment or status changes.
Hence, the authors of [13,36-38,43-45] proposed using SDN and NFV to implement 5G NS and facilitate network management, scaling, and adaptation. In [13,36], the functional concepts required to implement E2E NS at each vertical layer in SDN were discussed. Furthermore, the authors of [37] proposed a general 5G NS model for network components and computing resources from the edge to the core, with the principle that lightweight and complex computing should be performed at the edge and the upper layer of the infrastructure, respectively. Chahbar et al. [38] claimed to be the first to fully define each module and its function corresponding to each end user in the RAN, core network, and transmission network in terms of SDN and NFV technologies to achieve E2E NS. As mentioned in Section 1, on an SDN with traditional switches, the state information of all packets cannot be maintained in the data plane and relies only on the actions of the controller in the control plane, which may result in processing delays and a control channel bottleneck between the controller and switches. The solution to this problem is to use higher-level interface standards, such as P4 [19], between the controller and switches. Alvarez-Horcajo et al. [43] proposed an enhanced SDN switch with forwarding actions specified by the P4 language. Its SDN functions were enabled by using P4Runtime as the control plane protocol to specify the forwarding rules in the programmable data plane. The enhanced switch could autonomously define forwarding rules by using P4 registers, thereby preventing SDN controller overload. The authors of [44] presented a programmable data transmission path design between the user equipment-RAN and the user plane function for industrial 5G networks. They also discussed methods of supporting URLLC services with lower latency in 5G networks by using P4 switches, as required in industrial applications. Paolucci et al. [45] illustrated the potential of the P4 language, aiming to show its innovative functionalities at the data plane level, and provided five use cases. Although one of the five use cases mentioned in [45] is slicing and multi-tenancy, it does not describe how to implement and manage both mMTC and URLLC network slices, nor does it define a format for packets carrying INT data in an SDN environment with P4 switches. In short, these studies on SDNs for implementing 5G NS either have not fully considered the heterogeneous service quality requirements of mMTC and URLLC or have not completely defined the operation of the P4 switches.

SDNPS Framework
First, we present the SDNPS architecture. The client services are divided into two categories, mMTC and URLLC, which are assigned different network slices (Figure 5). Each network slice is logically independent of the other in terms of network resources, including equipment, access, transmission, and core networks. The SDN comprises three planes: application, control, and data. The SDN controller in the control plane uses client context-aware forwarding rules or server context-aware strategies to manage the network slices. The client context comprises the client identity, service quality requirements, and virtual network resources to satisfy all incoming end user requests. With INT [23], each P4 switch along a path in the data plane can directly record the network status, including the queue occupancy, link throughput, and processing delay, in an INT header added by the ingress switch before forwarding. When a data packet carrying INT data arrives at the egress switch, the INT data are duplicated and sent upward to the INT data collector module in the application plane to monitor the status of each switch on a flow path. The NS controller is responsible for changing paths and informing the reroute module of path reconfigurations. The P4 switches are activated to update the forwarding rules when necessary. The SDNPS can both reduce the burden on the SDN controller and achieve real-time control of transmission paths in network slices. Thus, both mMTC and URLLC can meet their respective service quality requirements by using 5G NS with SDN and P4 switches.
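The monitoring loop just described (egress switches duplicating INT data upward, and the NS controller scanning per-path status) can be sketched in ordinary Python. This is a minimal illustration, not the paper's implementation; the class and method names are ours:

```python
from collections import defaultdict

class IntCollector:
    """Application-plane INT data collector (illustrative sketch).

    Keeps the latest per-switch status for each flow path (keyed by RouteID)
    so the NS controller can decide whether a path needs reconfiguration.
    """

    def __init__(self):
        self.paths = defaultdict(dict)  # RouteID -> {switch_id: latest status}

    def report(self, route_id, switch_id, status):
        # Called when the egress switch duplicates INT data upward.
        self.paths[route_id][switch_id] = status

    def bottleneck_switches(self, route_id, max_queue):
        # The NS controller queries this to find congested hops on a path.
        return [s for s, st in self.paths[route_id].items()
                if st["queue_occupancy"] > max_queue]
```

In this sketch, a reroute would be triggered for any path where `bottleneck_switches` returns a non-empty list, mirroring the "at least one switch exceeds a threshold" rule used later in the pseudocode section.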

Data Packet Format
INT is a new monitoring technology based on P4. The main concept is collecting and reporting the network status in the data plane without the intervention of the control plane. In INT, network status information (for example, the switch queue occupancy, packet output rate, and processing delay) is encapsulated into a data packet or a predefined control packet by each P4 switch along a data transmission path. The packet carrying the collected network status information is extracted and sent to the application plane for further analysis prior to being forwarded to its destination. Thus, the load of bidirectional information exchange between the control plane and the data plane is considerably reduced, and up-to-date, accurate network status information in the data plane can be obtained and acted upon as quickly as possible. Figure 6 illustrates the procedure for INT. The INT source, INT transit hop, and INT sink are all P4 switching devices that support INT and represent the start, middle, and end point of a flow path, respectively. As shown in Figure 6, the INT procedure is divided into three steps:
1. When a data packet from the source host enters the data plane, the INT source adds an INT header to tell the subsequent P4 switches which information (called 'INT data') they should write.
2. Each INT transit hop along the flow path writes the requested INT data into the packet before forwarding it.
3. When the packet reaches the INT sink, the accumulated INT data are extracted and sent to the application plane, and the original packet is forwarded toward its destination.
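As a toy illustration of the INT procedure, the following Python sketch models the source, transit hops, and sink as plain functions. The metadata names and the dictionary-based "packet" are our assumptions for readability; in practice INT runs inside the P4 pipeline, not in Python:

```python
# Metadata the INT source asks subsequent switches to write (illustrative names).
REQUESTED = ("hop_latency", "queue_occupancy", "tx_link_utilization")

def int_source(payload):
    # Step 1: prepend an INT header telling later switches what to write.
    return {"int_request": REQUESTED, "int_data": [], "payload": payload}

def int_transit(packet, switch_id, readings):
    # Step 2: each transit hop appends exactly the requested metadata.
    record = {"switch": switch_id}
    for key in packet["int_request"]:
        record[key] = readings[key]
    packet["int_data"].append(record)
    return packet

def int_sink(packet, collector):
    # Step 3: the sink extracts the telemetry for the application plane and
    # forwards the bare payload toward the destination host.
    collector.extend(packet["int_data"])
    return packet["payload"]
```

Running a packet through one transit hop leaves the payload untouched while the collector receives one telemetry record per hop, which is exactly the property that spares the control plane from per-packet polling.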

Among the INT metadata, hop latency, egress port TX link utilization, and queue occupancy are used by the proposed SDNPS to manage the flow paths in the mMTC and URLLC slices. INT does not specify the packet encapsulation format; the format of the INT header and which INT data each P4 switch should write can be defined by the administrator. SDNPS writes 0x0801 in the EtherType field of the Ethernet header to identify the packet as originating from a P4 switch (Figure 7). If so, the field 'Flow_h' follows the EtherType field. Flow_h (i.e., Flow Header) is mainly used to inform P4 switches as to whether the forwarded packet contains INT data, because the INT data for each P4 switch are sent only periodically, not with every data packet.
Figure 8 presents the format of the 4-byte flow header (denoted as Flow_h), comprising Protocol (1 byte), Counter (1 byte), and RouteID (2 bytes). The Protocol field is set to 0x00 or 0x01 in our framework to identify whether the data after the Flow Header are an IP header or INT data, respectively. The Counter field records the number of P4 switches a packet passes through; each P4 switch adds one. The RouteID is configured in accordance with the source IP and destination IP. Each RouteID is unique and indicates a flow path; packets containing INT data include these data from each P4 switch along the flow path indicated by the RouteID. Figure 9 presents the format of the 5-byte INT data (denoted as INT_h) for a P4 switch, comprising SwitchID (1 byte), Qoccupancy (1 byte), Qlatency (2 bytes), and Link_utilization (1 byte). The SwitchID field records the identity of the switch, the Qlatency field records the processing time, the Qoccupancy field records the utilization ratio of the egress queue, and the Link_utilization field records the bandwidth utilization of the egress link.
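A minimal Python sketch of packing and parsing these headers with the stated field widths (4-byte Flow_h, 5-byte INT_h per switch). The network byte order and the function names are our assumptions; the paper does not fix them:

```python
import struct

# Field widths follow the text: Flow_h = Protocol (1 B) + Counter (1 B) +
# RouteID (2 B); INT_h = SwitchID (1 B) + Qoccupancy (1 B) + Qlatency (2 B) +
# Link_utilization (1 B). "!" (big-endian) is our assumed wire byte order.
FLOW_H = struct.Struct("!BBH")   # 4 bytes
INT_H = struct.Struct("!BBHB")   # 5 bytes

PROTO_IP = 0x00   # data after Flow_h is an IP header
PROTO_INT = 0x01  # data after Flow_h is INT data

def build_flow_h(protocol, counter, route_id):
    return FLOW_H.pack(protocol, counter, route_id)

def append_int_data(packet, switch_id, q_occupancy, q_latency, link_util):
    """Each P4 switch appends its 5-byte INT record and increments Counter."""
    protocol, counter, route_id = FLOW_H.unpack_from(packet, 0)
    assert protocol == PROTO_INT, "packet is not carrying INT data"
    head = FLOW_H.pack(protocol, counter + 1, route_id)
    return head + packet[FLOW_H.size:] + INT_H.pack(
        switch_id, q_occupancy, q_latency, link_util)

def parse_int_records(packet):
    """At the egress switch: recover every per-switch INT record."""
    _, counter, route_id = FLOW_H.unpack_from(packet, 0)
    records = []
    for i in range(counter):
        off = FLOW_H.size + i * INT_H.size
        sid, qocc, qlat, lutil = INT_H.unpack_from(packet, off)
        records.append({"switch": sid, "q_occupancy": qocc,
                        "q_latency": qlat, "link_util": lutil})
    return route_id, records
```

Because the Counter field counts the switches traversed, the egress switch knows exactly how many 5-byte INT records follow the flow header without any delimiter.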

Pseudocode of SDNPS
We assume that all switches in the data plane of the SDN support P4. Prior to describing the SDNPS pseudocode, we first define the symbols used by the SDNPS. As summarized in Table 2, Ts denotes the sampling period for INT data. Hd, Hq, and Hu are the thresholds of delay, queue occupancy, and link utilization, respectively, at any switch that trigger a path change in the same slice, because a bottleneck at any switch in a flow path leads to severe performance degradation for the path. Di, Qi, and Li denote the processing delay, queue occupancy, and link utilization of switch i, respectively.
As listed in Table 3, when each URLLC or mMTC packet arrives at the ingress switch, it is forwarded through the assigned flow path in its corresponding slice. The Protocol field in Flow_h is set to 0x01 every 0.5 s, and each switch along a flow path identified by the RouteID field adds its INT data following Flow_h before forwarding the packet. The INT data comprise the queue occupancy, link throughput, and processing delay for each switch i along the flow path. When a packet carrying INT data arrives at the egress switch, the INT data are duplicated and sent to the application plane to collect the status of each switch on the flow path. The network slice controller is responsible for changing paths in each slice and informing the reroute module of any path reconfigurations. mMTC packets are rerouted to other paths in the mMTC slice if the queue occupancy or link utilization of at least one switch is larger than Hq or Hu, respectively. URLLC packets are rerouted to other paths in the URLLC slice if the processing delay, queue occupancy, or link utilization of at least one switch is larger than Hd, Hq, or Hu, respectively, because URLLC services are time critical. All P4 switches on a flow path are activated to update the forwarding rules when necessary.
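The per-slice reroute test described above can be written as a short predicate. The threshold values below are placeholders, not values from the paper; only the comparison structure (Hq and Hu for mMTC, plus Hd for URLLC) follows the text:

```python
# Example thresholds (placeholders): delay in microseconds, queue occupancy
# and link utilization in percent. The paper defines the symbols Hd, Hq, Hu
# in Table 2 but does not prescribe these particular values.
H_D, H_Q, H_U = 2000, 80, 90

def needs_reroute(slice_type, path_readings):
    """path_readings: list of (Di, Qi, Li) tuples, one per switch i on the path.

    mMTC reroutes when any switch exceeds Hq or Hu; URLLC additionally
    reroutes on the delay threshold Hd, because its traffic is time critical.
    """
    for d_i, q_i, l_i in path_readings:
        if q_i > H_Q or l_i > H_U:
            return True
        if slice_type == "URLLC" and d_i > H_D:
            return True
    return False
```

Note that a single over-threshold switch is enough to trigger the reroute, reflecting the observation that one bottleneck hop degrades the whole flow path.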

Symbols Denotations Ts
Sampling period for INT data Hd Threshold of delay for each switch Hq Threshold of queue occupancy for each switch Hu Threshold of link utilization for each switch Di Hop Delay of switch i Qi Queue occupancy of switch i Li Link utilization of switch i As listed in Table 3, when each URLLC or mMTC packet arrives at the ingres it is forwarded through the assigned flow path in its corresponding slice. The field in Flow_h is set to 0 × 01 every 0.5 s, and each switch along a flow path iden the RouteID field adds its INT data following Flow_h before forwarding the pac INT data comprise the queue occupancy, link throughput, and processing delay switch i along the flow path. When a packet carrying INT data arrives at the egres the INT data are duplicated and sent to the application plane to collect the status switch on the flow path. The network slice controller is responsible for changing each slice and informing the reroute module of any path reconfigurations. mMTC are rerouted to the other paths in the mMTC slice if the queue occupancy or link ut of at least one switch is larger than Hq or Hu, respectively. URLLC packets will be r to the other paths in the URLLC slice if the processing delay, queue occupancy utilization of at least one switch is larger than Hd, Hq, or Hu, respectively, because services are time critical. All P4 switches on a flow path are activated to update

Pseudocode of SDNPS
We assume that all switches in the SDN data plane support P4. Before describing the SDNPS pseudocode, we first define the symbols used by the SDNPS. As summarized in Table 2, Ts denotes the sampling period for INT data. Hd, Hq, and Hu are the thresholds of delay, queue occupancy, and link utilization, respectively; exceeding any of these thresholds at any switch triggers a path change in the same slice, because a bottleneck at any switch in a flow path leads to severe performance degradation for the whole path. Di, Qi, and Li denote the processing delay, queue occupancy, and link utilization of switch i, respectively.

Table 2. Symbols and denotations.

Symbol   Denotation
Ts       Sampling period for INT data
Hd       Threshold of delay for each switch
Hq       Threshold of queue occupancy for each switch
Hu       Threshold of link utilization for each switch
Di       Hop delay of switch i
Qi       Queue occupancy of switch i
Li       Link utilization of switch i

As listed in Table 3, when each URLLC or mMTC packet arrives at the ingress switch, it is forwarded through the assigned flow path in its corresponding slice. The protocol field in Flow_h is set to 0x01 every 0.5 s, and each switch along a flow path identified by the RouteID field adds its INT data following Flow_h before forwarding the packet. The INT data comprise the queue occupancy, link throughput, and processing delay for each switch i along the flow path. When a packet carrying INT data arrives at the egress switch, the INT data are duplicated and sent to the application plane to collect the status of each switch on the flow path. The network slice controller is responsible for changing paths in each slice and informing the reroute module of any path reconfigurations. mMTC packets are rerouted to the other paths in the mMTC slice if the queue occupancy or link utilization of at least one switch is larger than Hq or Hu, respectively. URLLC packets are rerouted to the other paths in the URLLC slice if the processing delay, queue occupancy, or link utilization of at least one switch is larger than Hd, Hq, or Hu, respectively, because URLLC services are time critical. All P4 switches on a flow path are activated to update the forwarding rules when necessary.

Table 3. Pseudocode of SDNPS.
Network slicing:
    m paths in the mMTC slice: M = {ft_j | 1 ≤ j ≤ m}
    r paths in the URLLC slice: R = {fr_j | 1 ≤ j ≤ r}
Packet forwarding:
    Assign the path with the minimum load in M;
    Assign the path with the minimum load in R;
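The per-slice rerouting rules can be sketched in Python. This is a minimal illustration of the threshold logic, not the authors' P4 implementation; the record fields and the threshold values (taken from Table 4) are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class IntRecord:
    """INT data appended by one switch i along the flow path."""
    delay_ms: float   # processing (hop) delay D_i, in milliseconds
    queue_occ: float  # queue occupancy Q_i, as a fraction of the queue size
    link_util: float  # link utilization L_i, as a fraction of link capacity

# Thresholds H_d, H_q, H_u (values follow Table 4: 1 ms, 80%, 80%).
H_D, H_Q, H_U = 1.0, 0.80, 0.80

def needs_reroute(path_int: list[IntRecord], slice_type: str) -> bool:
    """A single congested switch triggers a path change for the whole slice."""
    for rec in path_int:
        # Both slices react to queue occupancy and link utilization.
        if rec.queue_occ > H_Q or rec.link_util > H_U:
            return True
        # URLLC is time critical, so it also reacts to per-hop delay.
        if slice_type == "urllc" and rec.delay_ms > H_D:
            return True
    return False
```

For example, a hop with 1.5 ms of processing delay triggers a reroute for a URLLC path but leaves an mMTC path unchanged.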

Performance Evaluation
In this section, the performance of SDNPS is evaluated through comparison with an SDN with traditional switches. The performance measures include the average throughput, packet loss ratio, and packet delay for mMTC and URLLC network slices.

Simulation Settings
The SDN topology used for performance evaluation is depicted in Figure 10. The parameters and their values used in the simulation are summarized in Table 4. We set up an SDN environment comprising four hosts and six P4 switches on the Mininet simulator [47]. The well-established Ryu controller is adopted for the control plane [48]. The communication interface between the data and control planes is OpenFlow Protocol V1.3 for traditional switches and SDNPS for the P4 switches. The Behavioral Model version 2 (BMv2) [49], employing the P4 language, is deployed in Mininet. The bandwidth of each link between two switches is 20, 30, or 60 Mbps (Figure 10). The queue size of each switch is 50 packets. The Iperf tool [50] is used to generate UDP flows at a constant data rate of 50 Kbps for each machine-type device, and the number of machine-type devices varies from 100 to 500. Two URLLC UDP flows are transmitted at 5 and 17 Mbps. The source and destination hosts of the mMTC flows are H1 and H3, respectively, and those of the URLLC flows are H2 and H4, respectively.
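A quick sanity check on these settings shows why only the heaviest device count stresses the topology: the aggregate mMTC load exceeds the narrowest link only at 500 devices. This is simple arithmetic on the stated parameters, not part of the simulator:

```python
MMTC_RATE_KBPS = 50            # per-device UDP rate generated with Iperf
LINK_CAPS_MBPS = (20, 30, 60)  # link bandwidths in the Figure 10 topology

def mmtc_offered_load_mbps(devices: int) -> float:
    """Aggregate mMTC offered load in Mbps for a given device count."""
    return devices * MMTC_RATE_KBPS / 1000

for n in (100, 250, 500):
    load = mmtc_offered_load_mbps(n)
    # The 20 Mbps link is the tightest possible bottleneck.
    print(f"{n} devices -> {load:.1f} Mbps, exceeds 20 Mbps link: {load > min(LINK_CAPS_MBPS)}")
```

With 100 and 250 devices the load (5 and 12.5 Mbps) fits within even the smallest link, whereas 500 devices offer 25 Mbps, more than a 20 Mbps link can carry.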


The URLLC slice is assigned two flow paths, S1-S3-S5-S6 and S1-S3-S4-S6, with the fewest hops. To achieve high connection density instead of low latency, the mMTC slice is assigned two flow paths, S1-S2-S5-S4-S6 and S1-S2-S4-S5-S6. Initially, the first flow path in either slice is always used, and the second flow path is used if the first path has service quality degradation. The sampling period for INT data is 0.5 s. The threshold values for the INT data are listed in Table 4. The simulation duration is 10 s. During the simulation, a URLLC flow of 5 Mbps is generated initially, and another URLLC flow of 17 Mbps is generated after 3 s; 100, 250, or 500 mMTC flows are simultaneously generated from the beginning to the end of the simulation.

Table 4. Simulation parameters.
Parameter                    Value
Flow paths for URLLC slice   #1. S1-S3-S5-S6  #2. S1-S3-S4-S6
Flow paths for mMTC slice    #1. S1-S2-S5-S4-S6  #2. S1-S2-S4-S5-S6
Ts                           0.5 s
Hd                           1 millisecond
Hq                           80%
Hu                           80%
Simulation time              10 s

Results and Discussions
The performance of the proposed SDNPS is compared with that of a conventional SDN with either static or dynamic routing. In the conventional SDN, the monitoring period with dynamic routing is fixed at 3 s; no monitoring period is used with static routing.
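The difference in monitoring granularity translates directly into detection latency. A back-of-the-envelope bound, assuming congestion is noticed only at the next sampling or polling instant (an idealization of both schemes):

```python
import math

def worst_case_detection(event_t: float, period: float) -> float:
    """Time of the first sample at or after the event, for samples at k * period."""
    return math.ceil(event_t / period) * period

# Congestion begins when the 17 Mbps URLLC flow starts at t = 3 s.
assert worst_case_detection(3.0, 0.5) == 3.0  # SDNPS: INT every 0.5 s
assert worst_case_detection(3.1, 0.5) == 3.5  # at most half a second later
assert worst_case_detection(3.1, 3.0) == 6.0  # conventional SDN: 3 s polling
```

Under this bound, SDNPS reacts within one 0.5 s sampling period, while the conventional SDN with dynamic routing may wait nearly a full 3 s monitoring period before it even observes the congestion.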
In this subsection, we obtain and discuss the average throughput, packet loss ratio, and packet delay for the mMTC and URLLC network slices in the conventional SDN and the proposed SDNPS.

Figure 11 shows the average packet loss ratio versus time for the URLLC slice. Figure 11a-c display the results for 100, 250, and 500 mMTC devices, respectively. The first flow path (i.e., S1-S3-S5-S6) in the URLLC slice, carrying 5 Mbps of initial URLLC traffic, becomes overloaded at 3 s when 17 Mbps of URLLC traffic are transmitted through the flow path. Figure 11 reveals that the average packet loss ratios with SDNPS (blue curves) always drop to near zero more quickly after the third second than those of the conventional SDN with either static (grey curves) or dynamic routing (orange curves), regardless of the total load of mMTC devices. This result occurs because with SDNPS, the INT data for each switch are carried by a data packet every 0.5 s, extracted at the egress switch, and sent to the network slice controller, enabling a rapid response to network congestion. The NS controller thus quickly decides to reroute the 17 Mbps URLLC flow to the second path (i.e., S1-S3-S4-S6) in the URLLC slice. By contrast, for the conventional SDN with dynamic routing, updating the forwarding rules for the switches in the data plane requires substantial information exchange between the SDN controller and every OpenFlow switch; thus, rerouting requires more time. For the conventional SDN with static routing, the average packet loss ratio increases continuously after the third second because the forwarding rules do not change unless the current path fails.
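The INT records carried by sampled data packets can be modeled as fixed-size entries appended behind the flow header. The byte layout below is hypothetical (the paper does not specify exact field widths); only the 0x01 sampling marker and the per-hop queue occupancy, link throughput, and processing delay fields come from the text:

```python
import struct

# Hypothetical layouts: Flow_h carries a protocol byte (0x01 marks a
# sampling packet) and a RouteID byte; each INT entry holds switch id,
# queue occupancy, link throughput, and processing delay.
FLOW_H = struct.Struct("!BB")      # protocol, RouteID
INT_ENTRY = struct.Struct("!BHHI") # sw_id, queue occ, link tput, delay (us)

def append_int(pkt: bytes, sw_id: int, q: int, tput: int, delay_us: int) -> bytes:
    """What each P4 switch does on the fast path: append its own INT entry
    to a sampling packet; forward non-sampling packets unchanged."""
    proto, _route = FLOW_H.unpack_from(pkt)
    if proto != 0x01:
        return pkt
    return pkt + INT_ENTRY.pack(sw_id, q, tput, delay_us)
```

The egress switch would walk the entries after Flow_h, duplicate them toward the application plane, and strip them before delivery.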

mMTC Slice
The key service quality requirement for mMTC is connection density, and the least significant service quality requirement is latency. Figures 15 and 16 show the average throughput and packet loss ratio per node, respectively, versus the number of mMTC devices. Figure 15 reveals that the average throughput per mMTC node with SDNPS (blue bars) is approximately equal to the packet generation rate (i.e., 50 Kbps), regardless of the number of nodes requesting connections. However, a slight reduction in throughput is observed for 500 nodes (i.e., heavy load) because processing time is required for path changes for some connections in the mMTC slice. SDNPS outperforms both the dynamic and static SDN schemes, especially under heavy load in the slice. Thus, SDNPS achieves higher connection density. Similarly, Figure 16 reveals the superiority of SDNPS over the other two schemes in terms of the average packet loss ratio per mMTC node.
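An idealized fair-share estimate illustrates the heavy-load behavior: if all mMTC flows shared a single path whose bottleneck link is 20 Mbps, the per-node throughput would fall below the 50 Kbps generation rate only at 500 nodes. This is a rough bound under assumed perfect fair sharing, not the measured result:

```python
def per_node_throughput_kbps(devices: int, bottleneck_mbps: float,
                             rate_kbps: float = 50) -> float:
    """Fair-share per-node throughput when one bottleneck caps the aggregate."""
    offered_mbps = devices * rate_kbps / 1000
    if offered_mbps <= bottleneck_mbps:
        return rate_kbps                      # every node gets its full rate
    return bottleneck_mbps * 1000 / devices   # bottleneck shared equally

assert per_node_throughput_kbps(100, 20) == 50   # light load: no loss
assert per_node_throughput_kbps(500, 20) == 40.0 # heavy load: 40 < 50 Kbps
```

Rerouting part of the load to the second mMTC path, as SDNPS does, relieves exactly this bottleneck, which is why its throughput stays near 50 Kbps even at 500 nodes.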

Figure 16. Average packet loss ratio per node in the mMTC slice.

Conclusions
This paper proposes an SDNPS with an INT data packet format to simultaneously support mMTC and URLLC by using NS. The method satisfies the opposing service requirements of both mMTC and URLLC in 5G and alleviates the control load on the SDN controller. In SDNPS, INT data are recorded in data packets forwarded by P4 switches every sampling period, and these data are collected by an INT data collector module in the application plane. After the ingress switch adds an INT header to a packet, each P4 switch along the flow path in the data plane appends its current status to this header before forwarding the packet. The INT data comprise the queue occupancy, link throughput, and processing delay. When a packet carrying INT data arrives at the egress switch, the INT data are duplicated and sent to the application plane to determine the status of each switch along the flow path. The network slice controller is responsible for changing paths in each slice and informing the reroute module of these path reconfigurations. The P4 switches are instructed to update their forwarding rules as necessary. Simulations verify that SDNPS substantially improves 5G NS performance for supporting mMTC and URLLC in terms of average throughput, packet loss, and packet delay, compared with a conventional SDN with dynamic or static routing. The proposed SDNPS is expected to improve the core network used in 5G NS. We intend to integrate SDNPS with a RAN to completely realize E2E 5G NS.