Packet Optical Transport Network Slicing with Hard and Soft Isolation

Abstract: Network operators have been dealing with the necessity of dynamic network resource allocation to provide a new generation of customer-tailored applications. In that sense, telecom providers have to migrate their BSS/OSS systems and network infrastructure to more modern solutions in order to introduce end-to-end automation and support the new use cases derived from 5G adoption and transport network slices. In general, there is broad agreement on making this transition towards an architecture defined by programmable interfaces and standard protocols. Hence, this paper uses the iFUSION architecture to control and program the network infrastructure. The work presents an experimental validation of network slice instantiation in an IP/optical environment using a set of standard protocols and interfaces. The work provides results for the creation, modification and deletion of network slices. Furthermore, it demonstrates the usage of standard communication protocols (NETCONF and RESTCONF) in combination with standard YANG data models.


Introduction
Network slicing is positioned as the next paradigm for service delivery in telecom operator networks. The slicing concept departs from the idea of allocating network resources to customer services on demand, leveraging the paradigms of Software-Defined Networking (SDN) and Network Function Virtualization (NFV) [1], with those resources being dedicated to the services during their lifetime. The assigned resources (including compute, storage and transport resources) can be either physical or virtual, tailored to the customer need expressed through the slice request. Since distinct customers may have different requirements to be satisfied, different slice types can be considered from the point of view of management and control, as defined in [2].
Slicing can thus be seen as an alternative form of consuming network capabilities, permitting operators to address the particular needs of new industries and markets [1] in ways that were not possible before. Specific service characteristics such as deterministic, extremely low latency or guaranteed high bandwidth, different degrees of isolation (with respect to other customers' services), scalability adapted to the actual demand, or resource management and control can now be enabled, allowing more sophisticated services to emerge and co-exist on a commonly shared infrastructure.
In order to enable this flexible consumption of network capabilities, it is necessary to develop new forms of network control that make possible a dynamic, end-to-end partitioning and assignment of resources. Therefore, several initiatives have been discussed across the industry, including IP network resource management (L2 [3] and L3 VPNs [4]), optical layer management (Transport API [5]) and network device configuration (OpenConfig [6]). This paper extends the work presented at the latest OFC conference [7]. In addition, this work adds a set of tests, including the creation of two isolated network slices, the addition and modification of prefixes for each slice, and slice deletion. A last set of tests evaluated service continuity after a physical device reset. For the whole set of experiments, a hierarchical control architecture over multi-layer IP over DWDM networks was used.
This paper is organized as follows: Section 2 describes the network slicing concepts. Section 3 describes the whole control architecture used for network slice instantiation in a service provider environment, and Section 3.5 describes the proposed architecture's implementation choices. The authors selected one of those choices and made a proof-of-concept in the Telefónica and CTTC laboratories; Section 4 includes the results.

Network Slicing Concepts
Network slicing refers to the partitioning of one physical network into multiple virtual networks, where each network slice is architected and optimized for a specific application/service. In this context, the Next Generation Mobile Networks (NGMN) alliance defines two main concepts [8]:
• The Service Instance (SI): an end-user service or a business service realized within a Network Slice.
• The Network Slice Instance (NSI): the complete, instantiated logical network that meets the specific characteristics required by a Service Instance.
Hence, network slicing means sharing network infrastructure across different Service Instances to meet service-specific requirements. Depending on the communication layer at which a network operator implements network slicing, the resource management, the network characteristics and the toolbox used to implement the network slice can change. Examples of network characteristics requested by a Service Instance include ultra-low latency or ultra-reliability, among others [8].
The network slicing concept provides a framework with broad applicability across various industries. The majority of the scenarios envisioned to suit emerging and diverse business models are based on the Network as a Service (NaaS) approach, creating opportunities for intelligent services and a new business ecosystem [9][10][11]. One of the main drivers of network slicing is the realization of fifth-generation (5G) networks. Given the necessity to integrate multiple services with various performance requirements-such as high throughput, low latency, high reliability, high mobility, and high security-into a single physical network infrastructure, and to provide each service with a customized logical network, network slicing is the key technology to achieve these goals.
5G is expected to support a new generation of customer-tailored applications with diverse requirements regarding capacity, latency, level of mobility, number of users, and user density. Hence, 5G sets the stage for innovation and transformation in customer services and vertical industries, such as the ones described in Table 1 [12]. The services described in Table 1 exhibit highly diversified traffic characteristics and differentiated quality-of-service (QoS) requirements. For example, mMTC is based on its application in machine-to-machine communication [13]. The actual realization of those services in telecom networks implies the migration of the current heterogeneous IP/MPLS + DWDM transmission networks to more modern and 5G-ready designs, with network programmability, Software-Defined Networking (SDN) and Machine Learning (ML) as their main pillars.
There are two main alternatives for implementing network slices: hard and soft. The way network resources are shared between services in each alternative is described in the following sections.

Soft Network Slicing
Soft slicing corresponds to a lower level of isolation between the services a network transports. Soft slicing implies sharing the physical infrastructure while creating logical segmentations between the customers. According to this definition, soft slicing is not a new concept from the IP/MPLS networks perspective [4,14,15].
Traditional L3VPNs are examples of soft-slicing implementations in an MPLS network: a VPN can be thought of as a series of tunnels connecting customer sites, each site can potentially have a different QoS treatment, and all traffic to and from each site is internal to the customer. In the VPN service, the provider ensures that each customer's traffic is logically discriminated over the shared physical infrastructure based on routing policies configured across the network.
In that sense, network slicing at the MPLS level can be implemented using:
• Virtual Routing and Forwarding (VRF) instances, which enable multiple routing environments over a shared MPLS transport network.
• Virtual Switching Instances (VSI), which enable multiple switching environments over the same shared infrastructure.
Each physical router is able to host multiple VRFs and multiple VSIs (along with their attached logical interfaces), effectively slicing it into multiple routing and switching environments that can be assigned to different tenants/customers/services.
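A VRF-based soft slice is typically requested from a controller as an L3VPN service. The following is a minimal, illustrative sketch of an L3SM-style request body (RFC 8299, ietf-l3vpn-svc); only a small subset of leaves is shown, the identifier values are invented, and a real controller will require additional containers such as sites and site network accesses:

```python
import json

def build_l3vpn_request(vpn_id: str, customer: str) -> dict:
    # Minimal L3SM-style (RFC 8299, ietf-l3vpn-svc) request body.
    # Only a subset of leaves is shown for illustration.
    return {
        "ietf-l3vpn-svc:l3vpn-svc": {
            "vpn-services": {
                "vpn-service": [
                    {
                        "vpn-id": vpn_id,
                        "customer-name": customer,
                        "vpn-service-topology": "any-to-any",
                    }
                ]
            }
        }
    }

# Hypothetical slice/tenant names for the example.
payload = build_l3vpn_request("slice-blue", "tenant-a")
body = json.dumps(payload, indent=2)
```

Each such request maps to one VRF-backed routing environment in the shared MPLS network, which is precisely the soft-slicing granularity described above.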

Hard Network Slicing
Despite the apparent advantages that overlay MPLS tunnels can provide, there are some disadvantages. Overlay tunnels built by data encapsulation have neither visibility nor control of the underlying physical network. With more and more tunnels deployed on shared physical infrastructure, network congestion necessarily becomes a point of attention. Furthermore, VRFs, VSIs or Optical Data Units (ODUs) cannot be managed directly by their corresponding tenants because they are part of the same administrative domain represented by the physical device, and so they need to be operated by the network administrator. Due to the massive number of tunnels and services in a complex network, the management of QoS and Traffic Engineering policies becomes a crucial task for guaranteeing the right SLAs to customers [13,16].
The solution to this limitation, and the way to move the networking industry towards more automated scenarios, is hard slicing. Hard slicing refers to the provision of dedicated resources to a specific Network Slice Instance. For example, data-plane resources are provided by allocating time-domain multiplexed resources such as a Flex Ethernet channel, or as a service such as an MPLS hard pipe, route diversity (disjoint paths) or wavelength selection, among others. Disaggregated routers are an example of hard slicing. In the disaggregated case, the data plane runs in the physical device, and the control plane runs outside the device in a remote cloud or server. The 1:1 relationship between physical device and routing logic is thereby disassociated, enabling the support of multiple virtual routers over a single physical network device. Each of these virtual routers is a router in its full sense, being able to host multiple VRFs and VSIs, and to be managed independently of the other virtual routers running in the same device. Virtual routers are thus administratively separated from each other, which means separate virtual routers can be assigned to different tenants, and each tenant can manage its virtual routers directly, without intervention from the operator that owns the physical network (e.g., following an approach similar to the one described in [17]).
Network slicing at the IP transport level can then be accomplished by grouping multiple virtual routers running over a shared physical network infrastructure into a common virtual infrastructure under its own separate administrative domain. Virtual routers under the same administrative domain are then known as a hard network slice, so different hard slices can be assigned to different tenants.
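As a hypothetical illustration of this administrative grouping, the sketch below models two tenants whose virtual routers share the same physical device yet belong to separate hard slices (all device, router and tenant names are invented for the example):

```python
from dataclasses import dataclass, field

@dataclass
class VirtualRouter:
    """A full-featured virtual router hosted on a physical device."""
    name: str
    host_device: str                        # physical router hosting this vRouter
    vrfs: list = field(default_factory=list)

@dataclass
class HardSlice:
    """A hard slice: virtual routers grouped under one tenant's
    administrative domain, manageable without operator mediation."""
    tenant: str
    routers: list = field(default_factory=list)

# Two tenants sharing one physical device ("pe-1"), each with full
# control of its own virtual router.
vr_a = VirtualRouter("vr-blue", host_device="pe-1")
vr_b = VirtualRouter("vr-red", host_device="pe-1")
slice_a = HardSlice("tenant-a", routers=[vr_a])
slice_b = HardSlice("tenant-b", routers=[vr_b])
```

The key property captured here is that the same physical device appears in both slices, while the administrative boundary runs between the virtual routers, not between physical boxes.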

5G Network Slicing
5G must enable network operators to ensure that the same network can fulfil the heterogeneous demands of diverse types of applications. To efficiently achieve those requirements and determine how network resources are assigned, a service provider must integrate technologies like Software-Defined Networking (SDN), Network Functions Virtualization (NFV), and Machine Learning (ML) with a hierarchical transport architecture.
According to the 5G definition, the architecture has three main components [18,19]:
• Radio Access Network (RAN): It covers everything related to the air interface between the user equipment and the base station. The RAN interfaces and interconnections are specified within the 3GPP Architecture Working Group.
• Mobile Core (MC): Its central role is to act as a gateway for user traffic to and from the internet. The mobile core is composed of a set of network functions (NFs) responsible for managing user mobility, access authentication, access authorization, location service management and registration, and for establishing per-user tunnels between base stations for each different traffic type.
• Backhaul Network: the network that interconnects the RAN with the MC. It is not part of the 5G specification, so it is up to each network operator to decide how to implement it. It requires functionalities such as QoS management and synchronization, and a stack of protocols like IP/MPLS or segment routing.
Network slicing capabilities need to be available across all components of the 5G cellular network (RAN, MC, Backhaul network) in order to ensure a differentiated treatment of the packets in the network. Thus, the 5G working groups have specified a standard set of network slices, denoted Standardized Slice Type (SST), to determine how resources should be assigned at the RAN and MC [20].
5G specifies two mechanisms for network slicing. The first one is based on QoS techniques, by applying a dynamic allocation of available network resources to different classes of traffic, and it is denoted as soft network slicing. The second one takes advantage of the software-based, cloud-based architecture of 5G, as well as component disaggregation, and achieves slicing through 5G component virtualization and replication. This second approach is denoted hard network slicing.

Proposed Architecture
The iFUSION architecture is an architecture defined by Telefónica to strengthen network automation and programmability in a service provider environment, as depicted in Figure 1 [21]. iFUSION is a two-layer control architecture, with specific domain controllers per technological domain (IP/MPLS, microwave and optical) at the bottom and a Software-Defined Transport Network controller (SDTN controller) on top to handle the multi-layer and multi-domain transport network resources. The domain controllers communicate directly with the network elements, and the SDTN controller with the OSS/BSS systems. Beyond the functional block definition, iFUSION includes the usage of: (1) standard interfaces based on RESTCONF/YANG [22] for the communication between control components and NETCONF/YANG [23] to configure the network elements; (2) YANG data models based on the latest releases of the standards-development organizations (SDOs): IETF, ONF and OpenConfig. Figure 1 shows the network scheme of the iFUSION architecture in terms of components and the relationships among them. The following sections define each of the structural pillars of the architecture, including its role in network slicing.
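As a rough sketch of the RESTCONF side of these interfaces, the helper below builds a resource URL under the RFC 8040 `/restconf/data` root together with the standard `application/yang-data+json` media type; the controller hostname and the exact datastore path are assumptions for illustration:

```python
from urllib.parse import quote

# RFC 8040 default API root; a client can also discover it via
# the /.well-known/host-meta resource.
RESTCONF_ROOT = "/restconf/data"

def restconf_url(host: str, yang_path: str) -> str:
    """Build a RESTCONF datastore resource URL.

    yang_path is a YANG-addressed path such as
    "ietf-l3vpn-svc:l3vpn-svc/vpn-services" (module:container/...).
    """
    return f"https://{host}{RESTCONF_ROOT}/{quote(yang_path, safe=':/=,')}"

# Media types mandated by RFC 8040 for JSON-encoded YANG data.
HEADERS = {
    "Accept": "application/yang-data+json",
    "Content-Type": "application/yang-data+json",
}

# Hypothetical SDTN controller hostname.
url = restconf_url("sdtn.example.net", "ietf-l3vpn-svc:l3vpn-svc/vpn-services")
```

A GET on such a URL with those headers would retrieve the configured VPN services; the same URL scheme applies between the SDTN controller and the domain controllers.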

Software Defined Transport Network Controller
The Software-Defined Transport Network Controller is a functional block with the following purposes [21]: it is the main entry point from the OSS/BSS systems to the network; it is in charge of coordinating/providing services through several domains and layers; and it holds the multi-layer/multi-domain topological view of the network. The SDTN controller can split requirements based on the technological domains involved. During this process, the SDTN controller can add/assign logical resources to be used by the network at service implementation. The SDTN controller has two RESTCONF interfaces, one to process the OSS/BSS systems' requests and one to send the specific requests to the domain controllers. The TeraFlow project [24] proposes the development of a cloud-native SDN controller that will serve as SDTN controller.
The SDTN controller can incorporate the Network Slice Controller as a working piece of its implementation.

Network Slice Controller
The Network Slice Controller (NSC) effectuates a transport network slice in the underlying transport infrastructure and manages and controls the state of the resources and topologies associated with it. The NSC receives a transport network slice request from the Operations Support System and Business Support System (OSS/BSS). The NSC runs an internal workflow for transport network slice life-cycle management and interacts with the underlying IP and optical domain controllers via a RESTCONF client. As described in the following sections, depending on the NSC location in the architecture, the NSC will either delegate to the SDN domain controllers to configure the network (Section 3.5 (a) and (b)) or do it directly through a NETCONF SBI (Section 3.5 (c)).
The Network Slice Controller is the key building block for the control and management of network slices. It provides the creation/modification/deletion, monitoring and optimization of network slices in a multi-domain, multi-technology and multi-vendor environment. It has two main functionalities, defined in [25,26]:
• Map: The NSC must map the network slice requests to the underlying technology-specific infrastructure. Accordingly, it maintains a record of the mapping from user requests to slice instantiations, as needed to allow subsequent control functions like modification or deletion.
• Realize: The NSC should realize the network slice request through its SBI interface against the domain controllers, as either physical or logical connectivity through VPNs or various tunnelling technologies such as Segment Routing, MPLS, etc.
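The map and realize duties can be pictured with the following toy bookkeeping class; it only records which hypothetical per-domain resources would back each slice, whereas a real NSC would issue RESTCONF calls to the IP and optical domain controllers at this point:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class SliceRecord:
    """Mapping from a user-facing slice request to its realization."""
    slice_id: str
    request: dict                                      # original NBI request
    realizations: dict = field(default_factory=dict)   # domain -> resource id

class NetworkSliceController:
    """Illustrative map/realize bookkeeping only (no network I/O)."""

    def __init__(self) -> None:
        self._records: dict = {}

    def create(self, request: dict) -> str:
        slice_id = str(uuid.uuid4())
        rec = SliceRecord(slice_id, request)
        # "Realize": record which domain resources would back the
        # slice; a real NSC issues RESTCONF requests instead.
        rec.realizations["ip"] = f"l3vpn-{slice_id[:8]}"
        rec.realizations["optical"] = f"tapi-cs-{slice_id[:8]}"
        self._records[slice_id] = rec
        return slice_id

    def get(self, slice_id: str) -> SliceRecord:
        return self._records[slice_id]

    def delete(self, slice_id: str) -> None:
        # Tear-down uses the stored mapping to locate domain resources.
        del self._records[slice_id]
```

Keeping this record is what makes later modification and deletion possible: without the slice-to-realization mapping, the NSC could not tell the domain controllers which VPN or connectivity service to touch.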

Network Domain Controller
The network domain controller (SDN controller) is in charge of the network elements of its network domain. It has standard southbound interfaces to communicate with the network elements. The domain controller SBI relies on the Network Configuration Protocol (NETCONF) to interact with the underlying technology's network elements. The SDN controller also has a northbound interface to communicate with the SDTN controller or the OSS/BSS systems using RESTCONF.
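A NETCONF SBI exchange wraps device configuration in an `<edit-config>` RPC. The sketch below assembles such an RPC over the standard NETCONF base namespace; the interface payload (ietf-interfaces) and its values are just an illustrative example of what a domain controller might push:

```python
import xml.etree.ElementTree as ET

# Standard NETCONF 1.0 base namespace (RFC 6241).
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(config_payload: ET.Element) -> bytes:
    """Wrap a config subtree in an <edit-config> RPC targeting the
    candidate datastore, as a NETCONF client library would do."""
    rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}candidate")
    cfg = ET.SubElement(edit, f"{{{NC}}}config")
    cfg.append(config_payload)
    return ET.tostring(rpc)

# Hypothetical payload: tagging an interface as a slice attachment.
payload = ET.fromstring(
    '<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">'
    '<interface><name>eth0</name>'
    '<description>slice-blue attachment</description>'
    '</interface></interfaces>'
)
rpc_bytes = build_edit_config(payload)
```

In practice a session-managing library (e.g., ncclient) would handle the hello exchange, framing and commit; only the payload construction is shown here.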

Yang Models for Network Controllers
As described so far, the three control elements (SDTN controller, Network Slice Controller and network domain controllers) have standard SBI and NBI interfaces to communicate among themselves and with the network or the OSS/BSS systems. A standard interface consists of a protocol selected to transfer the data and YANG data models that define how the messages are formed. In that sense, the YANG modelling activities have acquired significant relevance across the standardization entities. This is to such an extent that, by 2019, the number of correctly extracted YANG models from IETF drafts was 283, from the Broadband Forum 214, and from OpenConfig 137 [27]; similarly, other organizations like the MEF, 3GPP or ONF have produced YANG data models to describe technologies, protocols or connectivity services as well. Thus, navigating through the massive set of available YANG models and selecting the suitable pack of data models to define each functional block's interfaces becomes an essential task from the architectural definition point of view.
The set of data models used as the experimental base of this work to request and then instantiate the network slices is described in Table 2. Table 2. Data models used as the experimental base of this work.

• LxVPN: These models describe a VPN service from the customer or the network operator point of view. Examples: L3SM [15], L2SM [28], L3NM [4], L2NM [3].
• Traffic Engineering: These models allow manipulating Traffic Engineering tunnels within a network segment. Technology-specific extensions allow working with the desired technology (e.g., MPLS RSVP-TE tunnels, Segment Routing paths, OTN tunnels, etc.). Examples: TE [29], TE Topology [30,31].
• TE Service Mapping extensions: These extensions allow specifying, for an LxVPN, the details of an underlay based on Traffic Engineering. Example: TE Service Mapping [32].
• ACLs and Routing policies: Even though ACLs and routing policies are device models, their exposure in the NBI of a domain controller provides an additional granularity that the network domain controller is not able to infer on its own. Examples: ACL [33], Routing Policy [34,35].
• OTN: As a part of the transport network, OTN can provide hard pipes with guaranteed data isolation and deterministic low latency, which are highly demanded in Service Level Agreements (SLAs). Example: OTN Slice [36].
• Slicing: Set of data models available to map and realize the network slices. Examples: Network Slice NBI [25,26].

Instantiating of Network Slices in SDN Transport Networks
As described in Section 3.1, the OSS/BSS systems may request the deployment of new network slices with certain transport characteristics. Each network slice must be isolated from any other network slices or services delivered to particular customers, and naturally, other network slices or services must not negatively impact the delivery of the requested transport network slice.
To provide this isolation and instantiate the slice in the network, there are several implementation options, ranging from softer to harder grades of isolation, as follows. As the isolation grade is a significant constraint to consider for the network slice implementation, the selected network infrastructure and control elements generate different sets of capabilities in the Network Slice Controller. (a) Network Slice Controller as part of the SDTN controller: To maintain the data coherence between the control layers, the network-slice-id used must be directly mapped to the transport-instance-id at the VPN-node level. (b) Network Slice Controller as a stand-alone controller: When the Network Slice Controller is a stand-alone controller module, the NSC should perform the same two tasks described before.

(c) Network Slice Controller as part of the domain controller: When the Network Slice Controller is part of the domain controller, the OSS/BSS systems process the slice requests and introduce the network abstraction layer. At the network level, the same device data model would be used in the NBI and SBI of the SDN controller. The direct translation would reduce the service logic implemented at the SDN controller level, grouping the mapping and translation into a single task:
• Map & Realize: The mapping and realization can be done by the domain controller, applying the service logic to create policies directly on the network elements.

Experimental Validation
The experimental testbed has been distributed between Telefónica and CTTC laboratory premises, as depicted in Figure 3. The testbed includes two layers, a control layer and a transport layer.
Telefónica and CTTC deployed a control layer composed of three relevant items. First, to receive all the service requests, a Network Slice Controller was developed as part of the SDTN controller. Second, to interact with the transport domains, the testbed included two controllers, one for IP and one for optical. To configure the routers, the IP SDN controller used gRPC, while the optical SDN controller used NETCONF with T-API for the configuration tasks.
The Edgecore devices used a 10 Gigabit Ethernet (10G) interface for the connection to the Spirent testers and 1G interfaces towards the optical devices. Furthermore, on each Edgecore, two separate virtual routers were configured to test redundancy and route filtering between the ends. The Network Operating System (NOS) running on the virtual routers was a Volta stack deployment, version 20.4-2-36-g0ba8807. As described previously, there are several requirements for network automation for network slicing. However, a service provider cannot treat all the needs in the same way. Thus, a set of use cases was designed and executed as part of the whole testing process. The use cases were the following:
1. Slice creation: The T-API [5] was used to create a new connectivity service in the transport network to enable the L2-L1 communication between the network slice endpoints.

2. Add prefixes and destroy: The second use case aims to validate the IP connectivity between the network slices. Hence, the Spirent testers announced a set of 5k IP prefixes through the previously created network slices. Afterwards, the testers performed an automatic IP reachability test and a CLI route redistribution validation. Once the route propagation was validated, the testing team stopped the announcement of all the Spirent prefixes, and a new automatic connectivity test was done.

3. Device recovery test: The third use case was to verify the network slices' service continuity. To do so, the testing team manually rebooted all the DCSGs. Once the devices were online again, we checked the service status and measured the service restoration time.

Slice Creation
The iFUSION architecture enhancement proposed in this paper performs the management of services and resources through the use of information models that capture the definitions of the managed entities in terms of attributes and supported operations. Hence, a set of YANG data models has been defined to render and realize a network slice between each of the control entities. The proposed workflow used Postman to simulate the network slice creation requests. The YANG data model used for this request was defined in [37]. It includes the endpoints, the customer information and the service level agreement of the slice request. The PE-CE and end-to-end connectivity protocols for the network slice realization were automatically expanded by the SDTN controller and received by the IP SDN controller to properly configure the network elements. The workflow can be seen in Figure 4.
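For illustration, a slice creation request along the lines used in this workflow could be assembled as below; the container and leaf names loosely follow the IETF TEAS network slice NBI drafts and are assumptions for the example, since the exact revision of the model in [37] may differ:

```python
def build_slice_request(slice_id, endpoints, max_latency_ms,
                        min_bw_mbps, isolation):
    # Illustrative payload only; container/leaf names are
    # draft-dependent and not taken verbatim from [37].
    return {
        "network-slice-services": {
            "slice-service": [{
                "service-id": slice_id,
                "service-description": "demo transport slice",
                "slo-sle-policy": {
                    "max-latency": max_latency_ms,            # ms, one-way
                    "min-guaranteed-bandwidth": min_bw_mbps,  # Mb/s
                    "isolation": isolation,                   # "hard" | "soft"
                },
                "endpoints": [
                    {"node-id": node, "port": port} for node, port in endpoints
                ],
            }]
        }
    }

# Hypothetical endpoint names echoing the two Edgecore devices.
req = build_slice_request(
    "slice-red",
    [("edgecore-1", "xe-0/0/1"), ("edgecore-2", "xe-0/0/1")],
    max_latency_ms=10, min_bw_mbps=1000, isolation="hard",
)
```

A body of this shape, POSTed from Postman to the SDTN controller's RESTCONF NBI, is then expanded by the SDTN controller into the IP (gRPC) and optical (T-API) domain requests described next.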
A capture of the messages exchanged between the control items is depicted in Figure 5. The figure illustrates in orange the messages exchanged for the network slice creation using the IETF network slices data model. The subsequent statements show how the SDTN controller unwrapped the creation request message towards the optical and IP domains. It illustrates in red the IP network messages using gRPC, including the access ports, routing protocols, and PE-CE connectivity parameters. In purple, the T-API messages carry the connectivity and optical service requirements.
The time consumed in slice creation was 10.34 s on average. The left part of Figure 6 depicts a cumulative histogram of the time consumed for the slice creation in the controller layer, using ten samples. Additionally, the right side of Figure 6 shows the percentage of time spent on the VPN instantiation in the virtual routers. The most time-consuming task is the BGP neighbours creation (13 s), followed by the BGP instance redistribution (10 s). In contrast, the device implemented the primary VPN configuration parameters (Route Distinguisher, Route Target and Router ID) in less than a second.

Add Prefixes and Destroy
Once the domain controller realized the network slices and we validated the service status, the next step was to verify the correct establishment of the data-plane sessions. Thus, the Spirent testers added 5k prefixes to each of the services (VRF-Blue and VRF-Red). To confirm the status of each service, we captured the virtual routers' BGP information, as depicted in Figure 7, where each VRF has 5002 prefixes received (PfxRcd).
After the end-to-end service creation and the control and data plane validation, we rebooted the devices to validate the service continuity after a simulated power failure. We validated that, after the recovery was complete, all the traffic flowed again between the network slices. To confirm the status of each device, we captured the counters before and after the reboot process (Figure 8). Each instance recovered sequentially, and it took up to 4.6 min for the last service to completely restore the traffic flow.

Conclusions
Dealing with the necessity of dynamic network resource allocation to provide a new generation of customer-tailored applications is a primary concern nowadays. Telecom providers have to prepare their whole set of systems and network infrastructure to allow the introduction of end-to-end network automation. In that sense, this paper uses and validates the iFUSION architecture defined by Telefónica, which is ready to support the new use cases derived from 5G adoption and transport network slices. Additionally, this work validates the end-to-end creation, modification and deletion of transport network slices with several degrees of isolation. Furthermore, the results indicate the feasibility of deploying multi-layer IP over DWDM transport network slices based on virtual routers and disjoint optical paths. Future work testing the mapping and realization of network slices in the different NSC controller positions is required. This testing would allow us to fully understand the information exchanged/stored in each layer to make the deployments feasible in real networks. Funding: This research was partially supported by the EC H2020 TeraFlow (101015857) and the Spanish AURORAS (RTI2018-099178-I00) projects.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Conflicts of Interest:
The authors declare no conflict of interest.