Electronics | Review | Open Access | 15 October 2025

AI Methods in Network Slice Life-Cycle Phases: A Survey

1 Department of Digital Media and Communication, Ionian University, 28100 Argostoli, Greece
2 Department of Information and Electronic Engineering, International Hellenic University, 57400 Thessaloniki, Greece
3 Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM 87131-0001, USA
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Design, Control, Optimization, and Security of Next-Generation Communications Networks

Abstract

Network slicing (NS) plays a vital role in enabling flexible and efficient resource allocation, tailored to diverse use cases and network domains. This survey paper explores the synergy between NS and Artificial Intelligence (AI), emphasizing how Machine Learning (ML) techniques can address challenges across the slice life-cycle. A key contribution of this work is an in-depth analysis of AI and primarily ML applications in each phase of the slice life-cycle, delving into their specific tasks and discussing the techniques applied to these tasks. Furthermore, we present a taxonomy based on different slicing criteria, offering a structured perspective to enhance understanding and implementation.

1. Introduction

5G and Beyond 5G (B5G) networks are designed to serve different types of users, i.e., verticals, with a plethora of different services, achieving higher performance in terms of latency, throughput, and reliability, and fostering the digital transformation of vertical industries []. The International Telecommunication Union (ITU) categorized 5G services [] into (a) Enhanced Mobile Broadband (eMBB), for capacity enhancement, (b) Ultra-Reliable and Low-Latency Communications (URLLCs), for providing robust connectivity with very low latency, and (c) Massive Machine Type Communications (mMTCs), for supporting low-rate, bursty communication among a massive number of devices.
The diverse nature and different performance requirements of these services have driven both researchers and industry professionals to seek a technology that enables a unified network infrastructure capable of meeting these varied requirements effectively. Network Slicing (NS) has emerged as a solution to this challenge, widely recognized by academia and industry as a key enabler for the delivery of customized and on-demand network services []. Network Function Virtualization (NFV) and Software-Defined Networking (SDN) are the main technologies that NS exploits to create on-demand, end-to-end logical networks, called network slices, capable of supporting the different demands of each type of service. Each slice spans all network domains, i.e., the Radio Access Network (RAN), the Transport Network (TN), and the Core Network (CN), and is created to provide a specific service, which has different performance requirements than the services provided by other slices.
Each network slice progresses through multiple phases, from its initial request for creation to its eventual decommissioning when it is no longer required. These phases compose the slice life-cycle and have been defined by the 3rd Generation Partnership Project (3GPP) [].
The application of NS involves tasks such as the network slice design, construction, deployment, operation, control, and management. These tasks require real-time analysis of a large volume of complex data, along with dynamic decision making, to ensure that the deployed network slice effectively meets the targeted Quality of Service (QoS) requirements. Given the volume of data to be processed and the time constraints, tasks related to slice creation and operation are not always feasible to perform manually, and hence must be automated [].
In order to address the challenges arising from the application of NS, Artificial Intelligence (AI) and especially Machine Learning (ML) have been widely proposed as key enablers to make its implementation more feasible and efficient. The use of such techniques can automate many tasks related to the creation, deployment, and operation of network slices, as well as enable the optimized real-time reconfiguration of parameters. Building on such research efforts, the adoption of AI/ML methods for NS has also been standardized by leading global Standards Development Organizations (SDOs), which have defined specific use cases for these methods and the requirements they must meet in this context [].
In the literature, there are many research works proposing AI, and mainly ML methods, that deal with different problems arising from the application of NS in 5G and B5G networks. Those problems are related to different tasks that need to be performed in each phase of the slice life-cycle. Thus, the proposed applications of AI and ML are associated with slice life-cycle phases.
The primary motivation for this work is to fill the gap in the literature by providing a comprehensive survey on the application of ML methods to various tasks across each phase of the network slice life-cycle, as outlined in ref. []. Our goal is to summarize the tasks associated with each phase and the corresponding ML methods proposed to address the challenges related to these tasks. Additionally, we aim to contribute to the broader discourse on intelligent networks, and offer readers a valuable resource that covers the fundamental concepts of both NS and ML.
The main contributions of this work can be summarized as follows:
  • We present the state-of-the-art related surveys focusing on the application of ML to NS, and discuss their respective contributions.
  • We provide insights about the basic functionalities of NS, as well as existing architectural approaches, and slicing types. Moreover, we present the main technology enablers of NS and the different phases that compose the slice life-cycle.
  • We introduce the foundational concepts of ML techniques and their respective categories.
  • We present the findings of our extensive literature review on the applications of ML methods to NS. Specifically, for each phase of the life-cycle and each task associated with a phase, we outline the ML applications proposed in the literature, the challenges they address, and the methods employed.
  • Whenever feasible, we also associate each proposed application with the different types of slicing, considering the resource allocation approach, use case application, and network domain in which NS is applied.
The remainder of this survey is organized as follows: Section 2 presents the state-of-the-art related surveys and their contributions. Section 3 introduces the basics of NS, including the different architectures, slicing types, the technology enablers of NS, the phases of the slice life-cycle, and how AI/ML empowers NS. Section 4 provides key information about ML, including different training approaches and the resulting ML categories. Section 5 highlights the main contributions of this survey, focusing on ML applications proposed to address challenges arising from the deployment of NS for each slice life-cycle phase. Section 6 discusses issues related to specific ML methods identified during this survey, and finally, Section 7 concludes our work. The structure of our survey is illustrated in Figure 1.
Figure 1. Structure of the survey.

3. Network Slicing Basics

The concept of the network slice was introduced in 2015 by the Next Generation Mobile Network (NGMN) Alliance to accommodate the diverse service requirements of the 5G use cases using a common network infrastructure. According to the 3GPP, a network slice is defined as “a logical network that provides specific network capabilities and network characteristics, supporting various service properties for network slice customers” []. Moreover, according to Afolabi et al. [], NS should adhere to the following seven principles:
  • Automation: Enables on-demand slice setup without manual effort, specifying Service Level Agreements (SLAs) and timing via signaling.
  • Isolation: Ensures that the performance and security of each slice are not affected by the operation or faults of other slices sharing the same infrastructure.
  • Customization: Tailors resources to tenant needs with programmable policies and value-added services.
  • Elasticity: Adjusts resources dynamically to maintain SLAs under changing conditions.
  • Programmability: Open Application Programming Interfaces (APIs) allow third parties to manage slice resources flexibly.
  • End-to-end: Ensures seamless service delivery across domains and technologies.
  • Hierarchical abstraction: Allows recursive resource sharing, enabling layered services.
Thus, the benefits of NS are numerous and can be summarized below [,]:
  • Through the multiplexing of virtual networks, it can support multi-tenancy, leading to reduced expenditure in both network deployment and operation.
  • It has the capability to achieve service differentiation and ensure the fulfillment of SLAs for each type of service.
  • On-demand creation and adjustment of slices, along with their removal when no longer required, can increase the flexibility and adaptability in network management.

3.1. Network Slicing Architectures

Several SDOs, such as the 3GPP, the NGMN alliance, etc., have focused on the development of NS-oriented architectures [,]. The NGMN slicing architecture, as depicted in Figure 2, consists of the following layers [,]: (1) resource (infrastructure) layer; (2) network slice instance layer; and (3) application (service instance) layer. The resource layer provides all the virtual or physical resources to the network slice instance layer that furnishes the necessary network characteristics for a service instance. Finally, the application layer consists of all services (end-user service or business services). A detailed overview regarding NS architectures of SDOs and related NS projects can be found in ref. [].
Figure 2. The NGMN architecture.

3.2. Types of Slicing

NS can be classified according to different criteria, as summarized in Table 2. Firstly, based on the resource allocation method (slice elasticity) used, NS can be distinguished into the following categories [,]:
Table 2. Types of network slicing.
  • Static Slicing, where each Virtual Network (VN) gets a fixed portion of physical resources for its entire service life. The main advantage of this approach is its simplicity, as no continuous control signaling or coordination is required. However, since static slicing lacks the flexibility to adapt to varying traffic loads, it often leads to inefficient resource utilization. Moreover, determining the optimal fixed allocation among slices is a very challenging task.
  • Dynamic Slicing, where operators are able to dynamically design, deploy, customize, and optimize the slices according to the service requirements or conditions in the network. This approach enhances flexibility, resource efficiency, and responsiveness, since it allows the dynamic deployment or expansion of slices whenever needed. Moreover, predictive or adaptive mechanisms can further optimize allocation by foreseeing demand variations. However, dynamic slicing increases system complexity and requires accurate traffic prediction and orchestration.
  • Semi-Static or Semi-Dynamic Slicing, where part of the resources is allocated statically and can therefore be guaranteed, while the remaining part is allocated dynamically. This hybrid model balances stability and adaptability; however, it cannot guarantee efficient resource utilization and still requires partial dynamic control mechanisms. A minimal allocation sketch illustrating this hybrid split is given after this list.
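As an illustration of the semi-static split described above, the following minimal Python sketch (not taken from any cited work) grants each slice its guaranteed share and divides the remaining capacity in proportion to current demand; the slice names and values are invented for the example.

```python
def semi_static_allocation(capacity, guaranteed, demand):
    """Split capacity among slices: a fixed guaranteed part plus a
    demand-proportional share of whatever capacity is left over.

    capacity   -- total resource units (e.g., PRBs or Mbps)
    guaranteed -- dict slice_id -> statically reserved units
    demand     -- dict slice_id -> current additional demand in units
    """
    allocation = dict(guaranteed)                   # static part, always granted
    leftover = capacity - sum(guaranteed.values())  # pool shared dynamically
    total_demand = sum(demand.values())
    for slice_id, d in demand.items():
        if leftover > 0 and total_demand > 0:
            allocation[slice_id] += leftover * d / total_demand
    return allocation

# Example: 100 units, three slices with fixed guarantees and fluctuating demand.
print(semi_static_allocation(
    capacity=100,
    guaranteed={"eMBB": 30, "URLLC": 20, "mMTC": 10},
    demand={"eMBB": 25, "URLLC": 5, "mMTC": 10},
))
```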
In addition, depending on the use case application, NS can be applied as shown below []:
  • Vertically, where each slice is customized to meet the specific requirements of different vertical industries or applications (e.g., automotive, smart grid, healthcare). In this context, vertical industries collaborate with the core network to address diverse QoS and QoE demands across different use cases []. Thus, this approach ensures strong performance isolation; however, it comes at the cost of increased orchestration complexity.
  • Horizontally, in which the resources are distributed evenly across different slices to meet the diverse needs of various users or services within the network. Thus, this approach promotes fair resource sharing and scalability; however, it cannot guarantee QoS differentiation and latency assurance.
Based on the ownership, NS can also be classified into the following categories []:
  • Local 5G Operator (L5GO) Slicing, which allows the local 5G operator to create and manage network slices that meet the specific requirements of various use cases, such as hospitals, universities, and industrial environments. This approach supports localized control and enhances performance isolation, but in many countries, it may face limitations due to spectrum availability and regulatory constraints.
  • Mobile Network Operator (MNO) Slicing, which involves the MNO managing the entire slice life-cycle. This allows the MNO to achieve centralized control and efficient resource allocation; however, it comes at the cost of increased capital and operational expenditures and greater complexity in orchestration, tenant isolation, and SLA assurance.
Finally, according to the network domain wherein slicing is applied, NS can be classified into the following categories:
  • Radio Access Network (RAN) Slicing, which focuses on virtualizing radio resources such as spectrum, scheduling, and other RAN components, including base stations and antennas. It enables the flexible sharing of radio resources among slices, but it faces challenges in allocating resources in real time, managing interference, and coordinating between base stations.
  • Core Slicing, which involves partitioning the CN elements and functions, such as routers, gateways, and servers, to create virtualized network slices. It enables customized service paths and the independent operation of each slice, but it faces issues regarding scaling, security, and keeping slices properly isolated.
  • Transport Slicing, which encompasses partitioning the network transport infrastructure, including optical fibers, switches, and routers, into separate virtual slices. It enables differentiated and personalized 5G services tailored to specific applications and user needs. However, integration with legacy networks and coordination across multiple transport layers remain significant challenges.
  • E2E Slicing, which refers to the orchestration and coordination of network slices across the entire network, from the core to the edge, to provide seamless connectivity and service delivery to end-users. It enables seamless connectivity and stable service quality but is hard to implement, as it requires coordination across domains, interoperability among different vendors, and complex service management.

3.3. Network Slicing Technology Enablers

The feasibility of implementing and applying the NS concept derives from the elaboration of several technologies that allow the softwarization and virtualization of the network. This subsection overviews the key enabler technologies of NS, namely SDN, NFV, Multi-access Edge Computing (MEC), and Cloud Computing [].

3.3.1. Software-Defined Networking (SDN)

The SDN approach enables networks to orchestrate and control applications/services in a more granular, E2E manner, bringing intelligence, flexibility, and centralized control with a global view of the entire network. Moreover, SDN makes it possible to respond rapidly to changing network conditions, as well as to business-market and end-user needs. SDN bridges the gap between service provisioning and network management by introducing a virtualized control plane that enables intelligent management decisions across network functions. It uses standardized south-bound interfaces (SBIs) to make network control directly programmable [].

3.3.2. Network Function Virtualization (NFV)

With NFV, certain network functions (NFs) can be virtualized and run on top of commodity hardware. These NFs can be easily deployed and dynamically allocated, independently of the software and hardware found in traditional vendor offerings. This enables network resources to be efficiently allocated to Virtual Network Functions (VNFs) through dynamic scaling to achieve Service Function Chaining (SFC). NFV also ensures optimized resource provisioning to end-users with high QoS, as well as VNF operation performance, by enforcing minimum latency and failure-rate thresholds [].

3.3.3. Multi-Access Edge Computing (MEC) and Cloud Computing

By leveraging MEC, data is processed near the point of generation and consumption, close to end-users. This approach provides cloud computing capabilities to application and content providers, along with an IT service environment at the mobile network’s edge. As a result, the network can deliver ultra-low latency services essential for business-critical applications while also supporting interactive user experiences in high-traffic areas. In 2012, Cisco coined the term fog computing [] especially for IoT architectures. According to Yi et al. [] “Fog Computing is a geographically distributed computing architecture with a resource pool which consists of one or more ubiquitously connected heterogeneous devices (including edge devices) at the edge of network and not exclusively seamlessly backed by Cloud services, to collaboratively provide elastic computation, storage and communication (and many other new services and tasks) in isolated environments to a large scale of clients in proximity”. Depending on the exact location of computation, alternative versions, like Mist Computing [], have also been proposed.

3.4. Network Slice Life-Cycle Phases

According to 3GPP TS 28.530 [], the management aspects of NS can be distinguished into four phases, namely Preparation, Commissioning, Operation, and Decommissioning. Every task relevant to slice design, instantiation, management, etc., is included in one of the aforementioned phases. Table 3 summarizes the tasks of each phase.
Table 3. Tasks related to each slice life-cycle phase. Adapted from ref. [].

3.4.1. Preparation Phase

Before the creation/instantiation of a network slice, the following preparatory tasks must be carried out. Slice Design (SD) involves the design of a requirements template that should be fulfilled by the new slice. These requirements are derived from the services provided to the tenants or to the users that will be associated with the new slice. Capacity Planning (CP) involves the estimation of the network load that the new slice should be able to handle, in conjunction with the overall network traffic. Network Function Evaluation (NFE) involves the association of specific VNFs with the slice to be created and the assessment of the performance of these functions. Network Environment Preparation (NEP) involves the necessary preparations and decision making to be performed prior to the instantiation of the new slice. This may include the modification of network parameters affecting already running slices and the acceptance or rejection of a new slice request, based on the current network environment status, i.e., the admission control, one of the most important decision-making tasks in the Preparation phase.

3.4.2. Commissioning Phase

After the completion of the previous tasks, network slices should be created according to the slice requests that have been admitted in the previous phase. Moreover, in this phase, the following tasks have to be performed. The Necessary Resources Reservation (NRR) task is performed first in order to ensure that the essential amount of resources for the new slice will be available upon its creation. Then, the Slice Instance Creation (SIC) task is performed in order to create the Network Slice Instance (NSI) of the new slice. Finally, the Resources Initial Allocation and Configuration (RIAC) task is performed, during which the reserved resources are allocated to the new slice.

3.4.3. Operation Phase

Following the Commissioning phase, the slice is ready to provide services, entering the Operation phase. The first task to be performed in this phase is the Slice Activation (SA) of the NSI, a decision-making task that checks whether or not the NSI is prepared to support communication services. Once activated, the reserved resources are allocated to the network slice, enabling it to schedule these resources and provide services to the subscribed end-users.
After the activation, the tasks of Monitoring (MON) and Performance Reporting (PREP) are carried out in a continuous manner, as they are intended to supervise specific Key Performance Indicators (KPIs) and QoS fulfillment. Furthermore, in order to maintain the efficiency of the allocated resources, the Resource Capacity Planning (RCP) task calculates the slice resource usage and the performance relative to the allocated resources. This task may also take into account traffic predictions in order to estimate forthcoming performance. The outcome of this task may trigger the slice parameter modification (SPM) task, which generates modification policies that may affect certain slice parameters. The SPM task may also be triggered by new network slice requirements or new supervision/reporting results. AI/ML methods have been proposed to enhance all the aforementioned tasks of this phase.
Finally, the Operation phase includes the deactivation task, which is initiated by reports indicating that a specific network slice is no longer required. This task renders the NSI inactive and stops the provision of communication services to end-users associated with that slice.

3.4.4. Decommissioning Phase

In this phase, all reserved resources that were dedicated to the specific slice are released. Meanwhile, resources that were shared among the deactivated slice and other slices are modified to remove any configuration specific to the deactivated slice.

3.5. Network Slicing Empowered with AI/ML

The complexity of creating, operating, and managing a sliced network is very high considering that multiple network slices must coexist on top of the same infrastructure, sharing common available physical resources, while being logically isolated, independent and operated by different tenants. This renders traditional human-driven network management approaches inadequate and turns machine-driven management approaches into the only option []. This is the reason why AI and more specifically ML approaches have been widely proposed in the literature to empower NS.
Furthermore, the 5G System (5GS) has been fully specified in 3GPP TS 23.501 [] and consists of various Network Functions (NFs) for both the user and the control plane, each of which is responsible for implementing specific functionalities. For instance, one of the most prominent NFs is the Access and Mobility Management Function (AMF), a key control plane function of the 5G CN, which is responsible for registering UEs to the network, authenticating them, and authorizing their access to services. Each NF has appropriate service interfaces for exchanging the necessary information with other NFs or other sources. For instance, the AMF utilizes the Namf and N1 interfaces in order to exchange information with other NFs and the UE, respectively.
An NS-related NF in the 5GS is the Network Slice Selection Function (NSSF), which is responsible for selecting the appropriate slice for a UE. The AMF exchanges UE-related data with the NSSF through the N22 interface and, after processing them, the NSSF informs the AMF about the slice that best fits the UE. This is, in brief, the slice selection process, one of the processes performed during the Operation phase of the slice life-cycle. Every process that needs to be carried out during the slice life-cycle is part of a corresponding NF, which exchanges the necessary input and output data through the corresponding interfaces.
The elaboration of ML methods has been widely proposed in order to tackle the challenges and solve the problems arising during the application of the different NFs related to almost all the tasks of the first three phases of the network slice life-cycle, which were described in the previous subsection. Briefly, in the Preparation phase, ML has been proposed to perform slice feature selection and extraction, traffic and congestion prediction, VNF embedding or placement, and admission control. In the Commissioning phase, ML has been proposed for adaptive resource reservation and for both SIC and RIAC. Finally, in the Operation phase, ML has been proposed for solving decision-making and optimization problems, for making various predictions, and for allocating resources. In Section 5, these applications are presented in detail.

4. ML Methods

ML is a subset of AI that focuses on teaching machines to process data efficiently. Depending on the nature of the data used for training, ML techniques can be broadly categorized into four main groups: Supervised Learning (SL), Unsupervised Learning (UL), Semi-Supervised Learning (SSL), and Reinforcement Learning (RL) [,]. However, besides these well-known categories, which are based on the nature of the feedback provided to the algorithm during the learning process, the literature contains many other classifications following different criteria, such as the purpose for which ML is used, the manner in which the model evolves in response to feedback [], or the approach used to exchange the processed dataset []. In this survey, we adopt the classification based on the nature of the data used for training. Furthermore, Neural Networks (NNs) cannot be confined to a single category, as they serve as fundamental tools applicable across the four aforementioned categories. For this reason, NNs are positioned here within both SL and RL, with specific variants, such as Feed-Forward NNs (FFNNs), included in both categories.
It is important to note that this work focuses solely on presenting the ML methods proposed in the literature to address problems related to NS applications in 5G and Beyond 5G Networks. A detailed description of the functionality of these methods is beyond the scope of this study. Readers are encouraged to consult the respective referenced articles for more comprehensive information on the functionality of these methods and their various intricacies.

4.1. Supervised Learning

In SL techniques, labeled training datasets are used to build models. This process involves providing the algorithm with sample data pairs of inputs and desired outputs as training data. The primary objective of these techniques is to construct a function that maps the inputs to the outputs. Consequently, given new inputs, this function will be capable of estimating the corresponding unknown outputs. The output of the function can be either continuous numeric values (in the case of regression) or class labels for the input values (in the case of classification) [].
Indicative methods of this category include Support Vector Machine (SVM), Least Absolute Shrinkage and Selection Operator (LASSO), Random Forest (RF), k-Nearest Neighbor (KNN), Gradient Boosting Decision Tree (GBDT), and several different Artificial Neural Networks (ANNs) like the Multi-Layer Perceptron (MLP), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), etc.
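As a toy illustration of how such supervised classifiers are typically used in the NS literature (e.g., mapping incoming service requests to slice types), the following sketch trains a Random Forest on synthetic request features; the feature set, the labeling rule, and all numbers are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic service requests: [required_rate_mbps, max_latency_ms, device_count]
X = rng.uniform([1, 1, 1], [1000, 100, 10000], size=(600, 3))
# Illustrative labeling rule: tight latency -> URLLC, many devices -> mMTC, else eMBB
y = np.where(X[:, 1] < 5, "URLLC", np.where(X[:, 2] > 5000, "mMTC", "eMBB"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```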

4.2. Unsupervised Learning

In UL techniques, unstructured data or data without labels are provided to the learning algorithm as input. The algorithm is tasked with identifying patterns or structures within the input data, even though no explicit feedback is provided. Such techniques are mainly used for data clustering, dimensionality reduction, and density estimation [,].
Algorithms in this category include K-means, Spectral Clustering, Principal Component Analysis (PCA), Sparse Autoencoder (SAE), and Expectation Maximization (EM).
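As a minimal example of the clustering use mentioned above (grouping services with similar requirements into candidate slices), the following sketch applies K-means to synthetic per-service features; the features, the number of clusters, and the data are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic per-service features: [throughput_mbps, latency_ms, connection_density]
services = rng.uniform([1, 1, 10], [500, 50, 5000], size=(300, 3))

# Scale features so that no single dimension dominates the distance metric.
scaled = StandardScaler().fit_transform(services)
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(scaled)

for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} services")
```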

4.3. Semi-Supervised Learning

SSL techniques combine both labeled and unlabeled data for training. An enormous amount of unlabeled data, together with a small amount of labeled data, is mainly used to build an appropriate data classification model [].
An algorithm in this category that is related to NS is the Variational Autoencoder (VAE) for Semi-Supervised Learning.
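The NS works cited here rely on VAEs, which are too involved to sketch briefly; as a lighter illustration of the same SSL principle (learning from a few labels plus many unlabeled samples), the following sketch uses scikit-learn's self-training wrapper on synthetic data, with unlabeled samples marked by -1.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=8, random_state=2)

# Pretend only ~5% of the samples are labeled; the rest are marked with -1.
y_partial = y.copy()
unlabeled = np.random.default_rng(2).random(len(y)) > 0.05
y_partial[unlabeled] = -1

# Self-training: the base SVM iteratively pseudo-labels confident unlabeled samples.
model = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
model.fit(X, y_partial)
print("accuracy against all true labels:", model.score(X, y))
```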

4.4. Reinforcement Learning

In RL techniques, the entities of the agent (i.e., the learning algorithm) and the environment are introduced. The learning process is based on a series of interactions between these entities. Specifically, the agent first receives percepts containing the state of the environment (or system). The agent then performs an action that results in a change in the environment’s state. Based on this new state, the agent receives feedback in the form of a reward or a penalty. This action and feedback process is iterated until the agent learns to navigate the environment effectively. The two key factors that characterize RL techniques are the transition model (which defines how the environment transitions from one state to the next) and the policy (which determines the action taken in a given state) [].
Algorithms included in this category are the Markov Decision Process (MDP), the State–Action–Reward–State–Action (SARSA), Q-Learning (QL), the Linear Function Approximation (LFA), the Multi-Armed Bandit (MAB), etc.
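Since QL in particular recurs throughout the NS works surveyed in Section 5, a minimal tabular sketch of its update rule is given below; the two-state "slice load" environment and its reward values are a made-up toy, not a model from any cited work.

```python
import random

# Toy setting: state = network load level, action = accept/reject a slice request.
states, actions = ["low_load", "high_load"], ["accept", "reject"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    """Made-up dynamics: accepting under high load risks an SLA penalty."""
    if action == "accept":
        reward = 1.0 if state == "low_load" else random.choice([1.0, -2.0])
    else:
        reward = 0.0
    return random.choice(states), reward

state = "low_load"
for _ in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```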

5. Phase-Related ML Applications

The exploitation of ML methods is a key enabler of the NS concept in 5G. Consequently, the majority of research works in the field of NS refer to AI/ML applications as a cornerstone for making the NS concept feasible. Some key applications of ML techniques in the context of NS include classification, prediction, clustering, and pattern recognition.
In this section, we summarize, for each slice life-cycle phase, the applications of AI/ML that have been proposed in the literature to address potential challenges that arise during NS implementation.

5.1. ML in Preparation Phase

Various ML techniques can be applied in different tasks of the Preparation phase. Singh et al. [] used SVM and K-Means for slice design (SD). More specifically, the authors, in their Network Sub-Slicing Framework, applied SVM for feature selection and K-Means for grouping similar services, for use cases such as IoT. Experimental results showed improved performance in terms of latency and energy efficiency.
In addition to slice design, ML methods can be incorporated during the Network Environment Preparation (NEP) task. To this end, Bega et al. [] analyzed 5G infrastructure markets from the perspective of a mobile network operator, focusing on how the operator should manage network slice requests to maximize revenue while ensuring that existing slice guarantees are upheld. They modeled this challenge as a Semi-Markov Decision Process (SMDP) and proposed a QL algorithm for slice admission control. Their evaluation demonstrates that the QL algorithm performs nearly as well as the optimal Value Iteration policy in terms of revenue and gain, and it significantly outperforms the optimal policy in perturbed scenarios due to its adaptive nature. Moreover, in ref. [], to deal with scalability issues, the authors extended their previous work by introducing the Network-slicing Neural Network Admission Control (N3AC) algorithm, a DRL algorithm that uses two FFNNs to calculate rewards for accepting or rejecting requests. Its goal is to maximize the number of accepted slice requests while maintaining QoS for existing slices. Simulation results demonstrated that N3AC achieves performance close to the optimal policy. Bakri et al. [] focused on maximizing the Infrastructure Providers’ (InPs) revenue while minimizing the SLA violation penalties. Specifically, the authors evaluated the QL, the Deep QL (DQL), and the Regret Matching (RM) algorithms’ ability to identify optimal admission policies and assessed their performance in both offline and online scenarios, which is crucial for practical deployment. The results indicated that QL and DQL perform better when trained offline before online deployment, whereas RM outperforms both QL and DQL in real-time online usage. Raza et al. [] proposed a slice admission strategy that increases InPs’ revenue by accepting as many slice requests as possible while minimizing SLA violation penalties by rejecting requests that could degrade other services. To achieve this goal, they use RL (i.e., an ANN). Performance evaluation against non-ML approaches, including static and threshold-based heuristics, showed that their RL-driven admission policy outperforms the other approaches in terms of overall loss, which includes potential revenue loss from rejected services and losses from service degradation when resource slices cannot be scaled up as needed. Business profit is also the focus in ref. []. Specifically, the authors focused on the Admission Control for Service Federation (ACSF) problem, where the admission controller selects the domain for service deployment or rejects it to maximize long-term profit while accounting for federation costs. To deal with this problem, the authors applied a specific type of RL, called average reward learning, along with QL. The results showed that the proposed method outperforms QL, which struggles due to its reliance on the reward discount factor. Sciancalepore et al. [] introduced Online Network Slicing (ONETS), an online slice broker that approves or denies slice requests by analyzing past data and using resource overbooking to maximize revenue and minimize SLA violations. They modeled the problem as a Budgeted Lock-up Multi-Armed Bandit (BLMAB) and enhanced the Upper Confidence Bound (UCB) approach to reduce complexity. Evaluation showed that ONETS outperforms non-ML-based solutions in system utilization, multiplexing gain, and SLA violation reduction. Rezazadeh et al. 
[,] worked on the joint slice admission control and resource allocation problem, which lies in both the Preparation and Commissioning phases; thus, their work is presented in Section 5.2.
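The bandit view of admission adopted by ONETS can be illustrated with a plain UCB1 sketch; the candidate overbooking levels and their hidden reward probabilities below are invented for the example, and the budget and lock-up constraints of the actual BLMAB formulation are omitted.

```python
import math
import random

# Toy arms: candidate overbooking levels applied to incoming slice requests.
arms = [0.0, 0.1, 0.2, 0.3]                   # fraction of resources overbooked
true_reward_prob = [0.50, 0.62, 0.58, 0.40]   # hidden, invented payoff per arm

counts = [0] * len(arms)
values = [0.0] * len(arms)

for t in range(1, 5001):
    # UCB1: play each arm once, then pick the arm with the highest upper bound.
    if 0 in counts:
        arm = counts.index(0)
    else:
        arm = max(range(len(arms)),
                  key=lambda i: values[i] + math.sqrt(2 * math.log(t) / counts[i]))
    reward = 1.0 if random.random() < true_reward_prob[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # running mean estimate

print("estimated arm values:", [round(v, 2) for v in values])
print("most-played overbooking level:", arms[counts.index(max(counts))])
```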
On top of these studies, in the literature, there have been works that apply ML for both NEP and Network Function Evaluation (NFE) tasks. Guan et al. [] presented a hierarchical resource management framework for customized slicing where multiple InPs manage multi-tenant E2E slices. The authors introduced a global resource manager that, within its functional plane, deploys a Service Broker responsible for the admission control of real-time slice requests from multiple tenants using the DQL algorithm. The framework aims to optimize admission control decisions and maximize the average reward by reserving resources for requests that generate higher revenue for the InP. Evaluation results demonstrated that the proposed intelligent framework achieves better service quality satisfaction compared to non-intelligent approaches. Sulaiman et al. [] proposed a multi-agent DRL solution to address NS and admission control in 5G Cloud-RAN (CRAN), aiming to improve long-term InP revenue. Their multi-agent DRL approach uses separate reward functions for each agent based on admission control and slicing policies, which allows both agents to work synergistically without interference. The evaluation results showed that the proposed solution outperforms greedy and single-agent DRL approaches in terms of InP revenue and average available bandwidth. Yan et al. [] tackled the automatic Virtual Network Embedding (VNE) problem, where Virtual Network Requests (VNRs) must be mapped onto substrate network resources. Each VNR requires specific resources and, if available, these resources are allocated; otherwise, the VNE algorithm may reject or postpone the request. The authors proposed a DRL-based VNE scheme that uses a Graph Convolutional Network (GCN) for feature extraction and uses Asynchronous Advantage Actor–Critic (A3C) for parallel policy gradient training, optimizing the VNE policy with considerations in terms of request acceptance, long-term revenue, load balance, and policy exploration. Simulations showed that the proposed A3C-GCN scheme significantly improves the acceptance ratio and average revenue compared to other state-of-the-art methods. Rkhami et al. [] also addressed the VNE problem in core slices using DRL. They proposed a two-step approach: first, applying a heuristic to find a sub-optimal solution, then optimizing with DRL. In their model, called Improving the Quality of VNE Heuristics (IQH), the system states are represented as heterogeneous graphs using a Relational GCN (RGCN), and action probabilities are calculated with an MLP. The goal is to improve the revenue-to-cost metric. Simulations showed significant improvements while using the First-Fit and Best-Fit heuristics. Esteves et al. [] proposed a Heuristically Assisted DRL (HA-DRL) approach for the Network Slice Placement (NSP) problem, using a GCN for feature extraction and A3C for solving the NSP. HA-DRL aims to minimize resource consumption, maximize slice acceptance, and balance node load. It incorporates a modified Actor Network with a heuristic layer based on the Power of Two Choices (P2C) algorithm. HA-DRL was compared against pure DRL and P2C, showing faster convergence, near-real-time placement, and better acceptance ratios. Kibalya et al. [] tackled the slice deployment challenge across multi-provider infrastructures, focusing on the candidate search problem. In response, they proposed the Candidate Search Algorithm (CaSA), which filters feasible InPs based on network topology and slice constraints. 
A DRL neural network is then used to select the optimal InP set, maximizing the revenue-to-cost ratio for slice deployment.
Another task in which ML can be incorporated in the Preparation phase is Capacity Planning (CP). Aboeleneen et al. [] proposed the Error-aware, Cost-effective and Proactive NS Framework (ECP) for CP, predicting traffic levels and proactively creating slices for increased demand. ECP operates in two phases: the first uses prediction models like the RF, the AutoRegressive Integrated Moving Average (ARIMA), and the Long Short Term Memory (LSTM) to forecast future load, while the second uses a DRL method with Proximal Policy Optimization (PPO) to create cost-effective slices. Simulations showed that LSTM provides the highest accuracy, and ECP allocates lower-cost slices efficiently. ECP’s first phase aligns with the Preparation phase, and the second phase aligns with the Commissioning phase. Dandachi et al. [] proposed a Cross-slice Admission and Congestion Control algorithm, dealing with both NEP and CP tasks. Their solution combines admission and congestion controllers using the SARSA algorithm with LFA in order to optimize resource utilization and system performance by reducing the rejection of best-effort slice requests and increasing the acceptance rate of guaranteed QoS slices.
Finally, in the Preparation phase, according to ref. [], another task in which ML can be applied is the prediction of service demand. Techniques like RNNs can analyze historical data to make accurate predictions of service demand for a slice, which can then inform decision making during the Commissioning phase.
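In the spirit of the CP studies above, the following sketch forecasts a synthetic, daily-periodic slice load trace with a classical ARIMA model from statsmodels; the model order and the trace itself are illustrative assumptions rather than settings from any cited work.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic hourly slice load over two weeks: daily periodicity plus noise.
rng = np.random.default_rng(3)
hours = np.arange(24 * 14)
load = 100 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

# Fit a simple ARIMA model on the history and forecast the next 24 hours.
fitted = ARIMA(load, order=(2, 0, 2)).fit()
forecast = fitted.forecast(steps=24)
print("peak predicted load in the next 24 h:", round(float(forecast.max()), 1))
```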
Table 4 summarizes the related tasks, the allocation, use case, and network domain types, the ML category, the method proposed, and whether that method was tested, for each of the referred works of the Preparation phase.
Table 4. Summary of ML-based NS in the Preparation phase (SD: slice design, NFE: Network Function Evaluation, NEP: Network Environment Preparation, CP: Capacity Planning).

5.2. ML in Commissioning Phase

The main tasks to be carried out during the Commissioning phase involve slice creation, including resource reservation and orchestration, service grouping, and resource allocation. Once a slice request is accepted and preparation for its creation begins, the necessary resources must first be reserved to ensure that they are available for allocation to the new slice later.
According to the network services that each slice should provide, the appropriate VNFs should be assigned to it. For this placement, AI/ML methods, like Deep Learning (DL), can be applied [] to ensure a dynamic approach that adapts to time-varying service demands while meeting service delay requirements. Additionally, the appropriate resources must be reserved for each slice, prior to its creation, based on the service demands that each slice is expected to handle. Since these demands are influenced by time-varying data traffic loads, resource reservation should be adaptive to these variations. To address this, RL methods, such as Deep Deterministic Policy Gradient (DDPG), can be employed for efficient resource management.
Extending the aforementioned VNF placement task, the authors of ref. [] state that the placement of VNFs on slices needs to be carried out autonomously, as this is a key aspect of Zero-touch network and Service Management (ZSM) in 5G and beyond networks. To this end, they proposed a mechanism called SCHEMA, a ZSM scheme that emphasizes the scalability of multi-domain networks and the minimization of service latency. The complex multi-domain slice service placement problem is modeled as a Distributed MDP, and a Distributed RL approach is employed to solve it by placing an RL agent in each domain, thereby orchestrating the VNFs of each domain independently of the RL agents in other domains. The proposed scheme was evaluated and shown to significantly reduce the average service latency compared against a centralized RL solution.
The last task of the Commissioning phase, denoting the creation of the network slice, is the initial allocation of network resources to the new slice. The resources to be allocated vary and include communication (radio), caching, and computing resources. In this resource allocation process, AI/ML methods can be applied [,]. The initial allocation of resources takes place in this phase to enable the slice, even though the amount of allocated resources may alter according to the needs of the slice in the next phase.
In ref. [], Zhang et al. addressed the problem of vehicular multi-slice optimization, focusing on slice allocation strategies. Specifically, their proposal aims to calculate the priority queue of slice requests waiting to be deployed on the RAN, taking into account the system status, available spectrum resources, and the deployment delay constraints of each request. The optimization objective is to select a priority queue that minimizes average latency while maximizing overall service utility. To solve this optimization problem, the authors employed QL and developed an algorithm called the Intelligent Vehicular Multi-slice Optimization (IVMO) algorithm. To demonstrate the effectiveness of their approach, they conducted simulations and compared IVMO’s performance—measured in terms of the number of requests in the queue and service utility—against two other non-ML-based methods (fair allocation and greedy algorithm). According to the simulation results presented, the proposed IVMO algorithm achieves better performance on both metrics.
In ref. [], Quang et al. worked on the VNF-Forwarding Graph (VNF-FG) embedding problem, which involves deploying service requests on the CN by allocating resources to meet the service requirements in terms of QoS, while adhering to the constraints of the underlying infrastructure. Each VNF-FG request consists of VNFs connected by Virtual Links (VLs). VNFs require specific computing resources, such as CPU, RAM, and storage, while VLs are characterized by network-oriented metrics, e.g., bandwidth, latency, and packet loss rate. The authors formulated the VNF-FG allocation problem as an MDP with appropriate states and actions, aiming to maximize the number of accepted VNF-FGs. A VNF-FG is considered accepted only if all its VNFs and VLs are allocated and the QoS requirements are satisfied. To address this complex problem, the authors proposed an enhanced DDPG algorithm called Enhanced Exploration DDPG (E2D2PG). E2D2PG incorporates the concepts of “replay memory” and the Heuristic Fitting Algorithm (HFA) to determine the embedding strategy. To evaluate the performance of their proposed algorithm, the authors conducted simulations comparing E2D2PG with the standard DDPG. The results demonstrated that E2D2PG outperforms DDPG in terms of acceptance ratio and the percentage of deployed VLs.
In ref. [], Zhao and Li proposed an RL-based approach for the cooperative mapping of slice requests to physical nodes and links. Their proposed algorithm, named RLCO, aims to deploy network slice requests to the CN by determining the optimal slicing policy. This policy cooperatively maps requests to physical nodes and links, considering the real-time resource status of the substrate network while maximizing the request acceptance rate and the earning-to-cost ratio. In the proposed mapping scheme, RLCO handles node mapping, Dijkstra’s algorithm is used for link mapping, and an RL-based iterative process is applied to identify the mapping strategy that maximizes a reward function. This reward function accounts for the cross-impact among benefits, costs, nodes, and links.
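To illustrate the link-mapping step that RLCO delegates to Dijkstra's algorithm, the following sketch routes a single virtual link over a toy substrate graph with networkx; the topology, the latency weights, and the already-decided node mapping are all invented for the example.

```python
import networkx as nx

# Invented substrate network: links weighted by latency (ms).
substrate = nx.Graph()
substrate.add_weighted_edges_from([
    ("A", "B", 2), ("B", "C", 3), ("A", "D", 5), ("D", "C", 1), ("B", "D", 2),
])

# Assume the node-mapping stage has placed two VNFs of a slice on nodes A and C.
node_mapping = {"vnf1": "A", "vnf2": "C"}

# Link mapping: route the virtual link vnf1 -> vnf2 over the shortest substrate path.
path = nx.dijkstra_path(substrate, node_mapping["vnf1"], node_mapping["vnf2"],
                        weight="weight")
latency = nx.dijkstra_path_length(substrate, node_mapping["vnf1"],
                                  node_mapping["vnf2"], weight="weight")
print("embedded path:", path, "| total latency:", latency, "ms")
```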
There also exist approaches that tackle problems lying in both the Preparation and Commissioning phases. For instance, the ECP NS framework proposed in ref. [], which was already presented in Section 5.1, has two phases, the first of which belongs to the Preparation phase, while the second belongs to the Commissioning phase. During the second phase, a PPO method is used to create cost-effective slices, based on the forecast load derived in the first phase. Another example is presented in ref. [], where the authors focused on a CRAN joint slice admission control and resource allocation problem. They first formulated this problem as an MDP and then applied an advanced continuous DRL method, called Twin Delayed DDPG (TD3), to solve it. The TD3 algorithm, based on the state–actor–critic model, aims to optimize policies over time, enabling the Central Unit to autonomously reconfigure computing resource allocations across slices. This approach minimizes latency, energy consumption, and the instantiation overhead of VNFs for each slice.
In their later work [], the authors tackled the same joint problem in a B5G RAN environment. In particular, they proposed an NS resource allocation algorithm based on a lifelong zero-touch framework. This algorithm, termed prioritized twin delayed distributional DDPG (D-TD3), differs from TD3 in that it employs distributional return learning instead of directly estimating the Q-value. Here, the Q-value is represented as a distribution function of state–action returns. Additionally, the authors incorporated a replay buffer, allowing D-TD3 agents to memorize and reuse past experiences. For evaluation purposes, the proposed algorithm’s performance was assessed in terms of admission rate, latency, CPU utilization, and energy consumption against other state-of-the-art DRL approaches, such as TD3, DDPG, and Soft–Actor–Critic (SAC). The results demonstrated that D-TD3 outperforms these methods, achieving superior results in all evaluated metrics.
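Both E2D2PG and D-TD3 rely on a replay memory to reuse past transitions; the sketch below is a generic, minimal buffer of the kind commonly paired with such agents, not the implementation used in the cited works.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions are dropped first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between consecutive steps.
        batch = random.sample(self.buffer, batch_size)
        return list(zip(*batch))               # one tuple of values per field

    def __len__(self):
        return len(self.buffer)

# Usage: an agent pushes transitions during interaction and samples mini-batches later.
buf = ReplayBuffer(capacity=1000)
buf.push(state=[0.1, 0.2], action=0, reward=1.0, next_state=[0.2, 0.1], done=False)
```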
Table 5 summarizes the related tasks, the allocation, the use case and network domain types, the ML category, the method proposed, and whether that method was tested for each of the referred works in the Commissioning phase.
Table 5. Summary of ML-based NS in the Commissioning Phase (SIC: Slice Instance Creation, RIAC: Resources’ Initial Allocation and configuration, NRR: Necessary Resources Reservation).

5.3. ML in Operation Phase

The Operation phase has attracted the most attention from the research community for applying ML methods to its related tasks. This is because the tasks in this phase have the greatest impact on the performance and effectiveness of NS, requiring complex real-time computations. For instance, ultra-low latency (equal to or less than 1 ms for URLLC applications such as autonomous driving or industrial automation), high reliability (at least 99.999% availability for mission-critical services), massive connectivity (support for up to 1 million devices per square kilometer in mMTC scenarios such as smart cities or sensor networks), and guaranteed high throughput (multi-Gbps rates for eMBB services, like AR/VR and UHD video streaming) have to be achieved. These requirements are highly dynamic and context-dependent, making real-time adaptation essential.
The volume of past research works in this phase is significant enough to necessitate grouping the works by broader task categories rather than the ones explicitly listed in Table 3. As such, each group encompasses the multiple tasks mentioned in Table 3. The resulting groups are as follows:
  • Decision Making–Optimization–Classification
  • Monitoring–Prediction
  • Resource Allocation

5.3.1. Decision Making–Optimization–Classification

This group involves works that deal with problems related to the tasks of slice activation (SA), slice parameter modification (SPM), Performance Reporting (PREP) and Resource Capacity Planning (RCP).
More specifically, in SA, the decision on whether a slice instance should be activated is taken. For example, Phyu et al. [] proposed a MAB-based slice activation/deactivation approach that considers the user QoS and the network energy consumption. More specifically, the authors formulated the slice activation/deactivation problem using MDP, and then used the MAB approach to identify the near-optimal slice activation/deactivation decision while ensuring the QoS requirements for individual users are maintained. Furthermore, the authors extended their previous work in ref. [] by considering the joint slice activation/deactivation and the user association problem. The proposed multi-agent fully cooperative decentralized framework (ICE-CREAM) uses the Decentralized Partially Observable MDP (Dec-POMDP) to formulate the problem and the multi-agent partially observable state-aware MAB (POMAB) to find the near-optimal solution to the above problem.
Slice selection is another type of decision-making task performed in this phase. In ref. [], the authors proposed an ML model based on a CNN to determine the most suitable network slice for a device to associate with. Their model, DeepSlice, first predicts the traffic load for each slice (as discussed in the next subsection). Then, based on the requested service parameters, it employs a DLNN to make the slice selection decision. In ref. [], Xi et al. considered the resource slicing concept to maximize Mobile Virtual Network Operators’ (MVNOs) benefits from the shared RAN infrastructure. They proposed a mechanism for assigning users to slices, aiming to optimize the MVNO’s long-term benefits while considering resource availability. They modeled the problem as an SMDP and employed a DRL-based algorithm to address the curse of dimensionality and enable online optimization. Specifically, they used Deep Q-Network (DQN) and compared its performance with conventional QL and Random Resource Allocation (RRA). Their results showed that DQN converges significantly faster than QL. Moreover, in high-user scenarios, DQN achieves higher network throughput and total MVNO utility compared to both QL and RRA. Shome et al. [] considered 3GPP Rel. 16, which enables User Equipment (UE) to connect to up to 8 network slices, an aspect overlooked in related works. They proposed a DRL-based slice selection and bandwidth allocation approach that considers Quality of Experience (QoE), price satisfaction, and spectral efficiency. In this approach, each virtual base station is assigned a DQN agent, which determines the slice(s) to which a user will connect, based on the service requirements and available resources. Compared to another DQN approach, the proposed method achieves better performance in terms of user satisfaction, convergence time, and bandwidth savings per user. In ref. [], Tang et al. focused on the context of intelligent vehicular networks, where the vehicles need to offload their intelligent tasks to the appropriate slices. These slices, called resource slices, must provide sufficient resources to meet the demands of the connected vehicle tasks. The authors proposed a Slice Selection-based Online Offloading (SSOO) algorithm (using a Deep Neural Network (DNN)) that leverages distributed intelligence to perform resource slice selection and vehicle assignment, taking into account the current system environment and aiming to minimize the system’s energy cost. To evaluate their proposed scheme’s performance, they compared it with three baseline schemes: ETCORA, DRL-CORA, and GA-NSS. The selected performance metrics included average device energy consumption, total energy consumption, task completion rate, and the number of tasks processed by slices. The experimental results demonstrated that the proposed SSOO algorithm outperforms the other algorithms.
Another decision-making task related to this phase is the allocation of user service requests to slices. In ref. [], Zhang et al. proposed a slicing model and a resource allocation scheme for the CN. The main challenge they aimed to address revolved around determining the slice in which a service request should be deployed to ensure a high transmission success rate. To achieve this, they proposed a Multi-output Classification Edge-based Graph Convolutional Network (MCEGCN), which is essentially an SL-based resource allocation model that predicts the slice on which a request should be deployed to maximize the number of successfully transmitted requests. The model consists of two Edge-based Graph CNNs (EGCNs) that can extract the spatial correlations of the network’s edge features. The authors chose this approach over a DNN because, as they state, DNNs may fail in prediction since they cannot exploit the graph structure of the network’s links. For evaluation, they compared the performance of their proposed model with other approaches, including a Multi-Layer Perceptron (MLP), another neural network-based model. The performance metrics included the deployment success rate and the total transmissions across different utilization rates, and the results showed that MCEGCN outperforms the others. In ref. [], Tsourdinis et al. presented a framework for slice allocation based on the services running on top of a cloud-native network, enabling the creation of a fully service-aware network for B5G applications. More specifically, this framework makes accurate decisions regarding slice allocation by classifying, in real time, the traffic exchanged by different users and by predicting the future connectivity needs of applications. The authors explored the adoption of several ML models for this task (FFNN, RNN, LSTM, Bidirectional LSTM (BiLSTM)) and concluded that a hybrid CNN-LSTM distributed learning scheme provides the best performance. Specifically, the CNN is applied to user-generated traffic for feature extraction, noise reduction, and dimensionality reduction. The processed information is then passed to the LSTM, which captures data patterns using memory components, enabling the forecasting of future data trends.
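A hybrid CNN-LSTM of the kind described above can be sketched in a few lines of Keras; the window length, feature count, number of traffic classes, and the random data are arbitrary assumptions, and the model is only meant to show how the convolutional front-end feeds the recurrent layer.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic traffic windows: 300 samples, 64 time steps, 4 features per step.
X = np.random.rand(300, 64, 4).astype("float32")
y = np.random.randint(0, 3, size=300)                     # 3 illustrative classes

model = keras.Sequential([
    layers.Input(shape=(64, 4)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local feature extraction
    layers.MaxPooling1D(pool_size=2),                     # dimensionality reduction
    layers.LSTM(32),                                      # temporal pattern memory
    layers.Dense(3, activation="softmax"),                # class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```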
In the Operation phase, there are tasks related to optimizing slice parameters, policies, or even profit-related values. To this end, the authors in ref. [] proposed a computing resource allocation optimization approach for a fog-RAN scenario, which includes a cluster of fog nodes coordinated with an edge controller (EC). The proposed DQN-based approach can learn the network’s dynamics and adapt to them by optimizing the computing resource allocation policy of the edge controllers. Mei et al. [] introduced an intelligent network slicing architecture for Vehicle-to-Everything (V2X) services, which includes a Slice Deployment Controller (SDCon) that adjusts the network slicing configuration scheme to ensure the QoS requirements of V2X services and reduce the network cost for mobile network operators. The authors proposed a DQN that incorporates LSTM to make the adaptation decisions, which improves the QoS compared to a non-ML-based approach. Zhang et al. [] proposed a DDPG-based pricing and resource allocation scheme in optical data centers, which encourages tenants, i.e., the service providers (SPs), to request resources in a load-balanced manner that reduces the cumulative blocking probability. Similarly, in ref. [], Lu et al. proposed a DRL-based scheme for an inter-datacenter optical network (IDCON). Simulation results confirmed that, compared to a traditional centralized approach, the proposed scheme increases the InPs’ profit and reduces computational complexity. In ref. [], Khodapanah et al. proposed a slice-aware radio resource management framework that ensures the slice KPIs’ fulfillment by fine-tuning the slice-related control parameters of the packet scheduler and the admission controller. For this purpose, the authors employed an SL-based ANN whose objective is to provide the appropriate set of control parameters that maximize the KPIs.
Several works address slice reconfiguration and decision making under uncertainty. Wei et al. [] investigated the Network Slice Reconfiguration Problem (NSRP) caused by traffic variability. Specifically, the authors reformulated the problem as an MDP and then employed a Dueling Double-DQN (DDQN) to solve it, forming the proposed Intelligent NS Reconfiguration Algorithm (INSRA). Numerical results illustrate that INSRA can minimize long-term resource consumption and avoid unnecessary reconfigurations. Yang et al. [] presented an intent-driven optical network architecture combining DRL-based slicing policy generation and reconfiguration. The architecture uses a DDPG algorithm for fine-grained slicing strategies, including spectrum slicing, computing resource slicing, and storage resource slicing, which affect network performance aspects such as blocking probability, load balancing, and delay. In the evaluation, the authors compare their approach to DQN and show that the proposed method outperforms DQN in learning time, blocking probability, and resource utilization.
The reconfiguration of the slice parameters in some works is based on the exchange of network state information between the InPs and the tenants. Rago et al. [] proposed a tenant-driven RAN slice enforcement scheme, where slice enforcement is performed by the InP based on the outcome of a DDPG algorithm that each tenant utilizes to calculate adaptive bandwidth requests for its slices. Each tenant is aware of the overall network status, as the InP utilizes a DL scheme, namely a convolutional autoencoder, to compress the network status and share it with the tenants. Similarly, in ref. [], the authors leveraged an SAE that encodes network contextual information, such as SNR and data load patterns, in order to exchange this information between the Base Band Units (BBUs) and the Remote Radio Units (RRUs) in an open RAN scenario.
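As an illustration of the compression step, the sketch below shows a toy convolutional autoencoder that reduces a per-cell load map to a compact code; the grid size, channel counts, and the plain MSE reconstruction objective are assumptions made for illustration and do not reproduce the cited designs.

```python
# Toy convolutional autoencoder compressing a 16x16 per-cell load map.
# Dimensions and the reconstruction loss are illustrative assumptions.
import torch
import torch.nn as nn

class StateAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1),   # 16x16 -> 8x8
            nn.ReLU(),
            nn.Conv2d(8, 4, kernel_size=3, stride=2, padding=1),   # 8x8 -> 4x4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 8, kernel_size=2, stride=2),     # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, kernel_size=2, stride=2),     # 8x8 -> 16x16
        )

    def forward(self, x):
        code = self.encoder(x)            # compact representation shared with tenants
        return self.decoder(code), code

model = StateAutoencoder()
status = torch.rand(1, 1, 16, 16)          # toy per-cell load map
recon, code = model(status)
loss = nn.functional.mse_loss(recon, status)   # reconstruction objective
print(code.shape, loss.item())                 # torch.Size([1, 4, 4, 4]) ...
```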
Besides works focused solely on decision making or optimization tasks, some works jointly address both. In ref. [], the authors explored Fog-RAN slicing in a scenario with a hotspot and Vehicle-to-Infrastructure (V2I) slice instances on a RAN segment composed of Fog Access Points (F-APs) and RRUs. Specifically, they proposed a DRL-based approach to address the interdependence of caching decisions and UE associations under dynamic conditions. Experimental results showed that the proposed DRL-based solution outperforms non-ML approaches in terms of cache hit ratio and cumulative reward.
Finally, the last task in this group is classification, wherein ML methods are applied to classify slices, use case scenarios, and traffic. Endes and Yuksekkaya [] proposed a 5-layer NN to classify users based on their service requirements and assign them to the most suitable network slice to fulfill their needs. Abbas et al. [] employed the K-means clustering algorithm to efficiently group base stations. Wu et al. [] proposed an AI-based traffic classification algorithm that classifies and allocates traffic to the appropriate slice based on the current service and network state. The authors used several ML methods in their classifier and showed that RF and GBDT outperform XGBoost and KNN in terms of accuracy.
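For reference, the following minimal sketch shows how base stations could be grouped with K-means, in the spirit of the clustering step mentioned above; the coordinates and the number of clusters are synthetic and purely illustrative.

```python
# K-means grouping of base stations by location (synthetic, illustrative data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
bs_positions = rng.uniform(0, 1000, size=(30, 2))   # 30 base stations in a 1 km^2 area

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(bs_positions)
for cluster_id in range(4):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    print(f"cluster {cluster_id}: base stations {members.tolist()}")
```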
Table 6 summarizes the related tasks, the allocation, the use case and network domain types, the ML category, as well as the ML method proposed and whether or not that method was tested for each of the referred works in the Decision Making–Optimization–Classification group.
Table 6. Summary of works related to Decision Making–Optimization–Classification (SA: Slice Activation, SPM: Slice Parameter Modification, PREP: Performance Reporting, RCP: Resource Capacity Planning).

5.3.2. Monitoring-Prediction

This group involves works that deal with problems related to the tasks of Monitoring (MON), Performance Reporting (PREP), and Resource Capacity Planning (RCP). All of the following works propose the application of ML methods for predicting different slice-related values.
Several works focus on traffic prediction and slice selection. Thantharate et al. [] proposed an ML-based model that first leverages RF to predict the traffic load of each slice and then applies a DLNN to the prediction outcome to make the assignment decision. Song et al. [] introduced an ML-based traffic-aware dynamic slicing framework that leverages a three-layer FFNN for traffic prediction and allocates network resources accordingly to reduce delay and blocking probability.
Ensuring the SLA in slice services is the focal point of the work in ref. []. Specifically, the authors proposed a method for forecasting SLA violations in slice services. SLA breach prediction relies on VNF bandwidth prediction, which is performed by a hybrid model combining LSTM and ARIMA, a non-ML-based model.
Many studies focus on predicting future workloads and resource needs, helping network operators prepare in advance and avoid allocating more resources than necessary. Camargo et al. [] proposed two ML models, LSTM and RF, to forecast the expected throughput of network slices. The results showed that RF outperformed LSTM, which lacked generalization. Kafle et al. [] adopted the LASSO regression model to perform server workload predictions in a CN scenario. Bega et al. proposed two schemes: DeepCog [], an ML-based slice capacity forecasting scheme designed to enable anticipatory resource allocation, and AZTEC [], a capacity allocation framework that allocates capacity to individual slices using a multi-timescale forecasting model. Specifically, DeepCog forecasts future capacity needs, minimizing resource overprovisioning and SLA violations, using a 3D CNN for encoding and MLPs for decoding. AZTEC, in turn, is implemented with DNNs, more specifically with 3-Dimensional CNNs (3D-CNNs), which allow different slices to be examined for prediction in parallel, as they are very efficient in extracting spatio-temporal features. The forecasting ability of the different DNNs that compose AZTEC was investigated, showing that the resulting predictions follow the fluctuations of the slice traffic.
Several works explore how ML can be applied to determine whether new flows can be accepted without compromising QoS. Garrido et al. [] presented a Context-Aware Traffic Predictor (CATP) and a Prediction-Based Admission Control (Pre-BAC) mechanism that exploits advanced time-series forecasting. Both approaches rely on ML-based prediction: Pre-BAC uses RL, while CATP employs different DNN variants (LSTMs, 3D-CNN, Spatio-Temporal NN, and Convolutional LSTM (ConvLSTM)). Buyakar et al. [] employed the LSTM algorithm to predict future bandwidth requirements and Mondrian Random Forests (MRF) to predict E2E delay. These predictions are integrated into their admission control algorithm to determine whether a flow with specific QoS requirements can be admitted without violating the QoS of already admitted flows. Yan et al. [] proposed an intelligent Resource Scheduling Strategy (iRSS) combining DL and RL to manage network and traffic dynamics. The iRSS performs periodic large time-scale traffic predictions using LSTM, and the predicted data are then used to perform resource allocation. In their evaluation, the authors demonstrated that LSTM achieves accurate traffic predictions. Tayyaba et al. [] proposed a policy framework for optimized resource allocation in SDN-based 5G cellular networks that consists of several modules, including an adaptive policy generator, a resource manager, a traffic scheduler, and a traffic classifier. In the traffic classifier module, the authors employ LSTM, CNN, and DNN methods for traffic prediction and evaluate their detection accuracy. LSTM achieved the highest accuracy, followed by CNN, with DNN showing the lowest accuracy. Monteil et al. [] presented a solution for determining the optimal resource reservation for a service provider based on DNN and LSTM methods. The authors compared their framework with the baseline ARIMA prediction model and found that DNN and LSTM performed better, as they captured the daily and seasonal trends of the data.
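Since many of the predictors above share a sliding-window LSTM backbone, the sketch below gives a minimal, self-contained example of such a forecaster trained on a synthetic diurnal traffic trace; the window length, hidden size, and training schedule are assumptions and do not correspond to any particular cited model.

```python
# Minimal sliding-window LSTM forecaster on a synthetic diurnal traffic trace.
import math
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window, 1)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])                 # one-step-ahead load prediction

# Toy diurnal traffic trace and sliding windows.
t = torch.arange(0, 200, dtype=torch.float32)
series = torch.sin(2 * math.pi * t / 24) + 0.1 * torch.randn_like(t)
window = 24
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = TrafficLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(100):                       # short training loop on the toy trace
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", round(loss.item(), 4))
```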
Special attention has been given to vehicular and mobile networks due to their dynamic traffic patterns. Cui et al. [] aimed to optimize slice weights and reduce delay. For traffic prediction, they used ConvLSTM, combining CNN and LSTM, to model the spatio-temporal dependencies of slice service traffic. Performance evaluations showed that their predictive approach significantly reduces slice resource allocation delay compared to a non-predictive approach. Cui et al., in later studies [,], proposed LSTM-DDPG, an algorithm that uses LSTM to predict long-term traffic demand by extracting temporal correlations in data sequences, and DDPG for fine-grained resource scheduling, outperforming traditional ARIMA models. Khan et al. [] focused on joint resource allocation for URLLC and eMBB slices in vehicular networks, and used a DNN to estimate Channel State Information (CSI), achieving better resource allocation without excessive signaling.
Other works also present specialized prediction models to reduce complexity or overhead. Sapavath et al. [] proposed a Sparse Bayesian Linear Regression (SBLR) algorithm for predicting CSI in large-scale multiple-input multiple-output (MIMO) wireless networks, achieving better prediction accuracy while avoiding the overfitting and high complexity associated with neural networks. Matoussi et al. [] introduced a real-time user-centric RAN slicing scheme for CRAN-based applications, which aims to maximize user throughput and minimize deployment cost by optimizing resource allocation using a BiLSTM-based DNN. Finally, Jiang et al. [] presented a framework integrating AI for intelligent tasks in NS. Two use cases were examined: MIMO channel prediction and security anomaly detection. In the first use case, a three-layer RNN was used to predict fading channels, improving the transmission antenna selection. In the second, RF and SVM were utilized to detect security threats in industrial networks, with high detection accuracy rates achieved.
Researchers are also frequently interested in slice brokering and orchestration. Gutterman et al. [] proposed a short-time-scale prediction model for RAN slice brokers, called X-LSTM. The model is a modification of LSTM inspired by ARIMA and the X-11 statistical method. Their simulations showed that X-LSTM outperforms ARIMA and standard LSTM in terms of prediction accuracy, resulting in lower slice costs. Sciancalepore et al. [] introduced a Reinforcement Learning-based 5G Network Slice Broker (RL-NSB) framework for assisting NSBs in associating SLA requirements with physical resources; it includes a traffic forecasting module, an admission control algorithm, and a slice scheduler. Similarly, Abbas et al. [] proposed a framework for multi-domain slice resource orchestration that uses LSTM to predict resource utilization (CPU, RAM, storage) and throughput for VNFs running on CN slices. It also employs K-means clustering to process the predicted dataset and determine whether reconfiguration is needed.
Moreover, inter-slice balance and fairness constitute another research direction. Silva et al. [] evaluated SVM, ANN, and kNN for predicting QoS degradation, with SVM achieving the best performance. Bouzidi et al. [] integrated LSTM and Logistic Regression (LR) to predict congestion and support proactive slice adaptations in an SDN-based architecture. Finally, Thantharate et al. [] presented ADAPTIVE6G, an adaptive learning framework for resource management and load prediction in NS applications for B5G and 6G systems. ADAPTIVE6G aims to improve network load estimation, promoting fairer and more uniform resource management. The authors combined DL and TL for load prediction across different slices and demonstrated experimentally that their framework outperforms traditional DNN approaches.
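As a simple illustration of SL-based QoS-degradation prediction of the kind compared above, the sketch below trains an SVM and a Logistic Regression classifier on synthetic per-slice KPI samples; the feature set and the labeling rule are assumed for illustration only.

```python
# SVM vs. Logistic Regression on synthetic per-slice KPI samples (illustrative).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                       # per-slice KPIs: throughput, delay, loss
y = (0.8 * X[:, 1] - 0.5 * X[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
for name, clf in [("SVM", SVC(kernel="rbf")), ("LR", LogisticRegression())]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```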
Table 7 summarizes the related tasks, the allocation, the use case and network domain types, the ML category, as well as the ML method proposed, and whether or not that method was tested for each of the referred works in the monitoring–prediction task group.
Table 7. Summary of works related to monitoring–prediction (MON: Monitoring, RCP: Resource Capacity Planning, SPC: Slice Parameter Classification).

5.3.3. Resource Allocation

This group includes works focused on resource allocation, specifically addressing the tasks of Performance Reporting (PREP), which relies on slice performance observations, and Slice Parameter Modification (SPM), which aims at reconfiguring specific resource allocation parameters to enhance slice performance.
Some works look at distributed and collaborative resource management, where different network stakeholders, such as MVNOs, infrastructure providers, and cloud providers, work together. For example, Hu et al. [] proposed a federated slicing approach using blockchain and RL, helping all parties make fair and efficient allocation decisions. Li et al. [] took a more centralized approach, applying DQL to better match resource supply with user demand in both the RAN and the core network. Cui et al. [,] extended this by focusing on vehicular networks, combining LSTM and DDPG to predict long-term traffic and adjust resources in real time, an approach that showed strong performance in maintaining QoS.
Other researchers have turned to multi-agent or hybrid learning methods to handle more complex scenarios. Wang et al. [] combined a multi-agent RL algorithm with an RF classifier to allocate both radio and core network resources in networks that include public and private slices. Moon et al. [] presented a smart ensemble method that blends fast-learning algorithms with high-performance ones to improve RAN slicing. Similarly, Hua et al. [] introduced the Generative Adversarial Network-powered Deep Distributional Q Network (GAN-DDQN) for better spectrum sharing and SLA satisfaction, whereas Chen et al. [] addressed dual connectivity scenarios using an LSTM-enhanced DQN to balance QoE, throughput, and energy use. Shome et al. [] proposed a DRL-based slice selection and bandwidth allocation approach, which considers multi-slice-connected UEs and uses multiple DQN agents to decide the bandwidth allocated to the UEs.
A number of studies focus on demand-aware or spectrum-sensitive resource tuning. Meng et al. [] used DRL to optimize bandwidth allocation in smart grid networks, while Shi et al. [] accounted for radar systems coexisting with 5G users in their spectrum allocation model. Albonda et al. [] looked at how to balance resources between eMBB and V2X slices, combining Q-learning with heuristic rules for better QoS. Nouri et al. [] worked on the joint allocation of power and Physical Resource Blocks (PRBs) to the UEs while ensuring QoS requirements, leveraging a semi-supervised learning approach named SS-VAE, which combines a VAE with a contrastive loss, in order to optimize the overall network utility.
When it comes to dealing with interference and slice-specific needs, researchers such as Zambianco et al. [] have applied DQN to handle mixed numerology issues, reducing interference while maintaining slice capacity. Shao et al. [] designed a multi-agent system with Graph Attention Networks to manage resource sharing in dense networks. Chergui et al. [] focused on making sure that RAN resource allocation decisions also respect operator cost constraints by using neural networks trained on KPI data.
Some works explore resource control under constraints or in real-time environments. Xu et al. [] introduced a constrained version of the SAC algorithm to deal with energy and delay constraints. Liu et al. [] used a similar idea with a constrained MDP model and Interior-Point Policy Optimization (IPPO) to allocate bandwidth while considering latency. Chen et al. [] used multi-agent DQNs to optimize resource allocation across slices and tenants, showing noticeable improvements in performance metrics. Yan et al. [], in their proposed iRSS, combined the large time-scale predictions obtained by LSTM with A3C in order to perform short-term resource scheduling. In their evaluation, the authors demonstrated that A3C outperforms state-of-the-art methods such as QL, AC, and a heuristic resource scheduling algorithm in terms of cumulative reward and resource utilization.
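To ground the DQN-based allocation schemes discussed in this group, the following compact sketch trains a single-agent DQN that shifts bandwidth units between slices in a toy environment; the state, action, and reward definitions, as well as all hyperparameters, are simplified assumptions and are not taken from any specific cited work.

```python
# Compact single-agent DQN for toy per-slice bandwidth allocation (illustrative).
import random
import torch
import torch.nn as nn

N_SLICES = 3
ACTIONS = N_SLICES * N_SLICES          # action = (take from slice i, give to slice j)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * N_SLICES, 64), nn.ReLU(),
                                 nn.Linear(64, ACTIONS))
    def forward(self, x):
        return self.net(x)

def step(alloc, load, action):
    """Toy environment: move one bandwidth unit; reward = -unserved demand."""
    i, j = divmod(action, N_SLICES)
    if i != j and alloc[i] > 1:
        alloc = alloc.clone()
        alloc[i] -= 1
        alloc[j] += 1
    reward = -torch.clamp(load - alloc, min=0).sum()
    return alloc, reward

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer, gamma, eps = [], 0.95, 0.2

alloc = torch.full((N_SLICES,), 5.0)                  # initial bandwidth units per slice
for t in range(2000):
    load = torch.randint(2, 8, (N_SLICES,)).float()   # fluctuating slice demand
    state = torch.cat([alloc, load])
    action = random.randrange(ACTIONS) if random.random() < eps \
        else int(q_net(state).argmax())
    alloc, reward = step(alloc, load, action)
    buffer.append((state, action, reward, torch.cat([alloc, load])))
    if len(buffer) >= 64:                             # one SGD step on a random minibatch
        batch = random.sample(buffer, 64)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.stack([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        with torch.no_grad():
            y = r + gamma * target_net(s2).max(dim=1).values
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(q_sa, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    if t % 200 == 0:
        target_net.load_state_dict(q_net.state_dict())   # periodic target update
```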
There is also a wide range of work focused on vehicular and edge networks. Sun et al. [] proposed a DRL-based resource controller for D2D vehicle communications, while Yu et al. [] presented a two-stage bandwidth allocation model using DDPG for V2X. Li et al. [] built an E2E framework that coordinates resource allocation across the RAN and Core, improving user access. Akyildiz et al. [] introduced a hierarchical multi-agent system to manage resource sharing between URLLC and eMBB slices. Zhou et al. [] followed a similar approach using cooperative Q-learning to allocate resources fairly between competing slices.
In contrast, several works tackle resource allocation at the link level or in energy-efficient ways. Wang et al. [] proposed a GCN-based solution called LinkSlice for better PRB scheduling. Li et al. [] showed how combining LSTM with A2C leads to better bandwidth allocation, especially in balancing spectral efficiency with SLA compliance. Alcaraz et al. [] focused on resource-constrained environments and proposed a lightweight, kernel-based RL method that offers strong performance without excessive computation.
Moving beyond allocation, some researchers also address device-to-slice assignment and multi-domain orchestration. Dangi et al. [] used a hybrid CNN–BiLSTM model to classify traffic from unknown devices and assign it to the best-fit slice. Lai et al. [] presented a DRL model for optimizing multi-resource allocation across UEs, the edge, and the cloud. Elmosilhy et al. [] combined regret learning and Q-learning to manage user association in multi-RAT networks, while Boutiba et al. [] tackled interference and performance issues in 5G new radio with mixed numerology using a DQN-based scheduler.
A few more works focus on end-to-end optimization under uncertainty. Liu et al. [] proposed a two-level system (TORCH and ASSAIL) that uses DQN to coordinate between the RAN and core. Gharehgoli et al. [] addressed uncertain traffic and channel conditions using recurrent policy gradient methods. Mei et al. [] applied actor–critic LSTM in a partially observable setting to improve Vehicle-to-Vehicle (V2V) slicing.
Lastly, several newer approaches bring federated learning, edge intelligence, and energy awareness into the picture. Chergui et al. [] used a federated learning model to dynamically allocate resources in B5G CRANs, significantly reducing SLA violations. Raftopoulos et al. [] developed a DRL-based agent for the O-RAN RAN Intelligent Controllers (RICs) to fine-tune slicing policies while minimizing PRB use. Alkhoury et al. [] addressed MEC resource allocation using DQN to improve request acceptance in dense urban environments. Ayala et al. [] designed a contextual bandit-based controller to manage both computing and radio resources in virtualized RANs, showing adaptability even under constrained CPU resources.
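To illustrate the federated flavor of these approaches, the sketch below implements a generic federated-averaging (FedAvg) round over a toy per-domain KPI regressor; the model, the number of domains, and the synthetic local data are assumptions for illustration and do not reflect the cited works' implementations.

```python
# Generic FedAvg sketch: local training per domain, central weight averaging.
import copy
import torch
import torch.nn as nn

def local_update(model, data, targets, epochs=1, lr=1e-2):
    model = copy.deepcopy(model)                  # train a local copy only
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(data), targets).backward()
        opt.step()
    return model.state_dict()

def fed_avg(state_dicts):
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

global_model = nn.Linear(4, 1)                    # toy per-domain KPI regressor
for rnd in range(3):                              # three federation rounds
    local_states = []
    for _ in range(4):                            # four administrative domains
        X = torch.randn(32, 4)                    # local slice statistics (synthetic)
        y = X.sum(dim=1, keepdim=True)
        local_states.append(local_update(global_model, X, y))
    global_model.load_state_dict(fed_avg(local_states))
print("round-3 global weights:", global_model.weight.data)
```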
Table 8 summarizes the allocation, the use case and network domain types, the ML category, as well as the ML method proposed and whether or not that method was tested for each of the referred works in the Resource Allocation task group. The related tasks of all the referred works are PREP and SPM; thus, for convenience, this information is not included in this table.
Table 8. Summary of works related to Resource Allocation.

6. Discussion

NS is an arrow in the quiver of InPs, allowing them to partition their networks into multiple slices and lease those slices to tenants, so that the latter can provide differentiated services. Naturally, the implementation of NS has, over the years, embraced the adoption of various AI and ML techniques to tackle the different problems that arise in the different life-cycle phases of a network slice.
The main objective of this survey paper was to present the applications of ML methods that have been proposed in the literature as solutions to various problems related to NS. The extensive search conducted and presented in the previous sections reveals that ML methods are widely favored for performing tasks that are critical to the deployment and operation of network slices.
In this section, we summarize the findings derived from the presented search regarding the network domain and slicing types adopted, the proposed ML methods and their efficiency, the categories these methods fall into, the tasks these methods are proposed for, and the life-cycle phases these tasks are related to.

6.1. Network Domains and Slicing Types

Following the detailed analysis of ML applications across various network slice life-cycle phases presented in Section 5, we now synthesize how these methods align with the multi-dimensional network slice taxonomy articulated previously in Section 3.2. Specifically, we focus on ML implementation variability across the different slicing types and deployment domains, as shown in Table 9, which directly conditions the selection, effectiveness, and challenges of ML approaches employed for slice management and optimization []. In practice, the applicability of ML in each slicing type depends on how data is collected (via monitoring interfaces and telemetry), which features are extracted (e.g., traffic load, mobility, QoS indicators), and how AI outputs are enforced (e.g., via orchestration or control plane NFs).
Table 9. Network slice taxonomy versus ML approaches.
Static allocation environments assign fixed resources for extended durations. These scenarios lack real-time adaptivity but demand robust offline profiling and planning. Hence, in these settings, SL- and UL-based techniques such as SVM, RF, and K-means clustering are commonly applied to predict traffic distributions and guide Capacity Planning. Notably, such static models often fail to address temporal fluctuations or sudden demand bursts. In contrast, dynamic allocation mandates ongoing, low-latency slice reconfiguration under unpredictable traffic and user behavior, thus accommodating the need for frequent resource adjustments based on fluctuating service demands. RL and its deep variants, including Q-Learning, DQN, DDPG, and PPO, facilitate and optimize continuous admission control, resource scheduling, and policy fulfillment in such volatile contexts. These adaptive algorithms enable continuous learning and decision making to balance resource utilization against QoS in near real time, although the non-stationary behavior of network conditions poses challenges to convergence and stability.
Within vertical slicing, domain-specific requirements demand sophisticated slice-specific ML strategies, tailored to handle heterogeneous KPIs (e.g., latency, reliability, and security) and meet stringent SLAs across diverse industries such as industrial IoT, vehicular communication, and eMBB. Hybrid SL-RL models, alongside recurrent architectures and neural networks such as LSTM, have proven adept at addressing these complex requirements and offering adaptive control. For horizontal slicing, on the other hand, the primary focus is on judiciously balancing the resources among multiple tenants with varying and often competing demands. To this end, frameworks combining clustering algorithms, ensemble learning, and multi-agent reinforcement learning facilitate equitable resource distribution, ensuring fairness without sacrificing resource efficiency. Such approaches support multi-tenant dynamism and preempt resource contention.
At the domain level, RAN slicing faces highly variable channel conditions and stringent latency requirements, favoring fast-adapting, multi-agent DRL approaches for dynamic spectrum management and congestion control. Meanwhile, the relatively stable but complex interdependencies of the core network benefit from graph-based reinforcement learning (e.g., GNNs), DNNs, and actor–critic methods for scheduling, VNF placement, and orchestration. In contrast, the E2E slicing paradigm requires collaborative coordination among multiple administrative domains, necessitating privacy-aware and scalable AI models, such as distributed reinforcement learning and federated learning, to orchestrate resources across both administrative and technological domains.
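As a concrete illustration of the graph-based processing mentioned for core-network orchestration, the sketch below implements a single, plain-PyTorch graph-convolution layer over a toy substrate topology; the node features, adjacency matrix, and output interpretation (per-node placement scores) are assumptions made purely for illustration.

```python
# A single graph-convolution layer over a toy substrate topology (illustrative).
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetric normalization of the adjacency matrix with self-loops.
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        return torch.relu(self.lin(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x))

nodes, feats = 6, 4                                # 6 substrate nodes, 4 features each
adj = torch.randint(0, 2, (nodes, nodes)).float()
adj = ((adj + adj.t()) > 0).float()                # make the toy topology undirected
x = torch.randn(nodes, feats)                      # e.g., CPU, memory, link load, delay
scores = SimpleGCNLayer(feats, 1)(x, adj)          # per-node placement scores
print(scores.squeeze())
```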

6.2. Method Categories

Starting from the ML categories, which were presented in Section 4, it can be noted that the proposed ML methods span all four ML categories, but not equally. SSL is the least popular category, as only one approach proposes a method that belongs to it. Conversely, a considerable number of approaches propose UL-based methods, mainly for data clustering, but this number is smaller than that of the SL-based approaches, which are significantly more numerous and are proposed mainly for classification and prediction. The ML category that can be characterized as the most popular is RL, which encapsulates the highest number of proposed approaches and is used mainly for decision making, admission control, and resource allocation. Nevertheless, no single category can be regarded as a panacea.

6.3. Method Efficiency and Evaluation

The efficiency of the proposed methods is another issue that needs to be noted. As mentioned earlier, SL- and RL-based methods have been proposed more often than others, but this does not mean that any method belonging to these categories is equally effective when applied to any given task. In fact, all proposed methods have both advantages and disadvantages, as a method that is effective for one task can, at the same time, be ineffective for another.
Moreover, the proposed methods are mostly evaluated through simulations with synthetic data, which limits their practical reliability. Hence, real-world performance evaluation is crucial for ensuring commercial acceptance and effective deployment.
For instance, in ref. [], RF, which is an SL-based method, is preferred over other SL-based methods, such as KNN, Naive Bayes, or Decision Tree, due to the nature and size of the dataset of the optimization problem. Another case is ref. [], which states that QL, an RL-based method, has a computational complexity low enough to allow its execution in an online learning fashion. However, since QL utilizes Q-tables as a learning mechanism, these tables may, in the case of slice admission control, end up being too large, hence becoming memory-intensive in dense application scenarios. According to ref. [], as QL suffers from the curse of dimensionality, it is only suitable for RAN slicing problems in small-scale networks and not as suitable for admission control. Furthermore, ref. [] states that DRL approaches can interact with the large set of variables that characterize such a system and determine the best admission decision according to a specific target. This is why DRL-based methods represent the majority of the proposed ML methods for NS.
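A back-of-the-envelope computation illustrates the memory issue: under assumed, purely illustrative state and action discretizations, the Q-table size grows exponentially with the number of slices tracked in the state.

```python
# Illustrative Q-table sizing for tabular QL in slice admission control.
# The discretization choices below are assumptions, not values from any cited work.
n_slices = 4                # slice types tracked in the state
load_levels = 20            # discretized load levels per slice
actions = 2 ** n_slices     # accept/reject decision per pending request type

states = load_levels ** n_slices
q_entries = states * actions
print(f"states: {states:,}, Q-table entries: {q_entries:,}")
# states: 160,000, Q-table entries: 2,560,000 -> grows exponentially with n_slices
```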

6.4. Life-Cycle Phases and Tasks

Regarding the life-cycle phases, we can deduce that the application of ML methods has been proposed for three out of the four phases of the slice life-cycle. More specifically, these include the Preparation, Commissioning, and Operation phases, but not the Decommissioning phase. This is to be expected, as the only task that falls under this phase, i.e., the release of the reserved resources, does not require any advanced computation or decision making, so the application of ML techniques to this task has not yet been considered.
Starting from the Preparation phase, ML methods have been mainly proposed for performing the NEP task, as this is the task related to admission control, one of the key processes of this phase. Most of the proposed approaches consider a dynamic allocation type of slicing and a vertical use case type. As for the network domain, most of the approaches are for RAN applications, followed by E2E, and lastly by Core. The most popular methods of this phase are RL-based, such as DRL and DQL.
Commissioning is the phase that, in general, has the smallest number of proposed ML applications compared to the others. Almost all the proposed ML applications are related to the RIAC task, which is also the main task of this phase. All the proposed approaches consider a dynamic allocation type of slicing, and the majority of them consider a vertical use case type. RAN is the network domain to which most of the approaches are applied, followed by E2E and then Core. The most popular methods of this phase are again RL-based, with DDPG being a notable example.
Finally, the Operation phase is the one that has attracted the most attention from the research community for applying ML methods to its related tasks. As a result, a large number of research approaches propose the application of ML to address problems or perform tasks that reside in the Operation phase. This, however, is expected since these tasks have the greatest impact on the performance and effectiveness of NS, and require complex real-time computations.
Due to the sheer volume of research works in this phase, we grouped and analyzed the works by broader task categories; therefore, we will discuss the findings for each group separately. Starting from the Decision Making–Optimization–Classification group, the RCP task is the one with the highest number of proposed applications, as it is related to slice selection, a very important process that has to be performed in the Operation phase. Nearly all the proposed works are based on dynamic allocation, while the use case type in the majority of them is vertical. Most of them are applied to the RAN domain, followed by the Core, and lastly the E2E. RL is again the favorite ML category, with DQN being the most proposed method in this group.
Almost all studies in the Monitoring–Prediction group are related to the RCP and MON tasks. This is because information on the current slice status, in terms of capacity and other KPIs, is mandatory for maintaining slice efficiency and performing predictions. In this group, almost all approaches consider dynamic allocation, most of which concern vertical use cases and RAN domain applications. The proposed ML methods are, in their majority, NNs that fall mainly into the RL-based and SL-based categories. In particular, DNN and LSTM are the methods that have been most widely proposed.
The last group, termed Resource Allocation, is the one with the highest number of proposed works. All the works in this group presented ML methods for performing the PREP and SPM tasks. Moreover, all of them leverage a dynamic allocation type. This is because the common objective of these works is to reconfigure the resource allocation in a dynamic manner to maintain high slice performance. Most of the works consider horizontal use cases and are applied in the RAN domain. In this case, RL-based methods are by far the most frequently proposed, with DQN being the favorite.

7. Conclusions

With the adoption of NS, future 5G and B5G networks are poised to deliver a variety of service types—ranging from eMBB to URLLC and mMTC—each with diverse and stringent performance requirements. By creating and operating multiple logical networks, or slices, over a shared infrastructure, these networks can flexibly cater to heterogeneous user needs. This capability is increasingly becoming a reality thanks to the advent of softwarization and virtualization technologies such as SDN, NFV, and cloud/edge computing. Despite these technological advancements, the management of tasks related to slice creation and operation poses significant challenges due to the need for the real-time analysis of large volumes of complex data and decision-making processes to meet the desired QoS targets. To address these challenges and automate slice life-cycle management, AI and more specifically ML have emerged as critical enablers.
This paper presented a comprehensive survey of ML applications across the various tasks characterizing the different phases of the NS life-cycle within the transformative B5G landscape. For each examined work, we elaborated on the problem addressed, the specific slicing approach employed—considering allocation type, use case, and network domain—and the associated AI/ML methodologies. Furthermore, we introduced a taxonomy organizing the diverse tasks across the life-cycle phases and matched them to relevant ML techniques devised to address the respective challenges. The survey revealed a predominant focus on UL-, SL-, and RL-based techniques, with RL being the most widely adopted due to its suitability for dynamic decision-making problems inherent in NS management. Notably, the efficacy of an ML approach strongly depends on the specific task and life-cycle phase to which it is applied. Among the phases, Operation receives the most attention, with resource allocation as the primary focus, while significant efforts also target admission control in the Preparation phase and the RIAC task in the Commissioning phase. Interestingly, the Decommissioning phase remains underexplored, suggesting perhaps an avenue for further research.
In closing, the synergy between NS and AI/ML stands as a cornerstone for the advancement of next-generation networks. This survey consolidated the current state of AI-driven network slicing research, highlighting how ML techniques are integral to making intelligent, automated NS feasible. By providing a structured overview and critical insights into existing approaches, the paper aims to serve as a valuable reference for researchers and practitioners pursuing advances in intelligent network management. Notwithstanding, challenges such as scalability, real-time processing, and the need for explainability and standardization remain critical areas that warrant further investigation. Future research should also seek to fill existing gaps by exploring less-studied life-cycle phase tasks, improving model interpretability, addressing scalability, considering other emerging AI models like Generative AI, and transitioning from simulation to real-world deployment to unlock the full potential of AI-enabled network slicing in 5G and beyond.

Author Contributions

Conceptualization, E.T., A.S., A.T., and P.C.; investigation, E.T., A.S., and A.T.; resources, E.T., A.S., and A.T.; writing—original draft preparation, E.T., A.S., A.T., and P.C.; writing—review and editing, E.T., A.S., A.T., and P.C.; visualization, E.T., A.S., and A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article. Further inquiries can be directed to the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. GSMA. E2E Network Slicing Architecture; Version 2.0; Technical Report NG.127; Global System for Mobile Communications Association (GSMA): London, UK, 2021. [Google Scholar]
  2. ITU. IMT Vision—Framework and Overall Objectives of the Future Development of IMT for 2020 and Beyond; Recommendation M.2083; International Telecommunication Union (ITU): Geneva, Switzerland, 2015. [Google Scholar]
  3. Zhang, S. An overview of network slicing for 5G. IEEE Wirel. Commun. 2019, 26, 111–117. [Google Scholar] [CrossRef]
  4. 3GPP. Management and Orchestration; Concepts, Use Cases and Requirements; Version 19.0.0; Technical Specification (TS) 28.530; 3rd Generation Partnership Project (3GPP): Sophia-Antipolis, France, 2025. [Google Scholar]
  5. Thomatos, E.; Sgora, A.; Chatzimisios, P. A Survey on AI based Network Slicing Standards. In Proceedings of the 2021 IEEE Conference on Standards for Communications and Networking (CSCN), Thessaloniki, Greece, 15–17 December 2021; pp. 136–141. [Google Scholar] [CrossRef]
  6. Foukas, X.; Patounas, G.; Elmokashfi, A.; Marina, M.K. Network slicing in 5G: Survey and challenges. IEEE Commun. Mag. 2017, 55, 94–100. [Google Scholar] [CrossRef]
  7. Afolabi, I.; Taleb, T.; Samdanis, K.; Ksentini, A.; Flinck, H. Network slicing and softwarization: A survey on principles, enabling technologies, and solutions. IEEE Commun. Surv. Tutor. 2018, 20, 2429–2453. [Google Scholar] [CrossRef]
  8. Kaloxylos, A. A survey and an analysis of network slicing in 5G networks. IEEE Commun. Stand. Mag. 2018, 2, 60–65. [Google Scholar] [CrossRef]
  9. Chahbar, M.; Diaz, G.; Dandoush, A.; Cérin, C.; Ghoumid, K. A comprehensive survey on the E2E 5G network slicing model. IEEE Trans. Netw. Serv. Manag. 2021, 18, 49–62. [Google Scholar] [CrossRef]
  10. Ordonez-Lucena, J.; Ameigeiras, P.; Lopez, D.; Ramos-Munoz, J.J.; Lorca, J.; Folgueira, J. Network slicing for 5G with SDN/NFV: Concepts, architectures, and challenges. IEEE Commun. Mag. 2017, 55, 80–87. [Google Scholar] [CrossRef]
  11. Barakabitze, A.A.; Ahmad, A.; Mijumbi, R.; Hines, A. 5G network slicing using SDN and NFV: A survey of taxonomy, architectures and future challenges. Comput. Netw. 2020, 167, 106984. [Google Scholar] [CrossRef]
  12. Debbabi, F.; Jmal, R.; Fourati, L.C.; Aguiar, R.L. An overview of interslice and intraslice resource allocation in b5g telecommunication networks. IEEE Trans. Netw. Serv. Manag. 2022, 19, 5120–5132. [Google Scholar] [CrossRef]
  13. Su, R.; Zhang, D.; Venkatesan, R.; Gong, Z.; Li, C.; Ding, F.; Jiang, F.; Zhu, Z. Resource allocation for network slicing in 5G telecommunication networks: A survey of principles and models. IEEE Netw. 2019, 33, 172–179. [Google Scholar] [CrossRef]
  14. Wijethilaka, S.; Liyanage, M. Survey on network slicing for Internet of Things realization in 5G networks. IEEE Commun. Surv. Tutor. 2021, 23, 957–994. [Google Scholar] [CrossRef]
  15. Shen, X.; Gao, J.; Wu, W.; Lyu, K.; Li, M.; Zhuang, W.; Li, X.; Rao, J. AI-assisted network-slicing based next-generation wireless networks. IEEE Open J. Veh. Technol. 2020, 1, 45–66. [Google Scholar] [CrossRef]
  16. Khan, L.U.; Yaqoob, I.; Tran, N.H.; Han, Z.; Hong, C.S. Network slicing: Recent advances, taxonomy, requirements, and open research challenges. IEEE Access 2020, 8, 36009–36028. [Google Scholar] [CrossRef]
  17. Wu, Y.; Dai, H.N.; Wang, H.; Xiong, Z.; Guo, S. A survey of intelligent network slicing management for industrial IoT: Integrated approaches for smart transportation, smart energy, and smart factory. IEEE Commun. Surv. Tutor. 2022, 24, 1175–1211. [Google Scholar] [CrossRef]
  18. Ssengonzi, C.; Kogeda, O.P.; Olwal, T.O. A survey of deep reinforcement learning application in 5G and beyond network slicing and virtualization. Array 2022, 14, 100142. [Google Scholar] [CrossRef]
  19. Dangi, R.; Jadhav, A.; Choudhary, G.; Dragoni, N.; Mishra, M.K.; Lalwani, P. Ml-based 5G network slicing security: A comprehensive survey. Future Internet 2022, 14, 116. [Google Scholar] [CrossRef]
  20. Donatti, A.; Correa, S.L.; Martins, J.S.; Abelem, A.; Both, C.B.; Silva, F.; Suruagy, J.A.; Pasquini, R.; Moreira, R.; Cardoso, K.V.; et al. Survey on machine learning-enabled network slicing: Covering the entire life cycle. IEEE Trans. Netw. Serv. Manag. 2023, 21, 994–1011. [Google Scholar] [CrossRef]
  21. Phyu, H.P.; Naboulsi, D.; Stanica, R. Machine learning in network slicing—A survey. IEEE Access 2023, 11, 39123–39153. [Google Scholar] [CrossRef]
  22. Azimi, Y.; Yousefi, S.; Kalbkhani, H.; Kunz, T. Applications of machine learning in resource management for RAN-slicing in 5G and beyond networks: A survey. IEEE Access 2022, 10, 106581–106612. [Google Scholar] [CrossRef]
  23. Hamdi, W.; Ksouri, C.; Bulut, H.; Mosbah, M. Network Slicing Based Learning Techniques for IoV in 5G and Beyond Networks. IEEE Commun. Surv. Tutor. 2024, 26, 1989–2047. [Google Scholar] [CrossRef]
  24. Ebrahimi, S.; Bouali, F.; Haas, O.C.L. Resource Management From Single-Domain 5G to End-to-End 6G Network Slicing: A Survey. IEEE Commun. Surv. Tutor. 2024, 26, 2836–2866. [Google Scholar] [CrossRef]
  25. Novanana, S.; Kliks, A.; Arifin, A.S.; Wibisono, G. Performance of 5G Slicing with Access Technologies, and Diversity: A Review and Challenges. IEEE Access 2024, 12, 170780–170802. [Google Scholar] [CrossRef]
  26. Sun, H.; Liu, Y.; Al-Tahmeesschi, A.; Nag, A.; Soleimanpour, M.; Canberk, B.; Arslan, H.; Ahmadi, H. Advancing 6G: Survey for Explainable AI on Communications and Network Slicing. IEEE Open J. Commun. Soc. 2025, 6, 1372–1412. [Google Scholar] [CrossRef]
  27. Wu, W.; Zhou, C.; Li, M.; Wu, H.; Zhou, H.; Zhang, N.; Shen, X.S.; Zhuang, W. AI-native network slicing for 6G networks. IEEE Wirel. Commun. 2022, 29, 96–103. [Google Scholar] [CrossRef]
  28. Martins, J.S.; Carvalho, T.C.; Moreira, R.; Both, C.; Donatti, A.; Corrêa, J.H.; Suruagy, J.A.; Corrêa, S.L.; Abelem, A.J.; Ribeiro, M.R.; et al. Enhancing Network Slicing Architectures with Machine Learning, Security, Sustainability and Experimental Networks Integration. IEEE Access 2023, 11, 69144–69163. [Google Scholar] [CrossRef]
  29. Li, W.; Liu, R.; Dai, Y.; Wang, D.; Cai, H.; Fan, J.; Li, Y. Research on network slicing for smart grid. In Proceedings of the 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC), Beijing, China, 17–19 July 2020; pp. 107–110. [Google Scholar] [CrossRef]
  30. Dubey, M.; Singh, A.K.; Mishra, R. AI based Resource Management for 5G Network Slicing: History, Use Cases, and Research Directions. Concurr. Comput. Pract. Exp. 2025, 37, e8327. [Google Scholar] [CrossRef]
  31. Umagiliya, T.; Wijethilaka, S.; De Alwis, C.; Porambage, P.; Liyanage, M. Network slicing strategies for smart industry applications. In Proceedings of the 2021 IEEE Conference on Standards for Communications and Networking (CSCN), Thessaloniki, Greece, 15–17 December 2021; pp. 30–35. [Google Scholar] [CrossRef]
  32. Liu, B.; Luo, Z.; Chen, H.; Li, C. A survey of state-of-the-art on edge computing: Theoretical models, technologies, directions, and development paths. IEEE Access 2022, 10, 54038–54063. [Google Scholar] [CrossRef]
  33. Yi, S.; Hao, Z.; Qin, Z.; Li, Q. Fog computing: Platform and applications. In Proceedings of the 2015 Third IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb), Washington, DC, USA, 12–13 November 2015; pp. 73–78. [Google Scholar] [CrossRef]
  34. Lopez Escobar, J.J.; Díaz Redondo, R.P.; Gil-Castineira, F. In-depth analysis and open challenges of Mist Computing. J. Cloud Comput. 2022, 11, 81. [Google Scholar] [CrossRef]
  35. Bega, D.; Gramaglia, M.; Garcia-Saavedra, A.; Fiore, M.; Banchs, A.; Costa-Perez, X. Network Slicing Meets Artificial Intelligence: An AI-Based Framework for Slice Management. IEEE Commun. Mag. 2020, 58, 32–38. [Google Scholar] [CrossRef]
  36. 3GPP. System Architecture for the 5G System (5GS); Version 19.5.0; Technical Specification (TS) 23.501; 3rd Generation Partnership Project (3GPP): Sophia-Antipolis, France, 2025. [Google Scholar]
  37. Bithas, P.S.; Michailidis, E.T.; Nomikos, N.; Vouyioukas, D.; Kanatas, A.G. A survey on machine-learning techniques for UAV-based communications. Sensors 2019, 19, 5170. [Google Scholar] [CrossRef]
  38. Verbraeken, J.; Wolting, M.; Katzy, J.; Kloppenburg, J.; Verbelen, T.; Rellermeyer, J.S. A survey on distributed machine learning. ACM Comput. Surv. 2020, 53, 1–33. [Google Scholar] [CrossRef]
  39. Han, B.; Schotten, H.D. Machine Learning for Network Slicing Resource Management: A Comprehensive Survey. ZTE Commun. 2019, 19, 27–32. Available online: https://www.zte.com.cn/content/dam/zte-site/res-www-zte-com-cn/mediares/magazine/publication/com_en/pdf/en201904.pdf (accessed on 8 September 2025).
  40. Kafle, V.P.; Fukushima, Y.; Martinez-Julia, P.; Miyazawa, T. Consideration on automation of 5G network slicing with machine learning. In Proceedings of the 10th ITU Academic Conference Kaleidoscope: Machine Learning for a 5G Future (ITU K), Santa Fe, Argentina, 26–28 November 2018. [Google Scholar] [CrossRef]
  41. You, X.; Zhang, C.; Tan, X.; Jin, S.; Wu, H. AI for 5G: Research directions and paradigms. Sci. China Inf. Sci. 2019, 62, 21301. [Google Scholar] [CrossRef]
  42. Thomas Rincy, N.; Gupta, R. A Survey on Machine Learning Approaches and Its Techniques. In Proceedings of the 2020 IEEE International Students’ Conference on Electrical, Electronics and Computer Science, SCEECS 2020, Bhopal, India, 22–23 February 2020. [Google Scholar] [CrossRef]
  43. Singh, S.K.; Salim, M.M.; Cha, J.; Pan, Y.; Park, J.H. Machine learning-based network sub-slicing framework in a sustainable 5G environment. Sustainability 2020, 12, 6250. [Google Scholar] [CrossRef]
  44. Bega, D.; Gramaglia, M.; Banchs, A.; Sciancalepore, V.; Samdanis, K.; Costa-Perez, X. Optimising 5G infrastructure markets: The business of network slicing. In Proceedings of the Proceedings-IEEE INFOCOM, Atlanta, GA, USA, 1–4 May 2017. [Google Scholar] [CrossRef]
  45. Bega, D.; Gramaglia, M.; Banchs, A.; Sciancalepore, V.; Costa-Perez, X. A Machine Learning Approach to 5G Infrastructure Market Optimization. IEEE Trans. Mob. Comput. 2020, 19, 498–512. [Google Scholar] [CrossRef]
  46. Bakri, S.; Brik, B.; Ksentini, A. On using reinforcement learning for network slice admission control in 5G: Offline vs. online. Int. J. Commun. Syst. 2021, 34, e4757. [Google Scholar] [CrossRef]
  47. Raza, M.R.; Natalino, C.; Ohlen, P.; Wosinska, L.; Monti, P. Reinforcement Learning for Slicing in a 5G Flexible RAN. J. Light. Technol. 2019, 37, 5161–5169. [Google Scholar] [CrossRef]
  48. Bakhshi, B.; Mangues-Bafalluy, J.; Baranda, J. R-Learning-Based Admission Control for Service Federation in Multi-domain 5G Networks. In Proceedings of the Proceedings-IEEE Global Communications Conference, GLOBECOM, Madrid, Spain, 7–11 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  49. Sciancalepore, V.; Zanzi, L.; Costa-Perez, X.; Capone, A. ONETS: Online Network Slice Broker from Theory to Practice. IEEE Trans. Wirel. Commun. 2022, 21, 121–134. [Google Scholar] [CrossRef]
  50. Rezazadeh, F.; Chergui, H.; Alonso, L.; Verikoukis, C. Continuous Multi-objective Zero-touch Network Slicing via Twin Delayed DDPG and OpenAI Gym. In Proceedings of the 2020 IEEE Global Communications Conference, GLOBECOM 2020-Proceedings, Taipei, Taiwan, 7–11 December 2020; pp. 1–5. [Google Scholar] [CrossRef]
  51. Rezazadeh, F.; Chergui, H.; Verikoukis, C. Zero-touch continuous network slicing control via scalable actor-critic learning. arXiv 2021, arXiv:2101.06654. [Google Scholar]
  52. Guan, W.; Zhang, H.; Leung, V.C. Customized slicing for 6G: Enforcing artificial intelligence on resource management. IEEE Netw. 2021, 35, 264–271. [Google Scholar] [CrossRef]
  53. Sulaiman, M.; Moayyedi, A.; Salahuddin, M.A.; Boutaba, R.; Saleh, A. Multi-agent deep reinforcement learning for slicing and admission control in 5G C-RAN. In Proceedings of the NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium, Budapest, Hungary, 25–29 April 2022; pp. 1–9. [Google Scholar] [CrossRef]
  54. Yan, Z.; Ge, J.; Wu, Y.; Li, L.; Li, T. Automatic virtual network embedding: A deep reinforcement learning approach with graph convolutional networks. IEEE J. Sel. Areas Commun. 2020, 38, 1040–1057. [Google Scholar] [CrossRef]
  55. Rkhami, A.; Hadjadj-Aoul, Y.; Outtagarts, A. Learn to improve: A novel deep reinforcement learning approach for beyond 5G network slicing. In Proceedings of the 2021 IEEE 18th Annual Consumer Communications and Networking Conference, CCNC 2021, Las Vegas, NV, USA, 9–12 January 2021. [Google Scholar] [CrossRef]
  56. Alves Esteves, J.J.; Boubendir, A.; Guillemin, F.; Sens, P. A Heuristically Assisted Deep Reinforcement Learning Approach for Network Slice Placement. IEEE Trans. Netw. Serv. Manag. 2022, 19, 4794–4806. [Google Scholar] [CrossRef]
  57. Kibalya, G.; Serrat, J.; Gorricho, J.L.; Pasquini, R.; Yao, H.; Zhang, P. A reinforcement learning based approach for 5G network slicing across multiple domains. In Proceedings of the 2019 15th International Conference on Network and Service Management (CNSM), Halifax, NS, Canada, 21–25 October 2019; pp. 1–5. [Google Scholar] [CrossRef]
  58. Aboeleneen, A.E.; Abdellatif, A.A.; Erbad, A.M.; Salem, A.M. ECP: Error-Aware, Cost-Effective and Proactive Network Slicing Framework. IEEE Open J. Commun. Soc. 2024, 5, 2567–2584. [Google Scholar] [CrossRef]
  59. Dandachi, G.; De Domenico, A.; Hoang, D.T.; Niyato, D. An Artificial Intelligence Framework for Slice Deployment and Orchestration in 5G Networks. IEEE Trans. Cogn. Commun. Netw. 2020, 6, 858–871. [Google Scholar] [CrossRef]
  60. Garrido, L.A.; Dalgkitsis, A.; Ramantas, K.; Verikoukis, C. Machine Learning for Network Slicing in Future Mobile Networks: Design and Implementation. In Proceedings of the 2021 IEEE International Mediterranean Conference on Communications and Networking, MeditCom 2021, Athens, Greece, 7–10 September 2021; pp. 23–28. [Google Scholar] [CrossRef]
  61. Rahmanian, G.; Shahhoseini, H.S.; Pozveh, A.H.J. A Review of Network Slicing in 5G and Beyond: Intelligent Approaches and Challenges. In Proceedings of the 2021 ITU Kaleidoscope: Connecting Physical and Virtual Worlds, ITU K 2021, Geneva, Switzerland, 6–10 December 2021. [Google Scholar] [CrossRef]
  62. Zhang, C.; Dongy, M.; Otay, K. Vehicular multi-slice optimization in 5G: Dynamic preference policy using reinforcement learning. In Proceedings of the Proceedings-IEEE Global Communications Conference, GLOBECOM, Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
  63. Quang, P.T.A.; Hadjadj-Aoul, Y.; Outtagarts, A. A Deep Reinforcement Learning Approach for VNF Forwarding Graph Embedding. IEEE Trans. Netw. Serv. Manag. 2019, 16, 1318–1331. [Google Scholar] [CrossRef]
  64. Zhao, L.; Li, L. Reinforcement learning for resource mapping in 5G network slicing. In Proceedings of the 2020 5th International Conference on Computer and Communication Systems, ICCCS 2020, Shanghai, China, 15–18 May 2020; pp. 869–873. [Google Scholar] [CrossRef]
  65. Phyu, H.P.; Naboulsi, D.; Stanica, R.; Poitau, G. Towards energy efficiency in RAN network slicing. In Proceedings of the 2023 IEEE 48th Conference on Local Computer Networks (LCN), Daytona Beach, FL, USA, 2–5 October 2023; pp. 1–9. [Google Scholar] [CrossRef]
  66. Phyu, H.P.; Naboulsi, D.; Stanica, R. ICE-CREAM: MultI-agent fully CooperativE deCentRalizEd frAMework for Energy Efficiency in RAN Slicing. IEEE Trans. Netw. Serv. Manag. 2025, 22, 1859–1873. [Google Scholar] [CrossRef]
  67. Thantharate, A.; Paropkari, R.; Walunj, V.; Beard, C. DeepSlice: A deep learning approach towards an efficient and reliable network slicing in 5G networks. In Proceedings of the 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 10–12 October 2019; pp. 0762–0767. [Google Scholar] [CrossRef]
  68. Xi, R.; Chen, X.; Chen, Y.; Li, Z. Real-Time resource slicing for 5G RAN via deep reinforcement learning. In Proceedings of the International Conference on Parallel and Distributed Systems-ICPADS, Tianjin, China, 4–6 December 2019; pp. 625–632. [Google Scholar] [CrossRef]
  69. Shome, D.; Kudeshia, A. Deep Q-learning for 5G network slicing with diverse resource stipulations and dynamic data traffic. In Proceedings of the 3rd International Conference on Artificial Intelligence in Information and Communication, ICAIIC 2021, Jeju Island, Republic of Korea, 13–16 April 2021; pp. 134–139. [Google Scholar] [CrossRef]
  70. Tang, J.; Duan, Y.; Zhou, Y.; Jin, J. Distributed slice selection-based computation offloading for intelligent vehicular networks. IEEE Open J. Veh. Technol. 2021, 2, 261–271. [Google Scholar] [CrossRef]
  71. Zhang, T.; Bian, Y.; Lu, Q.; Qi, J.; Zhang, K.; Ji, H.; Wang, W.; Wu, W. Supervised Learning Based Resource Allocation with Network Slicing. In Proceedings of the 2020 Eighth International Conference on Advanced Cloud and Big Data (CBD), Taiyuan, China, 5–6 December 2020; pp. 25–30. [Google Scholar] [CrossRef]
  72. Tsourdinis, T.; Chatzistefanidis, I.; Makris, N.; Korakis, T.; Nikaein, N.; Fdida, S. Service-aware real-time slicing for virtualized beyond 5G networks. Comput. Netw. 2024, 247, 110445. [Google Scholar] [CrossRef]
  73. Nassar, A.; Yilmaz, Y. Deep Reinforcement Learning for Adaptive Network Slicing in 5G for Intelligent Vehicular Systems and Smart Cities. IEEE Internet Things J. 2021, 9, 222–235. [Google Scholar] [CrossRef]
  74. Mei, J.; Wang, X.; Zheng, K. Intelligent network slicing for V2X services toward 5G. IEEE Netw. 2019, 33, 196–204. [Google Scholar] [CrossRef]
  75. Zhang, X.; Lu, W.; Li, B.; Zhu, Z. DRL-based network orchestration to realize cooperative, distributed and tenant-driven virtual network slicing. In Proceedings of the Optics InfoBase Conference Papers, Chengdu, China, 2–5 November 2019; Part F138-ACPC 2019. pp. 17–19. [Google Scholar]
  76. Lu, W.; Fang, H.; Zhu, Z. AI-assisted resource advertising and pricing to realize distributed tenant-driven virtual network slicing in inter-DC optical networks. In Proceedings of the 22nd Conference on Optical Network Design and Modelling, ONDM 2018-Proceedings, Dublin, Ireland, 14–17 May 2018; pp. 130–135. [Google Scholar] [CrossRef]
  77. Khodapanah, B.; Awada, A.; Viering, I.; Barreto, A.N.; Simsek, M.; Fettweis, G. Framework for slice-aware radio resource management utilizing artificial neural networks. IEEE Access 2020, 8, 174972–174987. [Google Scholar] [CrossRef]
  78. Wei, F.; Feng, G.; Sun, Y.; Wang, Y.; Qin, S.; Liang, Y.C. Network Slice Reconfiguration by Exploiting Deep Reinforcement Learning with Large Action Space. IEEE Trans. Netw. Serv. Manag. 2020, 17, 2197–2211. [Google Scholar] [CrossRef]
  79. Yang, H.; Zhan, K.; Bao, B.; Yao, Q.; Zhang, J.; Cheriet, M. Automatic guarantee scheme for intent-driven network slicing and reconfiguration. J. Netw. Comput. Appl. 2021, 190, 103163. [Google Scholar] [CrossRef]
  80. Rago, A.; Martiradonna, S.; Piro, G.; Abrardo, A.; Boggia, G. A tenant-driven slicing enforcement scheme based on Pervasive Intelligence in the Radio Access Network. Comput. Netw. 2022, 217, 109285. [Google Scholar] [CrossRef]
  81. Ayala-Romero, J.A.; Garcia-Saavedra, A.; Gramaglia, M.; Costa-Perez, X.; Banchs, A.; Alcaraz, J.J. vrAIn: A deep learning approach tailoring computing and radio resources in virtualized RANs. In Proceedings of the 25th Annual International Conference on Mobile Computing and Networking, Los Cabos, Mexico, 21–25 October 2019; pp. 1–16. [Google Scholar] [CrossRef]
  82. Xiang, H.; Yan, S.; Peng, M. A Realization of Fog-RAN Slicing via Deep Reinforcement Learning. IEEE Trans. Wirel. Commun. 2020, 19, 2515–2527. [Google Scholar] [CrossRef]
  83. Endes, A.; Yuksekkaya, B. 5G Network Slicing with Multi-Purpose AI models. In Proceedings of the 2022 IEEE International Black Sea Conference on Communications and Networking, BlackSeaCom 2022, Sofia, Bulgaria, 6–9 June 2022; pp. 20–25. [Google Scholar] [CrossRef]
  84. Abbas, K.; Khan, T.A.; Afaq, M.; Song, W.C. Network Slice Lifecycle Management for 5G Mobile Networks: An Intent-Based Networking Approach. IEEE Access 2021, 9, 80128–80146. [Google Scholar] [CrossRef]
  85. Wu, Z.X.; You, Y.Z.; Liu, C.C.; Chou, L.D. Machine Learning Based 5G Network Slicing Management and Classification. In Proceedings of the 6th International Conference on Artificial Intelligence in Information and Communication, ICAIIC 2024, Osaka, Japan, 19–22 February 2024; pp. 371–375. [Google Scholar] [CrossRef]
  86. Song, C.; Zhang, M.; Huang, X.; Zhan, Y.; Wang, D.; Liu, M.; Rong, Y. Machine learning enabling traffic-aware dynamic slicing for 5G optical transport networks. In Proceedings of the CLEO: Science and Innovations, San Jose, CA, USA, 13–18 May 2018; Optica Publishing Group: Washington, DC, USA, 2018; p. JTu2A–44. [Google Scholar] [CrossRef]
  87. Theodorou, V.; Lekidis, A.; Bozios, T.; Meth, K.; Fernandez-Fernandez, A.; Tavlor, J.; Diogo, P.; Martins, P.; Behravesh, R. Blockchain-based zero touch service assurance in cross-domain network slicing. In Proceedings of the 2021 Joint European Conference on Networks and Communications and 6G Summit, EuCNC/6G Summit 2021, Porto, Portugal, 8–11 June 2021; pp. 395–400. [Google Scholar] [CrossRef]
  88. Camargo, J.S.; Coronado, E.; Gomez, B.; Rincon, D.; Siddiqui, S. Design of AI-based Resource Forecasting Methods for Network Slicing. In Proceedings of the 2022 International Wireless Communications and Mobile Computing, IWCMC 2022, Dubrovnik, Croatia, 30 May–3 June 2022; pp. 1064–1069. [Google Scholar] [CrossRef]
  89. Kafle, V.P.; Martinez-Julia, P.; Miyazawa, T. Automation of 5G Network Slice Control Functions with Machine Learning. IEEE Commun. Stand. Mag. 2019, 3, 54–62. [Google Scholar] [CrossRef]
  90. Bega, D.; Gramaglia, M.; Fiore, M.; Banchs, A.; Costa-Perez, X. DeepCog: Optimizing resource provisioning in network slicing with AI-based capacity forecasting. IEEE J. Sel. Areas Commun. 2020, 38, 361–376. [Google Scholar] [CrossRef]
  91. Bega, D.; Gramaglia, M.; Fiore, M.; Banchs, A.; Costa-Perez, X. AZTEC: Anticipatory capacity allocation for zero-touch network slicing. In Proceedings of the IEEE INFOCOM 2020-IEEE Conference on Computer Communications, Toronto, ON, Canada, 6–9 July 2020; pp. 794–803. [Google Scholar] [CrossRef]
  92. Buyakar, T.V.K.; Agarwal, H.; Tamma, B.R.; Franklin, A.A. Resource allocation with admission control for GBR and delay QoS in 5G network slices. In Proceedings of the 2020 International Conference on COMmunication Systems & NETworkS (COMSNETS), Bengaluru, India, 7–11 January 2020; pp. 213–220. [Google Scholar] [CrossRef]
  93. Yan, M.; Feng, G.; Zhou, J.; Sun, Y.; Liang, Y.C. Intelligent resource scheduling for 5G radio access network slicing. IEEE Trans. Veh. Technol. 2019, 68, 7691–7703. [Google Scholar] [CrossRef]
  94. Tayyaba, S.K.; Khattak, H.A.; Almogren, A.; Shah, M.A.; Ud Din, I.; Alkhalifa, I.; Guizani, M. 5G vehicular network resource management for improving radio access through machine learning. IEEE Access 2020, 8, 6792–6800. [Google Scholar] [CrossRef]
  95. Monteil, J.B.; Hribar, J.; Barnard, P.; Li, Y.; DaSilva, L.A. Resource reservation within sliced 5G networks: A cost-reduction strategy for service providers. In Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops), Dublin, Ireland, 7–11 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  96. Cui, Y.; Huang, X.; Wu, D.; Zheng, H. Machine Learning based Resource Allocation Strategy for Network Slicing in Vehicular Networks. In Proceedings of the CIC International Conference on Communications in China, ICCC 2020, Chongqing, China, 9–11 August 2020; Volume 292, pp. 454–459. [Google Scholar] [CrossRef]
  97. Cui, Y.; Huang, X.; He, P.; Wu, D.; Wang, R. A Two-Timescale Resource Allocation Scheme in Vehicular Network Slicing. In Proceedings of the IEEE Vehicular Technology Conference, Helsinki, Finland, 25–28 April 2021. [Google Scholar] [CrossRef]
  98. Cui, Y.; Huang, X.; He, P.; Wu, D.; Wang, R. QoS Guaranteed Network Slicing Orchestration for Internet of Vehicles. IEEE Internet Things J. 2022, 9, 15215–15227. [Google Scholar] [CrossRef]
  99. Khan, H.; Majid Butt, M.; Samarakoon, S.; Sehier, P.; Bennis, M. Deep learning assisted CSI estimation for joint URLLC and eMBB resource allocation. In Proceedings of the 2020 IEEE International Conference on Communications Workshops (ICC Workshops), Dublin, Ireland, 7–11 June 2020. [Google Scholar] [CrossRef]
  100. Sapavath, N.N.; Rawat, D.B.; Song, M. Machine Learning for RF Slicing Using CSI Prediction in Software Defined Large-Scale MIMO Wireless Networks. IEEE Trans. Netw. Sci. Eng. 2020, 7, 2137–2144. [Google Scholar] [CrossRef]
  101. Matoussi, S.; Fajjari, I.; Aitsaadi, N.; Langar, R. Deep Learning based User Slice Allocation in 5G Radio Access Networks. In Proceedings of the IEEE Conference on Local Computer Networks (LCN), Sydney, NSW, Australia, 16–19 November 2020; pp. 286–296. [Google Scholar] [CrossRef]
  102. Jiang, W.; Anton, S.D.; Schotten, H.D. Intelligence Slicing: A Unified Framework to Integrate Artificial Intelligence into 5G Networks. In Proceedings of the 12th IFIP Wireless and Mobile Networking Conference, WMNC 2019, Paris, France, 11–13 September 2019; pp. 227–232. [Google Scholar] [CrossRef]
  103. Gutterman, C.; Grinshpun, E.; Sharma, S.; Zussman, G. RAN resource usage prediction for a 5G slice broker. In Proceedings of the International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), Catania, Italy, 2–5 July 2019; pp. 231–240. [Google Scholar] [CrossRef]
  104. Sciancalepore, V.; Costa-Perez, X.; Banchs, A. RL-NSB: Reinforcement learning-based 5G network slice broker. IEEE/ACM Trans. Netw. 2019, 27, 1543–1557. [Google Scholar] [CrossRef]
  105. Silva, F.S.; Silva, S.N.; da Silva, L.M.; Bessa, A.; Ferino, S.; Paiva, P.; Medeiros, M.; Silva, L.; Neto, J.; Costa, K.; et al. ML-based inter-slice load balancing control for proactive offloading of virtual services. Comput. Netw. 2024, 246, 110422. [Google Scholar] [CrossRef]
  106. Bouzidi, E.H.; Outtagarts, A.; Hebbar, A.; Langar, R.; Boutaba, R. Online based learning for predictive end-to-end network slicing in 5G networks. In Proceedings of the ICC 2020-2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–7. [Google Scholar] [CrossRef]
  107. Thantharate, A.; Beard, C. ADAPTIVE6G: Adaptive resource management for network slicing architectures in current 5G and future 6G systems. J. Netw. Syst. Manag. 2023, 31, 9. [Google Scholar] [CrossRef]
  108. Hu, Q.; Wang, W.; Bai, X.; Jin, S.; Jiang, T. Blockchain Enabled Federated Slicing for 5G Networks with AI Accelerated Optimization. IEEE Netw. 2020, 34, 46–52. [Google Scholar] [CrossRef]
  109. Li, R.; Zhao, Z.; Sun, Q.; Chih-Lin, I.; Yang, C.; Chen, X.; Zhao, M.; Zhang, H. Deep Reinforcement Learning for Resource Management in Network Slicing. IEEE Access 2018, 6, 74429–74441. [Google Scholar] [CrossRef]
  110. Wang, Y.; Liu, N.; Pan, Z.; You, X. AI-Based Resource Allocation in E2E Network Slicing with Both Public and Non-Public Slices. Appl. Sci. 2023, 13, 12505. [Google Scholar] [CrossRef]
  111. Moon, S.; Hirayama, H.; Tsukamoto, Y.; Nanba, S.; Shinbo, H. Ensemble Learning Method-Based Slice Admission Control for Adaptive RAN. In Proceedings of the 2020 IEEE Globecom Workshops (GC Wkshps), Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
  112. Hua, Y.; Li, R.; Zhao, Z.; Chen, X.; Zhang, H. GAN-Powered Deep Distributional Reinforcement Learning for Resource Management in Network Slicing. IEEE J. Sel. Areas Commun. 2020, 38, 334–349. [Google Scholar] [CrossRef]
  113. Chen, G.; Mu, X.; Shen, F.; Zeng, Q. Network Slicing Resource Allocation Based on LSTM-D3QN with Dual Connectivity in Heterogeneous Cellular Networks. Appl. Sci. 2022, 12, 9315. [Google Scholar] [CrossRef]
  114. Meng, S.; Wang, Z.; Ding, H.; Wu, S.; Li, X.; Zhao, P.; Zhu, C.; Wang, X. RAN Slice Strategy Based on Deep Reinforcement Learning for Smart Grid. In Proceedings of the 2019 Computing, Communications and IoT Applications, ComComAp 2019, Shenzhen, China, 26–28 October 2019; pp. 106–111. [Google Scholar] [CrossRef]
  115. Shi, Y.; Sagduyu, Y.E.; Erpek, T. Reinforcement Learning for Dynamic Resource Optimization in 5G Radio Access Network Slicing. In Proceedings of the IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks, CAMAD, Pisa, Italy, 14–16 September 2020. [Google Scholar] [CrossRef]
  116. Albonda, H.D.; Perez-Romero, J. An Efficient RAN Slicing Strategy for a Heterogeneous Network with eMBB and V2X Services. IEEE Access 2019, 7, 44771–44782. [Google Scholar] [CrossRef]
  117. Nouri, S.; Motalleb, M.K.; Shah-Mansouri, V.; Shariatpanahi, S.P. Semi-Supervised Learning Approach for Efficient Resource Allocation with Network Slicing in O-RAN. arXiv 2024, arXiv:2401.08861. [Google Scholar] [CrossRef]
  118. Zambianco, M.; Verticale, G. Spectrum allocation for network slices with inter-numerology interference using deep reinforcement learning. In Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC, London, UK, 31 August–3 September 2020. [Google Scholar] [CrossRef]
  119. Shao, Y.; Li, R.; Hu, B.; Wu, Y.; Zhao, Z.; Zhang, H. Graph Attention Network-Based Multi-Agent Reinforcement Learning for Slicing Resource Management in Dense Cellular Network. IEEE Trans. Veh. Technol. 2021, 70, 10792–10803. [Google Scholar] [CrossRef]
  120. Chergui, H.; Verikoukis, C. OPEX-Limited 5G RAN Slicing: An Over-Dataset Constrained Deep Learning Approach. In Proceedings of the IEEE International Conference on Communications, Dublin, Ireland, 7–11 June 2020. [Google Scholar] [CrossRef]
  121. Xu, Y.; Zhao, Z.; Cheng, P.; Chen, Z.; Ding, M.; Vucetic, B.; Li, Y. Constrained Reinforcement Learning for Resource Allocation in Network Slicing. IEEE Commun. Lett. 2021, 25, 1554–1558. [Google Scholar] [CrossRef]
  122. Liu, Y.; Ding, J.; Liu, X. A Constrained Reinforcement Learning Based Approach for Network Slicing. In Proceedings of the International Conference on Network Protocols, ICNP, Madrid, Spain, 13–16 October 2020. [Google Scholar] [CrossRef]
  123. Chen, X.; Zhao, Z.; Wu, C.; Bennis, M.; Liu, H.; Ji, Y.; Zhang, H. Multi-Tenant Cross-Slice Resource Orchestration: A Deep Reinforcement Learning Approach. IEEE J. Sel. Areas Commun. 2019, 37, 2377–2392. [Google Scholar] [CrossRef]
  124. Sun, G.; Boateng, G.O.; Ayepah-Mensah, D.; Liu, G.; Wei, J. Autonomous Resource Slicing for Virtualized Vehicular Networks with D2D Communications Based on Deep Reinforcement Learning. IEEE Syst. J. 2020, 14, 4694–4705. [Google Scholar] [CrossRef]
  125. Yu, K.; Zhou, H.; Qian, B.; Tang, Z.; Shen, X. A Reinforcement Learning Aided Decoupled RAN Slicing Framework for Cellular V2X. In Proceedings of the IEEE Global Communications Conference, GLOBECOM, Taipei, Taiwan, 7–11 December 2020; pp. 13–18. [Google Scholar] [CrossRef]
  126. Li, T.; Zhu, X.; Liu, X. An End-to-End Network Slicing Algorithm Based on Deep Q-Learning for 5G Network. IEEE Access 2020, 8, 122229–122240. [Google Scholar] [CrossRef]
  127. Akyildiz, H.A.; Gemici, O.F.; Hokelek, I.; Cirpan, H.A. Hierarchical Reinforcement Learning Based Resource Allocation for RAN Slicing. IEEE Access 2024, 12, 75818–75831. [Google Scholar] [CrossRef]
  128. Zhou, H.; Elsayed, M.; Erol-Kantarci, M. RAN Resource Slicing in 5G Using Multi-Agent Correlated Q-Learning. In Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC, Helsinki, Finland, 13–16 September 2021; pp. 1179–1184. [Google Scholar] [CrossRef]
  129. Wang, T.; Chen, S.; Zhu, Y.; Tang, A.; Wang, X. LinkSlice: Fine-Grained Network Slice Enforcement Based on Deep Reinforcement Learning. IEEE J. Sel. Areas Commun. 2022, 40, 2378–2394. [Google Scholar] [CrossRef]
  130. Li, R.; Wang, C.; Zhao, Z.; Guo, R.; Zhang, H. The LSTM-Based Advantage Actor-Critic Learning for Resource Management in Network Slicing With User Mobility. IEEE Commun. Lett. 2020, 24, 2005–2009. [Google Scholar] [CrossRef]
  131. Alcaraz, J.J.; Losilla, F.; Zanella, A.; Zorzi, M. Model-Based Reinforcement Learning with Kernels for Resource Allocation in RAN Slices. IEEE Trans. Wirel. Commun. 2023, 22, 486–501. [Google Scholar] [CrossRef]
  132. Dangi, R.; Lalwani, P. Optimizing network slicing in 6G networks through a hybrid deep learning strategy. J. Supercomput. 2024, 80, 20400–20420. [Google Scholar] [CrossRef]
  133. Lai, Y.; Yang, H.; Yang, C. Multi-resource network slicing with deep reinforcement learning for an optimal QoS satisfaction ratio. In Proceedings of the 2024 16th International Conference on Advanced Computational Intelligence (ICACI), Zhangjiajie, China, 16–19 May 2024; pp. 140–149. [Google Scholar] [CrossRef]
  134. Elmosilhy, N.A.; Elmesalawy, M.M.; Ibrahim, I.I.; El-Haleem, A.M. Joint Q-Learning Based Resource Allocation and Multi-Numerology B5G Network Slicing Exploiting LWA Technology. IEEE Access 2024, 12, 22043–22058. [Google Scholar] [CrossRef]
  135. Boutiba, K.; Bagaa, M.; Ksentini, A. Optimal radio resource management in 5G NR featuring network slicing. Comput. Netw. 2023, 234, 109937. [Google Scholar] [CrossRef]
  136. Liu, W.; Hossain, M.A.; Ansari, N.; Kiani, A.; Saboorian, T. Reinforcement Learning-Based Network Slicing Scheme for Optimized UE-QoS in Future Networks. IEEE Trans. Netw. Serv. Manag. 2024, 21, 3454–3464. [Google Scholar] [CrossRef]
  137. Gharehgoli, A.; Nouruzi, A.; Mokari, N.; Azmi, P.; Javan, M.R.; Jorswieck, E.A. AI-Based Resource Allocation in End-to-End Network Slicing under Demand and CSI Uncertainties. IEEE Trans. Netw. Serv. Manag. 2023, 20, 3630–3651. [Google Scholar] [CrossRef]
  138. Mei, J.; Wang, X.; Zheng, K. Semi-Decentralized Network Slicing for Reliable V2V Service Provisioning: A Model-Free Deep Reinforcement Learning Approach. IEEE Trans. Intell. Transp. Syst. 2022, 23, 12108–12120. [Google Scholar] [CrossRef]
  139. Chergui, H.; Blanco, L.; Verikoukis, C. CDF-Aware Federated Learning for Low SLA Violations in beyond 5G Network Slicing. In Proceedings of the IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021; pp. 1–6. [Google Scholar] [CrossRef]
  140. Raftopoulos, R.; D’Oro, S.; Melodia, T.; Schembra, G. DRL-Based Latency-Aware Network Slicing in O-RAN with Time-Varying SLAs. In Proceedings of the 2024 International Conference on Computing, Networking and Communications (ICNC), Big Island, HI, USA, 19–22 February 2024; pp. 737–743. [Google Scholar] [CrossRef]
  141. Alkhoury, G.; Berri, S.; Chorti, A. Deep Reinforcement Learning-Based Network Slicing Algorithm for 5G Heterogenous Services. In Proceedings of the IEEE Global Communications Conference, GLOBECOM, Kuala Lumpur, Malaysia, 4–8 December 2023; pp. 5190–5195. [Google Scholar] [CrossRef]
  142. Gupta, M.; Jha, R.K. Advanced network design for 6G: Leveraging graph theory and slicing for edge stability. Simul. Model. Pract. Theory 2025, 138, 103029. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
