Article

A Graph Convolutional Network-Based Fine-Grained Low-Latency Service Slicing Algorithm for 6G Networks †

College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
* Author to whom correspondence should be addressed.
This article is an extended version of a paper published in Wu, C.; Zhu, X.; Zhu, H. A Multi-Granularity and Low-Latency End-to-End Slicing Algorithm Based on GCN for 6G Networks. In Proceedings of the 2023 IEEE 23rd International Conference on Communication Technology (ICCT), Wuxi, China, 20–22 October 2023.
Sensors 2025, 25(10), 3139; https://doi.org/10.3390/s25103139
Submission received: 24 March 2025 / Revised: 8 May 2025 / Accepted: 13 May 2025 / Published: 15 May 2025
(This article belongs to the Section Sensor Networks)

Abstract

The future 6G (sixth-generation) mobile communication technology is required to support advanced network services capabilities such as holographic communication, autonomous driving, and the industrial internet, which demand higher data rates, lower latency, and greater reliability. Furthermore, future service classifications will become more fine-grained. To meet the requirements of these low-latency services with varying granularities, this work investigates fine-grained network slicing for low-latency services in 6G networks. A fine-grained network slicing algorithm for low-latency services in 6G based on GCNs (graph convolutional networks) is proposed. The goal is to minimize the end-to-end delay of network slicing while meeting the constraints of computational resources, communication resources, and the deployment of SFCs (service function chains). This algorithm focuses on the construction and deployment of network slices. First, due to the complexity and diversity of 6G networks, DAGs (Directed Acyclic Graphs) are used to represent network service requests. Then, based on the depth-first search algorithm, three types of SFCs of latency-type network slices are constructed according to the available computing and communication resources. Finally, the GCN-based low-latency service fine-grained network slicing algorithm is used to deploy SFCs. The simulation results show that the latency performance of the proposed algorithm outperforms that of the Double DQN and DQN algorithms across various scenarios, including changes in the number of underlying network nodes and variations in service sizes.

1. Introduction

The rapid progression of mobile communication technologies has heightened the necessity for the advanced exploration of network slicing architectures. Existing research predominantly concentrates on SFC mapping and deployment methodologies. Research on SFC deployment mostly focuses on the traditional chained SFC, and its structure and characteristics are discussed in depth. As outlined in [1], a node-ranking mechanism is implemented at the node mapping layer, leveraging network topology attributes and neighboring node significance. For link mapping optimization, a multi-constraint link mapping model is established, with genetic algorithms employed to identify optimal mapping paths, thereby enhancing resource allocation efficacy. In reference [2], a collaborative virtual node–link mapping algorithm is proposed to ensure the balanced utilization of the physical node and link resources in network slicing. Reference [3] introduces a radio access network slicing-based framework for user access control and wireless resource allocation, specifically tailored for smart grid applications. In [4], a novel delay-sensitive 5G access network slicing is proposed that can meet the demands of applications with different traffic characteristics, providing them with varying rate and delay guarantees to ensure efficient quality of service realization. However, with the continuous development of network technology and the increasing diversification of application requirements, the form and deployment mode of SFCs are characterized by more flexibility and complexity. In view of the complexity and diversity of current services, the traditional chained SFC is no longer able to fully describe and satisfy diverse service requests.
Concurrently, the effective utilization of network topology information has garnered increased attention. Current approaches predominantly utilize complex network theory to define node performance metrics through topological analysis. Reference [5] presents a graph-theory-driven resource scheduling algorithm incorporating depth-first search optimization, which satisfies latency, data rate, and reliability requirements for power services while reducing end-to-end delays. As outlined in [6], node importance metrics are derived from comprehensive topological evaluations of sliced and substrate networks, enabling precise node prioritization during mapping processes to optimize resource utilization.
The progressive evolution of communication technologies has exacerbated the limitations of conventional coarse-grained network slicing paradigms in addressing the escalating complexity of low-latency service requirements. This technological gap has driven substantial research focus toward fine-grained network slicing implementations. In the realization of network slicing technology, the choice of slice granularity is of high significance. References [7,8] show that if the granularity is too coarse, the flexibility of slices and the diversity and independence of user services will be limited, but if the granularity is too small, the management difficulty of different slices and the difficulty of sharing resources among slices will be increased. With the continuous development of various new services, the current demand for QoS is increasing, and in order to ensure better performance, it is necessary to move closer to a finer granularity. In reference [9], the concept of link-level network slicing is proposed, necessitating resource provisioning enforcement at the physical link stratum rather than conventional slice-level abstraction. This approach enables direct resource allocation to communication links through advanced fading mitigation and interference cancellation techniques, thereby theoretically guaranteeing QoS compliance. Nevertheless, per-service link-level slice instantiation inevitably incurs resource fragmentation, diminishing infrastructure utilization efficiency. Addressing massive IoT deployments in smart environments, reference [10] extends conventional 5G service taxonomies through micro-slice isolation mechanisms. The proposed architecture enables localized logical subnetworks between device clusters and application servers, featuring duration-aware and coverage-adaptive slice customization. Reference [11] investigates the problem of slice-based service function chain embedding (SBSFCE) to enable end-to-end network slice deployment. 
Simulations show that the algorithm outperforms the benchmark algorithm. Reference [12] uses machine learning to analyze network traffic and conducts experiments on different machine learning models. Based on these experimental results, a 5G slice management algorithm is proposed to optimize the slice resource. In reference [13], the author proposes an MEC network slice-aware smart queuing theoretical architecture for next-generation pre-6G networks by optimizing slice resources to improve performance metrics for both online and offline scenarios. While such micro-slices permit dynamic reconfiguration via application-layer policy orchestration with elevated automation levels, their inherent control-loop latencies render them suboptimal for stringent latency-sensitive applications. Consequently, systematic methodologies for constructing and deploying fine-grained latency-constrained network slices remain an open research challenge.
Current research paradigms in network slicing resource management predominantly address singular resource types or unilateral optimization of either RAN (radio access network) or core network components, neglecting cross-domain optimization frameworks and multi-resource interdependencies. In practical deployment scenarios, heterogeneous resource allocation across multiple slices exhibits multidimensional coupling characteristics, necessitating systematic investigation of joint multi-resource orchestration strategies integrated with end-to-end RAN–core network co-optimization. Furthermore, while existing SFC deployment studies have extensively examined conventional linear service function chains through structural and characteristic analyses, the evolution of network architectures and the diversification of application requirements have precipitated the emergence of hyper-connected service graphs. These graph-structured SFCs demonstrate enhanced topological flexibility but introduce non-trivial deployment complexity, rendering traditional chain-based models inadequate for contemporary service provisioning. Consequently, pioneering research into graph-theoretic SFC deployment mechanisms becomes imperative, particularly focusing on neighborhood-aware resource coordination algorithms and topology-adaptive scheduling paradigms to achieve precise service differentiation and infrastructure efficiency maximization.

2. System Model

The advent of 6G networks has been propelled by groundbreaking advancements in communication technologies, catalyzing the emergence of mission-critical applications with sub-millisecond latency constraints and microsecond-level timing synchronization requirements. Representative use cases, including relay protection systems [14], telesurgery platforms [15], and industrial cyber–physical control systems, impose differentiated network performance thresholds [16]. For instance, telemedicine applications mandate end-to-end latency below 20 ms, as exceeding this threshold could critically compromise surgical intervention viability. Similarly, differential protection mechanisms require 200 μs-level latency guarantees to ensure substation fault detection precision, where timing deviations exceeding 50 μs may trigger protection misoperations. These stringent requirements underscore the imperative to pioneer low-latency fine-grained network slicing architectures as a cornerstone technology for 6G network infrastructure.
The system architecture under investigation, as depicted in Figure 1, is formally structured through three principal strata: terminal devices, the radio access network, and the core network infrastructure. The RAN subsystem implements intelligent access technology selection mechanisms to optimize user equipment connectivity, while the core network orchestrates SFC mapping operations. Three distinct network slice categories are provisioned, all operating under latency-bound SLAs (Service-Level Agreements) but with differentiated computing and communication resource allocations: LSSs (latency-sensitive slices), RTSs (real-time slices), and NRTSs (non-real-time slices). Service reliability is strictly contingent upon the slice’s capability to satisfy predefined latency thresholds through resource reservation. Furthermore, application-specific SFCs are dynamically instantiated per slice category, ensuring service execution compliance with heterogeneous requirements. The architecture establishes deterministic E2E communication paths spanning from user terminals through base stations to core network elements, implementing coordinated resource reservation across all network strata.
Within the RAN domain, protocol layer function virtualization is achieved through DU (Distributed Unit) devices leveraging standardized server clusters, establishing a centralized processing pool. To address heterogeneous network slicing requirements, an adaptive VNF (Virtual Network Function) deployment framework is implemented across SFCs, enabling joint optimization of computational resource allocation and infrastructure utilization efficiency. Capitalizing on the inherent caching capabilities of RAN infrastructure, per-SFC packet scheduling mechanisms are provisioned at DU endpoints to refine network resource allocation granularity and maximize system throughput. In the core network stratum, the software-defined virtualization of physical switching/routing devices is realized via commodity server platforms. VNF services within SFCs are instantiated through cloud-native network functions deployed on VMs (Virtual Machines), with cross-VM elastic VNF orchestration enabling multidimensional service provisioning while maintaining optimal resource allocation efficiency across heterogeneous infrastructure.
This section conducts a systematic latency taxonomy analysis, establishing three service classification tiers with corresponding provisioning frameworks. Service instances are classified into three distinct categories: NRTS, RTS, and LSS. The classification criteria are defined through rigorous latency-bound thresholds: NRTSs accommodate applications with relaxed latency constraints exceeding 100 ms; RTSs require bounded latency between 10 ms and 100 ms to maintain service continuity; and LSSs mandate ultra-strict latency guarantees below 10 ms to support mission-critical operations. Each service category is subsequently associated with dedicated network slice configurations and resource reservation mechanisms to ensure service-specific QoS compliance. LSSs have the highest resource allocation level, with dedicated PRBs (Physical Resource Blocks) and computational resources allocated to them. RTSs have a medium resource allocation level, for which the system reserves a portion of bandwidth in advance. NRTSs have the lowest resource allocation priority, and their resources are allowed to be shared by other slices when idle.
The infrastructure network can be abstracted as an undirected weighted graph, denoted as $G_P = (N_P, E_P, C_P, B_P)$. In this model, $N_P$ represents the set of physical nodes, which consists of two types: wireless access network nodes ($N_a$) and core network nodes ($N_c$). $E_P$ represents the set of physical links, which includes two categories: physical links between wireless access network nodes ($E_a$) and physical links between core network nodes ($E_c$), each fulfilling distinct network connectivity requirements. $C_P$ denotes the set of computational resources of the physical nodes, while $B_P$ represents the set of bandwidth resources of the physical links. The network slice request set consists of three types of slices, denoted as $R_{NS}$, where $R_{NS} = R_{NRT} \cup R_{RT} \cup R_{CT}$. Here, $R_{NRT}$ represents non-real-time slices, $R_{RT}$ represents real-time slices, and $R_{CT}$ represents latency-sensitive slices. The set of slice types is denoted as $M = \{1, 2, \ldots, m\}$. Each request is represented as $G_R = (N_R, E_R, C_R, B_R, T_R)$. Each slice type consists of a specific set of corresponding VNFs. The VNF composition for slice $m$ is denoted as $N_I^{V_m} = \{N_1^{V_m}, N_2^{V_m}, \ldots, N_i^{V_m}\}$. In this section, a graph structure is employed to represent the SFC of service requests. Specifically, service requests are abstracted as a DAG $G_V = (N_V, E_V)$, where $N_V$ represents the set of virtual nodes and $E_V$ represents the set of virtual links.
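For illustration, a service request abstracted as a DAG $G_V = (N_V, E_V)$ can be represented with a plain adjacency structure and validated by topological sorting; the sketch below uses hypothetical VNF names (`f1`–`f4`) and is not the paper's implementation.

```python
from collections import defaultdict

# Hypothetical SFC request as a DAG: each edge (u, v) is a virtual link in
# E_V, pointing from a VNF to a VNF that consumes its output.
sfc_edges = [("f1", "f2"), ("f1", "f3"), ("f2", "f4"), ("f3", "f4")]

def topological_order(edges):
    """Return a topological ordering of the DAG's nodes, or None if the
    request graph contains a cycle (i.e., it is not a valid DAG)."""
    adj = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    queue = [n for n in nodes if indeg[n] == 0]
    order = []
    while queue:
        n = queue.pop()
        order.append(n)
        for m in adj[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order if len(order) == len(nodes) else None

order = topological_order(sfc_edges)
```

Any ordering returned respects the virtual-link dependencies, which is the property the slice construction in Section 2.2 relies on.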

2.1. Latency Definition

In the access network, assume that multiple RRUs (Remote Radio Units) are distributed within a region, forming the set $J = \{1, 2, \ldots, j\}$. To efficiently manage and utilize wireless resources, the total bandwidth $B$ Hz is divided into multiple PRBs, represented by the set $P = \{1, 2, \ldots, p\}$, where different PRBs are orthogonal to each other. Let $P_{j,l,u_m}$ denote the power allocated by RRU $j$ on PRB $l$ to user $u_m$ in slice $m$, and let $h_{j,l,u_m}$ represent the corresponding channel gain. It is important to note that a user can connect to more than one RRU, and different PRBs from different RRUs can be allocated to the same user. Therefore, the rate provided to user $u_m$ in slice $m$ by RRU $j$ on PRB $l$ can be expressed as
$$r_{j,l,u_m} = b \log_2\left(1 + \frac{P_{j,l,u_m} h_{j,l,u_m}}{\sum_{j' \in J, j' \neq j} \sum_{m' \in M} \sum_{v_{m'} \in U_{m'}, v_{m'} \neq u_m} P_{j',l,v_{m'}} h_{j',l,v_{m'}} + \sigma^2}\right),$$
where $b$ represents the bandwidth allocated by RRU $j$ on PRB $l$ to user $u_m$ in slice $m$, and $\sigma^2$ denotes the noise power.
For slice $m$, assume that the arrival of SFC packets follows a dynamic Poisson distribution, where the parameter $\lambda_m(t)$ varies over time to describe the changing arrival rate of packets in slice $m$. Additionally, the packet size is characterized by an exponential distribution, with a mean value of $\bar{P}_m$.
The rate $r_{j,l,u_m}$ provided by RRU $j$ on PRB $l$ to user $u_m$ in slice $m$ is considered the service rate $R_m(t)$ of the link. The average packet processing rate is then given by
$$V_m(t) = \frac{R_m(t)}{\bar{P}_m}.$$
Let the queue length of slice $m$ in the SFC at time slot $t$ be $q_m(t)$. The queue update equation for the SFC on the DU side is then expressed as
$$q_m(t+1) = \max\left(q_m(t) + a_m(t) - d_m(t),\ 0\right),$$
where $a_m(t) = \lambda_m(t) \cdot T_s$ represents the number of data packets arriving in the time slot, and $d_m(t) = V_m(t) \cdot T_s$ denotes the number of data packets processed in the time slot.
Based on Little's Law, the average number of objects in a system can be derived by calculating the product of the average arrival rate of the objects and their average residence time within the system. Therefore, the queuing delay can be expressed as
$$D_{queue}^m = \frac{q_m(t)}{\lambda_m(t)}, \quad m \in M.$$
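The queue dynamics and the Little's Law delay above can be sketched numerically as follows; the slot length value is hypothetical and serves only to exercise the two formulas.

```python
T_S = 0.01  # slot length T_s in seconds (hypothetical value)

def queue_update(q, lam, v, t_s=T_S):
    """q_m(t+1) = max(q_m(t) + a_m(t) - d_m(t), 0), where
    a_m(t) = lambda_m(t) * T_s packets arrive in the slot and
    d_m(t) = V_m(t) * T_s packets are processed in the slot."""
    a = lam * t_s
    d = v * t_s
    return max(q + a - d, 0.0)

def queuing_delay(q, lam):
    """Little's Law: average residence time = queue length / arrival rate."""
    return q / lam
```

For example, with an arrival rate of 100 packets/s and a processing rate of 80 packets/s, the queue grows by 0.2 packets per 10 ms slot, and the queuing delay grows accordingly.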
The processing time required by a physical network node $n$ after receiving a data packet of VNF $N_i^V$ is defined as the node processing latency. For slice $m$, this latency is expressed as
$$D_{proc}^m = \sum_{N_i^{V_m} \in N^V} \sum_{n^P \in N^P} \delta_i^n \frac{X_i^m}{R_n^m},$$
where $\delta_i^n \in \{0, 1\}$ is defined to indicate whether the $i$-th VNF is deployed on server $n$. Specifically, $\delta_i^n = 1$ if the $i$-th VNF is deployed on server $n$, and $\delta_i^n = 0$ otherwise. $R_n^m$ represents the computational processing capability of node $n$ in slice $m$, while $X_i^m$ denotes the packet size required by the $i$-th VNF $N_i^V$ in slice $m$.
The time required to transmit the data packets of VNF $N_i^V$ between physical network nodes through network links is referred to as the link transmission delay. The transmission delay for slice $m$ is expressed as
$$D_{tran}^m = \sum_{N_i^{V_m} \in N^V} \sum_{n^P \in N^P} \delta_i^n \delta_{i'}^{n'} h_{n,n'} \frac{X_i^m}{R_{n,n'}^m},$$
where $\delta_{i'}^{n'}$ indicates whether the $i'$-th VNF is deployed on server $n'$; $h_{n,n'}$ represents the number of hops between physical nodes $n$ and $n'$; and $R_{n,n'}^m$ denotes the transmission rate between physical nodes $n$ and $n'$.
The end-to-end delay for slice $m$ is equal to the sum of the queuing delay, the node processing delay, and the link transmission delay:
$$T^m = D_{queue}^m + D_{tran}^m + D_{proc}^m,$$
where $D_{queue}^m$ is the queuing delay of slice $m$ in the access network, $D_{proc}^m$ is the total node processing delay for slice $m$ in the access and core networks, and $D_{tran}^m$ is the total link transmission delay for slice $m$ in the access and core networks.
Therefore, the objective function is
$$\min\ T^m.$$
The constraints are
$$\sum_{n \in N^P} \delta_i^n(t) = 1, \quad \forall i \in N^V$$
$$\sum_{i \in N^V} \delta_i^n(t) \cdot C_i^v \leq C_n^p, \quad \forall n \in N^p$$
$$\sum_{i \in N^V} \left(\psi_{ij}^{nm}(t) + \psi_{ij}^{mn}(t)\right) B_{ij}^v \leq B_{nm}^p, \quad \forall n, m \in N^p,\ (n, m) \in L^P$$
$$\sum_{i \in N^V} \psi_{ij}^{nm}(t) - \sum_{i \in N^V} \psi_{ij}^{mn}(t) = \delta_i^n(t) - \delta_j^n(t), \quad \forall n, m \in N^p,\ (n, m) \in L^P$$
$$\delta_i^n(t) \in \{0, 1\}, \quad \forall i, j \in N^V,\ n, m \in N^p$$
$$\psi_{ij}^{nm}(t) \in \{0, 1\}, \quad \forall i, j \in N^V,\ n, m \in N^p$$
$$\theta_n(t) \in \{0, 1\}, \quad \forall n \in N^p$$
Equation (9) ensures that each VNF on the SFC can only be deployed on one generic server. Equation (10) ensures that the total computational resources required by the VNFs deployed on a given server do not exceed the total computational resources available. Equation (11) guarantees that the sum of the bandwidth resources required by all virtual links mapped onto physical links does not exceed the total bandwidth of the physical link. Equation (12) requires that when two adjacent VNFs on the SFC are deployed on servers $n$ and $m$, respectively, there must be at least one connection path between physical nodes $n$ and $m$. Equations (13)–(15) use binary variables to represent the deployment status of the VNFs, the mapping of the virtual links, and the utilization of the generic servers, respectively.
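As a minimal sketch, the placement constraints of Equations (9) and (10) can be checked for a candidate deployment as follows; the matrix layout and the function name are illustrative, not the paper's notation.

```python
def placement_feasible(delta, cpu_req, cpu_cap):
    """Check a candidate VNF placement against constraints (9)-(10).

    delta[i][n] in {0, 1}: 1 iff VNF i is deployed on server n.
    cpu_req[i]: computational resources required by VNF i (C_i^v).
    cpu_cap[n]: computational capacity of server n (C_n^p).
    """
    n_vnf = len(delta)
    n_srv = len(cpu_cap)
    # (9): each VNF is deployed on exactly one generic server
    if any(sum(delta[i]) != 1 for i in range(n_vnf)):
        return False
    # (10): total CPU demand on each server stays within its capacity
    for n in range(n_srv):
        load = sum(delta[i][n] * cpu_req[i] for i in range(n_vnf))
        if load > cpu_cap[n]:
            return False
    return True
```

A placement that stacks both VNFs of a two-VNF chain onto one undersized server, for instance, violates Equation (10) and is rejected.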

2.2. Service Function Chain Construction

Two characteristic relationships are observed between network functions: dependency-based and autonomy-preserving topologies. Upon receiving service requests, the corresponding SFC is initially instantiated through formal graph composition rules [17]. Subsequently, a resource-aware mapping algorithm is implemented to optimally place the VNFs along the SFC onto infrastructure nodes, achieving computational load balancing while maintaining service continuity constraints. Virtual machine interconnects are dynamically provisioned based on SFC requirements, establishing end-to-end service paths with customized topological configurations [18]. Each network function is mathematically characterized by two key parameters: $\lambda_f$, denoting the output-to-input bandwidth ratio, and $\mu_f$, representing the computational resource consumption per 1 Mb/s traffic unit. These parameters remain invariant during data plane processing, strictly adhering to preconfigured network function profiles.
In the process of constructing the SFC, the algorithm first analyzes each service request of users with precision and efficiency, independently constructing a network function dependency graph for each user [19]. Subsequently, the algorithm uses these dependency graphs as input to build an SFC tailored to meet the specific needs of each user.
The network function dependency graph used in this section is shown in Figure 2.
The dashed line from $f_2$ to $f_1$ indicates that, in the dependency relationship between $f_2$ and $f_1$, $f_2$ depends on $f_1$, meaning that $f_2$ must be positioned after $f_1$ when constructing the service chain. Here, $\lambda_f$ represents the ratio of output to input bandwidth, and $\mu_f$ refers to the computational resources required to process a 1 Mb/s flow.
Based on Figure 2, multiple SFC construction schemes can be generated. One such successfully constructed SFC, which satisfies both the network function dependencies and the service requirements, is shown in Figure 3. In the service request, the key information includes the source node $S$, the destination node $T$, the initial bandwidth requirement $B_{ini}$, and the VNF set.
In a slice network, the set of network functions is represented by $F$, which includes all the available VNFs on the slice. To clearly describe the dependencies between these functions, a matrix $D$ with $|F|$ rows and $|F|$ columns is used to represent them:
$$D(f_i, f_j) = \begin{cases} 1, & f_i \text{ depends on } f_j \\ 0, & \text{otherwise} \end{cases}, \quad 0 \leq i, j < |F|$$
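The dependency matrix $D$ can be built directly from a list of functions and their declared dependencies; the sketch below uses hypothetical function names and a plain nested-list matrix.

```python
def dependency_matrix(functions, depends_on):
    """Build the |F| x |F| matrix D with D[i][j] = 1 iff f_i depends on f_j.

    functions: ordered list of VNF names (defines the row/column indices).
    depends_on: dict mapping a VNF to the list of VNFs it depends on.
    """
    idx = {f: k for k, f in enumerate(functions)}
    n = len(functions)
    D = [[0] * n for _ in range(n)]
    for f, deps in depends_on.items():
        for g in deps:
            D[idx[f]][idx[g]] = 1
    return D
```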
Here, $C_{total}^k$ represents the total computational resources required by the VNFs on $C_K$, where $C_K$ refers to the $K$-th network request. Therefore, the following holds:
$$C_{total}^k = \sum_{0 \leq i < |F_k|} B_{i-1}^k \lambda_{i-1} \mu_i,$$
Likewise, $B_{total}^k$ represents the total bandwidth resources required by the VNFs in $C_K$. Thus, the relationship can be expressed as
$$B_{total}^k = B_{ini} + \sum_{0 \leq i < |F_k|} B_{i-1}^k \lambda_i,$$
Finally, $V_k$ represents the evaluation value of the SFC constructed for $C_K$ in terms of both computational and bandwidth resources:
$$V_k = \alpha C_{total}^k + \beta B_{total}^k.$$
Finally, the SFC corresponding to different evaluation values is selected based on the actual requirements.
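One possible reading of Equations (17)–(19) is that the bandwidth entering each VNF is scaled by its $\lambda$ and that its CPU cost accumulates via $\mu$ per unit of input flow; the sketch below follows that reading with hypothetical parameter values, and is not a definitive implementation of the paper's evaluation.

```python
def chain_evaluation(B_ini, lambdas, mus, alpha=0.5, beta=0.5):
    """Evaluate one SFC construction: propagate bandwidth through the chain
    (each VNF scales its input flow by lambda_f), accumulate computational
    demand (mu_f per Mb/s of input), and combine the two totals with the
    weights alpha and beta as in V_k = alpha*C_total + beta*B_total."""
    B = B_ini           # flow entering the current VNF
    C_total = 0.0
    B_total = B_ini     # bandwidth on the first virtual link
    for lam, mu in zip(lambdas, mus):
        C_total += B * mu   # CPU needed to process the current input flow
        B = B * lam         # output bandwidth of this VNF
        B_total += B        # bandwidth reserved on the next virtual link
    return alpha * C_total + beta * B_total
```

Setting $\alpha = 1, \beta = 0$ (or vice versa) isolates the computational or bandwidth component, so a candidate chain can be scored against either resource alone.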
The core idea of the SFC construction algorithm based on the depth-first search is as follows: Firstly, the dependencies and constraints between the VNFs within the sliced network are extracted and represented in a tree structure, where each node represents a VNF and the connections between the nodes reflect the dependencies among the VNFs. Secondly, due to the dependencies between the VNFs, they are positioned at different hierarchical levels. The algorithm selects the VNF node at the highest level as the starting point. Then, starting from this node, the algorithm traverses the tree in reverse order to reach the root node. This backtracking process aims to determine the initial path of the SFC, which is the path connecting the source node to the highest-level VNF. Once the initial SFC is determined, the algorithm proceeds to further search and add the associated sibling nodes.
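The ordering discipline described above, where each VNF is placed only after every VNF it depends on, can be sketched as a depth-first post-order traversal of the dependency structure; the recursion below is an illustrative simplification of Algorithm 1, with hypothetical VNF names.

```python
def build_chain_dfs(functions, depends_on):
    """Sketch of DFS-based chain construction: visit each VNF's
    dependencies first (depth-first), then append the VNF itself, so the
    resulting chain respects every dependency edge."""
    visited = set()
    chain = []

    def visit(f):
        if f in visited:
            return
        visited.add(f)
        for dep in depends_on.get(f, []):  # dependencies come first
            visit(dep)
        chain.append(f)

    for f in functions:
        visit(f)
    return chain
```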
The SFC mapping problem has been proven to be NP-hard, meaning that finding the optimal solution is computationally challenging [20]. The objective of the optimization is to minimize latency by integrating the resource and topology information of the entire infrastructure network and the constructed SFCs, forming a comprehensive system state. The SFC mapping problem is inherently a complex and computationally intensive decision-making process, which aligns with the definition of an MDP (Markov Decision Process) [21]. Given that MDPs are effective in describing decision-making problems with state transitions and reward mechanisms, reinforcement learning methods can be employed to solve this problem. The interaction process between the agent and the environment is shown in Figure 4.
The MDP consists of five key components, which can be abstractly represented as ( S , A , P , R , γ ) . S represents the state space, A denotes the action space, P represents the state transition probability, R is the reward value, and γ represents the discount factor. At each time step τ , the DRL (Deep Reinforcement Learning) agent observes the state S τ and selects an action a τ . After the agent acts, the environment transitions to a new state S τ + 1 , and the agent receives a reward r τ .
The state $s_t$ represents the environment at a specific time $t$, which can be either discrete or continuous data. The set of all possible states forms the state space $S$. An action $a_t$ describes the behavior executed by the agent at a particular moment $t$, and the collection of all possible actions constitutes the action space $A$. The policy $\pi(a_t|s_t)$ is a function that determines the next action $a_t$ based on the current state $s_t$. The state transition probability $p(s_{t+1}|s_t, a_t)$ indicates the likelihood that the environment transitions to state $s_{t+1}$ after the agent takes action $a_t$ in state $s_t$ at time $t$. The reward $R_t$ is the feedback the environment provides after the agent performs action $a_t$ in state $s_t$. This reward is not only related to the current action but is also closely tied to the resulting state $s_{t+1}$ at the next time step.
To evaluate the performance of the policy $\pi(a_t|s_t)$, the agent aims to maximize its long-term expected return by executing a sequence of actions. To achieve this objective, the concept of the state–action value function is introduced, which quantifies the long-term reward for taking a specific action in a given state. By properly defining and optimizing this function, the agent can make more informed decisions and thereby achieve higher returns. The state–action value function $Q^\pi(s, a)$ represents the expected return when the agent, starting from state $s_t$, takes action $a_t$ under the policy $\pi(a_t|s_t)$. Its mathematical expression is given as follows: $Q^\pi(s, a) = E_\pi(G_t \mid s_t = s, a_t = a)$. Here, $G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots$, and $\gamma$ is the discount factor for future rewards, with $\gamma \in [0, 1)$. The term $G_t$ represents the discounted total return. The above value function can be decomposed using the Bellman equation as follows:
$$Q^\pi(s_t, a_t) = R_t + \gamma \sum_{s_{t+1} \in S} P(s_{t+1}|s_t, a_t) \sum_{a_{t+1} \in A} \pi(a_{t+1}|s_{t+1}) Q^\pi(s_{t+1}, a_{t+1}).$$
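The Bellman decomposition above can be sketched as a single tabular backup over dictionaries; the tiny one-state MDP used here is purely illustrative.

```python
def bellman_backup(Q, R, P, policy, gamma, s, a, states, actions):
    """One Bellman backup: Q^pi(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a)
    * sum_a' pi(a'|s') * Q(s',a').

    Q, R: dicts keyed by (state, action); policy: dict keyed by
    (state, action) giving pi(a|s); P: dict keyed by (s, a, s')."""
    future = 0.0
    for s2 in states:
        p = P.get((s, a, s2), 0.0)
        if p == 0.0:
            continue
        v = sum(policy[(s2, a2)] * Q[(s2, a2)] for a2 in actions)
        future += p * v
    return R[(s, a)] + gamma * future
```

Iterating this backup to a fixed point recovers $Q^\pi$ for the given policy, which is the quantity the DRL agent approximates with a neural network.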

2.3. Algorithmic Analysis

The following section illustrates the system states, actions, and reward functions in the deployment of SFCs for low-latency services within fine-grained network slicing in 6G networks. A GCN is a neural network that can directly act on domain graph structural data and can fully utilize its structural information. In the realization of network slicing technology, network topology is crucial, and a GCN can effectively handle complex network topology structure information to better realize the construction and deployment of network slices [22]. It can rely on its powerful capabilities to help in multiple stages of the network slicing orchestration and deployment process, such as slicing design [23], slicing deployment [24], and slicing performance and alarm monitoring [25]. Based on this, a GCN-based algorithm for 6G network low-latency service fine-grained network slicing is proposed.
A. 
State Space
The state space provides a comprehensive description of the various resources across the network and the current processing state of each VNF, and is defined as $S(t) = \{C(t), M(t), B(t), \delta(t)\}$. At time $t$, $C(t)$ represents the remaining computational resource vector across all nodes; $M(t)$ shows the remaining storage resources at the nodes; and $B(t)$ is the vector of the remaining bandwidth on the links between nodes. If two nodes are not directly connected by a link, the corresponding value of $B_{n_x n_y}^{res}$ remains 0; $\delta(t)$ is a vector composed of binary variables that represent the mapping states of each node, describing the mapping states of the VNFs within the entire SFC.
B. 
Action Space
When selecting the next node for mapping, the set of selectable nodes consists of all the neighbor nodes directly connected to the current node via physical links. The action space is determined and constructed by all the VNFs currently mapped on the nodes. The vector $A(t)$ represents the action space at time $t$ and is defined as $A(t) = [A_{n_1}(t), A_{n_2}(t), \ldots, A_{n_{|N|}}(t)]$, where $A_{n_x}(t)$ denotes the set of possible next-hop actions for all the VNFs already mapped at node $n_x$.
C. 
Reward Function Definition
The agent continuously receives rewards r τ from the external environment to enhance its performance and train its neural network, rather than following predefined labels. Actions that result in lower latency are considered better, and the environment provides a higher positive reward for such actions. In contrast, infeasible actions (i.e., those that violate at least one constraint) are regarded as incorrect, and the environment returns a reward of 0. Thus, the reward function guides the algorithm toward optimization in the direction of minimizing latency. Based on the previous discussion, the reward r τ for a given action a τ at time τ is defined as follows:
$$r_\tau = \begin{cases} \dfrac{1}{T}, & \text{feasible actions} \\ 0, & \text{infeasible actions} \end{cases}$$
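This reward rule is trivially expressible in code; the sketch below makes explicit that lower end-to-end delay yields a strictly larger positive reward, while any constraint violation yields 0.

```python
def reward(total_delay, feasible):
    """r_tau = 1/T for feasible actions (T = end-to-end delay of the
    resulting deployment), 0 for infeasible actions, so the agent is
    driven toward low-latency, constraint-satisfying placements."""
    return 1.0 / total_delay if feasible else 0.0
```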
To integrate the Double DQN with the GCN, the policy and target value networks of the Double DQN are improved. The features of the network nodes are organized into an $N \times D$ matrix $X$. The connections between the nodes are represented by an $N \times N$ matrix $A$, referred to as the adjacency matrix. The adjacency matrix $A$ and the feature matrix $X$ together form the input data for the proposed model, which is used in the subsequent analyses and computation.
In this section, modifications are made to the policy network and the target value network in accordance with the characteristics of the GCN. The layer structure of the neural network is modified to use two convolutional layers and two activation layers, with ReLU as the activation function. The propagation formula between the layers of the GCN is as follows: $H^{(l+1)} = \sigma(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)})$, where $\tilde{A} = A + I$, $I$ is the identity matrix, and $\tilde{D}$ is the degree matrix of $\tilde{A}$. $H^{(l)}$ represents the features at layer $l$, $W^{(l)}$ denotes the weight matrix for the connected edges, and $\sigma$ is the nonlinear activation function. The architecture of the GCN-based low-latency service fine-grained network slicing algorithm is shown in Figure 5.
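The propagation rule above can be sketched without any deep learning framework, using plain nested lists as matrices; this is a minimal single-layer illustration, not the trained networks of the proposed model.

```python
import math

def gcn_layer(A, H, W, relu=True):
    """One GCN propagation step: H' = sigma(D~^-1/2 (A+I) D~^-1/2 H W).

    A: N x N adjacency matrix; H: N x D feature matrix; W: D x D' weights,
    all as nested lists. sigma is ReLU when relu=True."""
    n = len(A)
    # A~ = A + I (add self-loops)
    At = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    # D~^-1/2 as a vector of inverse square roots of the row degrees
    dinv = [1.0 / math.sqrt(sum(row)) for row in At]
    # S = D~^-1/2 A~ D~^-1/2 (symmetric normalization)
    S = [[dinv[i] * At[i][j] * dinv[j] for j in range(n)] for i in range(n)]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    out = matmul(matmul(S, H), W)
    if relu:
        out = [[max(0.0, v) for v in row] for row in out]
    return out
```

On a two-node graph with identical features, the symmetric normalization leaves the features unchanged, which is a convenient sanity check.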
The Double DQN further optimizes the Q-value estimation by decoupling the action selection process from the Q-value computation process, addressing the “overestimation” issue in the DQN. Meanwhile, the GCN, by introducing the adjacency matrix, enhances the ability to extract features from the graph. The specific execution steps of the GCN-based low-latency service fine-grained slicing algorithm in a 6G network are outlined as follows:
Step 1: Initialize the capacity of the experience pool and set the initial weights of the Q-value network and the target-value network.
Step 2: In each training episode, construct and map the service function chain for the current service according to Algorithm 1, and repeat Steps 3 through 5 until the network training converges.
Step 3: Given the current network state S(t), select and execute an action A(t) from the action space according to the predefined ε-greedy strategy, then observe the network transition to the new state S(t+1).
Step 4: Obtain the reward value R t from the executed action and update the Q-value network parameters. Then, update the weights of the target value network.
Step 5: Store the sample (S(t), A(t), S(t+1), R_t) in the experience pool, from which minibatches are later drawn at random for model training and updating.
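The experience pool of Steps 1 and 5 and the ε-greedy selection of Step 3 can be sketched in Python (the class and function names are ours, not taken from the paper's implementation):

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience pool: stores (S, A, S_next, R) transitions up to a
    fixed capacity; old samples are evicted first."""
    def __init__(self, capacity: int):
        self.buf = deque(maxlen=capacity)

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size: int):
        # Random minibatch used for training and updating the model
        return random.sample(self.buf, min(batch_size, len(self.buf)))

def epsilon_greedy(q_values, epsilon: float, rng=random):
    """With probability epsilon explore a random action; otherwise
    exploit by taking argmax_a Q(S, a)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With ε = 0 the agent is purely greedy; annealing ε downward over training trades early exploration for later exploitation.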
Algorithm 1: A Depth-First Search-Based Algorithm for Constructing Low-Latency Fine-Grained Network Slicing in 6G Networks
Input: F, D
Output: SFC construction scheme and its evaluation value V k
1: Transform the dependency set D into a tree structure
2: for k ← 0 to |c| do
3:   Starting from the node hosting q_{m_k,h}, collect its parent node, grandparent node, and so on up to the root node into the set q_node;
4:   q_init = q_node ∩ F_k;
5:   Assign the remaining elements of F_k to the set q_rem;
6:   Deepfc.copy(q_init);
7:   for i ← 0 to Temp.size() do
8:     Temp.copy(Deepfc);
9:     Deepfc.clear();
10:    for j ← 0 to Temp.size() do
11:      for m ← 0 to Temp[j].size() do
12:        Extend Temp[j] into an SFC according to the residual node rule;
13:        Deepfc.copy(Temp[j]);
14:        Delete the node just added to Temp[j];
15:      end for
16:    end for
17:  end for
18: Compute the required computing and bandwidth resources using Equations (17) and (18)
19: Compute the evaluation value V_k using Equation (19)
20: end for
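Line 3 of Algorithm 1, which walks from the node hosting a function up through its ancestors to the root, can be sketched as follows, assuming the dependency tree is stored as a child-to-parent map (a representation we choose for illustration; the function name is ours):

```python
def ancestor_chain(parent, node):
    """Collect a node and all of its ancestors (parent, grandparent, ...)
    up to the root of the dependency tree. `parent` maps each non-root
    node to its parent; the root has no entry."""
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain
```

The returned chain corresponds to the set q_node from which the initial partial SFC q_init is formed.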

3. Results

The time and space complexity of the proposed algorithm can be analyzed in two parts: Algorithm 1 has time complexity O(|F|³) and space complexity O(|F|), and Algorithm 2 has time complexity O(T·(|F|³ + |A|)) and space complexity O(C·(|F| + |E|)), where |A| is the size of the action space, T is the number of training rounds, |E| is the number of dependency edges, and C is the capacity of the experience replay pool. The total time complexity of the algorithm is therefore O(|F|³ + T·(|F|³ + |A|)) and the total space complexity is O(|F| + C·(|F| + |E|)). The algorithm thus runs in polynomial time and is suitable for small- and medium-sized network problems.
Algorithm 2: A GCN-Based Algorithm for Constructing Low-Latency Fine-Grained Network Slicing in 6G Networks
1: Initialize the replay memory to capacity D.
2: Initialize the Q-network Q with random weights θ.
3: Initialize the target Q-network Q̂ with weights θ̂ = θ.
4: for episode t = 1 to T do
5:   According to Algorithm 1, generate a dedicated network SFC with source and end nodes for the current service.
6:   With probability ε select a random action A(t); otherwise select A(t) = argmax_a Q(S(t), a; θ).
7:   Perform action A(t), obtain the reward R_t, and observe the next state S(t+1).
8:   Store the transition (S(t), A(t), S(t+1), R_t) in the replay memory.
9:   Sample a random minibatch of transitions (S(j), A(j), S(j+1), R_j) from the replay memory.
10:  y_j = R_j + γ Q̂(S(j+1), argmax_a Q(S(j+1), a; θ); θ̂).
11:  Perform a gradient descent step on (y_j − Q(S(j), A(j); θ))² with respect to the network parameters θ.
12:  Every fixed number of steps, reset θ̂ = θ.
13: end for
13: end for
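Line 10, the Double DQN target, is the step that decouples action selection from evaluation: the online network picks the action, and the target network scores it. A minimal sketch (the function name and the terminal-state handling are ours):

```python
import numpy as np

def double_dqn_target(r, q_next_online, q_next_target, gamma=0.9,
                      done=False):
    """Double DQN target: y = r + gamma * Q_hat(s', argmax_a Q(s', a)).
    The online network selects a*, the target network evaluates it,
    which mitigates the DQN's Q-value overestimation."""
    if done:
        return r
    a_star = int(np.argmax(q_next_online))    # selection: online Q
    return r + gamma * q_next_target[a_star]  # evaluation: target Q-hat
```

A plain DQN would instead use max(q_next_target) directly, coupling selection and evaluation in the same (noisy) network.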
The underlying random physical network topology is constructed with the NetworkX library, a Python package (Python 3.9 in our setup). The topology is generated as a Barabási–Albert network, and the edge weights fluctuate over time according to a sinusoidal function. The other simulation parameters are listed in Table 1.
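A minimal sketch of this setup (the node count, base-weight rule, and exact sinusoidal formula are illustrative assumptions, not the paper's settings):

```python
import math
import networkx as nx

def build_topology(n_nodes=50, m_edges=3, t=0.0, seed=42):
    """Random physical topology as a Barabasi-Albert graph; each edge
    weight fluctuates sinusoidally around a base value as the simulated
    time t advances."""
    g = nx.barabasi_albert_graph(n_nodes, m_edges, seed=seed)
    for i, (u, v) in enumerate(g.edges()):
        base = 1.0 + (i % 5)  # illustrative static base weight
        g[u][v]["weight"] = base + 0.5 * math.sin(t + i)
    return g
```

Re-invoking the builder with increasing t yields the time-varying edge weights that the slicing agent must adapt to.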
The following section compares the proposed GCN-based low-latency fine-grained network slicing algorithm for 6G networks with the DQN and Double DQN algorithms.
As shown in Figure 6, the reward value in this study is defined as the inverse of the transmission time. Sixth-generation networks must support highly dynamic, low-latency services, so algorithms must quickly adjust their strategies in a rapidly changing network environment. The convergence speed reflects the time an algorithm needs to reach a stable near-optimal state from its initial state and is a key indicator of its real-time and adaptive capabilities. The simulation results show that the DQN algorithm converges after 400 training steps and the Double DQN after 300 steps, whereas the proposed algorithm converges in just 250 steps. In addition, the proposed algorithm improves the reward by 25.74% and 8.50% over the DQN and Double DQN, respectively. This makes it better suited to resource-sensitive communication scenarios than the DQN and Double DQN.
Furthermore, the three algorithms are compared on delay data under five different infrastructure network node configurations; the results are shown in Figure 7. As the number of infrastructure nodes increases, the end-to-end delay increases, but the delay of the GCN-based fine-grained slicing algorithm remains lower than that of both the Double DQN and DQN algorithms. This is because the proposed algorithm explicitly incorporates the network topology, giving it deeper message propagation and better scalability, so adding nodes has less impact on its latency. The GCN-based algorithm is therefore more stable as the number of nodes in the network topology changes.
Figure 8 and Figure 9 compare the end-to-end delay of service execution under the three algorithms for service sizes of 25 and 75, respectively. Service size plays a crucial role in resource allocation: a larger service typically requires more computational and bandwidth resources, placing greater demands on the dynamism and coordination of the allocation strategy. For each scenario, five sets of average end-to-end latency data were collected over five rounds of training, each consisting of 200 time steps. As the figures show, the end-to-end latency of all three algorithms increases as the service data packet grows. The GCN-based algorithm reduces latency by 26.92% over the DQN and 18.77% over the Double DQN at a service size of 25, and by 28.82% and 22.41%, respectively, at a service size of 75. This is because the proposed algorithm leverages the spectral properties of the graph, operating on the Laplacian matrix to update node features, which enhances its computational capability. The end-to-end latency of the GCN-based 6G low-latency service fine-grained slicing algorithm is therefore lower than that of both the Double DQN and DQN algorithms.

4. Conclusions

In this paper, we propose a GCN-based fine-grained network slicing algorithm for low-latency services in 6G networks. First, a finer-grained latency-type slicing approach is introduced that considers both the computing and communication resources of the RAN and core network when calculating end-to-end latency. The simulation results show that the algorithm fully leverages network topology information, incorporating local and neighboring characteristics, and significantly reduces end-to-end latency. Compared with the DQN and Double DQN algorithms, the proposed algorithm better models the network topology and improves the effective utilization of resources by capturing both local and global resource dependencies.
Additionally, a deployment method for the constructed network slices is presented. During deployment, the diversified and complex 6G service requests are represented as DAGs. This DAG-based representation accurately models complex 6G service dependencies: by abstracting service requests as DAGs, the algorithm captures intricate VNF interdependencies and topological constraints, facilitating efficient SFC construction, while the depth-first search-based SFC generation mechanism ensures compliance with hierarchical functional dependencies. Finally, the graph feature extraction capabilities of the GCN are used to handle the challenges posed by the DAG structure, enabling effective deployment of the SFC.
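As a toy illustration of the DAG-based request model (the VNF names and dependencies below are invented for the example, not drawn from the paper):

```python
import networkx as nx

# A service request as a DAG of VNF dependencies: the firewall must
# precede both the NAT and the IDS, and both must precede the load
# balancer. Edges point from prerequisite VNF to dependent VNF.
req = nx.DiGraph([("FW", "NAT"), ("FW", "IDS"),
                  ("NAT", "LB"), ("IDS", "LB")])
assert nx.is_directed_acyclic_graph(req)

# Any topological order of the DAG is a dependency-respecting
# ordering from which an SFC can be constructed.
order = list(nx.topological_sort(req))
```

Because the DAG admits several valid topological orders (NAT and IDS may swap), multiple candidate SFCs exist, which is why Algorithm 1 enumerates and scores them.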
In summary, this work advances 6G network slicing techniques by addressing the limitations of coarse-grained resource allocation and linear SFC models. Our future research will explore real-world testbed implementations and extend the framework to support more complex 6G network slicing scenarios.

Author Contributions

Conceptualization, Y.Y.; Methodology, Y.Y.; Software, Y.Y., C.Z. and C.W.; Formal analysis, Y.Y.; Data curation, Y.Y.; Writing—original draft, Y.Y. and C.Z.; Writing—review & editing, C.Z., C.W. and X.Z.; Visualization, C.Z.; Funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Science and Technology Major Project of China on Mobile Information Networks [2024ZD1300400] and Natural Science Foundation of China [92367102].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

This paper is an extended version of a paper [26] presented at the 2023 IEEE 23rd International Conference on Communication Technology (ICCT), Wuxi, China, 20–22 October 2023.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zheng, W.J.; Xu, H.; Wang, Z.; Tang, J.J.; Hu, J.B.; Zhu, C.X.; Yao, J.M. A Virtual Network Mapping Method Based on Power Service Delay Sensitivity and Service Reliability. In Proceedings of the 2021 13th International Conference on Wireless Communications and Signal Processing (WCSP), Changsha, China, 20–22 October 2021; pp. 1–5. [Google Scholar]
  2. Mei, C.L.; Xia, X.; Liu, J.Y.; Yang, H.Y. Load Balancing Oriented Deployment Policy for 5G Core Network Slices. In Proceedings of the 2020 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Paris, France, 27–29 October 2020; pp. 1–6. [Google Scholar]
  3. Yao, J.M.; Chen, D.Y.; Yu, Y.; Wang, W. RAN Slice Access Control Scheme Based on QoS and Maximum Network Utility. In Proceedings of the 2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Beijing, China, 3–5 October 2022; pp. 1853–1858. [Google Scholar]
  4. García-Morales, J.; Lucas-Estañ, M.C.; Gozalvez, J. Latency-Sensitive 5G RAN Slicing for Industry 4.0. IEEE Access 2019, 7, 143139–143159. [Google Scholar] [CrossRef]
  5. Zhao, Y.J.; Shen, J.; Qi, X.Y.; Wang, X.; Zhao, X.J.; Yao, J. A 5G Network Slice Resource Orchestration and Mapping Method based on Power Service. In Proceedings of the 2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 4–6 March 2022; pp. 1419–1422. [Google Scholar]
  6. Guan, W.Q.; Wen, X.M.; Wang, L.H.; Lu, Z.M.; Shen, Y.D. A Service-Oriented Deployment Policy of End-to-End Network Slicing Based on Complex Network Theory. IEEE Access 2018, 6, 19691–19701. [Google Scholar] [CrossRef]
  7. Gangopadhyay, B.; Santos, J.; Pedro, J.; Ukkonen, J.; Hannula, M.; Kemppainen, J.; Nieminen, J.; Ylisirnio, P.; Hallivuori, V.; Silvola, M.; et al. SDN-enabled Slicing in Disaggregated Multi-domain and Multilayer 5G Transport Networks. In Proceedings of the 2020 23rd International Symposium on Wireless Personal Multimedia Communications (WPMC), Okayama, Japan, 19–26 October 2020; pp. 1–6. [Google Scholar]
  8. Sen, N.; Franklin, A.A. Impact of Slice Granularity in Centralization Benefit of 5G Radio Access Network. In Proceedings of the 2020 6th IEEE Conference on Network Softwarization (NetSoft), Ghent, Belgium, 29 June–3 July 2020; pp. 15–21. [Google Scholar] [CrossRef]
  9. Wang, T.; Chen, S.; Zhu, Y.; Tang, A.; Wang, X. LinkSlice: Fine-Grained Network Slice Enforcement Based on Deep Reinforcement Learning. IEEE J. Sel. Areas Commun. 2022, 40, 2378–2394. [Google Scholar] [CrossRef]
  10. Boussard, M.; Sauze, N.L.; Papillon, S.; Peloso, P.; Varloot, R. Secure Application-Oriented Network Micro-Slicing. In Proceedings of the 2019 IEEE Conference on Network Softwarization (NetSoft), Paris, France, 24–28 June 2019; pp. 248–250. [Google Scholar]
  11. Li, H.; Kong, Z.; Chen, Y.; Wang, L.; Lu, Z.; Wen, X.; Jing, W.; Xiang, W. Slice-based service function chain embedding for end-to-end network slice deployment. IEEE Trans. Netw. Serv. Manag. 2023, 20, 3652–3672. [Google Scholar] [CrossRef]
  12. Wu, Z.X.; You, Y.Z.; Liu, C.C.; Chou, L.D. Machine learning based 5g network slicing management and classification. In Proceedings of the 2024 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Osaka, Japan, 19–22 February 2024; pp. 371–375. [Google Scholar]
  13. Hesselbach, X. Intelligent Network Slicing in the Multi-Access Edge Computing for 6G Networks. In Proceedings of the 2023 23rd International Conference on Transparent Optical Networks (ICTON), Bucharest, Romania, 2–6 July 2023; pp. 1–4. [Google Scholar]
  14. Wang, Y.F.; Dinavahi, V. Low-Latency Distance Protective Relay on FPGA. IEEE Trans. Smart Grid 2014, 5, 896–905. [Google Scholar] [CrossRef]
  15. Zhang, P.Y.; Pang, X.; Bi, Y.X.; Yao, H.P.; Pan, H.J.; Kumar, N. DSCD: Delay Sensitive Cross-Domain Virtual Network Embedding Algorithm. IEEE Trans. Netw. Sci. Eng. 2020, 7, 2913–2925. [Google Scholar] [CrossRef]
  16. Behnke, I.; Austad, H. Real-Time Performance of Industrial IoT Communication Technologies: A Review. IEEE Internet Things J. 2024, 11, 7399–7410. [Google Scholar] [CrossRef]
  17. Kim, H.G.; Jeong, S.Y.; Lee, D.Y.; Choi, H.; Yoo, J.H.; Hong, J.W.K. A Deep Learning Approach to VNF Resource Prediction using Correlation between VNFs. In Proceedings of the 2019 IEEE Conference on Network Softwarization (NetSoft), Paris, France, 24–28 June 2019; pp. 444–449. [Google Scholar]
  18. Xu, Z.Q.; Jin, C.; Yang, F.; Ding, Z.; Du, S.; Zhao, G.Q. Coordinated Resource Allocation with VNFs Precedence Constraints in Inter-Datacenter Networks over Elastic Optical Infrastructure. In Proceedings of the 2018 IEEE World Symposium on Communication Engineering (WSCE), Singapore, 28–30 December 2018; pp. 1–6. [Google Scholar]
  19. Liu, Y.M.; Zhang, C.C.; Li, Y.; Zhang, S.N.; Yang, H.H. Dependency Aware and Delay Guaranteed SFC Placement in Multi-domain Networks. In Proceedings of the 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security Companion (QRS-C), Chiang Mai, Thailand, 22–26 October 2023; pp. 491–498. [Google Scholar]
  20. Wang, Y.; Zhang, L.; Yu, P.; Chen, K.; Qiu, X.; Meng, L.; Kadoch, M.; Cheriet, M. Reliability-Oriented and Resource-Efficient Service Function Chain Construction and Backup. IEEE Trans. Netw. Serv. Manag. 2021, 18, 240–257. [Google Scholar] [CrossRef]
  21. Shah, H.A.; Zhao, L. Multiagent Deep-Reinforcement-Learning-Based Virtual Resource Allocation Through Network Function Virtualization in Internet of Things. IEEE Internet Things J. 2021, 8, 3410–3421. [Google Scholar] [CrossRef]
  22. Dong, T.; Zhuang, Z.; Qi, Q.; Wang, J.; Sun, H.; Yu, F.R.; Sun, T.; Zhou, C.; Liao, J. Intelligent Joint Network Slicing and Routing via GCN-Powered Multi-Task Deep Reinforcement Learning. IEEE Trans. Cogn. Commun. Netw. 2022, 8, 1269–1286. [Google Scholar] [CrossRef]
  23. Liu, Y.C.; Liu, J.Y.; Wang, C.; Yang, Q.H. Comprehensive 5G Core Network Slice State Prediction Based on Graph Neural Networks. In Proceedings of the ICC 2023—IEEE International Conference on Communications, Rome, Italy, 28 May–1 June 2023; pp. 6615–6620. [Google Scholar]
  24. Alves, E.J.J.; Boubendir, A.; Guillemin, F.; Sens, P. A Heuristically Assisted Deep Reinforcement Learning Approach for Network Slice Placement. IEEE Trans. Netw. Serv. Manag. 2022, 19, 4794–4806. [Google Scholar] [CrossRef]
  25. Lei, K.; Qin, M.; Bai, B.; Zhang, G.; Yang, M. GCN-GAN: A Non-linear Temporal Link Prediction Model for Weighted Dynamic Networks. In Proceedings of the IEEE INFOCOM 2019—IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019; pp. 388–396. [Google Scholar]
  26. Wu, C.; Zhu, X.; Zhu, H. A Multi-Granularity and Low-Latency End-to-End Slicing Algorithm Based on GCN for 6G Networks. In Proceedings of the 2023 IEEE 23rd International Conference on Communication Technology (ICCT), Wuxi, China, 20–22 October 2023. [Google Scholar]
Figure 1. Schematic of fine-grained low-latency network slicing for 6G networks based on GCN.
Figure 2. Diagram of network function dependency.
Figure 3. Diagram of the constructed SFCs.
Figure 4. Schematic diagram of the MDP.
Figure 5. Architecture diagram of the GCN-based low-latency service fine-grained network slicing algorithm.
Figure 6. Convergence speed graph.
Figure 7. Comparison of latency with various numbers of basic network nodes.
Figure 8. Comparison of end-to-end delay for service size 25.
Figure 9. Comparison of end-to-end delay for service size 75.
Table 1. Simulation parameter list.

Total storage resources of physical nodes: uniform distribution with mean 100 and variance 30
Total computing resources of physical nodes: uniform distribution with mean 150 and variance 40
Base station wireless channel bandwidth: 20 MHz
Noise power spectral density: −174 dBm/Hz
Number of VNF types: 5
Length of SFC: [5, 10]
Total number of deployed SFCs: [50, 80]
Number of VNFs in each SFC: 5

Share and Cite

MDPI and ACS Style

Ye, Y.; Zhang, C.; Wu, C.; Zhu, X. A Graph Convolutional Network-Based Fine-Grained Low-Latency Service Slicing Algorithm for 6G Networks. Sensors 2025, 25, 3139. https://doi.org/10.3390/s25103139

