Article

Network Traffic Prediction for Multiple Providers in Digital Twin-Assisted NFV-Enabled Network

School of Computer Science and Technology, Zhengzhou University of Light Industry, Zhengzhou 450001, China
*
Author to whom correspondence should be addressed.
Electronics 2025, 14(20), 4129; https://doi.org/10.3390/electronics14204129
Submission received: 6 September 2025 / Revised: 7 October 2025 / Accepted: 20 October 2025 / Published: 21 October 2025

Abstract

This manuscript investigates the network traffic prediction problem, aiming to predict network traffic on a network function virtualization (NFV)-enabled and digital twin (DT)-assisted physical network for network service providers and network resource providers. The problem faces several key challenges, such as data privacy and the distinct traffic variation patterns of multiple service function chain (SFC) requests. In view of this, we address the network traffic prediction problem by jointly considering these challenges. Specifically, we formulate the virtual network function (VNF) migration and SFC placement problems as an integer linear program (ILP) that aims to maximize acceptance revenue while minimizing network resource cost, energy consumption, and migration cost. We then define a Markov Decision Process (MDP) for the network traffic prediction problem and propose a model and algorithm to solve it. The simulation results demonstrate that our algorithms outperform the benchmark algorithms.

1. Introduction

Network function virtualization (NFV), as one of the core innovations in modern networking technologies, significantly enhances network flexibility and scalability by virtualizing functionalities of traditional hardware-based network devices and running them on general-purpose servers [1,2]. In practical network deployments, the unpredictability of user behaviors, the diversification of application requirements, and the dynamic fluctuations of network workloads pose substantial challenges to NFV networks [3]. In statically configured networks, resources are often over-provisioned to accommodate potential traffic spikes, resulting in resource wastage. To address these issues, NFV networks typically rely on dynamic resource management and scheduling strategies to flexibly adjust the deployment locations of virtual network functions (VNFs), the allocation of computational resources, and traffic routing paths, thereby optimizing the overall performance. However, in dynamic traffic environments, such real-time adjustments must balance multi-dimensional objectives, including migration costs, resource utilization efficiency, and operational expenditures.
On the other hand, digital twin technology has gradually emerged in various industries in recent years [4,5]. By creating a virtual replica of a physical entity or system, it enables the real-time monitoring, analysis, and optimization of the operation of physical systems [6]. Introducing digital twin technology into NFV networks allows for more flexible and efficient network management through virtualization, helping operators cope with complex network environments and increasing demands. In practical applications, digital twin technology typically requires large amounts of data for training and optimization. This data often contains sensitive information, including user privacy data, operators’ core business data, and network resource usage. Therefore, in the context of multi-party collaborative NFV networks, how to achieve efficient network management and optimization while ensuring data privacy and security has become an urgent issue to address.
Network service providers and network resource providers train models on their data locally and then upload the model updates (rather than the raw data) or operational results to the global model for consolidation. In this way, multiple organizations can collaborate effectively while preserving data privacy. Although no data needs to be shared or uploaded, the parameters of the neural network must still be transferred, and the number of parameters in a typical neural network is quite large. Therefore, an LSTM structure based on low-rank adaptation (LoRA) [7] is used here: only the low-rank matrices are transferred, rather than all the parameters of the entire neural network.
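To illustrate why LoRA shrinks the amount of transferred data, the sketch below applies a low-rank update to a single frozen weight matrix. The matrix sizes and rank are illustrative assumptions, not values from this manuscript, and a full LoRA-LSTM would apply the same idea to each gate's weight matrices.

```python
import numpy as np

# Hypothetical dimensions for one weight matrix; illustrative only.
d_out, d_in, rank = 256, 256, 8

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((d_out, d_in))  # pretrained weight, kept local
A = rng.standard_normal((rank, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, rank))                    # trainable low-rank factor

def forward(x):
    # LoRA: the effective weight is W + B @ A, but only A and B are
    # updated locally and exchanged between providers.
    return (W_frozen + B @ A) @ x

full_params = W_frozen.size          # what full parameter exchange would ship
lora_params = A.size + B.size        # what LoRA actually ships
print(lora_params / full_params)     # fraction of parameters transferred
```

With these sizes, only 4096 of 65,536 parameters (6.25%) cross the network per matrix, which is the communication saving the text refers to.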
In this article, we propose the utilization of LoRA and DDPG in a digital twin-assisted NFV network for network traffic prediction. Our specific contributions and innovations can be summarized as follows. (1) To tackle the issue of untimely VNF migration induced by dynamic changes in network traffic, we construct a system network model for multiple network service providers and network resource providers in a digital twin-assisted NFV network. This model allows all parties to train neural network models locally, protecting the private data of the network service providers and network resource providers. (2) We propose a mathematical model for multi-party providers in a digital twin-assisted NFV network, in which we define the constraints and optimization objectives aimed at maximizing network revenue. (3) We propose a prediction algorithm that integrates deep reinforcement learning and a LoRA-based LSTM. In addition, we present the Markov Decision Process (MDP) model for the network traffic prediction problem.
The remaining sections of this article are organized as follows. Section 2 provides an overview of related works. Section 3 defines the system network model, the mathematical definition, and the MDP definition of the problem. The solutions and algorithms are proposed in Section 4. The simulation results are presented in Section 5, and this manuscript is summarized in Section 6.

2. Related Works

In this section, we introduce some research on the application of digital twins in NFV networks.
A digital twin is a virtual model based on a physical entity, which reflects the state and behavior of a physical device or system by modeling and simulating its real-time data. Traditionally, digital twins have been widely used in industrial manufacturing, the IoT, and other fields. With the rapid development of virtualization technology in communication networks, digital twins have gradually entered the network field and play a crucial role, especially in the NFV architecture.
Calle-Heredia et al. [8] propose a new network architecture that combines digital twins and virtual networks to support the efficient operation of XR applications. The architecture uses digital twins to create a virtual network model that reflects the behavior and demands of XR applications in real time, so as to optimize resource allocation and management. Li et al. [9] present an innovative platform combining digital twins and NFV, demonstrating how digital twin technology can enhance the performance, reliability, and intelligence of virtual network architectures. The platform not only provides effective solutions for network failure prediction and recovery and resource optimization, but also offers new operation and management ideas for future networks such as 5G and 6G. Tang et al. [10] propose a digital twin-based VNF mapping and scheduling framework that uses SDN and NFV technologies to optimize network resource scheduling in the industrial IoT. With real-time monitoring, prediction, and scheduling performed by the digital twin, the system effectively improves the resource utilization of VNFs and ensures the efficiency and reliability of industrial IoT services, making it particularly suitable for industrial scenarios that require low latency and high reliability. Wang et al. [11] propose a service function chain scheduling framework that combines digital twins and multi-agent reinforcement learning to address resource optimization and service quality assurance in multi-domain computing power networks.
Through real-time monitoring and prediction with digital twin technology, together with cooperative optimization by multi-agent reinforcement learning, their system provides efficient and reliable service function chain scheduling in dynamic and complex network environments. Liu et al. [12] combine deep reinforcement learning and digital twins in a dynamic scheduling framework for network resource demand prediction and VNF migration optimization. The method effectively improves network resource utilization while ensuring the efficiency of VNF migration decisions and the stability of service quality, and experiments verify that the framework performs well in multi-domain network environments. Li et al. [13] propose an innovative framework combining digital twins and service function chaining (SFC) for efficient service scheduling and resource optimization in MEC environments. With the real-time monitoring and prediction of digital twins, combined with the dynamic scheduling capability of SFC, the framework provides high-quality services under variable network conditions and effectively optimizes resource utilization. Guo et al. [14] propose DTFL, a digital twin-assisted graph neural network approach for service function chain fault localization in NFV environments.
By monitoring and simulating the network state in real time through digital twin technology, combined with a graph neural network's processing of complex topology relations, DTFL efficiently and accurately localizes faults in service function chains; experiments show significant advantages in both localization accuracy and computational efficiency. Tang et al. [15] propose a VNF migration method based on digital twin technology and resource prediction, which significantly improves resource utilization and migration efficiency in IoT networks. Through real-time monitoring, prediction, and intelligent scheduling, the system realizes seamless VNF migration with guaranteed quality of service, effectively reducing resource waste and improving the dynamic adaptability of the network. Huo et al. [16] demonstrate that joint sparse Bayesian learning enables the effective reconstruction of hybrid near- and far-field structures under low signal-to-noise ratio conditions. These results substantiate the potential of incorporating embedded architectural paradigms, including digital twins, service function chains, and provider partitioning, to improve learning efficiency and system robustness in complex communication environments.
The research findings mentioned above serve as the foundation for our current study. In the context of digital twin-assisted NFV networks, where data is centrally stored, the issue of privacy protection for all parties becomes even more pronounced. To address this problem, we have reconstructed the network system model, formulated a mathematical model of the problem, and developed implementation algorithms.

3. System Model and Problem Formulation

3.1. System Model

3.1.1. Digital Twin-Assisted Network Architecture

This manuscript focuses on a software-defined networking (SDN) and NFV-based network, which contains access, edge, and core networks. Access and core networks may belong to one network resource provider's domain or span multiple network resource providers' domains. Each domain has a centralized SDN controller, which can obtain complete knowledge of the domain's network topology, resource capacity, resource demands, and so on [17]. The network architecture is described in Figure 1. It is assumed that the overall multi-domain network is centrally managed by a global orchestrator SDN controller, which obtains a global view of the multi-domain network and knowledge of multi-domain resource status by collecting information from each domain's centralized SDN controller [18,19]. The network edge has multiple types of end nodes, including hosts, data center servers, mobile phones, and other devices. The network service demands of these sites and devices are aggregated and submitted to various network service providers, who, in turn, make resource usage requests to network resource providers in the form of a service function chain (SFC). Network service providers use the resources of the network resource providers to deliver services to users; service providers may offer the same services or different ones. Network traffic flows are transmitted to different users' devices and data centers via the core network. In the whole physical network, physical nodes are assumed to be divided into five categories: (1) NFV-enabled nodes, (2) transport nodes, (3) digital twin nodes, (4) access nodes, and (5) end nodes. NFV-enabled nodes process and transmit the VNFs required by services, transport nodes can only transmit traffic flows, digital twin nodes collect and synchronize the data of VNFs, access nodes transmit traffic flows in the access network, and end nodes request or provide a service [10].
Due to privacy concerns, network service providers are unwilling to share information about network traffic, and network resource providers are likewise reluctant to share information about the usage of underlying network resources. Therefore, network service providers may have their own digital twin nodes, which collect dynamic network traffic information. At the same time, network resource providers also have their own digital twin nodes, which monitor VNFs in real time and prevent service failures caused by VNF operation failures. In this way, the digital twin node of each network service provider holds that provider's network traffic information, and the digital twin node of the network resource provider holds information about network resource usage. The network traffic of each SFC varies dynamically; provisioning according to peak resource demands minimizes the need for SFC reallocation (VNF migration) but results in the overuse of network resources, and even then resource reallocation cannot be completely avoided. To alleviate this issue, recent approaches suggest dividing the day into multiple time slices, predicting the network traffic peak within each time slice before it begins, and proactively reallocating resources. Considering the data privacy of each provider, a new framework can be applied in which each network service provider trains its own network traffic prediction model independently and shares only the local training results with the network resource provider. The global model calculates the costs and benefits of the underlying physical network based on mappings and migrations, which are used to train the global model. During the prediction phase, network traffic is predicted based on the physical network resources of the network resource provider. We utilize the SDN controller to manage network data flows [20].
It is assumed that the SDN controller can gather information on the VNFs through the network resource provider’s digital twin nodes [10]. SDN controllers collect information from core network nodes (transport nodes, NFV-enabled nodes, and the digital twin nodes) and access network nodes (access nodes), such as locations, resource usage, and SFC mappings, in order to improve the quality of service (QoS) of SFCs. This framework can physically isolate privacy information while enabling cooperation.
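The cooperation pattern described above can be sketched in a few lines. This is a minimal illustration, not the manuscript's algorithm: each provider is modeled with a one-parameter linear traffic predictor, the traffic traces are synthetic, and the global side aggregates shared updates by simple averaging (a FedAvg-style rule assumed here for concreteness).

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(global_w, traffic, lr=0.01, steps=50):
    """Train locally on private traffic and return only the model update."""
    w = global_w.copy()
    for _ in range(steps):
        # one-step-ahead linear predictor: traffic[t] ~ w * traffic[t-1]
        pred = w * traffic[:-1]
        grad = np.mean(2 * (pred - traffic[1:]) * traffic[:-1])
        w -= lr * grad
    return w - global_w          # the update is shared, never the raw data

global_w = np.array(0.0)
providers = [rng.random(100) for _ in range(3)]  # private traffic traces
for _ in range(20):                              # global aggregation rounds
    updates = [local_update(global_w, data) for data in providers]
    global_w = global_w + np.mean(updates)
print(float(global_w))
```

Only the scalar updates cross provider boundaries; the traces in `providers` stay local, which is the privacy property the framework relies on.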

3.1.2. Physical Network

The physical network is represented by an undirected graph $G = (V, E)$, where $V$ and $E$ stand for the node set and the link set, respectively. $V$ consists of the transport node set $V^{for}$, the NFV-enabled node set $V^{ser}$, the access node set $V^{acc}$, the end node set $V^{end}$, and the digital twin node set $V^{dt}$, i.e., $V^{for} \cup V^{ser} \cup V^{acc} \cup V^{end} \cup V^{dt} = V$ and $V^{for} \cap V^{ser} \cap V^{acc} \cap V^{end} \cap V^{dt} = \emptyset$. Transport nodes contain forwarding devices and only forward traffic flows to other nodes; NFV-enabled nodes are data centers (DCs) that can not only forward flows but also carry VNF instances; access nodes forward traffic flows between the core network and the edge network; end nodes are terminal nodes that belong to users or service providers; and digital twin nodes construct DTs of the physical network and cannot deploy VNFs. Digital twin nodes that belong to network resource providers are represented by $V^{resdt}$, and digital twin nodes that belong to network service providers are represented by $V^{serdt}$. Each NFV-enabled node is endowed with certain computing, processing, and storage resources, and each digital twin node is endowed with certain computing and storage resources. We use $c_n^{comp}$ to denote the computing capacity of the NFV-enabled node $v_n \in V^{ser}$, $c_k^{comp}$ to denote the computing capacity of the digital twin node $v_k \in V^{resdt}$, and $c_h^{comp}$ to denote the computing capacity of the digital twin node $v_h^{edge} \in V^{serdt}$. For an access node $v_a \in V^{acc}$ or a transport node $v_r \in V^{for}$, the computing capacity $c_a^{comp}$ or $c_r^{comp}$ is 0.
Each NFV-enabled node and digital twin node is endowed with a certain storage resource: we use $c_n^{stor}$ to denote the storage capacity of the NFV-enabled node $v_n \in V^{ser}$, $c_k^{stor}$ to denote the storage capacity of the digital twin node $v_k \in V^{resdt}$, and $c_h^{stor}$ to denote the storage capacity of the digital twin node $v_h^{edge} \in V^{serdt}$. Each NFV-enabled node is also endowed with a certain processing resource, and we use $c_n^{pro}$ to denote the processing capacity of the NFV-enabled node $v_n \in V^{ser}$. We use $N_n$, $N_k$, and $N_h$ to represent the neighbor node sets of the nodes $v_n \in V^{ser}$, $v_k \in V^{resdt}$, and $v_h^{edge} \in V^{serdt}$, respectively. Each link $e_{m,n} \in E$ ($e_{l,l'} \in E$) has a bandwidth capacity $c_{m,n}^{bw}$ ($c_{l,l'}^{bw}$) and a link propagation latency $d_{m,n}^{prop}$ ($d_{l,l'}^{prop}$).
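The node categories and capacities above can be captured in a small data structure. The following is a minimal sketch; the dictionary layout, node names, and capacity values are illustrative assumptions.

```python
# One node per category of V; capacity values are placeholders.
nodes = {
    "n1": {"kind": "ser", "comp": 100, "stor": 200, "pro": 50},  # NFV-enabled
    "r1": {"kind": "for", "comp": 0},                            # transport
    "a1": {"kind": "acc", "comp": 0},                            # access
    "k1": {"kind": "resdt", "comp": 80, "stor": 160},            # resource-provider DT
    "h1": {"kind": "serdt", "comp": 60, "stor": 120},            # service-provider DT
    "e1": {"kind": "end"},                                       # end node
}

# Undirected links keyed by a sorted node pair, with bandwidth capacity
# and propagation latency per link.
links = {
    tuple(sorted(("n1", "r1"))): {"bw": 1000, "prop": 0.002},
    tuple(sorted(("r1", "a1"))): {"bw": 1000, "prop": 0.001},
    tuple(sorted(("n1", "k1"))): {"bw": 500, "prop": 0.001},
}

def neighbors(v):
    """N_v: the neighbor set of node v, derived from the link set."""
    return {a if b == v else b for (a, b) in links if v in (a, b)}

print(neighbors("r1"))  # the transport node's neighbors
```

Deriving $N_v$ from the link set rather than storing it separately keeps the graph representation consistent when links are added or removed.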

3.1.3. Scenario

Different physical nodes may have different constraints on the allowable VNF types and the allowable number of VNF instances. Each physical node can instantiate multiple VNFs, and each VNF can instantiate multiple instances on a physical node. We use $b_{n,v}$ to indicate whether the VNF $f_v$ can be instantiated on the physical node $v_n$, and $n_{n,v}^{max}$ to denote the maximum number of instances of VNF $f_v$ that can be instantiated on node $v_n$. Each instance on a physical node consumes CPU resources, and the basic computing resource requirement of instance $f_{v,i}$ is denoted by $c_{v,i}^{comp}$. Each instance also consumes storage resources, and the basic storage resource requirement of instance $f_{v,i}$ is denoted by $c_{v,i}^{stor}$. Each instance instantiated on a physical node has a processing capacity, and the total processing capacity of the instances on a physical node cannot exceed the total processing capacity of the node. The total processing capacity of a physical node is denoted by $c_n^{pro}$, and the processing capacity of each instance on the node by $c_{v,i}^{pro}$. Additionally, some VNFs can be shared among multiple SFCs on the same physical node, while others cannot. Therefore, the set of VNF types $F$ consists of the shareable set $F^{sha}$ and the non-shareable set $F^{unsha}$: a VNF instance in $F^{sha}$ can be shared among multiple different SFCs, while a VNF instance in $F^{unsha}$ cannot be shared among different SFCs.
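The instantiation rules above (allowed types $b_{n,v}$, instance caps $n_{n,v}^{max}$, and shareable versus non-shareable VNFs) amount to a simple admission check per placement attempt. The sketch below is illustrative: the data and the helper `can_place` are hypothetical, not part of the manuscript's algorithms.

```python
# Illustrative instantiation rules for a single node "n1".
b = {"n1": {"fw": True, "nat": False}}   # b_{n,v}: allowed VNF types per node
n_max = {("n1", "fw"): 2}                # n^max_{n,v}: instance cap per type
SHAREABLE = {"fw"}                       # F^sha; all other types are F^unsha

def can_place(node, vnf, instances, sfcs_served):
    """Check whether one more SFC can use VNF `vnf` on `node`.

    `instances` is the count already running; `sfcs_served` is how many
    SFCs the existing instance(s) already serve.
    """
    if not b[node].get(vnf, False):
        return False                     # b_{n,v} = 0: type not allowed here
    if instances + 1 > n_max.get((node, vnf), 0):
        return False                     # would exceed n^max_{n,v}
    if vnf not in SHAREABLE and sfcs_served >= 1:
        return False                     # non-shareable VNF already taken
    return True

print(can_place("n1", "fw", instances=1, sfcs_served=3))   # shareable, under cap
print(can_place("n1", "nat", instances=0, sfcs_served=0))  # type not allowed
```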

3.1.4. Service Function Chain Request

There is a set of service function chain requests (SFC requests) $R$ in the physical network, and a set of network service providers $Q$. These SFCs belong to different network service providers. We assume that there are $|R|$ SFCs, and they belong to $|Q|$ network service providers. The SFC requests can be represented as $R = \{r_1, r_2, \dots, r_{|R|}\}$, and the network service providers can be represented as $Q = \{q_1, q_2, \dots, q_{|Q|}\}$. We use $b_{s,p}$ to indicate whether SFC request $r_s$ belongs to network service provider $q_p$. An individual SFC request $r_s$ can be represented by an ordered sequence of VNFs, a delay requirement, a packet size, and resource requirements, i.e., $r_s = \langle f_s, d_s, l_s, c_s \rangle$. In the SFC request $r_s$, $f_s = \{f_{s,1}, f_{s,2}, \dots, f_{s,k}, \dots, f_{s,|F_s|}\}$ represents the VNFs of the SFC request, where $f_{s,k}$ represents the $k$th VNF of the SFC and $|F_s|$ represents the number of VNFs in the SFC request $r_s$. $d_s$ represents the end-to-end latency requirement of the SFC request $r_s$, and $l_s$ represents its packet size. $c_s = \{c_s^{bw}, c_{s,1}^{node}, c_{s,2}^{node}, \dots, c_{s,v}^{node}, \dots, c_{s,|F_s|}^{node}\}$ represents the resource requirements of the SFC request $r_s$, where $c_s^{bw}$ is the bandwidth requirement, $c_{s,v}^{node} = \{c_{s,v}^{comp}, c_{s,v}^{stor}, c_{s,v}^{pro}\}$ represents the resource requirements of the VNF $f_{s,v}$, and $c_{s,v}^{comp}$, $c_{s,v}^{stor}$, and $c_{s,v}^{pro}$ are the computing, storage, and processing requirements of VNF $f_{s,v}$, respectively.
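The request tuple $r_s = \langle f_s, d_s, l_s, c_s \rangle$ maps directly onto a record type. The dataclass sketch below uses hypothetical field names and example values for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class VNFDemand:
    comp: float   # c^comp_{s,v}: computing requirement
    stor: float   # c^stor_{s,v}: storage requirement
    pro: float    # c^pro_{s,v}: processing requirement

@dataclass
class SFCRequest:
    vnfs: list          # ordered f_{s,1}, ..., f_{s,|F_s|}
    delay: float        # end-to-end latency requirement d_s
    packet_size: float  # l_s
    bw: float           # bandwidth requirement c^bw_s
    node_demands: list = field(default_factory=list)  # one VNFDemand per VNF

# Example request with |F_s| = 3 VNFs; all numbers are placeholders.
r1 = SFCRequest(
    vnfs=["fw", "nat", "lb"],
    delay=0.05,
    packet_size=1500.0,
    bw=100.0,
    node_demands=[VNFDemand(4, 8, 2), VNFDemand(2, 4, 1), VNFDemand(3, 6, 2)],
)
print(len(r1.vnfs))  # |F_s|
```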

3.1.5. Digital Twin

Digital twin nodes fall into two categories: one belongs to network service providers, to collect user requests, service network traffic, and other information; the other belongs to network resource providers, to collect the mapping and migration status and the resource usage of the underlying physical network. Digital twin nodes that belong to network service providers are connected to end nodes, while digital twin nodes that belong to network resource providers are connected to transport nodes, NFV-enabled nodes, and access nodes. We use $v_k$ to represent a digital twin node that belongs to a network resource provider. We use $b_{sv,nk}$ to indicate whether VNF $f_{s,v}$ is mapped to the node $v_n$ associated with digital twin node $v_k$, and $b'_{sv,nk}$ to represent the same relationship in the previous time slice. The packet for collecting or updating the data of VNF $f_{s,v}$ between the NFV-enabled node $v_n$ and the digital twin node $v_k$ traverses a physical path. If physical link $e_{l,l'}$ belongs to this path, we set $b_{sv,nk}^{l,l'} = 1$; otherwise, we set $b_{sv,nk}^{l,l'} = 0$. Constructing the DT of nodes (such as NFV-enabled nodes, access nodes, and transport nodes) consumes CPU, storage, and bandwidth resources. $c_{sv,nk}^{comp}$ represents the CPU resources to construct the DT of VNF $f_{s,v}$ in the digital twin node $v_k$ for the NFV-enabled node $v_n$, $c_{r,k}^{comp}$ represents the CPU resources to construct the DT in $v_k$ for the transport node $v_r$, and $c_{a,k}^{comp}$ represents the CPU resources to construct the DT in $v_k$ for the access node $v_a$.
$c_{sv,nk}^{stor}$ represents the storage resources to construct and maintain the DT of VNF $f_{s,v}$ in the digital twin node $v_k$ for the NFV-enabled node $v_n$, $c_{a,k}^{stor}$ the storage resources for the access node $v_a$, and $c_{r,k}^{stor}$ the storage resources for the transport node $v_r$. $c_{sv,nk}^{bw}$ represents the bandwidth resources to transmit the DT data of VNF $f_{s,v}$ between the physical node $v_n$ and the digital twin node $v_k$, $c_{a,k}^{bw}$ the bandwidth resources between the access node $v_a$ and $v_k$, and $c_{r,k}^{bw}$ the bandwidth resources between the transport node $v_r$ and $v_k$. $v_h^{edge}$ represents a digital twin node that belongs to a network service provider, and $b_{h,p}$ indicates whether the digital twin node $v_h^{edge}$ belongs to the network service provider $q_p$. Constructing the DT of end nodes (such as user nodes, service nodes, and IoT devices) also consumes CPU, storage, and bandwidth resources: $c_{e,h}^{comp}$ represents the CPU resources to construct the DT in the digital twin node $v_h^{edge}$ for the end node $v_e$, $c_{e,h}^{stor}$ the storage resources to construct and maintain that DT, and $c_{e,h}^{bw}$ the bandwidth resources to transmit the DT data between the end node $v_e$ and the digital twin node $v_h^{edge}$.

3.1.6. Problem Objectives

Given the physical network and the SFC requests, the problem is to predict network traffic for migrating SFC requests while achieving the following goals.
(1) Maximize the Acceptance Ratio of SFC Requests: A higher acceptance ratio of SFC requests brings higher revenue from network service providers.
(2) Minimize the Cost of Migration: Under the premise of maximizing the acceptance ratio of SFC requests, minimize the migration cost, mainly the latency of migration.
(3) Minimize Energy Consumption: Under the premise of maximizing the acceptance ratio of SFC requests, minimize the energy consumption, mainly including the energy consumption of NFV-enabled nodes.

3.2. Problem Formulation

In this section, we mathematically formulate the problem.

3.2.1. Decision Variables

We use a binary decision variable $x_{s,v}$ to indicate whether VNF $f_{s,v}$ is migrated.
$$x_{s,v} = \begin{cases} 1, & \text{if VNF } f_{s,v} \in F_s \text{ is migrated}, \\ 0, & \text{otherwise}. \end{cases}$$
We use a binary decision variable $y_{s,v,n}$ to indicate whether VNF $f_{s,v}$ is embedded on the physical node $v_n$.
$$y_{s,v,n} = \begin{cases} 1, & \text{if VNF } f_{s,v} \text{ is embedded on node } v_n, \\ 0, & \text{otherwise}. \end{cases}$$
We use a binary decision variable $z_{s,v,l,l'}$ to indicate whether the virtual link (VL) $l_{s,\theta_{s,v}}$ is mapped to the physical link $e_{l,l'}$.
$$z_{s,v,l,l'} = \begin{cases} 1, & \text{if VL } l_{s,\theta_{s,v}} \text{ is mapped to physical link } e_{l,l'}, \\ 0, & \text{otherwise}. \end{cases}$$

3.2.2. Constraints

Equation (4) guarantees that each VNF of the accepted SFC request is processed by the NFV-enabled node only once, and it can only run on the NFV-enabled nodes that support the VNF.
$$\sum_{v_n \in V^{ser}} y_{s,v,n} \cdot b_{n,v} = 1, \quad \forall r_s \in R, \ \forall f_v \in F_s.$$
Equations (5) and (6) state that each accepted SFC request begins from the start node and ends at the destination node.
$$y_{s,\theta_{s,0},n_s^{src}} = 1, \quad \forall r_s \in R.$$
$$y_{s,\theta_{s,|F_s|+1},n_s^{dst}} = 1, \quad \forall r_s \in R.$$
Equation (7) guarantees that a digital twin node that belongs to a network service provider can only be associated with one network service provider.
$$\sum_{q_p \in Q} b_{h,p} = 1, \quad \forall v_h^{edge} \in V^{serdt}.$$
Equation (8) guarantees that the flow of each SFC request cannot split.
$$\sum_{v_m \in N_n} z_{s,v,nm} + \sum_{v_m \in N_n} z_{s,v,mn} \le 1, \quad \forall r_s \in R, \ \forall f_{s,v} \in F_s, \ \forall v_n \in V^{ser}, \ \forall e_{m,n} \in E, \ m < n.$$
Equation (9) guarantees the flow conservation of each accepted SFC request at each node.
$$\sum_{v_m \in N_n} z_{s,v,nm} - \sum_{v_m \in N_n} z_{s,v,mn} = y_{s,v,n} - y_{s,\theta_{s,(\theta_{s,v}-1)},n}, \quad \forall r_s \in R, \ \forall f_{s,v} \in F_s, \ \forall v_n \in V.$$
Here, $\theta_{s,v}$ represents the location sequence number of VNF $f_{s,v}$, and $\theta_{s,q}$ represents the $q$th VNF of SFC request $r_s$; thus, $\theta_{s,(\theta_{s,v}-1)}$ denotes the VNF preceding $f_{s,v}$.
Equation (10) guarantees that the processing capacity of each sharable VNF is not violated.
$$\sum_{r_s \in R} b_{s,v} \cdot b_{n,v} \cdot y_{sv,n}^{i} \cdot c_{s,v}^{pro} \le c_{v,i}^{pro}, \quad \forall v_n \in V^{ser}, \ \forall f_v \in F^{sha}, \ \forall f_{v,i} \in I_{n,v}.$$
Equation (11) guarantees that a non-shareable VNF can be instantiated on each physical node for only one SFC.
$$\sum_{r_s \in R} b_{s,v} \cdot b_{n,v} = 1, \quad \forall f_v \in F^{unsha}, \ \forall v_n \in V^{ser}.$$
Equation (12) guarantees that the processing capacity of each non-sharable VNF is not violated.
$$b_{s,v} \cdot b_{n,v} \cdot y_{sv,n}^{i} \cdot c_{s,v}^{pro} \le c_{v,i}^{pro}, \quad \forall v_n \in V^{ser}, \ \forall f_v \in F^{unsha}, \ \forall r_s \in R, \ \forall f_{v,i} \in I_{n,v}.$$
Equation (13) ensures that the delay constraint of each SFC flow is not violated.
$$\sum_{\theta_{s,v}=0}^{|F_s|} \sum_{e_{m,n} \in E} \left( \frac{l_s}{c_s^{bw}} + d_{m,n}^{prop} \right) \cdot z_{s,v,mn} + \sum_{f_{s,v} \in F_s} \sum_{v_n \in V^{ser}} \left( \frac{1}{c_{s,v}^{pro}} + \frac{1}{c_{s,v}^{pro} - \lambda_s} \right) \cdot y_{s,v,n} \le d_s^{max}, \quad \forall r_s \in R.$$
Equation (14) ensures that the bandwidth capacity of each link is not violated when not registering at the digital twin node.
$$\sum_{r_s \in R} \sum_{f_v \in F_s} \left( z_{s,v,ll'} + z_{s,v,l'l} \right) \cdot c_s^{bw} \le c_{l,l'}^{bw}, \quad \forall e_{l,l'} \in E, \ l < l'.$$
Equation (15) ensures that the bandwidth capacity of each link is not violated when registering at the digital twin node.
$$\sum_{r_s \in R} \sum_{f_v \in F_s} \left( z_{s,v,ll'} + z_{s,v,l'l} \right) \cdot c_s^{bw} + \sum_{v_k \in V^{resdt}} \sum_{v_n \in T_k} \sum_{r_s \in R} \sum_{f_v \in F_s} \left( b_{sv,nk}^{ll'} + b_{sv,nk}^{l'l} \right) \cdot c_{sv,nk}^{bw} \le c_{l,l'}^{bw}, \quad \forall e_{l,l'} \in E, \ l < l'.$$
For brevity, the registration traffic of access nodes, end nodes, and transport nodes is omitted here.
Equation (16) ensures that the processing capacity of each NFV-enabled node is not violated.
$$\sum_{f_v \in F} b_{n,v} \cdot n_{n,v} \cdot c_{v,i}^{pro} \le c_n^{pro}, \quad \forall v_n \in V^{ser}, \ n_{n,v} \le n_{n,v}^{max}.$$
Equation (17) states that each VNF can only be associated with one digital twin node.
$$\sum_{v_k \in V^{resdt}} b_{sv,nk} = \sum_{v_k \in V^{resdt}} b_{s,v} \cdot b_{n,v} \cdot y_{s,v,n} \cdot b_{n,k} = 1, \quad \forall v_n \in V^{ser}, \ \forall r_s \in R, \ \forall f_v \in F.$$
Equation (18) guarantees that the flow of the transmitting packet of the DT of VNF f s , v on the NFV-enabled node v n to the digital twin node v k that belongs to the network resource provider cannot split.
$$\sum_{v_{l'} \in N_l} b_{sv,nk}^{ll'} + \sum_{v_{l'} \in N_l} b_{sv,nk}^{l'l} \le 1, \quad \forall r_s \in R, \ \forall f_v \in F_s, \ \forall v_n \in V^{ser}, \ \forall v_k \in V^{resdt}, \ \forall e_{l,l'} \in E, \ l < l'.$$
Equation (19) guarantees that the flow of the transmitting packet of the DT of switch information on the transport node v r to the digital twin node v k that belongs to the network resource provider cannot split.
$$\sum_{v_{l'} \in N_l} b_{r,k}^{ll'} + \sum_{v_{l'} \in N_l} b_{r,k}^{l'l} \le 1, \quad \forall v_r \in V^{acc} \cup V^{for}, \ \forall v_k \in V^{resdt}, \ \forall e_{l,l'} \in E, \ l < l'.$$
Equation (20) guarantees that the flow of the transmitting packet of the DT of service information on the end node v e to the digital twin node v h e d g e that belongs to the network service provider cannot split.
$$\sum_{v_{l'} \in N_l} b_{e,h}^{ll'} + \sum_{v_{l'} \in N_l} b_{e,h}^{l'l} \le 1, \quad \forall v_e \in V^{end}, \ \forall v_h^{edge} \in V^{serdt}, \ \forall e_{l,l'} \in E, \ l < l'.$$
Equation (21) guarantees that the computing capacity of each NFV-enabled node is not violated.
$$c_n^{comp,used} = \sum_{r_s \in R} \sum_{f_v \in F_s} y_{s,v,n} \cdot c_{s,v}^{comp} + \sum_{f_v \in F} b_{n,v} \cdot n_{n,v} \cdot c_{v,i}^{comp} \le c_n^{comp}, \quad \forall v_n \in V^{ser}, \ n_{n,v} \le n_{n,v}^{max}.$$
Equation (22) guarantees the computing capacity of each digital twin node that belongs to the network resource provider is not violated.
$$\sum_{v_n \in T_k} \sum_{r_s \in R} \sum_{f_v \in F_s} b_{sv,nk} \cdot c_{sv,nk}^{comp} + \sum_{v_r \in T_k} b_{r,k} \cdot c_{r,k}^{comp} + \sum_{v_a \in T_k} b_{a,k} \cdot c_{a,k}^{comp} \le c_k^{comp}, \quad \forall v_k \in V^{resdt}.$$
Equation (23) guarantees the computing capacity of each digital twin node that belongs to the network service provider is not violated.
$$\sum_{v_e \in T_h} b_{e,h} \cdot c_{e,h}^{comp} \le c_h^{comp}, \quad \forall v_h^{edge} \in V_{serdt}.$$
Equation (24) guarantees that the storage capacity of each NFV-enabled node is not violated.
$$\sum_{r_s \in R} \sum_{f_v \in F_s} y_{s,v,n} \cdot c_{s,v}^{stor} + \sum_{f_v \in F} b_{n,v} \cdot n_{n,v} \cdot c_{v,i}^{stor} \le c_n^{stor}, \quad \forall v_n \in V_{ser},\ n_{n,v} \le n_{n,v}^{max}.$$
Equation (25) guarantees that the storage capacity of each digital twin node that belongs to a network resource provider is not violated.
$$\sum_{v_n \in T_k} \sum_{r_s \in R} \sum_{f_v \in F_s} y_{s,v,n} \cdot b_{n,k} \cdot c_{s_v,n}^{k,stor} + \sum_{v_r \in T_k} b_{r,k} \cdot c_{r,k}^{stor} + \sum_{v_a \in T_k} b_{a,k} \cdot c_{a,k}^{stor} \le c_k^{stor}, \quad \forall v_k \in V_{resdt}.$$
Equation (26) guarantees that the storage capacity of each digital twin node that belongs to a network service provider is not violated.
$$\sum_{v_e \in T_h} b_{e,h} \cdot c_{e,h}^{stor} \le c_h^{stor}, \quad \forall v_h \in V_{serdt}.$$
The above constraints are summarized in Table 1, where bold italic font is used for titles and the remaining entries are values.

3.2.3. Revenue and Cost Structure

(1)
Revenue of SFC Request Acceptance
We assume that the network resource providers charge fees according to the resource requirement of each SFC request [10]. The revenue is calculated using Equation (27).
$$r^{accept} = \eta_1 \cdot \sum_{r_s \in R} \sum_{f_v \in F_s} c_{s,v}^{comp} + \eta_2 \cdot \sum_{r_s \in R} \sum_{f_v \in F_s} c_{s,v}^{stor} + \eta_3 \cdot \sum_{r_s \in R} c_s^{bw} \cdot (|F_s| + 1).$$
where $\eta_1$, $\eta_2$, and $\eta_3$ are the unit resource revenues.
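As a concrete illustration, the acceptance revenue of Equation (27) can be sketched in a few lines of Python. The request encoding (`comp`, `stor`, `bw` fields) is a hypothetical data layout chosen for readability, not part of the paper's model.

```python
def acceptance_revenue(requests, eta1, eta2, eta3):
    """Equation (27): revenue charged for the resources of accepted SFC requests.

    requests: list of dicts with per-VNF 'comp' and 'stor' lists and a 'bw'
    demand per virtual link; an SFC with |F_s| VNFs has |F_s| + 1 virtual links.
    """
    comp = sum(sum(r["comp"]) for r in requests)            # total compute demand
    stor = sum(sum(r["stor"]) for r in requests)            # total storage demand
    bw = sum(r["bw"] * (len(r["comp"]) + 1) for r in requests)  # bandwidth over |F_s|+1 links
    return eta1 * comp + eta2 * stor + eta3 * bw

# One request with two VNFs: comp = 3, stor = 2, bw = 10 * (2 + 1) = 30.
demo = [{"comp": [1, 2], "stor": [1, 1], "bw": 10}]
print(acceptance_revenue(demo, 1.0, 1.0, 1.0))  # 35.0
```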
(2)
Cost of Migration
The migration delay of the VNF includes propagation latency and transmission latency, and the transmission delay of the VNF information to the digital twin node likewise includes propagation latency and transmission latency [10]. These four types of delay are analyzed below.
The propagation latency is the delay incurred while a packet propagates through a medium, and it depends mainly on the propagation distance and the propagation speed. Let $c^{paga}$ denote the propagation speed and $d_{l,l'}$ the distance of physical link $e_{l,l'}$. The propagation latency from NFV-enabled node $v_n$ to NFV-enabled node $v_{n'}$ is defined as follows.
$$d_{s_v,nn'}^{paga} = \sum_{e_{l,l'} \in E} z_{s,v,ll'} \cdot d_{l,l'}^{paga} = \sum_{e_{l,l'} \in E} z_{s,v,ll'} \cdot \frac{d_{l,l'}}{c^{paga}}.$$
The propagation latency from the NFV-enabled node $v_n$ to the digital twin node $v_k$ is defined as follows.
$$d_{s_v,n}^{k,paga} = \sum_{e_{l,l'} \in E} b_{s_v,n}^{k,ll'} \cdot d_{l,l'}^{paga} = \sum_{e_{l,l'} \in E} b_{s_v,n}^{k,ll'} \cdot \frac{d_{l,l'}}{c^{paga}}.$$
Let $c^{mig,bw}$ represent the transmission bandwidth for migrating the VNF. The transmission latency $d_{s_v,nn'}^{tran}$ from $v_n$ to $v_{n'}$ is defined as follows.
$$d_{s_v,nn'}^{tran} = \frac{c_{s,v}^{stor}}{c^{mig,bw}} \cdot \sum_{e_{l,l'} \in E} z_{s,v,ll'}.$$
Let $c_{s_v,n}^{k,bw}$ represent the transmission bandwidth for transmitting the VNF information between the NFV-enabled node $v_n$ and the digital twin node $v_k$. The transmission latency $d_{s_v,n}^{k,tran}$ from the NFV-enabled node $v_n$ to the digital twin node $v_k$ is defined as follows.
$$d_{s_v,n}^{k,tran} = \frac{c_{s_v,n}^{k,stor}}{c_{s_v,n}^{k,bw}} \cdot \sum_{e_{l,l'} \in E} b_{s_v,n}^{k,ll'}.$$
The migration latency of each VNF can be expressed as follows.
$$d_{s,v}^{mig} = d_{s_v,nn'}^{tran} + d_{s_v,nn'}^{paga} + d_{s_v,n}^{k,tran} + d_{s_v,n}^{k,paga}.$$
Define the migration latency of each SFC as follows.
$$c_s^{mig} = \sum_{f_v \in F_s} x_{s,v} \cdot d_{s,v}^{mig}.$$
Define the migration cost at the time slice alternation as follows.
$$c^{migrate} = \omega \cdot \sum_{r_s \in R} c_s^{mig}.$$
where $\omega$ is the unit cost of migration latency.
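Equations (28)–(34) can be sketched as follows. The path representation (a list of per-link distances) and all numeric values are hypothetical, chosen only to show how the four delay components combine into the per-VNF migration latency of Equation (32).

```python
def propagation_latency(path_distances, c_paga):
    # Equations (28)/(29): sum of per-link distance over the propagation speed.
    return sum(d / c_paga for d in path_distances)

def transmission_latency(size, bw, hop_count):
    # Equations (30)/(31): transfer time size/bw, incurred once per traversed link.
    return size / bw * hop_count

def vnf_migration_latency(mig_path, dt_path, stor, c_mig_bw, dt_stor, dt_bw, c_paga):
    # Equation (32): migrate the VNF state to the new node, then sync its
    # digital twin on the resource provider's DT node.
    return (transmission_latency(stor, c_mig_bw, len(mig_path))
            + propagation_latency(mig_path, c_paga)
            + transmission_latency(dt_stor, dt_bw, len(dt_path))
            + propagation_latency(dt_path, c_paga))

# One-hop migration path and one-hop DT path with toy numbers:
# 10/5*1 + 2/2 + 6/3*1 + 4/2 = 2 + 1 + 2 + 2 = 7.0
print(vnf_migration_latency([2.0], [4.0], 10.0, 5.0, 6.0, 3.0, 2.0))  # 7.0
```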
(3)
Cost of Placement
A greater reward is obtained when the SFC delay and the construction delay of the SFC twin information are shorter. We calculate the placement cost according to the end-to-end latency of the SFC and the construction latency of the SFC twin information in the digital twin nodes that belong to network resource providers.
The end-to-end latency of SFC is analyzed as follows:
$$d_s = \sum_{\theta_{s,v}=0}^{|F_s|} \sum_{e_{m,n} \in E} \left( \frac{l_{s,v}}{c_s^{bw}} + d_{m,n}^{paga} \right) z_{s,v,mn} + \sum_{f_{s,v} \in F_s} \sum_{v_n \in V_{ser}} \left( \frac{1}{c_{s,v}^{pro}} + \frac{1}{c_{s,v}^{pro} - \lambda_s} \right) y_{s,v,n}.$$
The construction delay of VNF twin information includes the construction delays of NFV-enabled nodes, accessing nodes, and transport nodes in the digital twin nodes. Since the twin information of accessing nodes and transport nodes is relatively small, we will only analyze the construction delay of VNF twin information for NFV-enabled nodes here.
The propagation latency from the NFV-enabled node $v_n$ to the digital twin node $v_k$ is defined as follows.
$$d_{s_v,n}^{k,paga} = \sum_{e_{l,l'} \in E} b_{s_v,n}^{k,ll'} \cdot d_{l,l'}^{paga} = \sum_{e_{l,l'} \in E} b_{s_v,n}^{k,ll'} \cdot \frac{d_{l,l'}}{c^{paga}}.$$
The transmission latency from the NFV-enabled node $v_n$ to the digital twin node $v_k$ is defined as follows.
$$d_{s_v,n}^{k,tran} = \frac{c_{s_v,n}^{k,stor}}{c_{s_v,n}^{k,bw}} \cdot \sum_{e_{l,l'} \in E} b_{s_v,n}^{k,ll'}.$$
Define the placement cost of all SFC requests as follows.
$$c^{place} = \mu \cdot \left( \sum_{r_s \in R} x_s d_s + \sum_{r_s \in R} \sum_{f_v \in r_s} \left( d_{s_v,n}^{k,tran} + d_{s_v,n}^{k,paga} \right) \right).$$
where $\mu$ is the unit cost of placement latency.
(4)
Cost of Operation
We define the power consumption of physical nodes as follows [21,22]:
$$e_n = \varepsilon_0 + (\varepsilon_1 - \varepsilon_0) \cdot \frac{c_n^{comp,used}}{c_n^{comp}}.$$
where $\varepsilon_0$ is the basic energy consumption per unit of the NFV-enabled node, and $\varepsilon_1$ is the maximum energy consumption per unit of the NFV-enabled node.
Define the operation cost between two migrations as follows.
$$c^{operate} = \gamma \cdot l_t^{mig} \cdot \sum_{v_n \in V_{ser}} e_n.$$
where $\gamma$ is the unit operation cost of energy consumption and $l_t^{mig}$ is the time length between two migrations.
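A minimal sketch of the linear power model in Equation (39) and the operation cost in Equation (40). The per-node tuple layout (idle power, peak power, used compute, compute capacity) is a hypothetical encoding for illustration.

```python
def node_power(eps0, eps1, used_comp, cap_comp):
    # Equation (39): power scales linearly from idle (eps0) at zero load
    # to peak (eps1) at full compute utilization.
    return eps0 + (eps1 - eps0) * used_comp / cap_comp

def operation_cost(gamma, lt_mig, nodes):
    # Equation (40): unit cost x interval between migrations x total server power.
    return gamma * lt_mig * sum(node_power(e0, e1, u, c) for e0, e1, u, c in nodes)

# A half-loaded node draws the midpoint power: 100 + (200-100)*0.5 = 150.
print(node_power(100.0, 200.0, 50.0, 100.0))  # 150.0
```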
(5)
Punishment For Denying SFC Requests
The punishment is computed according to the resource requirements of each denied SFC request, as follows.
$$r^{punish} = \mu_1 \cdot \sum_{r_s \in R} \sum_{f_v \in F_s} c_{s,v}^{comp} + \mu_2 \cdot \sum_{r_s \in R} \sum_{f_v \in F_s} c_{s,v}^{stor} + \mu_3 \cdot \sum_{r_s \in R} c_s^{bw} \cdot (|F_s| + 1).$$
where $\mu_1$, $\mu_2$, and $\mu_3$ are the unit resource punishments.

3.2.4. Optimization Objectives

(1)
Service Function Chain Deployment
In our SFC deployment (SFCD) model, the optimization objective is to minimize the operation cost and maximize the acceptance ratio. Overall, the SFCD problem can be formulated as the following integer linear programming (ILP) problem.
$$\text{maximize:}\ r^{deploy} = r^{accept} - c^{operate} - c^{place} - r^{punish}, \quad \text{subject to:}\ (4)\text{--}(26).$$
(2)
Service Function Chain Migration
We formulate the SFC migration (SFCM) problem as the following integer linear programming (ILP) problem, which aims to maximize the request acceptance ratio and minimize the migration and operation costs.
$$\text{maximize:}\ r^{mig} = r^{accept} - c^{operate} - c^{migrate} - r^{punish}, \quad \text{subject to:}\ (4)\text{--}(26).$$
Due to the NP-hardness of these problems, they cannot be solved by exhaustive search within a reasonable time [23]. We therefore use greedy algorithms to solve them.

3.3. Markov Decision Process

The knowledge layer is the core of the framework, where the DRL module of network resource providers solves the whole network traffic prediction problem. In this section, the network traffic prediction problem is modeled as a Markov Decision Process.
(1) States: The states include the state of the physical network and the state of the network traffic. The state of the physical network is stored in a network status database of the store collector that belongs to the network resource provider, and the state of the complete network traffic is stored in a history database of the store collector that belongs to the network service provider. They are separately described below.
State of Physical Network: We define this state as a composite of the nodes, instances, and links where SFC requests are embedded, together with the available resources of those nodes, instances, and links. This state includes seven kinds of record matrices: the VNF instance number matrix $N = [n_{n,v}]_{|F| \times |V_{ser}|}$, the VNF instance placement matrix $P^{vnf} = [p_{n,v,i}^{vnf}]_{|V_{ser}| \times |F| \times n_{n,v}^{max}}$, the virtual link mapping path matrix $P^{vlink} = [p_{s,\theta_{s,v}}^{vlink}]_{|R| \times (V+1)}$, the residual processing capacity matrix $C^{repro} = [c_{n,v,i}^{repro}]_{|V_{ser}| \times |F| \times n_{n,v}^{max}}$, the residual CPU capacity vector $\mathbf{c}^{recomp} = [c_1^{recomp}, c_2^{recomp}, \ldots, c_n^{recomp}, \ldots, c_{|V_{ser}|}^{recomp}]_{1 \times |V_{ser}|}$, the residual storage capacity vector $\mathbf{c}^{restor} = [c_1^{restor}, c_2^{restor}, \ldots, c_n^{restor}, \ldots, c_{|V_{ser}|}^{restor}]_{1 \times |V_{ser}|}$, and the symmetric residual bandwidth capacity matrix $C^{rebw} = [c_{m,n}^{rebw}]_{|V| \times |V|}$. In $N$, each element $n_{n,v}$ is a single value indicating the current number of instances of VNF $f_v$ ($f_v \in F$) in the NFV-enabled node $v_n \in V_{ser}$. In $P^{vnf}$, if $f_v \in F_{unsha}$, each element $p_{n,v,i}^{vnf}$ is a single value denoting the SFC request currently deployed on instance $f_{v,i} \in I_{n,v}$ of VNF $f_v$. If $f_v \in F_{sha}$, each element $p_{n,v,i}^{vnf} = [p_{n,v,i,1}^{vnf}, p_{n,v,i,2}^{vnf}, \ldots, p_{n,v,i,s}^{vnf}, \ldots, p_{n,v,i,|R|}^{vnf}]_{1 \times |R|}$ is an $|R|$-dimensional vector, where $p_{n,v,i,s}^{vnf} = 1$ indicates that the VNF $f_v$ ($f_v \in F_{sha}$) of SFC $r_s$ ($r_s \in R$) is deployed on instance $f_{v,i} \in I_{n,v}$; otherwise, $p_{n,v,i,s}^{vnf} = 0$. In $P^{vlink}$, each element $p_{s,\theta_{s,v}}^{vlink}$ is a physical path; if VNF $f_{s,\theta_{s,v}-1}$ and VNF $f_{s,\theta_{s,v}}$ are embedded in the same physical node, then $p_{s,\theta_{s,v}}^{vlink}$ is null.
In $C^{repro}$, each element $c_{n,v,i}^{repro}$ is a single value denoting the current residual processing capacity of instance $f_{v,i} \in I_{n,v}$ of VNF $f_v$ ($f_v \in F_{unsha}$) in NFV-enabled node $v_n \in V_{ser}$. In $\mathbf{c}^{recomp}$, each element $c_n^{recomp}$ is the current residual CPU computing capacity of the NFV-enabled node $v_n \in V_{ser}$. In $\mathbf{c}^{restor}$, each element $c_n^{restor}$ is the current residual storage capacity of the NFV-enabled node $v_n \in V_{ser}$. In $C^{rebw}$, each element $c_{m,n}^{rebw}$ is the current residual bandwidth capacity of the physical link $e_{m,n} \in E$; if link $e_{m,n}$ does not exist, $c_{m,n}^{rebw} = 0$.
$$S_t^{phynet} = \left\{ N, P^{vnf}, P^{vlink}, C^{repro}, \mathbf{c}^{recomp}, \mathbf{c}^{restor}, C^{rebw} \right\}.$$
State of Network Traffic: Each network service provider holds its own complete network traffic information, stored in the history database of its store collector. During the training process, records can be extracted from this history database; each record is specified by a time slice and is defined as follows.
$$S_t^{histraffic} = \left\{ x_t, y_t, y_{t-1} \right\}.$$
In Equation (45), $x_t$ indicates the latest historical network traffic in a given time period for the specified time slice $s_t = (s_t^{date}, s_t^{slice})$, where $s_t^{date}$ indicates the date of the specified time slice and $s_t^{slice}$ indicates the serial number of the time slice within date $s_t^{date}$. $y_t$ represents the actual network traffic for time slice $s_t$, and $y_{t-1}$ represents the actual network traffic for time slice $s_{t-1}$, the time slice preceding $s_t$.
(2) Action: The action is the predicted value of network traffic for the specified time slice. We combine low-rank adaptation and deep reinforcement learning to solve the network traffic prediction problem. Prediction operations are performed in the actor network of the global model and the neural network of the local model.
The input of the local model is the network traffic $S_{p,t}^{histraffic}$ owned by each network service provider, and the output is the predicted network traffic value $a_{p,t}$. This process can be expressed as follows:
$$a_{p,t} = A_p^{local}\left( S_{p,t}^{histraffic} \right)$$
The network resource provider knows the combined traffic of the SFCs of the same type; however, it does not know the individual components. Therefore, the global model uses the total SFC traffic as its input, while the local models use their own components as input. In the actor network of the global model, the input is the actual total network traffic $S_t^{histraffic\_total} = \{x_t^{total}, y_t^{total}, y_{t-1}^{total}\}$ from the SFCs of the same type, and the output is the new predicted value of the total network traffic $A_t = \{a_{i,t}\}$ for SFC type $i$. This process can be expressed as follows:
$$A_t = A\left( S_t^{histraffic\_total} \right)$$
In the critic network of the global model, the main task is to evaluate the prediction performance. The inputs of the critic network are the predicted network traffic $A_t$ from the global model, the actual total network traffic $S_t^{histraffic\_total}$, and the physical network status $S_t^{phynet}$; the output is the evaluation value. This process can be expressed by the following equation:
$$q_t = Q\left( S_t^{phynet}, S_t^{histraffic\_total}, A_t \right)$$
(3) Reward: In our proposed framework, the reward represents the network performance, and it needs to be considered from four aspects: migration cost, energy consumption, request acceptance revenue, and request failure penalty.
When the network traffic prediction value is greater than or equal to the actual peak value, there will be no passive migration within the time slice. Migration will only be actively initiated at the beginning of the time slice. Then, the migration cost is defined as the sum of the migration delays during active migration; the energy consumption is the network power consumption after the active migration; the request acceptance revenue is the total amount of resources of the SFC requests accepted after the active migration; and the request failure penalty is the total amount of resources of the SFC requests that fail to map after the active migration. Thus, the network reward is
$$r^{mig,>} = r^{accept,a} - c^{operate,a} - c^{migrate,a} - c^{punish,a}.$$
When the network traffic prediction value is less than the actual peak value, passive migration will occur within the time slice. In this case, both active migration and passive migration occur. The migration cost is the sum of the migration delays during active migration and passive migration; the energy consumption is the network power consumption under the actual peak value of network traffic; the request acceptance revenue is the total amount of resources of the SFC requests accepted after passive migration; and the request failure penalty is the total amount of resources of the SFC requests that fail to map after passive migration. Thus, the network reward in this case is
$$r^{mig,<} = r^{accept,p} - c^{operate,p} - \left( c^{migrate,a} + c^{migrate,p} \right) - c^{punish,p}.$$
The actual reward value, however, must be calculated on a case-by-case basis. $r^{mig}$ and $r^{mig,>}$ are computed in the same way; the sole distinction between $r^{mig}$ and $r^{mig,<}$ lies in the computation of the passive migration latency, which is performed in the same manner as described in Section 3.2.3 (2).
Specifically, we formulate the network traffic prediction as an optimization problem, balancing migration latency, operation cost, and acceptance ratio to find the optimal predicted value. To achieve this objective, we first define and solve the VNF migration problem for evaluating the effect of the network traffic prediction. Secondly, we apply the LoRA and DDPG model to protect the data privacy of all merchants. With the aforementioned MDP model, we characterize the system state, action, and reward. Finally, we need to design an efficient network traffic prediction algorithm which can predict the optimal value of network traffic. The details of the algorithms will be described in Section 4.

3.4. LSTM and LoRA Fusions

In this paper, we inject LoRA into the weight matrix of each gate unit in the LSTM (including the input gate, forget gate, output gate, and candidate state), i.e., introducing low-rank incremental terms into each linear mapping. The weights in the gate computation process are also injected with LoRA in the same way. In this way, the model achieves more efficient training and a controllable capacity in prediction modeling while retaining the original temporal modeling structure.

3.4.1. Basic LSTM Equation

The standard LSTM unit contains three gate mechanisms and one memory unit:
Input gate: $i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi})$.
Forget gate: $f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf})$.
Output gate: $o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho})$.
Candidate memory: $\tilde{c}_t = \tanh(W_{ic} x_t + b_{ic} + W_{hc} h_{t-1} + b_{hc})$.
Memory update: $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$.
Hidden state: $h_t = o_t \odot \tanh(c_t)$.
where $x_t$ is the input at time step $t$ (the flow feature vector), $h_t$ is the hidden state (the time series representation), $\sigma$ is the sigmoid activation function, and $\odot$ is the Hadamard product (element-wise multiplication).
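For reference, one step of the standard LSTM cell above can be written directly in NumPy. The dictionary-based weight layout (`W["ii"]`, `b["hi"]`, etc.) is an assumption made for readability, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of the standard LSTM cell defined by the gate equations above.

    W and b are dicts keyed by gate name; e.g., W["ii"] maps the input to the
    input gate and W["hi"] maps the previous hidden state to the input gate.
    """
    i = sigmoid(W["ii"] @ x + b["ii"] + W["hi"] @ h_prev + b["hi"])        # input gate
    f = sigmoid(W["if"] @ x + b["if"] + W["hf"] @ h_prev + b["hf"])        # forget gate
    o = sigmoid(W["io"] @ x + b["io"] + W["ho"] @ h_prev + b["ho"])        # output gate
    c_tilde = np.tanh(W["ic"] @ x + b["ic"] + W["hc"] @ h_prev + b["hc"])  # candidate memory
    c = f * c_prev + i * c_tilde    # memory update (Hadamard products)
    h = o * np.tanh(c)              # hidden state
    return h, c
```

With all weights and biases zero, every gate evaluates to sigmoid(0) = 0.5 and the candidate memory to 0, so the cell halves its previous memory: c = 0.5 * c_prev.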

3.4.2. LoRA Mathematical Representation

The core of LoRA is to inject trainable parameters through low-rank decomposition.
For the weight matrix $W \in \mathbb{R}^{m \times n}$: $W' = W + \Delta W = W + BA$, where $B \in \mathbb{R}^{m \times r}$, $A \in \mathbb{R}^{r \times n}$, and $r \ll \min(m, n)$ is the low-rank dimension (typically $r = 4$–$32$).
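The low-rank update can be illustrated in a few lines of NumPy. The dimensions (m = 50, n = 144, r = 8) mirror the simulation settings in Section 5.1; the random values are stand-ins, and the zero initialization of B (so that W' = W before training) follows common LoRA practice.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 50, 144, 8              # output dim, input dim, low rank (r << min(m, n))

W = rng.standard_normal((m, n))   # frozen pretrained weight (stand-in values)
B = np.zeros((m, r))              # LoRA up-projection, zero-initialized
A = rng.standard_normal((r, n))   # LoRA down-projection

W_eff = W + B @ A                 # W' = W + BA; equals W at initialization since B = 0

full_params = m * n               # updating W directly: 7200 parameters
lora_params = m * r + r * n       # updating only B and A: 1552 parameters
```

Only B and A are trained, so each update touches 1552 parameters instead of 7200 for this single matrix.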

3.4.3. Injecting LoRA into LSTM Gates

LoRA is injected into the LSTM gates as follows.
Input gate enhancement:
$$i_t = \sigma\Big( \underbrace{W_{ii} x_t + W_{hi} h_{t-1}}_{\text{original term}} + \underbrace{B_{ii} A_{ii} x_t + B_{hi} A_{hi} h_{t-1}}_{\text{LoRA term}} \Big), \quad \text{where } B_{ii} \in \mathbb{R}^{d \times r},\ A_{ii} \in \mathbb{R}^{r \times d},\ r \ll d.$$
The same enhancement is applied to the other gates, i.e., $f_t$, $o_t$, and $\tilde{c}_t$.
For the local model, the LSTM model is initialized and trained, and the LSTM parameters are transmitted back to the global model. Then, during the subsequent DDPG training process, the local model trains the LoRA_LSTM hybrid structure, where only the LoRA structure is activated during training. As a result, at the end of each training round, the parameters of the LoRA matrix are transmitted back to the global model. The global model collects and aggregates the LSTM and LoRA structures from multiple SFCs of the same type to form the actor network model in the LoRA_DDPG structure. The training algorithm is shown in Section 4 below.
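The "freeze everything except LoRA" training step (lines 10–16 of Algorithm 1) can be illustrated on a toy linear model. The data, dimensions, and learning rate below are hypothetical, and plain gradient descent on the factors A and B stands in for the actual LoRA_LSTM training: the base weight W is never touched, yet the effective weight W + BA adapts to new behavior.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r = 6, 4, 2

W = rng.standard_normal((d_out, d_in))                    # frozen base weight
W_target = W + 0.1 * rng.standard_normal((d_out, d_in))   # behavior to adapt to
A = rng.standard_normal((r, d_in))                        # LoRA down-projection
B = np.zeros((d_out, r))                                  # LoRA up-projection (zero init)

X = rng.standard_normal((256, d_in))
Y = X @ W_target.T

def mse(B, A):
    E = X @ (W + B @ A).T - Y
    return float(np.mean(E ** 2))

loss_before = mse(B, A)
lr = 0.05
for _ in range(500):
    E = X @ (W + B @ A).T - Y          # prediction error, shape (N, d_out)
    grad_B = E.T @ X @ A.T / len(X)    # gradient of 0.5*mean ||E||^2 w.r.t. B
    grad_A = B.T @ (E.T @ X) / len(X)  # gradient w.r.t. A
    B -= lr * grad_B                   # only the LoRA factors are updated
    A -= lr * grad_A
loss_after = mse(B, A)
```

At the end of a round, only A and B (the analogue of the low-rank matrices transmitted in Algorithm 1) would be sent to the global model; W stays local and frozen.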
The structure is described in Figure 2.

4. Algorithms

The pseudocode for DDPG- and LoRA-based training for network traffic prediction is provided in Algorithm 1; the pseudocode for prediction is presented in Algorithm 2.
Algorithm 1 DDPG- and LoRA-Based Training
  • Input: Historical traffic data of each local network service provider; historical total traffic data of each SFC; physical network G; already arrived SFC requests $R$; initial models $Q_{\theta_0}$, $Q'_{\theta_0'}$, $\Omega_{\theta_0}$, $\Omega'_{\theta_0'}$, $\pi_{\theta_1}$, …, $\pi_{\theta_Q}$.
  • Output: Improved models $\Omega_{\theta_0}$, $\pi_{\theta_1}$, …, $\pi_{\theta_Q}$.
1:
Initialize each local model $\pi_{\theta_i}$ without LoRA, which belongs to a network service provider;
2:
for epoch j do
3:
      for each local model $\pi_{\theta_i}$ without LoRA do
4:
          Calculate the MSE loss between the predicted and actual values for local model $\pi_{\theta_i}$ without LoRA;
5:
           Update the parameters of each local model $\pi_{\theta_i}$ without LoRA;
6:
      end for
7:
end for
8:
The actor network of the global model gathers parameters from each local model $\pi_{\theta_i}$ without LoRA;
9:
for epoch k do
10:
     for epoch j do
11:
           for each local model $\pi_{\theta_i}$ with LoRA do
12:
                 Freeze all structures excluding LoRA;
13:
              Calculate the MSE loss between the predicted and actual values for local model $\pi_{\theta_i}$ with LoRA;
14:
                 Update the LoRA parameters of each local model $\pi_{\theta_i}$ with LoRA;
15:
           end for
16:
      end for
17:
     The actor network of the global model gathers LoRA parameters from each local model $\pi_{\theta_i}$ with LoRA;
18:
     Aggregate the local models into the actor network $\Omega_{\theta_0}$ of the global model using the parameters gathered in step 8 and step 17 for the same SFC type;
19:
     Predict $A_t$ using the actor network $\Omega_{\theta_0}$ of the global model;
20:
     Calculate reward r t according to Equations (49) and (50);
21:
     Store experience (state, action, reward, next state, slice) into replay memory and select mini-batch samples;
22:
Update the online critic network $Q_{\theta_0}$ with SGD;
23:
Copy the parameters of the online networks $\Omega_{\theta_0}$ and $Q_{\theta_0}$ to the target networks $\Omega'_{\theta_0'}$ and $Q'_{\theta_0'}$.
24:
end for
Algorithm 2 Predicting
  • Input: Physical network G, already arrived SFC requests R, actor network $\Omega_{\theta_0}$ of the global model.
  • Output: Predicted network traffic.
1:
$(x_t^{total}, y_t^{total}, y_{t-1}^{total})$ ← Read the record of historical network traffic for the specified time slice $s_t$ from the memory database of the same SFC type;
2:
$A_t$ ← Input $x_t^{total}$ into the actor network $\Omega_{\theta_0}$ of the global model for each SFC request to obtain the predicted values of network traffic;
3:
S Migrate the VNFs and Virtual Links using a greedy algorithm;
4:
r t Calculate the reward according to Equations (49) and (50).
Therefore, in the LoRA_DDPG algorithm, the data components of the local models from various network service providers are protected, as the algorithm eliminates the need to transmit traffic component data to the network resource provider. The global model of the network resource provider only requires aggregated local network traffic for training and prediction. Concurrently, the parameters of the low-rank matrices from the local models are transmitted to the global model. This approach not only preserves the global model’s ability to learn from local models but also significantly reduces the volume of parameters transmitted.

5. Simulation

5.1. Simulation Settings

In this section, we use Python 3.6 and PyTorch 1.12 to verify the proposed scheme. The simulation platform is built on an Intel Core computer with a 1.8 GHz central processor and 8 GB of random access memory.
In the simulation, we obtain the network topology with scale [140, 20, 20, 20, 10, 10, 991], where the numbers represent the quantities of NFV-enabled nodes, DT nodes of network service providers, DT nodes of the network resource provider, end nodes, access nodes, transport nodes, and physical links, respectively. The CPU capacity of nodes ranges from 100 to 200 cores, the storage capacity of nodes ranges from 100 to 200, the processing capacity of nodes ranges from 80 to 150, the link bandwidth capacity between two nodes ranges from 15 to 25 Mbps, and the link latency between two adjacent nodes ranges from 1 to 2 s. The experiment considers eight types of VNFs, one of which is non-shareable. Each SFC randomly selects a few VNFs, and the length of each SFC is between two and six. The SFC latency limit ranges from 30 to 50 s. A total of 100 SFC requests enter the network sequentially, with around 10 SFC requests typically online simultaneously. The processing latency of a VNF in a service node is from 2 to 4 s. The CPU requirement of each VNF is from one to two cores, the storage requirement of each VNF is from one to two, the processing resource requirement of each VNF is from one to two, and the bandwidth resource requirement of each virtual link is from 10 to 20 bps. The CPU requirement of each instance is from one to two cores, the storage requirement of each instance is from one to two, and the processing resource requirement of each instance is from 10 to 20. In addition, randomization is used to assign an SFC request to each type of network service provider, and the number of network service provider types is set to four.
For the LoRA_DDPG algorithm of network traffic prediction, the actor networks of the online and target network in the global model contain two hybrid layers, each fusing LSTM with LoRA, while the dimension of each LSTM part is 50 and the rank of each LoRA part is eight. The critic networks of the online and target networks in the global model contain 2 LSTM layers and the dimension of each LSTM layer is 50. Each local model of a network service provider also contains two hybrid layers, each fusing LSTM with LoRA, while the dimension of each LSTM part is 50 and the rank of each LoRA part is eight. The algorithm uses softplus as the activation function. The detailed simulation parameter settings are shown in Table 2.

5.2. Benchmark Schemes

We compare the performance of our proposed LoRA_DDPG approach against the following three baseline schemes.
(1) Baseline 1, LSTM : In the global model, we apply LSTM [24], and the predicted results are directly given to the network resource provider for resource reallocation. We label the results associated with this baseline as LSTM in the experiments.
(2) Baseline 2, DDPG_LSTM : In the local model, we apply LSTM as the base layer, and apply DDPG [25] with LSTM layers in the global model. The actor network in the global model and the local model uses LSTM as the foundational layer, which is the only difference from our proposed method. We label the results associated with this baseline as DDPG_LSTM in the experiments.
(3) Baseline 3, FL [26] : In the local model, we apply LSTM, and the parameters of the model are delivered to aggregate the global model. We label the results associated with this baseline as FL in the experiments.
We use the following performance metrics to evaluate the performance:
(1) Resource: The resource revenue of the accepted SFC requests.
$$r^{average,accept} = \frac{\sum_{i=1}^{n} r_i^{accept}}{n}.$$
where $r^{accept}$ is defined in Equation (27).
(2) Latency: This represents the migration latency of the physical network.
$$c^{average,migrate} = \frac{\sum_{i=1}^{n} c_i^{migrate}}{n}.$$
where $c^{migrate}$ is analyzed in Equation (34).
(3) Energy: The energy consumption of the network, which consists of the energy consumption of all servers.
$$c^{average,operate} = \frac{\sum_{i=1}^{n} c_i^{operate}}{n}.$$
where $c^{operate}$ is defined in Equation (40).
(4) Punish: The resource required by the failed SFC requests.
$$r^{average,punish} = \frac{\sum_{i=1}^{n} r_i^{punish}}{n}.$$
where $r^{punish}$ is defined in Equation (41).
(5) Reward: The linear combination of the above four metrics.
$$r^{average,mig} = \frac{\sum_{i=1}^{n} r_i^{mig}}{n}.$$
where $r^{mig}$ is calculated using Equation (49) or Equation (50).

5.3. Performance Evaluation

To examine the effectiveness of LoRA_DDPG in the global model, we train the LoRA_DDPG model with the parameters selected in the above analysis and then deploy it in the DRL module.
We record the reward values of the validation data during the training process in the global model. We conducted a total of 10 training rounds, each using a validation dataset of 144 data points. The reward outcomes for the 144 validation data points across the 10 rounds are presented in Figure 3. As can be seen, after three rounds of training, the reward outputs for the 144 validation data points in each of the subsequent seven rounds become very similar. Therefore, the training process has essentially converged after three rounds.
To examine the effectiveness of LoRA_LSTM in the local model, we train the LoRA_LSTM model with the parameters selected in the above analysis.
We record the MSE loss values of the validation data during the second training process (lines 10 to 16 in Algorithm 1) in the local model. As can be seen from Figure 4, the training algorithm of LoRA_LSTM has a rapid convergence trend.
Next, we compare the revenue for accepted SFC requests, the migration latency, the energy consumption, the punishment of refused SFC requests, and the total reward of different algorithms to validate the performance of the proposed algorithm.
In Figure 5, the resource revenue of the accepted SFC requests is plotted for the solutions obtained from LoRA_DDPG, DDPG_LSTM, FL, and LSTM. The results show that the resource revenue of the DDPG_LSTM algorithm is the highest. DDPG_LSTM increases by 4.21% compared with LoRA_DDPG; LSTM reduces by 15.51% compared with LoRA_DDPG; and FL reduces by 9.67% compared with LoRA_DDPG.
Figure 6 illustrates the migration latency values of the four algorithms. It can be observed from the figure that the VNF migration latency value of the LSTM algorithm is the highest. Conversely, the VNF migration latency value of the DDPG_LSTM algorithm is the lowest. DDPG_LSTM reduces by 23.45% compared with LoRA_DDPG; LSTM increases by 49.63% compared with LoRA_DDPG; and FL increases by 26.81% compared with LoRA_DDPG.
Figure 7 depicts the energy consumption of the four algorithms. LoRA_DDPG performs the best in saving energy, but the difference in value between them is very small. DDPG_LSTM increases by 0.17% compared with LoRA_DDPG; LSTM increases by 0.37% compared with LoRA_DDPG; and FL increases by 0.01% compared with LoRA_DDPG.
Figure 8 presents the total punishment for denying SFC requests. We can observe that DDPG_LSTM achieves the lowest punishment. DDPG_LSTM reduces by 11.59% compared with LoRA_DDPG; LSTM increases by 50.79% compared with LoRA_DDPG; and FL increases by 46.55% compared with LoRA_DDPG.
Figure 9 presents the total reward of these algorithms. DDPG_LSTM increases by 51.59% compared with LoRA_DDPG; LSTM reduces by 157.44% compared with LoRA_DDPG; and FL reduces by 94.61% compared with LoRA_DDPG.
Through the above comparison, we find that DDPG_LSTM and LoRA_DDPG achieve a better performance: DDPG_LSTM performs best, and LoRA_DDPG ranks second in prediction performance. However, LoRA_DDPG transmits far fewer parameters, as the following analysis shows. Each LSTM layer has a parameter count of 4 × (input dimension × hidden dimension + hidden dimension × hidden dimension + hidden dimension). Thus, when the hidden dimension per layer is 50, the output dimension is 1, and the input dimension is 144, the first layer contains 39,000 parameters and the second layer contains 20,200 parameters, so each local model under the DDPG_LSTM architecture transmits 59,200 parameters per iteration. The parameter count of a low-rank matrix pair is (input dimension × rank + rank × output dimension). Thus, for the two-layer fusion of LSTM and LoRA with a rank of eight, the first layer has 6208 parameters and the second layer has 3200 parameters, so each local model under the LoRA_DDPG architecture transmits 9408 parameters per iteration. The number of parameters transmitted from the local model to the global model under the LoRA_DDPG structure is thus reduced by 84.11% compared with DDPG_LSTM.
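The parameter counts above can be verified with a short script. The per-gate accounting in `lora_layer_params` (a rank × input down-projection plus a hidden × rank up-projection for each of the four gates) is one consistent reading of the figures reported in the text, stated here as an assumption.

```python
def lstm_layer_params(d_in, d_hid):
    # Four gates, each with an input-to-hidden matrix, a hidden-to-hidden
    # matrix, and a bias vector.
    return 4 * (d_in * d_hid + d_hid * d_hid + d_hid)

def lora_layer_params(d_in, d_hid, rank):
    # Assumed accounting: per gate, an (rank x d_in) down-projection and a
    # (d_hid x rank) up-projection on the input path.
    return 4 * (d_in * rank + rank * d_hid)

full = lstm_layer_params(144, 50) + lstm_layer_params(50, 50)        # 39000 + 20200
lora = lora_layer_params(144, 50, 8) + lora_layer_params(50, 50, 8)  # 6208 + 3200
reduction = 100 * (1 - lora / full)
print(full, lora, round(reduction, 2))  # 59200 9408 84.11
```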

6. Conclusions

In this manuscript, we study the network traffic prediction problem for end-to-end network service deployment. We focus on three key challenges of the prediction problem in the DT-assisted and SDN/NFV-enabled network. For the first challenge of the definition of the mathematical model considering the DT and multiple merchants, we define the VNF migration problem for multiple merchants in the DT network. For the second challenge of better resource utilization, we carefully take the network service provider and network resource provider into account, and define the Markov Decision Process for the network traffic prediction problem. For the last challenge of protecting the privacy of each merchant and reducing model parameters for communication, we embed LoRA into LSTM, and propose the algorithm LoRA_DDPG. We evaluate the performance of LoRA_DDPG through experimental simulations, and the simulation results show that our algorithm outperforms benchmark algorithms and achieves a better performance.
In summary, this research not only presents a novel prediction methodology but also paves the way for building more intelligent, autonomous, and efficient future networks. The proposed framework finds concrete and impactful applications in the core challenges of 5G/6G evolution, namely agile network slicing and responsive edge computing. Future work will involve implementing and validating this framework in a real-world testbed emulating these specific scenarios.

Author Contributions

Conceptualization, Y.H. and B.L.; methodology, Y.H. and J.L.; software, B.L.; validation, B.L. and J.L.; formal analysis, Y.H., B.L., and J.L.; resources, B.L.; writing—original draft preparation, Y.H. and J.L.; writing—review and editing, B.L.; visualization, B.L. and J.L.; supervision, L.J. and J.L.; project administration, B.L., J.L., and L.J.; funding acquisition, Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Henan Provincial Department of Science and Technology Program (No. 242102210204).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

We extend our heartfelt appreciation to our esteemed colleagues at the university for their unwavering support and invaluable insights throughout the research process. We also express our sincere gratitude to the editor and the anonymous reviewers for their diligent review and constructive suggestions, which greatly contributed to the enhancement of this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Digital twin-assisted network architecture.
Figure 2. The LoRA_DDPG structure in the global model.
Figure 3. Reward values of the validation data during the training process.
Figure 4. MSE loss of low-rank matrix training in local model.
Figure 5. Resource revenue of the accepted SFC requests during the prediction process.
Figure 6. Migration latency during the prediction process.
Figure 7. Energy consumption during the prediction process.
Figure 8. Punishment for SFC request denial during the prediction process.
Figure 9. Reward values during the prediction process.
Table 1. Constraints (grouped by the entity each constraint applies to).

VNF: (4) mapped to one and only one NFV node; (17) associated with one and only one DT node; shareable VNF: (10) processing capacity not violated; non-shareable VNF: (11) exclusivity in NFV node, (12) processing capacity not violated.
SFC request: (5), (6) one and only one beginning and end node; (8) flow cannot be split; (9) flow conservation; (13) end-to-end delay constraint.
DT node: (7) belongs to only one merchant; (18) transmitting flow for an NFV node cannot be split; (19) transmitting flow for a transport node cannot be split; (20) transmitting flow for an end node cannot be split; DT of the network service provider: (23) computing capacity not violated, (26) storage capacity not violated; DT of the network resource provider: (22) computing capacity not violated, (25) storage capacity not violated.
Link: (14) bandwidth capacity not violated; (15) bandwidth capacity not violated when registering.
NFV node: (16) processing capacity not violated; (21) computing capacity not violated; (24) storage capacity not violated.
Table 2. Parameter settings.

Hyperparameter: Value
Learning rate: 0.0001
Dropout factor: 0.2
Number of network service providers: 4
Number of time slices in one day: 24
η1 in Equation (27): 3
η2 in Equation (27): 3
η3 in Equation (27): 3
ω in Equation (34): 20
ε0 in Equation (39): 200
ε1 in Equation (39): 300
γ in Equation (40): 0.01
μ1 in Equation (41): 1
μ2 in Equation (41): 1
μ3 in Equation (41): 1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
