Article

Lossless and High-Throughput Congestion Control in Satellite-Based Cloud Platforms †

by Wenlan Diao, Jianping An *, Tong Li, Yu Zhang and Zhoujie Liu
School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in 2024 International Conference on Electrical, Electronic Information and Communication Engineering (EEICE) under the title “Lossless Congestion Control Based on Priority Scheduling in Named Data Networking”, Xi’an, China, 12–14 April 2024.
Electronics 2025, 14(6), 1206; https://doi.org/10.3390/electronics14061206
Submission received: 14 February 2025 / Revised: 12 March 2025 / Accepted: 14 March 2025 / Published: 19 March 2025
(This article belongs to the Section Networks)

Abstract:
Low Earth Orbit (LEO) satellite networks are promising for satellite-based cloud platforms. Due to frequent link switching and long transmission distances in LEO satellite networks, applying the TCP/IP architecture introduces challenges such as packet loss and significant transmission delays. These issues can trigger excessive retransmissions, leading to link congestion and increased data acquisition delay. Deploying Named Data Networking (NDN) with connectionless communication and link-switching tolerance can help address these problems. However, the existing congestion control methods in NDN lack support for congestion avoidance, lossless forwarding, and tiered traffic scheduling, which are crucial for achieving low-delay operations in satellite-based cloud platforms. In this paper, we propose a Congestion Control method with Lossless Forwarding (CCLF). Addressing the time-varying nature of satellite networks, CCLF implements zero packet loss forwarding by monitoring output queues, aggregating packets, and prioritizing packet scheduling. This approach overcomes traditional end-to-end bottleneck bandwidth limitations, enhances network throughput, and achieves low-delay forwarding for critical Data packets. Compared with the Practical Congestion Control Scheme (PCON), the CCLF method achieves lossless forwarding at the network layer, reduces the average flow completion time by up to 41%, and increases bandwidth utilization by up to 57%.

1. Introduction

Recent advancements in satellite communications have enabled the deployment of a Low Earth Orbit (LEO) satellite constellation. This constellation, along with its components, forms a satellite-based cloud platform [1]. The platform, equipped with substantial storage, robust computing capabilities, and inter-satellite links exceeding 400 Gbps, is designed to provide global high-speed Internet access, efficient data exchange, and scalable cloud computing services [2].
However, the satellite-based cloud platform operates as a time-varying network, with dynamic changes in both topology and available bandwidth. Compared to conventional terrestrial communications, LEO satellite communications feature higher propagation delays, greater path losses, and increased interference [3]. Furthermore, the allocation and management of available bandwidth in LEO systems are more intricate [4]. Collectively, these factors render efficient and stable communication more challenging compared to traditional terrestrial systems.
The characteristics of satellite networks, such as high-speed movement, long transmission distances, and highly dynamic topologies, give rise to challenges like short contact windows, significant transmission delays, and diverse transmission channels [5]. Applying the traditional TCP/IP-based network architecture to the satellite-based cloud platform can lead to several issues: 1. Link switching can cause transmission interruptions, leading to temporary loss of data or service outages. 2. Changes in the IP addresses of access satellites and variations in transmission paths can cause end-to-end connection disruptions and packet loss. 3. The instability of the end-to-end transmission path causes Round-Trip Time (RTT) fluctuations, which challenge the accuracy of TCP timeout estimation. As a result, frequent packet timeouts and retransmissions occur, exacerbating link congestion and delays. 4. Significant variability in data acquisition delay in cloud services impacts the stability of service quality on the satellite-based cloud platform.
Given the above challenges, it is imperative to develop a novel network architecture for the time-varying satellite-based cloud platform [6], characterized by the following: 1. Decoupling cloud services from satellite servers and implementing a connectionless data transmission mechanism from endpoint to the cloud platform, thereby tolerating link interruptions. 2. Preventing congestion caused by substantial packet retransmissions through traffic scheduling at the network layer [7], thereby reducing data acquisition delays. 3. Supporting efficient data distribution, minimizing redundant data transmission, and conserving bandwidth through collaborative storage and data synchronization. 4. Integrating inherent reliable transmission and robust security mechanisms into the system to ensure data acquisition reliability and safeguard against malicious network attacks.
As a candidate for future network architecture, Named Data Networking (NDN) [8] offers several key features: connectionless communication, secure and reliable transmission [9], robust tolerance for delay and interruption [10,11], and efficient data transmission from endpoints to cloud platforms. Specifically, NDN’s request aggregation, content multicast [12], and in-network storage mechanisms facilitate the distribution and collaborative storage of data chunks. These features ensure that satellite-based cloud platforms can meet the requirements for efficient content distribution, low-delay data acquisition, and reliable transmission. Consequently, the satellite-based cloud platform based on the NDN architecture demonstrates substantial development potential [13].
As the demand for cloud storage and cloud computing services continues to grow, LEO satellite networks face the challenge of handling a vast amount of information exchange, which poses significant challenges for congestion control. Reducing packet loss, improving throughput, and avoiding congestion have become urgent issues that need to be addressed.
NDN relies on the pull model to acquire content, where the user sends explicit requests referred to as “Interest” to pull the named content referred to as “Data”. Each request message contains the name of the desired content and is forwarded by routers according to the name to retrieve the corresponding content from sources or in-network caches [14]. Each Interest will retrieve one Data, and the Data are reverse-forwarded along the path specified by the Interest. After receiving the Data, the router stores its copy in the Content Store (CS) and immediately forwards the Data. This means that the Data will be inserted into the output queue of the forwarding interface, as shown in Figure 1.
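The pull-model forwarding just described (CS lookup, Interest aggregation in the PIT, and reverse-path Data delivery) can be sketched in a few lines of Python. The class and method names below are illustrative only and are not taken from any NDN codebase:

```python
from collections import defaultdict

class NdnRouter:
    """Minimal sketch of NDN pull-model forwarding (hypothetical names)."""
    def __init__(self):
        self.content_store = {}            # name -> Data payload (the CS)
        self.pit = defaultdict(list)       # Pending Interest Table: name -> requesting faces

    def on_interest(self, name, in_face):
        # A cached copy in the CS satisfies the Interest immediately.
        if name in self.content_store:
            return ("Data", name, self.content_store[name])
        # Otherwise record the face; identical Interests are aggregated in the PIT.
        first_request = name not in self.pit
        self.pit[name].append(in_face)
        return ("forward-upstream", name) if first_request else ("aggregated", name)

    def on_data(self, name, payload):
        # Store a copy in the CS, then return the list of faces waiting in
        # the PIT entry, so the Data is multicast back along the reverse path.
        self.content_store[name] = payload
        return self.pit.pop(name, [])
```

Note how a second Interest for the same name is aggregated rather than forwarded, and how a single returning Data satisfies every waiting face at once.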
However, when the traffic of Data packets waiting to be forwarded exceeds the link capacity, these Data packets accumulate in the output queue, leading to queue overflow and packet loss. This issue is prominent in LEO satellite networks due to their dynamic available bandwidth. In this paper, the term “bandwidth” is employed to denote the maximum data transmission rate of a link, reflecting the link capacity for data transfer. The substantial space propagation delay and queuing delay result in a significant increase in data acquisition delay, especially during retransmission triggered by packet loss. Therefore, designing an effective congestion control scheme in NDN to achieve low transmission delay, high throughput, and lossless forwarding is of great significance.
Currently, most NDN congestion control schemes employ a receiver-driven approach, whereby the traffic of Data packets can be indirectly regulated by adjusting the sending or forwarding rate of Interests. According to the participants involved in congestion control, existing methods in NDN can be classified into three categories: (1) consumer-side congestion control, (2) hop-by-hop congestion control, and (3) hybrid congestion control [15].
Regarding the first method, it requires consumers to monitor the RTT from sending an Interest to receiving Data and then estimate whether the network is congested based on changes in RTT. This information is used to adjust the sending rate of Interests, thereby controlling the response rate of Data packets. However, due to multiple sources and forwarding paths in NDN, content is no longer transmitted along a single path, resulting in significant fluctuations in RTT. Consequently, adjusting the sending rate of Interests becomes challenging for accurately preventing congestion. The second method, hop-by-hop congestion control, requires each router to monitor the occupancy of output queues at each interface. Once congestion is detected, the router sends signals downstream. Upon receiving these signals, downstream routers reduce the forwarding rate of Interests and continue to propagate the signals further downstream [16,17]. Ultimately, this forces a decrease in the sending rate of Interests. However, this approach can easily lead to packet loss because received Interests may not be forwarded in a timely manner.
The hybrid congestion control method requires collaboration between consumers and routers. Currently, the Practical Congestion Control Scheme (PCON) [18] is the most recognized mechanism in NDN. It detects congestion by measuring the dwell time of each Data packet in the router and then marks certain Data packets as congestion signals. Upon receiving these marked Data packets, consumers reduce the sending rate of Interests. The NDN Congestion Control Based on Queue Size Feedback (NDN-QSF) [19] requires routers to estimate upstream bandwidth to regulate the forwarding of Interests and uses queue size as congestion feedback to inform downstream routers to slow down Interest forwarding. The Cooperative Hybrid Congestion Control (CoopCon) scheme [20] employs four types of congestion markers to adjust the sending rate of Interests. However, these methods rely on numerous threshold parameters, and the optimal thresholds vary with applications, making them unsuitable for LEO satellite networks that carry diverse services.
In summary, current methods control the traffic of Data packets by limiting the transmission rate of Interests in NDN. However, applying these methods to a satellite-based cloud platform may encounter the following issues:
  • Among the existing methods, network throughput is adjusted to match the bandwidth of the bottleneck link. By preventing full occupation of the bottleneck link, congestion is avoided. However, this results in low throughput and high flow completion times.
  • The size of each Data is defined by the producer; thus, the sizes of Data packets retrieved by different Interests can vary significantly. Therefore, adjusting the sending rate of Interests alone cannot proportionally regulate the traffic of Data packets.
  • The satellite-based cloud platform, characterized by its time-varying nature and significant propagation delays, may experience delayed congestion responses when existing congestion feedback mechanisms are used. This can lead to packet loss [21].
  • Existing schemes do not consider prioritizing packet scheduling while performing congestion control, which poses challenges in meeting the low-delay requirements for computing services and the high-throughput demands for storage services [22,23].
In order to achieve high throughput, low transmission delay, and lossless forwarding within a satellite-based cloud platform, we propose a Congestion Control method with Lossless Forwarding (CCLF). Unlike previous approaches that adjust the transmission rate of Interests, we utilize the inherent CS of NDN to buffer Data packets and then schedule them. Each router regulates locally forwarded Data packets to avoid congestion.
Currently, existing cloud server instances provide up to 5888 GB (5.8 TB) of Dynamic Random Access Memory (DRAM) [24], supporting efficient data buffering and fast access in CS. Additionally, we design a priority-based scheduling mechanism to facilitate hierarchical, efficient, and lossless forwarding at the network layer while maintaining high throughput. The main contributions of this paper are summarized as follows:
  • We propose a lossless congestion control method termed CCLF, which enhances the content storage and data forwarding mechanisms of NDN. By monitoring output queues and regulating data forwarding, CCLF effectively eliminates packet loss.
  • We optimize the aggregation and unified scheduling of Data packets, fully utilizing the bandwidth resources in the time-varying network. This approach overcomes the throughput limitations imposed by the constrained bandwidth of the end-to-end bottleneck link, thereby achieving high network throughput.
  • We design a priority-based scheduling algorithm that supports swift forwarding of high-priority Data while ensuring appropriate bandwidth allocation for ordinary Data with the same priority, thus meeting the desired Quality of Service (QoS).
  • We conduct a series of experiments to evaluate the CCLF method, comparing its performance with that of the recognized PCON approach. The results show that CCLF significantly reduces flow completion time and improves network throughput.
The rest of this paper is organized as follows. In Section 2, we discuss the related works. In Section 3, we describe the integrated system and algorithm design of CCLF. In Section 4, we implement the CCLF method, and in Section 5, we evaluate the experimental results. Finally, we summarize our work in Section 6.

2. Related Works

Currently, the congestion control schemes of NDN are based on the principle that each Interest can retrieve one Data. Therefore, the traffic of Data packets can be constrained by limiting the sending rate of Interests. The existing congestion control schemes are mainly divided into three types: consumer-side congestion control, hop-by-hop congestion control, and hybrid congestion control.

2.1. Consumer-Side Congestion Control

The congestion control on the consumer side in NDN inherits the scheme from Transmission Control Protocol (TCP). Carofiglio et al. proposed the Interest Control Protocol (ICP) [25], in which the consumer uses the Additive Increase Multiplicative Decrease (AIMD) algorithm to control the sending rate of Interests. Since Data in NDN are no longer obtained along a single path, Saino et al. proposed Content-Centric TCP (CCTCP) [26] to maintain a Retransmission Timeout (RTO) timer for each content producer. The Remote Adaptive Active Queue Management (RAAQM) [27], also proposed by Carofiglio et al., maintains the RTO value for each path separately. These methods can be used for multi-source and multi-path scenarios to a certain extent. However, in the case of request aggregation and Data multicast, calculating an accurate RTO is still challenging [1]. To address the issues mentioned above, the Q-NDN congestion control method [28] was proposed. It not only considers RTT but also incorporates various state information. Consequently, consumers can dynamically adjust the congestion window (cwnd) through real-time monitoring of network status, leveraging the Q-learning algorithm for this adjustment.
This type of method treats consumers as agents and adjusts the sending rate of Interests based on the estimation of congestion status. However, due to delays in obtaining network information such as timeouts and NACKs within NDN, consumers are unable to promptly access the current network congestion status using this information.

2.2. Hop-by-Hop Congestion Control

Using the hop-by-hop congestion control scheme, each router detects congestion by monitoring its outgoing queues and then shapes the forwarding rate of Interests. Rozhnova et al. proposed the Hop-By-Hop Interest Shaping mechanism (HoBHIS) [29], which predicts congestion and decreases the shaping rate, calculated from Data queue occupancy, before the receiver can detect congestion via timer expiration. Safa Mejri et al. proposed a method called Hop-By-Hop congestion control (HBH) [30], where each intermediate router monitors the occupancy of outgoing queues to identify congestion. If the queue size reaches its minimum or maximum specified threshold, an explicit notification is sent to downstream routers and consumers, specifying the increase or decrease rate for sending Interests. Moreover, they proposed another mechanism called Hop-By-Hop Interest Rate Notification and Adjustment (IRNA) [31,32]. IRNA requires each router to monitor the occupancy of the outgoing Interest queue in order to detect and issue congestion signals in advance.
This type of method alleviates congestion by adjusting the forwarding rate of Interests on a hop-by-hop basis. However, these methods are essentially local optimal solutions. When the network hosts complex applications, the above solutions may fail to ensure global optimization of the network performance.

2.3. Hybrid Congestion Control

The hybrid scheme involves both consumers and routers in network congestion control. At present, PCON [18] is a widely recognized congestion control mechanism in NDN. It detects congestion by measuring the Data queuing time and then signals this information to consumers by explicitly marking certain packets. This allows downstream routers to divert traffic to alternative paths and enables consumers to reduce their Interest sending rates. Yuhang et al. proposed a method that includes Hop-by-hop Congestion Measurement (HbHCM) and Practical Active Queue Management (PAQM) [33]. This approach judges congestion based on RTT between adjacent routers and provides Explicit Congestion Notification (ECN) feedback to consumers. Sichen Song et al. proposed an NDN Congestion Control Based on Queue Size Feedback (NDN-QSF) [19]. In NDN-QSF, forwarders estimate upstream bandwidth to calculate the forwarding rate of Interests and use queue size as congestion feedback to inform downstream routers to limit the forwarding rate of Interests. Zhuo Li et al. proposed a Cooperative Hybrid Congestion Control (CoopCon) scheme [20]. This scheme innovatively introduces four types of congestion markers. Depending on the specific type of congestion marker, consumers will adopt specialized methods to reduce the sending rate of Interests.
The hybrid congestion control method utilizes the ECN technique to achieve more accurate congestion relief [34]. However, its performance is affected by threshold parameters that cannot adapt to different Data packets. For example, using PCON [35] to forward Data containing 200-byte content results in the loss of 544 packets within 40 s, as shown in Figure 2. This highlights the limitations of fixed thresholds in diverse networks.
In summary, most existing methods limit the traffic of Data packets by restricting the transmission of Interests. However, these solutions have several drawbacks. Firstly, the varying sizes of Data packets make Interest-based regulation imprecise. Secondly, existing methods limit the overall network throughput to the bandwidth of the bottleneck link. Thirdly, long-distance transmission in LEO satellite networks can lead to delayed responses to congestion, resulting in packet loss. Additionally, prioritizing the forwarding of critical Data is crucial. However, existing approaches lack effective mechanisms for assigning priorities to Data packets, limiting their ability to meet diverse application requirements.
The essence of congestion control is to regulate the traffic injected into the network. In this paper, we leverage the inherent Content Store (CS) in NDN and introduce a router-driven congestion control method. Each router manages the local scheduling of Data packet forwarding, enabling high-throughput and congestion-free forwarding within the satellite-based cloud platform.

3. Integrated System and Algorithm Design

In this section, we provide a detailed description of the system and algorithm design of CCLF. We first outline our design goals and then elaborate on the procedures for congestion control and Data scheduling. In this paper, we use “Interest” to represent Interest packets and “Data” to denote Data packets in NDN.

3.1. Design Goals

Currently, the congestion control methods used in NDN face significant challenges when applied to LEO satellite networks. Issues such as imprecise traffic control, low network throughput, high flow completion times, delayed congestion responses, and frequent packet loss collectively prevent existing approaches from achieving optimal performance in these environments. Additionally, current congestion control methods in NDN do not support the prioritized forwarding of critical data, which results in an inability to meet the diverse Quality of Service (QoS) requirements.
To address these issues, we propose a Congestion Control method with Lossless Forwarding (CCLF). In this paper, multiple Data packets sharing the same prefix in their names are considered to form a data flow. The objectives of the proposed work are as follows:
Lossless forwarding: We aim to achieve congestion-free forwarding of Data packets, ensuring zero packet loss at the network layer of NDN.
High throughput: We design the router to directly manage the forwarding of Data packets, thereby maintaining high throughput and full bandwidth utilization.
QoS guarantee: We introduce a scheduling scheme for Data packets based on the priority of data flows, facilitating low-delay forwarding of time-sensitive data packets.

3.2. Framework of CCLF

The CCLF modifies the details of the data forwarding process in NDN. Specifically, we enhance the backhaul forwarding of Data packets. Incoming Data packets are first stored in the CS. Following this, a priority-based scheduling algorithm is employed to progressively forward both time-sensitive and regular Data packets.
The current NDN architecture allows the CS to store Data packets, which are then forwarded immediately. However, when the output queue of the forwarding interface is full, Data packets are discarded. In contrast, the CCLF method enhances the functionality of the CS by enabling it to serve both as a storage and a forwarding buffer. This allows Data packets to be temporarily held in the CS when the output queue of the forwarding interface is full, ensuring lossless forwarding by retaining packets until the queue has available capacity. This critical capability is absent in traditional NDN systems.
Advancements in hardware technology have enabled high-capacity DRAM in cloud servers. For instance, the M2 UltraMemory [24] machine types are specifically designed for memory-intensive workloads. Each M2 UltraMemory VM provides up to 5888 GB of memory. Consequently, it is feasible to equip LEO satellite routers with 5 TB or greater DRAM, thereby facilitating efficient data storage and access in the CS.
The main improvement introduced by the CCLF is the creation of a dispatch table within NDN routers to assist in making forwarding decisions. This dispatch table maintains detailed records for each Data packet stored in the CS that has not yet been forwarded. When the final Data packet in the output queue is transmitted, the dispatch table is consulted, and the next Data packet to be forwarded is selected based on a priority-based scheduling algorithm. This selected Data packet is inserted into the output queue and immediately forwarded, and the next Data packet is then selected, as illustrated in Figure 3. By scheduling Data forwarding based on the current status of the output queue and the priorities of data flows, the CCLF method achieves maximum throughput while ensuring congestion-free forwarding and reducing the flow completion time.
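As a rough illustration of the dispatch-table idea described above (the structure and names here are our own assumptions; the paper does not publish code), a priority queue keyed on flow priority with FIFO tie-breaking can select the next Data packet whenever the output queue has room:

```python
import heapq

class DispatchTable:
    """Sketch of a CCLF-style dispatch table (assumed structure).
    Data packets buffered in the CS are recorded here; when the output
    queue drains, the highest-priority (then oldest) entry is selected."""
    def __init__(self):
        self._heap = []
        self._seq = 0   # tie-breaker preserves FIFO order within a priority level

    def record(self, name, priority):
        # Lower number = higher priority, so heapq pops critical Data first.
        heapq.heappush(self._heap, (priority, self._seq, name))
        self._seq += 1

    def next_to_forward(self):
        if not self._heap:
            return None
        _, _, name = heapq.heappop(self._heap)
        return name

def drain(table, queue_slots):
    """Move up to queue_slots Data packets from the CS into the output queue."""
    out = []
    for _ in range(queue_slots):
        name = table.next_to_forward()
        if name is None:
            break
        out.append(name)
    return out
```

With entries `("/storage/a", 2)`, `("/compute/x", 0)`, and `("/storage/b", 2)`, draining three slots forwards the time-sensitive compute flow first and the two same-priority storage packets in arrival order.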
In LEO satellite communications, the available bandwidth frequently fluctuates due to varying conditions. By leveraging the contact windows provided by satellite communication, we can model the time-varying network as a sequence of segmented static networks. Specifically, when the available bandwidth of the upstream link exceeds that of the downstream link, CCLF leverages the CS to store Data packets that cannot be immediately forwarded due to bandwidth limitations. Once the available bandwidth of the downstream link increases, these previously stored Data packets can be promptly forwarded, thereby enhancing the overall throughput of the network. This mechanism ensures efficient use of available bandwidth and minimizes delays, as illustrated in Figure 4.
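The buffering behavior described above can be mimicked with a toy discrete-time model of a two-link path. The bandwidth traces below are invented for illustration and are not drawn from the paper's experiments:

```python
def simulate_two_link_path(up_bw, down_bw):
    """Toy discrete-time model of CCLF buffering on a two-link path.
    up_bw[k], down_bw[k]: packets each link can carry in slot k (assumed traces).
    Packets the downstream link cannot carry yet wait in the CS instead of
    being dropped, and are forwarded as soon as capacity returns."""
    cs_backlog = 0          # Data packets held in the Content Store
    delivered = 0           # Data packets received downstream
    for up, down in zip(up_bw, down_bw):
        cs_backlog += up                   # upstream link fills the CS
        sent = min(cs_backlog, down)       # downstream drains what it can
        cs_backlog -= sent
        delivered += sent
    return delivered, cs_backlog

# Upstream is fast early, downstream fast late (e.g., after a link handover).
up   = [10, 10, 10, 0, 0, 0]
down = [ 2,  2,  2, 8, 8, 8]
delivered, backlog = simulate_two_link_path(up, down)          # 30 packets, no backlog left
min_rate_scheme = sum(min(u, d) for u, d in zip(up, down))     # 6 packets at best
```

In this toy trace, buffering in the CS delivers all 30 packets, whereas a scheme pinned to the instantaneous bottleneck rate delivers at most 6 over the same window.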
We establish models to validate throughput optimization and QoS guarantees in CCLF. First, we calculate and compare the end-to-end average bandwidth, which serves as an indicator of throughput. Then, we introduce a priority-based Data scheduling model.

3.3. Throughput Optimization Model

We developed a model to evaluate the throughput optimization achieved by CCLF. Notably, we adopted average bandwidth as the key performance indicator for assessing network throughput. Average bandwidth is defined as the mean data transfer rate over a specified period, accurately reflecting the network's actual transmission efficiency.
First, we calculate the throughput capacity and the average bandwidth, denoted as $\bar{b}_{\mathrm{exist}}$, under existing congestion control mechanisms. In Section 3.3.1, we derive the network throughput and the average bandwidth, denoted as $b_{\mathrm{CCLF}}$, when using the CCLF approach. Then, in Section 3.3.2, by comparing the values of $b_{\mathrm{CCLF}}$ and $\bar{b}_{\mathrm{exist}}$, we validate that the network throughput under the CCLF method significantly exceeds that of existing methods.
In this study, we investigate data transmission within the contact windows of satellite communications. During this period, the direct path from the producer to the consumer can be regarded as a stable connection. We define $N$ as the number of forwarding hops from the producer to the consumer. In LEO satellite networks characterized by dynamic available bandwidth, the available bandwidth of the $i$-th link at time $t$ is denoted by $b_i(t)$.
For existing congestion control schemes, when the data traffic exceeds the available bandwidth, packets accumulate in the queue of the output interface, potentially causing queue overflow and packet loss. Consequently, the throughput capacity of the network will not exceed the bandwidth of its least capable link. This relationship can be expressed as
$$b(t) = \min_i b_i(t) \tag{1}$$
where $b(t)$ represents the throughput capacity of the network at time $t$. The average bandwidth of the entire network within the interval $[0, t]$ can be derived as follows:
$$\bar{b}_{\mathrm{exist}}(t) = \frac{1}{t}\int_0^t b(\tau)\,d\tau = \frac{1}{t}\int_0^t \min_i b_i(\tau)\,d\tau \tag{2}$$
It indicates that under existing schemes, the throughput capacity of the network is constrained by the bottleneck bandwidth. Our research objectives are to improve bandwidth utilization and enhance network throughput. Specifically, in our subsequent analysis, we employ average bandwidth as the metric for evaluating network throughput.

3.3.1. Throughput with CCLF

Assuming that the consumer requests Data at a high rate and the CS has sufficient storage space, utilizing the CCLF method involves scheduling the forwarding of all Data packets. For any link $i$, the maximum throughput capacity that the link can provide within the time interval $[0, t]$ is
$$q_i(t) = \int_0^t b_i(\tau)\,d\tau \tag{3}$$
The overall network throughput, which equals the traffic volume $q(t)$ received by the consumer, cannot exceed the throughput capacity of any single link:
$$q(t) \le q_i(t), \quad \forall i \in [1, N] \tag{4}$$
This means that $q(t)$ is limited by the throughput capacity of the links: $q(t) \le \min_i q_i(t)$.
Assume the bottleneck link is $c = \arg\min_i q_i(t)$, so that $\min_i q_i(t) = q_c(t)$. Due to the high transmission rate, the bottleneck link has no idle capacity. For downstream links $i > c$, the average bandwidth exceeds that of the bottleneck link, resulting in a minimal backlog of Data packets received from upstream. Assuming that downstream link $i$ delivers $\epsilon_i(t)$ less Data traffic than $\min_i q_i(t)$ during the interval $[0, t]$ due to Data backlog, then
$$\lim_{t\to\infty} \frac{\epsilon_i(t)}{q_c(t)} = 0 \tag{5}$$
The network throughput in $[0, t]$, that is, the data transfer $q(t)$ of the consumer, can be expressed as
$$q(t) = q_c(t) - \sum_{i=c}^{N} \epsilon_i(t) \tag{6}$$
$$\frac{q(t)}{q_c(t)} = 1 - \frac{\sum_{i=c}^{N} \epsilon_i(t)}{q_c(t)} \tag{7}$$
According to Formula (5), the ratio simplifies to $\lim_{t\to\infty} q(t)/q_c(t) = 1$. Therefore, using the CCLF method, the end-to-end throughput of the network is optimized as
$$q(t) \sim q_c(t) = \min_i \int_0^t b_i(\tau)\,d\tau \tag{8}$$
In this paper, the symbol "$\sim$" denotes asymptotic equivalence: $a(t) \sim b(t) \iff \lim_{t\to\infty} a(t)/b(t) = 1$. Thus, the average bandwidth within $[0, t]$ can be expressed as
$$b_{\mathrm{CCLF}}(t) = \frac{q(t)}{t} = \min_i \frac{1}{t}\int_0^t b_i(\tau)\,d\tau \tag{9}$$
Using the CCLF method, the equivalent average bandwidth of the network is determined by the minimum time-averaged bandwidth among all links. In contrast, using existing congestion control schemes, the equivalent bandwidth of the network is the instantaneous bandwidth of the bottleneck link at every moment. We prove the optimization in the following inequality:
$$\min_i \frac{1}{t}\int_0^t b_i(\tau)\,d\tau \;\ge\; \frac{1}{t}\int_0^t \min_i b_i(\tau)\,d\tau \tag{10}$$
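A small numerical example makes this inequality concrete. The bandwidth traces below are illustrative values (not measurements), with two links whose fluctuations are complementary:

```python
# Numerical check that the minimum of time-averaged link bandwidths is at
# least the time average of the instantaneous bottleneck bandwidth.
links = [
    [40, 10, 40, 10],   # link 1 bandwidth over four equal intervals
    [10, 40, 10, 40],   # link 2: complementary fluctuation
]
t = len(links[0])

# CCLF: average each link's capacity over time first, then take the minimum.
b_cclf = min(sum(trace) / t for trace in links)           # 25.0

# Existing schemes: throughput tracks the instantaneous bottleneck.
b_exist = sum(min(col) for col in zip(*links)) / t        # 10.0
```

Here CCLF's buffered scheduling can exploit each link's full average capacity (25 units), while schemes pinned to the momentary bottleneck achieve only 10.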

3.3.2. Average Bandwidth Comparison

In this section, we compare the average bandwidths $b_{\mathrm{CCLF}}$ and $\bar{b}_{\mathrm{exist}}$ derived from the preceding analysis. Initially, we assume that the available bandwidth of each link follows a Gaussian distribution. Utilizing the squeeze theorem and asymptotic analysis, we compute the mathematical expectations of both $b_{\mathrm{CCLF}}$ and $\bar{b}_{\mathrm{exist}}$. Through rigorous derivation and analytical examination, we demonstrate that when the number of forwarding hops $N \gg 1$, it holds true that $b_{\mathrm{CCLF}}(t) \gg \bar{b}_{\mathrm{exist}}(t)$.
Assume that the available bandwidth $b_i(t)$ follows a Gaussian distribution with mean $\mu$ and variance $\sigma^2$, and that the bandwidths of different links at different moments are independent. For the CCLF method, each link's time-averaged bandwidth converges to $\mu$, so
$$b_{\mathrm{CCLF}}(t) = \min_i \frac{1}{t}\int_0^t b_i(\tau)\,d\tau = \mu \tag{11}$$
In comparison, for the existing congestion control methods, let $\tilde{b}_i(t) = \frac{b_i(t)-\mu}{\sigma}$; then $\tilde{b}_i(t)$ follows the standard normal distribution with a mean of 0 and a variance of 1. Its Probability Density Function (PDF) is $\phi(x) = (2\pi)^{-1/2} e^{-x^2/2}$, and its Cumulative Distribution Function (CDF) is $\Phi(x) = \int_{-\infty}^{x} \phi(t)\,dt = \frac{1+\operatorname{erf}(x/\sqrt{2})}{2}$, where $\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt$.
First, we calculate the mathematical expectation of the bottleneck-link bandwidth $Z_N = \min_i \tilde{b}_i(t)$, whose CDF is
$$F_{Z_N}(z) = 1 - \prod_{i=1}^{N} P\left(\tilde{b}_i(t) \ge z\right) = 1 - \left(1 - \Phi(z)\right)^N \tag{12}$$
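Both $\Phi$ and $F_{Z_N}$ can be evaluated numerically with the standard error function; a minimal sketch using only the Python standard library:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function: Φ(x) = (1 + erf(x/√2)) / 2."""
    return (1 + math.erf(x / math.sqrt(2))) / 2

def cdf_min(z, n):
    """CDF of Z_N = min of N i.i.d. standard normals: 1 − (1 − Φ(z))^N."""
    return 1 - (1 - Phi(z)) ** n
```

For instance, `cdf_min(0.0, 2)` gives 0.75: with two independent links, the probability that the worse one falls below its mean is $1 - 0.5^2$.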
So its mathematical expectation is
$$E[Z_N] = \int_{-\infty}^{+\infty} z\,\frac{d F_{Z_N}(z)}{dz}\,dz = -\int_{-\infty}^{0} F_{Z_N}(z)\,dz + \int_{0}^{+\infty}\left(1 - F_{Z_N}(z)\right)dz = -\int_{0}^{+\infty}\left(1 - \Phi^N(z)\right)dz + \int_{0}^{+\infty}\left(1 - \Phi(z)\right)^N dz \tag{13}$$
Here, we use the identities $\Phi(-z) + \Phi(z) = 1$ and $\lim_{z\to-\infty} z\,F_{Z_N}(z) = \lim_{z\to+\infty} z\left(1 - F_{Z_N}(z)\right) = 0$. Note that $(t-1)^2 \ge 0$ implies $\frac{t^2}{2} \ge t - \frac{1}{2}$, i.e., $e^{-t^2/2} \le e^{-t+1/2}$. Therefore, it can be inferred that
$$0 \le 1 - \Phi(z) = \frac{1}{\sqrt{2\pi}}\int_z^{+\infty} e^{-t^2/2}\,dt \le \frac{1}{\sqrt{2\pi}}\int_z^{+\infty} e^{-t+1/2}\,dt = \sqrt{\frac{e}{2\pi}}\,e^{-z} \tag{14}$$
Then, the following bound holds:
$$0 \le \int_0^{+\infty}\left(1-\Phi(z)\right)^N dz \le \left(\sqrt{\frac{e}{2\pi}}\right)^{N}\int_0^{+\infty} e^{-Nz}\,dz = \frac{1}{N}\left(\sqrt{\frac{e}{2\pi}}\right)^{N} \tag{15}$$
Since $\sqrt{e/(2\pi)} < 1$, we have $\lim_{N\to\infty}\frac{1}{N}\left(\sqrt{e/(2\pi)}\right)^N = 0$; according to the squeeze theorem, it can be inferred that
$$\lim_{N\to\infty}\int_0^{+\infty}\left(1-\Phi(z)\right)^N dz = 0 \tag{16}$$
According to the asymptotic theory presented in Chapter 10 of reference [36], as substantiated on page 302, for $z \to +\infty$,
$$1 - \Phi(z) \sim \frac{\phi(z)}{z} = \frac{e^{-z^2/2}}{\sqrt{2\pi}\,z} \tag{17}$$
Let $\lambda_N = \sqrt{2\ln N}$; then, for $z > \lambda_N$, $e^{-z^2/2} < \frac{1}{N}$. Using Bernoulli's inequality $1-(1-x)^N \le Nx$ and the bound $\frac{1}{z} \le \frac{z}{\lambda_N^2}$ for $z \ge \lambda_N$, when $N \to +\infty$, there is
$$\int_{\lambda_N}^{+\infty}\left(1-\Phi^N(z)\right)dz \sim \int_{\lambda_N}^{+\infty}\left[1-\left(1-\frac{e^{-z^2/2}}{\sqrt{2\pi}\,z}\right)^{N}\right]dz \le \int_{\lambda_N}^{+\infty}\frac{N e^{-z^2/2}}{\sqrt{2\pi}\,z}\,dz \le \frac{N}{\sqrt{2\pi}}\int_{\lambda_N}^{+\infty}\frac{z\,e^{-z^2/2}}{\lambda_N^2}\,dz = \frac{1}{\sqrt{2\pi}\,\lambda_N^2} = \frac{1}{2\sqrt{2\pi}\,\ln N} \to 0 \tag{18}$$
Note also that
$$\Phi(\lambda_N - \epsilon) \le 1 - \frac{e^{-\ln N + \epsilon\sqrt{2\ln N} - \epsilon^2/2}}{\sqrt{2\pi}\left(\sqrt{2\ln N} - \epsilon\right)} = 1 - \frac{e^{-\epsilon^2/2}\,e^{\epsilon\sqrt{2\ln N}}}{\sqrt{2\pi}\,N\left(\sqrt{2\ln N} - \epsilon\right)}$$
$$\lim_{N\to\infty} \frac{e^{-\epsilon^2/2}\,e^{\epsilon\sqrt{2\ln N}}}{\sqrt{2\pi}\left(\sqrt{2\ln N} - \epsilon\right)} = +\infty$$
For $\epsilon \to 0$ and $N \to +\infty$, we can derive that
$$\int_0^{\lambda_N}\Phi^{N}(z)\,\mathrm{d}z = \int_0^{\lambda_N - \epsilon}\Phi^{N}(z)\,\mathrm{d}z + \int_{\lambda_N - \epsilon}^{\lambda_N}\Phi^{N}(z)\,\mathrm{d}z \le \int_0^{\lambda_N - \epsilon}\left(1 - \frac{e^{-\epsilon^2/2}\,e^{\epsilon\sqrt{2\ln N}}}{\sqrt{2\pi}\,N\left(\sqrt{2\ln N} - \epsilon\right)}\right)^{N}\mathrm{d}z + \epsilon \le \lambda_N \exp\left(-\frac{e^{-\epsilon^2/2}\,e^{\epsilon\sqrt{2\ln N}}}{\sqrt{2\pi}\left(\sqrt{2\ln N} - \epsilon\right)}\right) + \epsilon \to 0$$
Finally, for $N \to +\infty$,
$$\int_0^{+\infty}\left[1 - \Phi^{N}(z)\right]\mathrm{d}z = \int_0^{\lambda_N}\mathrm{d}z - \int_0^{\lambda_N}\Phi^{N}(z)\,\mathrm{d}z + \int_{\lambda_N}^{+\infty}\left[1 - \Phi^{N}(z)\right]\mathrm{d}z \approx \lambda_N = \sqrt{2\ln N}$$
Therefore, for $N \to +\infty$, it can be concluded that
$$E[Z_N] = -\int_0^{+\infty}\left[1 - \Phi^{N}(z)\right]\mathrm{d}z + \int_0^{+\infty}\left[1 - \Phi(z)\right]^{N}\mathrm{d}z \approx -\sqrt{2\ln N}$$
Note that $\tilde{b}_i(t) = \frac{b_i(t) - \mu}{\sigma}$, so the mathematical expectation of $\min_i b_i(t)$ is approximately equal to $\mu + E[Z_N]\sigma$. For small $N$, $E[Z_1] = 0$, $E[Z_2] = -1/\sqrt{\pi}$, and $E[Z_3] = -1.5/\sqrt{\pi}$; when $N \to +\infty$, $E[Z_N]$ reaches its asymptotic extremum:
$$\lim_{N\to\infty}\frac{E[Z_N]}{-\sqrt{2\ln N}} = 1$$
In summary, using the CCLF method, the equivalent average available bandwidth of the network is $\mu$. However, using existing methods, the equivalent bandwidth is approximately equal to $\mu + E[Z_N]\sigma$. Since $E[Z_N] < 0$ when $N > 1$, we have $\mu > \mu + E[Z_N]\sigma$. Therefore, the following conclusion is drawn:
$$b_{\mathrm{CCLF}}(t) \ge \bar{b}_{\mathrm{exist}}(t)$$
To summarize, the CCLF method improves the equivalent average bandwidth of the network, which translates directly into enhanced network throughput.
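This asymptotic behavior can be verified numerically. The following Python sketch (illustrative only; the function name and parameters are our own, not part of the CCLF implementation) estimates $E[Z_N]$ by Monte Carlo and compares it with the exact small-$N$ values and the $-\sqrt{2\ln N}$ asymptote:

```python
import math
import random

def mean_min_of_normals(n_links, trials=20000, seed=1):
    """Monte Carlo estimate of E[Z_N], the expected minimum of
    N i.i.d. standard normal link-bandwidth deviations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.gauss(0.0, 1.0) for _ in range(n_links))
    return total / trials

# exact values: E[Z_1] = 0, E[Z_2] = -1/sqrt(pi), E[Z_3] = -1.5/sqrt(pi)
assert abs(mean_min_of_normals(1)) < 0.03
assert abs(mean_min_of_normals(2) + 1 / math.sqrt(math.pi)) < 0.03
assert abs(mean_min_of_normals(3) + 1.5 / math.sqrt(math.pi)) < 0.03

# for large N the estimate tracks -sqrt(2 ln N); convergence is slow,
# so the ratio is still somewhat below 1 at N = 200
ratio = mean_min_of_normals(200) / -math.sqrt(2 * math.log(200))
assert 0.7 < ratio < 1.0
```

The slow approach of the ratio toward 1 reflects the known lower-order correction terms in the extreme-value asymptotics, which vanish only as $N$ grows very large.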

3.4. Data Flow Scheduling Model

We introduce a flow scheduling model to ensure QoS guarantees. In this paper, QoS guarantees represent the swift forwarding of critical Data packets.
Assuming a time step of $\tau$, one Data packet can be sent within each time step. Consider the scenario where $N$ Data packets are stored in the CS waiting to be forwarded. The priority of Data $i$ is represented by $p_i$. This Data packet has been held in the CS for $m_i$ time steps, where $m_i$ is a known constant. Let $t_i$ denote its total queuing time. If the Data packet is scheduled for transmission after $n_i$ further time steps, then $t_i = (m_i + n_i)\tau$.
Our optimization goal is to identify a set $(n_1, \dots, n_N)$ that minimizes the total weighted queuing time of all Data packets, where the weighted queuing time is calculated as the product of the priority of each packet and its queuing time:
$$\min \sum_{i=1}^{N} p_i t_i = \min \sum_{i=1}^{N}\left(p_i m_i \tau + p_i n_i \tau\right)$$
We stipulate that a larger $p_i$ signifies a higher priority. It is worth noting that $(n_1, \dots, n_N)$ belongs to the set $P$, where $P$ denotes the set of permutations of the integers $1$ to $N$. Since $p_i$ and $m_i$ are constants, $\sum_{i=1}^{N} p_i m_i \tau$ is a constant term that can be ignored, and the optimization objective reduces to minimizing $\sum_{i=1}^{N} p_i n_i \tau$.
This problem can be seen as assigning the numbers $1, \dots, N$ to the Data packets. In an optimal set $(n_1, \dots, n_N)$, the priority $p_j$ of the Data with $n_j = 1$ must be the highest; that is, $\forall i \in [1, N],\ p_j \ge p_i$. This aligns with the well-known greedy algorithm.
We first establish the above proposition. If the assumption $\forall i \in [1, N],\ p_j \ge p_i$ does not hold, there exists a $k \ne j$ such that $p_k > p_j$. The objective function in this case is
$$L_1 = \sum_{i=1}^{N} p_i m_i \tau + \sum_{\substack{i=1 \\ i \ne j,k}}^{N} p_i n_i \tau + p_j n_j \tau + p_k n_k \tau$$
where $n_j = 1$ and $n_k > n_j$. If we exchange $n_j$ and $n_k$, the objective function becomes
$$L_2 = \sum_{i=1}^{N} p_i m_i \tau + \sum_{\substack{i=1 \\ i \ne j,k}}^{N} p_i n_i \tau + p_j n_k \tau + p_k n_j \tau$$
Subtracting the two equations, we obtain
$$L_2 - L_1 = \left(n_j - n_k\right)\left(p_k - p_j\right)\tau$$
Considering that $n_k > n_j$ and $p_k > p_j$, it can be inferred that $L_2 - L_1 < 0$, i.e., $L_2 < L_1$.
Therefore, if $p_j$ is not the highest priority, the objective function can be reduced by swapping its transmission order with that of the Data with the highest priority, so the original assignment is not optimal. This implies that a smaller $n_i$ should be allocated to Data with a larger $p_i$: when selecting the Data to be forwarded, it is preferable to choose the Data with the highest priority.
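The greedy rule established above can be checked against exhaustive search. This Python sketch (illustrative; the priorities and holding times are hypothetical) confirms that sending packets in descending priority order attains the minimum weighted queuing time over all permutations of the send slots:

```python
from itertools import permutations

def weighted_wait(p, m, order, tau=1.0):
    """Objective: sum_i p_i * (m_i + n_i) * tau, where order[k] is
    the index of the Data packet assigned send slot n = k + 1."""
    return sum(p[i] * (m[i] + slot) * tau
               for slot, i in enumerate(order, start=1))

def greedy_order(p):
    """Send Data packets in descending priority (larger p_i first)."""
    return sorted(range(len(p)), key=lambda i: -p[i])

p = [3, 1, 4, 1, 5]   # priorities p_i (hypothetical)
m = [2, 0, 1, 3, 1]   # time steps m_i already spent in the CS

best_cost = min(weighted_wait(p, m, o)
                for o in permutations(range(len(p))))
assert weighted_wait(p, m, greedy_order(p)) == best_cost
```

Since the $\sum_i p_i m_i \tau$ term is independent of the order, only the slot assignment matters, and ties between equal priorities leave the objective unchanged, exactly as argued in the text.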
This approach can be generalized to the forwarding of a series of Data packets. Given an optimal solution $(n_1, \dots, n_N)$, it is guaranteed that for each pair with $n_j < n_k$, it holds that $p_j \ge p_k$. The proof follows a similar logic as mentioned earlier, employing proof by contradiction. Assume that there exist some $n_j < n_k$ with $p_j < p_k$. In this scenario,
$$L_3 = \sum_{i=1}^{N} p_i m_i \tau + \sum_{\substack{i=1 \\ i \ne j,k}}^{N} p_i n_i \tau + p_j n_j \tau + p_k n_k \tau$$
If we exchange n j and n k , the objective function becomes
$$L_4 = \sum_{i=1}^{N} p_i m_i \tau + \sum_{\substack{i=1 \\ i \ne j,k}}^{N} p_i n_i \tau + p_j n_k \tau + p_k n_j \tau$$
By subtracting the two equations, we obtain $L_4 - L_3 = \left(n_j - n_k\right)\left(p_k - p_j\right)\tau < 0$. Consequently, the original configuration does not represent an optimal solution.
If all Data are prioritized distinctly, that is, $p_i \ne p_j$ for all $i \ne j$, then there exists a unique optimal solution: arranging the forwarding order of Data packets in descending order of $p_i$. However, when some Data packets share the same priority, swapping the forwarding order of these Data packets does not affect the objective function value but can lead to significant fluctuations in flow completion times on the consumer side. To address this issue, we optimize the transmission rate of each flow to ensure a reasonable bandwidth allocation among flows with equal priority, thereby maintaining consistent performance.
Let $d(t)^2 = \sum_i d_i(t)^2$, where $d_i(t)$ represents the average queuing time of the packets in flow $i$ waiting to be transmitted at time $t$. Since packet arrivals are random, $d_i(t)$ can be equivalently modeled as a stochastic process.
We optimize the transmission rate $x_i$ of each data flow $i$ to ensure that $d(t) = \sqrt{d(t)^2}$ decreases as quickly as possible, that is, minimizing $E\left[\frac{\mathrm{d}\,d(t)}{\mathrm{d}t}\right]$. Simultaneously, we seek to maximize the utility function $\sum_i U_i(x_i)$, where $U_i(x_i) = \log x_i$. Here, $U_i(x_i)$ is a twice-differentiable concave function, ensuring that the optimization problem has a unique global maximum. This is a multi-objective optimization problem. To solve it, we consolidate the multiple objectives into a single-objective optimization problem by introducing an adjustable weight parameter $K$:
$$\text{maximize} \quad \sum_i U_i(x_i) - K \cdot E\left[\frac{\mathrm{d}\,d(t)}{\mathrm{d}t}\right] \qquad \text{s.t.} \quad \sum_i x_i \le C, \quad x_i \ge 0$$
In this paper, we use C to represent the total available bandwidth in bits per second.
Let $\mu_i$ denote the mean number of packets arriving per second for flow $i$, and let $\sigma_i$ represent the standard deviation of the number of packets arriving per second for flow $i$. Let $M_i$ be the length (in bits) of each packet in flow $i$. Then, $d_i(t)$ corresponds to a Brownian Motion with Drift, which can be described by the following Stochastic Differential Equation:
$$\mathrm{d}d_i(t) = \left(\mu_i - \frac{x_i}{M_i}\right)\mathrm{d}t + \sigma_i\,\mathrm{d}B(t)$$
where $B(t)$ is a Brownian Motion. Using the well-known Itô formula [37], we can derive
$$\mathrm{d}\left(d_i(t)^2\right) = \left[2d_i(t)\left(\mu_i - \frac{x_i}{M_i}\right) + \sigma_i^2\right]\mathrm{d}t + 2d_i(t)\sigma_i\,\mathrm{d}B(t)$$
We define $\mu(t) = \sum_i \left[2d_i(t)\left(\mu_i - \frac{x_i}{M_i}\right) + \sigma_i^2\right]$ and $\sigma(t) = 2\sum_i d_i(t)\sigma_i$. From these settings, we can derive $\mathrm{d}\left(d(t)^2\right) = \mu(t)\,\mathrm{d}t + \sigma(t)\,\mathrm{d}B(t)$. Next, by applying the Itô formula and setting $d(t) = \sqrt{d(t)^2}$, we obtain
$$\mathrm{d}d(t) = \mathrm{d}\sqrt{d^2(t)} = \frac{4\mu(t)d^2(t) - \sigma(t)^2}{8d^3(t)}\,\mathrm{d}t + \frac{\sigma(t)}{2d(t)}\,\mathrm{d}B(t)$$
$$E\left[\frac{\mathrm{d}\,d(t)}{\mathrm{d}t}\right] = \frac{4\mu(t)d^2(t) - \sigma(t)^2}{8d^3(t)}$$
$$\frac{\partial}{\partial x_i}E\left[\frac{\mathrm{d}\,d(t)}{\mathrm{d}t}\right] = \frac{\partial}{\partial x_i}\frac{\mu(t)}{2d(t)} = -\frac{d_i(t)}{d(t)M_i}$$
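As a sanity check on this gradient, the following Python sketch evaluates the drift for hypothetical two-flow parameters and compares a central finite difference in $x_i$ against the closed form $-\frac{d_i(t)}{d(t)M_i}$ (the drift is linear in each $x_i$, so the finite difference is exact up to rounding). All parameter values below are invented for illustration:

```python
import math

def drift(x, d, mu, sigma, M):
    """E[dd(t)/dt] = (4*mu_t*d^2 - sigma_t^2) / (8*d^3), with
    mu_t = sum_i [2 d_i (mu_i - x_i/M_i) + sigma_i^2],
    sigma_t = 2 sum_i d_i sigma_i, and d = sqrt(sum_i d_i^2)."""
    dt = math.sqrt(sum(di * di for di in d))
    mu_t = sum(2 * di * (mui - xi / Mi) + si * si
               for di, mui, xi, si, Mi in zip(d, mu, x, sigma, M))
    sigma_t = 2 * sum(di * si for di, si in zip(d, sigma))
    return (4 * mu_t * dt * dt - sigma_t ** 2) / (8 * dt ** 3)

# hypothetical two-flow parameters (for illustration only)
x = [3e6, 5e6]         # transmission rates x_i (bit/s)
d = [0.4, 0.9]         # average queuing times d_i(t) (s)
mu = [400.0, 600.0]    # mean packet arrival rates mu_i (pkt/s)
sigma = [20.0, 30.0]   # arrival-rate standard deviations sigma_i
M = [8552.0, 8552.0]   # packet lengths M_i (bits)

d_tot = math.sqrt(sum(di * di for di in d))
for i in range(len(x)):
    xp, xm = list(x), list(x)
    xp[i] += 1.0
    xm[i] -= 1.0
    numeric = (drift(xp, d, mu, sigma, M)
               - drift(xm, d, mu, sigma, M)) / 2.0
    analytic = -d[i] / (d_tot * M[i])
    assert abs(numeric - analytic) < 1e-10
```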
For the optimization problem, we aim to maximize the function as shown in (32). To solve this problem, we define the Lagrangian L as follows:
$$L(x_1, \dots, x_n, \lambda) = \sum_i U_i(x_i) - K \cdot E\left[\frac{\mathrm{d}\,d(t)}{\mathrm{d}t}\right] + \lambda\left(C - \sum_i x_i\right)$$
where λ is the Lagrange multiplier, and C is the available channel bandwidth.
According to the Karush–Kuhn–Tucker (KKT) [38] conditions, the necessary conditions for an optimal solution are given by
$$\frac{\partial L(x_1, \dots, x_n, \lambda)}{\partial x_i} = U_i'(x_i) + K\frac{d_i(t)}{d(t)M_i} - \lambda = 0, \qquad \text{s.t.} \quad \sum_i x_i \le C, \quad \lambda \ge 0$$
$$\lambda\left(C - \sum_i x_i\right) = 0$$
Based on Equation (39), it follows that
$$U_1'(x_1) + K\frac{d_1(t)}{d(t)M_1} = U_2'(x_2) + K\frac{d_2(t)}{d(t)M_2} = \cdots = U_n'(x_n) + K\frac{d_n(t)}{d(t)M_n}$$
To select an appropriate value of $K$, we set $U_i'(x_i) = \frac{n}{C} = K\frac{d_i(t)}{d(t)M_i} = K \cdot n \times 0.1\mu_i$, and then the following can be deduced: $K = \frac{n \times 0.1\,\mu_i}{\log(C/n)}$. Considering the congested link environment, the optimal transmission rates for multiple data flows satisfy
$$U_1'(x_1) + K\frac{d_1(t)}{d(t)M_1} = U_2'(x_2) + K\frac{d_2(t)}{d(t)M_2} = \cdots = U_n'(x_n) + K\frac{d_n(t)}{d(t)M_n}, \qquad \sum_i x_i = C, \qquad x_i \ge 0$$
in which $U_i(x_i) = \log x_i$. We let $S_i = K\frac{d_i(t)}{d(t)M_i}$, and then we can derive the relationship between the transmission rates of the data flows that achieves the optimal solution:
$$\frac{1}{x_1} + S_1 = \frac{1}{x_2} + S_2 = \cdots = \frac{1}{x_n} + S_n$$
Next, we prove that under the constraint $\sum_i x_i = C$, there exists a unique optimal solution $(x_1, x_2, \dots, x_n)$. We define $\frac{1}{x_1} + S_1 = \frac{1}{x_2} + S_2 = \cdots = \frac{1}{x_n} + S_n = P$. By substituting this setting into Equation (42), we obtain
$$\sum_{i=1}^{n} \frac{1}{P - S_i} = C$$
where $P > S_i$ for all $i \in [1, n]$. Let $P_0 = \max\{S_1, S_2, \dots, S_n\}$. It follows that $P > P_0$.
The function $\sum_{i=1}^{n}\frac{1}{P - S_i}$ is strictly decreasing with respect to $P$, as each term $\frac{1}{P - S_i}$ is positive and monotonically decreasing. As $P \to \infty$, the sum approaches zero. Furthermore, when $P = P_0 + \frac{n}{C}$, we have
$$\sum_{i=1}^{n} \frac{1}{P - S_i} = \sum_{i=1}^{n} \frac{1}{P_0 + \frac{n}{C} - S_i} \le \sum_{i=1}^{n} \frac{C}{n} = C$$
Thus, the function $\sum_{i=1}^{n}\frac{1}{P - S_i}$ is continuous on the interval $(P_0, \infty)$, strictly decreasing, and ranges from $+\infty$ to $0$. Therefore, by the intermediate value theorem, there exists a unique $P$ such that $\sum_{i=1}^{n}\frac{1}{P - S_i} = C$, and hence a unique optimal solution $(x_1, x_2, \dots, x_n)$.
This completes the proof of the uniqueness of the optimal transmission rate allocation $(x_1, x_2, \dots, x_n)$ for multiple data flows. In simulation experiments and practical deployments, inputting actual parameter values allows the numerical solution for $(x_1, x_2, \dots, x_n)$ to be computed efficiently, offering a straightforward way to solve Equation (43). Consequently, the proposed method incurs low computational complexity during practical data flow scheduling operations.
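For instance, the numerical solution can be obtained with a simple bisection, since $\sum_{i=1}^{n} 1/(P - S_i)$ is continuous and strictly decreasing on $(P_0, \infty)$. The following Python sketch (with hypothetical $S_i$ values; this is not the ndnSIM implementation) finds the unique $P$ and recovers the rates $x_i = 1/(P - S_i)$:

```python
def allocate_rates(S, C, iters=200):
    """Solve sum_i 1/(P - S_i) = C for the unique P > P0 = max(S_i),
    then return the rates x_i = 1/(P - S_i)."""
    def total(P):
        return sum(1.0 / (P - s) for s in S)

    P0 = max(S)
    lo, hi = P0, P0 + len(S) / C   # at P = P0 + n/C the sum is <= C
    for _ in range(iters):         # bisection on the decreasing sum
        mid = 0.5 * (lo + hi)
        if total(mid) > C:
            lo = mid
        else:
            hi = mid
    P = 0.5 * (lo + hi)
    return [1.0 / (P - s) for s in S]

# hypothetical S_i for three flows sharing a link of capacity C = 10
rates = allocate_rates([0.2, 0.2, 0.5], 10.0)
```

With equal $S_1 = S_2$, the first two flows receive identical rates, the rates sum to $C$, and the flow with the larger $S_i$ is assigned the larger share, matching the relationship derived above.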
Our demonstration shows that the CCLF method enables optimal scheduling for multiple data flows. In the following section, we first describe the structure of the forwarding dispatch table. Subsequently, we provide an example to elaborate on the detailed scheduling of Data packet forwarding, ensuring swift delivery of critical data and appropriate bandwidth allocation for flows with identical priority.

3.5. Forwarding Dispatch Table

The dispatch table records each data flow and the Data packets belonging to it, classifying them for forwarding based on their priority and name prefix. In this paper, the producer assigns a specific priority to each content object. A series of Data packets sharing the same name prefix but differing in sequence numbers constitutes a data flow, as illustrated in Figure 5. The payloads carried by these packets can be aggregated to form complete content. The priority of the content object is embedded in each corresponding Data packet, providing a foundation for making forwarding decisions.
To ensure lossless forwarding and satisfy the service requirements of multi-priority data flows, a forwarding dispatch table is established in each router. This table classifies and records incoming Data based on the forwarding interface, priority, and name prefix. The structure of this dispatch table is illustrated in Figure 6.
In the dispatch table, each interface is associated with a forwarding database $Q$. The database $Q$ records all Data packets that should be forwarded from that interface. Specifically, $Q$ contains the priority $p_i$ and the corresponding queue $q_i = Q(p_i)$. The details recorded in $q_i$ include mappings of the name prefix $n_i$ to its respective sub-queue $s_i = q_i(n_i)$. Each mapping corresponds to a specific data flow. The sub-queue $s_i$ holds pointers to each Data packet in the form of a queue and records the total byte length $byte(s_i)$ of the forwarded Data belonging to the data flow. The parameters used in the dispatch table are summarized in Table 1.
The lookup process of the dispatch table is based on the following keywords: interface, priority, name, and bytes sent. Each round of keyword-based searching narrows down the search scope. The keywords, such as interface ID, priority, and bytes sent, are all positive integers, ensuring an efficient search process. Furthermore, all packets belonging to the same data flow share the same name prefix, so the speed of name lookup depends on the number of data flows at a specific priority. Currently, multiple efficient name lookup algorithms have been developed within the NDN. Since the lookup of the dispatch table is only performed on packets that have not yet been forwarded, its entries are fewer compared to those in the CS, further reducing the search scope. In conclusion, the complexity of the dispatch table lookup is well within the manageable capacity of routers.
When the network size increases without introducing new data flows, the complexity of both the dispatch table lookups and the scheduling algorithm remains unchanged. However, if the network size grows and leads to more data flows being transmitted, the CS buffer space on the router will be more extensively utilized. Consequently, both the dispatch table lookups and the scheduling algorithm complexity will increase, proportional to the number of newly added data flows.

3.6. Data Forwarding Scheduling

We propose a priority-based Data scheduling approach. The output queue of each interface in the router is monitored to ensure sufficient available space. Next, the most suitable Data packet is selected from the dispatch table and inserted into the output queue according to the scheduling algorithm. CCLF prioritizes the forwarding of critical, delay-sensitive Data packets while allocating bandwidth to regular data flows based on metrics such as queuing time, packet reception rate, and packet size, as detailed in Section 3.4.
The Data scheduling approach involves two key processes: (1) inserting Data information into the dispatch table, and (2) selecting Data for forwarding from the dispatch table. The priority of each data flow aids in Data scheduling, ensuring low forwarding delay for critical flows and maximizing network throughput. The Data scheduling process is illustrated in Figure 7 and described as follows.
When a router receives a Data packet, it searches for the Pending Interest Table (PIT) entry with the same name as the Data. If no matching entry exists, the router discards the Data. Otherwise, the Data are stored in the CS and forwarded according to the interface recorded in the PIT. The next step involves inserting the Data pointer into the dispatch table. First, the forwarding database Q is determined based on the output interface. Then, the Data pointer is inserted into the appropriate sub-queue based on its name and priority. The detailed process is described as follows:
  • Read the priority field $p_i$ in Data $i$ and determine the queue $q_i = Q(p_i)$ from the forwarding database $Q$.
  • If the queue $q_i$ with priority $p_i$ does not exist, create a new queue $q_i = Q(p_i)$ in the forwarding database $Q$.
  • Read the name $n_i$ in Data $i$ and select the sub-queue $s_i = q_i(n_i)$ from the queue $q_i$.
  • If the sub-queue $s_i$ is empty, it indicates that a new data flow with the name prefix $n_i$ is being received. To ensure appropriate bandwidth allocation among flows with the same priority, reset the recorded sent byte count $byte(s_i)$ of all sub-queues with equal priority to zero.
  • Insert the pointer of Data $i$ into the sub-queue $s_i$.
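The insertion steps above can be sketched as follows. This is a minimal Python illustration of the dispatch-table layout (the actual implementation uses C++ map templates in ndnSIM; all names here are our own):

```python
from collections import defaultdict, deque

class DispatchTable:
    """Per-interface forwarding database Q: priority p -> queue q,
    where q maps a name prefix n to a sub-queue s holding Data
    pointers plus the sent-byte count byte(s) for that flow."""
    def __init__(self):
        # face id -> priority -> name prefix -> {"queue", "bytes"}
        self.faces = defaultdict(lambda: defaultdict(dict))

    def insert(self, face, priority, prefix, data_ptr):
        q = self.faces[face][priority]
        if prefix not in q:
            # a new flow at this priority: reset the sent-byte counters
            # so equal-priority flows restart from a fair baseline
            for sub in q.values():
                sub["bytes"] = 0
            q[prefix] = {"queue": deque(), "bytes": 0}
        q[prefix]["queue"].append(data_ptr)

table = DispatchTable()
table.insert(0, 2, "/video/a", ("a0", 1069))   # face 0, priority 2
```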
At this point, the PIT entry can be erased as it has been satisfied. Specifically, a scheduling thread is configured for each interface. When a forwarding database Q is not empty, the scheduling thread serving the corresponding interface remains active. After one Data packet is sent out from the output queue, the next Data packet is selected from the forwarding database Q and inserted into the output queue. The scheduling thread continues to run until the database Q becomes empty.
Furthermore, after each interval of $\delta$, the queue is checked. If the output queue is empty and the associated forwarding database $Q$ is not empty, the Data recorded in $Q$ will be scheduled one by one. Specifically, high-priority Data can be forwarded preferentially. If multiple data flows have the same priority, factors such as average queuing time, packet reception speed, and packet size are considered. If these factors are also identical, then according to Equation (43), the parameters of these data flows are identical, i.e., $S_1 = S_2 = \cdots = S_n$. Consequently, each data flow is assigned the same forwarding rate. To achieve this balance, Data packets from the flow with the fewest sent bytes are selected first. The scheduling algorithm is described as follows:
  • After sending one Data packet, retrieve the database $Q$. Select the non-empty queue $q_{\max}$ with the highest priority $p_{\max}$ from $Q$.
  • If there is only one sub-queue $s_i$ in $q_{\max}$, remove the first element $l$ from $s_i$ and insert the corresponding Data packet into the output queue of the interface. Update the sent byte count of $s_i$ from $byte(s_i)$ to $byte(s_i) + byte(l)$.
  • If there are multiple sub-queues $s_j$ in $q_{\max}$, select the sub-queue with the fewest sent bytes: $s_{\mathrm{opt}} = \arg\min_j byte(s_j)$. Remove the first element $l$ from $s_{\mathrm{opt}}$, insert the associated Data into the output queue, and update the byte count of $s_{\mathrm{opt}}$ from $byte(s_{\mathrm{opt}})$ to $byte(s_{\mathrm{opt}}) + byte(l)$.
  • If the sub-queue $s_i$ or $s_{\mathrm{opt}}$ becomes empty after removing the first element $l$, erase it to prevent selecting an empty sub-queue in subsequent steps.
  • Repeat from the first step until the termination condition is met.
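The selection steps can likewise be sketched in Python, operating on the forwarding database $Q$ of a single interface (the flow names and contents below are hypothetical): highest priority first, then the equal-priority sub-queue with the fewest sent bytes.

```python
from collections import deque

# Q for one interface: priority -> {name prefix:
#   {"queue": deque of (name, size), "bytes": sent-byte count}}
Q = {
    1: {"/telemetry": {"queue": deque([("t0", 1069)]), "bytes": 0}},
    0: {"/video/a": {"queue": deque([("a0", 1069), ("a1", 1069)]),
                     "bytes": 2138},
        "/video/b": {"queue": deque([("b0", 1069)]), "bytes": 1069}},
}

def select_next(Q):
    """Pop the next Data to forward, or None if Q is empty."""
    if not Q:
        return None
    p_max = max(Q)                                # highest priority
    q = Q[p_max]
    prefix = min(q, key=lambda n: q[n]["bytes"])  # fewest sent bytes
    sub = q[prefix]
    name, size = sub["queue"].popleft()
    sub["bytes"] += size
    if not sub["queue"]:
        del q[prefix]        # erase empty sub-queue
        if not q:
            del Q[p_max]     # erase empty priority queue
    return name

order = []
while Q:
    order.append(select_next(Q))
# the high-priority telemetry Data goes first, then equal-priority
# flows are balanced by sent bytes: ["t0", "b0", "a0", "a1"]
```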
We track the sent byte count of each flow and use this information to schedule the forwarding of Data packets. In the above scenario, we can ensure a balanced distribution of bandwidth among multiple data flows with equal priority and otherwise identical factors.
Notably, the CCLF method can handle dynamic changes in traffic priority. It uses a router-driven scheduling mechanism where routers adaptively adjust forwarding rates based on updated priorities, ensuring all flows are allocated appropriate rates. Additionally, in satellite communication, high-priority flows are reserved for critical real-time data, including information related to satellite control, and occupy only a small proportion of network traffic. This results in low-priority flows occupying a significant proportion of the network traffic, thus ensuring they are not starved or blocked. The design ensures balanced resource allocation while supporting both real-time and general data transmission.
In summary, the CCLF method leverages the inherent CS storage to provide an effective solution for heavy traffic transmission. It enables lossless forwarding by optimizing packet scheduling, and reduces end-to-end data acquisition delay in LEO satellite networks. Moreover, it ensures high bandwidth utilization, thereby enhancing overall throughput.

4. Implementation

Our CCLF approach focuses on preventing packet loss, reducing flow completion time, improving bandwidth utilization, and enhancing network throughput. Notably, Interest packets are smaller in size than Data packets. The network reserves a portion of its available bandwidth specifically for forwarding Interest packets in all directions. Therefore, this study concentrates on congestion control and traffic scheduling for Data packet forwarding, excluding the scheduling of Interest packets.
To evaluate the performance of our proposed CCLF method, we implemented it in an NS-3-based NDN simulator, namely ndnSIM [39]. This simulator includes the NFD prototype [40] and the ndn-cxx library [41]. The ndn-cxx is a C++ library for implementing NDN primitives, while NFD is a network forwarding daemon that implements the logic for packet forwarding and processing.

4.1. Experimental Settings

To conduct our experiments, we used a machine equipped with an Intel(R) Core(TM) i7-11800H @ 2.30 GHz CPU and 32 GB of RAM. The simulations were performed using ndnSIM 2.8 [42] on Ubuntu 20.04 LTS. We developed additional modules and modified some of the original code in ndnSIM 2.8.
First, a dispatch table was introduced to each router. Next, we implemented a forwarding database associated with each network interface. Within this database, priority queues and data flow sub-queues were organized to categorize Data packets based on their priority and name prefixes. These implementations leverage the mapping template in C++ to enhance functionality. The structure of the dispatch table is shown in Figure 6.
Second, we enhanced the structure of Data packets. Specifically, we added a field to represent priority in each Data packet. This field is a variable of type uint16_t, where a larger value indicates higher priority. We declared and defined read/write operations for this new field. Since ndnSIM adopts Type–Length–Value (TLV) encoding, we declared additional blocks for the new element and implemented the corresponding function operations. To encode the priority field, we used the function prependByteArrayBlock. Additionally, we decoded the priority field within the function Data::wireDecode(const Block& wire).
In addition, a new flag, CanBeReplace of type bool, was added to each CS entry. This flag indicates that the Data packet cannot be replaced before being forwarded.
Importantly, we modified the data forwarding process. We defined operations including updating, inserting, querying, and erasing for the dispatch table. Notably, the integration of a dispatch table represents an additional feature within the NDN architecture. We modified the data forwarding mechanisms within NDN to better address the unique characteristics of LEO satellite networks. The NDN architecture has already undergone extensive testing and deployment in terrestrial environments; by incorporating a dispatch table, our method optimizes bandwidth allocation, thereby enhancing performance in LEO satellite networks.

4.2. Simulation Settings

Our simulation illustrates the competitive forwarding of data flows with different priorities over congested links.
The simulation environment is inspired by the Qianfan (G60) Constellation, a major LEO satellite project in China aimed at providing global broadband Internet services. According to publicly available data, the inter-satellite laser communication links within this constellation can achieve transmission rates ranging from 100 Mbps to several tens of Gbps. For ground users, the downlink bandwidth can range from 100 Mbps to 1 Gbps. Based on this channel information, we constructed the simulation environment.
This study focuses on LEO satellite networks that exhibit periodic changes. By leveraging the contact windows in satellite communication, the dynamic network can be modeled as a sequence of static networks. Based on this, experiments were conducted on a static network, where packet forwarding can be treated as point-to-point.
We evaluate the performance of CCLF and typical PCON [35] in both constant-bandwidth and dynamic-bandwidth scenarios. Here, “dynamic bandwidth” specifically refers to the variation in available bandwidth for forwarding Data packets over a link. The parameters used in the constant-bandwidth scenario are summarized in Table 2.
We employed a constant-bit-rate (CBR) consumer model to send Interest packets at a configurable constant rate. The timeout-based retransmission mechanism on the consumer side operates effectively. Retransmitting Interest packets refreshes the input records in the PIT entry, ensuring the availability of the reverse path for Data forwarding. In particular, the periodic changes in satellite orbits render routing in LEO satellite networks predictable. Adjacent satellites exchange forwarding information stored in the PIT to ensure backhaul Data forwarding. Consequently, Data packets are not lost due to path changes. On this basis, we simulated and evaluated the CCLF method.
First, we compared the performance of CCLF and PCON in terms of flow completion time. PCON is the most recognized NDN congestion control mechanism. It achieves congestion control through router-based congestion detection and marking, as well as user-end adjustment of Interest sending rates. PCON integrates key features of existing methods and serves as a model for receiver-driven congestion control technologies. Next, we evaluated the throughput and bandwidth utilization of CCLF and PCON. Specifically, we assessed consumer throughput and total network throughput, which reflects bandwidth utilization. Finally, we counted packet losses on congested links and verified lossless forwarding.

5. Experimental Results and Discussion

In this section, the CCLF method is extensively evaluated in terms of flow completion time (FCT), network throughput, bandwidth utilization, and packet loss count. Experiments were conducted in both constant-bandwidth and dynamic-bandwidth scenarios to serve different evaluation purposes.

5.1. Congestion Control Under Constant Bandwidth

We use the topology shown in Figure 8 to evaluate the performance of CCLF. In this scenario, four consumers request content from four producers.
Our network model includes a bottleneck link between Router 2 and Router 3 with a bandwidth of 10 Mbps, while all other links have a bandwidth of 100 Mbps. This configuration enables us to study the impact of constrained bandwidth on multi-flow forwarding performance. The detection interval is set to δ = 0.01 s. We simulate and compare the CCLF and PCON methods in terms of the FCT, network throughput, bandwidth utilization, and packet loss performance.

5.1.1. Flow Completion Time

We evaluate the completion time of flows with different priorities. The priority of $\mathrm{Flow}_j$, which is requested by $\mathrm{Consumer}_j$, is represented as $p_j$. A larger $p_j$ indicates a higher priority. The relative priorities are set as $p_1 = p_2 < p_3 < p_4$. Consumer 2 sends 3000 Interests, while the other consumers each send 5000 Interests. Each Interest brings back a Data packet containing 1024 bytes of content, with a total length of 1069 bytes. Both the speed of sending Interests and the speed of receiving Data are set to 1000 pkts/s.
In this case, Router 3 will handle a total of 34 Mbps of Data forwarding from four data flows, which exceeds the available bandwidth of 10 Mbps, leading to link congestion. According to the scheduling algorithm described in Section 3.4, Flow 4 and Flow 3, which have high priority, will be forwarded first. Then, under limited link capacity, there exists a unique optimal rate allocation for forwarding Flow 1 and Flow 2, which share the same priority. In this experiment, due to the synchronous sending of Interest packets, the Data packets of Flow 1 and Flow 2, sent at the same time, arrive at Router 3 simultaneously, resulting in identical queuing delays and packet reception rates. By applying these actual parameters to Equation (43), we conclude that the optimal rate allocation for Flow 1 and Flow 2 is $x_1 = x_2$. This implies that an even distribution of bandwidth among flows with equal priority, such as Flow 1 and Flow 2, leads to optimal network utility.
By comparison, under the PCON mechanism, consumers adaptively adjust the speed of sending Interests based on congestion feedback. Additionally, the priority of Data packets is ignored during Data forwarding. We compare the performance of CCLF with that of PCON, and the results are presented in Figure 9.
The results indicate that, using the PCON method, the FCT of Flow 3 and Flow 4 with high priority is 17.15 s and 18.09 s, respectively, both of which are close to the FCT of 18.43 s for Flow 1. The flow completion time under the PCON is only related to the content size. In contrast, using the CCLF method reduces the FCT of Flow 3 and Flow 4 to 8.7 s and 5.15 s, respectively, representing a reduction of 50% and 70% compared to PCON. Notably, the PCON method yields an average FCT of 16.60 s, whereas the CCLF method results in an average FCT of 10.79 s. This represents a 35% reduction in average flow completion time.
This is because the CCLF method can prioritize the forwarding of Data with high priority, satisfying the transmission of content with high QoS requirements. However, PCON does not distinguish the priority or urgency of Data and instead distributes limited bandwidth evenly across all flows.
Specifically, Figure 9 shows that the differences in completion times between consecutive priority levels are inconsistent. Our research focuses on QoS guarantees for critical real-time data, particularly satellite control information, ensuring low transmission delays for high-priority data flows. Consequently, the reduction in delay across different priority levels is not regular. Given that high-priority flows constitute a smaller proportion of network traffic, low-priority flows do not experience excessive delays or starvation. In future work, we will further quantify variations in delay to provide predictable transmission delays for data flows across different priority levels.

5.1.2. Network Throughput

We analyze both the overall throughput of the network and the data transmission rate on the consumer side. The simulation scenario remains the same as before, with a bandwidth of 10 Mbps for the congested link. We evaluate the performance of CCLF and PCON, and the results are presented in Figure 10 and Figure 11.
As shown in Figure 10, with the CCLF method, initially, 8.552 Mbps of bandwidth is allocated to the highest-priority Flow 4, and Flow 3 with the second-highest priority occupies the remaining 1.448 Mbps. After 5.2 s, Consumer 4 has received all Data packets. Then, the Data packets of Flow 3, backlogged in the CS of Router 3, occupy all bandwidth for forwarding. By 8.7 s, Consumer 3 has received all Data packets. Then, the bandwidth is equally shared by Flow 1 and Flow 2 with the same priority. Due to the smaller size of Flow 2, its transmission ends early at 13.8 s. Afterwards, Flow 1 occupies all bandwidth. The result reflects the swift forwarding of high-priority data flows and the balanced bandwidth usage among flows with the same priority.
In contrast, the PCON method cannot meet the requirements of urgent flows. Figure 11 shows that with the PCON method, the bottleneck bandwidth is allocated among all flows, leading to a nearly equal distribution. Except for Flow 2, which is completed within 12.7 s, the FCT of other critical flows, such as Flow 3 and Flow 4, is close to 18 s.
The experimental results indicate that although the throughput on the consumer side is influenced by various factors, the overall network throughput under the CCLF method has consistently remained at a high level.

5.1.3. Bandwidth Utilization

We evaluate bandwidth utilization using the total network throughput as an indicator. As shown in Figure 12, when compared to CCLF, the bandwidth of the congested link is not fully utilized in PCON. This phenomenon becomes evident after the 12th second.
This occurs due to the congestion feedback mechanism of PCON, which causes all consumers to slow down the sending rate of Interests, resulting in insufficient bandwidth utilization. In contrast, as shown in Figure 12, when using the CCLF method, the total throughput of the congested link remains close to 10 Mbps, with bandwidth utilization approaching 100%. The CCLF method completes all flow transmissions in 15.51 s, whereas the PCON method requires 18.43 s. Compared to PCON, the CCLF method reduces the total flow completion time by 16%.

5.1.4. Verification of Zero Packet Loss

We evaluate the packet loss count under burst flow conditions. We use a scenario similar to the previous experiment. The moment when consumers start sending Interests is set to 0 s, 1 s, 3 s, and 5 s. In this experiment, Flows 3 and 4 are burst flows that cause a significant increase in forwarded packets, leading to potential packet loss. We evaluate the packet loss performance of the CCLF and PCON methods on the congested link. The statistics of packet loss count are presented in Figure 13.
Figure 13 indicates that, in the presence of sudden traffic surges, the use of the PCON method results in the loss of 57 packets within 8 s. In contrast, the CCLF method achieves zero packet loss in network-layer transmission, enabling lossless forwarding.
This occurs because PCON requires consumers to reduce the sending rate of Interests when detecting congestion. Consumers implement multiple rate adjustments before the relief of congestion, during which packet loss is inevitable. In contrast, the CCLF method instructs routers to use CS storage to buffer burst data flows. Data packets are scheduled according to available bandwidth, ensuring zero packet loss.
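The buffer-then-schedule behavior described above can be sketched as a simple per-router dispatcher: arriving Data is always buffered rather than dropped, and each transmission tick drains packets strictly in priority order within the link budget. This is a minimal illustration only; the names (Router, enqueue, drain_tick) are ours, not from the paper:

```python
import heapq
from collections import deque

class Router:
    """Sketch of CCLF-style lossless forwarding: burst Data is buffered
    in the Content Store (CS) and drained strictly by priority."""

    def __init__(self, link_rate_pps):
        self.link_rate_pps = link_rate_pps  # packets the link can send per tick
        self.queues = {}                    # priority -> FIFO of buffered Data
        self.heap = []                      # min-heap of active priorities

    def enqueue(self, priority, packet):
        # Buffer instead of dropping: arrival never causes loss here.
        if priority not in self.queues:
            self.queues[priority] = deque()
            heapq.heappush(self.heap, priority)
        self.queues[priority].append(packet)

    def drain_tick(self):
        # Forward up to link_rate_pps packets, highest priority first
        # (smaller number = higher priority in this sketch).
        sent = []
        budget = self.link_rate_pps
        while budget > 0 and self.heap:
            p = self.heap[0]
            q = self.queues[p]
            sent.append(q.popleft())
            budget -= 1
            if not q:
                heapq.heappop(self.heap)
                del self.queues[p]
        return sent
```

In this sketch every enqueued packet is eventually sent, mirroring the zero-loss property; an actual CCLF router would additionally bound the CS and pace forwarding by the measured available bandwidth rather than a fixed per-tick budget.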

5.2. Congestion Control Under Dynamic Bandwidth

We evaluate the performance of CCLF in networks with dynamic bandwidth. The topology used for simulation is illustrated in Figure 14. The available bandwidth of Link 1 between Router 1 and Router 2, as well as Link 2 between Router 2 and Router 3, varies over time. The specific changes in bandwidth are detailed in Table 3. Other parameters are set to match those in Table 2. We compare the CCLF and PCON methods in terms of FCT, network throughput, bandwidth utilization, and packet loss.

5.2.1. Flow Completion Time

We evaluate the completion time of flows with different priorities. The priority of Flow j, requested by Consumer j, is denoted as p_j. The relative priorities are set such that p_1 = p_2 < p_3 < p_4. Consumer 2 sends 3000 Interest packets, while each of the other consumers sends 5000 Interest packets. Each consumer sends Interests at a rate of 1000 pkts/s, and each Interest brings back a Data packet with a total length of 1069 bytes. In this scenario, Router 3 and Router 2 receive Data packets at rates exceeding the bandwidth of their forwarding links. We compare the performance of CCLF with PCON, and the results are shown in Figure 15.
The results indicate that, when using the PCON method, the flow completion time depends only on the content size: the FCTs of Flow 1, Flow 3, and Flow 4, which all transmit the same amount of data, each exceed 21 s. With the CCLF method, however, the FCTs of the high-priority Flow 3 and Flow 4 are reduced to 10.18 s and 5.08 s, respectively. Notably, the average FCT of the four flows is 20.67 s under PCON but 12.06 s under CCLF, a 41% reduction.
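The reported averages can be checked directly from the per-flow numbers; a quick computation, taking the Flow 1 and Flow 2 values under CCLF (17.91 s and 15.06 s) from the throughput discussion in Section 5.2.2:

```python
# Reported flow completion times (s) under CCLF; Flow 1 and Flow 2
# values are taken from the throughput discussion in Section 5.2.2.
fct_cclf = {"Flow 1": 17.91, "Flow 2": 15.06, "Flow 3": 10.18, "Flow 4": 5.08}
avg_cclf = sum(fct_cclf.values()) / len(fct_cclf)  # ≈ 12.06 s
avg_pcon = 20.67                                   # reported average FCT under PCON

reduction = (avg_pcon - avg_cclf) / avg_pcon       # ≈ 0.417
print(f"CCLF average FCT: {avg_cclf:.2f} s, reduction vs. PCON: {reduction * 100:.1f}%")
```

The computed reduction is consistent with the roughly 41% figure reported in the text.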
This is because the CCLF method prioritizes the forwarding of high-priority Data packets. However, PCON does not distinguish data priority, and the limited bandwidth is shared among all flows. By employing the CCLF method, Data packets are scheduled for forwarding while maintaining full bandwidth occupancy on congested links. This reduces the average flow completion time.

5.2.2. Network Throughput

We analyze both the overall throughput of the network and the data transmission rate on the consumer side. The simulation scenario is the same as Section 5.2.1. We compare the performance of CCLF against PCON, and the results are shown in Figure 16 and Figure 17.
Using the CCLF method, Figure 16 shows that, initially, Flow 4 with the highest priority and Flow 3 with the second-highest priority occupy Link 1. After 5.08 s, all Data packets of Flow 4 are transmitted, allowing packets of Flow 3 buffered in the CS of Router 2 to fully utilize the available bandwidth of Link 1. By 10.18 s, Flow 3 transmission completes, and then Flow 2 and Flow 1, with equal priority, share the available bandwidth equally for data forwarding. Flow 2 completes transmission at 15.06 s, and then Flow 1 occupies the entire bandwidth. Notably, the overall network throughput under the CCLF method remains consistently at a high level.
By contrast, when using the PCON method, all packets are forwarded based on their receiving order, and bandwidth is allocated in a balanced manner. Figure 17 shows that the FCT of critical flows, such as Flow 3 and Flow 4, exceeds 21 s each. Thus, the PCON method fails to meet the QoS requirements of urgent flows.

5.2.3. Bandwidth Utilization

We use total network throughput as an indicator to evaluate bandwidth utilization. Figure 18 shows the total throughput of congested Link 1. Compared with CCLF, the PCON method fails to fully utilize the dynamic bandwidth, which is evident from the 10th to the 14th second.
Specifically, when the available bandwidth of Link 1 is 14 Mbps, the CCLF method achieves a total throughput of 14 Mbps, resulting in nearly 100% bandwidth utilization. In contrast, the PCON method yields a total throughput of only 6 Mbps, corresponding to a 43% bandwidth utilization. Thus, the CCLF method increases bandwidth utilization by 57% compared with PCON.
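The 57% gain corresponds to the percentage-point gap between the two utilizations during the interval in which Link 1 offers 14 Mbps; a quick check using the reported figures:

```python
link1_bandwidth = 14.0  # Mbps, available bandwidth of Link 1 in this interval
cclf_throughput = 14.0  # Mbps, reported total throughput under CCLF
pcon_throughput = 6.0   # Mbps, reported total throughput under PCON

cclf_util = cclf_throughput / link1_bandwidth  # 1.00, i.e., ~100%
pcon_util = pcon_throughput / link1_bandwidth  # ≈ 0.43, i.e., ~43%
gain_points = (cclf_util - pcon_util) * 100    # ≈ 57 percentage points
print(f"PCON utilization: {pcon_util:.0%}, CCLF gain: {gain_points:.0f} points")
```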
This discrepancy is attributed to the lack of data buffering and storage in PCON. Consequently, even if the bandwidth of Link 1 exceeds that of Link 2, the throughput of Link 1 is constrained by the bottleneck bandwidth of Link 2. By leveraging data storage and packet buffering in CS, CCLF ensures that the bandwidth of Link 1 remains fully utilized. As a result, it takes 17.91 s to complete all data flow transmissions using the CCLF method, compared with 23.19 s for the PCON method. Thus, the CCLF method reduces the total flow completion time by 23% compared with PCON.

5.2.4. Verification of Zero Packet Loss

We evaluate packet loss under burst flow conditions. The experimental scenario is similar to that of the previous experiment, with the priorities set as p_1 = p_2 < p_3 < p_4. Consumers begin sending Interest packets at 0 s, 1 s, 3 s, and 5 s, respectively. In this setup, Flows 3 and 4 are burst flows that significantly increase the number of forwarded packets, potentially causing packet loss. We assess the packet loss for the CCLF and PCON methods on Router 2 and Router 3. The packet loss statistics are illustrated in Figure 19.
Figure 19 demonstrates that the PCON mechanism fails to quickly alleviate congestion during a sudden traffic surge, resulting in packet losses across multiple routers. In this experiment, the PCON method leads to a total of 1592 packet losses within 12 s. By contrast, the CCLF method ensures zero packet losses across all routers.
This is because PCON requires consumers to gradually reduce the sending rate of Interests and cannot immediately decrease data traffic to a congestion-free level, making packet loss unavoidable during the adjustment period. In contrast, the CCLF method enables each router to directly control the forwarding of Data packets. When burst traffic arrives, Data packets can be buffered in the CS to ensure that no packets are discarded. By avoiding packet loss, data retransmissions are eliminated, reducing flow completion time.
Notably, under extreme congestion or network failures, the CS may reach capacity and overflow, in which case the CCLF method cannot guarantee lossless transmission. However, NDN features proximity data retrieval: upon recovery from a network failure, users can send Interest packets to retrieve Data packets from upstream routers whose CS has stored them, avoiding a fetch from the original producer and significantly reducing content acquisition delay. This paper emphasizes that the CCLF method guarantees lossless forwarding at the network layer: as long as link connections are intact, Data packets are forwarded without loss.

5.3. Evaluation Summary

Based on the above experimental results, we compare the technical characteristics of the CCLF and PCON methods for congestion control, as shown in Table 4.
Following the PCON method, consumers adjust the sending rate of Interests to limit Data packet traffic, thereby alleviating congestion. However, there is a response delay between detecting congestion and reducing the Data forwarding rate. Especially during sudden traffic surges, PCON struggles to alleviate congestion promptly, leading to frequent packet loss. This receiver-driven approach resolves congestion by indirectly limiting Data traffic but results in lower bandwidth utilization and network throughput.
In contrast, the proposed CCLF method is a router-driven solution where each router directly schedules the forwarding of Data packets, thereby effectively preventing congestion. Moreover, CCLF achieves both rapid forwarding of delay-sensitive Data and balanced scheduling of regular Data. In the case of burst traffic, Data packets can be stored in the CS and subsequently scheduled and forwarded based on priority. This ensures full bandwidth utilization without packet loss while reducing flow completion time.

6. Conclusions

This paper proposes the CCLF method as a solution for congestion control challenges in satellite-based cloud platforms. By optimizing the storage and forwarding mechanisms of NDN, including output queue monitoring and prioritized packet scheduling, the CCLF method achieves zero packet loss and meets QoS requirements. It dynamically adjusts the forwarding rates of data flows in response to the time-varying conditions of satellite networks, enabling full bandwidth utilization. This approach addresses the conventional throughput limitations imposed by bottleneck bandwidth, enhancing network throughput. The simulation results indicate that the CCLF method achieves zero packet loss, reduces the flow completion time, improves network throughput, and provides QoS guarantees. In summary, the CCLF method addresses critical issues such as packet loss, retransmission timeouts, and significant transmission delay variations. Its features of lossless transmission, high throughput, and low delays make it highly suitable for the development of satellite-based cloud platforms. In future work, we will quantify variations in transmission delays across different priority levels, providing predictable delays for data flows supporting various applications.

Author Contributions

Conceptualization, W.D. and T.L.; methodology, W.D.; validation, J.A. and Y.Z.; formal analysis, Z.L.; writing—original draft preparation, W.D.; writing—review and editing, T.L.; visualization, Z.L.; supervision, Y.Z.; project administration, J.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are private, from the School of Information and Electronics, Beijing Institute of Technology.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Diao, W.; An, J.; Li, T.; Wang, X.; Liu, Z.; Li, Z. Lossless congestion control based on priority scheduling in named data networking. In Proceedings of the 2024 International Conference on Electrical, Electronic Information and Communication Engineering (EEICE), Xi’an, China, 12–14 April 2024; Volume 2849, p. 012096. [Google Scholar] [CrossRef]
  2. Luo, X.; Chen, H.H.; Guo, Q. LEO/VLEO Satellite Communications in 6G and Beyond Networks–Technologies, Applications, and Challenges. IEEE Netw. 2024, 38, 273–285. [Google Scholar] [CrossRef]
  3. Toka, M.; Lee, B.; Seong, J.; Kaushik, A.; Lee, J.; Lee, J.; Lee, N.; Shin, W.; Poor, H.V. RIS-Empowered LEO Satellite Networks for 6G: Promising Usage Scenarios and Future Directions. IEEE Commun. Mag. 2024, 62, 128–135. [Google Scholar] [CrossRef]
  4. Han, Z.; Xu, C.; Zhao, G.; Wang, S.; Cheng, K.; Yu, S. Time-Varying Topology Model for Dynamic Routing in LEO Satellite Constellation Networks. IEEE Trans. Veh. Technol. 2023, 72, 3440–3454. [Google Scholar] [CrossRef]
  5. Diao, W.; An, J.; Li, T.; Zhu, C.; Zhang, Y.; Wang, X.; Liu, Z. Low delay fragment forwarding in LEO satellite networks based on named data networking. Comput. Commun. 2023, 211, 216–228. [Google Scholar] [CrossRef]
  6. Jiang, D.; Wang, F.; Lv, Z.; Mumtaz, S.; Al-Rubaye, S.; Tsourdos, A.; Dobre, O. QoE-Aware Efficient Content Distribution Scheme For Satellite-Terrestrial Networks. IEEE Trans. Mob. Comput. 2023, 22, 443–458. [Google Scholar] [CrossRef]
  7. Chen, X.; Gu, W.; Dai, G.; Xing, L.; Tian, T.; Luo, W.; Cheng, S.; Zhou, M. Data-Driven Collaborative Scheduling Method for Multi-Satellite Data-Transmission. Tsinghua Sci. Technol. 2024, 29, 1463–1480. [Google Scholar] [CrossRef]
  8. Arsalan, A.; Burhan, M.; Rehman, R.A.; Umer, T.; Kim, B.S. E-DRAFT: An Efficient Data Retrieval and Forwarding Technique for Named Data Network Based Wireless Multimedia Sensor Networks. IEEE Access 2023, 11, 15315–15328. [Google Scholar] [CrossRef]
  9. Cobblah, C.N.A.; Xia, Q.; Gao, J.; Xia, H.; Kusi, G.A.; Obiri, I.A. A Secure and Lightweight NDN-Based Vehicular Network Using Edge Computing and Certificateless Signcryption. IEEE Internet Things J. 2024, 11, 27043–27057. [Google Scholar] [CrossRef]
  10. Demiroglou, V.; Martinis, M.; Florou, D.; Tsaoussidis, V. IoT Data Collection Over Dynamic Networks: A Performance Comparison of NDN, DTN and NoD. In Proceedings of the 2023 IEEE 9th World Forum on Internet of Things (WF-IoT), Aveiro, Portugal, 12–27 October 2023; pp. 1–6. [Google Scholar] [CrossRef]
  11. Xia, Z.; Zhang, Y.; Fang, B. Exploiting Knowledge for Better Mobility Support in the Future Internet. Mob. Netw. Appl. 2022, 27, 1671–1687. [Google Scholar] [CrossRef]
  12. Ahmad, Z.N.; Triana, F.; Rachel, R.; Negara, R.M.; Mayasari, R.; Astuti, S.; Rizal, S. Optimizing Forwarding Strategies in Named Data Networking Using Reinforcement Learning. In Proceedings of the 2023 9th International Conference on Wireless and Telematics (ICWT), Solo, Indonesia, 6–7 July 2023; pp. 1–6. [Google Scholar] [CrossRef]
  13. Rodríguez-Pérez, M.; Herrería-Alonso, S.; Suárez-Gonzalez, A.; López-Ardao, J.C.; Rodríguez-Rubio, R. Cache Placement in an NDN-Based LEO Satellite Network Constellation. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 3579–3587. [Google Scholar] [CrossRef]
  14. Tan, X.; Feng, W.; Lv, J.; Jin, Y.; Zhao, Z.; Yang, J. f-NDN: An Extended Architecture of NDN Supporting Flow Transmission Mode. IEEE Trans. Commun. 2020, 68, 6359–6373. [Google Scholar] [CrossRef]
  15. Siddiqui, S.; Waqas, A.; Khan, A.; Zareen, F.; Iqbal, M.N. Congestion Controlling Mechanisms in Content Centric Networking and Named Data Networking—A Survey. In Proceedings of the 2019 2nd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 30–31 January 2019; pp. 1–7. [Google Scholar] [CrossRef]
  16. Kato, T.; Bandai, M. A hop-by-hop window-based congestion control method for named data networking. In Proceedings of the 2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 12–15 January 2018; pp. 1–7. [Google Scholar] [CrossRef]
  17. Bandai, M.; Kato, T.; Yamamoto, M. A congestion control method for named data networking with hop-by-hop window-based approach. IEICE Trans. Commun. 2019, 102, 97–110. [Google Scholar] [CrossRef]
  18. Schneider, K.; Yi, C.; Zhang, B.; Zhang, L. A Practical Congestion Control Scheme for Named Data Networking. In Proceedings of the 3rd ACM Conference on Information-Centric Networking, Kyoto, Japan, 26–28 September 2016; pp. 21–30. [Google Scholar] [CrossRef]
  19. Song, S.; Zhang, L. Effective NDN congestion control based on queue size feedback. In Proceedings of the ICN’22: 9th ACM Conference on Information-Centric Networking, Osaka, Japan, 19–21 September 2022; pp. 11–21. [Google Scholar] [CrossRef]
  20. Li, Z.; Shen, X.; Xun, H.; Miao, Y.; Zhang, W.; Luo, P.; Liu, K. CoopCon: Cooperative Hybrid Congestion Control Scheme for Named Data Networking. IEEE Trans. Netw. Serv. Manag. 2023, 20, 4734–4750. [Google Scholar] [CrossRef]
  21. Zhong, X.; Zhang, J.; Zhang, Y.; Guan, Z.; Wan, Z. PACC: Proactive and Accurate Congestion Feedback for RDMA Congestion Control. In Proceedings of the IEEE INFOCOM 2022—IEEE Conference on Computer Communications, London, UK, 2–5 May 2022; pp. 2228–2237. [Google Scholar] [CrossRef]
  22. Bai, H.; Li, H.; Que, J.; Smahi, A.; Zhang, M.; Chong, P.H.J.; Li, S.Y.R.; Wang, X.; Lu, P. QSCCP: A QoS-Aware Congestion Control Protocol for Information-Centric Networking. IEEE Trans. Netw. Serv. Manag. 2024; early access. [Google Scholar] [CrossRef]
  23. Bai, H.; Li, H.; Que, J.; Zhang, M.; Chong, P.H.J. DSCCP: A Differentiated Service-based Congestion Control Protocol for Information-Centric Networking. In Proceedings of the 2022 IEEE Wireless Communications and Networking Conference (WCNC), Austin, TX, USA, 10–13 April 2022; pp. 1641–1646. [Google Scholar] [CrossRef]
  24. Google Cloud. M2 UltraMemory Machine Types. Available online: https://cloud.google.com/compute/docs/machine-types#m2-machine-types (accessed on 23 December 2024).
  25. Carofiglio, G.; Gallo, M.; Muscariello, L. ICP: Design and evaluation of an Interest control protocol for content-centric networking. In Proceedings of the 2012 Proceedings IEEE INFOCOM Workshops, Orlando, FL, USA, 25–30 March 2012; pp. 304–309. [Google Scholar] [CrossRef]
  26. Saino, L.; Cocora, C.; Pavlou, G. CCTCP: A scalable receiver-driven congestion control protocol for content centric networking. In Proceedings of the 2013 IEEE International Conference on Communications (ICC), Budapest, Hungary, 9–13 June 2013; pp. 3775–3780. [Google Scholar] [CrossRef]
  27. Carofiglio, G.; Gallo, M.; Muscariello, L.; Papali, M. Multipath congestion control in content-centric networks. In Proceedings of the 2013 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Turin, Italy, 14–19 April 2013; pp. 363–368. [Google Scholar] [CrossRef]
  28. Zheng, R.; Zhang, B.; Zhao, X.; Wang, L.; Wu, Q. A Receiver-Driven Named Data Networking (NDN) Congestion Control Method Based on Reinforcement Learning. Electronics 2024, 13, 4609. [Google Scholar] [CrossRef]
  29. Rozhnova, N.; Fdida, S. An extended Hop-by-hop interest shaping mechanism for content-centric networking. In Proceedings of the 2014 IEEE Global Communications Conference, Austin, TX, USA, 8–12 December 2014; pp. 1–7. [Google Scholar] [CrossRef]
  30. Mejri, S.; Touati, H.; Malouch, N.; Kamoun, F. Hop-by-Hop Congestion Control for Named Data Networks. In Proceedings of the 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, Tunisia, 30 October–3 November 2017; pp. 114–119. [Google Scholar] [CrossRef]
  31. Touati, H.; Mejri, S.; Malouch, N.; Kamoun, F. Fair hop-by-hop interest rate control to mitigate congestion in named data networks. Clust. Comput. 2021, 24, 2213–2230. [Google Scholar] [CrossRef]
  32. Mejri, S.; Touati, H.; Kamoun, F. Hop-by-hop interest rate notification and adjustment in named data networks. In Proceedings of the 2018 IEEE Wireless Communications and Networking Conference (WCNC), Barcelona, Spain, 15–18 April 2018; pp. 1–6. [Google Scholar] [CrossRef]
  33. Ye, Y.; Lee, B.; Qiao, Y. Hop-by-Hop Congestion Measurement and Practical Active Queue Management in NDN. In Proceedings of the GLOBECOM 2020—2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
  34. Thibaud, A.; Fasson, J.; Arnal, F.; Sallantin, R.; Dubois, E.; Chaput, E. Reactivity Enhancement of Cooperative Congestion Control for Satellite Networks. In Proceedings of the 2020 3rd International Conference on Hot Information-Centric Networking (HotICN), Hefei, China, 12–14 December 2020; pp. 135–141. [Google Scholar] [CrossRef]
  35. Schneider, L.K.; Yi, C.; Zhang, B. PCON Source Code. 2019. Available online: https://github.com/schneiderklaus/ndnSIM (accessed on 1 June 2024).
  36. David, H.A.; Nagaraja, H.N. Order Statistics; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  37. Itô, K. On Stochastic Differential Equations; American Mathematical Society: New York, NY, USA, 1951. [Google Scholar]
  38. Kuhn, H.; Tucker, A. Nonlinear Programming. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1951; pp. 481–492. [Google Scholar]
  39. Mastorakis, S.; Afanasyev, A.; Zhang, L. On the Evolution of ndnSIM: An Open-Source Simulator for NDN Experimentation. SIGCOMM Comput. Commun. Rev. 2017, 47, 19–33. [Google Scholar] [CrossRef]
  40. NDN Team. Named Data Networking Forwarding Daemon (NFD) 22.02-3-gdeb5427 Documentation. 2022. Available online: https://named-data.net/doc/NFD/current/overview.html (accessed on 6 July 2024).
  41. NDN Teams. NDN-cxx. 2021. Available online: http://named-data.net/doc/ndn-cxx (accessed on 10 September 2024).
  42. NDN Team. NS-3-Based NDN Simulator. 2021. Available online: https://ndnsim.net/2.8/# (accessed on 14 February 2024).
Figure 1. Data forwarding process in NDN.
Figure 2. Packet loss statistics under the PCON mechanism.
Figure 3. Data forwarding process in CCLF with dispatch table.
Figure 4. Comparison of end-to-end average throughput in time-varying networks.
Figure 5. Data flow composed of Data packets with the same name prefix.
Figure 6. The structure of the dispatch table.
Figure 7. Data forwarding scheduling in CCLF.
Figure 8. The simulated network with constant bandwidth.
Figure 9. The flow completion time in the network with constant bandwidth.
Figure 10. Network throughput under constant bandwidth using the CCLF method.
Figure 11. Network throughput under constant bandwidth using the PCON method.
Figure 12. Total throughput of the network with constant bandwidth.
Figure 13. Validation of loss-free forwarding of CCLF under burst flow conditions.
Figure 14. The simulated network with dynamic bandwidth.
Figure 15. The flow completion time in the network with dynamic bandwidth.
Figure 16. Network throughput under dynamic bandwidth using the CCLF method.
Figure 17. Network throughput under dynamic bandwidth using the PCON method.
Figure 18. Total throughput of the congested link with dynamic bandwidth.
Figure 19. Validation of loss-free forwarding of CCLF with time-varying bandwidth.
Table 1. Parameters used in the dispatch table.
Parameters | Definitions
p_i, ∀i | The priority of Data i
n_i, ∀i | The name prefix of Data i
q_i = Q(p_i), ∀i | The queue with priority p_i in database Q
s_i = q_i(n_i), ∀i | The sub-queue with name prefix n_i in the queue q_i
bytes_i | Total byte length of the forwarded Data in sub-queue s_i
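Read as a data structure, Table 1 describes a two-level index: a database Q of priority queues, each holding per-prefix sub-queues together with a forwarded-byte counter. The following is a hypothetical Python sketch (class and method names are ours, not from the paper):

```python
from collections import defaultdict, deque

class DispatchTable:
    """Sketch of the dispatch table in Table 1: a database Q of priority
    queues q_i, each holding per-name-prefix sub-queues s_i, plus a
    bytes_i counter of forwarded Data per sub-queue."""

    def __init__(self):
        # Q: priority p_i -> { name prefix n_i -> sub-queue s_i }
        self.Q = defaultdict(lambda: defaultdict(deque))
        self.bytes_forwarded = defaultdict(int)  # (p_i, n_i) -> bytes_i

    def insert(self, priority, prefix, data_packet):
        self.Q[priority][prefix].append(data_packet)

    def pop_next(self):
        """Pop one Data packet from the highest-priority, least-served
        sub-queue (balancing bytes among same-priority prefixes)."""
        if not self.Q:
            return None
        p = min(self.Q)  # smaller value = higher priority in this sketch
        # Among same-priority prefixes, serve the one with fewest bytes sent.
        prefix = min(self.Q[p], key=lambda n: self.bytes_forwarded[(p, n)])
        pkt = self.Q[p][prefix].popleft()
        self.bytes_forwarded[(p, prefix)] += len(pkt)
        if not self.Q[p][prefix]:
            del self.Q[p][prefix]
            if not self.Q[p]:
                del self.Q[p]
        return pkt
```

The tie-break on bytes_forwarded in pop_next reflects the balanced bandwidth sharing among same-priority flows observed in the experiments.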
Table 2. Simulation parameters.
Simulation Parameters | Value
Link type | point-to-point link
General link bandwidth | 100 Mbps
Congested link bandwidth | 10 Mbps
Propagation delay | 10 ms
Queue (max packets on the link) | 200
Interest sending rate | 1000 pkts/s
Size of Content Store (CS) | 32 GB (approx. 3.2 × 10^7 packets)
Table 3. Parameters for dynamic bandwidth settings.
Time Interval | Bandwidth of Link 1 | Bandwidth of Link 2
0–6 s | 10 Mbps | 10 Mbps
6–10 s | 6 Mbps | 14 Mbps
10–14 s | 14 Mbps | 6 Mbps
14–25 s | 6 Mbps | 14 Mbps
Table 4. Comparison of congestion control schemes.
Performance | PCON | CCLF
Packet loss | Occurrence | Non-occurrence
Control master | Consumers and routers | Routers
Control mode | Adjust sending rate of Interests | Data scheduling and forwarding
Throughput | Floating | Stable
Bandwidth usage | Underutilized bandwidth | Full bandwidth utilization

Share and Cite

MDPI and ACS Style

Diao, W.; An, J.; Li, T.; Zhang, Y.; Liu, Z. Lossless and High-Throughput Congestion Control in Satellite-Based Cloud Platforms. Electronics 2025, 14, 1206. https://doi.org/10.3390/electronics14061206

