Network Delay and Cache Overflow: A Parameter Estimation Method for Time Window Based Hopping Network

A basic understanding of delayed packet loss is key to successfully applying it to multi-node hopping networks. Given the problem of delayed packet loss due to network delay in a hopping network environment, we review early time windowing approaches, most of which focus on end-to-end hopping networks. However, they do not apply to the general hopping network environment, where data transmission from the sending host to the receiving host usually requires forwarding at multiple intermediate nodes; network latency and network cache overflow may then result in delayed packet loss. To overcome this challenge, we propose a delay time window and a method for estimating it. By examining the network delays of different data tasks, we obtain network delay estimates for these tasks, use them as estimates of the delay time window, and validate the estimated results to verify that they satisfy the delay distribution law. In addition, simulation tests and a discussion of the results demonstrate how to maximize the reception of delayed packets. The analysis shows that the method is more general and applicable to multi-node hopping networks than existing time windowing methods.


Hopping Network
As shown in Figure 1, assume that the network G is made up of physical nodes N1, N2, . . . , Nz. The network parameter (IP address, port number, etc.) configured for the corresponding node Ni, i = 1, 2, . . . , z, is denoted as hij, j = 1, 2, . . . , ξ. Typically, the set H = (hij)z×ξ remains constant during operation, while nodes can rely on H for mutual access. For each node, it is specified that at every period T, a parameter is selected from H and configured to it by some rule, so its network parameters change every period T. This change is called network hopping, and T is the hopping period.
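The hopping rule can be sketched as follows; the deterministic selection rule and the candidate (IP, port) parameters are illustrative assumptions, since the text does not fix a concrete rule:

```python
# Sketch of synchronous network-parameter hopping (illustrative; the actual
# rule for selecting a parameter from H is not specified in the text).
def hop_parameter(H, node_i, period_j):
    """Select the parameter h_ij for node i in hopping period j.

    H is the z-by-xi matrix of candidate parameters; a simple deterministic
    rule keeps all legitimate nodes synchronized across periods.
    """
    row = H[node_i]
    return row[period_j % len(row)]

# Example: 3 nodes, 4 candidate (IP, port) pairs each (hypothetical values).
H = [[(f"10.0.{i}.{j}", 4000 + j) for j in range(4)] for i in range(3)]
print(hop_parameter(H, node_i=0, period_j=5))  # -> ("10.0.0.1", 4001)
```

Any rule works as long as every legitimate node applies it to the same H with a synchronized period counter.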



Time Window
As network parameters constantly change in the hopping, a particular network parameter is only valid for a specific period, which is the time window for that network parameter, as shown in Figure 2. Then the network parameter hij corresponding to the i-th node at the j-th hopping period is valid for the time range [twa(j), twb(j)], the hij time window, denoted as TW(hij). Since the network nodes generally hop synchronously, the time windows of the parameters corresponding to each node fully overlap, i.e., TW(h1j) = TW(h2j) = . . . = TW(hzj) = [twa(j), twb(j)], so they are denoted uniformly as TW(j). In the normal case, the time window within which data is sent is the sending time window STW(j) = [sta(j), stb(j)]. The time window within which data is received is the receiving time window RTW(j) = [rta(j), rtb(j)]. The set of times at which data arrives after being sent within the sending time window is the arrival-data time window ATW(j) = [ata(j), atb(j)]. In the ideal case of no network delay, RTW(j) = STW(j) = ATW(j).
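The window relationships can be made concrete with a short sketch; the interval arithmetic follows the definitions above, while the numeric values of T and d are made up for illustration:

```python
def covers(outer, inner):
    """True if interval outer = [a, b] fully covers interval inner."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

T = 100                  # hopping period (ms), illustrative
d = 15                   # network delay (ms), illustrative
STW = (0, T)             # sending time window STW(j)
ATW = (0 + d, T + d)     # arrival-data window, shifted right by the delay d
RTW = (0, T)             # receiving window equal to STW (the d = 0 ideal)

print(covers(RTW, ATW))  # False: data arriving after rtb(j) would be lost
```

With d = 0 all three intervals coincide and `covers` returns True, matching the ideal case in the text.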


Delayed Packet Loss
In hopping networks, mechanisms are usually designed so that all legitimate nodes can sense the change of H in real time, so the regular communication of legitimate nodes is unaffected. However, some transmission packet loss is still generated during a short period of time when H hops, which is called delayed packet loss. As shown in Figure 3, the specific cause of delayed packet loss is that ATW(j) is slightly delayed by a period of time compared to STW(j) due to the presence of network delay d, which results in RTW(j) not being able to cover ATW(j) completely. As a result, different methods have been tried to make RTW(j) cover ATW(j) as much as possible. We summarize the descriptions in the literature [15][16][17], which present two methods for RTW(j) to cover ATW(j). One is to maximize the coverage of ATW(j) by extending RTW(j) while STW(j) does not change [14]; the other is to shorten STW(j) while RTW(j) does not change, which is equivalent to delaying RTW(j) to give it a chance to cover ATW(j) completely [15,16]. However, both methods have drawbacks.


Figure 3. Delayed packets may be lost during network hopping.

Problem Statement
These two methods have two disadvantages. One is the problem of conflicting IP addresses. When RTW(j) is extended for some time, RTW(j) becomes a period longer than STW(j), and there will be a period of overlap between this extra time ODRT = [rta(j + 1), rtb(j)] and RTW(j + 1). Since during the overlapping time data can be received from different ports but not from different IP addresses, receiving data during the overlapping time can lead to conflicting IP addresses, as shown in Figure 4.
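The overlap ODRT = [rta(j + 1), rtb(j)] can be computed directly; the window endpoints below are illustrative:

```python
def odrt(rtw_j, rtw_next):
    """Overlap ODRT = [rta(j+1), rtb(j)] between an extended RTW(j)
    and RTW(j+1). Returns None when the windows do not overlap."""
    a = rtw_next[0]   # rta(j + 1)
    b = rtw_j[1]      # rtb(j), possibly pushed past the period boundary
    return (a, b) if a < b else None

# Illustrative numbers: 100 ms windows, RTW(j) extended by 20 ms.
print(odrt((0, 120), (100, 200)))  # -> (100, 120): conflict-prone overlap
print(odrt((0, 100), (100, 200)))  # -> None: no extension, no overlap
```

A non-`None` result is exactly the interval during which two hopping periods' IP addresses are simultaneously live.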
Figure 4. ODRT, the overlapping data reception time.

The other is the problem of tight business coupling. When STW(j) is shortened by a period of time, a period FDTT = [stb(j), sta(j + 1)], during which STW(j) is less than RTW(j), goes unused for sending data, and the time window must interact with the data tasks to obtain stb(j) and sta(j + 1) for this period. The aim is to allow all delayed data to arrive, and such a time window is mainly used in end-to-end networks (see Figure 5).


Proposed Scheme
In this section, we present a scheme for a delay time window compensation mechanism and its parameter estimation to solve the problems of IP address conflicts and tightly coupled services. The implementation of such a mechanism requires two conditions: firstly, a time window compensation mechanism and defined associated parameters that meet the above requirements; secondly, the mechanism can effectively perform its function as a time window when the influence of external factors on the compensation mechanism is negligible (it is assumed that there are no network attacks on the hopping network and that the hopping network hops at a uniform rate).

Delay Time Window Compensation Mechanism
According to the above, there are problems with both methods of making RTW(j) cover ATW(j), i.e., extending RTW(j) while STW(j) keeps its length, or shortening STW(j) while RTW(j) remains unchanged. We therefore propose keeping STW(j) equal in length to RTW(j) and delaying RTW(j) relative to STW(j) for a period of time, with the aim of allowing RTW(j) to cover ATW(j). This not only avoids IP address conflicts and tight task coupling but also solves the problem of delayed packet loss. The proof is as follows. Firstly, when STW(j) = RTW(j), there is no period during which RTW(j) is longer than STW(j), so no overlap with RTW(j + 1) is created, and therefore no IP address conflicts arise. Furthermore, there is also no period during which STW(j) is shorter than RTW(j), so the time window does not require interaction with the data tasks to obtain such a period, and therefore no tight service coupling arises.
Secondly, when RTW(j) = ATW(j), then all the data from STW(j) can be received in RTW(j), including the delayed data.
Thus, we let STW(j) = RTW(j) and RTW(j) = ATW(j); they can be achieved by the following two steps: (1) For STW(j) = RTW(j), let rtb(j) = stb(j) and rta(j) = sta(j). (2) For RTW(j) = ATW(j), assume that d is the network delay and w~ denotes a period of time (the delay time window). Under the condition that steps (1) and (2) are satisfied, it follows that ata(j) = sta(j) + d. Conversely, the values of rta(j) and rtb(j) can be determined as long as the estimated value of w~ is obtained and w~ = d, again satisfying STW(j) = RTW(j) and ATW(j) = RTW(j).
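The two steps can be sketched as follows, assuming the estimated w~ is already available (the numeric values are illustrative):

```python
def delayed_rtw(stw, w):
    """Delay time window compensation: keep |RTW| = |STW| and shift RTW
    right by w. With w equal to the network delay d, RTW coincides with
    ATW, so delayed packets are received without extending the window."""
    sta, stb = stw
    return (sta + w, stb + w)

d = 12                        # true network delay (illustrative)
STW = (0, 100)
ATW = (0 + d, 100 + d)        # arrival window shifted by d
RTW = delayed_rtw(STW, w=12)  # compensation with the estimate w~ = d
print(RTW == ATW)             # -> True
```

Because RTW keeps the length of STW, no overlap with RTW(j + 1) arises and no idle FDTT period is created, matching the proof above.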
In summary, the delay time window compensation mechanism has the characteristics of necessity and possibility:
1. Necessity means that a delay time window compensation mechanism must solve the problem of delayed packet loss and related problems.
2. Possibility means that there may be a suitable delay time window length, i.e., a value of w~ (see Figure 6).
Figure 6. Diagram of the delay time window.


Estimation for Delay Time Window
In this section, we obtain the value of w~ by calculation, as getting the value of w~ helps us to determine the values of rta(j) and rtb(j). Since d is unknown, we can only find a way to estimate the network delay and use it as an estimated value for w~. To this end, we present an example with multiple (l) data tasks distributed in an observable, wholly connected network. By progressively computing the network transmission delay of the typical data tasks in the example, we finally obtain an estimated value of the network delay for w~. These typical data tasks include an end-to-end data task, a multi-node data task, and multiple multi-node data tasks. The general data task is more complex, so we start with the simpler data tasks, as shown in Figure 7.

(1) A simple data task. Under normal situations, routers and network agents have sufficient network cache [17,18] to store and forward data, so in most cases, the network cache is still significantly effective in reducing delayed packet loss. However, a few instances can still cause packet loss problems for some data. These are the rare cases where the processor does not have enough processing capability due to large amounts of data or unusual data, and therefore the network cache is insufficient. The few cases that cause delayed packet loss are then our target. Consequently, we examine task 1 (r = 1), as shown in Figure 8. In task 1, we start with two nodes, with a network delay of d10 between node A and node B. When data is sent from A to B, a small amount of data is lost, while most of the data is received by B. This is because when B has sufficient network cache and w~ > d10, the network cache can store data arriving before rta(j) without data loss.
However, when B does not have sufficient network cache due to oversized data, data arriving before rta(j) is lost; furthermore, when w~ < d10 and RTW(j) cannot completely cover ATW(j), the data arriving after rtb(j) is lost (see Figure 9). The lost data can be expressed as

LAB = v(d10 − w~), (1)

where v denotes the transmission rate and LAB denotes the number of delayed packet losses. Letting LAB = 0 gives w~ = d10.
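A sketch of the two-node loss expression, clipped at zero so that w~ ≥ d10 yields no loss (the rate and delay values are illustrative):

```python
def delayed_loss(v, d10, w):
    """Delayed packet loss between two nodes, per Equation (1):
    L_AB = v * (d10 - w), clipped at zero since a delay time window
    at least as long as the delay loses nothing."""
    return max(v * (d10 - w), 0.0)

v = 1.5                               # transmission rate (packets/ms)
print(delayed_loss(v, d10=20, w=5))   # -> 22.5 packets lost
print(delayed_loss(v, d10=20, w=20))  # -> 0.0: w~ = d10 gives L_AB = 0
```

The zero-loss boundary w~ = d10 is exactly the condition derived above.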
(2) A multi-node data task. We observe this for task 2 (r = 2), where the variable m combined with the corresponding subscript r indicates the number mr of nodes of the corresponding data task, e.g., the number of nodes for data task 2 (r = 2) is m2. The variable d combined with the corresponding subscripts r and i denotes the network delay dri of task r and node i, e.g., the network delay for task 2 and node 1 is d21, as shown in Figure 10. The data tasks for multiple nodes are more complex because data passing through multiple nodes will generate network delays, and there is an accumulation of multiple network delays starting with three nodes. The network transmission of three nodes can be seen as two end-to-end network transmissions, i.e., node 1 to node 2 and node 2 to node 3. As the data incurs one network delay after passing node 2 and another network delay when it reaches node 3, the sum of the two network delays is the total network delay when the data arrives at node 3.
We can then calculate the number of packet losses based on the total network delay, but it should be clear that Equation (1) already gives the delayed packet loss for the first end-to-end transmission, so the delayed packet loss for the second end-to-end transmission is accumulated on top of it (see Figure 11). The delayed packet loss for three nodes is then the sum of the two end-to-end losses, each of the same form as Equation (1). Generalizing from three nodes, the delayed packet loss for the m2 nodes of task 2 accumulates the per-hop losses in the same way, giving a total accumulated delay of m2 d21 + (m2 − 1)d22 + · · · + d2m2. Based on Equation (3), the packet loss rate over the hopping period T can be written as a function fAB; letting ∂f2/∂w~ = 0 yields the value of w~ that minimizes the loss rate.

(3) Multiple data tasks for the whole network. In the observable network, any two or more nodes are connected by a single path. Thus, there are multiple paths (α) across the network, and in most cases there are multiple data tasks (l) on a single path. After observing the data tasks on one path, we can further examine the data tasks on multiple paths. Since data tasks on multiple paths are more complex, we first examine the data tasks on one path. By observation, accumulating the network delay of each data task gives the total network delay of the data tasks on this path. Similarly, based on (4), accumulating the delayed packet loss of the multiple data tasks on path one gives the total delayed packet loss for this path (see Figure 12). Based on the delayed packet loss for path 1, the delayed packet loss for the α paths follows by the same accumulation, and based on (6), the data packet loss rate for period T follows as a function fl. Letting ∂fl/∂w~ = 0, we have w~ = mr dr1 + (mr − 1)dr2 + · · · + drmr, for which La = 0, i.e., zero delayed packet loss for the data tasks on r.
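The accumulated delay mr dr1 + (mr − 1)dr2 + · · · + drmr can be computed with a one-line helper (the per-hop delays are illustrative):

```python
def accumulated_delay(delays):
    """Accumulated multi-node delay m*d1 + (m-1)*d2 + ... + 1*dm,
    the zero-loss delay time window length given in the text."""
    m = len(delays)
    return sum((m - i) * d for i, d in enumerate(delays))

# A task with m_r = 3 nodes and per-hop delays of 5, 3, and 2 ms:
print(accumulated_delay([5, 3, 2]))  # -> 3*5 + 2*3 + 1*2 = 23
```

Setting w~ to this value makes the accumulated loss zero for the task, per the condition derived above.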
According to [19] as the basis for the w~ estimation, the results obtained by Bolot's tests [19] are consistent with those obtained by [20][21][22] using simulation and experimental methods. Under the assumption that using bulk traffic for large packets and traffic for small packets in internet traffic estimation is consistent, the structure of the delay time distribution can be described as the relationship between the waiting times wn and wn+1 of packets n and n + 1. It involves the network traffic (bits) b, the packet size (bits) P, the service rate of the network (bits/ms) µ, and the packet queuing time δ. Their relationship can be expressed as wn+1 − wn = (b + P)/µ − δ. Taking Figure 13 as an example, it shows the distribution of wn+1 − wn − δ for n (n ≤ 800) UDP packets (32 bytes) at δ = 20 ms. wn+1 − wn − δ is the network load received by the server within [nδ, (n + 1)δ] and is measured in ms. From Figure 13, it can be seen that the time is mainly distributed in the area covered by the dashed line. Therefore, our proposed strategy is to select the larger data tasks, because they take up more time (delay time = receive time (atrκ) − send time (strκ)) through multi-hop routes [23,24]. For example, in Figure 13, assuming that the largest data task is r = 2, i = 1, . . . , m2, then the maximum delay time is m2 d21 + (m2 − 1)d22 + · · · + d2m2. Our approach is to use max{∑κ=1..i (at2κ − st2κ), 0} as an estimate for w~, i.e., w~ = max{∑κ=1..i (at2κ − st2κ), 0}.
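The estimator w~ = max{∑κ (at2κ − st2κ), 0} can be sketched directly; the send and arrival timestamps below are made up:

```python
def estimate_w(send_times, arrive_times):
    """Estimate the delay time window from the largest observed data task:
    w~ = max(sum(at_k - st_k), 0), per the proposed strategy."""
    total = sum(at - st for st, at in zip(send_times, arrive_times))
    return max(total, 0.0)

st = [0.0, 10.0, 20.0]      # send times st_2k (ms), illustrative
at = [4.0, 13.0, 25.0]      # observed arrival times at_2k (ms)
print(estimate_w(st, at))   # -> 12.0 ms
```

The clipping at zero guards against clock skew making an individual difference negative.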


Performance
In this section, we give the error and experimental evaluation of our two proposed delay time window schemes. For error evaluation, we mainly analyze the difference between estimated and optimal values of the delay time window. For experimental evaluation, we test the data loss rate of our schemes.

Error Evaluation
In this section, to assess the validity of our proposed estimate of the delay time window, we present an error assessment of the delay time window. The error is defined as δ = w~ − w, and the mathematical expectation of the error, E(δ), reflects the magnitude of the mean error between the estimated and optimal values of the delay time window; it can therefore be used as the indicator for error assessment. In the error evaluation, let the variables x1, x2, . . . , x2k+1 denote the network delays, with x1, x2, . . . , x2k+1 ~ N(0, 1) and independent of each other, and let w = (x1 + x2 + · · · + x2k+1)/(2k + 1). Let med(x1, x2, . . . , x2k+1) denote the function that returns the optimal value based on the proposed method. According to [25,26], the probability density of end-to-end delayed packet loss follows a gamma distribution; here its density is approximated by e^(−t²/2) (up to normalization). The analysis of many network delays is complex, so we start with three network delays x1, x2, and x3, for which δ = med(x1, x2, x3) − (x1 + x2 + x3)/3. Similarly, in general, δ = med(x1, x2, . . . , x2k+1) − (x1 + x2 + · · · + x2k+1)/(2k + 1). By the symmetry of the distribution, E(δ) = 0, from which it follows that w~ is an unbiased estimate.
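The claim E(δ) = 0 can be checked numerically with a small Monte Carlo sketch (the sample size and seed are arbitrary):

```python
import random
import statistics

def error_sample(k, rng):
    """One draw of delta = med(x1..x_{2k+1}) - mean(x1..x_{2k+1})
    with i.i.d. standard normal network delays."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(2 * k + 1)]
    return statistics.median(xs) - statistics.fmean(xs)

rng = random.Random(0)
deltas = [error_sample(k=1, rng=rng) for _ in range(20000)]
print(abs(statistics.fmean(deltas)) < 0.02)  # sample mean of delta near 0
```

By the symmetry of the standard normal distribution, the median and the mean have the same expectation, so the sample mean of δ concentrates around zero as the number of draws grows.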

Experimental Evaluation
The experimental assessment consists of two parts, one for the simulation experiment and the other for the actual examination.
For the simulation experiments, we developed the simulated hopping network program SHN (similar to NS-2) using Dev-C++ 5.11 and C++, which consisted of seven hopping network nodes, including one transmitter node, one receiver node, and five intermediate nodes, as well as network delays (X) and 14 IP addresses (192.168.1.10 to 192.168.1.13). The host running the SHN was DESKTOP-B4VQAPP, configured with an Intel(R) Xeon(R) E-2124 3.31 GHz CPU, 16 GB of RAM, and the Windows 10 OS. The experiments were implemented on DESKTOP-B4VQAPP. In our experiments, we first set up the system's initial values, including the network hopping period T = 2000, the delay time window w = 50, the number of intermediate nodes σ = 5, the mean value mu = 50 and variance sigma = 5, 10, 15, 20 of the random variable X, and the bandwidth (100 Mbit/s). Our first experiment measured the packet loss rate of an end-to-end hopping network; the second measured the packet loss rate of a multi-node hopping network. In both experiments, we sent 10^5 packets (32 bytes) from the sending node to the receiving node and counted the data loss rate at period T based on the data received by the receiving node. All experiments were repeated 5000 times. For end-to-end networks, as shown in Figure 14 and Table 1, the methods of [15][16][17] are better than the method presented in this paper, because in those methods the size of the time window is set through interaction information sent to the receiving node and its feedback. In addition, as shown in Figure 15, we experimented with the proposed delayed time window approach, and the experimental results show that the packet loss rate is better for time windows of 10 ms, 20 ms, and 30 ms compared to no time window.
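A toy version of the loss-rate experiment can be sketched as follows; this is a deliberate simplification in which a packet counts as lost when its normally distributed delay exceeds the delay time window, not the authors' SHN simulator:

```python
import random

def simulate_loss_rate(n_packets, w, mu, sigma, rng):
    """Toy loss-rate experiment: each packet draws a delay X ~ N(mu, sigma)
    and is lost when X exceeds the delay time window w (an illustrative
    simplification of the SHN setup described in the text)."""
    lost = sum(1 for _ in range(n_packets) if rng.gauss(mu, sigma) > w)
    return lost / n_packets

# Parameters echo the experiment's mu = 50, sigma = 5; w = 60 is illustrative.
rng = random.Random(42)
print(simulate_loss_rate(10000, w=60, mu=50, sigma=5, rng=rng))
```

With w two standard deviations above the mean delay, roughly 2% of packets exceed the window, which is the kind of small residual loss rate the experiments report.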
Figure 14. Packet loss rates for end-to-end networks: (a) port hopping; (b) IP address hopping. Refs. [16,17].

For the actual examination environment, the physical connection topology of the network is shown in Figure 16. The system is composed of two hopping subnets; hopping subnet 1 and hopping subnet 2 are connected through the IP bearer network. After the start of network hopping, the source address and source port of the IP packets of the data platform in hopping subnet 1 are, after the hopping process, transmitted through the two WAN ports of the S5700 router across the IP bearer network to the hopping equipment in hopping subnet 2 for restoration, and then delivered to the visitors in the data plane, thus completing an ordinary network transmission process.
The test environment is built with two hopping subnets, each of which is connected through an IP bearer network. Each hopping subnet is simulated by the relevant equipment in a physical cabinet. Each cabinet includes eight physical servers and three switches; the eight servers, respectively, implement service hopping, network hopping, posture display, and other functions. The IP bearer network is simulated by an independent cabinet and consists of four switches with layer-3 routing functions. The hardware equipment of the test environment mainly includes servers, switches, and routers, whose functions and performance indicators are shown in Table 2. The maximum network transmission rate is 100 Mbit/s under the national standard for category 5 network cables. Network hopping is initiated at one hop/10 s, one hop/5 s, and one hop/2 s, respectively.
In addition, a network transmission stress test was conducted (see Figure 17). Moreover, the performance test of the router hopping was carried out at a network transmission pressure of 500 Mbit/s (Figures 18 and 19). The experimental results show that the router's performance improves considerably across all indicators when delay time windows are used.

Summary and Future Research
In this paper, we construct a multi-node delay time window estimation method. The method proposes the delay time window and the delay time window compensation mechanism, as well as an estimate of the delay time window length. A series of experiments was performed to test the effectiveness of the method. In the SHN simulation experiments, the network transmission packet loss rate was less than 0.6% at a 50 ms network delay and less than 1.1% at a 100 ms network delay. In the actual test, the network cabling in the test environment was below the Cat 5e standard, and the maximum network transmission rate was 100 Mbit/s. Under a network transmission pressure of 500 Mbit/s, the packet loss rate with the delay time window method was less than 0.8% at a 30 ms network delay, an improvement over the packet loss rate of network transmission without delay time windows. This demonstrates the effectiveness of the proposed method.
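The compensation idea summarized above can be illustrated with a minimal Monte Carlo sketch. This is our own simplified model, not the paper's simulator: packets sent within a hopping period experience a random network delay, and a packet arriving after the receiver has hopped is lost unless the old network parameters remain open for an extra delay time window. The exponential-delay assumption and all parameter values are illustrative only.

```python
import random

def simulate_loss_rate(n_packets=100_000, hop_period=2.0, mean_delay=0.03,
                       window=0.0, seed=0):
    """Monte Carlo sketch of delayed packet loss in a hopping network.

    Each packet is sent at a uniformly random offset within one hopping
    period and experiences an exponentially distributed network delay
    (an illustrative assumption, not from the paper). The packet is lost
    if it arrives after the hop, unless the old parameters are kept
    alive for an extra delay time window `window` (seconds).
    """
    rng = random.Random(seed)
    lost = 0
    for _ in range(n_packets):
        send_offset = rng.uniform(0.0, hop_period)   # send time within the period
        delay = rng.expovariate(1.0 / mean_delay)    # network delay for this packet
        # Lost if arrival falls past the end of the period plus the window.
        if send_offset + delay > hop_period + window:
            lost += 1
    return lost / n_packets

# Loss rate with no compensation vs. a window of three times the mean delay:
base = simulate_loss_rate(window=0.0)
comp = simulate_loss_rate(window=0.09)
print(f"no window: {base:.4f}, with window: {comp:.4f}")
```

Setting the window to a few multiples of the mean delay sharply reduces the simulated loss rate, mirroring the qualitative trend reported above; the exact percentages depend on the hopping period and delay distribution.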
The first suggestion for future research concerns the reasonableness of the period at which the network parameters change. In the delay time window estimation method, the delay time window is estimated from the calculated network delay of the data services, which reduces the network transmission packet loss rate to a certain extent. However, many factors affect the packet loss rate in hopping networks, and one of the most important is the hopping period. The second suggestion for future research is therefore to set a reasonable hopping period. A reasonable period has two main aspects: first, the chosen hopping period length should not consume too many system resources or cause excessive packet loss; second, it should not significantly weaken the network's resistance to external attacks. The effect of the hopping period on these factors must be analyzed before a reasonable hopping period scheme can be proposed.
In future research, we will continue to evaluate the impact of the hopping period on the transmission performance of the system.

Patents
Zhengquan Xu, Zhu Fang. A method for calculating delay time windows in multi-node networks of hopping networks. National Invention Patent, Patent number 2021107766.
Author Contributions: Z.F. and Z.X. reviewed the literature, drafted the manuscript, and critically revised and approved the final version before submission. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China (41971407).
Data Availability Statement: Not applicable.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations
TW	time window
STW	sending time window
RTW	receiving time window
ATW	arrival-data time window
ODRT	overlapping data reception time
FDTT	idle data sending time