This allows network administrators to deploy and manage security functions as virtualized services, which reduces costs and increases the efficiency of network security for IoT devices. Running VNFs over 6G provides a flexible and customizable solution for analyzing network traffic and detecting potential threats to IoT devices. The centralized management provided by SDN allows network administrators to manage network resources through a single point of control, which reduces complexity and improves overall network security for IoT devices. Overall, the proposed approach provides a powerful, efficient, and secure solution for IoT devices that is well suited to the demands of modern businesses and organizations. The proposed solution consists of three main phases:
3.2. Threat-Capturing and Decision-Driven Process for IoT Devices
Threat capturing is the process of intercepting and recording network traffic for IoT devices. This is useful for analyzing network behavior, troubleshooting issues, and identifying threats to IoT devices. Threat capturing in VNFSDN involves capturing network threats at a specific point in the network, extracting the necessary data, and processing the data for further analysis. It offers several advantages over traditional threat-capturing methods. First, it allows the capture and processing of threats at specific points in the network, which provides greater visibility of network behavior. Second, it allows threats to be captured according to specific criteria, which helps filter out noise and focus on specific areas of interest. Finally, it allows packets to be captured at the VNF level, which provides more granular insights into network behavior and performance. Algorithm 2 represents the process of capturing threats and saving packets for further analysis. First, the variables and parameters required for capturing and storing threat packets are initialized. The user interface, MAC address of the access point, and other inputs are accepted. The output of the algorithm is a file that is produced and recorded. A Linux network-monitoring tool is deployed to configure the network resources. To create the final threat-capture file, the method gathers packets and appends them to the previously collected packets. The recorded packets are saved to a designated folder for subsequent investigations. The loop ensures ongoing packet capture and storage until the process is complete.
Algorithm 2: Threat-capturing and -mitigating processes using VNFSDN
1: Initialization: {C_h: network channel; M_ap: MAC address of the access point; I: interface; F_pc: packet-captured file; L_t: Linux tool; N_m: network monitoring; N: network; F_d: folder; P: packets}
2: Input: in
3: Output: out
4: Set C_h, M_ap, I
5: Do process N_m using L_t on N
6: while N_m ≤ 1 in N do
7:    Capture P from C_h via I
8:    F_pc ← F_pc + P    (Sum)
9:    N_m ← 0
10: end while
11: Save F_pc to F_d
The network traffic for IoT devices is analyzed by the VNFs to make decisions about the threats to IoT devices. The SDN controller then receives these alerts and makes decisions based on predefined response policies. These policies can be mathematically represented as a set of conditional statements that consider factors such as the severity of the threat, the location of the affected network resources, and the overall state of the network.
Figure 5 depicts the VNFSDN and 6G technology-based threat collection and decision-making process. The process begins with a network packet being transmitted through the network. The SDN component is then used to capture packets and extract the necessary data. Threat-capturing and -mitigating processes are of paramount significance, as illustrated in Algorithm 2.
Algorithm 2 illustrates packet capture and saving using the VNFSDN approach. Step 1 gives the initialization of the variables, including the network channel (C_h), MAC address of the access point (M_ap), interface (I) used for capturing packets, packet-captured file (F_pc), Linux tool (L_t) used for network monitoring, the network-monitoring variable (N_m), the network (N) being monitored, the folder (F_d) where the captured packets will be saved, and the packet (P) being captured. Steps 2–3 represent the input and output, respectively. Step 4 sets the variables. Steps 5–11 represent the packet-capturing and -saving processes. The algorithm enters a loop that captures packets, adds each captured packet to the packet-captured file, and resets the network-monitoring variable (N_m) to zero. This loop continues until network monitoring (N_m) ceases within the network (N) being monitored. When the loop ends, the packet-captured file (F_pc) is saved in a designated folder (F_d). The time complexity of the threat-capture algorithm is O(log n), which makes the threat detection procedure substantially faster.
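As a concrete illustration of the capture-and-save loop in Algorithm 2, the following minimal Python sketch uses the Scapy library; Scapy, the interface name, the access point's MAC address, and the folder path are assumptions introduced here for illustration, since the paper only refers to a generic Linux network-monitoring tool.

```python
# Hypothetical sketch of Algorithm 2's capture-and-save loop (not the authors' tool).
import os
from scapy.all import sniff, wrpcap

IFACE = "wlan0"                   # I: capture interface (assumed name)
AP_MAC = "aa:bb:cc:dd:ee:ff"      # M_ap: MAC address of the access point (placeholder)
FOLDER = "captures"               # F_d: designated folder for the capture file
DESIRED_PACKETS = 1000            # desired number of packets to capture

def capture_and_save() -> int:
    os.makedirs(FOLDER, exist_ok=True)
    # Capture P from the network channel, keeping only frames that involve the AP.
    packets = sniff(
        iface=IFACE,
        count=DESIRED_PACKETS,
        lfilter=lambda p: p.haslayer("Dot11") and AP_MAC in (p.addr1, p.addr2, p.addr3),
    )
    # F_pc <- F_pc + P: append the captured packets to the packet-captured file.
    wrpcap(os.path.join(FOLDER, "threat_capture.pcap"), packets, append=True)
    return len(packets)

if __name__ == "__main__":
    print(f"Captured {capture_and_save()} packets for further analysis")
```

Capturing IEEE 802.11 frames in this way assumes the interface has been placed in monitor mode; on a wired interface the MAC filter would simply be dropped.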
Definition 2. A VNF is a software component of SDN that performs specific network functions, such as firewalls and load balancers, on a network framework F. The VNFs communicate through standardized protocols using packets routed through the network R. VNFs enable dynamic and scalable network architectures to adapt to changing business requirements and traffic patterns.
Theorem 2. The proposed algorithm for the packet-capturing and -saving processes can efficiently capture and store network packets.
Proof. The algorithm begins by initializing the necessary variables, such as the network channel, MAC address of the access point, interface, packet-captured file, Linux tool, network monitoring, network, folder, and packets. It then begins the network-monitoring process using the Linux tool. While the network-monitoring variable is less than or equal to one, the algorithm captures packets from the network channel and adds them to the packet-captured file. The captured packets are then summed to calculate the total number of captured packets. After capturing the desired number of packets, the network-monitoring variable is set to zero to stop the network-monitoring process. The packet-captured files are saved to a designated folder. The total number of packets captured is given by
$n(T_g) = \sum_{i \in S} x_i$,
where $T_g$ represents the time at which the packets are generated, $n$ is the desired number of network packets to capture, $S$ is the set of captured packets, and $x_i$ is an indicator variable that takes the value of 1 if packet $i$ is captured and 0 otherwise. The time required to capture the $n$ packets is given by
$t = \frac{n \, t_p}{C}$, (14)
where $t$ is the time required to capture the total number of available packets, $t_p$ is the time taken to process each packet, and $C$ is the capacity of the network channel. Equation (14) calculates the time required to capture a total number of packets $n$ in a network channel with a given capacity $C$ and the time taken to process each packet $t_p$. This formula shows that the time required to capture packets increases with the number of packets and the per-packet processing time, and that a network with a higher capacity can capture packets faster than a network with a lower capacity.
The efficiency $E$ is defined as the ratio of the number of packets captured to the time $t$ required to capture them, scaled by a constant $k$ and normalized by the total number of available packets $N_a$:
$E = \frac{k \, n}{t \, N_a}$. (15)
Equation (15) suggests that efficiency can be improved by increasing the number of packets captured, reducing the time required to capture them, or adjusting the constant $k$ or the total number of available packets $N_a$. The network-monitoring variable is less than or equal to 1 while the algorithm captures packets from the network channel. Therefore, the maximum number of packets $P_{max}$ that can be captured in a single iteration is given by
$P_{max} = \min(C, n)$,
where $P_{max}$ represents the maximum number of packets that can be captured without exceeding the desired total number of packets. The captured packets are then added to the packet-captured file. Therefore, the file size $F_s$ in bytes after capturing the packets is
$F_s = F_0 + n \, \bar{p}$,
where $F_0$ is the initial file size and $\bar{p}$ is the average size of a network packet. This equation calculates the file size in bytes after capturing $n$ packets and adding them to the initial file size $F_0$: the average packet size $\bar{p}$ is multiplied by the number of captured packets $n$ and added to the initial file size. The algorithm efficiently captures and stores network packets by iteratively capturing the maximum number of packets possible while the network-monitoring variable is less than or equal to 1 and adding them to the packet-captured file until the desired number of packets is captured. □
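The relations reconstructed in this proof can be illustrated with a short numeric sketch; the capture-time and file-size expressions follow the descriptions above, and all input values are hypothetical.

```python
# Minimal numeric illustration of the capture-time and file-size relations
# used in the proof of Theorem 2. All values are hypothetical.
def capture_time(n: int, t_p: float, capacity: float) -> float:
    """t = n * t_p / C: more packets or slower per-packet processing -> longer capture."""
    return n * t_p / capacity

def file_size(f0_bytes: int, n: int, avg_packet_bytes: int) -> int:
    """F_s = F_0 + n * p_bar: the file grows linearly with the captured packets."""
    return f0_bytes + n * avg_packet_bytes

n, t_p, C = 1000, 0.002, 10.0        # packets, seconds per packet, channel capacity
t = capture_time(n, t_p, C)
print(f"capture time: {t:.2f} s")
print(f"packets per second: {n / t:.1f}")
print(f"file size: {file_size(24, n, 512)} bytes")
```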
Hypothesis 2. For effective network analysis and incident responses, a real-time packet capture and saving algorithm is anticipated to deliver precise and timely network packet data.
Proof. The algorithm continually captures and stores network packets in real time using a network-monitoring process. It is suitable for real-time network packet analysis owing to its capability to enable instant access to network packets for analysis and troubleshooting. The packet-capturing and -saving process is given by
$P_c = P_{c-1} + 1$, (18)
where $P_c$ is the count of the captured packets, $P_{c-1}$ is the previous count, and 1 is added to the previous count for each captured packet. Equation (18) represents the process of packet capturing and saving, in which the count of captured packets $P_c$ is calculated by adding one to the previous count $P_{c-1}$ for each captured packet. It is a simple equation used to track the number of packets captured by a network-monitoring or security tool and provides a straightforward method for counting the packets captured during packet capturing. The packet loss rate $PLR$ is given by
$PLR = \frac{P_{dropped}}{P_{total}} \times 100$,
where $P_{total}$ is the sum of the packets and $P_{dropped}$ denotes the total number of dropped packets; in this case, it represents packets dropped owing to network congestion or other reasons. The $PLR$ measures the percentage of packets that are not successfully delivered to their destination and helps network administrators identify areas of the network that may require optimization or additional resources to reduce packet loss. The network latency $L_i$ is given by
$L_i = (t_{a,i} - t_{d,i}) + t_{p,i}$,
where $t_{a,i}$ is the arrival time, $t_{d,i}$ is the departure time, and $t_{p,i}$ is the processing time, i.e., the time required to process packet $i$ using the network. This metric is particularly useful for identifying any areas in the network that may be experiencing delays or bottlenecks and can be utilized to optimize both routing and network configuration. By measuring and analyzing the processing time, network administrators can gain a better understanding of how their network functions and make informed decisions to improve its overall performance. Thus, the network throughput $T_h$ with 6G can be obtained as follows:
$T_h = \frac{B_{total}}{T_{total}}$,
where the total time taken $T_{total}$ is the time required for all the packets to be transmitted. This metric is useful for quantifying the amount of data that can be transferred over a network within a specific period and can be used to identify network segments that require improvements in data transfer rates. By analyzing $T_h$, one can gain valuable insights into network performance and identify areas where optimization efforts may be necessary. The network utilization $U$ is given by
$U = \frac{B_{total}}{B_{avail}} \times 100$,
where the total bytes $B_{total}$ is the total number of bytes transmitted over the network, and the available bandwidth $B_{avail}$ is the maximum bandwidth the network can support. This metric measures the percentage of available bandwidth being used and can help identify areas of the network where traffic congestion may occur. Thus, the jitter $J$ is given by
$J = \frac{1}{N}\sum_{i=1}^{N} \left| t_i - \bar{t} \right|$,
where $t_i$ is the packet arrival time, $\bar{t}$ is the average packet arrival time, and $N$ is the total number of packets; the average packet arrival time is the average time between packet arrivals. Jitter measures the variation in packet arrival times and can help identify areas of the network where delays or interruptions may occur. The packet delivery ratio $PDR$ is given by
$PDR = \frac{P_{delivered}}{P_{sent}} \times 100$,
where $P_{delivered}$ is the number of successfully delivered packets and $P_{sent}$ is the total number of packets sent. This metric measures the percentage of packets that are successfully delivered to their destination and can be used to evaluate the overall performance of the network. The average packet size $\bar{p}$ is given by
$\bar{p} = \frac{B_{total}}{N}$,
where $N$ is the number of packets transmitted over the network. This equation can be used to optimize network performance by adjusting packet size limits or optimizing the network configuration. □
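The metrics derived in this proof can be computed directly from per-packet records; the sketch below does so under assumed field names (departure, arrival, processing, size), which are not specified in the paper.

```python
# Hypothetical helper computing the metrics from the proof of Hypothesis 2:
# packet loss rate, latency, throughput, utilization, jitter, delivery ratio,
# and average packet size. Record field names are assumptions.
from statistics import mean

def network_metrics(packets, dropped, available_bw_bps, total_time_s):
    """`packets` is a list of dicts with departure/arrival/processing times (s) and sizes (bytes)."""
    delivered = len(packets)
    total = delivered + dropped
    total_bytes = sum(p["size"] for p in packets)
    arrivals = [p["arrival"] for p in packets]
    avg_arrival = mean(arrivals)
    throughput_bps = 8 * total_bytes / total_time_s
    return {
        "plr_percent": 100.0 * dropped / total,
        "latency_s": mean((p["arrival"] - p["departure"]) + p["processing"] for p in packets),
        "throughput_bps": throughput_bps,
        "utilization_percent": 100.0 * throughput_bps / available_bw_bps,
        "jitter_s": mean(abs(t - avg_arrival) for t in arrivals),
        "pdr_percent": 100.0 * delivered / total,
        "avg_packet_bytes": total_bytes / delivered,
    }

pkts = [
    {"departure": 0.00, "arrival": 0.05, "processing": 0.01, "size": 600},
    {"departure": 0.02, "arrival": 0.09, "processing": 0.02, "size": 800},
]
print(network_metrics(pkts, dropped=1, available_bw_bps=1_000_000, total_time_s=0.1))
```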
Lemma 2. The proposed algorithm for packet capture and saving is suitable for real-time network packet analysis.
The packet delivery ratio $R = \frac{P_{delivered}}{N} \times 100$ measures the percentage of packets that successfully reach their destination and provides an indication of the reliability of the network, where $N$ denotes the total number of packets. The packet size has a great impact on the performance of the network, particularly when using 6G. Thus, the packet size can be determined as follows:
$\text{packet size} = \text{header size} + \text{payload size}$,
where the header size includes the protocol header, address, and control information, and the payload size includes the actual data being transmitted. Network administrators can optimize network performance and ensure that network resources are used effectively by analyzing the packet size. The network throughput can be greatly improved by employing the features of 6G. Additionally, administrators can identify bottlenecks and optimize the network to improve its performance. Thus, the optimized throughput $T_{opt}$ can be determined as follows:
$T_{opt} = \frac{D}{l}$,
where the amount of data transferred $D$ is measured in bits or bytes, and the latency $l$ is the time delay between sending and receiving packets. Network administrators can thus assess network performance and identify potential issues with network latency or packet loss. There is therefore a need to calculate the round trip time ($RTT$), which can be determined as follows:
$RTT = t_{fwd} + t_{rev}$,
where $t_{fwd}$ is the time taken for a packet to travel from the sender to the receiver, and $t_{rev}$ is the time taken for the packet to travel back again; their sum is known as the round trip time (RTT). By measuring the RTT, the end-to-end delay of the network can be obtained.
For each historical sample, a vector is constructed from the CPU efficiency, the main memory, and the number of VMs. All three components carry the same significance level.
The PM on the cloud calculates the Euclidean distance between the current state and the best settings. The configuration is said to be optimal or acceptable when it is within the range of efficiency accepted by the PM of the cloud server. These settings are stored to meet efficiency requirements.
The PM also calculates the distance from the current configuration to the fully shut-down PM state for the cloud.
Let us assume that user $U$ has the minimal distance from its position to join the edge computing; thus, a distance matrix $D$ between the mobile user and the edge computing node can be formulated.
Additionally, the positions of two mobile cloud nodes are denoted by $a$ and $b$, and the nearest distance between the two mobile cloud nodes is measured using the Euclidean distance. A user's coordinates are indicated by the $x$ and $y$ values, so the closest distance $d$ can be determined as
$d(a, b) = \sqrt{(x_a - x_b)^2 + (y_a - y_b)^2}$.
The threshold distance identifies the range of the user's participation in edge computing, because it bounds the packet transmission from the user to the edge computing node. Additionally, the distance between $U$ and the edge computing node is denoted by $d_1$, and the distance from the edge computing node back to user $U$ is represented by $d_2$. The entire user performance with 6G is associated with $d_1$ and $d_2$, and the systems are indicated by $S_1$ and $S_2$. The distance-determining fitness function is then calculated from these distances, where $d_{uu}$ represents the distance between two users.
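The user-to-edge association step described above can be sketched as follows; the coordinates, node list, and threshold value are hypothetical, and the fitness function is reduced to a nearest-node check for illustration.

```python
# Illustrative sketch: Euclidean distances from a user to candidate edge nodes,
# selection of the nearest node, and a participation-threshold check.
import math

def euclidean(a: tuple, b: tuple) -> float:
    """d(a, b) = sqrt((x_a - x_b)^2 + (y_a - y_b)^2)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_edge_node(user_xy: tuple, edge_nodes_xy: list, threshold: float):
    """Return (index, distance) of the closest edge node, or None if the user
    lies outside the participation range defined by the threshold distance."""
    idx, dist = min(enumerate(euclidean(user_xy, e) for e in edge_nodes_xy),
                    key=lambda pair: pair[1])
    return (idx, dist) if dist <= threshold else None

print(nearest_edge_node((2.0, 3.0), [(0.0, 0.0), (5.0, 1.0), (2.5, 2.5)], threshold=4.0))
```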
Corollary 2. The ability of Algorithm 2 to capture and store network packets in real time can be useful for quickly detecting and responding to security incidents by providing access to important network data for analysis and troubleshooting. The efficiency of the proposed algorithm in capturing and storing network packets, $E$, is directly proportional to the ability to detect and respond to security incidents, $R_{resp}$, i.e., $E = K R_{resp}$, where the proportionality constant $K$ represents the scaling component of the relationship. This implies that as the ability to detect and respond to security incidents increases, the efficiency of the algorithm in capturing and storing network packets also increases proportionally.
Let $R$ be a vector representing the set of network resources and $t$ a scalar representing the current time. Let $S(R, t)$ be a function that takes these inputs and returns the security status of the network resources. Based on $S(R, t)$, a decision function $D(R, t)$ can be defined as follows:
$D(R, t) = \begin{cases} 1, & \text{if } S(R, t) \text{ indicates a threat} \\ 0, & \text{otherwise.} \end{cases}$
If the security status of the network resources indicates the presence of a threat, the decision function outputs a value of 1 to indicate that a threat has been detected. Otherwise, the decision function outputs a value of 0, indicating that no threat has been detected. This decision function can be used in various security applications, such as intrusion detection systems, to automate the processes of threat detection and response. By continuously monitoring the security status of network resources and applying a decision function, security systems can quickly and accurately detect threats and take appropriate actions to mitigate them. If the decision function returns a value of 1, the SDN controller mitigates the security threat. This can be mathematically represented as
$M(R, t) = D(R, t) \, A(R, t)$,
where $A(R, t)$ is a binary function that maps the set of network resources affected by the security threat. The function $A(R, t)$ is defined as
$A_i(R, t) = \begin{cases} 1, & \text{if resource } i \text{ is affected by the threat at time } t \\ 0, & \text{otherwise.} \end{cases}$
Once the affected network resources have been identified, the SDN controller can take action to mitigate the threat. This may involve rerouting traffic, isolating the affected resources, or deploying additional security measures. In addition to responding to security threats, the VNFSDN approach enables proactive monitoring of the network to detect potential vulnerabilities before they can be exploited. This can be achieved using machine learning algorithms to analyze network traffic and identify patterns of behavior that may indicate an attempted attack. Mathematically, this can be represented as
$P(\text{threat} \mid S(R, t))$,
which represents the probability of a security threat occurring given the current security status of the network resources represented by $S(R, t)$.
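A hedged sketch of this decision-and-mitigation logic is given below; the anomaly-score field, the 0.8 threshold, and the resource names are stand-ins introduced here, since the paper does not specify how the security status $S(R, t)$ is computed.

```python
# Hypothetical sketch of the decision function D(R, t), the affected-resource map
# A(R, t), and a simple mitigation trigger. Thresholds and field names are assumed.
from typing import Dict

def security_status(resources: Dict[str, dict], t: float) -> Dict[str, bool]:
    """S(R, t): per-resource flag indicating whether current telemetry looks threatening."""
    return {name: info.get("anomaly_score", 0.0) > 0.8 for name, info in resources.items()}

def decision(resources: Dict[str, dict], t: float) -> int:
    """D(R, t) = 1 if any resource's security status indicates a threat, else 0."""
    return int(any(security_status(resources, t).values()))

def affected(resources: Dict[str, dict], t: float) -> Dict[str, int]:
    """A(R, t): binary map of the resources affected by the threat, used for mitigation."""
    return {name: int(flag) for name, flag in security_status(resources, t).items()}

R = {"vnf-firewall": {"anomaly_score": 0.2}, "iot-gateway": {"anomaly_score": 0.95}}
if decision(R, t=0.0):
    print("threat detected; isolate:", [n for n, f in affected(R, 0.0).items() if f])
```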
Let us assume that there are two systems placed on the edge computing and that the consumers are competing for access to them. A fundamental model of competition is created that combines the fuzzy fractional aspects of an ordinary differential equation with the requests from the users to access the first system $S_1$ and the second system $S_2$ placed on the edge computing, where $K_1$ is the maximum capacity for dealing with the requests of the first system of the edge computing using the 6G network; $K_2$ is the maximum capacity for entertaining the requests of the second PM of the MCC; $x_1$ is the number of requests to connect with the first system; $x_2$ is the number of requests to connect with the second system; $r$ is the request rate of each user using 6G within a given time; $\alpha_1$ is the competing capability of the user for the first system; and $\alpha_2$ is the competing capability of the user for the second system. The capabilities of the users depend on the characteristics of SDN and 6G because each system only needs a short amount of time to respond to the requests indicated by the user's capabilities. Because each request from the user is processed with a distinct time delay, each system's properties vary slightly from one another.
Suppose that $\tau_1$ and $\tau_2$ represent the latencies of the two systems placed on the edge computing, and that $C_1$ and $C_2$ indicate the maximum capabilities of the first and second systems, respectively. The complete model for dealing with the requests to the two systems combines these quantities with the competition dynamics described above, as sketched below.
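The displayed form of the competition model is not reproduced in this excerpt; purely for illustration, the sketch below integrates a standard two-system competition form (an assumption, not the authors' exact model) using the capacities, request rate, and competing capabilities defined above.

```python
# Hedged sketch: a classical two-system competition update is assumed here for
# illustration only. x1, x2 are request counts; K1, K2 the systems' capacities;
# r the per-user request rate; a1, a2 the competing capabilities.
def step(x1: float, x2: float, r=0.5, K1=100.0, K2=80.0, a1=0.6, a2=0.4, dt=0.01):
    dx1 = r * x1 * (1 - (x1 + a1 * x2) / K1)
    dx2 = r * x2 * (1 - (x2 + a2 * x1) / K2)
    return x1 + dt * dx1, x2 + dt * dx2

x1, x2 = 5.0, 5.0
for _ in range(5000):            # simple Euler integration over time
    x1, x2 = step(x1, x2)
print(f"steady-state requests: system 1 ~ {x1:.1f}, system 2 ~ {x2:.1f}")
```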
The probability of a security threat occurring at a specific time based on the current security status of the network resources should be measured. By monitoring the network and analyzing behavioral patterns, both algorithms can be used to update the security status of the network resources and improve the accuracy of the probability calculations. Overall, the third phase of the VNFSDN approach plays a critical role in ensuring the effectiveness and efficiency of the network security solution. By leveraging the power of SDN and VNFs, this approach enables quick and targeted responses to security threats while also providing proactive monitoring to detect potential vulnerabilities. Mathematical representations of the decision-making process and proactive monitoring illustrate the potential of the VNFSDN approach to enhance network security in a scalable and efficient manner.
3.3. Modeling of the Blockchain-Enabled Consensus Algorithm for Cyberattack Depletion
Prioritized delegated proof of stake (PDPoS) is used to counteract attacks on IoT devices. PDPoS designates a predetermined number of sensor nodes or IoT devices to handle communication and oversee the ledger. As a result, these IoT devices create blocks and perform validation to stop any illegal transactions from being made by an attacker. The participants (IoT devices) are chosen through a voting process, and they can be replaced or expelled if they act inappropriately or perform poorly. PDPoS is more effective, scalable, secure, and adaptable. PDPoS is a modification of the current DPoS consensus that gives the blockchain a three-tier network on top of the base network, as depicted in
Figure 6. PDPoS is a mechanism that is more flexible and scalable. Blockchain networks may be able to manage additional users, applications, and transactions without compromising decentralization or security. Moreover, for a PDPoS chain, the possibility exists for staked asset values to rise in proportion to the network value. In other words, the network's economic security improves as the native token of the PDPoS chain increases in value. PDPoS has a scaling benefit over PoW because of this characteristic. Let $u$ be the number of blockchain users in the network; when the number of users increases, the level of contention $c$ may also increase, which in turn requires a level of consistency $l$. Thus, by employing PDPoS, the scalability of the blockchain $S_b$ can be balanced against the contention $c$ and the required consistency $l$.
In our proposed PDPoS, we define top-, medium-, and low-priority delegates. Our supposition is that an IoT device completes a communication on a platform powered by blockchain. Any of the tiers (tier 1, tier 2, and tier 3) may receive it for processing from the IoT device.
The response time for a Tier 1 transmission, whether to a user device or a delegate, is at most 100 ms. For confirmation, Tier 2 can take up to two hours, whereas Tier 3 can take up to twelve hours. Top-priority delegates can only communicate with Tier 1 block producers (BPs), medium-priority delegates can communicate with Tier 2, and low-priority delegates can communicate with Tier 3 block producers. Tiers 1–3 have 36, 24, and 12 BPs, respectively. Tiers 2 and 3 also contribute to the consensus, but a transaction cannot be confirmed there immediately; the user device would have to pay more to send the transmission straight to Tier 1. Every transmission on Tier 3 is sent to Tier 2 and ultimately becomes part of Tier 1; this kind of transmission can be confirmed after 12 h. BPs from Tiers 3 and 2 cannot participate in Tier 1 at the same time, and vice versa. The incentive for validating is reduced on Tiers 2 and 3 because they host medium- and low-priority delegates. By clearing up Tier 1, the proposed PDPoS consensus mechanism increases the network's communication speed. Additionally, it enables consumers to pay less for blockchain benefits. Under this arrangement, an IoT-enabled user can pay according to the urgency and use case. When an IoT-enabled device's transmission moves from Tier 3 to Tier 2 and finally to Tier 1, the device can track the transmission and receive updates. If the device needs the transaction completed right away, the transmission is sent directly to Tier 1 and obtains a timely confirmation within 100 ms, but the device has to pay an additional fee. The approach uses a third-party database to store the hash in case a transaction from Tier 2 or 3 to Tier 1 is missing or fails. To be a BP on Tier 1, a node needs a certain level of processing power and must keep a stake for a set amount of time. After the allotted time has passed, the BPs enter the election pool, where participants vote on them every two hours. The top 6 block producers, out of a total of 36, are known as Super BPs. A transaction is considered final only when a minimum of 30 BPs validate it and at least 5 of the 6 Super BPs sign and confirm it. This increases the security and effectiveness of the network. Elections are held every two hours to determine the fate of the BPs for the following two hours. The same technique can be used for Tiers 2 and 3, with the exception that there will only be 20 and 10 BPs, respectively, and the Super BP concept is dropped. An election is performed every two hours.
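The tier parameters described above can be collected into a small data model; the following sketch encodes the stated confirmation targets, BP counts, and the Tier 1 finality rule, and is an illustration of the description rather than the authors' implementation.

```python
# Illustrative data model for the PDPoS tier structure: confirmation targets per
# tier, block-producer counts, and the Tier 1 finality rule (>= 30 of 36 BPs and
# >= 5 of 6 Super BPs). Values follow the description in the text.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    block_producers: int
    super_bps: int
    confirmation_limit_s: float

TIERS = {
    1: Tier("Tier 1", block_producers=36, super_bps=6, confirmation_limit_s=0.1),       # 100 ms
    2: Tier("Tier 2", block_producers=24, super_bps=0, confirmation_limit_s=2 * 3600),  # up to 2 h
    3: Tier("Tier 3", block_producers=12, super_bps=0, confirmation_limit_s=12 * 3600), # up to 12 h
}

def tier1_final(validations: int, super_signatures: int) -> bool:
    """A Tier 1 transaction is final with >= 30 BP validations and >= 5 Super BP signatures."""
    return validations >= 30 and super_signatures >= 5

print(TIERS[1], tier1_final(validations=31, super_signatures=5))
```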
Every PDPoS IoT-enabled device receives a reward. Both Super BPs and BPs are paid based on the number of blocks confirmed per hour. Voting rewards will also be given to the voters. This network could be utilized for high-frequency instantaneous transactions in a wide range of IoT-enabled applications (including commercial, health, and vehicle validation). PDPoS reduces the number of nodes necessary for the consensus and allows for faster and less expensive transactions, which aids in energy conservation for IoT devices. Energy consumption has been a major concern in traditional blockchain technology, but it has been addressed using this algorithm. PDPoS, on the other hand, has weaknesses due to its reliance on a small number of incorruptible delegates; however, in the presence of edge computing and a 6G network, it does not threaten the network’s decentralization and security. Due to this obstacle, small members with less impact in the network may find it difficult to enroll and engage in the network. However, the goal of this consensus algorithm is to prevent cyberattacks on IoT devices.
Another objective of employing this algorithm is to achieve stability.
Let $P_t$ be the participants and $X_t$ be the transactions of the stable communication at time $t$. Given a finite time $T$, the goal is to minimize a loss function $L$ defined over the participants and transactions, where $D$ denotes the decentralized process and $\lambda$ denotes the proportionate weight assigned to the trade-off between participants and price stability.
The PDPoS algorithm uses a balancing token to prevent unpredictability of the IoT devices. Each token displays consistency in its operation. Furthermore, there is a common pattern that allows for real-time monitoring in order to maintain the PDPoS predictable condition.
The quantity of tokens in circulation increases as tokens are awarded as rewards to nodes, while it decreases when tokens are reissued. Thus, in the token-increasing process, the number of tokens purchased at time $t$ can be determined as follows:
$B_t = \frac{H}{p_t + \epsilon}$,
where $H$ denotes the amount the controller's account pays to active token holders to buy their tokens; that is, the number of tokens purchased at time $t$ is equal to $H$ divided by the cost of the purchase. Here, $p_t$ denotes the token's current market price, and $\epsilon$ is the additional sum that the IoT-enabled user pays over the market price to entice holders to sell their tokens. If the owners of the tokens are not rational, $\epsilon$ is equal to zero. Because a rational owner might not sell their tokens if they think the price will rise in subsequent timestamps, the IoT-enabled user could be required to offer an additional incentive price to disprove the agents' predictions. To determine $\epsilon$ for a rational strategy, we developed a mathematical strategy.
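The buy-back relation reconstructed above can be illustrated numerically; the budget, price, and incentive values below are hypothetical.

```python
# Numeric illustration of the reconstructed token buy-back relation:
# tokens purchased at time t = H / (p_t + epsilon). All values are hypothetical.
def tokens_purchased(budget_h: float, market_price: float, incentive: float = 0.0) -> float:
    """incentive = 0 models non-rational holders; a positive incentive models rational ones."""
    return budget_h / (market_price + incentive)

print(tokens_purchased(budget_h=10_000.0, market_price=2.5))                  # non-rational holders
print(tokens_purchased(budget_h=10_000.0, market_price=2.5, incentive=0.25))  # rational holders
```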
Two stocks are kept by the IoT-enabled user, one of which is made up of tokens and the other of dollars; their values at time $t$ are denoted by $X_t$ and $Y_t$, respectively. The dollar balance rises as a result of the user purchasing tokens and falls as a result of market buybacks, and the block is represented by the new variable $B$. Hence, $B$ is a linear function of the system's total number of users. Users need to be validated before joining the blockchain. The block $B$ is comparable to the security parameters that are stored within it, meaning that it grows with the number of newly joined users and shrinks with a decrease in users.
Thus, it is important to determine the blocks in transit to avoid any kind of potential threat, where $\rho_t$ denotes the release of blocks at a given time. The relationship among the number of IoT-enabled users in the system $U_t$, the block's value, and the demand for the block can then be characterized. Assuming that the release of blocks is proportionate to the number of IoT-enabled users in the existing network at a specified time, it is possible to state that, for a certain block value, $\rho_t \propto U_t$. The PDPoS algorithm completely models the value of each block in each phase. The transmitting block remains invariant when confirmed using Equations (46)–(48).
The dynamic quantities required to regulate the blockchain are captured by the state vector, and the corresponding control vector should be built to activate the blockchain.
PDPoS does not necessitate the costly and powerful equipment required to enable network functioning for IoT devices. As a result, network maintenance costs are reduced. Furthermore, PDPoS networks are environmentally friendly because they consume little power.
The blockchain’s sustainability is critical, which is attained by utilizing PDPoS for the depletion of threats because it serves as a protection for the blockchain network. To do this, the smart contract managed by PDPoS dynamically distributes block rewards across blockchain-enabled IoT devices. This is accomplished by employing a time-varying parameter ranging between 0 and 1.
Thus, the transaction rate during the specific time $T$ can be determined from the block-generating rate $\beta$, the average collateral ratio $\gamma$, the reward earnings $R_e$, and a function $F$ that grows indefinitely.
The modification of the time-varying parameter must remain compatible with the block-generating rate $\beta$, which is critical to a stable system. The PDPoS algorithm updates this parameter in discrete time steps through a monotonically growing concave function.
Assume for the time being that an IoT-enabled user on the blockchain is risk-neutral and does not discount performance over time. Since the process happens over a short period of time, discounting should not really be a factor. Rationality and risk neutrality provide a useful and significant benchmark. Next, consider a date and a brief window of time starting at $t$. We suppose that during this window, the communication among the IoT-enabled users remains roughly constant.
Let us assume that a number of transactions are initiated and converted into secured blocks over the edge computer for the IoT devices, where $B_s$ denotes the number of secured blocks.
PDPoS provides a fair voting process for selecting the delegates that request transaction validation. The fair voting process is completed at time $t$ with a higher success probability, in which case the vote succeeds; otherwise, the process continues with the complementary probability, and the success rate during the operating process is correspondingly lower. It is also important to count the number of transactions for IoT devices $X_t$ and the number of blocks $B_t$ at time $t$, together with risk neutrality and rationality for the number of IoT-enabled users $U_t$. The quantity of blocks required for this number of IoT-enabled users can then be determined.
If the number of blocks at time $t$ cannot match the transaction-generating rate at $t$, this can be determined by looking at the number of delegates in the blockchain. At the start of the time span, the number of original blocks in the network is $B_0$. IoT-enabled users are not willing to accept a sudden increase in blocks in the network. The process finishes at $t$ with a given success probability, yielding a corresponding number of blocks; otherwise, the run continues with the complementary probability, and the total number of blocks at the start of the subsequent time interval is bounded by the network capacity, as sketched below.
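The block-growth behaviour just described can be sketched with a simple simulation; the specific update rule, probabilities, and capacity below are assumptions introduced for illustration only.

```python
# Hedged sketch: in each interval the run finishes with probability p (new blocks
# are confirmed) or continues with probability 1 - p, and the block count is
# capped by the network capacity. The update rule itself is an assumption.
import random

def simulate_blocks(b0: int, new_per_interval: int, p: float, capacity: int,
                    intervals: int, seed: int = 7) -> int:
    random.seed(seed)
    blocks = b0
    for _ in range(intervals):
        if random.random() < p:                 # run finishes: pending blocks are confirmed
            blocks = min(blocks + new_per_interval, capacity)
        # otherwise the run continues and the count carries over unchanged
    return blocks

print(simulate_blocks(b0=10, new_per_interval=3, p=0.8, capacity=100, intervals=50))
```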
Instability in the blockchain technology will undoubtedly affect the rest of the network; however, small variations can be tolerated. The instability in the blockchain can be modeled using stochastic differential equations, which explain how a network responds in uncertain settings. One such practical mathematical model is the scalar case, which can be expressed as a stochastic linear differential equation of the form
$\dot{x}(t) = a(\xi(t)) \, x(t)$,
where the coefficient $a$ depends on the semi-Markov process $\xi(t)$. The states of the stochastic process $\xi(t)$ illustrate the situations in which the blockchain operates, for instance, a stable blockchain, operation during a cyberattack, and so forth. Let circumstance $x_k$, $k = 1, 2, \ldots, n$, be taken by the stochastic process $\xi(t)$; it can then be denoted that $a(\xi(t)) = a_k$. The intensities of the transitions between these states can be specified in the same way. Changes in the stochastic process $\xi(t)$ are brought about by fluctuations in the network, and as a result, the solution in this scalar situation is subject to arbitrary transformations at the times of the jumps $t_i$, $i = 1, 2, \ldots$
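The following sketch simulates this regime-switching scalar dynamic; for brevity the regime process is taken to be Markov rather than semi-Markov, and all coefficients and probabilities are hypothetical.

```python
# Illustrative simulation of x'(t) = a(xi(t)) * x(t) with a switching regime
# process (stable operation, cyberattack, restricted operation). For simplicity
# the regime switches are Markov here, not semi-Markov; all numbers are assumed.
import random

A = {"stable": -0.2, "attack": 0.5, "restricted": -0.05}   # coefficient a_k per regime
SWITCH = 0.02                                              # per-step switch probability

def simulate(x0: float = 1.0, steps: int = 2000, dt: float = 0.01, seed: int = 3):
    random.seed(seed)
    state, x = "stable", x0
    for _ in range(steps):
        if random.random() < SWITCH:                       # regime jump at a time t_i
            state = random.choice(list(A))
        x += A[state] * x * dt                             # Euler step of x' = a_k * x
    return state, x

print(simulate())
```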
Corollary 3. It is possible to discover the potential threats in a specific situation.
Proof. The following three alternative states can be used to gauge the effectiveness of the blockchain technology employing the PDPoS algorithm. □
If the blockchain is operating during a crisis, the process takes the first state;
If the blockchain functions in a secure manner, the process takes the second state;
If the blockchain works under restricted conditions, the process takes the third state.
With the given concentrations, this demonstrates that, with the use of the PDPoS algorithm, the blockchain can remain stable for a certain amount of time.
The preceding three states demonstrate how blockchain technology can be used to effectively mitigate threats. The alternative states can be modeled in terms of $F$, the capacity of the blockchain technology when employing the PDPoS consensus algorithm; the standard function describing the blockchain's operation during the entire time $T$; and the nature of the network used for the blockchain technology.