1. Introduction
In response to recent acts of terrorism and cyberterrorism, it is increasingly critical to provide robust security measures and implement an effective disaster recovery system [
1]. This system must be adaptable to various vulnerable targets, including, but not limited to, stock markets, postal services, nuclear facilities, human resource centers, military installations, computer networks, and government agencies. Given the inherent limitations in achieving absolute security, a focus on resilient recovery paradigms becomes essential to mitigate damage and ensure operational continuity across diverse sectors. To enhance global security, three primary aspects merit administration, with prevention as the foremost concern. Cybersecurity represents a critical issue within computer industries, encompassing the Internet, software engineering, and e-commerce. While the conventional notion of security involves strategies to prevent attacks from adversaries, cybersecurity entails three distinct elements: secrecy (confidentiality), accuracy (integrity), and availability (reliability) [
2]. Addressing these factors is crucial for maintaining robust and resilient systems against evolving threats in the digital domain. A secure network architecture mandates that information not be disclosed to unauthorized entities. Accuracy requires that the system maintain data integrity, preventing any corruption of stored information. Availability ensures that the network system operates efficiently and possesses the capacity to recover in the event of a disaster [
2]. These tenets are essential for establishing a resilient and trustworthy network environment capable of safeguarding sensitive data and sustaining operations under adverse conditions.
Blockchain technology provides a distributed public digital ledger maintained through consensus across a peer-to-peer network, finding application in various services beyond cryptocurrencies [
3,
4]. Traditional decentralized networks offer advantages over centralized systems by mitigating numerous security risks [
5]. The decentralized architecture enhances resilience and trust, making blockchain a foundational technology for secure and transparent operations in diverse applications. Blockchain Governance Game (BGG) offers a theoretical stochastic game framework for determining optimal strategies to prevent network failures [
6]. This model combines mixed strategy game and fluctuation theories to yield analytically tractable results for enhancing decentralized network security. Strategic Alliance for Blockchain Governance Game (SABGG) presents an alternative method for reserving real nodes [
7], while a novel secure blockchain network framework aims to prevent damages. By applying the alliance concept from a strategic management perspective atop a general BGG, a hybrid mathematical model is created to determine strategies for protecting a network through strategic alliances, or it can perform a security operation to protect entire multi-layered networks from attackers [
8]. This study provides a comprehensive analysis of network security, employing explicit mathematical formulations to predict optimal times for security operations. Recent research integrates BGG with machine learning systems to defend against cyberattacks [
9]. Furthermore, BGG and its variants have been applied to enhance security in real-world applications, including smart cars [
10,
11] and AI-enabled robot swarms [
12,
13,
14,
15]. These applications underscore the versatility and practical utility of BGG in bolstering the security of blockchain-based networks.
Classic system reliability and network availability models, adapting closed queuing systems (finite sources) with or without reserve backups, have undergone extensive study [
16,
17]. Various stochastic analysis techniques, including semi-regenerative analysis with additional results for semi-Markov processes, have been explored to derive analytical solutions for practical stochastic models, proving particularly useful in optimization [
1,
18]. The present stochastic modeling methodology builds upon the foundations laid in [
1,
18] and further developed in [
6,
7]. The duality principle, prominent in fields such as set theory, electrical engineering, and mathematical optimization, posits that if a statement is valid, its dual statement is also valid [
19]. This principle involves interchanging specific elements within the original statement to generate its dual. For instance, set theory might swap unions with intersections or replace a set with its complement. The duality principle is particularly relevant in servicing machines with double control [
19]. This research firstly includes a simple stochastic model, which is a monotone increased counting process referred to as
Model-1.
Model-2 is similar to
Model-1, except that it has the limited capacity (i.e, the counting number is limited up to the full capacity
M). Since the model is monotone increase, the counts start from 0 to
M (i.e.,
).
Model-2 is directly connected with yet another model, which we will call
Model-3, and
Model-3 is called the reverse counting process, which means that the counting number in the system starts from the full capacity (i.e.,
M) and monotone decreased to the ground (i.e.,
). The Poisson point process is a type of mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another [
20]. The process derives from the fact that the number of points in any given finite region follows a Poisson distribution. The process itself was discovered independently and repeatedly in several settings. This process is one of the popular counting processes which has Markov properties. The Poisson counting process has been discovered in several settings, including experiments on radioactive decay, telephone call arrivals, and actuarial science [
21,
22].
The Reverse Poisson Counting Process (RPCP) represents a stochastic model which focuses on counting possibilities under conditions where the capacity or the number of available opportunities is random. The framework allows for the derivation of key stochastic properties, such as the probability mass function and the mean, specifically tailored to scenarios involving this random capacity element. Its utility lies in providing a tractable approach for analyzing and optimizing systems where such random constraints are significant. Direct observation of this reverse counting process has proven difficult, potentially limiting its discovery due to challenges in identifying systems or models where it explicitly manifests. Nevertheless, this process can effectively characterize the extreme behaviors of established stochastic models. These include the M/M/1 queueing system operating under instantaneous service completion (zero service time or infinite service rate) or the death-only process, which is an extreme case derived from general birth–death processes. One recent study indicates the potential adaptation of this process for enhancing the availability of streaming services, specifically in maintaining a target number of subscribers during broadcasts. In this context, each subscriber or terminal device can be viewed as a network node. Furthermore, the RPCP framework might be applicable to transaction validation systems within decentralized networks (e.g., blockchain networks). This is particularly relevant for handling frequent state transitions of nodes between online and offline states, a critical aspect for systems deployed on lightweight, portable personal devices [
23]. Such networks are designed to accommodate frequent user joins and departures. Nevertheless, the assumption is made that user nodes persist online for a minimum duration equivalent to one complete transaction approval time (TAT). This presumption is justified, as user presence for at least one TAT is requisite for transaction finalization via consensus among other nodes [
23]. The primary contributions of this research involve the formal definition and analysis of the RPCP. Explicit mathematical derivations were provided for key stochastic characteristics, including the probability mass function and mean, under conditions of random capacity. The relevance of the RPCP to networks allowing for dynamic node participation, while assuming a minimum online duration equivalent to the transaction approval time, was established. The explicit formulas derived provide a direct method for applying functionals of the fundamental stochastic characteristics and optimizing the relevant objective function.
Following this introduction, the paper is organized into four sections.
Section 2 presents the theoretical foundation of the basic stochastic process, specifically the Poisson counting process with memoryless observation, along with its capacity-limited variants.
Section 3 details the mathematical formulation and analytical proofs underpinning the RPCP. Furthermore, it analyzes an extension incorporating random capacity.
Section 4 presents two potential applications of the RPCP, accompanied by analytical results evaluating the effectiveness of the models. Lastly,
Section 5 provides a summary of the contributions and outlines potential avenues for future research.
3. Reverse Poisson Counting Process
This section presents detailed proofs and formulations that define the behavior of the RPCP, which is referred to as Model-3. Additionally, the analysis extends to a variant of the RPCP that incorporates random capacity, providing a more flexible and realistic model for various applications. The RPCP is fundamentally a Poisson counting process with both a limited capacity and a memoryless observation process. However, unlike typical counting processes, the RPCP counts backward from a defined maximum level M, representing the full capacity. Let be the number of negative counting from the full capacity M at the observation time (i.e., ). Each observation process shall be reset once the duration reaches from the starting moment and the capacity is fully restored to M.
Theorem 1. The probability of the RPCP with the exponential observation time could be found as follows:and Proof . From (
16), we have
from (
14), we have
where
because
□
The meaning of
Reverse in the RPCP mathematically manifests as a probabilistic reflection of the truncated Poisson counting process from
Model-2, where the probability mass function
for the RPCP equates
,
with
normalized to ensure summation to unity from (
15). This mirrors the cumulative distribution from the upper bound
M downward, effectively inverting the directionality: whereas Model-2 accumulates counts upward from zero to a capped
M (losses beyond
M), the RPCP depletes from full capacity
M to zero, interpreting decrements as negative counts or failures. The duality principle allows to swap accumulation with depletion, akin to birth–death process extremes where births vanish, yielding a pure death process; explicit derivation from
Theorem 1. It is observed that all parameters and their relationships with the models are detailed in
Appendix A.
From (
15)–(
18), the probability of the RPCP with the exponential observation duration finally determined as follows:
where
The functional
of this truncated counting process could be defined as follows:
and the PGF of RPCP
with the exponential observation duration
becomes
which could be calculated as follows:
and the mean of the RPCP is as follows:
which yields
The concept of random capacity is targeted for extend extending the RPCP model to accommodate scenarios where the capacity
M is not fixed but rather varies randomly. This enhancement allows the model to better represent real-world systems with fluctuating resources or capacities, adding a layer of complexity and realism to the analysis. Let
denote the number of counts from the random capacity
representing the random full capacity at the initial moment (i.e.,
). The probability mass function (PMF) of
(representing the system’s full capacity or the number of full network nodes) is given by
. The mean of
is denoted as
and
is the maximum capacity of system or the number of maximum nodes on the network security. From (
19), the probability of the RPCP under the random capacity could be as follows:
where
and
which is assumed that
. Regarding this matter, robust experiments have been conducted to validate the approximation of the original form
by evaluating the mean square error rate (MSER) with the simpler form
. The relationship between node accountability
and the mean square error rate (MSER) has been illustrated on
Figure 1.
This figure presents the simulated accountability of each node for parameters , , and which offer a gap analysis between two different calculation. This evaluation quantifies the discrepancy between the expected value of the function applied to the random variable and the function applied to the expected value of the random variable. The results demonstrate that the approximation holds with acceptable accuracy under the tested conditions, supporting the use of as a computationally simpler surrogate for in relevant stochastic modeling scenarios. In this scenario, the random capacity follows a Binomial distribution with parameters n and success probability p (i.e., ). The graph shows how MSER decreases as node accountability increases, reflecting improved system reliability with higher accountability . Two key points are highlighted: higher node accountability correlates with improved system performance due to more reliable nodes. Key highlights include a 5% error rate when the average node accountability exceeds approximately 68% (i.e., ), and a further reduction to a 1% error rate when node accountability reaches 95% or higher (i.e., ). Higher node accountability substantially reduces the MSER. The graph demonstrates that accounting for variations in node behavior using closely approximates the original function . This underscores the importance of considering stochastic variations in network nodes to effectively optimize error rates.