Article

A Hybrid PoS–PoW Blockchain Framework for Secure Cyber Threat Intelligence Sharing: Design, Implementation, and Evaluation

Center for Informatics Science, School of Information Technology and Computer Science, Nile University, Giza 12588, Egypt
*
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2026, 10(5), 158; https://doi.org/10.3390/bdcc10050158
Submission received: 11 April 2026 / Revised: 3 May 2026 / Accepted: 13 May 2026 / Published: 15 May 2026

Abstract

Many blockchain-based cyber threat intelligence (CTI) sharing systems emphasize immutability and auditability, but often treat CTI submissions as ordinary blockchain transactions without explicitly separating content validation from publication anchoring. This paper presents CTIB, a proof-of-concept hybrid Proof-of-Stake (PoS) and Proof-of-Work (PoW) framework for CTI publication. CTIB uses a sequential workflow in which a PoS committee first evaluates CTI submissions, and an accepted feed hash is then anchored through a PoW step to provide verifiable temporal binding. The prototype is evaluated in a controlled local Hardhat environment; therefore, the results should be interpreted as prototype-level feasibility evidence rather than production-scale deployment results. CTI content is represented using STIX 2.1, canonicalized, and hashed using SHA-256; only integrity-critical evidence is stored on-chain, while full CTI content remains off-chain. Experimental results demonstrate prototype-level feasibility, with measured throughput, latency, and success rate metrics under different PoW difficulty profiles. Across ten independent local runs, CTIB achieved an average throughput between 141.13 and 166.14 feeds/min, average p50 latency between 326.18 and 403.09 ms, and average p95 latency between 553.22 and 700.82 ms under the tested difficulty profiles. Security analysis uses analytical modeling, committee capture probability, and Monte Carlo simulation to evaluate majority-attack feasibility under stated assumptions. The results indicate that sequential compromise of both validation and anchoring layers increases the cost of coordinated manipulation.

Graphical Abstract

1. Introduction

Modern cloud security operations rely heavily on the timely detection and dissemination of cyber threat intelligence (CTI) across organizations. As attacks such as ransomware and advanced persistent threats continue to evolve rapidly, the ability to share reliable threat information in near real time has become essential for effective collective defense. While detection technologies have advanced significantly, detection alone does not address the broader challenges of trust, integrity, and accountability in CTI sharing. Many existing CTI platforms lack verifiable provenance, auditability, and resistance to manipulation or censorship, particularly when they rely on centralized architectures or single-consensus control models [1]. These limitations reduce confidence in shared intelligence and weaken coordinated response efforts.
CTI plays a critical role in helping organizations understand attacker behavior, identify indicators of compromise, and respond more quickly to emerging threats. However, CTI sharing in practice remains limited and inefficient. Organizations often hesitate to share intelligence due to concerns about privacy, reputation, legal exposure, or competitive disadvantage. Even when intelligence is shared, it is frequently distributed through fragmented, vendor-specific platforms that require manual validation and interpretation. As a result, valuable intelligence is underutilized, and similar attacks continue to affect multiple organizations independently. Trust is another persistent issue. In many CTI sharing systems, consumers cannot reliably verify who submitted the intelligence, whether it has been altered, or when it was approved. This lack of verifiable trust discourages adoption and collaboration. In addition, CTI sharing often suffers from delays. Centralized approval workflows, manual review processes, and organizational hesitation can significantly slow down publication. In time-sensitive scenarios such as ransomware attacks, even small delays can substantially increase operational impact. Furthermore, centralized CTI platforms introduce single points of failure and censorship risks. If a central server becomes unavailable, compromised, or manipulated, the entire sharing ecosystem may be disrupted.
Prior research has explored several approaches to improve CTI sharing. Early efforts focused on centralized repositories and trusted sharing communities, which improved coordination but did not eliminate trust dependencies. The introduction of structured threat intelligence standards, such as STIX and TAXII, improved interoperability and automation by providing a common representation for threat data [2,3]. More recently, blockchain-based solutions have been proposed to enhance integrity, transparency, and auditability in CTI sharing [4,5]. These studies demonstrate that distributed ledgers can provide immutable records and reduce reliance on a single trusted authority. However, most blockchain-based CTI proposals rely on a single-consensus mechanism, typically Proof-of-Work or Proof-of-Stake. Each of these mechanisms has well-known weaknesses. Proof-of-Work systems are vulnerable to majority hash-power attacks and censorship through mining dominance, while Proof-of-Stake systems may suffer from validator collusion, governance capture, or delayed publication by a small group of validators. As a result, single-consensus designs inherit the failure modes of the underlying mechanism and do not fully address the combined requirements of correctness, timeliness, and resistance to suppression in CTI workflows.
These observations reveal several open gaps in the literature. Many proposed CTI blockchain systems remain conceptual or limited to simulations, without end-to-end implementations or measurable performance results. In particular, there is limited work on systems that combine standardized CTI representation, hybrid consensus mechanisms, and empirical validation within a single, reproducible framework.
This paper addresses these gaps by introducing a practical approach to CTI publication that separates content validation from publication finality. The key idea is to treat CTI validation and CTI anchoring as distinct stages with different security objectives. Validation focuses on assessing the quality and relevance of CTI content, while anchoring focuses on ensuring that accepted intelligence is recorded in a tamper-resistant and auditable manner. Separating these responsibilities enables stronger guarantees than those offered by single-consensus designs and allows each stage to be analyzed and optimized independently. Based on this principle, the paper presents CTIB, a hybrid CTI publication framework that combines Proof-of-Stake and Proof-of-Work in a sequential manner. In CTIB, a Proof-of-Stake committee evaluates CTI content using a deterministic majority rule, and Proof-of-Work is applied only to anchor the accepted decision to time through a verifiable computational proof. This design increases accountability and makes post-approval suppression or reordering of CTI costly and observable, without requiring Proof-of-Work to revalidate content. The framework is evaluated through an implemented prototype and a structured multi-phase workflow, supported by measurable performance metrics and quantitative security analysis. The contributions of this work include the design of a sequential hybrid consensus model tailored for CTI sharing, an implementation-backed CTI representation using STIX with canonicalization and cryptographic hashing, a complete end-to-end prototype with reproducible execution phases, an empirical performance evaluation using throughput and latency metrics, and a quantitative security evaluation against majority attacks using analytical modeling and Monte Carlo simulation. The work also introduces an α-constrained effective hash-power model to study how economic commitment can limit hash-power-dominated attacks in hybrid consensus systems.
The distinction between CTIB and prior blockchain-based CTI systems lies in its separation of CTI content validation from publication anchoring. Many prior systems use blockchain mainly as an immutable storage or audit layer, whereas CTIB treats validation and anchoring as two separate security functions. The PoS stage addresses content-level acceptance, while the PoW stage provides temporal anchoring for accepted decisions. This distinction enables the framework to evaluate both application-level publication performance and majority control attack feasibility under a unified prototype. Accordingly, CTIB should be interpreted as a prototype-level framework for studying hybrid consensus CTI publication, not as a production-ready CTI exchange platform.
The remainder of this paper is organized as follows. Section 2 reviews CTI sharing challenges and related blockchain-based approaches. Section 3 presents the CTIB architecture, methodology, and hybrid PoS → PoW workflow. Section 4 describes the prototype implementation and execution process. Section 5 reports performance benchmarking results and evaluates majority-attack feasibility using analytical modeling and Monte Carlo simulation. Section 6 discusses the prototype scope and limitations. Section 7 compares CTIB with existing CTI–blockchain approaches. Section 8 concludes the paper and outlines future work.

2. Background and Related Work

This section reviews existing research related to CTI sharing and blockchain-based security systems. It discusses the main challenges in CTI sharing, the role of blockchain technology in addressing these challenges, the limitations of single-consensus mechanisms, and prior work on hybrid consensus systems. Finally, it identifies the research gap that motivates the CTIB framework proposed in this paper.

2.1. Related Work

CTI sharing has been widely studied as a mechanism to improve collective cybersecurity defense. Early CTI platforms relied on centralized repositories and trusted communities, where threat information was exchanged through proprietary systems or manual reporting processes. While these approaches improved situational awareness, they introduced strong trust dependencies and single points of failure. To address interoperability, structured CTI standards such as STIX and TAXII [6] were introduced and are now widely referenced in the literature for automated intelligence exchange and integration with SOC and SIEM platforms [7]. However, these standards assume a trusted distribution infrastructure and do not, by themselves, address integrity, auditability, or timely publication.

To improve trust and data integrity, several studies proposed blockchain-based CTI sharing systems. These systems leverage the immutability of distributed ledgers to record CTI submissions and enable tamper detection using cryptographic hashes. Smart contracts are often used to manage CTI submission, access control, and, in some cases, incentive mechanisms. Some proposals integrate structured intelligence formats such as STIX, while others rely on permissioned blockchain environments to control participation. Despite these contributions, most blockchain-based CTI systems remain conceptual or are implemented as limited proof-of-concept platforms. Their evaluations typically focus on architectural design rather than measurable performance, and few report application-level latency or throughput metrics relevant to real CTI workflows [8].
In addition to blockchain-based trust mechanisms, recent research increasingly explores artificial intelligence methods for preliminary filtering, credibility assessment, and malicious content analysis. Chechkin, Pleshakova, and Gataullin [9] proposed a hybrid neural network transformer for detecting and classifying destructive content in digital space, demonstrating the relevance of AI-assisted content screening in cybersecurity workflows. Such methods are complementary to CTIB rather than competing alternatives. AI-based models can support pre-validation, triage, and anomaly detection before PoS committee review, while CTIB focuses on decentralized validation, integrity preservation, and temporal anchoring of accepted CTI submissions. Integrating AI-based credibility scoring into the PoS validation stage is, therefore, identified as an important future extension.
A key limitation of many existing CTI blockchain approaches is reliance on a single-consensus mechanism. Proof-of-Work-based systems inherit vulnerabilities related to majority hash-power attacks and censorship by mining dominance, while Proof-of-Stake-based systems may suffer from validator collusion, governance capture, or censorship by delay. Several surveys explicitly note that single-consensus designs do not provide sufficient protection against coordinated adversarial behavior in intelligence sharing environments.
Hybrid consensus mechanisms have been explored primarily in the cryptocurrency and general blockchain security literature. Systems such as Decred [10] and TwinsCoin [11] combine Proof-of-Work and Proof-of-Stake to increase the cost of majority attacks by requiring adversaries to control multiple resources simultaneously. These studies demonstrate that hybrid designs can improve security and governance robustness. However, they are not designed for CTI sharing and do not address CTI-specific workflows, such as content validation, standardized intelligence representation, or publication timing guarantees. Their evaluations focus on block-level or protocol-level metrics rather than application-level performance. Overall, existing research provides important foundations in CTI standardization, blockchain-based trust mechanisms, and hybrid consensus security. However, no prior work combines standardized CTI representation, hybrid PoS–PoW consensus, a working end-to-end prototype [12], and quantitative performance and security evaluation within a single framework. CTIB addresses this gap by integrating structured CTI validation with sequential hybrid consensus and validating the design through reproducible benchmarks and probabilistic majority-attack analysis tailored specifically to CTI publication workflows.

Cyber threat intelligence sharing aims to help organizations exchange information about threats, vulnerabilities, and attacks in order to improve collective defense. Despite its importance, CTI sharing faces several practical challenges. One major challenge is a lack of trust. Organizations receiving CTI often cannot verify the accuracy, origin, or integrity of the shared data. This leads to hesitation in using external intelligence and reduces the overall effectiveness of collaboration. Another challenge is timeliness. CTI is most valuable when it is shared quickly.
However, many existing platforms rely on manual validation, centralized approval, or delayed publication processes. These delays reduce the defensive value of the intelligence, especially in fast-moving attacks such as ransomware [1]. Centralization is also a key problem. Most CTI platforms depend on centralized servers or trusted intermediaries. This creates single points of failure and makes the system vulnerable to outages, censorship, or manipulation by the platform operator. In addition, a lack of standardization limits interoperability. Different CTI formats and proprietary data models make it difficult to integrate intelligence across tools and organizations. Although standards such as STIX exist, they are not consistently enforced or validated across sharing platforms.
The prior works that apply blockchain technology to the sharing of threat or intelligence feeds highlight substantial variation in scope, technical depth, and alignment with established CTI practices. A first group of papers explicitly references blockchain as an enabling technology for sharing threat-related information, but leaves many critical design aspects unspecified. For example, the work in [13] emphasizes the relevance of blockchain in addressing CTI-related challenges and acknowledges the importance of structured threat intelligence by referring to STIX. However, it does not define the type of blockchain proof or consensus mechanism, nor does it discuss any reward or incentive model. This limits the practical interpretability of the approach, as consensus and incentives are central to blockchain operation. A similar pattern appears in [14] and the related studies [15,16,17,18,19], which draw heavily on blockchain concepts for alert sharing but omit details about proof types, rewards, and standardized CTI formats. These works are, therefore, best characterized as alert-sharing systems rather than full CTI platforms, as they do not address the structured representation, exchange, or governance of intelligence feeds.
A second group of studies specifies a blockchain proof mechanism but focuses primarily on basic threat or alert dissemination without adopting recognized CTI standards. Papers such as [20,21,22,23,24] fall into this category. They explicitly mention a blockchain proof type, indicating a clearer understanding of the underlying distributed ledger mechanics. In some cases, rewards or incentives are also discussed, as in [21,22]. Nevertheless, the absence of STIX, TAXII, or CybOX support suggests that these approaches are oriented toward sharing raw alerts or security events rather than structured, machine-consumable intelligence. While these works contribute by clarifying consensus or incentive aspects, their lack of standardized formats limits interoperability and reuse across heterogeneous CTI ecosystems.
In contrast, a smaller but more mature set of papers integrates blockchain with established CTI standards, indicating a stronger alignment with real-world intelligence-sharing requirements. Studies such as [25,26,27,28,29,30,31,32,33] explicitly adopt STIX and, in several cases, TAXII and CybOX. This choice supports structured, semantically rich threat intelligence exchange and facilitates interoperability between organizations and platforms. Among these, some works still do not specify the blockchain proof type or incentive model, which remains a limitation from an implementation perspective. However, their use of standardized formats represents a clear strength, as it enables consistent representation, transport, and automation of CTI feeds beyond ad hoc alert sharing.
The most comprehensive approaches are exemplified by works such as Gong and Lee [34], as well as [27,30,31,35], which combine explicit blockchain proof mechanisms with CTI standards and, in some cases, reward or incentive schemes. In particular, ref. [34] stands out by proposing a blockchain-based CTI feed system that incorporates smart contracts for intelligence sharing and rating. This design not only specifies the proof mechanism but also introduces a structured way to assess and incentivize the quality of shared intelligence, addressing trust and reputation concerns that are central to collaborative CTI environments. These studies are clearly designed as full CTI systems rather than simple alert-sharing platforms, as they consider governance, incentives, and standardized data models together. Overall, this body of work shows that while many studies reference blockchain as a promising solution for threat or intelligence sharing, a large portion of the literature remains at a high level and incomplete. Many works omit essential details such as the consensus or proof type, reward mechanisms, or standardized CTI formats, which limits their practical applicability. Only a subset of studies presents more complete and implementable designs that jointly address blockchain mechanics, incentives, and structured CTI standards. This gap highlights the need for integrated approaches that balance performance and resource usage, define clear incentive models for participants, and adopt widely accepted CTI formats to enable interoperable and trustworthy intelligence sharing, as illustrated in Table 1. More details and information regarding Table 1 are in [36].

2.2. PoS vs. PoW Limitations

Blockchain systems typically rely on a single-consensus mechanism, most commonly PoW or PoS [37]. Each approach has strengths but also important limitations. PoW provides strong security through computational effort, but it is resource-intensive and vulnerable to majority hash-power attacks. An attacker controlling sufficient mining power can censor transactions or reorder published data. PoW systems can also suffer from high latency and energy consumption. PoS improves efficiency by replacing computational work with an economic stake. However, PoS systems are vulnerable to validator collusion, governance capture, and censorship by delay, where a small group of validators can approve a transaction but postpone its publication without leaving a strong cryptographic trace [38]. For CTI sharing, these limitations are critical. CTI systems require both correctness (the shared content is valid) and timely publication (intelligence is released without unjustified delay). A single-consensus mechanism does not adequately protect both properties at the same time.

Hybrid consensus systems combine more than one consensus mechanism to improve security and resilience. Several blockchain systems in the cryptocurrency domain have explored hybrid PoW and PoS designs. Examples such as Decred and TwinsCoin use a hybrid consensus to reduce the risk of majority attacks by requiring an attacker to control both mining power and stake. These systems demonstrate that hybrid designs can raise attack costs and improve governance robustness. However, these systems are designed for cryptocurrency or general blockchain governance, not for CTI sharing. They do not address CTI-specific workflows such as intelligence validation, standardized CTI representation, or publication timing guarantees.
Most hybrid consensus studies focus on block-level metrics such as block time, voting rounds, or protocol overhead. They do not report application-level performance metrics such as end-to-end latency or throughput for data-sharing workflows. Based on the reviewed literature, several gaps can be identified. First, CTI-focused blockchain systems often rely on a single-consensus mechanism or permissioned blockchains. While they support structured intelligence formats such as STIX, they do not provide strong resistance to majority attacks or censorship under adversarial conditions. Second, hybrid consensus systems demonstrate improved security in cryptocurrency contexts but do not integrate CTI workflows or standardized intelligence representation. Their evaluation focuses on protocol behavior rather than application-level performance. Third, many proposed systems lack implementation-backed evaluation. They do not report measurable latency, throughput, or probabilistic security metrics, making it difficult to assess practical feasibility. This paper addresses these gaps by proposing CTIB, a hybrid PoS → PoW framework designed specifically for CTI sharing. CTIB integrates standardized CTI representation (STIX 2.1), separates content validation from temporal anchoring, and provides a working prototype with reproducible performance benchmarks and quantitative security evaluation.
Overall, most existing blockchain-based CTI solutions focus mainly on immutability and auditability. However, many of these systems treat CTI submissions as ordinary blockchain transactions without applying explicit checks on the quality, relevance, or correctness of the shared intelligence. As a result, low-quality or misleading CTI can be permanently recorded without proper validation. In addition, several proposals rely on permissioned environments that assume participating entities are inherently trusted. While this simplifies deployment, it limits the system’s ability to reason about coordinated misbehavior or partial corruption among validators. From a performance perspective, prior work often reports only limited blockchain-level metrics, such as block throughput or confirmation time. These metrics are not directly comparable to real-world CTI sharing workflows, which are typically driven by API-based submission, validation, and consumption processes. Finally, security analyses in existing studies mostly discuss generic blockchain threats. They rarely model the probability of coordinated majority compromise specifically during the CTI submission and approval process, where content validation decisions are actually made. These limitations motivate the CTIB design. CTIB follows a sequential approach in which a small PoS committee first performs content-level validation to assess the quality and usefulness of CTI reports. After acceptance, a lightweight PoW anchoring step binds the decision to a verifiable time reference and increases the cost and visibility of censorship or intentional delay.

2.3. Permissioned and Permissionless Blockchains in CTI Sharing

Hyperledger Fabric is a permissioned, enterprise-oriented blockchain framework designed for consortium environments where participants are known, authenticated, and governed by organizational policies. Unlike permissionless blockchains that enable open participation through economic consensus mechanisms such as Proof-of-Work or Proof-of-Stake, Fabric restricts network access through digital identities, certificate authorities, and fine-grained access control policies, with participation, data visibility, and transaction rights explicitly defined and enforced at the architectural level. This permissioned model offers clear advantages for enterprise CTI sharing, where confidentiality, accountability, and regulatory compliance are paramount. Fabric supports structured CTI exchange through established standards such as STIX and TAXII while providing robust governance mechanisms via channels and private data collections to precisely control data dissemination among trusted organizations. However, Hyperledger Fabric has fundamental limitations for open, adversarial CTI sharing environments. Despite being open source, Fabric prioritizes organizational control over trustless decentralization, with security emerging from pre-established trust relationships rather than economic incentives or majority-based consensus [39]. Fabric does not employ quantifiable consensus resources such as hash power or stake, and its ordering service assumes organizational cooperation rather than Byzantine fault tolerance or economically motivated adversaries. Consequently, probabilistic security modeling, including 51% attack analysis, censorship-by-delay evaluation, or majority resource control assessments, cannot be meaningfully applied. Fabric’s architecture explicitly rejects open community-driven participation, making it unsuitable for public CTI ecosystems where unknown entities may contribute valuable intelligence. 
These limitations motivate alternative designs that leverage permissionless or hybrid consensus mechanisms to explicitly model and mitigate adversarial behavior in CTI publication workflows, as detailed in Table 2 [40].
Permissioned blockchains emphasize governance, identity, and controlled confidentiality, whereas permissionless blockchains prioritize open participation and trust minimization through economic consensus mechanisms.

3. Methodology Overview

This section describes the design principles and architectural structure of the CTIB framework. The design focuses on separating responsibilities across system layers, standardizing CTI representation, and combining two complementary consensus mechanisms to achieve both correctness and timely publication of intelligence.

3.1. Architectural Overview

CTIB is designed as a layered system that combines off-chain processing with on-chain verification. The architecture separates content handling, validation logic, and publication anchoring to balance scalability, integrity, and auditability.
At a high level, CTIB consists of:
  • An off-chain backend layer, responsible for CTI processing, validation coordination, and benchmarking;
  • An on-chain smart contract layer, responsible for recording immutable integrity evidence and final publication status.
This separation allows CTIB to process rich CTI data efficiently while keeping blockchain usage minimal and verifiable.

3.2. On-Chain vs. Off-Chain Components

CTIB clearly separates where data are stored between off-chain and on-chain components. The off-chain layer is used to store complete CTI reports in STIX 2.1 JSON format, along with validator review notes, intermediate validation data, system logs, and experimental results used during evaluation. In contrast, the on-chain layer stores only integrity-critical evidence. This includes the deterministic hash of each CTI submission (feed_hash), the outcome of PoS validation in terms of approval count and publication status, and the PoW anchoring proof fields such as the nonce, resulting hash, and number of mining attempts. By limiting on-chain data to these essential elements, the design avoids unnecessary blockchain growth while still allowing independent verification of CTI integrity and publication timing.
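As an illustration, the on-chain evidence record described above can be modeled as a small data structure; all field names except feed_hash are hypothetical, chosen to mirror the prose rather than the actual CTIB contract schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OnChainEvidence:
    """Integrity-critical evidence anchored on-chain.

    Field names other than feed_hash are illustrative, not taken from
    the CTIB smart contract."""
    feed_hash: str   # SHA-256 of the canonicalized STIX 2.1 bundle
    approvals: int   # PoS approval count (out of the 5-member committee)
    published: bool  # final publication status after anchoring
    nonce: int       # PoW nonce found during anchoring
    pow_hash: str    # hash satisfying the difficulty target
    attempts: int    # number of mining attempts performed

# Example record for an accepted, anchored submission (values invented).
record = OnChainEvidence(
    feed_hash="3f2a" * 16, approvals=4, published=True,
    nonce=18_553, pow_hash="0000f1...", attempts=18_554,
)
```

Everything else (the full STIX document, validator notes, logs) stays off-chain, so the per-submission on-chain footprint remains a handful of fixed-size fields.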

3.3. Backend and Smart Contract Interaction

When a CTI report is submitted through the API, the backend converts the input into a STIX 2.1 bundle, applies canonicalization and hashing, and then coordinates validation by the PoS committee. For submissions that are approved, the backend performs the PoW anchoring step and submits the final evidence record to the smart contract. The smart contract serves as an immutable record of validation outcomes and publication status. It does not validate CTI content and does not store full CTI reports. Instead, it records the evidence produced by the backend and enforces basic rules related to publication state and reward accounting. All interactions between the backend and the blockchain are carried out through an Ethereum-compatible JSON-RPC interface.

3.4. STIX-Based CTI Representation and Hash-Based Integrity

CTIB represents cyber threat intelligence using the STIX 2.1 standard to support interoperability and smooth integration with existing SOC and SIEM platforms. Before any validation or anchoring takes place, each STIX bundle is converted into a canonical JSON form. Canonicalization ensures that the same CTI content always results in the same serialized representation, regardless of differences in formatting or field order.
This step is critical for reproducibility and integrity checking because without canonicalization, identical CTI objects could generate different hashes and weaken trust guarantees. Once canonicalized, the STIX bundle is hashed using SHA-256 to generate a deterministic identifier called the feed_hash. The feed_hash uniquely identifies each CTI submission, allows tamper detection since any content change produces a different hash, and acts as the shared reference used by both PoS validation and PoW anchoring. Only the feed_hash is written to the blockchain, while the full STIX document is kept off-chain to preserve scalability and reduce on-chain storage overhead.

CTIB employs a sequential hybrid consensus architecture in which PoS validation is followed by PoW anchoring. This design reflects the need for two different types of guarantees in CTI sharing.
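A minimal sketch of the canonicalization-and-hashing step, assuming JSON canonicalization via sorted keys and compact separators (CTIB's exact canonicalization rules may differ):

```python
import hashlib
import json

def feed_hash(stix_bundle: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON serialization.

    Canonical form assumed here: lexicographically sorted keys and no
    insignificant whitespace, so that semantically identical bundles
    always serialize (and therefore hash) identically.
    """
    canonical = json.dumps(stix_bundle, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The same content with a different key order yields the same feed_hash.
a = {"type": "bundle", "id": "bundle--1", "objects": []}
b = {"objects": [], "id": "bundle--1", "type": "bundle"}
assert feed_hash(a) == feed_hash(b)
```

Any change to the bundle content, however small, produces a different 64-hex-digit feed_hash, which is the property the tamper-detection argument relies on.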

3.5. Design Principle: Why Two Consensus Layers

CTIB deliberately uses two consensus mechanisms in sequence because CTI sharing requires two distinct guarantees: correctness and credibility of the submitted CTI content, a content-level assessment most appropriately performed by a committee of qualified validators, and temporal anchoring and finality, a publication-level guarantee best enforced through computational proof. The PoS layer evaluates whether a CTI report is correct, relevant, and credible. The PoW layer does not revalidate CTI content; instead, it finalizes and timestamps the PoS decision by anchoring the accepted feed_hash under a defined difficulty. This increases the cost and observability of censorship-by-delay attempts.
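The sequential-compromise argument can be illustrated with a simple probabilistic sketch. Assuming, as a simplification not taken from the paper's full model, that validators are compromised independently with probability p and that the attacker separately gains a PoW hash-power majority with probability q, an attack on a 3-of-5 committee succeeds only when both layers fall:

```python
from math import comb

def committee_capture_prob(p: float, n: int = 5, threshold: int = 3) -> float:
    """P(at least `threshold` of `n` validators compromised), under the
    simplifying assumption of independent per-validator compromise."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold, n + 1))

def sequential_attack_prob(p_validator: float, p_hash_majority: float) -> float:
    """Both layers must be compromised: committee capture AND PoW majority,
    treated as independent events for illustration."""
    return committee_capture_prob(p_validator) * p_hash_majority
```

Because the product is never larger than either factor, the sequential design can only lower the success probability relative to a single-layer system under these assumptions; the paper's own analysis additionally uses Monte Carlo simulation and an α-constrained hash-power model.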
For the PoS committee logic and majority acceptance, the PoS layer in CTIB uses a fixed-size committee of five validators with an acceptance threshold of three approvals out of five. In the implemented prototype, the validator set is seeded during system initialization, and the committee composition is deterministic to support reproducible experiments; dynamic selection based on stake or reputation is treated as a future extension. This committee-based design reduces the risk of individual validator dominance and allows probabilistic analysis of committee capture. For PoW anchoring and temporal finality, once a CTI submission is approved by the PoS committee, the PoW layer is activated: it performs a nonce search over the approved feed_hash, applies a configurable difficulty target, and records the resulting proof on-chain. PoW in CTIB is used only for anchoring, not for content validation or block production. Its role is to bind the PoS decision to time and make suppression or reordering economically costly and detectable.

3.6. Formal CTI Validation Criteria for the PoS Committee

To clarify the PoS validation stage, CTIB defines a structured validation policy for CTI submissions. In the current prototype, validator decisions are represented as binary approval outcomes to support reproducible benchmarking and probabilistic security analysis. For practical deployment, each validator decision can be derived from a weighted scoring function over five criteria:
S_j = w_1 F_j + w_2 I_j + w_3 R_j + w_4 C_j + w_5 X_j
where S_j is the score assigned by validator j; F_j represents format validity, including STIX 2.1 syntactic correctness and required-field completeness; I_j represents IoC validity, such as correct IPv4, domain, hash, URL, or CVE structure; R_j represents operational relevance to known threat categories or attack contexts; C_j represents source credibility and contributor history; and X_j represents cross-source consistency with external CTI repositories or prior submissions. Each criterion is normalized to [0, 1], and the weights satisfy:
Σ w_i = 1.
A validator approves a CTI submission when:
S_j ≥ θ,
where θ is the validator acceptance threshold.
At the committee level, CTIB accepts a submission if at least three out of five validators approve it:
Accept(feed) = 1 if Σ Approval_j ≥ 3, otherwise 0.
This formalization separates the conceptual role of PoS validation from the prototype implementation. The current prototype evaluates the committee threshold and publication workflow, while calibrated scoring weights, expert-labeled datasets, external CTI feed integration, and inter-validator consistency metrics such as Cohen’s kappa [41] or Fleiss’ kappa [42] are left for future empirical validation.
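The scoring function and committee rule above can be expressed compactly in code. The weight vector, θ value, and criterion scores below are illustrative assumptions, not calibrated prototype parameters:

```python
def validator_score(F, I, R, C, X, weights=(0.30, 0.25, 0.20, 0.15, 0.10)):
    """S_j = w1*F_j + w2*I_j + w3*R_j + w4*C_j + w5*X_j, each criterion in [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to 1
    return sum(w * v for w, v in zip(weights, (F, I, R, C, X)))

def validator_approves(score, theta=0.6):
    """A validator approves when S_j >= theta."""
    return score >= theta

def committee_accepts(approvals, threshold=3):
    """Accept(feed) = 1 iff at least `threshold` validators approve."""
    return sum(bool(a) for a in approvals) >= threshold
```

For instance, under these illustrative weights, a well-formed submission with strong IoC validity but an unknown contributor (C_j = 0.2) can still clear θ = 0.6 if the remaining criteria score highly.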

3.7. ABI and Smart Contract Interfacing

CTIB uses an Ethereum-compatible toolchain to connect the backend and the blockchain. The backend interacts with the smart contract through Ethereum JSON-RPC, which provides read/write access to blockchain state, and the Application Binary Interface (ABI), which defines the contract’s functions, parameters, and return types. The ABI acts as a formal interface that allows the backend to encode function calls and decode contract responses consistently across deployments [43]. In terms of design implications, the sequential PoS → PoW pipeline requires an attacker to compromise two different resource types, validator influence and computational power, while restricting PoW usage to anchoring operations to preserve efficiency. This hybrid architecture provides the foundation for the prototype implementation and the performance and security evaluation presented in the next section.

4. Prototype Implementation

The CTIB prototype addresses integrity and publication accountability in CTI sharing through standardized CTI representation and blockchain-based integrity evidence. CTI submissions are converted into STIX 2.1 bundles, canonicalized, and hashed (SHA-256) to produce a deterministic feed_hash. The prototype enforces an on-chain/off-chain separation: only integrity-critical evidence (feed_hash, PoS approvals and status, and PoW proof fields) is recorded on-chain, while full STIX documents and logs are stored off-chain. Validation and anchoring are treated as separate responsibilities: a PoS committee validates CTI content, then PoW anchors the accepted decision. End-to-end encryption, digital signature verification, TAXII transport, and production-grade admission control are treated as deployment extensions rather than fully implemented prototype features. Figure 1 illustrates the complete end-to-end operational workflow of the CTIB framework, from threat detection to intelligence dissemination. The workflow begins with IRDS4C [44], where ransomware or intrusion activity is detected using deception-based mechanisms. Once malicious behavior is identified, a CTI report is generated and transformed into a structured representation using the STIX 2.1 standard. To ensure scalability, full CTI content is stored off-chain, while canonical JSON encoding is applied prior to hashing to guarantee deterministic identifiers. The resulting cryptographic hash serves as the reference for subsequent validation and anchoring steps. The first consensus stage is the PoS layer, where a committee of five validators independently evaluates the CTI submission. Validators assess correctness, relevance, and credibility, and a submission is accepted if at least three validators approve it. Rejected submissions are discarded without progressing to the anchoring stage. 
Accepted CTI reports are then forwarded to the PoW layer, which performs temporal anchoring by computing a cryptographic hash over the CTI identifier and a nonce subject to a difficulty constraint. This step binds the PoS decision to a verifiable point in time and increases resistance to censorship by delay. Finally, the anchored CTI hash and validation metadata are recorded on-chain, making the intelligence immutable and auditable. Authorized participants can then consume the validated CTI feeds, ensuring that shared intelligence is both verified and temporally anchored. This workflow highlights the separation of responsibilities between content validation and finalization, which is a central design principle of CTIB.

4.1. Design and Architecture of the CTIB Framework

The CTIB prototype was implemented using Solidity smart contracts deployed on a local Hardhat Ethereum development network (Hardhat v2.28.0), enabling deterministic validation rules and immutable recording of integrity evidence. The off-chain control plane was implemented in Python (Python 3.12.3) using FastAPI to expose a lightweight service interface for CTI submission and status retrieval, while web3.py was used to encode ABI calls and interact with the Ethereum JSON-RPC endpoint for reading and writing contract state. The JavaScript runtime environment used Node.js v22.21.0 and npm 10.9.4. To support reproducibility in evaluation, the validator committee was instantiated from a seeded, predefined validator set during initialization, ensuring that the same committee composition and threshold behavior could be reproduced across benchmark runs and security simulations.
In this section, the term genesis block is used in a conceptual sense to describe the first logical CTI record schema associated with each CTI submission rather than the literal blockchain network genesis block. The CTIB prototype does not modify the underlying Ethereum block structure. Instead, it anchors integrity-critical CTI metadata on-chain via smart contracts, while the full CTI content is stored off-chain. This logical record schema defines the minimum set of metadata required to ensure deterministic identification of CTI content, traceability of PoS validation decisions, and cryptographic anchoring of publication time using PoW. The design intentionally separates data from blockchain mechanics, allowing CTIB to remain compatible with standard Ethereum tooling. In the implemented CTIB prototype, the full STIX document is stored off-chain, while the blockchain layer stores integrity-critical metadata only. Specifically, the prototype records the CTI content hash (feed_hash), the PoS committee approval outcome (e.g., approvals count and published status), and the PoW anchoring proof fields (pow_nonce, pow_hash, and pow_attempts) as on-chain verifiable evidence. A representative example of the implemented CTIB “final approved record” returned by the API is shown below in Figure 2 (from the prototype run report), demonstrating the concrete fields produced and used during execution: cti_id, feed_hash, approvals, published, pow_nonce, pow_hash, pow_attempts, and end-to-end timing breakdown. In the implemented CTIB prototype, the following fields are recorded and returned by the backend API and stored on-chain as integrity evidence:
  • cti_id: unique identifier of the CTI submission;
  • feed_hash: SHA-256 hash of the canonical STIX bundle;
  • approvals: number of PoS validator approvals; in the prototype, a committee of five validators is drawn from a pool of validators;
  • published: final publication status;
  • pow_nonce: nonce satisfying PoW difficulty;
  • pow_hash: resulting PoW hash;
  • pow_attempts: number of hash attempts.
The full STIX document, validator notes, and logs are stored off-chain. This division is reflected directly in the prototype output shown in Figure 2 and Figure 3 and in the API response returned by /cti/submit. These principles motivate the adoption of a hybrid consensus model in which PoS and PoW serve complementary roles rather than competing ones. The genesis block header stores essential metadata, including the previous block hash (empty for the genesis block), a creation timestamp, and aggregate counters describing the CTI content. Each CTI feed is represented by a cryptographic hash of a STIX-formatted intelligence bundle, a reference to the full report, a list of Indicators of Compromise (IoCs), severity and attack-type metadata, and the digital signature of the contributor. Validators append structured quality reviews containing cryptographic hashes, evaluation scores, review notes, and validator signatures. Aggregated quality metadata is recorded to support transparency and auditability.

4.2. Threat Model, Assumptions, and Scope

CTIB considers adversaries who attempt to manipulate CTI publication in different ways. These include submitting false or low-quality indicators, suppressing valid CTI, delaying accepted reports, or influencing consensus participants. The main threat evaluated quantitatively in this work is the majority-control (51%) attack because it directly affects the core function of CTIB: secure and timely CTI publication. In such attacks, an adversary may try to manipulate, delay, reorder, or control the anchoring of accepted CTI reports. However, real-world CTI ecosystems face a broader set of threats, including Sybil attacks (fake identities), validator collusion, false IoC submission, reputation poisoning, denial-of-service (DoS) attacks on the API layer, and metadata leakage from on-chain observations. To address this, the threat model clearly distinguishes between threats that are evaluated quantitatively in the current prototype and threats that are considered at the design or deployment level. The current prototype is implemented in a controlled local environment using an Ethereum-compatible development network and a predefined validator set to ensure reproducibility. Therefore, several important security aspects are not evaluated, including open-network Sybil resistance, production-level admission control, distributed fault tolerance, API-level DoS resilience, encryption workflows, and TAXII-based transport. These aspects are treated as deployment extensions and future research directions rather than implemented features in the current prototype, as illustrated in Table 3.

4.3. Hybrid Consensus Architecture (PoS → PoW)

CTIB uses two consensus mechanisms in sequence because the system requires two different guarantees. First, the PoS committee provides content-level validation and credibility checking by expert validators, deciding whether the submitted CTI is acceptable for publication. Second, PoW anchoring timestamps and finalizes the accepted decision by binding the approved feed_hash to a verifiable nonce search under a difficulty target. In this design, PoW does not revalidate CTI content; it anchors the PoS decision to strengthen finality and increase the cost and observability of censorship by delay. Unlike blockchain systems that rely on a single-consensus mechanism, CTIB clearly separates content evaluation from publication anchoring. In this architecture, PoS determines whether a CTI report should be accepted, while PoW determines when the accepted decision becomes immutably recorded on the blockchain.
In the PoS layer, CTIB adopts a committee-based validation model. For each CTI submission, the system evaluates the report using a fixed-size PoS committee consisting of five validators, with an acceptance threshold of three approvals out of five. In the current prototype implementation, the committee is formed from the registered validator set seeded during system initialization, ensuring reproducible and deterministic evaluation behavior during experimentation. The selected committee configuration balances fault tolerance, validation latency, and decision fairness while preventing any single validator from dominating the validation process. The PoS layer is responsible exclusively for assessing CTI content quality, relevance, and credibility. Once a CTI report satisfies the PoS acceptance rule, the PoW layer is activated. In contrast to traditional PoW blockchains, where mining determines both block validity and ordering, PoW in CTIB is used solely to anchor the PoS decision. During PoW anchoring, miners compute a cryptographic hash over the accepted CTI hash combined with a nonce, subject to a configurable difficulty target. Successfully finding a valid nonce produces a proof of computational effort that is recorded on-chain. This anchoring mechanism provides two essential properties: (i) verifiable time binding, linking the PoS decision to a specific and auditable point in time; and (ii) resistance to censorship by delay, increasing the cost and observability of suppressing or postponing already-approved CTI reports. Importantly, PoW does not revalidate CTI content; it finalizes and timestamps the outcome of PoS validation, ensuring integrity and availability guarantees without duplicating review.
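A minimal sketch of the anchoring step, assuming a leading-zero-bit difficulty check over SHA-256; the exact hash-input encoding used by the prototype is not specified here, so the feed_hash/nonce concatenation below is an assumption:

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def pow_anchor(feed_hash: str, difficulty_bits: int, max_nonce: int):
    """Bounded nonce search: find a nonce such that
    SHA-256(feed_hash || nonce) has >= difficulty_bits leading zero bits.
    Returns (pow_nonce, pow_hash, pow_attempts)."""
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{feed_hash}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return nonce, "0x" + digest.hex(), nonce + 1
    raise RuntimeError("no valid nonce found within max_nonce")
```

The expected number of attempts grows roughly as 2^difficulty_bits, which motivates setting the search bound several orders of magnitude above this expectation.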

4.4. Alpha-Constrained Effective Hash Power

The α-constrained effective hash-power formulation is adapted from the allowance-based hybrid PoW/PoS model in Ref. [45]. In this work, the model is used only as an analytical tool to study how stake-based economic commitment could restrict the influence of raw mining power. It is not enforced by the implemented CTIB prototype.
Let h_i denote the raw hash power of miner i, s_i denote the stake of miner i, and S denote the total stake in the system. The model computes an effective hash power h_i′ by limiting each miner’s contribution according to a stake-based allowance. The objective is to obtain the total effective hash power H′ while ensuring that no miner contributes more effective mining influence than permitted by its allowance.
Equations (1)–(4) follow the α-constrained allowance formulation in Ref. [45] and were previously discussed in our earlier CTI-blockchain framework article [36]. In the present manuscript, the formulation is used only as an analytical governance abstraction and is not enforced as a runtime CTIB mechanism.
The formulation is summarized as follows:
Equation (1):
H′ = Σ_i h_i′
Equation (2):
h_i′ ≤ h_i, ∀i
Equation (3):
f(s_i) = min(α × s_i / S, 1)
Equation (4):
h_i′ ≤ f(s_i) × H′, ∀i
where:
  • H′ is the total effective hash power of the network;
  • h_i is the raw hash power of miner i;
  • h_i′ is the effective hash power of miner i;
  • s_i is the stake of miner i;
  • f(s_i) is the maximum effective mining share allowed for miner i;
  • S is the total stake in the system; and
  • α is a system-level governance parameter.
Because the allowance depends on stake share, dividing the same stake across multiple identities does not increase the miner’s total permitted influence. In this paper, the model is used to compute illustrative effective mining shares and to examine how different α values affect attacker influence under majority-attack assumptions.
The α parameter represents a governance trade-off. Smaller α values impose stricter coupling between stake and effective hash power, reducing the influence of miners with high raw hash power but low stake. Larger α values relax this constraint and allow raw hash power to contribute more strongly. In Section 5, we evaluate α ∈ {1, 2, 3, 5} to show how the attacker’s effective PoW share changes under different governance settings.
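As a sketch under the stated assumptions, Equations (1)–(4) can be evaluated numerically with a simple fixed-point iteration; the miner hash-power and stake values below are illustrative, not measured:

```python
def effective_hash_power(raw_hash, stakes, alpha, max_iter=500, tol=1e-12):
    """Apply f(s_i) = min(alpha * s_i / S, 1) and iterate
    h_i' = min(h_i, f(s_i) * H') until H' = sum(h_i') stabilizes."""
    S = sum(stakes)
    caps = [min(alpha * s / S, 1.0) for s in stakes]
    H_eff = float(sum(raw_hash))  # start from the raw total
    for _ in range(max_iter):
        h_eff = [min(h, c * H_eff) for h, c in zip(raw_hash, caps)]
        new_total = sum(h_eff)
        converged = abs(new_total - H_eff) < tol
        H_eff = new_total
        if converged:
            break
    return h_eff, H_eff

# Miner 0 has 50% of the raw hash power but only 10% of the stake; with
# alpha = 2 its effective share converges to alpha * s_0 / S = 0.2.
h_eff, H_eff = effective_hash_power([50.0, 30.0, 20.0], [10.0, 40.0, 50.0], alpha=2)
```

Because the allowance depends only on stake share, splitting the same stake across Sybil identities leaves the summed cap, and hence the attacker's effective share, unchanged.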

4.5. CTI Representation and STIX Integration in Smart Contracts and Backend Integration

CTIB represents CTI submissions using the STIX 2.1 standard. Each submission is converted into a minimal STIX bundle and serialized using canonical JSON encoding to ensure deterministic hashing. A SHA-256 hash is computed over the canonical representation and serves as the immutable identifier for the CTI report throughout the CTIB workflow. To maintain scalability, full STIX documents are stored off-chain while only cryptographic hashes and validation metadata are recorded on-chain. CTIB, therefore, separates data placement as follows: on-chain storage includes CTI hashes, validator approvals, PoW results, publication status, and reward balances; off-chain storage includes full STIX documents, validator notes, logs, and benchmarking outputs. This separation reduces blockchain storage overhead while preserving integrity and auditability. Smart contracts in CTIB enforce protocol rules, including validation thresholds, state transitions, and reward distribution. They store integrity-critical metadata only and avoid large data payloads. A backend service implemented in Python using FastAPI orchestrates the end-to-end workflow, including CTI submission, STIX conversion, PoS validation, PoW anchoring, and interaction with the blockchain through an Ethereum-compatible JSON-RPC interface. CTIB follows a structured, phase-based workflow from Phase 0 to Phase 10, covering system setup, deployment, validation, anchoring, benchmarking, and security evaluation as illustrated in Table 4. A unified execution script automates these steps, enabling reproducible experiments and independent verification.
The CTIB prototype is implemented using a combination of established blockchain and backend technologies to ensure reproducibility, clarity, and experimental control. Solidity is used to implement the CTIB smart contract, which records integrity evidence, validation outcomes, and anchoring proofs on an Ethereum-compatible blockchain. Hardhat serves as the development and testing framework, providing a local blockchain environment, deterministic accounts, and deployment tooling that enables repeatable experimentation. The backend service is implemented using FastAPI, which exposes RESTful endpoints for CTI submission, validation orchestration, and result retrieval, allowing the full workflow to be exercised at the application level. Interaction between the backend and the blockchain is handled through web3.py, which provides programmatic access to smart contract functions via the Ethereum JSON-RPC interface and decodes contract responses using the ABI. To support reproducibility during evaluation, the Proof-of-Stake validation layer relies on a seeded validator committee, where the validator set is initialized deterministically at system startup. This design choice ensures that experimental results can be repeated and analyzed consistently, while more dynamic committee selection mechanisms (e.g., stake- or reputation-based) are treated as future deployment extensions.

5. CTIB Validation, Performance, and Security Evaluation

The evaluation is designed to measure prototype-level feasibility under controlled conditions rather than production-scale performance. All components run on a single virtual machine using a local Hardhat Ethereum development network and a FastAPI backend. This setup isolates the CTIB workflow and supports reproducibility, but it does not model geographically distributed validators, heterogeneous nodes, nonuniform network latency, unstable communication channels, or live adversarial traffic. Therefore, throughput and latency results should be interpreted as controlled proof-of-concept measurements, not as evidence of deployment-ready scalability.
CTIB evaluation focuses on end-to-end workflow correctness of the sequential PoS → PoW pipeline, application-level performance benchmarking under multiple PoW difficulty profiles, and probabilistic security evaluation against majority (51%) attacks, using analytical modeling and Monte Carlo simulation to estimate the probability of corrupting both consensus layers via the following equation:
Equation (5):
P_CTIB = r_PoS × r_PoW
Sybil resistance, fork handling, and double spending are not empirically tested in the prototype; they depend on deployment-specific admission control and the underlying blockchain platform’s properties, and are, therefore, treated as limitations and future work. The reward accounting mechanism does not impact the measured performance metrics, as it consists of simple state updates within the smart contract. Accordingly, rewards are excluded from throughput and latency evaluation and are analyzed solely as a functional incentive component of the prototype. The CTIB prototype presented in this section is executed on a local Ethereum development network (Hardhat) and orchestrated via a FastAPI backend. The PoS committee rule used in our prototype (five validators, 3/5 threshold) and the PoW anchoring step (nonce search under difficulty_bits) are implemented end-to-end and reported via reproducible run artifacts. However, several security and governance mechanisms described in the broader CTIB design (e.g., full encryption workflows, end-to-end digital signature verification, TAXII transport, and production-grade admission control) are treated as deployment extensions and are not fully evaluated in the current prototype run outputs.
All experiments were performed on a machine with the following specifications: CPU: 8-core Intel Core i7 processor (host system); Memory: 16 GB of RAM; Operating System: Ubuntu 22.04 LTS (64-bit) as the guest OS; Virtualization: VMware Workstation Pro environment running the Ubuntu guest VM. This VMware-based test environment ran the CTIB prototype’s full software stack, including a local Ethereum blockchain network (Hardhat dev network) and the CTIB backend implemented as a FastAPI application server. Running all components on a single virtual machine allowed for controlled, reproducible experiments in an isolated setting (eliminating external network variables) while still exercising the complete CTIB system. The evaluation, therefore, focuses on the correctness of the PoS → PoW workflow, on-chain/off-chain separation with STIX-based hashing, benchmarking under multiple difficulty profiles, and probabilistic security evaluation against majority attacks. This subsection provides a concrete end-to-end execution example from the implemented CTIB prototype before presenting aggregate benchmarking and security results. Algorithm 1 shows a representative end-to-end prototype execution, including the health check (which verifies API readiness and blockchain connectivity), CTI submission, PoS approval, PoW anchoring evidence, and STIX retrieval.
Algorithm 1. Prototype Output Example (End-to-End Execution)

GET /health
→ {"ok": true, "rpc_url": "http://127.0.0.1:8545", "contract_json": "backend/chain/ctib_contract.json"}

Example CTI submission (end-to-end PoS → PoW):
POST /cti/submit?pow_difficulty_bits=12&pow_max_nonce=2000000
Body: {"ioc_type": "ipv4", "ioc_value": "203.0.113.10", "severity": 6, "confidence": 80}
→ {
  "cti_id": 1,
  "feed_hash": "0x0970aec55d752996c781a0af5a6c967de0f7beeeb80fdae41703d60975ad352e",
  "approvals": 5,
  "published": true,
  "pow_nonce": 491,
  "pow_hash": "0x000f2680a88133224e4c3ac73195e4ee8586ff5f27acada22ea99d93e827e9bd",
  "pow_attempts": 492,
  "timings_ms": {
    "stix_ms": 0.1206269999727283,
    "submit_ms": 26.375424999997676,
    "pos_ms": 151.7292320000081,
    "pow_ms": 0.7103070000198386,
    "total_ms": 377.74579899999594
  }
}

STIX retrieval (off-chain storage):
GET /cti/1/stix
→ Returns the STIX 2.1 bundle for cti_id=1 (bundle + indicator object).
This study evaluates CTIB under the threat that most directly challenges its core purpose: control over publication decisions. In distributed systems, this threat is commonly known as the 51% attack, where an adversary gains majority influence over a consensus mechanism and can bias outcomes. In Proof-of-Work systems, this influence comes from hash power; in Proof-of-Stake systems, it comes from voting power or stake. Because CTIB combines both mechanisms, a successful attack requires the adversary to dominate both layers at the same time. The 51% attack was chosen because it captures the worst-case scenario for CTIB. If an attacker can control the majority of decisions, they can delay, suppress, or reorder accepted CTI reports without changing their content. For a CTI publication system, this is the most damaging form of attack, as timely dissemination is as important as correctness. Other attacks do not test this property as directly or as severely.
The evaluation begins with a simple analytical model that expresses attacker success as the product of control in PoS and PoW. This model is easy to interpret and provides a clear baseline. It is then validated using a Monte Carlo simulation, which allows the same probabilistic assumptions to be tested against repeated randomized trials. This combination ensures that the analysis is not only theoretical but also consistent with the implemented behavior of the system.
To further strengthen the analysis, the study introduces an α-constrained effective hash-power model. This model reflects the fact that raw computational power alone should not always translate into proportional influence. By limiting effective PoW power based on economic commitment, the α-model demonstrates how CTIB can reduce the impact of hash-power dominance under realistic assumptions. The parameter α also provides a tunable way to explore governance and security trade-offs.
Other threats, such as Sybil and double-spending attacks, were not selected as primary evaluation targets. Sybil attacks depend mainly on identity admission and governance policies, which are deployment-specific and outside the scope of the CTIB prototype. Double spending is relevant to financial systems, but CTIB does not manage monetary assets. In contrast, majority attacks directly test the integrity and availability guarantees that CTIB is designed to provide. Therefore, we focused on the 51% attack and the α-model. Both were selected because they provide clear, quantifiable, and implementation-relevant ways to evaluate CTIB’s hybrid consensus design. Together, they demonstrate how separating validation from anchoring and constraining effective influence significantly increases resistance to coordinated adversarial control.
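To make the committee-capture term concrete, the sketch below combines a hypergeometric committee-capture probability with Equation (5). The validator pool size, number of attacker-controlled validators, and r_PoW value are illustrative assumptions, not measured prototype parameters:

```python
from math import comb

def committee_capture_prob(pool, malicious, committee=5, threshold=3):
    """Probability that the adversary holds >= threshold of the committee
    seats when the committee is drawn uniformly without replacement."""
    total = comb(pool, committee)
    return sum(
        comb(malicious, k) * comb(pool - malicious, committee - k)
        for k in range(threshold, committee + 1)
    ) / total

# Equation (5): P_CTIB = r_PoS * r_PoW, assuming independent layer compromise.
r_pos = committee_capture_prob(pool=20, malicious=8)  # committee-capture term
r_pow = 0.30  # assumed adversarial share of the anchoring layer
p_ctib = r_pos * r_pow
```

Because P_CTIB is a product of two probabilities below 1, sequential compromise is strictly harder than compromising either layer alone, which is the intuition behind CTIB's hybrid design.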

5.1. Performance Benchmarking Results

CTIB performance was tested under three PoW difficulty profiles. The goal was to see how changing PoW difficulty affects end-to-end CTI publication, while keeping the rest of the workflow unchanged (STIX generation, PoS committee validation, contract calls, and off-chain storage). Validator behavior in the CTIB prototype is modeled using Bernoulli trials [46] to reflect the binary nature of the approval decision and to enable probabilistic analysis. Monte Carlo simulation [45] is then employed to repeatedly execute the modeled validation process and estimate end-to-end success probabilities under realistic stochastic conditions. This combination allows the analytical model to be validated against empirical results and provides a reproducible framework for evaluating security and performance trade-offs in hybrid consensus systems.
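A sketch of this modeling approach, assuming independent Bernoulli approvals with probability 0.80 per validator and the 3-of-5 acceptance rule used in the prototype:

```python
import random
from math import comb

def analytic_accept_prob(p=0.80, n=5, k=3):
    """Binomial probability that at least k of n validators approve."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def monte_carlo_accept_prob(p=0.80, n=5, k=3, trials=100_000, seed=7):
    """Estimate the same probability by simulating committee votes."""
    rng = random.Random(seed)
    accepted = sum(
        sum(rng.random() < p for _ in range(n)) >= k
        for _ in range(trials)
    )
    return accepted / trials
```

With p = 0.80, the analytic acceptance probability is 0.94208, consistent with the high success rates observed in the benchmarks; the Monte Carlo estimate converges to the same value as the number of trials grows.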

5.2. Experimental Setup and Metrics

To evaluate the practical behavior of the CTIB framework, three Proof-of-Work (PoW) difficulty profiles were considered: baseline (difficulty_bits = 8), medium (difficulty_bits = 12), and stress (difficulty_bits = 16). These values were selected to represent low, moderate, and relatively high PoW effort while keeping the system usable for real CTI workflows. For each profile, a benchmark script submits 100 CTI reports to the /cti/submit API endpoint. Every submission follows the complete CTIB processing pipeline. First, the input data is transformed into a STIX 2.1 bundle, which is then canonicalized and hashed using SHA-256 to produce a deterministic feed_hash. This hash serves as the integrity identifier of the CTI report. The submission is then reviewed by a Proof-of-Stake (PoS) committee consisting of five validators. Each validator independently decides whether to approve or reject the submission. If the PoS acceptance threshold is met, a PoW anchoring step is executed to bind the accepted decision to time, and the resulting evidence fields are recorded on-chain.
The PoW process is implemented as a bounded nonce search. The parameter pow_max_nonce is treated as an engineering constraint rather than a strict mathematical limit. It is set to a value several orders of magnitude larger than the expected number of hash attempts (2^{difficulty_bits}) to ensure a high probability of success while keeping computation bounded. In the experiments, pow_max_nonce was set to 2,000,000 for the baseline and medium profiles, and increased to 5,000,000 for the stress profile to maintain a realistic success rate at greater difficulty. Validator behavior in the PoS layer is modeled probabilistically. Each validator’s decision is represented as a Bernoulli trial with a fixed approval probability (APPROVE_PROB = 0.80). This reflects the binary nature of the validation outcome (approve or reject) and enables mathematical analysis of committee behavior. The same probabilistic model is later used to validate analytical results through Monte Carlo simulation, allowing direct comparison between theory and experiment. For the performance evaluation, three primary metrics are recorded during benchmarking: throughput, success rate, and latency. Throughput measures how many CTI submissions are fully processed per minute. It is computed as:
Equation (6)
Throughput = (N / elapsed) × 60
where N = 100 submissions per run, and elapsed is the total wall-clock time (in seconds) required to process the full batch. This metric captures the overall processing capacity of the CTIB pipeline, including API handling, PoS validation, PoW anchoring, and blockchain interaction. The second performance metric is Success Rate. A CTI submission is considered successful if the API returns a normal response and the final record is marked with published = true. In the CTIB backend, this condition is satisfied only when the PoS committee approval threshold is reached, i.e., at least three approvals out of five validators, given an independent approval probability of 0.80 per validator. PoW failure can occur only if no valid nonce satisfying the difficulty constraint is found before reaching pow_max_nonce. In such cases, the backend raises an error, and the submission is marked as unsuccessful. In addition, a client-side timeout (e.g., 180 s) limits how long the benchmark script waits for a response, but it does not affect the internal PoW search limit. The success rate is calculated as:
Equation (7)
Success Rate = (Successful Submissions / Total Submissions) × 100
This metric reflects both PoS acceptance behavior and the bounded nature of PoW anchoring. The last metric is latency (p50 and p95). Latency is measured using the total_ms value reported by the API for each submission, which represents the full end-to-end execution time [47]. Two percentile-based metrics are reported. The first is p50 latency (the median): the typical execution time, with 50% of submissions completing faster than this value. The second is p95 latency: a near worst-case indicator showing the time within which 95% of submissions complete, while only a small fraction experience higher delay. Reporting both p50 and p95 provides insight into normal system behavior as well as tail latency, which is important for time-sensitive CTI publication scenarios.
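Under the definitions above, all three metrics can be computed directly from one run's raw benchmark output. The sketch below assumes a list of per-submission total_ms latencies, a success count, and the batch wall-clock time; the function and variable names are illustrative, not the benchmark script's actual API.

```python
import statistics

def summarize_run(latencies_ms, successes, elapsed_s, n=100):
    """Compute Eq. (6), Eq. (7), and p50/p95 latency for one benchmark run."""
    throughput = (n / elapsed_s) * 60              # feeds per minute, Eq. (6)
    success_rate = (successes / n) * 100           # percent, Eq. (7)
    q = statistics.quantiles(latencies_ms, n=100)  # 99 percentile cut points
    p50, p95 = q[49], q[94]                        # median and tail latency
    return throughput, success_rate, p50, p95
```

For example, a run of 100 submissions finishing in 40 s with 98 successes yields a throughput of 150 feeds/min and a 98% success rate.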

5.3. Single-Run Results (100 Submissions) and Multi-Run Aggregated Results (10 Runs)

In CTIB, each CTI submission passes through three main stages. The first stage is the API stage, which handles data preparation, including receiving the request, converting it to STIX, and producing a canonical JSON form and hash.
The second stage is PoS committee validation, where five validators review the submission; this step usually takes the most time. The third stage is PoW anchoring, where a nonce is computed; in the reported measurements this typically takes less than 1 ms. Increasing difficulty_bits requires more leading zero bits in the PoW hash, so PoW should take longer in theory; however, this matters only if PoW is the main contributor to total latency. In this system, and within the tested difficulty range, overall time is dominated by PoS and backend processing, so difficulty 8 and difficulty 16 yield practically similar end-to-end runtimes. This also explains why the stress run can appear faster than the baseline: the improvement is best attributed to runtime conditions (for example, better CPU scheduling, more efficient use of validator threads, less waiting and context switching, warmer OS caches, or a more favorable request arrival pattern) rather than to any systematic performance benefit from higher PoW difficulty. Therefore, the non-monotonic throughput is reasonable, and the stress result should be treated as a single-run outcome driven by execution behavior, not as evidence that higher PoW difficulty improves scalability. Table 5 summarizes one execution per profile with N = 100 submissions.
These values should be treated as single-run measurements. Because the benchmark is executed in a local environment, system-level factors (CPU scheduling and runtime conditions) can affect throughput and tail latency in a single run. To reduce this run-to-run noise, each profile was additionally executed 10 times under the same settings and the averages reported. Table 6 summarizes the average metrics across the 10 independent runs.
As mentioned in the Abstract and Conclusion, we rely on the multi-run averages in Table 6 rather than the single-run values in Table 5 because the averaged results reduce run-to-run noise caused by local CPU scheduling and runtime conditions.

5.4. Discussion

Across both single-run and multi-run results, the success rate remains high, which shows the CTIB workflow completes reliably under the tested difficulty range. Throughput does not strictly decrease as difficulty increases, which indicates that PoW anchoring is not the only driver of performance. In this prototype, PoS committee processing and backend execution contribute a large portion of total time, while PoW anchoring adds extra work that is bounded by pow_max_nonce. Therefore, changes in runtime conditions can shift throughput and p95 values in a single run. Repeating runs and reporting averages provides a more stable view of system behavior. Overall, these aggregated results confirm that the higher throughput observed in some individual stress runs was caused by favorable runtime conditions rather than improved scalability at higher PoW difficulty. Repeating the experiments across multiple independent runs provides a more stable view of system behavior and supports the conclusion that PoW anchoring introduces a bounded and predictable overhead in the CTIB workflow.
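As a concrete illustration of this bounded overhead, the PoW anchoring step can be sketched as a bounded nonce search over the accepted feed hash. The hash-input format below (feed hash and nonce joined with a colon) is an illustrative assumption, not the prototype's exact encoding.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a 32-byte SHA-256 digest."""
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length() if value else 256

def pow_anchor(feed_hash: str, difficulty_bits: int, pow_max_nonce: int):
    """Bounded nonce search: succeed when the digest has at least
    difficulty_bits leading zero bits; give up after pow_max_nonce tries."""
    for nonce in range(pow_max_nonce):
        digest = hashlib.sha256(f"{feed_hash}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return {"nonce": nonce, "hash": digest.hex(), "attempts": nonce + 1}
    return None  # search exhausted: backend marks the submission unsuccessful
```

With difficulty_bits = 8, a valid nonce is expected after roughly 2^8 = 256 attempts, so a bound of 2,000,000 makes search failure astronomically unlikely, which is consistent with the high success rates observed.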

5.5. Security Evaluation Against 51% Attacks

This evaluation considers majority (51%) attacks as a core threat model for blockchain-based CTI sharing. An adversary may attempt to influence publication outcomes by (i) controlling a sufficient fraction of PoS validation power to approve malicious or low-quality CTI, and/or (ii) controlling sufficient PoW capability to delay, suppress, or reorder anchoring of accepted decisions. CTIB reduces single-layer dominance by separating validation (PoS) from temporal anchoring (PoW). For tractable analysis, an independence assumption is used as a baseline model, where attacker success in the PoS layer and PoW layer are treated as statistically independent. The baseline probability model is intentionally simplified and relies on three assumptions. First, compromise of the PoS validation layer and compromise of the PoW anchoring layer are treated as separate events in the sequential workflow. Second, the attacker must succeed in both layers during the same publication attempt. Third, the baseline model assumes statistical independence between PoS compromise and PoW compromise. This independence assumption is not claimed to fully represent all real-world deployments; rather, it provides a tractable baseline for quantifying the security benefit of requiring two distinct compromises. The model is later refined through committee capture analysis using the hypergeometric distribution and validated through Monte Carlo simulation.
This baseline is then extended using a committee-based PoS model and an α-constrained effective hash-power model. In the baseline model, the probability of attacker success under the hybrid design is modeled as the product of the attacker’s control ratios in PoS and PoW in Equation (5).
where r_PoS is the attacker’s corruption ratio in the PoS layer and r_PoW is the attacker’s effective control ratio in the PoW anchoring layer. For a committee-based PoS model, committee capture is modeled using the hypergeometric distribution, where a committee of size n is sampled without replacement from a validator pool of size N containing K corrupted validators. Under this model, a single-layer system with attacker control ratio r has compromise probability r. In CTIB, if the attacker has equal control r in both layers, the end-to-end compromise probability becomes r². Therefore, reaching an end-to-end compromise probability above 0.51 requires r > √0.51 ≈ 0.714. This does not mean CTIB is immune below that value; rather, it shows that the sequential two-layer requirement raises the effective majority-control threshold under the stated assumptions.
The probability that the attacker controls at least the threshold number of committee seats determines PoS success under the committee rule. Monte Carlo simulation [48] is used to validate the analytical probability formulations under randomized trials. Each configuration is executed using repeated trials (e.g., 20,000 trials per configuration). In the independent model, PoS and PoW outcomes are simulated as Bernoulli events with success probabilities derived from r_PoS and r_PoW. In the committee-based model, PoS success is simulated by sampling committee composition without replacement and checking whether corrupted validators meet the acceptance threshold; PoW success is then simulated as a Bernoulli event using the attacker’s effective PoW ratio. The simulation outputs are compared to analytical results using absolute error and standard error to verify statistical consistency.
Test 1: Test results for CTIB against 51% attack
A “51% attack” is a majority-control problem. In PoW, if an attacker controls a large share of mining power, they can influence what is confirmed and when, including delaying or reordering published records. In PoS, a similar problem appears when an attacker controls enough validators (or voting power) to dominate decisions. This matters for CTI publication because CTI must be shared in a way that is both trustworthy and timely. CTIB reduces this risk by treating security as two consecutive “gates” that must be passed in the same attempt: a PoS acceptance gate followed by a PoW anchoring gate. Therefore, an attacker does not succeed by controlling only one layer; they must compromise both layers together. We start with a simple analytical independent baseline (Equation (5)) to obtain a clear reference point. Let r_pos be the attacker’s control ratio in the PoS layer and r_pow be the attacker’s control ratio in the PoW layer. Under the independent product model, the attacker succeeds only if both layers are compromised, as already defined in Equation (5). So the combined probability is Equation (8):
Equation (8)
p_math_eq5 = r_pos × r_pow
This explains the “square-like” behavior in the ratio table when r_pos = r_pow = r, because then p ≈ r². It also gives a clear tipping-point argument: solving r² ≥ 0.51 gives r ≥ √0.51 ≈ 0.714, as illustrated in Table 7. In other words, under this baseline assumption, an attacker would need roughly 71.4% control in both layers to push the end-to-end success probability above 51%.
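The product model and its tipping point can be checked in a few lines; the function name below is illustrative.

```python
import math

def p_attack(r_pos: float, r_pow: float) -> float:
    """Independent product model: both layers must be compromised (Eq. (5)/(8))."""
    return r_pos * r_pow

# Equal-control tipping point: the smallest r with r * r > 0.51
tipping = math.sqrt(0.51)
```

For r = 0.50 in both layers the end-to-end probability is 0.25, and pushing it above 0.51 requires r ≈ 0.714, matching the analytical argument.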
Table 7 first reports the expected compromise probabilities obtained directly from Equation (5). These values provide a simple analytical baseline, but they do not by themselves verify that the simulation procedure follows the same probabilistic logic. Therefore, the next step is to compare the analytical model with Monte Carlo estimates. Two validation settings are used. The first setting treats PoS compromise and PoW compromise as two independent Bernoulli events. A trial is counted as successful only when both events occur in the same run. This setting verifies that the simulation correctly reproduces the product model in Equation (5). The second setting follows the actual CTIB committee structure more closely. In this case, PoS compromise is determined by sampling a validator committee and checking whether attacker-controlled validators reach the required majority threshold. The PoW condition is then sampled separately as a Bernoulli event. For each configuration, the experiment is repeated for 20,000 trials, and the simulated attack probability is compared with the analytical value using absolute error and standard error.

5.6. Simple Analytical Baseline Theory Math vs. Simulation Model (The Independent Bernoulli Baseline)

We validate this baseline using a Monte Carlo simulation. For fairness across configurations, each table row is evaluated with the same number of trials, fixed to T = 20,000 . In the independent simulation, each trial contains two binary events: PoS compromised (yes/no) and PoW compromised (yes/no). We implement each event as a Bernoulli trial using the common sampling form rng.random() < p , where the outcome is True with probability p . A trial is counted as a successful attack only if both events are True in the same trial. After T trials, the simulated estimate is:
Equation (9)
p_sim_independent = (# successful trials) / T
For example, when r_pos = 0.50 and r_pow = 0.50, the analytical value is p_math_eq5 = 0.25. If the simulation counts 5016 successes out of 20,000, then p_sim = 5016/20,000 = 0.2508, which is close to 0.25, as illustrated in Table 8. This comparison matters because both values represent the same model: close agreement means that our interpretation of Equation (5) and our Monte Carlo implementation are consistent. To make “close” precise, we also report the standard error of the Monte Carlo estimate, as illustrated in Table 8. For a probability estimate p̂ based on T independent trials, the standard error is:
Equation (10)
SE(p̂) = √( p̂ (1 − p̂) / T )
Finally, we clarify why we describe this as a Bernoulli model rather than “just a random function.” In code, a Bernoulli event is implemented using a uniform random number and a threshold (for example, rng.random() < p), but the key point is the meaning: Bernoulli encodes a clean success/failure structure with a defined probability p. It aligns directly with our equations, where each trial represents a well-defined probabilistic event rather than an arbitrary random value, and it gives a principled way to compute uncertainty (standard error) for Monte Carlo estimates, because our estimates are proportions of Bernoulli outcomes. This is why “Bernoulli” is not extra wording; it is the correct statistical description of what the simulation is doing and why the results can be compared fairly against the analytical models, as detailed in Appendix A.
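A minimal sketch of the independent Bernoulli simulation follows; the seed and function name are illustrative, and the seed is fixed only for reproducibility.

```python
import math
import random

def simulate_independent(r_pos, r_pow, trials=20_000, seed=42):
    """Monte Carlo estimate of the product model (Eq. (9)) with its SE (Eq. (10))."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        pos_ok = rng.random() < r_pos   # Bernoulli: PoS layer compromised
        pow_ok = rng.random() < r_pow   # Bernoulli: PoW layer compromised
        if pos_ok and pow_ok:
            successes += 1
    p_hat = successes / trials
    se = math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat, se
```

At r_pos = r_pow = 0.50 and T = 20,000, the estimate lands near 0.25 with a standard error of roughly 0.003, consistent with the reported agreement between theory and simulation.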
Analytical probability vs. hypergeometric distribution modeling and Bernoulli
We also simulate the PoS committee under a 51% attack in a more realistic way. We compute the standard error for both the independent and committee simulations; when the absolute difference between the analytical and simulated values is smaller than, or comparable to, this standard error, we treat the simulation as statistically consistent with the analytical probability at the chosen trial count. We use the hypergeometric distribution because the PoS validation step in CTIB is based on committee selection without replacement from a finite validator pool, which makes hypergeometric modeling the natural choice [49]. In CTIB, a PoS decision is not made by the entire validator set. Instead, a fixed-size committee of validators is randomly selected from the total pool. Each validator in the pool is either corrupted (attacker-controlled) or honest, and once a validator is selected into the committee, it cannot be selected again in the same committee. This is exactly the definition of sampling without replacement, the core assumption of the hypergeometric distribution.
The hypergeometric distribution answers the following question: “Given a finite population with a known number of “success” elements (corrupted validators) and “failure” elements (honest validators), what is the probability that a random sample of fixed size contains a specific number of successes?” This matches the CTIB PoS model precisely. Let N be the total number of validators, K the number of corrupted validators, and n the committee size. If X denotes the number of corrupted validators selected into the committee, then the probability of observing exactly X = x corrupted members is given by the hypergeometric formula. From this, we can compute the probability that the committee contains a corrupted majority, i.e., P ( X t ) , where t is the majority threshold (three out of five in CTIB). Using a simpler distribution, such as Bernoulli or binomial, would be incorrect in this setting. Bernoulli and binomial models assume independent trials with replacement, meaning each draw does not affect the next. In contrast, committee selection in CTIB is dependent; once a corrupted validator is selected, the remaining pool changes, and the probabilities for the next draw are updated. Hypergeometric modeling captures this dependency accurately.
Therefore, the hypergeometric distribution is used because:
  • The validator pool is finite.
  • Committees are sampled without replacement.
  • The attacker’s success depends on how many corrupted validators appear inside a single committee, not just on an average corruption rate.
  • It provides an exact analytical probability for committee capture that aligns with the actual PoS mechanism implemented in CTIB.
This is why p_pos_majority_math is computed using a hypergeometric formulation, and why it serves as the correct analytical baseline for validating the committee-based Monte Carlo simulation results. After validating the baseline, we move to a model that matches CTIB’s PoS rule more closely. In CTIB, PoS is not a single yes/no event with probability r_pos; it is a committee decision. PoS success (for the attacker) depends on whether the attacker captures a majority inside a randomly selected committee. With committee size n = 5 and threshold t = 3, the attacker succeeds in PoS only if the committee contains at least three attacker-controlled validators. Committee selection is done from a finite validator pool without replacement, which is exactly why we use the hypergeometric distribution. If N is the total pool size, K is the number of attacker-controlled validators (often K = r_pos × N), and X is the number of attacker-controlled validators inside the committee, then
Equation (11)
P(X = x) = C(K, x) · C(N − K, n − x) / C(N, n)
and the probability that the attacker captures the committee majority is:
Equation (12)
p_pos_majority_math = P(X ≥ t) = Σ_{x=t}^{n} C(K, x) · C(N − K, n − x) / C(N, n)
This term can differ slightly from r_pos because it depends on sampling effects and the majority threshold. We then combine PoS committee capture with PoW compromise using the same “two gates” logic:
Equation (13)
p_committee_math = p_pos_majority_math × r_pow
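Equations (11)–(13) can be evaluated exactly with integer binomial coefficients; the sketch below uses the standard library, and the function names are illustrative.

```python
from math import comb

def p_pos_majority_math(N: int, K: int, n: int = 5, t: int = 3) -> float:
    """Hypergeometric tail P(X >= t): committee of n drawn without replacement
    from N validators, K of them attacker-controlled (Eq. (12))."""
    return sum(comb(K, x) * comb(N - K, n - x) for x in range(t, n + 1)) / comb(N, n)

def p_committee_math(N, K, n, t, r_pow):
    """Two-gate committee model: committee capture AND PoW compromise (Eq. (13))."""
    return p_pos_majority_math(N, K, n, t) * r_pow
```

For a pool of N = 100 with K = 50 corrupted validators, symmetry gives a committee-capture probability of exactly 0.5, and with r_pow = 0.5 the end-to-end probability is 0.25, matching the worked example.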
A key point is that p_pos_majority_math is a fixed analytical probability for a given table row, but it is not used as a multiplier inside each simulation trial. In the committee simulation, we resample a new committee in every trial and check whether it has a corrupted majority; this produces a trial-by-trial Boolean outcome committee_ok. Separately, we sample PoW compromise as a Bernoulli event with probability r_pow, producing pow_ok. A trial is a successful attack only when (committee_ok AND pow_ok) is True. The simulated estimate is:
Equation (14)
p_sim_committee = (# trials where committee majority AND PoW compromised) / T
A worked example illustrates this logic. For the row r_pos = 0.50 and r_pow = 0.50, the committee-majority probability is near 0.5 (by symmetry), so in T = 20,000 trials we may observe around 10,000 trials where committee_ok = True. For instance, if a committee majority occurs in 10,032 trials, that corresponds to 10,032/20,000 = 0.5016 for the committee-majority event alone. However, p_sim_committee counts only trials where both the committee majority and the PoW compromise happen. If the final number of complete successes is 5042 trials, then p_sim_committee = 5042/20,000 = 0.2521, which matches the example table value. The analytical value for the same row is p_committee_math ≈ 0.5 × 0.5 = 0.25. Therefore, the absolute error is |0.2500 − 0.2521| = 0.0021.
To understand how the committee-based Monte Carlo simulation is constructed, it is important to distinguish clearly between what is fixed analytically and what varies during simulation. For each configuration reported in Table 9, the attacker’s control ratios in the PoS and PoW layers (r_pos and r_pow), the committee size (n = 5), the majority threshold (t = 3), and the total number of trials (T = 20,000) are all fixed inputs. These values define the experimental setting and remain unchanged throughout the entire simulation run for that row. The quantity p_pos_majority_math is first computed analytically using the hypergeometric distribution. It represents the theoretical probability that a randomly selected PoS committee contains a corrupted majority, given the attacker’s control fraction in the validator pool. When, for instance, r_pos = 0.5, the validator pool is symmetric: half of the validators are attacker-controlled and half are honest. Because the committee size is odd and the acceptance rule requires a strict majority, this symmetry implies that the probability of selecting a committee with a corrupted majority is approximately 0.5. This value is a fixed analytical result for that configuration and does not change across trials.
However, this analytical probability is not used as a fixed multiplier inside the Monte Carlo simulation. The simulation does not repeatedly compute 0.5 × (random PoW outcome). Instead, it explicitly models the underlying random processes that the analytical probability summarizes. In each simulation trial, two independent binary outcomes are generated. First, a PoS committee is sampled without replacement from the validator pool, and the actual number of corrupted validators in that committee is counted; if this number is at least three, the committee condition is satisfied for that trial. Second, PoW compromise is sampled independently as a Bernoulli event with probability r_pow.
A trial is counted as a successful attack only if both conditions hold simultaneously. This distinction is crucial. Although p_pos_majority_math is fixed as a mathematical quantity, the committee outcome itself varies from trial to trial because the committee is re-sampled each time. In some trials, the committee may contain only two corrupted validators and fail the majority condition; in others, it may contain three or more and succeed (small worked examples are given in Appendix B and Appendix C). Similarly, the PoW outcome is re-sampled independently in each trial. The simulation therefore consists of repeatedly “rolling the dice” for both layers rather than applying a fixed probability directly. Over many trials, the law of large numbers ensures that the fraction of trials in which the committee has a corrupted majority converges to p_pos_majority_math, and the fraction of trials in which PoW is compromised converges to r_pow. Because success requires both events to occur in the same trial, the fraction of successful trials converges to the product p_pos_majority_math × r_pow, which is precisely the analytical committee-based probability p_committee_math.
A simple illustrative example helps clarify this process. Suppose that, in a small number of trials, committee sampling yields a corrupted majority in roughly half of the cases, while the PoW Bernoulli trial succeeds in roughly half of the cases. Success is observed only when both events occur together. With a small number of trials, the observed success rate may fluctuate noticeably, but as the number of trials increases to 20,000, these fluctuations diminish and the simulated probability stabilizes near the analytical value. This is why, in Table 9, p_sim_committee appears close to 0.25 when r_pos = r_pow = 0.5. Tables 8 and 9 show that the analytical probabilities closely match the Monte Carlo simulation results for both the independent and committee-based models. Across all representative configurations, the absolute error remains within the expected statistical variation, confirming the validity of the probabilistic modeling approach and the robustness of CTIB against majority attacks under the stated assumptions.
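The committee-based simulation described above can be sketched as follows; the seed and the pool encoding (1 = attacker-controlled validator) are illustrative choices.

```python
import random

def simulate_committee(N, K, n, t, r_pow, trials=20_000, seed=42):
    """Monte Carlo estimate of Eq. (14): resample the committee every trial."""
    rng = random.Random(seed)
    pool = [1] * K + [0] * (N - K)           # 1 = attacker-controlled validator
    successes = 0
    for _ in range(trials):
        committee = rng.sample(pool, n)       # sampling WITHOUT replacement
        committee_ok = sum(committee) >= t    # corrupted majority reached?
        pow_ok = rng.random() < r_pow         # Bernoulli PoW compromise
        if committee_ok and pow_ok:
            successes += 1
    return successes / trials
```

With N = 100, K = 50, n = 5, t = 3, and r_pow = 0.5, the estimate converges to about 0.25 over 20,000 trials, in line with the analytical committee model.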
Finally, we report a standard error to quantify the expected Monte Carlo variation at T = 20,000. For an estimated probability p̂ from T independent trials,
Equation (15)
SE(p̂) = √( p̂ (1 − p̂) / T )
When the absolute error between the analytical and simulated values is smaller than (or comparable to) the standard error, the simulation is statistically consistent with the analytical model at the chosen trial budget. In the example row, an absolute error of 0.0021 compared to a standard error around 0.0031 supports this consistency and strengthens confidence that both the committee math and the simulation procedure reflect the intended CTIB threat model.
In summary, p_pos_majority_math is a fixed analytical probability derived once per configuration, while p_sim_committee is obtained by repeatedly simulating committee selection and PoW compromise. The simulation does not multiply probabilities directly; it counts successes over many trials. The close agreement between analytical and simulated values confirms that the Monte Carlo procedure correctly implements the committee-based security model of CTIB.
Test 2: Test Results for the Sensitivity Analysis of the α-Constrained Effective Hash-Power Model
This subsection evaluates the sensitivity of the α-constrained effective hash-power model. The purpose of this analysis is not to benchmark runtime performance, but to examine how a stake-based allowance could reduce raw hash-power dominance under a hypothetical governance policy. The current CTIB prototype does not enforce this policy at runtime. For each miner i, the model first computes a stake-based allowance factor as follows:
Equation (16)
allowance_i = min(α × s_i/S, 1)
where s_i is the miner’s stake, S is the total stake, and α controls the strength of the stake constraint. The miner’s effective hash contribution is then computed by multiplying its raw hash power by this allowance. Finally, effective hash shares are normalized across all miners. The example in Table 10 considers an attacker with 60% raw hash power but only 10% stake. When α = 2, the attacker’s allowance is limited to min(2 × 0.10, 1) = 0.2, so the attacker’s effective contribution is reduced before normalization. As a result, the attacker’s effective PoW share becomes approximately 27.27% rather than 60%. This illustrates how the model can reduce the influence of a high-hash, low-stake attacker under the stated assumptions. The analysis also shows that α is a tunable governance parameter: lower α values create stricter stake-hash coupling and stronger resistance to raw hash-power concentration, while higher α values relax the constraint and allow raw hash power greater influence.
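The allowance and normalization steps can be reproduced in a few lines. The attacker’s values (60% raw hash, 10% stake, α = 2) follow the text; the honest-miner split below is a hypothetical configuration, chosen for illustration, that yields the ≈27.27% effective attacker share discussed for Table 10.

```python
def effective_shares(raw_hash, stake_fracs, alpha):
    """alpha-constrained effective hash power (analysis-only model, Eq. (16))."""
    allowances = [min(alpha * s, 1.0) for s in stake_fracs]  # s already = s_i / S
    effective = [h * a for h, a in zip(raw_hash, allowances)]
    total = sum(effective)
    return [e / total for e in effective]                    # normalize shares

# Miner 0 is the attacker; the honest miners' raw hash and stake are assumed.
shares = effective_shares(raw_hash=[0.60, 0.20, 0.20],
                          stake_fracs=[0.10, 0.30, 0.60],
                          alpha=2)
```

Here the attacker’s allowance is min(2 × 0.10, 1) = 0.2, so its effective contribution falls from 0.60 to 0.12 before normalization, and its normalized share drops to about 27.27%.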
Overall, the α-model highlights three main observations. First, effective mining influence can be reduced when raw hash power is not supported by sufficient stake. Second, the total effective hash capacity can be lower than the raw aggregate hash power when several miners are constrained by insufficient stake. Third, practical deployment would require runtime enforcement rules, participation incentives, and governance policies to avoid discouraging honest miners with a limited stake. Therefore, the α-model is useful for security analysis, but production use requires additional protocol design.
To further analyze the sensitivity of the α-constrained effective hash-power model, Table 11 reports the attacker’s effective PoW share under different α values using the same illustrative scenario (attacker stake = 10%, raw hash = 60%). The results show that increasing α gradually relaxes the stake constraint, allowing stronger effective influence while still limiting raw hash-power dominance.

6. Prototype Scope and Limitations

This section clarifies the scope of the implemented CTIB prototype and distinguishes between components that were implemented and evaluated, components that were analyzed conceptually, and components that remain future deployment extensions. This clarification is important because the current prototype is designed to demonstrate the feasibility of the sequential PoS → PoW CTI publication workflow under controlled conditions rather than to claim production-ready deployment. CTIB is not presented as a production-ready CTI-sharing network. Instead, it is a prototype-level framework that demonstrates the feasibility of separating CTI validation from temporal anchoring and evaluates this idea through controlled benchmarking and probabilistic security analysis. The implemented prototype uses Solidity smart contracts, a FastAPI backend, a local Hardhat Ethereum development network, deterministic validator initialization, STIX-based CTI representation, SHA-256 hashing, PoS committee approval, and PoW anchoring. The evaluation focuses on end-to-end workflow correctness, local performance behavior, and majority-attack feasibility under stated assumptions. Table 12 summarizes the implementation and evaluation scope of CTIB.

6.1. Implemented and Evaluated Components

The implemented prototype realizes the core CTIB workflow from CTI submission to final anchoring. Each CTI submission is converted into a STIX 2.1 bundle, serialized into a deterministic canonical JSON representation, and hashed using SHA-256 to produce a feed_hash. The full STIX document is stored off-chain, while integrity-critical evidence is recorded on-chain. This includes the feed_hash, the PoS approval count, the publication status, and the PoW anchoring evidence fields such as nonce, resulting hash, and number of attempts. The PoS validation layer is implemented using a fixed-size committee of five validators with an acceptance threshold of at least three approvals. The validator set is initialized deterministically to support reproducible experiments. The PoW layer is implemented as a bounded nonce-search procedure over the accepted feed_hash under configurable difficulty_bits. The resulting PoW proof is recorded as anchoring evidence. The prototype also implements a FastAPI backend that coordinates CTI submission, STIX generation, PoS validation, PoW anchoring, and smart contract interaction through Ethereum JSON-RPC and ABI-based calls. These components are evaluated through local performance benchmarking and end-to-end workflow tests.
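The canonicalize-then-hash step can be sketched as follows. The prototype's exact canonicalization rules are not specified here, so sorted-key compact JSON serialization is an illustrative assumption.

```python
import hashlib
import json

def compute_feed_hash(stix_bundle: dict) -> str:
    """Deterministic canonical JSON (sorted keys, compact separators) + SHA-256."""
    canonical = json.dumps(stix_bundle, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because keys are sorted, semantically identical bundles with different key orders map to the same feed_hash, which is what makes the on-chain evidence verifiable against the off-chain STIX document.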

6.2. Analytical Scope, Design Limitations, and Advantages of the CTIB Framework

Some CTIB components are evaluated analytically or conceptually but are not enforced as a runtime mechanism in the current prototype. The main example is the α-constrained effective hash-power model, which is introduced to examine how stake-based economic commitment could limit raw hash-power dominance in a future hybrid PoS–PoW deployment. However, the implemented prototype does not enforce α-based mining eligibility, stake-weighted mining privileges, or dynamic runtime adjustment of miner difficulty. Therefore, the α-model should be interpreted as a governance-oriented security analysis rather than implemented protocol rules. Similarly, the current prototype includes only basic reward accounting and does not implement a complete economic incentive system with transferable tokens, slashing, dynamic validator reputation, or dispute-resolution mechanisms. These mechanisms are important for production deployment but remain outside the scope of the current prototype and are treated as future extensions. Despite these limitations, the experimental results and analytical evaluations presented in Section 5 show that CTIB offers several advantages over traditional CTI-sharing platforms and single-consensus blockchain-based CTI models.
First, CTIB mitigates majority control attacks by requiring an adversary to compromise both the PoS validation layer and the PoW anchoring layer, where the simplified independent-layer model gives an end-to-end attack probability of P_CTIB = r_PoS × r_PoW, raising the effective tipping point to approximately 71.4% when both compromise ratios are equal. Second, CTIB improves resistance to censorship by delay because, after PoS acceptance, the PoW layer cryptographically and temporally anchors the accepted feed_hash, increasing the cost and observability of suppression attempts.
Third, CTIB separates content validation from publication anchoring: the PoS layer evaluates CTI quality, relevance, and credibility, while the PoW layer finalizes and timestamps the accepted decision without reevaluating content. Fourth, CTIB improves scalability through on-chain/off-chain separation, storing only integrity-critical evidence on-chain, such as the feed_hash, validation outcome, and PoW proof fields, while keeping full STIX 2.1 CTI content off-chain.
Fifth, the α-constrained effective hash-power model provides a tunable analytical mechanism for studying how economic commitment could reduce the effective mining influence of a dominant attacker. In addition, although the prototype is evaluated in a local single-machine environment, the CTIB architecture supports higher availability by separating validation from anchoring and remains compatible with distributed multi-node deployment, replication, and failover mechanisms. The hybrid design also supports probabilistic analysis of multilayer corruption, although concrete recovery actions such as rollback, governance intervention, or remediation after detected compromise remain future work.
Finally, while Sybil resistance is not tested in the prototype because validators are pre-registered for reproducibility, CTIB can integrate production-level controls such as stake requirements, identity binding, or governance-based validator admission without changing the core PoS → PoW workflow.

6.3. Discussion of Threat Coverage and Prototype Scope

Table 13 summarizes the threat assumptions considered in the CTIB evaluation and highlights how the sequential hybrid design (PoS → PoW) reduces the feasibility of dominant attack vectors compared to single-consensus designs. In single PoW systems, majority attacks and censorship by hash-power dominance pose a high risk because control over mining resources can enable reordering or suppression of accepted data. Similarly, single PoS designs can be vulnerable to validator collusion or governance capture, where a small set of validators may delay or censor intelligence publication. CTIB mitigates these risks by separating validation from temporal anchoring: PoS committees evaluate correctness and credibility, while PoW anchors accepted decisions to time through computational proof. In the implemented prototype, the PoS committee is fixed at five validators with a ≥3/5 acceptance threshold. This committee rule reduces the probability of committee capture relative to naïve single-validator acceptance, while PoW anchoring increases the observability and cost of censorship-by-delay attempts by binding the accepted feed_hash to a verifiable proof. Furthermore, the α-constrained effective hash-power model (evaluated in Section 5) demonstrates how raw hash dominance can be reduced by constraining effective mining influence based on economic commitment. Overall, CTIB does not eliminate all threats; rather, it increases the economic and operational cost of attacks and reduces feasibility relative to single-consensus CTI blockchain designs. The quantitative impact of these mitigations is evaluated via analytical modeling and Monte Carlo simulation in the preceding subsections.
Publication reordering refers to manipulating the order or timing of CTI publication without changing the content itself. Although the CTI report remains valid, delaying or reordering its publication can reduce its operational value, since timeliness is critical in threat intelligence sharing. In single-consensus blockchain designs, such as pure PoW or pure PoS systems, publication reordering is possible because the same entities that decide acceptance also control transaction ordering. In these systems, reordering can occur silently and at low cost, making censorship or selective delay difficult to detect. CTIB does not claim to completely prevent publication reordering. Instead, it constrains this behavior through its sequential PoS–PoW design. In CTIB, a CTI report is first accepted by a PoS committee, and this acceptance decision is then bound to time through PoW anchoring. Any attempt to reorder publication after acceptance would either require recomputing the PoW anchor or would create an observable inconsistency between the acceptance decision and its time anchor. As a result, publication reordering in CTIB remains theoretically possible but becomes detectable and economically costly compared to single-consensus designs.

6.4. Security Features Not Implemented in the Current Prototype

Several security mechanisms discussed in the broader CTIB architecture are not fully implemented or empirically evaluated in the current prototype. These include contributor digital signature verification, validator signature verification, encryption-at-rest, encryption-in-transit, TAXII-based transport, production-grade validator admission control, and automated reputation management. As a result, the current prototype should not be interpreted as a complete secure CTI-sharing platform. Instead, it evaluates a specific design question: whether separating PoS-based content validation from PoW-based temporal anchoring can support reproducible CTI publication and improve resistance to majority-control attacks under stated assumptions. The absence of encryption means that confidentiality guarantees are not evaluated. The absence of TAXII transport means that interoperability with operational CTI sharing infrastructures remains a future extension. Finally, the absence of production-grade admission control means that open-network Sybil resistance is not demonstrated in this work.

6.5. Experimental and Security Evaluation Challenges

The experimental and security evaluation of CTIB is subject to several important challenges. The experimental evaluations are conducted in a controlled local environment using a Hardhat Ethereum development network and a FastAPI backend running on a single virtual machine. This setup supports reproducibility and isolates the CTIB workflow from external network noise. However, it does not represent a real distributed CTI-sharing deployment. In particular, the local environment does not model geographically distributed validators, heterogeneous node resources, nonuniform network latency, unstable communication channels, peer-to-peer propagation delays, validator churn, or live adversarial traffic. Therefore, the reported throughput and latency results should be interpreted as PoC measurements rather than production-scale scalability results. The benchmark results also depend on local runtime conditions, including CPU scheduling, process concurrency, memory behavior, and the local development network. Although multi-run averaging reduces run-to-run noise, distributed experiments are required before making claims about real-world scalability or operational readiness.
From a security evaluation perspective, the quantitative analysis focuses primarily on majority control attacks because they directly affect CTIB’s core publication property. In the baseline model, attacker success requires compromise of both PoS validation and PoW anchoring, which is useful for illustrating the benefit of sequential compromise requirements. However, this model relies on simplified assumptions, including independence between PoS and PoW compromise events. The committee-based analysis improves realism by modeling PoS committee capture using the hypergeometric distribution and validating the results through Monte Carlo simulation; nevertheless, it still abstracts away several real-world factors, including adaptive adversaries, validator bribery, long-term collusion, network-level attacks, and strategic timing attacks. In addition, Sybil attacks, API DoS, reputation poisoning, metadata leakage, fork handling, and governance recovery are not quantitatively evaluated in the current prototype. Addressing these threats requires additional mechanisms such as admission control, identity binding, rate limiting, distributed deployment, encrypted communication, reputation tracking, and slashing policies.

6.6. Practical Implications and Future Deployment Requirements

Despite these limitations, the CTIB prototype provides a useful foundation for studying hybrid consensus CTI publication. The results show that the main workflow can be implemented end-to-end and evaluated through measurable performance and security metrics. The prototype also demonstrates how content validation and publication anchoring can be separated into two independently analyzable stages.
For production deployment, several extensions are required. First, CTIB should be evaluated in a distributed multi-node environment with heterogeneous validators and realistic network latency. Second, the PoS validation process should be calibrated using expert-labeled CTI datasets and evaluated using inter-validator consistency metrics. Third, digital signatures and encryption should be implemented to support contributor authentication, validator accountability, and confidentiality. Fourth, TAXII transport should be integrated to improve interoperability with operational CTI-sharing ecosystems. Fifth, production-grade admission control, Sybil resistance, reputation management, slashing, and governance recovery mechanisms should be designed and evaluated. Finally, the α-constrained effective hash-power model should be transformed from an analytical governance abstraction into an enforceable runtime policy before it can be claimed as an operational defense mechanism.

7. Comparative Discussion with Existing CTI Blockchain Approaches

CTIB differs from prior blockchain-based CTI systems in four main ways. First, it separates CTI content validation from publication anchoring, whereas many prior systems treat CTI submissions as ordinary blockchain transactions. Second, it combines standardized STIX 2.1 representation with deterministic hashing and on-chain/off-chain separation. Third, it evaluates application-level CTI publication metrics, including end-to-end throughput and p50/p95 latency rather than only reporting ledger-level or protocol-level metrics. Fourth, it provides a quantitative majority-attack analysis using analytical probability, committee capture modeling, and Monte Carlo simulation. These differences position CTIB as a prototype-level framework for studying hybrid consensus CTI publication rather than as a production-ready CTI exchange platform.
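The combination of standardized representation and deterministic hashing mentioned above can be sketched in a few lines; the object fields and canonicalization choices below are illustrative assumptions, not the exact CTIB rules:

```python
import hashlib
import json

# Hypothetical sketch: canonicalize a STIX 2.1-style object (sorted keys,
# fixed separators) and derive a SHA-256 feed hash. The field values and
# canonicalization choices are illustrative, not the exact CTIB rules.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-0000-0000-000000000000",
    "pattern": "[ipv4-addr:value = '203.0.113.7']",
    "pattern_type": "stix",
}

canonical = json.dumps(indicator, sort_keys=True, separators=(",", ":"))
feed_hash = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
print(feed_hash)  # 64 hex characters; only this digest is anchored on-chain
```

Because the serialization is deterministic, any party holding the off-chain STIX content can recompute the digest and verify it against the on-chain record.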
As summarized in Table 1 in the related work section, we surveyed most of the recent contributions at the intersection of CTI and blockchain and divided the previous works into four groups: Group 1, Blockchain Referenced Without Technical Details; Group 2, Blockchain with Proof but Without CTI Standards; Group 3, Blockchain Integrated with CTI Standards; and Group 4, Comprehensive Blockchain-Based CTI Systems. Direct quantitative comparison between CTIB and prior CTI–blockchain proposals is limited because many systems report heterogeneous performance metrics (e.g., protocol micro-benchmarks or block-level throughput) rather than application-level, end-to-end CTI submission latency. Consequently, this paper evaluates CTIB against related approaches along three factors: workflow alignment with CTI publication (content screening, integrity evidence, and timeliness), threat model coverage (majority compromise, censorship by delay, and auditability), and implementation evidence (availability of a working prototype and published end-to-end performance benchmarks). Unlike many existing works, which are primarily conceptual or rely on a single consensus mechanism, CTIB is evaluated through a working prototype with measurable performance and security properties. We also highlight two papers that focus specifically on implementation and design. The first is [50], which focuses on improving trust, privacy, and access control in CTI sharing among CSIRTs operating across jurisdictions; its authors emphasize legal and organizational barriers introduced by the EU NIS Directive and argue that traditional centralized sharing models are inadequate for sensitive CTI dissemination. The second is [51], which aims to address the security and trust limitations of centralized cyber threat intelligence (CTI) sharing systems.
The authors argue that even standards-based CTI platforms (STIX/TAXII) remain vulnerable to availability failures, content poisoning, and trust centralization. Their goal is to redesign CTI sharing as a decentralized architecture based on the Data Distribution Service (DDS) while remaining compliant with existing CTI standards. The comparison highlights CTIB’s hybrid consensus design, explicit on-chain/off-chain separation, and quantitative security evaluation as distinguishing features, while also emphasizing that certain advanced protections remain deployment-dependent extensions rather than fully implemented guarantees, as illustrated in Table 14.
CTIB remains the only system in this set that explicitly claims and evaluates majority (51%) attack mitigation and that introduces an α-constrained effective hash-power analysis as part of the security evaluation; both are absent from the compared works, which are either permissioned Fabric designs (strong on governance and access control but lacking majority-attack modeling) or a non-blockchain Data Distribution Service (DDS) system focused on real-time dissemination rather than adversarial-consensus threats. Our broader motivation is also consistent with the CTI-sharing literature: trust, privacy, standardization, and timeliness are repeatedly identified as barriers and drivers in CTI sharing, which supports why CTIB’s publication security under adversarial conditions is a meaningful differentiator.

7.1. Comparative Analysis with Existing Systems

This subsection positions the CTIB evaluation results within the broader landscape of CTI sharing and blockchain-enabled CTI systems. Unlike the related work reviewed in Section 2, which primarily focuses on architectural proposals or theoretical arguments, the comparison presented here is grounded in implemented capabilities, measured performance evidence, and quantitatively evaluated security properties (majority-attack feasibility, committee capture probability, and α-based effective hash constraints). Because the literature spans multiple research directions, direct comparisons are often not possible. Hybrid PoW/PoS systems are predominantly cryptocurrency-oriented and typically report block-level or protocol micro-benchmarks rather than application-level CTI submission latency, while CTI-focused blockchain systems frequently adopt permissioned consensus (e.g., Hyperledger Fabric) and, therefore, do not report PoW difficulty trade-offs or nonce-search overhead. For this reason, we provide a structured comparison across two complementary perspectives: survey-level coverage of integration factors and an implementation-oriented comparison of systems and reported metrics. To evaluate practical comparability, Table 15 summarizes the systems that are closest to CTIB either in hybrid PoW/PoS consensus design (primarily cryptocurrency-oriented) or CTI sharing on blockchain using structured standards (often permissioned). The table clearly distinguishes application-level performance metrics (end-to-end CTI submission throughput and latency) from protocol micro-benchmarks and block-level metrics. In CTIB, end-to-end measurements (throughput in feeds/min and latency percentiles p50/p95) correspond to the complete CTI workflow: STIX generation and hashing, PoS committee validation (5 validators, ≥3/5), PoW anchoring, and smart contract interaction.
In contrast, hybrid cryptocurrency systems typically report block-interval or verification micro-benchmarks rather than API-level CTI workflow latency. Likewise, permissioned CTI blockchains may implement STIX-based sharing but often do not report PoW difficulty trade-offs or probabilistic majority-attack evaluation.
The reported performance numbers column distinguishes between systems that provide application-level, end-to-end performance measurements, and those that report only partial or lower-level metrics. In CTIB, performance is evaluated across the complete CTI submission workflow, including STIX generation and canonicalization, PoS committee validation, PoW anchoring, and smart contract interaction. Accordingly, CTIB reports throughput in terms of CTI feeds processed per minute, as well as latency percentiles (p50 and p95), which reflect both typical and tail-end processing delays experienced in practice. By contrast, some hybrid consensus blockchain systems, such as TwinsCoin, report only protocol-level micro-benchmarks, including proof verification overhead measured in microseconds per operation and proof size. While these metrics characterize internal consensus costs, they do not correspond to application-level submission latency or end-to-end workflow performance. Similarly, systems such as Decred typically publish block-level or network-level metrics (e.g., block interval or voting latency), which are not directly comparable to the timing of CTI publication through an API-driven pipeline. CTI-focused blockchain platforms based on permissioned frameworks, such as Hyperledger Fabric, emphasize trusted sharing and standards integration but frequently omit quantitative end-to-end performance benchmarks and probabilistic majority-attack evaluations. As a result, direct numeric comparison across these systems is often infeasible. CTIB, therefore, distinguishes itself by providing reproducible, application-level performance measurements aligned with real CTI workflows, alongside quantitative security evaluation.
To the best of our knowledge, no existing CTI sharing system jointly combines:
  • A sequential hybrid PoS → PoW consensus architecture.
  • Standardized CTI representation and deterministic hashing (STIX 2.1).
  • Explicit on-chain/off-chain separation.
  • Implementation-backed evaluation reporting application-level throughput/latency metrics alongside analytical and Monte Carlo security analysis.
Therefore, while direct numeric end-to-end comparison is limited by differences in domain and reported metrics, CTIB provides a stronger implementation-driven evidence base for practical CTI workflows than prior conceptual or single-layer designs.

7.2. Performance Comparison and Interpretation

Table 16 summarizes the performance metrics and numerical values reported in related works and contrasts them with the CTIB performance results. The table intentionally includes all reported metrics, even when they are not strictly equivalent, in order to provide a transparent view of how CTIB compares to existing approaches. Because CTIB adopts a permissionless hybrid design, while most related systems rely on permissioned architectures or transport-level dissemination, a direct one-to-one comparison is not always possible. Nevertheless, presenting all values side by side helps clarify where differences originate and what each metric actually measures. The numerical comparison in Table 16 should be interpreted carefully because the evaluated systems measure different layers of the CTI-sharing stack. Data Distribution Service (DDS)-based systems report transport-level message delivery, permissioned blockchains often report ledger-level throughput or transaction latency, while CTIB reports end-to-end CTI publication latency, including STIX generation, PoS committee validation, PoW anchoring, and smart contract interaction. Therefore, CTIB is not expected to outperform transport-only systems in raw latency; its contribution lies in combining publication accountability, integrity evidence, and quantitative adversarial analysis within a working CTI publication prototype.
Accordingly, CTIB’s advantage is not raw communication speed. Its advantage is that the measured latency corresponds to a complete validated and anchored CTI publication workflow. This makes CTIB’s metrics more directly aligned with the operational experience of CTI producers and consumers than isolated ledger TPS or middleware delivery latency.
When compared with DDS-based systems and permissioned-ledger throughput benchmarks like TrustShare [53], CTIB exhibits higher latency and lower throughput in raw numerical terms. This result is expected and arises from fundamental differences in system scope and evaluation methodology rather than from inefficiencies in the CTIB design. Data Distribution Service (DDS) focuses exclusively on transport-level dissemination. It does not involve consensus, on-chain immutability, cryptographic anchoring, or adversarial validation. Consequently, its reported latency and throughput values represent pure communication performance, not secure or auditable publication. In contrast, TrustShare [53] and IJARCSE [54] primarily report ledger-level performance metrics, such as transactions per second (TPS) and block-level latency, within permissioned blockchain environments. These metrics measure the cost of committing transactions to the ledger, but they do not capture the complete CTI workflow, including content validation, committee decision-making, and anchoring procedures. Therefore, the observed performance gap reflects differences in what is being measured, not a weakness of CTIB.
Despite its broader security scope, CTIB demonstrates competitive latency characteristics when compared to blockchain-based CTI systems that report operational limits. IJARCSE [54] indicates that system latency remains below 500 ms under normal conditions and is capped below 800 ms through block timeout mechanisms. CTIB’s measured median latency (p50) of 326–403 ms and tail latency (p95) of 553–701 ms fall within a similar operational range, even though CTIB performs additional validation and anchoring steps for each submission. This result shows that CTIB achieves comparable end-to-end responsiveness while enforcing stronger publication guarantees. Beyond raw speed measurements, CTIB makes a distinct and practical performance contribution by reporting application-level CTI workflow metrics. Instead of focusing solely on ledger TPS or middleware latency, CTIB measures performance from CTI submission through validation and final publication, using metrics such as feeds per minute and p50/p95 end-to-end latency.
These metrics directly reflect the real operational experience of CTI producers and consumers and provide a more meaningful basis for evaluating deployment-ready CTI publication pipelines. In contrast, many existing studies report isolated subsystem performance, which is difficult to translate into real-world operational impact.

8. Conclusions and Future Work

This paper presented CTIB, a proof-of-concept hybrid PoS → PoW framework for secure CTI publication. CTIB separates content validation from temporal anchoring: a PoS committee evaluates CTI submissions, while PoW anchors accepted feed hashes as verifiable publication evidence. The prototype was implemented using Solidity smart contracts and a FastAPI backend and evaluated under controlled local conditions using a Hardhat development network. Across ten independent local runs, CTIB achieved an average throughput of between 141.13 and 166.14 feeds/min, an average p50 latency of between 326.18 and 403.09 ms, and an average p95 latency of between 553.22 and 700.82 ms under the tested difficulty profiles. These measurements demonstrate prototype-level feasibility, but they do not represent production-scale performance in a distributed network with heterogeneous nodes, unstable links, or adversarial traffic. Security analysis showed that, under stated assumptions, requiring compromise of both PoS validation and PoW anchoring reduces majority-control feasibility compared with single-layer consensus models. Committee-based analysis using hypergeometric modeling and Monte Carlo simulation further supports the consistency of the analytical results. However, the current prototype does not empirically evaluate Sybil resistance, API-level denial-of-service, metadata leakage, encryption workflows, digital signatures, TAXII transport, or production-grade admission control.
The first major contribution is the development and evaluation of the CTIB framework for secure CTI sharing. CTIB introduces a sequential hybrid consensus architecture that separates validation from publication anchoring. PoS committee validation ensures content quality and relevance, while PoW anchoring provides cryptographic and temporal binding of accepted intelligence. Analytical modeling and Monte Carlo simulation showed that this hybrid design raises the effective threshold for majority (51%) attacks compared to single-consensus systems. The framework also supports standardized CTI representation using STIX 2.1, enabling compatibility with existing SOC and SIEM tools.
In the CTIB prototype, validators are pre-registered to enable reproducible evaluation, and resistance to Sybil attacks is, therefore, not empirically tested. Furthermore, CTIB relies on the underlying blockchain platform for fork handling and finality guarantees, and does not introduce custom fork-resolution mechanisms. These challenges highlight important areas for further investigation.
Future work will focus on six directions. First, CTIB should be evaluated in a distributed multi-node deployment with heterogeneous validators, nonuniform latency, and unstable network conditions. Second, the PoS validation process should be calibrated using expert-labeled CTI datasets and evaluated using inter-validator consistency metrics. Third, AI-based pre-validation and credibility scoring can be integrated before committee review to reduce validator workload. Fourth, production-grade admission control, Sybil resistance, reputation management, and slashing mechanisms should be developed. Fifth, encryption, digital signature verification, and TAXII-based transport should be implemented to support secure and interoperable CTI exchange. Sixth, the α-constrained effective hash-power model should be transformed from an analytical governance abstraction into an enforceable runtime policy and evaluated under adversarial deployment scenarios.
The results of this research suggest several practical recommendations for improving secure CTI sharing. For CTIB, scalability and interoperability should be prioritized. Instead of storing large CTI payloads on-chain, decentralized storage solutions such as IPFS can be integrated for off-chain content storage while maintaining on-chain hash anchoring. Validator review processes can be improved by introducing structured evaluation criteria to reduce subjectivity and improve consistency. While the current hybrid PoS → PoW model demonstrates security benefits, alternative anchoring mechanisms (e.g., Proof-of-Authority or verifiable delay functions) may be explored to reduce resource consumption in long-term deployments. The CTIB prototype includes a basic reward accounting mechanism; future work may extend this component into a full incentive and governance model, incorporating transferable tokens, slashing conditions, and dynamic reward policies to strengthen long-term participation and economic security. Finally, large-scale evaluation under distributed, multi-node deployments remains an important research direction. Future studies should examine performance, cost, and security under realistic network conditions with high volumes of CTI submissions. Integrating CTIB with existing intelligence platforms and SIEM systems would enable assessment of operational interoperability. Governance and incentive mechanisms, including validator admission, reputation management, and response strategies following detected compromise, also warrant further investigation.

Author Contributions

Conceptualization, A.E.-K.; methodology, A.E.-K.; software, A.E.-K.; validation, A.E.-K. and H.K.A.; formal analysis, A.E.-K.; investigation, A.E.-K.; resources, A.E.-K. and H.K.A.; data curation, A.E.-K.; writing—original draft preparation, A.E.-K.; writing—review and editing, A.E.-K. and H.K.A.; visualization, A.E.-K.; supervision, H.K.A.; project administration, H.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable. This study did not involve human participants, human data, or animals.

Informed Consent Statement

Not applicable. This study did not involve human participants.

Data Availability Statement

The data presented in this study are included in the article. Additional benchmark outputs, prototype logs, and simulation artifacts are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Bernoulli vs. Generic Random 0/1 Generation

When we build probabilistic simulations, we do not generate “random 0/1” values in an arbitrary way. Instead, we use a Bernoulli trial because it gives each simulation step a clear meaning: an event either happens or does not happen, with a known probability p. This matters in CTIB because our analytical equations are written in terms of probabilities, such as r_pos and r_pow. A Bernoulli model matches that structure directly, while a generic random output does not.
A generic random function produces a number, but the number has no built-in probabilistic interpretation unless we define it. A Bernoulli trial, in contrast, represents a specific event with exactly two outcomes: success with probability p, or failure with probability 1 − p. This simple structure is exactly what we need when we model events like “PoS compromised” or “PoW compromised” in a single trial.
Practical implementation: how we sample Bernoulli in code
In practice, we implement Bernoulli sampling using a uniform random number and a threshold:
pow_ok = (rng.random() < p)
Here, rng.random() generates a uniform random number u ∈ [0, 1). If u < p, the event is recorded as success (True). Otherwise, it is failure (False). This turns a generic random generator into a precise Bernoulli sampler, where the success probability is exactly p.
Concrete numeric example (p = 0.8)
Assume p = 0.8. Consider three simulation runs:

Random draw u | u < 0.8? | Outcome
0.13          | Yes      | Success (True)
0.76          | Yes      | Success (True)
0.92          | No       | Failure (False)

In this small example, we observe two successes out of three trials, which is 66.7%. This is not equal to 80% because the sample size is very small. With more trials, the observed success rate converges toward the true probability p = 0.8. This illustrates the difference between a single small experiment and the long-run expected behavior.
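The convergence behavior described above can be checked with a short Python sketch; the seed and trial counts are arbitrary choices:

```python
import random

# Bernoulli sampling via a uniform draw and a threshold, as described above.
rng = random.Random(42)  # fixed seed for reproducibility
p = 0.8

def bernoulli(rng, p):
    """One Bernoulli trial: True with probability p, False otherwise."""
    return rng.random() < p

# With few trials the observed rate deviates from p; with many trials it
# converges toward p = 0.8 (law of large numbers).
for trials in (3, 100, 100_000):
    successes = sum(bernoulli(rng, p) for _ in range(trials))
    print(f"{trials:>7} trials -> observed rate {successes / trials:.3f}")
```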
Application in CTIB simulation models
In the CTIB independent model, we represent compromise in PoS and PoW as Bernoulli events:
pos_ok = rng.random() < r_pos   # PoS success (Bernoulli, prob = r_pos)
pow_ok = rng.random() < r_pow   # PoW success (Bernoulli, prob = r_pow)
attack_success = pos_ok and pow_ok
This directly corresponds to the analytical product model, where success requires both events to occur.
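The independent model above can be run end-to-end as a small Monte Carlo sketch; the r_pos and r_pow values below are illustrative, not measured quantities:

```python
import random

# Monte Carlo sketch of the independent two-gates model: the attack
# succeeds only if both Bernoulli events occur in the same trial.
# r_pos and r_pow are illustrative values, not measured quantities.
rng = random.Random(7)
r_pos, r_pow, trials = 0.6, 0.5, 200_000

hits = 0
for _ in range(trials):
    pos_ok = rng.random() < r_pos   # PoS compromised (Bernoulli)
    pow_ok = rng.random() < r_pow   # PoW compromised (Bernoulli)
    hits += pos_ok and pow_ok

print(hits / trials)   # simulated estimate
print(r_pos * r_pow)   # analytical product model: 0.3
```

With enough trials, the simulated frequency matches the analytical product r_pos × r_pow to within sampling error.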
In the committee model, PoS is not a Bernoulli event because PoS depends on committee sampling. We first sample the committee composition (without replacement) and then decide on PoS success deterministically from the sampled result. PoW still remains Bernoulli:
committee_corrupted_count = hypergeometric_sample(...)   # PoS via sampling
pos_ok = committee_corrupted_count >= threshold          # deterministic from sampling
pow_ok = rng.random() < r_pow                            # PoW remains Bernoulli
attack_success = pos_ok and pow_ok
This separation is important: PoS is driven by the committee rule and sampling, while PoW is still a binary success/failure event with probability r_pow.
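Committee sampling without replacement can be simulated directly with random.sample; pool size N, corrupted count K, and r_pow below are illustrative assumptions, while the 5-member committee and 3-of-5 rule follow the CTIB prototype:

```python
import random

# Monte Carlo sketch of the committee model: sample a 5-member committee
# without replacement (hypergeometric), apply the 3-of-5 rule, then gate
# on PoW compromise. N, K, and r_pow are illustrative assumptions.
rng = random.Random(0)
N, K, n, t = 20, 8, 5, 3        # pool, corrupted validators, committee, threshold
r_pow, trials = 0.5, 100_000

pool = [True] * K + [False] * (N - K)   # True = attacker-controlled validator
hits = 0
for _ in range(trials):
    committee = rng.sample(pool, n)     # sampling without replacement
    pos_ok = sum(committee) >= t        # deterministic committee rule
    pow_ok = rng.random() < r_pow       # PoW remains Bernoulli
    hits += pos_ok and pow_ok

print(f"simulated end-to-end attack probability: {hits / trials:.4f}")
```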
Why this distinction matters
Using Bernoulli sampling instead of a generic “random 0/1” choice gives four clear benefits. First, it ensures theoretical consistency because each trial has a well-defined probability p that matches our analytical equations. Second, it improves reproducibility because once p is fixed, the model behavior can be predicted and tested. Third, it enables validation because we can compare simulated frequencies against analytical probabilities. Finally, it improves interpretability: “success with probability p” has a clear and standard meaning, while “random 0/1” has no meaning unless we define the probability rule behind it. The expression rng.random() < p is not just a coding trick. It is a mathematically correct way to implement a Bernoulli trial, which bridges the simulation logic and the theoretical probability model used in CTIB.

Appendix B. Committee-Based Probability Model and Monte Carlo Validation

This Appendix explains how we estimate majority-attack feasibility under a committee-based PoS model and how we validate the analytical results using Monte Carlo simulation. We model attacker influence in two layers: r_pos is the attacker's control ratio in the PoS validator population, and r_pow is the attacker's control ratio in the PoW anchoring power. In CTIB, an end-to-end attack succeeds only if the attacker passes two conditions in the same attempt: (i) the PoS committee decision is captured (committee majority condition), and (ii) the PoW layer is compromised. For this reason, r_pow appears as a multiplicative factor when we combine the PoS and PoW conditions: PoW is the second gate that must be passed after the PoS gate.
Committee selection in PoS is performed by sampling validators from a finite pool without replacement. This is why we use the hypergeometric distribution. Let N be the total number of validators in the PoS pool, K be the number of attacker-controlled validators, n be the committee size, and X be the number of attacker-controlled validators inside the committee. In CTIB, the committee size is n = 5 , and the majority threshold is t = 3 (3-of-5). The probability that the committee contains exactly x attacker-controlled validators is:
P(X = x) = \binom{K}{x} \binom{N-K}{n-x} / \binom{N}{n}.
The probability that the attacker captures the PoS committee majority is the tail probability:
p_pos_majority_math = P(X ≥ t) = \sum_{x=t}^{n} \binom{K}{x} \binom{N-K}{n-x} / \binom{N}{n}.
Once we compute p pos _ majority _ math , we combine it with PoW compromise using the CTIB “two-gates” logic:
p_math_committee = p_pos_majority_math × r_pow.
We also estimate the same probability by Monte Carlo simulation. In each trial, we (i) sample a committee and check whether it has an attacker majority (X ≥ 3), and (ii) sample PoW compromise as a Bernoulli event with probability r_pow. A trial is counted as successful only if both conditions are true. After T trials, the simulated committee probability is:
p_sim_committee = #{trials where (committee majority) AND (PoW compromised)} / T.
To evaluate consistency between math and simulation, we compute absolute error:
Abs. Error (Comm.) = | p_math_committee − p_sim_committee |.
We also report the standard error of the Monte Carlo estimate:
SE(\hat{p}) = \sqrt{ \hat{p} (1 − \hat{p}) / T }.
A representative example from the results table is:
p_math_committee = 0.2500, p_sim_committee = 0.2521.
If the simulation counts 5042 successful trials out of T = 20,000, then:
p_sim_committee = 5042 / 20,000 = 0.2521.
The absolute error is:
| 0.2500 − 0.2521 | = 0.0021.
The standard error for \hat{p} = 0.2521 and T = 20,000 is approximately:
SE ≈ \sqrt{ 0.2521 × (1 − 0.2521) / 20,000 } ≈ 0.0031.
Since the absolute error is smaller than the standard error, the simulated result is statistically consistent with the analytical committee-based probability for this trial budget.
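This consistency check can be reproduced with a few lines of standard-library Python:

```python
import math

T = 20_000
successes = 5042
p_math = 0.2500

p_sim = successes / T                            # 0.2521
abs_error = abs(p_math - p_sim)                  # 0.0021
std_error = math.sqrt(p_sim * (1 - p_sim) / T)   # approx. 0.0031

print(f"p_sim={p_sim:.4f}, abs_error={abs_error:.4f}, SE={std_error:.4f}")
# The simulation is consistent when the absolute error is within about one SE.
assert abs_error < std_error
```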
Finally, to clarify the meaning of hypergeometric counting, consider the hand-computed example N = 10, K = 5, n = 5, and compute P(X = 3). We need \binom{5}{3}, \binom{5}{2}, and \binom{10}{5}. The combinations are:
\binom{5}{3} = 10, \binom{5}{2} = 10, \binom{10}{5} = 252.
Thus:
P(X = 3) = \binom{5}{3} \binom{5}{2} / \binom{10}{5} = (10 × 10) / 252 ≈ 0.3968.
The phrase “10 ways to choose 2 honest validators out of 5 honest validators” means that there are 10 distinct pairs that can be formed from five honest validators. This small example mirrors the same logic used at scale when N is large (e.g., N = 1000 ) and the model sums the probability of obtaining three, four, or five corrupted validators in a five-member committee.
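The binomial-coefficient arithmetic above can be checked with a short Python sketch using the standard library's math.comb (the helper name hyper_pmf is ours):

```python
from math import comb

def hyper_pmf(N: int, K: int, n: int, x: int) -> float:
    """P(X = x): x attacker-controlled validators in an n-member committee
    drawn without replacement from N validators, K of them corrupted."""
    return comb(K, x) * comb(N - K, n - x) / comb(N, n)

# Hand-computed example: N=10, K=5, n=5 -> (10 * 10) / 252
print(f"{hyper_pmf(10, 5, 5, 3):.4f}")

# Same logic at scale: tail P(X >= 3) for N=1000, K=400, n=5
tail = sum(hyper_pmf(1000, 400, 5, x) for x in range(3, 6))
print(f"{tail:.4f}")
```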

Appendix C. Illustrative Example: Independent vs. Committee-Based Attack Probability

This Appendix provides a short numeric example showing why the values 0.16 and 0.1048 can both be correct but correspond to two different security models. We use the following parameters for illustration:
r_pos = 0.4, r_pow = 0.4, N = 10, K = 4, n = 5, t = 3, and T = 3 trials
(used only to demonstrate randomness, not for accurate estimation).
In the independent model (Equation (5)), PoS compromise is treated as a single event with probability r pos , and PoW compromise is treated as a single event with probability r pow . Attack success requires both events in the same attempt, so:
P_math (Eq. (5)) = r_pos × r_pow = 0.4 × 0.4 = 0.16.
This value is correct under the simplified assumption that PoS behaves like a single Bernoulli event. However, CTIB uses a PoS committee, so the committee-based model is more realistic. Under committee selection without replacement, we compute the probability of an attacker majority (X ≥ 3) using hypergeometric terms. With N = 10, K = 4, and n = 5:
P(X = 3) = \binom{4}{3} \binom{6}{2} / \binom{10}{5} = (4 × 15) / 252 ≈ 0.2381,
P(X = 4) = \binom{4}{4} \binom{6}{1} / \binom{10}{5} = (1 × 6) / 252 ≈ 0.0238,
and P(X = 5) = 0 because only four validators are corrupted. Therefore:
p_pos_majority_math = P(X ≥ 3) ≈ 0.2381 + 0.0238 = 0.2619.
The committee-based end-to-end probability then becomes:
P_committee-math = p_pos_majority_math × r_pow = 0.2619 × 0.4 ≈ 0.1048.
This explains why the committee-based probability (0.1048) is lower than the independent-model probability (0.16): the committee majority rule makes PoS compromise harder.
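Under the stated parameters, both values can be reproduced exactly from the closed-form expressions; this is a standard-library sketch, not part of the prototype:

```python
from math import comb

r_pos = r_pow = 0.4
N, K, n, t = 10, 4, 5, 3

# Independent model (Equation (5)): one Bernoulli event per layer.
p_independent = r_pos * r_pow                       # 0.16

# Committee model: hypergeometric tail for an attacker majority.
p_majority = sum(
    comb(K, x) * comb(N - K, n - x) for x in range(t, min(n, K) + 1)
) / comb(N, n)                                      # approx. 0.2619
p_committee = p_majority * r_pow                    # approx. 0.1048

print(f"independent={p_independent:.4f}, committee={p_committee:.4f}")
```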
To illustrate simulation variability, we use only T = 3 Monte Carlo trials. In each trial, we sample a committee and set committee_ok=True only if at least three members are attacker-controlled. We also sample PoW compromise as a Bernoulli event with probability r pow = 0.4 . The attack succeeds only if both conditions are true. A possible trial outcome is: Trial 1 selects a committee with two corrupted validators (PoS fails, attack fails), Trial 2 selects a committee with three corrupted validators, but PoW fails (attack fails), and Trial 3 selects a committee with three corrupted validators, and PoW succeeds (attack succeeds). This gives:
P_committee-sim = 1/3 ≈ 0.33.
This simulated value is not a stable estimate because T = 3 is too small. With only three trials, the possible outcomes are limited to 0, 1/3, 2/3, or 1, so the true probability 0.1048 cannot be approximated accurately. As the number of trials increases (e.g., 30, 300, and then 20,000), the simulated estimate stabilizes and converges toward the analytical value, which is why the full evaluation uses 20,000 trials and reports standard error.
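The instability at T = 3 follows from the standard-error formula: for p ≈ 0.1048 the one-sigma uncertainty shrinks as 1/\sqrt{T}. A quick standard-library check:

```python
import math

p = 0.1048  # analytical committee-based probability from Appendix C
for T in (3, 30, 300, 20_000):
    se = math.sqrt(p * (1 - p) / T)
    print(f"T={T:>6}: standard error ~ {se:.4f}")
```

The standard error falls from roughly 0.18 at T = 3 to about 0.002 at T = 20,000, which is why the full evaluation adopts the larger trial budget.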

References

  1. Reittinger, T.; Grill, J.; Pernul, G. Share and benefit: Incentives for cyber threat intelligence sharing. Int. J. Inf. Secur. 2026, 25, 37. [Google Scholar] [CrossRef]
  2. Tolah, A. BlockIntelChain: A blockchain-based cyber threat intelligence sharing architecture. Sci. Rep. 2026, 16, 190. [Google Scholar] [CrossRef]
  3. Salazar, T.; Araújo, H.; Cano, A.; Abreu, P.H. A survey on group fairness in federated learning: Challenges, taxonomy of solutions and directions for future research. Artif. Intell. Rev. 2026, 59, 81. [Google Scholar] [CrossRef]
  4. Şafak, I.; Frantti, T.; Akgün, M. A Blockchain-Based Explainable Federated Learning System for the Trustworthy Collective Defense of IoT Networks in the European Union. In Cyber Security: Policy and Technology; Springer: Cham, Switzerland, 2026; pp. 359–384. [Google Scholar]
  5. Chatziamanetoglou, D.; Rantos, K. Cyber threat intelligence on blockchain: A systematic literature review. Computers 2024, 13, 60. [Google Scholar] [CrossRef]
  6. Jordan, B.; Piazza, R.; Darley, T. STIX; Version 2.1; OASIS Standard: Woburn, MA, USA, 2021. [Google Scholar]
  7. Ishfaq, M. An Open-Source SOC Architecture for Automated Detection and Threat Intelligence. Int. J. Comput. Data Sci. 2025, 1, 1–8. [Google Scholar]
  8. Adrian, A.Z.A.; Megantara, R.A.; Al Zami, F. Hybrid Multilayer Architecture Integrating Suricata, Wazuh, and Cyber Threat Intelligence for Drive-by-Download Malvertising Detection. Sink. J. Dan Penelit. Tek. Informat. 2026, 10, 161–168. [Google Scholar] [CrossRef]
  9. Chechkin, A.; Pleshakova, E.; Gataullin, S. A Hybrid Neural Network Transformer for Detecting and Classifying Destructive Content in Digital Space. Algorithms 2025, 18, 735. [Google Scholar] [CrossRef]
  10. Surve, T.; Tyagi, A.K. Balancing Blockchain Sustainability: Analyzing Consensus Mechanisms and Environmental Impact. In Blockchain Technology for Water and Environmental Systems; CRC Press: Boca Raton, FL, USA, 2026; pp. 56–72. [Google Scholar]
  11. Feng, X.; Hong, Y.; Guo, L.; Feng, G.; Chen, H. Blockchain-Based Business Model: Open Innovation Strategy for Smart Edge Data Flow. In Proceedings of the Blockchain–ICBC 2025: 8th International Conference, Held as Part of the Services Conference Federation, Hong Kong, China, 27–30 September 2025; Springer: Cham, Switzerland, 2025. [Google Scholar]
  12. Janani, K.; Udayakumar, K.; Ramamoorthy, S.; Ragu, G.; Poorvadevi, R. Blockchain with Cloud Computing. In Blockchain Technology for the Engineering and Service Sectors; Scrivener Publishing: Austin, TX, USA, 2026; pp. 133–175. [Google Scholar]
  13. Riesco, R.; Larriva-Novo, X.; Villagrá, V.A. Cybersecurity threat intelligence knowledge exchange based on blockchain: Proposal of a new incentive model based on blockchain and Smart contracts to foster the cyber threat and risk intelligence exchange of information. Telecommun. Syst. 2020, 73, 259–288. [Google Scholar] [CrossRef]
  14. Tanrıverdi, M. Implementation of Blockchain Based Distributed Web Attack Detection Application. In 1st International Informatics and Software Engineering Conference (UBMYK); IEEE: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
  15. Gadekallu, T.R. Blockchain-Based Attack Detection on Machine Learning Algorithms for IoT-Based e-Health Applications. IEEE Internet Things Mag. 2021, 4, 30–33. [Google Scholar] [CrossRef]
  16. Rathore, S. BlockSecIoTNet: Blockchain-based decentralized security architecture for IoT network. J. Netw. Comput. Appl. 2019, 143, 167–177. [Google Scholar] [CrossRef]
  17. Suhail, S.; Jurdak, R. Towards Trusted and Intelligent Cyber-Physical Systems: A Security-by-Design Approach. arXiv 2021, arXiv:2105.08886v2. [Google Scholar] [CrossRef]
  18. Banerjee, M.; Lee, J. A blockchain future for internet of things security: A position paper. Digit. Commun. Netw. 2018, 4, 149–160. [Google Scholar] [CrossRef]
  19. Homayoun, S.; Dehghantanha, A. A Blockchain-based Framework for Detecting Malicious Mobile Applications in App Stores. In Proceedings of the IEEE Canadian Conference of Electrical and Computer Engineering (CCECE), Edmonton, AB, Canada, 5–8 May 2019. [Google Scholar] [CrossRef]
  20. Aljihani, H. Standalone Behaviour-Based Attack Detection Techniques for Distributed Software Systems via Blockchain. Appl. Sci. 2021, 11, 5685. [Google Scholar] [CrossRef]
  21. Roy, D.G. A Blockchain-based Cyber Attack Detection Scheme for Decentralized Internet of Things using Software-Defined Network. Softw. Pract. Exp. 2021, 51, 1540–1556. [Google Scholar] [CrossRef]
  22. Si, H.; Sun, C. IoT information sharing security mechanism based on blockchain technology. Future Gener. Comput. Syst. 2019, 101, 1028–1040. [Google Scholar]
  23. Putz, B.; Pernul, G. Detecting Blockchain Security Threats. In Proceedings of the IEEE International Conference on Blockchain (Blockchain), Rhodes, Greece, 2–6 November 2020. [Google Scholar] [CrossRef]
  24. Falco, G.; Li, C. NeuroMesh: IoT Security Enabled by a Blockchain Powered Botnet Vaccine. In COINS ‘19: Proceedings of the International Conference on Omni-Layer Intelligent Systems; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–6. [Google Scholar] [CrossRef]
  25. Cha, J. Blockchain-Based Cyber Threat Intelligence System Architecture for Sustainable Computing. Sustainability 2020, 12, 6401. [Google Scholar] [CrossRef]
  26. Smys, S. Data Elimination on Repetition using a Blockchain based Cyber Threat Intelligence. J. Sustain. Wirel. Syst. 2021, 2, 149–154. [Google Scholar]
  27. Hajizadeh, M.; Afraz, N.; Ruffini, M. Collaborative Cyber Attack Defense in SDN Networks using Blockchain Technology. In Proceedings of the IEEE Conference on Network Softwarization (NetSoft), Ghent, Belgium, 29 June–3 July 2020. [Google Scholar] [CrossRef]
  28. Allouche, Y.; Tapas, N. Trade: Trusted anonymous data exchange: Threat sharing using blockchain technology. arXiv 2021, arXiv:2103.13158. [Google Scholar] [CrossRef]
  29. He, S.; Fu, J. BloTISRT: Blockchain-based Threat Intelligence Sharing and Rating Technology. In CIAT 2020: Proceedings of the 2020 International Conference on Cyberspace Innovation of Advanced Technologies; Association for Computing Machinery: New York, NY, USA, 2020; pp. 524–534. [Google Scholar] [CrossRef]
  30. Dunnett, K.; Pal, S.; Jadidi, Z. Challenges and Opportunities of Blockchain for Cyber Threat Intelligence Sharing. In Secure and Trusted Cyber Physical Systems: Recent Approaches and Future Directions; Springer: Cham, Switzerland, 2022; pp. 1–24. [Google Scholar] [CrossRef]
  31. Jiang, T.; Shen, G.; Guo, C.; Cui, Y.; Xie, B. BFLS: Blockchain and Federated Learning for sharing threat detection models as Cyber Threat Intelligence. Comput. Netw. 2023, 224, 109604. [Google Scholar] [CrossRef]
  32. Chatziamanetoglou, D.; Rantos, K. Blockchain-Based Cyber Threat Intelligence Sharing Using Proof-of-Quality Consensus. Adv. Cyber Threat. Intell. 2023, 2023, 3303122. [Google Scholar] [CrossRef]
  33. Dunnett, K.; Pal, S.; Jadidi, Z.; Jurdak, R. A Blockchain-Based Framework for Scalable and Trustless Delegation of Cyber Threat Intelligence. In IEEE International Conference on Blockchain and Cryptocurrency (ICBC); IEEE: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  34. Gong, S.; Lee, C. BLOCIS: Blockchain-Based Cyber Threat Intelligence Sharing Framework for Sybil-Resistance. Electronics 2020, 9, 521. [Google Scholar] [CrossRef]
  35. Wu, Y.; Qiao, Y. Towards Improved Trust in Threat Intelligence Sharing using Blockchain and Trusted Computing. In Proceedings of the 2019 Sixth International Conference on Internet of Things: Systems, Management and Security (IOTSMS), Granada, Spain, 22–25 October 2019. [Google Scholar] [CrossRef]
  36. El-Kosairy, A.; Aslan, H.; Abdelbaki, N. Transforming Cybersecurity: Leveraging Blockchain for Enhanced Threat Intelligence Sharing. Int. J. Saf. Secur. Eng. 2024, 14, 1139. [Google Scholar] [CrossRef]
  37. Verma, A.; Das, R.; Sekhawat, T. Next Generation Consensus Mechanisms: Innovations and Challenges in Distributed Systems. J. Blockchain Syst. Smart Contracts 2026, 1, 145–158. [Google Scholar]
  38. Mohammed, V.S.; Karthikeyan, M.M. Enhancing Mobile Network Security Through Blockchain Technology: A Zero Trust Approach Utilizing PoW, PoS, and BFT Algorithms. In Proceedings of the 2025 International Conference on Multi-Agent Systems for Collaborative Intelligence, Erode, India, 20–22 January 2025. [Google Scholar]
  39. Sharma, P.; Jindal, R.; Borah, M.D. Blockchain-based distributed application for multimedia system using Hyperledger Fabric. Multimed. Tools Appl. 2024, 83, 2473–2499. [Google Scholar] [CrossRef]
  40. Solat, S.; Calvez, P.; Naït-Abdesselam, F. Permissioned vs. Permissionless Blockchain: How and Why There Is Only One Right Choice. J. Softw. 2021, 16, 95–106. [Google Scholar] [CrossRef]
  41. Daranda, A.; Kankevičienė, L.; Daranda, J. Temporal Anomaly Detection and Threat Intelligence Analysis in Telegram Cybersecurity Channels. Balt. J. Mod. Comput. 2026, 14, 262–292. Available online: https://www.bjmc.lu.lv/fileadmin/user_upload/lu_portal/projekti/bjmc/Contents/14_2_01_Daranda.pdf (accessed on 12 May 2026).
  42. Siddique, M.M.; Galib, S.M.; Adnan, M.N.; Sheikh, M.N.A. DFedForest++: A Novel Privacy-Enhanced Framework for Integrating Cyber Threat Intelligence in IDS Using Federated Learning. Future Internet 2026, 18, 173. [Google Scholar] [CrossRef]
  43. Kirupanithi, D.N.; Arumugam, S.D.; Bosco, J.J. Blockchain based decentralized e-marketplace. In AIP Conference Proceedings; No. 1; AIP Publishing LLC: Melville, NY, USA, 2025; Volume 3257. [Google Scholar]
  44. El-Kosairy, A.; AbdelBaki, N. Next-Gen Cloud Security: IRDS4C’s Deception Strategy for Early Intrusion and Ransomware Detection. Int. J. Saf. Secur. Eng. 2025, 15, 873. [Google Scholar]
  45. Zhou, Q. Proof Staked Work—A Simple Hybrid PoW/PoS with Potential Stronger 51–Attack Resistant. Available online: https://ethresear.ch/t/proof-staked-work-a-simple-hybrid-pow-pos-with-potential-stronger-51-attack-resistant/4740 (accessed on 12 May 2026).
  46. Li, S.N.; Campajola, C.; Tessone, C.J. Statistical detection of selfish mining in proof-of-work blockchain systems. Sci. Rep. 2024, 14, 6251. [Google Scholar] [CrossRef] [PubMed]
  47. Rahman, M.R.; Wroblewski, B.; Matthews, Q.; Morgan, B.; Menzies, T.; Williams, L. Mining temporal attack patterns from cyberthreat intelligence reports. Knowl. Inf. Syst. 2025, 67, 8941–8981. [Google Scholar] [CrossRef]
  48. Singh, A.; Jha, A.K.; Kumar, A.N. Prediction of cryptocurrency prices through a path dependent Monte Carlo simulation. Commun. Stat. Simul. Comput. 2025, 1–20. [Google Scholar] [CrossRef]
  49. Hafid, A.; Hafid, A.; Makrakis, D. Sharding-Based Proof-of-Stake Blockchain Protocols: Key Components & Probabilistic Security Analysis. Sensors 2023, 23, 2819. [Google Scholar] [CrossRef]
  50. Homan, D.; Shiel, I.; Thorpe, C. A new network model for cyber threat intelligence sharing using blockchain technology. In 2019 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS); IEEE: New York, NY, USA, 2019. [Google Scholar]
  51. Provatas, K.; Tzannetos, I.; Vescoukis, V. Standards-based cyber threat intelligence sharing using private blockchains. In 2023 18th Conference on Computer Science and Intelligence Systems (FedCSIS); IEEE: New York, NY, USA, 2023. [Google Scholar]
  52. Gambo, M.; Khan, A.; Almulhem, A.; Almadani, B. An Efficient Framework for Automated Cyber Threat Intelligence Sharing. Electronics 2025, 14, 4045. [Google Scholar] [CrossRef]
  53. Ali, H.; Buchanan, W.; Ahmad, J.; Abubakar, M.; Khan, M.; Wadhaj, I. TrustShare: Secure and Trusted Blockchain Framework for Threat Intelligence Sharing. Future Internet 2025, 17, 289. [Google Scholar] [CrossRef]
  54. Jain, S. Cyber Threat Intelligence Sharing Using Blockchain for Critical Infrastructure. Int. J. Adv. Res. Comput. Sci. Eng. IJARCSE 2025, 1, 25–33. [Google Scholar]
  55. Imashev, A. Blockchain-enabled federated learning framework for privacy-preserving cyber threat intelligence sharing. IRE J. 2025, 9, 492. [Google Scholar]
  56. Chepurnoy, A.; Duong, T.; Fan, L.; Zhou, H.-S. Twinscoin: A cryptocurrency via proof-of-work and proof-of-stake. In Proceedings of the 2nd ACM Workshop on Blockchains, Cryptocurrencies, and Contracts, Incheon, Republic of Korea, 4 June 2018. [Google Scholar]
  57. de Almeida Martins, M. Blockchain governance: Reducing trusted third parties with Decred project. Int. J. Inf. Technol. Manag. 2025, 24, 162–189. [Google Scholar]
Figure 1. CTIB end-to-end workflow. Blue rectangles indicate processing steps, the pink diamond indicates the PoS acceptance decision, black arrows indicate workflow direction, the green arrow indicates the accepted branch, and the red dashed arrow indicates the rejected branch.
Figure 2. Prototype output example (final approved record). Colors reflect syntax highlighting in the prototype output and do not encode additional semantic meaning.
Figure 3. Prototype output example (final approved STIX). Colors reflect syntax highlighting in the prototype output and do not encode additional semantic meaning.
Table 1. Comparative classification of prior blockchain-based threat and CTI sharing approaches [36].
Group | References | Consensus Specified | STIX/TAXII/CybOX | Rewards/Incentives Mentioned | Key Strengths | Key Limitations
Group 1: Blockchain Referenced Without Technical Details | [13] | Not specified | STIX referenced | Not specified | Recognizes the relevance of blockchain for CTI and the importance of structured intelligence | Lacks a consensus definition, incentive model, and full CTI system design
 | [14,15,16,17,18,19] | Not specified | Not specified | Not specified | Highlights blockchain as a sharing medium | No proof type, no rewards, no CTI standards; limited interoperability
Group 2: Blockchain with Proof but Without CTI Standards | [20,21,22,23,24] | Specified | Not specified | Partially ([21,22]) | Clearer blockchain mechanics and consensus understanding | Absence of STIX/TAXII/CybOX limits structured intelligence exchange
Group 3: Blockchain Integrated with CTI Standards | [25,26,27,28,29,30,31,32,33] | Partially specified | STIX (some include TAXII, CybOX) | Mostly not specified | Supports structured, interoperable CTI sharing using standard formats | Incomplete blockchain incentive and consensus specifications
Group 4: Comprehensive Blockchain-Based CTI Systems | [34] | Specified | STIX/TAXII/CybOX | Specified | Smart contracts, CTI feed rating, incentive mechanisms | Increased system complexity
 | [27,30,31,35] | Specified | STIX (some include TAXII/CybOX) | Partially specified | Combines blockchain mechanics, CTI standards, and governance considerations | Some implementation details remain abstract
Table 2. Comparison between permissioned and permissionless blockchain models.
Aspect | Permissioned Blockchain (e.g., Hyperledger Fabric) | Permissionless Blockchain (e.g., Ethereum, Bitcoin)
Network Access | Restricted; participation requires prior authorization | Open; any participant can join the network
Participant Identity | Known and authenticated (organization-based identities) | Anonymous identities
Trust Model | Organizational and legal trust among participants | Cryptographic and economic trust enforced by consensus
Incentive Mechanism | Typically absent or externally managed | Financial incentives (e.g., mining or staking rewards)
51% Attack Modeling | Not formally defined or analyzed due to the absence of a quantifiable adversarial resource | Core security concept with explicit probabilistic threat modeling
Decentralization Model | Governance-driven and permissioned; decentralization is limited to authorized organizations | Open, trust-minimized decentralization with no central authority
Openness to Adversarial Participation | Closed; not designed for open or adversarial environments | Open; explicitly designed to operate under adversarial conditions
Privacy and Data Control | Strong confidentiality through access control, channels, and private data mechanisms | Limited native privacy; data is globally visible unless additional techniques are used
Primary Application Domain | Enterprise, consortium systems, and CTI sharing | Cryptocurrencies and open decentralized applications
Table 3. Threat model coverage, CTIB mitigation level, and prototype evaluation status.
Threat | CTIB Treatment | Prototype Status
Majority/51% attack | Analytical model + Monte Carlo simulation (PoS → PoW dual-layer model) | Evaluated
Validator collusion | Modeled via PoS committee capture probability (hypergeometric + simulation) | Partially evaluated
False IoC publication | Addressed via PoS validation criteria (scoring model) | Not tested
Sybil attacks | Requires validator admission, stake binding, or identity governance | Future work
Reputation poisoning | Requires a reputation system, auditing, and slashing mechanisms | Future work
API denial-of-service | Requires rate limiting, authentication, replication, and monitoring | Future work
Metadata leakage | Reduced via hash-only on-chain storage, but not eliminated | Partially addressed
Encryption workflows | Deployment-level extension (confidentiality not evaluated) | Not implemented
Digital signatures | Required for contributor/validator authentication | Not implemented
TAXII transport | Standard interoperability mechanism for CTI exchange | Not implemented
Censorship by delay | Mitigated via PoW temporal anchoring (cost + observability) | Evaluated (analytical + experimental)
Fork attacks | Depends on underlying blockchain (Ethereum/Hardhat); not CTIB-specific | Partially evaluated
Publication reordering | Detectable and costly due to PoS → PoW separation and anchoring | Evaluated (conceptual + experimental)
Resource consumption | Measured via latency and throughput benchmarking | Evaluated
Table 4. CTIB experimental phases overview.
Phase | Name | Objective | Output
0 | Design assumptions | Define threat model and scope | System assumptions
1 | Environment setup | Hardhat + FastAPI | Local testnet
2 | Contract design | CTIB smart contract | Deployed contract
3 | STIX ingestion | CTI normalization | Canonical JSON
4 | PoS validation | Committee review (3/5) | Accepted/rejected
5 | PoW anchoring | Temporal finalization | Anchored block
6 | Reward allocation | Incentives | Token balances
7 | Run-all orchestration | Reproducibility | Deterministic runs
8 | Benchmarking | Latency and throughput | Metrics
9 | α-model testing | Hash power control | Effective hash
10 | 51% evaluation | Security validation | Probabilities
Table 5. Single-run CTIB performance (N = 100 submissions).
Profile | difficulty_bits | Success Rate (%) | Throughput (Feeds/min) | p50 (ms) | p95 (ms) | Elapsed (s)
baseline | 8 | 94 | 120.02 | 382.06 | 1481.64 | 49.99
medium | 12 | 97 | 115.26 | 450.72 | 994.62 | 52.05
stress | 16 | 97 | 140.16 | 382.58 | 768.08 | 42.81
Table 6. Average CTIB performance across 10 runs.
Profile | difficulty_bits | Avg. Success Rate (%) | Avg. Throughput (Feeds/min) | Avg. p50 (ms) | Avg. p95 (ms) | Avg. Elapsed (s)
baseline | 8 | 93.6 | 162.49 | 334.28 | 700.82 | 38.58
medium | 12 | 94.4 | 166.14 | 326.18 | 553.22 | 36.70
stress | 16 | 94.8 | 141.13 | 403.09 | 660.77 | 42.57
Table 7. The results of the corruption probability equation for a 51% attack. Reprinted with permission from ref. [36]. Copyright 2024 the Authors.
PoW Corrupted Ratio (r_pow) | PoS Corrupted Ratio (r_pos) | Outcome
50% | 50% | 25%
51% | 51% | 26%
52% | 52% | 27%
70% | 70% | 49%
71% | 71% | 50%
71.5% | 71.5% | 51%
Table 8. Comparison between the simple analytical baseline (the independent Bernoulli model) and Monte Carlo simulation results for CTIB attack feasibility, including absolute error, based on 20,000 trials per case.
rPoS | rPoW | Pmath (Equation (5)) | Psim | Abs. Error
0.50 | 0.50 | 0.2500 | 0.2508 | 0.0008
0.50 | 0.70 | 0.3500 | 0.3495 | 0.0005
0.50 | 0.715 | 0.3575 | 0.3531 | 0.0044
0.50 | 0.75 | 0.3750 | 0.3748 | 0.0003
0.51 | 0.50 | 0.2550 | 0.2553 | 0.0003
0.51 | 0.51 | 0.2601 | 0.2613 | 0.0012
0.51 | 0.715 | 0.3647 | 0.3648 | 0.00015
0.52 | 0.52 | 0.2704 | 0.2719 | 0.0015
0.60 | 0.75 | 0.4500 | 0.4400 | 0.0100
0.70 | 0.75 | 0.5250 | 0.5357 | 0.0107
Table 9. Comparison between the analytical committee-based probability (hypergeometric committee model with Bernoulli PoW) and Monte Carlo results for CTIB attack feasibility, including absolute and standard errors, based on 20,000 trials per case.
rPoS | rPoW | Pcommittee-Math | Pcommittee-Sim | Abs. Error (Comm.) | Std. Error
0.50 | 0.50 | 0.2500 | 0.2521 | 0.0021 | ≈0.0031
0.50 | 0.70 | 0.3500 | 0.3482 | 0.0018 | ≈0.0034
0.50 | 0.715 | 0.3575 | 0.3632 | 0.0056 | ≈0.0034
0.50 | 0.75 | 0.3750 | 0.3730 | 0.0020 | ≈0.0034
0.51 | 0.50 | 0.2594 | 0.2601 | 0.0007 | ≈0.0031
0.51 | 0.51 | 0.2646 | 0.2646 | 0.00003 | ≈0.0031
0.51 | 0.715 | 0.3709 | 0.3672 | 0.0038 | ≈0.0034
0.52 | 0.52 | 0.2795 | 0.2780 | 0.0016 | ≈0.0032
0.60 | 0.75 | 0.5122 | 0.5157 | 0.0035 | ≈0.0035
0.70 | 0.75 | 0.6281 | 0.6332 | 0.0051 | ≈0.0035
Notes: All Monte Carlo results are based on 20,000 trials per configuration. Absolute Error = |Analytical − Simulation|. Std. Error computed as \sqrt{ \hat{p} (1 − \hat{p}) / T }.
Table 10. Effective_hash_scenario with Alpha 2.
Name | Stake | raw_hash | allowance | effective_hash | effective_share | Attacker
attacker | 10 | 60 | 0.2 | 12.0 | 0.272727272727273 | TRUE
honest_1 | 30 | 20 | 0.6 | 12.0 | 0.272727272727273 | FALSE
honest_2 | 60 | 20 | 1.0 | 20.0 | 0.454545454545455 | FALSE
Table 11. Sensitivity of attacker effective PoW share under different α values.
Alpha | Attacker Stake | Attacker Raw Hash | Attacker Effective Share | Interpretation
1 | 10% | 60% | 25.00% | Strict stake-hash coupling
2 | 10% | 60% | 27.27% | Reduced raw hash dominance
3 | 10% | 60% | 32.14% | More relaxed constraint
5 | 10% | 60% | 42.86% | Higher raw hash influence
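The Table 10 and Table 11 values are consistent with a simple rule in which each participant's PoW allowance is capped at α times its stake share and effective hash is raw hash scaled by that allowance. The sketch below infers this rule from the tabulated numbers; it is illustrative, not the prototype's runtime implementation:

```python
def attacker_effective_share(alpha: float, stakes, raw_hash) -> float:
    """Effective PoW share of participant 0 (the attacker) under the
    assumed rule: allowance = min(1, alpha * stake_share),
    effective_hash = raw_hash * allowance."""
    total_stake = sum(stakes)
    allowances = [min(1.0, alpha * s / total_stake) for s in stakes]
    effective = [r * a for r, a in zip(raw_hash, allowances)]
    return effective[0] / sum(effective)

stakes = [10, 30, 60]     # attacker, honest_1, honest_2 (Table 10)
raw_hash = [60, 20, 20]   # raw hash power (%)

for alpha in (1, 2, 3, 5):
    share = attacker_effective_share(alpha, stakes, raw_hash)
    print(f"alpha={alpha}: attacker effective share = {share:.2%}")
```

Under this assumed rule the sketch reproduces the Table 11 column exactly: 25.00%, 27.27%, 32.14%, and 42.86%.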
Table 12. Implementation and evaluation scope of the CTIB prototype.
Component | Implemented | Evaluated | Status
STIX 2.1 bundle generation | Yes | Yes | Prototype
Canonical JSON + SHA-256 feed_hash | Yes | Yes | Prototype
Solidity smart contract evidence recording | Yes | Yes | Prototype
PoS committee threshold 3/5 | Yes | Yes | Prototype
PoW nonce anchoring | Yes | Yes | Prototype
Solidity/FastAPI integration | Yes | Yes | Prototype
Local benchmarking | Yes | Yes | Controlled local evaluation
Majority-attack probability model | Yes | Yes | Analytical/simulation
α-constrained effective hash model | No runtime enforcement | Yes | Analytical governance exploration
Digital signatures | No | No | Future work
Encryption workflow | No | No | Future work
TAXII transport | No | No | Future work
Production admission control | No | No | Future work
Distributed multi-node deployment | No | No | Future work
API DoS testing | No | No | Future work
Metadata leakage evaluation | Partial | No | Future work
Table 13. Threat model coverage and mitigation in CTIB.
Threat Type | Single PoW | Single PoS | CTIB (PoS → PoW)
51% Majority Attack | High risk | Medium risk | Significantly reduced (dual-layer model)
Sybil Attack | Medium | Medium | Not evaluated; requires admission control
Censorship by Delay | High | High | Mitigated via PoW anchoring
Validator Collusion | N/A | High | Reduced via committee + PoW
Fork Attacks | Medium | Medium | Harder due to dual control
Publication Reordering | Possible | Possible | Detected and costly
Resource Consumption | High | Low | Medium (PoW anchoring only, not full mining)
Table 14. Comparison of CTIB and other integrated CTI with blockchain papers.
Security/Operational Aspect | Covered by CTIB | Table 1 [50,51] | [52] | [53] | [54] | [55]
Majority (51%) Attack Mitigation | Yes | No | N/A (non-blockchain DDS) | No (permissioned) | No
Resistance to Censorship by Delay | Yes | No | No (focus is secure real-time dissemination, not anchoring-based anti-delay) | No (no explicit anchoring/delay-deterrence mechanism stated) | No (no explicit anti-delay deterrence; focuses on Fabric performance and resilience) | No (not discussed as a publication property)
Separation of Validation and Anchoring | Yes | No | No (no PoS/PoW split; DDS workflow) | No (no sequential validation → anchoring split described) | No (no explicit two-stage validation/anchoring separation) | No (not an explicit “validate then anchor” design)
On-Chain/Off-Chain Data Separation | Yes | Partial in DB | N/A (non-blockchain) | Yes (Fabric + IPFS off-chain storage is explicit) | Yes (stores metadata hashes on-chain; artifacts off-chain) | Partial (encrypted model updates + blockchain logging; no CTI artifact split)
Scalability of CTI Storage | Yes | Limited, based on Fabric DB | N/A (paper is not a blockchain storage design; focuses on dissemination) | Yes (IPFS used specifically for off-chain storage scaling) | Yes (off-chain storage + on-chain hashes to avoid ledger bloat) | N/A (model-sharing focus; no CTI storage architecture described)
α-Constrained Effective Hash Power | Yes | No | N/A (non-blockchain) | No (not present)
Performance Benchmarking | Yes | No | Yes (prototype evaluation reports latency/throughput/success) | Partial (prototype feasibility shown; simulation/economic analysis; no end-to-end CTI latency/throughput table like CTIB) | Yes (empirical stats + simulation; throughput/latency reported) | Partial (mentions latency/overhead measured, but no blockchain CTI publication benchmarks)
High Availability (HA) | Yes | Yes | Yes (DDS decentralization + continuous availability claims for Connext) | No (no explicit HA architecture described as a design feature) | Yes (Raft orderer cluster explicitly used for HA) | No (HA not explicitly described)
Multilayer Corruption Detection | Yes | No | No (threat model covers unauthorized pub/sub + tampering/replay, but not “multilayer corruption detection” as a named mechanism) | No (not described as multilayer corruption detection) | Partial (anomaly detection + reputation to detect poisoned updates)
Sybil Attack Resistance | Architectural support only | Partial | Partial (permissioned identities via certificates/permissions; not open-network Sybil modeling) | Partial
Fork Attack Protection | Partial | Not described | N/A (non-blockchain) | No (not described)
Double Spending Protection | N/A (not a cryptocurrency spending model)
Resource Consumption | Yes | Partial (mentions overhead increases “moderately” due to encryption/blockchain)
Table 15. Implementation-oriented comparison with related systems.
| System/Paper | Domain | CTI Scope | Consensus Model | CTI Standards | Implementation Evidence | Performance Evaluation | Security Evaluation |
|---|---|---|---|---|---|---|---|
| CTIB | CTI sharing | Full CTI | Hybrid PoS → PoW (sequential): PoS committee (5 members, ≥3 approvals); PoW anchoring only | STIX 2.1 | End-to-end prototype (Solidity + Hardhat + FastAPI) | Yes (measured): 119–154 feeds/min; p50: 376–482 ms; p95: 478–708 ms | Quantitative: analytical model (Equation (5)), Monte Carlo (20 k trials), committee capture + α-model |
| TwinsCoin [56] | Cryptocurrency | × | Hybrid PoW + PoS | × | Research implementation | Protocol micro-benchmarks only (~20–25 μs/op; proof ≈ 960 bytes) | Formal security reasoning |
| Decred [57] | Cryptocurrency/governance | × | Hybrid PoW + PoS | × | Production network | Block-level metrics only | General majority-resistance rationale |
| Homan et al. [50] | CTI sharing | CTI/alert | Permissioned (Fabric) | STIX 2.0 | Testbed prototype | No throughput/latency reported | None (policy-based trust) |
| Provatas et al. [51] | CTI sharing | Full CTI | Permissioned (Fabric) | STIX + TAXII | Prototype/conceptual | No numeric benchmarks | None (no adversarial model) |
| Gambo et al. [52] | CTI sharing (real-time dissemination) | CTI sharing (automated) | × (DDS pub/sub; non-blockchain) | STIX | Prototype framework | Yes (latency/throughput reported) | DDS security mechanisms (certificates/permissions/encryption discussed) |
| TrustShare [53] | CTI sharing (CSIRT/organizational sharing) | CTI workflows for sharing | Permissioned | STIX + TAXII | Prototype (Fabric + IPFS + CP-ABE; benchmarked with Caliper) | Yes (Caliper benchmarking; latency/throughput) | CP-ABE access control + privacy/GDPR framing |
| IJARCSE [54] | CTI sharing (critical infrastructure focus) | CTI sharing with STIX/TAXII framing | Permissioned | STIX + TAXII | Prototype/testbed + simulation | Yes (latency/throughput discussed) | DDoS/resilience discussion + simulation evidence |
| SSRN [55] | Learning-based CTI sharing | Model/learning updates (not a CTI publication pipeline) | Blockchain + smart contracts | STIX | Experimental framework (FL + blockchain logging) | ML evaluation (accuracy/robustness style), not CTI-submission latency | Poisoning/adversarial robustness focus + reputation/incentives |
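The CTI standards column above reflects the paper's design choice of representing content as STIX 2.1, canonicalizing it, and anchoring only a SHA-256 digest on-chain. The sketch below illustrates that idea; the exact canonicalization used by CTIB is not specified here, so the sorted-keys, compact-separator JSON form is an assumption, and the bundle contents are illustrative.

```python
import hashlib
import json

def anchor_hash(stix_bundle: dict) -> str:
    """Digest a STIX 2.1 bundle for on-chain anchoring.

    Canonicalization here is a simple sketch (sorted keys, compact
    separators); CTIB's exact canonical form is an assumption. Only this
    digest would be stored on-chain; the full bundle stays off-chain.
    """
    canonical = json.dumps(stix_bundle, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative STIX 2.1 bundle (identifiers are hypothetical).
bundle = {
    "type": "bundle",
    "id": "bundle--44af6c39-c09b-49c5-9de2-394224b04982",
    "objects": [{
        "type": "indicator",
        "spec_version": "2.1",
        "pattern": "[ipv4-addr:value = '198.51.100.1']",
        "pattern_type": "stix",
    }],
}
digest = anchor_hash(bundle)
print(digest)  # 64-hex-character SHA-256 digest
# Key order must not change the digest, or verification would break:
assert anchor_hash(json.loads(json.dumps(bundle))) == digest
```

Deterministic serialization is the crucial property: any verifier who re-canonicalizes the off-chain bundle must reproduce the anchored digest bit-for-bit.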
Table 16. Numerical performance comparison.
| Paper | Metric | Their Value | CTIB Value (Exact) | Interpretation |
|---|---|---|---|---|
| Gambo et al. [52] | Latency (ms) | 0.822–0.986 ms @ 50 msg/s | p50 = 326.18–403.09 ms | Not comparable: DDS measures message delivery only, while CTIB includes validation and anchoring |
| | Throughput | 50 msg/s | 141.13–166.14 feeds/min | Not comparable: DDS is transport-only; CTIB measures the full publication workflow |
| TrustShare [53] | Latency (ms) | 75 ms (Kubernetes cluster) | p50 = 326.18–403.09 ms | Not workload-equivalent (ledger transaction vs. end-to-end CTI publication) |
| | Throughput (TPS) | 500 TPS | 141.13–166.14 feeds/min | TPS reflects ledger commits, not CTI validation + anchoring |
| | Latency under load | 70–175 ms | p50 = 326.18–403.09 ms | Metrics capture different pipeline stages |
| IJARCSE [54] | Throughput plateau | ~192 tx/s | 141.13–166.14 feeds/min | Transaction rate is not equivalent to the full CTI submission workflow |
| | Typical latency | <500 ms up to 200 tx/s | p50 = 326.18–403.09 ms | Comparable: CTIB p50 falls within the reported operational range |
| | Maximum latency cap | <800 ms (2 s block timeout) | p95 = 553.22–700.82 ms | Comparable tail behavior |
| | Latency under attack | ~900 ms | p95 = 553.22–700.82 ms | CTIB shows better tail latency in the controlled local setup (≈1.3×–1.6× lower) |
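The p50/p95 figures compared above are latency percentiles over measured end-to-end publication times. The sketch below shows one common way such percentiles are computed (nearest-rank convention); the percentile convention and the synthetic lognormal latency distribution are assumptions for illustration, not CTIB measurements.

```python
import random

# Synthetic end-to-end publication latencies in ms, for illustration only.
# A lognormal with median near 365 ms roughly mimics the reported p50 range.
rng = random.Random(7)
latencies_ms = [rng.lognormvariate(5.9, 0.25) for _ in range(1000)]

def percentile(samples, q):
    """Nearest-rank percentile: the smallest sample such that at least
    q% of all samples are <= it (one plausible convention)."""
    ordered = sorted(samples)
    idx = max(0, min(len(ordered) - 1, round(q / 100 * len(ordered)) - 1))
    return ordered[idx]

p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
print(f"p50 = {p50:.2f} ms, p95 = {p95:.2f} ms")
```

Because PoW anchoring time is heavy-tailed (geometric number of hash attempts), p95 is the more informative figure for worst-case publication delay, which is why the tail comparison in the last row matters.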
El-Kosairy, A.; Aslan, H.K. A Hybrid PoS–PoW Blockchain Framework for Secure Cyber Threat Intelligence Sharing: Design, Implementation, and Evaluation. Big Data Cogn. Comput. 2026, 10, 158. https://doi.org/10.3390/bdcc10050158
