1. Introduction
Over the last few years, digital systems have been introduced into almost every sector, and while this has made things easier in many ways, it has also opened the door for more advanced cyberattacks. With the world moving faster toward cloud services and large digital infrastructures, the need for stronger cybersecurity tools is becoming hard to ignore. The problem is that many of the older cybersecurity frameworks were not really designed for the speed and complexity of today’s threats, especially when it comes to sharing Cyber Threat Intelligence (CTI) or protecting cloud environments. Most of these traditional systems depend on fixed rules, and because of that, they often fail when facing things like zero-day attacks or long-running, stealthy APT campaigns.
Blockchain technology, with its distributed and tamper-resistant nature, seems to offer a way around some of these limitations. In this work, I focus on the issues of trust, privacy, and the reliability of shared threat information, and I suggest an approach that mixes blockchain ideas with CTI sharing practices. As digital transformation continues, organizations rely more on interconnected platforms and cloud-based resources, which provide scalability and flexibility, but at the same time bring new security concerns. More sophisticated attacks, including ransomware and APT operations, have begun to outpace the response capabilities of traditional defensive tools, exposing weaknesses in systems that were once thought to be well-protected.
Table 1 gives a short overview of the impact that different ransomware attacks have had in recent years.
Cloud computing has changed the way digital systems are built and managed, giving organizations a level of flexibility and scalability that was not really possible before. But with all these advantages comes a set of complex security issues, particularly because of the shared responsibility model. This model assumes that both the cloud provider and the customer will take care of different parts of the security, and in practice this often creates confusion. Sometimes it is not clear who is responsible for what, and that gap leads to weak spots and inconsistent security practices.
Sharing Cyber Threat Intelligence (CTI) is supposed to assist with early detection and better response, but it is not as simple as it sounds. One of the big problems is trust. Many entities hesitate to share sensitive threat data because they are worried about confidentiality leaks or even the possibility of reputational harm if something is exposed. Privacy laws like GDPR make this even harder, since they introduce rigorous rules on what data can be exchanged and how it must be handled. Another issue is that different CTI platforms do not always speak the same “language”. Each one uses its own format and communication style, which means systems do not integrate smoothly, and teams end up with data that is less useful than it should be. All of these issues show why we urgently need modern CTI sharing methods that deal directly with the questions of trust, privacy, and standardization rather than avoiding them. Solving these problems would allow organizations to work together more effectively and build stronger overall resilience against cyber threats. As shown in
Table 2 and noted in earlier studies [
4], CTI sharing still suffers from trust gaps, privacy restrictions, regulatory challenges, and difficulties in connecting different systems with each other.
Blockchain technology has recently appeared as one of the stronger options for dealing with many of the cybersecurity problems we see today, especially in cloud-based setups. The idea behind it is not complicated; instead of keeping data in one place, the blockchain spreads it out across many nodes and stores it in a way that cannot easily be changed. This makes it easier for different parties to trust the information they are sharing, because everyone knows it has not been quietly altered somewhere along the way.
In this work, I focus on two main frameworks. The first one, IRDS4C, tries to improve early detection of threats by relying on deception tricks like honeypots, honeytokens, and a set of decoy services that ransomware or intruders tend to poke at. The second one, CTIB, is a blockchain-based platform for sharing cyber threat intelligence, where the goal is to make the exchange of information secure and tamper-resistant. CTIB uses a hybrid consensus model mixing Proof of Work with Proof of Stake to keep the shared data stable and protected. Putting both frameworks together aims to build a cybersecurity environment that is more dependable, more scalable, and more practical for real cloud systems.
For CTIB, the main goal was to see whether it could safely handle a large flow of threat information from different sources. The tests showed that it managed high loads without dropping performance, which suggests it can scale well. Because of the hybrid consensus model, the stored data stays intact; an attacker would need to take control of more than 71% of the entire network to break the system, which is not realistic in practice. Using CTIB also helped participants trust the shared intelligence more, lowered redundant alerts, and allowed quicker distribution of new threat information. This, in turn, helped security teams respond more quickly and with better coordination.
Altogether, this paper introduces two practical frameworks, IRDS4C and CTIB, that support intrusion and ransomware detection, as well as secure sharing of cyber threat intelligence. By combining deception methods with blockchain technology, the approach aims to strengthen both detection and prevention of cyber threats in cloud environments. The work includes not only an architectural description, but also an executable CTIB prototype and controlled experiments that quantify end-to-end CTI publication latency and throughput. The idea here is to bridge traditional security tools with newer ones, while paying attention to things like scalability, reliability, and cooperation among different security groups.
The rest of the paper is arranged as follows:
Section 2 goes over background information and previous studies related to CTI and cloud security, and explains the goals of this research—mainly improving detection, securing intelligence sharing, and helping systems work together more smoothly.
Section 3 goes through the methodology, architecture, and technical design of IRDS4C and CTIB.
Section 4 shows how the two frameworks are connected and how they operate together.
Section 5 includes a real-world case study demonstrating the integration at the cloud operating system layer.
Section 6 closes the paper with a summary of the main contributions, notes the current limitations, and points to possible directions for future work. Throughout this paper, the term “CTI” refers to cyber threat intelligence content, while “CTIB” denotes the blockchain-based platform used to store, validate, and disseminate CTI. The term “PoS committee” consistently refers to the fixed set of validators responsible for content-level validation prior to PoW anchoring.
2. Background and Related Work
In this section, we review the background and prior research related to CTI, blockchain technology, and cloud security. We examine existing methods, identify their limitations, and analyze recent efforts to integrate blockchain into cybersecurity systems. Additionally, we explore various threat detection techniques, with a focus on their shortcomings in addressing ransomware attacks. Finally, we discuss how blockchain can be leveraged to address key challenges related to trust and secure information sharing in CTI frameworks.
2.1. CTI Feeds
Access to high-quality threat intelligence is essential for effective cybersecurity. CTI provides detailed insights into potential adversaries, including their capabilities, attack vectors, and behavioral patterns. CTI can be collected internally by organizations or sourced from third party providers. Commercial CTI feeds often offer proprietary, in-depth information curated by dedicated research teams, while open source CTI relies on community contributions that may offer diverse and innovative perspectives. The process of threat intelligence collection typically involves defining security objectives, gathering threat-related data, analyzing the information, and producing actionable reports. Security analysts then format this intelligence for integration into security tools such as firewalls, intrusion detection/prevention systems (IDS/IPS), and endpoint protection platforms. These feeds normally contain indicators of compromise (IoCs) such as IP addresses, malware hashes, and domain names [
5]. Standardization in CTI sharing is pivotal for enabling rapid, interoperable, and automated information exchange across organizations. To address this need, several standardized formats have been developed. For example, the Structured Threat Information Expression (STIX), developed by the Organization for the Advancement of Structured Information Standards (OASIS), provides a consistent schema for describing threat information such as adversary tactics and indicators of compromise. Trusted Automated eXchange of Intelligence Information (TAXII) complements STIX by offering secure transport mechanisms for CTI over the Internet. Another standard, the Incident Object Description Exchange Format (IODEF), was created by the Internet Engineering Task Force (IETF) to support the structured documentation and analysis of security incidents, though it is now less widely adopted. These standards enhance threat intelligence sharing by streamlining communication, facilitating collaboration among security teams, and supporting the automation of threat detection. Organizations such as MITRE and IETF have played key roles in the development of these standards. However, some vendors continue to use proprietary formats, such as OpenIOC by Mandiant, which may lack interoperability with standardized systems. Additional tools such as APIs and data format converters can further improve responsiveness by simplifying the adoption of CTI standards [
5].
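To make the STIX format discussed above concrete, the following sketch hand-builds a minimal STIX 2.1-style indicator object in Python. The field names follow the STIX 2.1 specification, but the indicator name and IP address are hypothetical, and a production system would typically generate such objects with the official `stix2` library rather than by hand.

```python
import json
import uuid
from datetime import datetime, timezone

# Timestamps in STIX use RFC 3339 format with millisecond precision.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# A minimal STIX 2.1 Indicator object (hypothetical IoC, documentation IP).
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Known C2 IP address (illustrative)",
    "pattern": "[ipv4-addr:value = '203.0.113.5']",
    "pattern_type": "stix",
    "valid_from": now,
}

print(json.dumps(indicator, indent=2))
```

Because the object is plain JSON, it can be exchanged over TAXII or any other transport without modification.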
Despite these advancements, current CTI systems face several serious limitations. One major concern is the lack of mechanisms to ensure privacy and confidentiality. Organizations often hesitate to share details of security breaches due to concerns over reputational damage and legal liability. Additionally, trust remains an important challenge. The accuracy, completeness, and credibility of shared intelligence, especially from open source platforms, are frequently questioned due to inconsistent quality assurance practices. As a result, many organizations prefer commercial CTI platforms from established vendors such as Microsoft or Kaspersky. While these solutions may offer more dependable data, they also introduce centralization risks, notably the Single Point of Failure and Corruption (SPoFaC). A notable example of this vulnerability is the SolarWinds attack, in which adversaries successfully embedded malicious dynamic-link libraries (DLLs) into legitimate software updates. Due to the high level of trust in SolarWinds, customers deployed these compromised updates without additional verification, leading to widespread compromise. Similar risks and limitations have been documented in [
5].
2.2. Blockchain Technology
Blockchain technology has been proposed as a promising solution to address many of the limitations inherent in traditional CTI systems [
6]. It utilizes a distributed ledger, where data is securely stored across multiple nodes in the network [
7], making unauthorized or covert alterations virtually infeasible [
8]. This ensures the integrity and trustworthiness of the recorded information. Blockchain relies on cryptographic techniques and consensus algorithms such as PoW, PoS, and Proof of Authority (PoA) to validate and append data securely [
9]. Each consensus mechanism has a unique approach to ensuring network security [
10]. PoW requires participants to solve complex computational puzzles, ensuring data is only added by nodes that expend considerable effort. PoS selects validators based on the amount of cryptocurrency they hold and are willing to “stake” as collateral [
11]. PoA assigns validation authority to a small set of trusted, reputation-based nodes. While blockchain offers transparency, decentralization, and tamper resistance, it also introduces challenges, such as the complexity of managing decentralized systems, scalability limitations, and high computational demands [
12].
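The tamper-evidence property described above can be illustrated with a minimal hash-linked ledger sketch in Python. This is illustrative only: real blockchains add timestamps, digital signatures, and a consensus protocol on top of the hash chaining shown here, and the IoC strings are hypothetical.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical (sorted-key) JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

def chain_is_valid(chain: list[dict]) -> bool:
    """Every block must reference the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = make_block("genesis", "0" * 64)
b1 = make_block("ioc: 203.0.113.5", block_hash(genesis))
b2 = make_block("ioc: bad.example", block_hash(b1))
chain = [genesis, b1, b2]
assert chain_is_valid(chain)

b1["data"] = "ioc: 203.0.113.99"   # a covert alteration by one node...
assert not chain_is_valid(chain)   # ...breaks every later link and is detectable
```

Because each block embeds the hash of its predecessor, quietly editing any historical record invalidates all subsequent links, which is the property that lets sharing parties trust the ledger's history.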
To maintain network security, participants are incentivized to remain active contributors through mechanisms such as token rewards. This incentive structure supports consistent engagement without relying on centralized trust. Furthermore, open source blockchain platforms eliminate centralized governance, fostering greater transparency and trust in CTI sharing. Despite these benefits, blockchain adoption in CTI systems is not without its challenges. A practical integration model involves employing a PoW-based blockchain to secure CTI exchange. In this system, participants must solve cryptographic challenges before they can submit new threat data. This step makes sure that only trusted and committed users can share information. This approach helps maintain data accuracy, protects confidentiality, and prevents tampering, which encourages more users to participate by reducing worries about reputation and privacy. However, some pivotal limitations remain. One major risk is called a “51% attack”, where a single person or group controls most of the network’s computing power. This could allow them to manipulate data. Another risk is “double spending”, where someone might try to submit the same threat information more than once or change previously shared information. Additionally, although blockchain securely stores data and prevents changes, it does not automatically confirm that the shared information is correct or high quality. As of now, there is no standard way to check whether the cyber threat intelligence shared through blockchain is authentic or relevant. This is an important gap and needs further research and improvement.
2.3. CTI and Blockchain Integration
In this section, we explore the integration of CTI with blockchain technology, with a specific focus on consensus mechanisms. Blockchain offers a decentralized, secure, and transparent platform, which is particularly beneficial for sharing sensitive CTI data. In [
5], various dimensions of this integration were examined, including structured threat intelligence standards, participant incentive schemes, and the types of consensus algorithms employed. Integrating blockchain into CTI systems addresses long-standing challenges such as data authenticity, traceability, and inter-organizational trust. For instance, blockchain-based CTI platforms frequently employ smart contracts to automate the validation and distribution of threat intelligence. This automation enhances the reliability and timeliness of CTI sharing, which is essential for rapid and effective cybersecurity responses. Consensus mechanisms such as PoW, PoS, and Practical Byzantine Fault Tolerance (PBFT) play a critical role in determining the security, performance, and scalability of blockchain systems. PoW provides robust decentralization and security guarantees, albeit at the cost of high energy consumption and slower transaction throughput. PoS improves energy efficiency and verification speed but may raise concerns about fairness and centralization, favoring participants with large token holdings. PBFT, while highly effective in small networks, can be less scalable due to communication overhead in large settings. Several recent initiatives have proposed practical approaches to enhance CTI sharing through blockchain-based architectures. These models aim to mitigate persistent issues related to trust, data integrity, privacy, and participant incentivization. The Hyperledger CTI Hub [
13] leverages Hyperledger Fabric with Raft or Solo as its consensus mechanism. The system enables selected partners to exchange CTI through private communication channels. It supports the STIX format, uses off-chain storage for large files, and requires that data be reviewed and digitally signed by trusted entities to ensure integrity and authenticity. The Proof-of-Quality (PoQ) model [
14] emphasizes the evaluation of shared intelligence quality.
A group of validators assigns scores based on usefulness, reliability, and timeliness. Only CTI records meeting predefined thresholds are published on-chain. All validation decisions are immutably logged, promoting transparency and accountability. The Incentivized CTI Model [
15] integrates Ethereum smart contracts and a PoS consensus to incentivize high-quality data contributions. Contributors receive token-based rewards for valuable submissions, while inaccurate or misleading inputs lead to a loss of reputation, thereby limiting future participation. The Blockchain-based Cyber Threat Intelligence Sharing Model (B-CTISM) prioritizes confidentiality and controlled data dissemination. It incorporates the Traffic Light Protocol (TLP) [
16] to define sharing boundaries. Access control is enforced through smart contracts based on TLP labels. The model employs cryptographic hashes and Merkle trees to ensure data integrity and efficient storage. Additionally, it allows real-time updates to user roles and access permissions, aiding compliance with dynamic sharing policies. Collectively, these models demonstrate the potential of blockchain to support secure, equitable, and trustworthy CTI sharing ecosystems. By combining technical enablers (e.g., consensus algorithms, smart contracts) with pragmatic governance features (e.g., quality scoring, token incentives, and access controls), these frameworks offer viable solutions to many of the limitations observed in conventional CTI systems.
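As a sketch of how Merkle trees support the integrity guarantees mentioned above, the following minimal Python example computes a Merkle root over a set of hypothetical CTI records. Any change to any record changes the root, revealing tampering; the duplicate-last-node padding used here is one common convention, not necessarily the one B-CTISM itself uses.

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pairwise-hash leaf hashes upward until a single root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # odd count: duplicate the last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical CTI records (illustrative identifiers only).
records = [b"ioc:203.0.113.5", b"hash:44d88612c0aa", b"domain:bad.example"]
root = merkle_root(records)

# Altering any single record produces a different root.
tampered = [b"ioc:198.51.100.9"] + records[1:]
assert merkle_root(records) == root
assert merkle_root(tampered) != root
```

Storing only the 32-byte root on-chain lets participants verify large off-chain CTI bundles cheaply, which is why Merkle commitments pair naturally with the off-chain storage approaches mentioned earlier.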
2.4. Hybrid Consensus Techniques with CTI
Consensus mechanisms are essential components in blockchain systems, ensuring the security, reliability, and trustworthiness of transactions. Traditionally, blockchain networks adopt a single consensus protocol, such as PoW or PoS. PoW requires miners to solve computationally intensive mathematical puzzles to validate transactions and add new blocks. This approach provides strong security guarantees, but at the cost of high energy consumption and long transaction confirmation times. PoS, by contrast, relies on validators who are selected based on their ownership (or “stake”) in the network’s native cryptocurrency. This method is generally more energy-efficient, cost-effective, and faster, though it may introduce risks of centralization, as wealthier participants may gain disproportionate influence. Hybrid consensus mechanisms, which combine two or more consensus protocols, remain comparatively rare in blockchain-based CTI systems [
17]. A practical implementation of hybrid consensus is demonstrated by the cryptocurrency Decred, which integrates PoW and PoS to establish a balanced and fair reward distribution system, enable shared governance between miners and stakeholders, and strengthen resilience against double spending and 51% attacks. Hybrid consensus mechanisms have been investigated by researchers as effective solutions to common blockchain limitations. By combining PoW and PoS, a network can assign network security tasks (e.g., block generation) to miners, delegate transaction validation and governance responsibilities to stakeholders, enhance security, distribute rewards more equitably, and increase efficiency by separating concerns across roles.
Such hybrid models are particularly promising in sensitive and high-stakes domains like CTI, where data integrity, trust, and system responsiveness are critical. Other solutions have also employed hybrid or hybrid-like models for related use cases. Decred [
18] uses a PoW/PoS hybrid to promote inclusive governance, secure validation, and resilience to majority attacks. In a CTI context, this model can be adapted by distinguishing roles: validators (PoS participants) assess the quality of threat reports, while miners (PoW participants) ensure ledger integrity and block finalization. Multi-layer blockchain architectures extend this concept further. In such systems, fast-finality layers (e.g., PoS-based consensus) interact with base layers (e.g., PoW), enabling a balance between throughput and consensus assurance. NEO, often dubbed the “Ethereum of China” [
19], employs a delegated Byzantine Fault Tolerance (dBFT) consensus mechanism, which combines elements of PoS and Byzantine consensus. Although not a strict PoW/PoS hybrid, dBFT enables trusted nodes, elected by token holders, to reach consensus rapidly with fault tolerance. This architecture is particularly suited for CTI environments where rapid agreement, minimal forking, and consistent data state are essential.
2.5. Cloud Security and Current Detection Solutions
The growing adoption of cloud technologies has resulted in an increase in cyberattacks targeting cloud-based platforms. Traditional security solutions such as IDS, IPS, and Web Application Firewall (WAF) often prove inadequate in detecting advanced and stealthy threats. While these tools provide a foundational layer of defense, additional technologies are required to act as a last line of defense, particularly for detecting intrusions and ransomware attacks. A key limitation of current detection mechanisms is their reliance on signature-based systems, which are ineffective against novel or obfuscated threats. Even newer methods, such as behavioral analysis (BA), machine learning (ML), and artificial intelligence (AI), often create false alerts. Their ability to accurately detect threats is still unreliable [
20]. To address these challenges, researchers have explored alternative approaches, such as honeypots and deception-based techniques. Honeypots simulate vulnerable systems or services to lure attackers, enabling security teams to observe and analyze attacker behavior in real time [
20]. These systems are particularly effective in detecting previously unknown threats and ransomware. However, sophisticated attackers may detect and avoid interacting with deception mechanisms, which limits their effectiveness. Recent work suggests that combining honeypots with ML and Cyber Threat Intelligence (CTI) can substantially enhance the detection of zero-day exploits and emerging ransomware variants [
21]. Ransomware has evolved into a significant threat, often encrypting files in ways that render recovery impossible without decryption keys. Notable examples such as REvil and NotPetya have exposed vulnerabilities in cloud infrastructure, particularly platforms with synchronization features like Google Drive and OneDrive [
22,
23].
Research has shown that deploying decoy or honey files in cloud environments can facilitate early ransomware detection by alerting systems to unauthorized access attempts [
24]. Nevertheless, signature-based and heuristic analysis techniques frequently fail to detect new malware variants or polymorphic threats, emphasizing the need for advanced and adaptive detection methods [
25,
26,
27,
28]. The sharing of CTI between organizations remains fraught with challenges, including data accuracy, privacy regulations, legal liability, source credibility, and the risk of reputational damage [
29,
30,
31]. Although cloud service providers integrate Security Information and Event Management (SIEM) tools to monitor and manage threats, the interoperability of different tools, each with unique APIs and integration requirements, adds significant complexity. Moreover, concerns around privacy and negative publicity hinder the open sharing of detailed threat intelligence [
TechMonitor [
22] reported that a ransomware attack rendered 400 dental clinics inoperable. Similarly, NotPetya, which originated in Ukraine in 2017, quickly spread globally by exploiting operating system and network vulnerabilities, without relying on traditional social engineering techniques [
23,
33]. Several studies have proposed the use of honeytokens and honey files for ransomware detection. The authors in [
24,
25] demonstrated how decoy files could reveal malicious activities. Similarly, [
26] showed that fake resources could detect attempts to encrypt or exfiltrate sensitive data [
27,
28]. Static and dynamic analysis are two advanced techniques employed to confront these challenges. Static analysis involves manually reviewing suspicious code without executing it, offering detailed insights but requiring substantial human effort, often overwhelming analysts and increasing the risk of overlooked threats. Dynamic analysis, in contrast, executes the code in a controlled environment and often integrates ML/AI to accelerate detection. However, this method tends to produce frequent false positives and lacks the contextual understanding provided by human experts [
34,
35]. Tools like YARA rules are widely used by analysts to identify malware through pattern matching, but they require frequent updates, and obfuscation techniques can enable attackers to evade detection [
36,
37]. Moreover, creating robust YARA rules is a complex and error-prone task, making them insufficient as standalone detection tools. In summary, while advanced detection mechanisms such as real-time monitoring, deception-based strategies, and machine learning models offer significant improvements over traditional methods, they also introduce challenges. These include resource intensity, deployment complexity, and limited effectiveness against previously unseen or zero-day threats [
38].
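The evasion problem discussed above can be illustrated with a deliberately naive Python sketch of static signature matching. The byte signature and malware family name are hypothetical, and real YARA rules are far richer than a substring match, but the failure mode is the same: trivial obfuscation breaks byte-level patterns.

```python
# Hypothetical byte signature and family name, for illustration only.
SIGNATURES = {b"EncryptAllFiles": "HypotheticalRansom.A"}

def scan(payload: bytes) -> list[str]:
    """Naive static signature scan: flag any known byte pattern."""
    return [family for sig, family in SIGNATURES.items() if sig in payload]

sample = b"...header...EncryptAllFiles...footer..."
assert scan(sample) == ["HypotheticalRansom.A"]   # plain sample is caught

# A single-byte XOR "packer" defeats the byte-level signature entirely:
obfuscated = bytes(b ^ 0x42 for b in sample)
assert scan(obfuscated) == []                     # same malware, no match
```

This is why signature databases need constant updates and why the deception- and behavior-based techniques discussed in this section remain necessary complements.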
2.6. AI–Blockchain Integration in Cybersecurity and CTI
Recent research increasingly explores the joint use of artificial intelligence and blockchain to enhance cybersecurity functions beyond integrity and auditability [
39]. AI techniques have been applied for malware classification, anomaly detection, threat prioritization, and automated response, while blockchain provides provenance, tamper resistance, and decentralized trust. Several studies integrate machine learning models with blockchain-based CTI sharing to improve data quality and reduce false intelligence dissemination [
40]. For example, learning-based classifiers are used to score or filter CTI submissions before on-chain anchoring, while blockchain ensures accountability of contributors and validators. Other works employ deep learning for intrusion or ransomware detection and subsequently record alerts or indicators on-chain to support collaborative defense. In contrast to these approaches, CTIB deliberately separates content-level detection (IRDS4C) from integrity and accountability (blockchain), allowing each layer to be independently optimized and empirically evaluated.
3. Methodology and Implementation
This section outlines the methodology used to develop the CTIB and IRDS4C frameworks, detailing their design, data collection, implementation, and evaluation in comparison with traditional CTI approaches. The primary objective is to establish robust security mechanisms specifically tailored for cloud environments, addressing the limitations of conventional cybersecurity solutions. The methodology adopts both qualitative and quantitative evaluation techniques to assess the effectiveness of the proposed systems. The framework integrates two complementary components:
IRDS4C (Intrusion and Ransomware Detection System for Cloud), which leverages deception-based techniques to detect threats within cloud infrastructures.
CTIB (Cyber Threat Intelligence using Blockchain), which utilizes blockchain technology to securely share threat intelligence across multiple organizations. Together, these systems aim to enhance cloud security through proactive threat detection and trusted intelligence sharing.
The IRDS4C framework implements deception strategies across multiple layers of the cloud environment to enhance threat detection and response. At the network perimeter, decoy route paths and fake border gateway nodes are deployed to detect Distributed Denial of Service (DDoS) and reconnaissance activities. At the web application layer, decoy web applications and APIs are used to lure attackers into revealing techniques such as SQL injection and cross-site scripting (XSS). Within the operating system layer, decoy services, dummy user accounts, and misleading system files are introduced to identify unauthorized access attempts and privilege escalation. For the database layer, decoy databases containing fake credentials are employed to detect attempts to access or exfiltrate sensitive information. In the storage layer, crafted decoy files serve as early warning triggers for ransomware activity, allowing for detection before real data is compromised. Lastly, decoy servers are strategically deployed across the cloud environment to monitor lateral movement, offering valuable insights into attacker behavior and tactics. These multi-layered deception mechanisms collectively form a robust, interconnected defense strategy that not only enhances real-time threat detection but also facilitates intelligence gathering to improve long-term cloud security, as summarized in
Table 3.
The first part is the deception modules; these modules place decoy resources at different layers in the cloud. These decoys attract attackers or ransomware. Second, a monitoring system collects and analyzes information gathered from these decoy resources to check whether anyone is trying to access or modify them. If the monitoring system detects any suspicious activity, an alerting component instantly sends alerts to the administrators. Finally, there is an integration interface which helps this system communicate and share Cyber Threat Intelligence with CTIB. Different procedures have been used to track file activities in ransomware detection, mainly to find out if someone is trying to edit or encrypt critical files. File activity tracking can be achieved by hooking file system APIs or similar system calls such as the System Service Descriptor Table, or by using honey files, honeytokens, and decoys as mentioned in [
41]. Other common ways to detect suspicious file activity are comparing file hashes, analyzing file entropy, and using a file system event handler watcher. There are multiple ways to identify ransomware behavior. These include checking Windows audit logs and tracing events, using kernel drivers, or using process hooking techniques. One effective way is to continuously watch decoy files through a file system event handler watcher. The authors in [
42] suggested monitoring decoy files and enhanced this approach by using HMAC to detect if files were modified. However, this method might use more CPU resources. Another method based on file entropy [
42] is useful but sometimes gives false alarms due to its calculation method.
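The entropy-based approach mentioned above can be sketched as follows. This is a generic Shannon-entropy calculation, not the exact method of the cited work, and the thresholds shown are illustrative: natural-language files score low, while encrypted or compressed data scores near the 8-bits-per-byte maximum.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte; values near 8.0 suggest encrypted data."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

plaintext = b"Quarterly report: revenue grew modestly this year. " * 100
random_like = os.urandom(4096)          # stands in for ransomware ciphertext

assert shannon_entropy(plaintext) < 5.0      # natural language: low entropy
assert shannon_entropy(random_like) > 7.5    # encrypted-looking: high entropy
```

A spike in a decoy file's entropy after a write is therefore a strong hint of encryption in progress, though compressed media files also score high, which is one source of the false alarms noted above.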
Some key methods include file hashing, file entropy analysis, and the file event handler watcher. In this paper, we mainly focus on using the file event handler watcher API to monitor decoy resources, honey files, and honeytokens, which helps in ransomware detection as illustrated in
Figure 1.
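A minimal polling-based sketch of decoy-file monitoring is shown below. It is illustrative only: a production deployment would use a native file system event API (e.g., inotify on Linux or file change notifications on Windows) rather than hash polling, and the alert hand-off to administrators and CTIB is represented by a comment.

```python
import hashlib
import os
import tempfile

class HoneyFileMonitor:
    """Track decoy files and report any that changed since the last check."""

    def __init__(self, paths: list[str]):
        self.baseline = {p: self._fingerprint(p) for p in paths}

    @staticmethod
    def _fingerprint(path: str) -> tuple[int, str]:
        with open(path, "rb") as f:
            content_hash = hashlib.sha256(f.read()).hexdigest()
        return os.path.getsize(path), content_hash

    def check(self) -> list[str]:
        """Return the decoy files modified since the previous check."""
        alerts = []
        for path, old in self.baseline.items():
            current = self._fingerprint(path)
            if current != old:
                alerts.append(path)    # here: alert admins / report via CTIB
                self.baseline[path] = current
        return alerts

# Demo: plant a decoy file, then simulate a ransomware overwrite.
fd, bait = tempfile.mkstemp(suffix=".xlsx")
os.close(fd)
with open(bait, "wb") as f:
    f.write(b"decoy payroll records")
monitor = HoneyFileMonitor([bait])
assert monitor.check() == []               # untouched decoy: no alert
with open(bait, "wb") as f:
    f.write(b"ciphertext-after-encryption")
assert monitor.check() == [bait]           # modified decoy: alert fires
os.unlink(bait)
```

Since legitimate users have no reason to touch a decoy file, any write to it is a high-confidence signal, which is what makes this approach attractive for early ransomware detection.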
3.1. CTIB Framework Design
The CTIB framework enables organizations to securely share CTI using blockchain technology, supported by a hybrid consensus mechanism that combines PoW and PoS. A custom genesis block structure, compatible with standard CTI formats and data structures, was designed as illustrated in
Figure 2. Organizations submit threat intelligence, such as malicious IP addresses or attack patterns, through a secure web interface. The submitted data is encrypted, digitally signed, and verified before being recorded on the blockchain. Initially, validators assess the data using the PoS mechanism, where their influence is based on their stake and trust ranking (determined by their past behavior and deposited tokens). Following this, miners use PoW to further validate the results, adding a second layer of verification. Contributors who submit verified and useful threat intelligence receive tokens as rewards. These tokens can be redeemed for premium intelligence services or other features available on the blockchain. All data access and reward policies are managed automatically via smart contracts. The framework adheres to widely accepted CTI standards such as STIX and TAXII, facilitating easier integration and interoperability [
17].
The two-tier validation process, PoS followed by PoW, provides protection against common attacks, such as double spending. As depicted in
Figure 3, the process begins when a researcher submits CTI data. The CTIB system transforms this data into a genesis block, which is validated sequentially by multiple validators. The first validator is selected based on honesty ranking and validates the genesis block to generate Block #1. This block is then passed to subsequent validators, each of whom reviews and appends their validation results. Once all validators complete their assessments, CTIB performs a consensus check; if three out of five validators approve the CTI, it proceeds to the PoW layer. In the second validation phase, miners re-verify the CTI data. If confirmed, the miner receives a cryptocurrency reward. Successful miners may also be promoted to become future validators, fostering a self-sustaining and incentivized participation model; the corresponding pseudocode is provided in
Appendix A. This workflow is implemented in our CTIB prototype and empirically evaluated in
Section 3.2. The integration of PoS and PoW in this manner ensures that the CTIB framework remains secure, fair, and resilient against tampering and manipulation [
44].
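The PoS voting stage of this two-tier process can be sketched as follows. The `Validator` class, the stake-times-honesty weighting, and the three-of-five threshold are illustrative assumptions drawn from the description above, not the exact CTIB implementation.

```python
from dataclasses import dataclass

@dataclass
class Validator:
    name: str
    stake: float    # deposited tokens
    honesty: float  # honesty ranking in [0, 1], derived from past behavior

    @property
    def weight(self) -> float:
        # Assumed influence model: stake scaled by honesty ranking.
        return self.stake * self.honesty

def pos_consensus(validators, votes, threshold=3):
    """PoS stage: validators review the CTI in descending weight order
    (the highest-ranked validator produces Block #1); the CTI proceeds
    to the PoW layer when at least `threshold` of them approve."""
    ordered = sorted(validators, key=lambda v: v.weight, reverse=True)
    approvals = sum(1 for v in ordered if votes.get(v.name, False))
    return approvals >= threshold
```

With five validators and `threshold=3`, this reproduces the three-out-of-five consensus check described above.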
3.2. CTIB Prototype Implementation and Performance Evaluation
To operationalize CTIB as an executable prototype, we implemented the CTIB ledger layer as a Solidity smart contract deployed on a local Ethereum-compatible development network (Hardhat Network). Hardhat Network provides a local blockchain node that exposes an HTTP JSON-RPC interface and supports deterministic contract deployment and transaction execution in a controlled environment. In its default configuration, Hardhat Network mines blocks immediately as transactions arrive, enabling repeatable measurement of end-to-end transaction confirmation time without Internet-scale network variability; the measured latency therefore reflects controlled prototype overhead and excludes wide-area network propagation delays, peer-to-peer synchronization, and decentralized validator coordination costs [
45]. A lightweight service layer was implemented using FastAPI and served via Uvicorn (ASGI), exposing endpoints for (i) validator initialization, and (ii) CTI submission and publication through the PoS → PoW pipeline. This setup follows standard deployment practice for FastAPI applications using an ASGI server [
46]. Complete reproducibility details, benchmark configuration parameters, and execution artifacts are provided in
Appendix D. All reported CTIB performance metrics were obtained from a controlled local prototype deployment (Hardhat devnet + FastAPI/Uvicorn) and do not represent production-scale blockchain performance.
3.2.1. Functional Test + Stage-Level Timings
In a functional end-to-end submission test, CTIB successfully finalized and published the first CTI report (cti_id = 1) after receiving four validator approvals (out of five), demonstrating that the PoS voting stage and PoW sealing stage were both executed in the implemented pipeline. The measured wall-clock end-to-end completion time was 399.18 ms, with internal stage timings of 158.30 ms for the PoS review stage and 1.31 ms for the PoW nonce search (difficulty_bits = 12). The remaining time is attributable to request processing, serialization, and transaction finalization overhead in the prototype runtime.
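The PoW nonce search timed above can be sketched as follows, assuming the common leading-zero-bits difficulty convention implied by the `difficulty_bits` parameter (the exact hash construction in the prototype may differ).

```python
import hashlib

def pow_seal(payload: bytes, difficulty_bits: int = 12, budget: int = 2_000_000):
    """Search for a nonce whose SHA-256 digest has `difficulty_bits`
    leading zero bits. Returns the nonce, or None if the search budget
    is exhausted (the benchmark counts such runs as failures)."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(budget):
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None
```

Expected work grows as 2^difficulty_bits hash evaluations, which is why raising `difficulty_bits` from 8 to 16 degrades throughput and tail latency in the profiles reported below.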
Table 4 summarizes the CTIB performance results across three PoW difficulty profiles. At difficulty_bits = 8 (“baseline”), CTIB achieved a success rate of 99% with a throughput of 125.92 feeds/min (≈2.10 feeds/s), median end-to-end latency of 455.55 ms, and p95 latency of 577.68 ms. At difficulty_bits = 12 (“medium”), throughput decreased to 104.31 feeds/min (≈1.74 feeds/s) with a median latency of 533.14 ms and p95 latency of 742.27 ms. Under the highest tested configuration (difficulty_bits = 16, “stress”), CTIB sustained 74.93 feeds/min (≈1.25 feeds/s) with median latency of 724.99 ms and p95 latency of 1364.17 ms. Overall, increasing PoW difficulty increases computational cost and degrades throughput and tail latency, as expected for PoW-based sealing.
3.2.2. Benchmark Results
We benchmarked CTIB using an automated client that submits structured CTI items and measures the end-to-end pipeline time from API submission until the CTI record is approved and published. For each run, we report (i) success rate, (ii) throughput (feeds/min), and (iii) end-to-end latency distribution (median and p95). These indicators are commonly used in blockchain benchmarking frameworks and allow direct comparison between configurations under increasing workload difficulty. To study the impact of PoW cost on system performance, we evaluated three difficulty profiles (difficulty_bits ∈ {8, 12, 16}). Each profile submitted n = 100 CTI items, where a run is counted as successful when a valid PoW nonce is found within the configured search budget and the PoS validator threshold is satisfied.
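For clarity, the per-run indicators reported above can be computed as in the following sketch. The nearest-rank p95 estimator and the sequential-submission throughput estimate (total wall time taken as the sum of latencies) are simplifying assumptions, not necessarily the exact method used by the benchmark client.

```python
import statistics

def summarize_run(latencies_ms, successes, total):
    """Per-profile indicators: success rate, throughput (feeds/min),
    and median / p95 end-to-end latency.

    Assumes submissions were sequential, so wall time = sum of latencies;
    p95 uses the simple nearest-rank method."""
    lat = sorted(latencies_ms)
    p95 = lat[max(0, int(round(0.95 * len(lat))) - 1)]
    wall_min = sum(lat) / 1000.0 / 60.0
    return {
        "success_rate": successes / total,
        "throughput_fpm": successes / wall_min if wall_min else 0.0,
        "median_ms": statistics.median(lat),
        "p95_ms": p95,
    }
```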
4. Integration of IRDS4C and CTIB Frameworks and Discussion
After introducing both the IRDS4C and CTIB frameworks, this section explores how they can be combined into a unified system for cloud service providers. The IRDS4C framework functions chiefly as a detection and deception system, designed to identify ransomware and intrusions, serving as the last layer of defense before critical data is compromised. Once a threat is detected, pertinent intelligence is securely transmitted via the CTIB framework to authorized clients, enabling rapid response while maintaining data privacy and integrity. This section outlines how cloud service providers can detect threats, intrusions, and ransomware across various cloud infrastructure layers, and how these findings are shared through the integration of IRDS4C with the CTIB framework. It also explains how alerts and threat intelligence are communicated to clients, and highlights the challenges encountered in this process, including issues related to latency, scalability, and trust management. Finally, the operational workflow of the proposed integrated framework is presented, demonstrating how it addresses the limitations of traditional detection and threat-sharing mechanisms by combining advanced deception-based detection with secure, blockchain-enabled dissemination of cyber threat intelligence.
4.1. Integration of IRDS4C and CTIB Frameworks
The integration of the IRDS4C and CTIB frameworks into a unified system addresses several challenges faced by cloud service providers. IRDS4C functions as the last layer of defense, employing deception-based techniques to detect advanced threats such as ransomware and intrusions. Upon threat detection, IRDS4C quickly initiates evasive responses and sends alerts to the Security Information and Event Management (SIEM) system or other centralized log management platforms. These alerts are subsequently shared securely via the CTIB framework, as illustrated in
Figure 4.
Figure 4 depicts the integration of CTIB and IRDS4C, connecting various cloud service models, Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), with critical security components such as IRDS4C, SIEM, and CTIB. This combined framework facilitates seamless interactions among cloud service providers, clients, auditors, and end-users, thereby enhancing the overall cybersecurity infrastructure. Within this architecture, IRDS4C operates as the core detection mechanism, leveraging deceptive elements such as honey files and honeypots to identify malicious activities. It transmits alerts to the SIEM, which aggregates logs from multiple security sources, including WAFs, IDS/IPS, and IRDS4C, to conduct deeper analysis and detect coordinated attacks. The CTIB framework securely disseminates CTI among authorized participants using blockchain technology.
This ensures data privacy and integrity while promoting collaboration through incentive mechanisms. Clients receive real-time threat alerts directly from CTIB, while auditors can verify the authenticity of shared intelligence via the blockchain without exposing sensitive data. Ultimately, end users benefit from enhanced protection when interacting with cloud-based applications secured through this multi-layered defense strategy. To preserve client privacy, each participant is assigned a unique hashed account on the CTIB blockchain, ensuring anonymity while enabling the secure exchange of threat intelligence. When IRDS4C detects threats using deception techniques such as honey files or honeypots, it transmits alerts to the SIEM for logging and correlation. Detailed threat information is then securely shared via the CTIB blockchain. The framework incentivizes clients to contribute intelligence by offering rewards, while auditors can verify reports without revealing the identity of the source. This integrated approach accelerates threat detection and enhances the overall security posture of the cloud environment.
4.2. Leveraging IRDS4C System for Multi-Layer Threat Detection and Defense
Deception units have received growing attention in cybersecurity because they help spot threats early, sometimes before serious damage occurs. In the IRDS4C setup, deception elements are spread across different parts of the IT environment, including web applications, operating systems, storage, servers, databases, and the network gateway. The idea is to make the environment disorienting for an attacker, so that they are likely to touch a decoy at some point.
At the border gateway layer, IRDS4C uses a mix of fake routes, fabricated BGP announcements, and honeynodes to monitor unusual traffic and log anything suspicious. In the web application layer, the unit deploys decoy applications, APIs, and fake user accounts, which helps catch attacks such as SQL injection or brute-force logins. At the operating system layer, counterfeit services, fake user accounts, and dummy files allow the unit to identify attempts to gain unauthorized access or install rootkits.
For the database layer, IRDS4C sets up fake databases with bogus credentials and a set of crafted SQL queries, intended to expose attackers trying to steal or tamper with stored data. In the storage layer, the unit places decoy files inside cloud storage so that ransomware hits those first, giving an early warning before real data is touched. At the server layer, fake remote services and decoy servers are used to watch for brute-force activity or unauthorized access attempts. Altogether, this multi-layer deception strategy strengthens detection and improves resilience, as shown in
Table 4. This table gives an overview of how IRDS4C is deployed across six layers of a cloud network: border gateway, web application, operating system, database, storage, and server. It shows how each layer uses its own decoy technique to detect or delay advanced cyberattacks. The aim is to draw the attacker away from real assets and to collect useful information about what they are doing and how. By spreading deception across these layers, IRDS4C improves the organization's overall security posture: it helps spot new or unknown threats early, gathers practical intelligence about attacker behavior, and reduces the chance of a stealthy intrusion. The design supports a more proactive threat-hunting style, giving teams better tools to deal with both common and advanced cyberattacks.
On the CTIB side, each participant is given a hashed account so their identity stays private while still allowing safe sharing of threat intelligence. When IRDS4C notices suspicious activity, such as touching honey files or interacting with honeypots, it sends an alert to the SIEM for logging and correlation. Detailed threat intelligence is then shared securely through the CTIB blockchain. The framework encourages clients to share their own intelligence by offering rewards, and auditors can verify reports without knowing who submitted them. Together, these mechanisms accelerate detection and improve the cloud security posture in a way that supports collaboration and quicker responses.
4.3. Overview of IRDS4C Deception-Based Detection Across Cloud Layers
As outlined in the previous subsection, IRDS4C distributes deception elements across the main layers of the IT infrastructure: fake routes, fabricated BGP announcements, and honeynodes at the border gateway; decoy applications, APIs, and fake user accounts in the web application layer; counterfeit services, fake accounts, and dummy system files at the operating system layer; fake databases, bogus credentials, and crafted SQL queries at the database layer; decoy files in cloud storage so that ransomware encounters them before real data is encrypted; and fake remote services and decoy servers at the server layer. This multi-layer deception strengthens detection and improves resilience, as summarized in
Table 5. This table shows the decoy technique used at each of the six cloud layers, border gateway, web application, operating system, database, storage, and server, and how each one helps detect or slow down complex cyber threats that might otherwise go unnoticed. The aim is to lead attackers away from real assets while collecting useful information about what they are actually doing. By spreading deception across all these layers, IRDS4C strengthens the organization's security posture: it helps find unknown threats early, gathers actionable intelligence about attacker behavior, and reduces the chance of a stealthy intrusion. Algorithm A1 (
Appendix A) summarizes the decoy deployment process. For each cloud layer, the engine reads the corresponding policy rules, instantiates the required decoys, enables logging on each decoy, and updates the central decoy registry. This ensures consistent coverage and simplifies management of large-scale deployments. IRDS4C uses a file event watcher to continuously monitor decoy resources. Whenever a decoy file is modified, renamed, or deleted, the system retrieves metadata about the responsible process (e.g., executable path, parent process, user context, and open handles). These features are combined into a behavioral profile. The detection component computes a ransomware score based on suspicious patterns such as rapid encryption attempts, access to multiple decoy files, unusual process trees, and unauthorized privilege escalation. When the score exceeds a predefined threshold, IRDS4C raises an alert, isolates the offending process, and triggers evidence collection, including snapshots and logs. Algorithm A2 (
Appendix A) provides the pseudocode for the monitoring and detection loop. This event-driven approach enables near real-time detection with minimal overhead on production systems, since benign processes are not expected to interact with decoy resources under normal conditions. Once a ransomware incident is confirmed, IRDS4C constructs a structured CTI report in STIX format. The report includes an indicator object describing the observable patterns, an observed-data object containing raw events, and a threat-actor object representing the adversary.
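The scoring step of the monitoring loop can be illustrated as follows. The feature set, weights, and threshold here are hypothetical placeholders, since the actual logic is given in Algorithm A2 and would be tuned per deployment.

```python
from dataclasses import dataclass

@dataclass
class ProcessProfile:
    """Behavioral profile built from process metadata gathered on a decoy event."""
    decoy_files_touched: int
    writes_per_second: float
    unusual_parent: bool        # e.g., an office app spawning a shell
    privilege_escalation: bool

def ransomware_score(p: ProcessProfile) -> float:
    """Combine suspicious patterns into a single score (weights illustrative)."""
    score = 0.0
    score += min(p.decoy_files_touched, 5) * 0.15     # access to multiple decoys
    score += 0.30 if p.writes_per_second > 50 else 0  # rapid encryption attempts
    score += 0.20 if p.unusual_parent else 0          # unusual process tree
    score += 0.25 if p.privilege_escalation else 0    # unauthorized escalation
    return score

def handle_event(profile: ProcessProfile, threshold: float = 0.6) -> str:
    # Above the threshold: raise an alert, isolate the process,
    # and trigger evidence collection (snapshots, logs).
    return "ALERT" if ransomware_score(profile) >= threshold else "OK"
```

Because benign processes should never touch decoys, even a modest threshold keeps false positives low while still reacting quickly to mass-encryption behavior.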
5. A Real-Life Scenario: Ransomware Detection Using IRDS4C and CTIB in Cloud OS Layer
This section explains how the IRDS4C and CTIB systems operate in tandem to address the limitations of traditional cybersecurity approaches. It also outlines the integration of the IRDS4C framework into a practical deployment scenario, demonstrating the generation of CTI in STIX and TAXII formats and its subsequent transformation into a CTIB genesis block, ready for incorporation into the CTIB blockchain framework.
5.1. IRDS4C Real Life Scenario
This subsection evaluates the effectiveness of the IRDS4C framework in comparison to traditional detection techniques such as file hashing and file entropy analysis. File hashing and file entropy analysis are used in this study strictly as illustrative baseline references, representing widely adopted traditional detection techniques. They are not intended to serve as comparisons against state-of-the-art machine learning-based detectors or modern EDR platforms. The study in [
47] highlights the differences in detection rates, response times, and false positive rates between IRDS4C and conventional methods across various ransomware families, as summarized in
Table 6.
For the experimental setup, the IRDS4C framework was deployed on the Google Cloud Platform and tested using various cloud layers. Multiple experiments were conducted with a diverse set of ransomware samples and intrusion scenarios. Detailed information about the experimental environments, including hardware specifications and software versions, can be found in references [
43,
47] and mentioned in (
Appendix B).
In terms of detection time, IRDS4C demonstrates a significant advantage. For instance, the Phobos (RedLine) ransomware was detected within 9 s using IRDS4C, while file hashing required 35 s for detection, as reported in [
48]. This rapid detection capability is crucial for minimizing the impact of ransomware attacks. Following detection, all relevant information is compiled into a structured report, which is then converted into the STIX and TAXII formats. This structured threat intelligence is subsequently embedded into the genesis block of the CTIB blockchain and released into the CTIB network for secure and transparent dissemination. The following phases will provide a detailed discussion of the integration and operation of the proposed frameworks.
5.1.1. Phase 1: Generating CTI Report for IRDS4C
These results underscore the efficiency of the IRDS4C framework in rapidly detecting ransomware threats and reducing response times. To evaluate the accuracy of each detection method, several performance metrics are considered. True positives (TPs) represent ransomware samples that are correctly identified as malicious, while false positives (FPs) denote benign samples that are incorrectly classified as ransomware. False negatives (FNs) correspond to actual ransomware samples that are not detected by the system. The probability of detection, denoted as P(Detection), reflects the system’s ability to correctly identify true threats and is calculated using the formula: P(Detection) = TP/(TP + FN).
Precision quantifies the accuracy of positive predictions by measuring the proportion of correctly identified ransomware among all samples flagged as threats: Precision = TP/(TP + FP). Recall, also referred to as sensitivity, assesses the system’s capability to detect all actual ransomware threats: Recall = TP/(TP + FN). The F1 score provides a harmonic mean of precision and recall, offering a balanced evaluation of both metrics, particularly valuable in scenarios with imbalanced datasets: F1 score = 2 × (Precision × Recall)/(Precision + Recall).
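The four metrics above follow directly from the raw counts, as this small helper shows:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Compute P(Detection), precision, recall, and F1 from raw counts."""
    recall = tp / (tp + fn) if tp + fn else 0.0      # equals P(Detection)
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"p_detection": recall, "precision": precision,
            "recall": recall, "f1": f1}
```

For example, a method with 9 true positives, 1 false positive, and 1 false negative scores 0.9 on precision, recall, and F1 alike.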
Table 6 summarizes the detection performance for various ransomware families, including the counts of true positives, false positives, and false negatives. These results validate the effectiveness of the IRDS4C framework in accurately identifying ransomware while minimizing false alarms and enhancing overall detection reliability.
This analysis underscores the efficacy of the IRDS4C framework, particularly its rapid detection capabilities and the absence of false positives, distinguishing it from traditional detection methods [
48]. The following sections will provide a more detailed examination of the implications of these findings for cybersecurity practices. Based on the updated analysis, the IRDS4C framework consistently demonstrates superior performance across various ransomware families, exhibiting high detection probabilities and precision scores. In contrast, conventional techniques such as file hashing and file entropy analysis show variable levels of effectiveness, though they remain adequate in certain scenarios. Overall, this provides a comprehensive comparison of the effectiveness of each detection approach.
Calculating Mean Time to Detect (MTTD)
Measuring the Mean Time to Detect (MTTD) for the IRDS4C framework, file hashing, and file entropy involves calculating the average time required to detect ransomware threats using each method. This metric is crucial for demonstrating the effectiveness and efficiency of the IRDS4C system in comparison to traditional techniques.
To compute the MTTD, begin by collecting the detection times recorded for each ransomware family using the three detection approaches: IRDS4C, file hashing, and file entropy. Organize this data into a structured dataset that clearly associates each detection time with its corresponding method. Then, calculate the MTTD by summing all detection times for a given method and dividing the total by the number of samples analyzed.
Mathematically, MTTD is defined as MTTD = (Σ Detection Times)/(Number of Samples). This calculation provides a standardized way to compare the responsiveness of different detection systems under consistent conditions.
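The MTTD formula above amounts to a per-method average over the recorded detection times, e.g.:

```python
def mttd(times_by_method: dict) -> dict:
    """MTTD = (sum of detection times) / (number of samples), per method.
    `times_by_method` maps a method name to its list of detection times (s)."""
    return {m: sum(t) / len(t) for m, t in times_by_method.items()}
```

The detection-time values used here are illustrative; the measured times per ransomware family are those reported in Table 7.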
Table 7 presents the detection times (in seconds) for each ransomware family for various detection techniques [
47] and
Table 8 provides MTTD times (in seconds) for various detection techniques.
The IRDS4C framework significantly outperforms both file hashing and file entropy in terms of mean time to detect ransomware threats, demonstrating its efficiency and effectiveness.
5.1.2. Phase 2: Converting CTI Report for IRDS4C Ransomware to STIX and TAXII Format
In this phase, a structured CTI report is prepared to summarize the analysis comparing the IRDS4C framework with traditional detection methods (file hashing and file entropy), focusing on detection rates and MTTD. This report is intended to be disseminated via the CTIB system. Contributors generate the CTI reports, which include all relevant experimental details and information on the ransomware detection performed by IRDS4C.
The report is formatted according to standard CTI representations, specifically STIX and TAXII. STIX is a standardized language for representing structured threat intelligence data, enabling the comprehensive documentation of cyber threat indicators. Threat intelligence captured by IRDS4C is structured in STIX format to facilitate interoperability with cybersecurity tools. For example, detected ransomware indicators such as file hashes, malicious IP addresses, and domain names are summarized within STIX reports. These typically include a unique indicator ID, a descriptive title (e.g., “IRDS4C Ransomware Detection”), timestamps of detection events, and relevant IoCs.
Once IRDS4C identifies a threat, TAXII messages are used to securely and efficiently disseminate the STIX-formatted threat intelligence. TAXII supports real-time threat information exchange between systems, enhancing the responsiveness of detection mechanisms. Within the CTIB blockchain, threat intelligence is immutably recorded starting from the genesis block. This block contains essential metadata such as timestamps, unique block hashes, and digital signatures. When IRDS4C detects a ransomware variant, a blockchain transaction is created, storing critical metadata (e.g., SHA-256 hashes of the threat) directly on chain. To ensure scalability and optimize performance, full transaction details and datasets are maintained off-chain, while references to this data are securely linked within the blockchain. In the STIX format, the report starts with an {indicator} object that describes how IRDS4C detects ransomware and compares IRDS4C against traditional methods such as file hashing, as an example of a real-life scenario. It also records the date from which the indicator becomes relevant and notes that it is used specifically during the detection phase. The next part, {observed-data}, records real examples from tests, including timestamps and details about the actual ransomware samples used. Finally, the {threat-actor} part typically identifies who carried out the attacks.
In this case, however, it is used to highlight the main purpose of the analysis: showing how IRDS4C compares against traditional detection methods. TAXII is a secure method for sharing STIX threat intelligence data between security systems such as SIEMs. Although TAXII is not strictly needed in the CTIB framework, since the blockchain already ensures secure data sharing, TAXII messages are still created for compatibility with other security systems. For instance, when IRDS4C detects ransomware, it creates a TAXII message summarizing key details such as the ransomware type, timestamps, and detection indicators. This helps security teams quickly integrate the information into their existing security tools.
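To make the report structure concrete, the following sketch assembles a minimal STIX-style bundle containing the three objects discussed above using plain JSON. Field names follow STIX 2.x conventions, but this is illustrative rather than a complete STIX implementation; a production system would use a dedicated STIX library, and the function name `make_report` and the actor label are our own placeholders.

```python
import json
import uuid
from datetime import datetime, timezone

def make_report(ioc_pattern: str, samples: list, now=None) -> str:
    """Build a minimal STIX-style bundle: indicator, observed-data, threat-actor."""
    ts = (now or datetime.now(timezone.utc)).isoformat()
    indicator = {
        "type": "indicator",
        "id": f"indicator--{uuid.uuid4()}",
        "name": "IRDS4C Ransomware Detection",
        "pattern": ioc_pattern,          # e.g., a file-hash or IP pattern
        "valid_from": ts,                # date from which the indicator applies
    }
    observed = {
        "type": "observed-data",
        "id": f"observed-data--{uuid.uuid4()}",
        "first_observed": ts,
        "last_observed": ts,
        "number_observed": len(samples), # ransomware samples seen in tests
    }
    actor = {
        "type": "threat-actor",
        "id": f"threat-actor--{uuid.uuid4()}",
        "name": "unknown-ransomware-operator",  # placeholder attribution
    }
    return json.dumps({"type": "bundle",
                       "objects": [indicator, observed, actor]})
```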
5.1.3. Phase 3: Generating the CTIB Genesis Block
Based on the CTIB blockchain structure described earlier in this paper and the preceding STIX and TAXII report,
Table 9 illustrates an enhanced version of the CTI report that includes the CTIB genesis block structure. The JSON structure for the CTIB genesis block includes several fields: block_hash, a unique value computed after the block data is finalized; header, which contains metadata such as previous_hash set to “0” since it is the first block, a timestamp indicating the block creation time, total_data initially set to zero, and total_feeds also starting at zero.
The structure also includes cti_feed, which details the threat intelligence feed, comprising a unique feed_hash, a feed_url for access, and Indicators of Compromise (IoCs) such as IP addresses, domains, URLs, severity levels, attack types, and the owner’s digital signature. Additionally, quality_reviews is currently empty as no evaluations have been recorded, and average_quality is initialized as a placeholder. This layout clearly defines the genesis block structure, with specific values adaptable according to implementation requirements, as illustrated in
Figure 5.
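A minimal sketch of assembling this genesis block layout follows. The field names are taken from the description above; the SHA-256-over-sorted-JSON hashing and the serialization details are illustrative assumptions rather than the exact prototype scheme.

```python
import hashlib
import json
import time

def make_genesis_block(cti_feed: dict, timestamp=None) -> dict:
    """Assemble the genesis block layout; block_hash is computed only
    after the block data is finalized, as described in the text."""
    block = {
        "header": {
            "previous_hash": "0",  # first block, so no predecessor
            "timestamp": timestamp if timestamp is not None else int(time.time()),
            "total_data": 0,
            "total_feeds": 0,
        },
        "cti_feed": cti_feed,      # feed_hash, feed_url, IoCs, owner signature
        "quality_reviews": [],     # empty: no evaluations recorded yet
        "average_quality": None,   # placeholder until reviews exist
    }
    serialized = json.dumps(block, sort_keys=True).encode()
    block["block_hash"] = hashlib.sha256(serialized).hexdigest()
    return block
```

Hashing the finalized, canonically serialized contents makes the block hash deterministic, so any later tampering with the feed or header is immediately detectable.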
The presented design integrates the CTIB and IRDS4C frameworks to enhance cloud cybersecurity. IRDS4C employs decoys across multiple layers, including web applications, operating systems, databases, storage, servers, and network gateways to detect attackers and ransomware at an early stage, outperforming traditional detection methods such as file hashing. For instance, IRDS4C identified the Phobos ransomware within 9 s, significantly faster than conventional approaches, and without generating false positives. CTIB allows threat intelligence to be shared safely using blockchain technology, which helps protect privacy and builds trust. By using both PoW and PoS together, CTIB makes a 51% attack extremely unlikely. An attacker would need to control about 71.5% of the mining power and the staked coins at the same time. As shown in [
17], this number comes from basic probability equations that describe how the two parts of the system work together. The attacker must win the mining race in PoW and be selected as a validator in PoS at the same time. When these two probabilities are multiplied, the chance of success drops sharply, so reaching 51% becomes almost impossible without taking over most of the whole network. This threshold is derived under idealized analytical assumptions, including independence between PoW and PoS control probabilities, homogeneous validator behavior, and the absence of network latency, adaptive adversaries, or strategic collusion effects. As such, the 71.5% value should be interpreted as an upper-bound security intuition rather than a precise real-world attack probability. A short explanation of these probability equations is provided in the appendix (
Appendix C) for clarity. As a result, the combined CTIB and IRDS4C frameworks offer strong and reliable protection [
17,
47], as shown in
Figure 6.
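Under the independence assumption stated above, the ≈71.5% figure can be reproduced in one line: if an attacker controls the same fraction f of both the mining power and the stake, the joint takeover probability is f × f, and solving f² = 0.51 gives f = √0.51.

```python
import math

def required_control(target: float = 0.51) -> float:
    """Fraction f of BOTH mining power and stake needed so that the
    joint (independent) success probability f * f reaches `target`."""
    return math.sqrt(target)
```

This evaluates to about 0.714, i.e., roughly 71.4% of both resources, consistent with the ≈71.5% threshold cited from [17] (which may use slightly different rounding or model details).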
Scalability represents a critical challenge for the IRDS4C and CTIB frameworks in cloud security and cyber threat intelligence applications. Traditional blockchain architectures, particularly those relying solely on PoW, often face limitations such as low transaction throughput, high resource consumption, and restricted network participation. The proposed integration of CTIB and IRDS4C addresses these challenges by combining multiple consensus mechanisms to balance security, efficiency, and scalability effectively.
5.2. Discussion
The case study demonstrates how the IRDS4C deception sensors, when deployed across all layers of the cloud infrastructure and integrated with the CTIB blockchain, significantly enhance ransomware defense capabilities. Unlike traditional threat intelligence (CTI) feeds that rely on centralized storage managed by a single vendor, CTIB decentralizes threat data across distributed nodes. This architecture eliminates the risk of a single point of failure and prevents any single entity from altering or withholding threat alerts. In contrast, conventional CTI platforms depend on centralized servers, where any compromise or downtime can disrupt intelligence dissemination.
Table 10 compares the CTIB system with other advanced CTI frameworks proposed in the recent literature, most of which primarily aim to enhance data-sharing mechanisms or improve format standardization. However, these frameworks typically lack an integrated detection engine. In contrast, the CTIB framework incorporates IRDS4C, a multi-layered deception-based detection system that actively analyzes file behaviors in real time.
Accordingly, the reported improvements should be interpreted relative to traditional baseline techniques rather than as a replacement or direct benchmark against production-grade EDR or AI-based detection systems.
IRDS4C achieves rapid and accurate ransomware detection with zero false positives and demonstrates an MTTD of approximately 12.4 s, an outcome not observed in other models. A further distinguishing feature of CTIB is its dual-consensus blockchain architecture, integrating PoW and PoS. PoW ensures the immutability and tamper resistance of stored data, while PoS enables efficient validation by permissioned participants, enhancing both performance and security. This hybrid approach provides strong resilience: executing a 51% attack would require control of more than 70% of both the hashing power and the staked tokens, making such an attack practically infeasible.
CTIB also automates the generation of threat data in standardized STIX/TAXII formats, allowing seamless interoperability with existing Security Operations Center (SOC) tools, including those not directly connected to CTIB. Unlike traditional CTI feeds, which often transmit indicators in formats like CSV or JSON over conventional channels such as HTTPS, CTIB's blockchain inherently preserves data integrity and traceability. Moreover, CTIB is the only reviewed framework that offers a fully integrated detection system, IRDS4C, capable of issuing real-time alerts based on behavioral analysis and deceptive responses. It maintains zero false positives while sustaining an MTTD of approximately 12.4 s. CTIB's use of blockchain extends beyond data integrity to include permissioned validation via PoS and transparency via PoW, capabilities that are not concurrently present in existing frameworks, which typically adopt a single consensus model. Finally, while many existing CTI models emphasize theoretical transactions-per-second (TPS) benchmarks, the CTIB + IRDS4C framework has been empirically validated using 53 ransomware samples and rigorously stress tested for validator security. The results confirm its practical effectiveness in securely detecting and disseminating verified threat intelligence in real time. The reported measurements were collected on a local development blockchain (Hardhat Network) and therefore represent controlled prototype performance rather than Internet-scale deployment. Because Hardhat mines blocks immediately by default, these latency results should be interpreted as a lower bound on real-world deployment latency, since they exclude wide-area network propagation delays and distributed validator communication costs.
5.3. Operational, Energy, and Ethical Considerations
Deploying IRDS4C and CTIB in real environments may introduce additional operational costs. These costs include maintaining decoy resources across multiple layers of the cloud, storage for CTI reports, and running supporting systems such as log collectors and monitoring agents. Cloud-based deployments may also generate charges related to virtual machines, storage volumes, and network usage.
Although the current study does not measure these costs directly, real-world use would require budgeting for infrastructure, maintenance, and periodic updates to decoy assets to ensure long-term effectiveness. In this study, CTIB is evaluated as an executable prototype on a local development blockchain; therefore, the following energy discussion is provided as a deployment consideration rather than a measured outcome. It is important to note that consensus mechanisms based on PoW generally require higher energy consumption due to the mining process. Even though the hybrid model reduces the full dependence on PoW by combining it with PoS, practical deployment would still require careful evaluation of energy usage and environmental impact. Future implementations should explore tuning PoW difficulty, using renewable energy cloud regions, or replacing PoW with lighter mechanisms if sustainability becomes a major concern.
Deception techniques such as honeypots and honey files can raise ethical and legal questions depending on local regulations. Some jurisdictions place restrictions on collecting data about attackers, monitoring their behavior, or storing interaction logs without explicit consent. Organizations must also consider privacy rules and industry compliance requirements before deploying deception systems in production. The goal of IRDS4C is to protect systems by attracting malicious activity only to controlled and isolated decoy environments; however, any real deployment should follow legal guidance and ethical best practices to avoid misuse or unintended exposure of sensitive information.
5.4. Challenges
Despite its strengths, the IRDS4C and CTIB framework presents several limitations that warrant further investigation. First, the evaluation was conducted on a limited dataset comprising seven Windows-based ransomware families. To generalize the findings, additional testing is required using Linux-based, ESXi-targeted, and container-based ransomware, as well as under real-time traffic conditions in multi-cloud environments. Another challenge is the potential for advanced adversaries to recognize and evade static decoy elements over time. Sustaining effective deception thus necessitates frequent updates, which introduces additional overhead in terms of storage and system administration. Moreover, storing complete STIX reports directly on-chain increases blockchain size, potentially impacting long-term performance.
Future implementations should consider storing only cryptographic hash summaries on-chain, with full reports maintained in off-chain storage solutions such as the InterPlanetary File System (IPFS). The CTIB framework’s review system, which assesses the quality of threat intelligence based on validator feedback, may suffer from inconsistency in the absence of standardized evaluation protocols. Adopting structured review templates with predefined scoring criteria would enhance the reliability, objectivity, and fairness of the assessment process. Although CTIB’s integration with the Traffic Light Protocol (TLP) and its support for smart contract-based automated access control show promise, these capabilities require thorough validation across diverse organizational environments. In summary, while the IRDS4C and CTIB framework demonstrates strong performance in threat detection, data integrity, and trust management, further enhancements are essential to support broader adoption and ensure long-term operational scalability and efficiency.
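The hash-anchoring pattern suggested above can be sketched as follows, assuming SHA-256 digests as the on-chain summary; the function names are hypothetical, and a real deployment would store IPFS CIDs rather than raw digests:

```python
import hashlib
import json

def anchor_report(report: dict) -> str:
    """Canonicalize the STIX report (sorted keys, compact separators) and
    return its SHA-256 digest, the only artifact written on-chain."""
    canonical = json.dumps(report, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_report(report: dict, onchain_digest: str) -> bool:
    """Re-hash the off-chain copy and compare it with the anchored digest,
    detecting any tampering of the off-chain store."""
    return anchor_report(report) == onchain_digest

report = {"type": "bundle", "objects": []}
digest = anchor_report(report)
assert verify_report(report, digest)          # untouched copy verifies
tampered = {"type": "bundle", "objects": [{"x": 1}]}
assert not verify_report(tampered, digest)    # any modification is detected
```

Canonicalization before hashing matters: two semantically identical JSON documents with different key orders would otherwise produce different digests and spurious verification failures.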
6. Conclusions and Future Work
This paper investigated the integration of the IRDS4C and CTIB frameworks to enhance both cyber threat detection and intelligence sharing within cloud environments. The primary contributions lie in presenting a unified pipeline that begins with deception-based ransomware detection (IRDS4C) and culminates in secure, tamper-proof cyber threat information sharing via CTIB, without reliance on any central authority or trusted node. IRDS4C leverages advanced deception techniques, including honeypots, honeytokens, fake routes, and dummy APIs, deployed across multiple layers of cloud infrastructure (network, operating system, database, storage, and server). These mechanisms create a realistic yet controlled trap environment, allowing for the behavioral analysis of ransomware and the generation of precise alerts. In a practical evaluation involving 53 ransomware samples, IRDS4C achieved an average MTTD of approximately 12 s, three to four times faster than conventional methods such as file hashing or entropy analysis, while maintaining a zero false positive rate.
This accuracy is attributed to the nature of the decoys: benign traffic does not interact with the fake resources, eliminating false alarms. Following detection, IRDS4C automatically constructs comprehensive STIX-formatted threat intelligence reports. These are securely disseminated via the CTIB blockchain network, which employs a novel hybrid consensus model that combines PoS validation with PoW. This approach enables rapid consensus while ensuring the data’s integrity, immutability, and verifiability. Security analysis indicates that an attacker would require control of approximately 71% of both the stake and the hashing power to compromise the network, rendering such an attack highly impractical. Despite these strengths, several limitations remain. Storing entire STIX bundles on-chain significantly increases blockchain size (approximately 220 kB per incident), which may result in elevated storage costs in long-term deployments. Additionally, the experimental evaluation was limited to Windows-based ransomware samples, potentially restricting generalizability to threats targeting Linux, containerized environments, or SaaS platforms. Another limitation involves the data governance process: the current validator scoring system relies on free-form comments, which can lead to inconsistencies in the quality assessment of shared intelligence.
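The zero-false-positive property follows directly from the alerting rule: an alert fires only when an access touches a registered decoy resource, so legitimate traffic can never trigger one. A minimal sketch, with illustrative decoy paths that are not taken from the actual IRDS4C deployment:

```python
# Decoy-only alerting: the detector raises an alert if and only if the
# accessed resource is a registered decoy. Benign users have no reason to
# touch decoys, so ordinary traffic produces no alerts by construction.

DECOY_PATHS = {"/srv/db/backup_keys.txt", "/api/v1/internal/debug"}

def check_access(path: str) -> bool:
    """Return True (alert) only for accesses to registered decoy resources."""
    return path in DECOY_PATHS

assert check_access("/srv/db/backup_keys.txt") is True   # decoy touched -> alert
assert check_access("/home/user/report.docx") is False   # benign access -> silent
```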
To address these limitations, several enhancements are proposed. First, large STIX artifacts and associated memory dumps will be offloaded to decentralized storage systems such as the InterPlanetary File System (IPFS), with only compact 46-byte Content Identifiers (CIDs) and brief metadata retained on-chain. Second, IRDS4C’s deception-based detection capabilities will be extended beyond Windows virtual machines to include Kubernetes clusters, cloud storage APIs, and Azure Active Directory tokens, thereby validating its effectiveness in modern, heterogeneous cloud ecosystems. Third, to improve the consistency of CTI quality evaluations, standardized JSON schema templates will be introduced, incorporating well-defined criteria for accuracy, timeliness, and completeness. Furthermore, in light of energy efficiency and performance considerations, alternative consensus mechanisms such as PoA will be explored to enhance scalability and reduce resource consumption.
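One possible shape for such a standardized review template is sketched below, with illustrative criterion weights and a 0-5 scale; none of these values are taken from CTIB itself:

```python
# Hypothetical structured review template replacing free-form validator
# comments: every review must score the same predefined criteria on a fixed
# 0-5 scale, and the overall quality score is a weighted combination.

CRITERIA = {"accuracy": 0.5, "timeliness": 0.3, "completeness": 0.2}

def score_review(review: dict) -> float:
    """Validate that every criterion is present and in range, then return
    the weighted quality score for the shared CTI report."""
    for name in CRITERIA:
        value = review.get(name)
        if not isinstance(value, (int, float)) or not 0 <= value <= 5:
            raise ValueError(f"criterion '{name}' missing or out of range")
    return sum(review[name] * weight for name, weight in CRITERIA.items())

print(score_review({"accuracy": 5, "timeliness": 4, "completeness": 3}))
```

Because every validator scores the same criteria with the same weights, two reviews of the same report become directly comparable, which free-form comments do not allow.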
Finally, AI techniques will be incorporated to prioritize and triage threat alerts automatically, enabling more efficient analyst workflows by focusing attention on the most critical threats. By implementing these improvements, together with automated decoy rotation mechanisms, the IRDS4C and CTIB framework will evolve into a comprehensive, scalable, and resilient platform for real-time threat detection and intelligence sharing, significantly advancing cybersecurity capabilities across diverse and dynamic cloud infrastructures.