Article
Peer-Review Record

Dynamic Sharding and Monte Carlo for Post-Quantum Blockchain Resilience

Cryptography 2025, 9(2), 22; https://doi.org/10.3390/cryptography9020022
by Dahhak Hajar 1,*, Nadia Afifi 1 and Imane Hilal 2
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 10 February 2025 / Revised: 31 March 2025 / Accepted: 3 April 2025 / Published: 11 April 2025
(This article belongs to the Special Issue Emerging Trends in Blockchain and Its Applications)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The paper presents a dynamic sharding approach that aims to reduce the impact of DDoS attacks in blockchain systems. This is achieved by capitalizing on the adaptive resource allocation and optimized transaction throughput.

The paper could act as a valuable study in the field. However, there are significant deficiencies that prevent us from recommending this article for publication.

The related work section is weak. The reader should be provided with context regarding certain terms. For example, in the context of blockchain systems, what types of DoS attacks are possible, what would an attacker aim to exhaust, and how? An attempt to formalize the attacks in this study should be made. As described, the DDoS attack should be (and probably is) avoided by simply incorporating authentication in messages to prevent their spoofing.

Also, a figure of how traditional sharding works would be valuable for newcomers in the field.

Metrics such as "transaction success", "resource consumption", and "fault tolerance" should be formally defined and described.

In Table 1, the words in row 1 are broken for no reason.

There is an unexplained change in the font of the text on page 5 and further below.

Monte Carlo simulations are mature and a good first step for a study; the method therefore does not require extensive explanation or justification for its adoption. However, because this method is basic (and non-scalable), the authors should consider more sophisticated alternatives and compare the results.

In Figure 2, it is clear that the average transaction latency saved through the use of sharding is minimal. This metric should be one of the strong points of this research, but apparently it is not.

The text that refers to Figure 6 is not consistent with the figure: it shows that sharding incurs more disk usage than non-sharding.

It is never mentioned what the parameters of the Monte Carlo simulations are.

Figure 3 contradicts Figure 7.

Generally, the contradictory results and lack of transparency do not allow us to recommend the article for publication. When ready the authors should improve the article, focusing on the experimental section.

 

Comments on the Quality of English Language

The use of English language could be improved but overall the article was easy to follow.

Author Response

Dear Reviewers,

We sincerely appreciate the time and effort you dedicated to reviewing our manuscript. Your valuable feedback has greatly contributed to improving the quality and clarity of our work. Below, we provide detailed responses to each of your comments and describe the modifications made accordingly.

Response to Reviewer 1

  1. Weak Related Work Section

Comment 1: The related work section is weak. The reader should be provided with context regarding certain terms. For example, in the context of blockchain systems, what types of DoS attacks are possible, what would an attacker aim to exhaust, and how? An attempt to formalize the attacks in this study should be made. As described, the DDoS attack should be (and probably is) avoided by simply incorporating authentication in messages to prevent their spoofing.

Response:
[Blockchain networks are particularly vulnerable to Distributed Denial of Service (DDoS) attacks, which aim to interrupt transaction processing, overload network resources, and reduce system availability [15]. Unlike traditional centralized systems, where firewalls and intrusion detection systems can minimize these dangers, blockchain networks must protect against DDoS attacks while maintaining decentralization [16].

Several forms of DDoS attacks may affect blockchain networks. Volumetric attacks flood the network with transactions, saturating the transaction queue and increasing validation delays [17]. Protocol-based attacks, such as Eclipse attacks, exploit vulnerabilities in the consensus system by isolating a node from the main network and forcing it to engage only with compromised peers. Resource depletion attacks take advantage of computational limitations, exhausting CPU, memory, and storage resources and making it harder for legitimate users to interact with the blockchain [18]. Finally, Sybil-based DDoS attacks use multiple fake identities to manipulate voting-based consensus mechanisms, which poses a serious risk to Proof-of-Stake (PoS) blockchains [19].

DDoS attacks have a serious impact on blockchain performance, causing greater transaction latency, higher fees, network congestion, and decreased availability [20]. Addressing these vulnerabilities requires a combination of cryptographic, protocol-level, and architectural solutions. While authentication-based protections are widely employed in standard networks to prevent denial-of-service attacks, they do not provide adequate protection in blockchain systems. Public blockchains are intended to be open and permissionless, allowing any participant to publish transactions without prior verification. Unlike centralized networks, where access control lists and firewall rules can limit traffic, blockchain transactions must be authenticated through consensus processes rather than identity verification [21][22].

Furthermore, Sybil attacks render authentication-based systems unreliable. To circumvent authentication constraints, an attacker can create multiple fake identities, flooding the network with malicious transactions that appear to come from normal users [23]. In addition, economic strategies such as imposing transaction fees to discourage spam may not eliminate DDoS concerns, as attackers with sufficient financial resources can still congest the network, particularly in high-fee systems like Ethereum [24]. Given these limitations, authentication alone cannot effectively protect against large-scale DDoS attacks in decentralized systems. To counter such attacks, blockchain networks must employ adaptive architectural solutions that dynamically allocate resources and optimize transaction throughput.]

Thank you for pointing this out. We agree with this comment. We have significantly expanded the Related Work section to provide a more detailed discussion of Distributed Denial-of-Service (DDoS) attacks in blockchain systems. We now elaborate on the types of attacks possible, the resources targeted by attackers, and the consequences for blockchain networks. Additionally, we clarify why authentication mechanisms alone are insufficient for mitigating DDoS attacks.

  2. Inclusion of a Figure Illustrating Traditional Sharding
    • We have added a new figure (Figure 1) that visually explains how traditional sharding works. This provides a clearer understanding for readers unfamiliar with sharding mechanisms.
  3. Definition of Metrics Used in the Study

[The selection of performance measures is an important part of evaluating blockchain resilience under different settings. The success of sharding-based security methods cannot be evaluated without establishing measurable metrics of both network stability and transaction efficiency.

This study uses the following critical performance metrics:

  • Transaction Success Rate: This metric measures the percentage of legitimate transactions that are successfully processed. A high transaction success rate indicates resilience to congestion and DDoS attacks, whereas a low success rate reflects system inefficiencies and weaknesses [43].
  • Transaction Latency: This metric measures the time it takes for a transaction to be confirmed and is a direct indicator of blockchain scalability. Increased delay during attacks reveals inefficiencies in resource management and network congestion [44].
  • Resource Consumption (CPU, Memory, and Disk Usage): These metrics capture the computational cost associated with different blockchain topologies. High CPU and memory consumption indicate processing inefficiencies, whereas efficient resource allocation demonstrates the effectiveness of dynamic sharding in reducing attack-induced bottlenecks [45].

Together, these measures provide a comprehensive picture of blockchain resilience, showing how well sharded and non-sharded post-quantum blockchains perform in simulated DDoS attack scenarios. By combining these metrics with Monte Carlo-based probabilistic modeling, the study ensures a thorough examination of blockchain security and scalability.]
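
For illustration, a minimal sketch of how these metrics could be computed from simulation output; the record fields (confirmed, latency_ms, cpu, mem, disk) are hypothetical placeholders, not the actual data structures used in the study:

from statistics import mean

# Hypothetical simulation output: one record per submitted transaction.
transactions = [
    {"confirmed": True,  "latency_ms": 120.0},
    {"confirmed": True,  "latency_ms": 310.0},
    {"confirmed": False, "latency_ms": None},   # dropped under congestion
]
# Hypothetical per-second resource samples (percent CPU, MB RAM, MB disk).
resource_samples = [{"cpu": 42.0, "mem": 512.0, "disk": 2048.0},
                    {"cpu": 77.0, "mem": 530.0, "disk": 2050.0}]

confirmed = [t for t in transactions if t["confirmed"]]

# Transaction success rate: share of legitimate transactions confirmed.
success_rate = len(confirmed) / len(transactions)

# Transaction latency: mean confirmation time of successful transactions.
avg_latency_ms = mean(t["latency_ms"] for t in confirmed)

# Resource consumption: average utilization over the run.
avg_cpu = mean(s["cpu"] for s in resource_samples)

print(f"success rate = {success_rate:.2%}, "
      f"mean latency = {avg_latency_ms:.1f} ms, mean CPU = {avg_cpu:.1f}%")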

    • We now explicitly define and justify the key metrics used in our evaluation: transaction success rate, transaction latency, and resource consumption (CPU, memory, and disk usage). These definitions have been incorporated into the methodology section for clarity.
  4. Formatting Issues in Table 1 and Page 5
    • The formatting errors in Table 1, where words were broken incorrectly, have been corrected.
    • The inconsistent font style on page 5 and subsequent sections has been rectified to maintain uniformity throughout the document.
  5. Monte Carlo Simulation Explanation and Alternative Methods

[Probabilistic Modeling:

A Markov Chain model is used to simulate state transitions between different network conditions:

  • S₀ (Normal Operation): Shards operate optimally with balanced workloads.
  • S₁ (Mild Congestion): Transaction queues increase due to sudden transaction surges.
  • S₂ (Severe Congestion): Validation delays grow significantly, affecting throughput.
  • S₃ (Shard Overload): One or more shards become unresponsive due to excessive load.

Transition probabilities between these states are computed based on historical blockchain stress-test data, ensuring that the model reflects real-world system behavior.]
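
For illustration only, a minimal sketch of such a state-transition simulation; the transition matrix below uses placeholder probabilities rather than values fitted to the stress-test data:

import numpy as np

states = ["S0 Normal", "S1 Mild congestion", "S2 Severe congestion", "S3 Shard overload"]

# Placeholder transition matrix P[i][j] = Pr(next = j | current = i);
# in the study these would be estimated from historical stress-test data.
P = np.array([
    [0.90, 0.08, 0.02, 0.00],
    [0.30, 0.50, 0.15, 0.05],
    [0.05, 0.30, 0.50, 0.15],
    [0.02, 0.08, 0.30, 0.60],
])

rng = np.random.default_rng(42)
state = 0                      # start in normal operation
visits = np.zeros(len(states))
for _ in range(10_000):        # one step per simulated time slot
    visits[state] += 1
    state = rng.choice(len(states), p=P[state])

for name, share in zip(states, visits / visits.sum()):
    print(f"{name}: {share:.1%} of time")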

    • While Monte Carlo simulations serve as a robust probabilistic evaluation method, we acknowledge their limitations in scalability. We have now included a discussion on alternative approaches, such as Markov Chain Monte Carlo (MCMC), to provide a broader perspective on simulation methodologies.
  6. Clarification of Experimental Results

[While Figure 4 displays system behavior under intense adversarial stress, Figure 8 takes a broader view, measuring blockchain transaction performance in the absence of a direct attack using Monte Carlo simulations over 10,000 iterations. This simulation considers probabilistic fluctuations in network congestion, transaction arrival rates, and resource availability, providing insights into the blockchain's long-term stability under various operational scenarios. Unlike the DDoS stress test in Figure 4, in which the non-sharded blockchain experienced massive transaction failures, Figure 8 demonstrates that transactions continue to succeed over time when no external attack is occurring. However, some important observations emerge:

  • The cumulative number of successful transactions remains below 750 after 10,000 iterations, indicating that even without direct attacks, the non-sharded architecture struggles with scalability.
  • The linear growth pattern of successful transactions confirms that the non-sharded blockchain is highly constrained by its centralized validation process, making it less efficient in handling transaction load.
  • Compared to the expected performance of the sharded blockchain, the non-sharded system exhibits a significantly lower transaction success rate, reinforcing its susceptibility to congestion over time.

Thus, the results from Figure 8 do not contradict those of Figure 4 but rather complement them:

  • Figure 4 highlights the extreme vulnerability of the non-sharded blockchain under attack, where transaction failures dominate.
  • Figure 8 reveals that, even under normal conditions, the non-sharded blockchain remains inefficient, struggling to maintain high throughput due to its centralized structure.

These findings emphasize the fundamental scalability limitations of non-sharded blockchains and reinforce the necessity of dynamic sharding to enhance transaction success rates and network resilience.]

(These figures are now numbered Figure 4 and Figure 8.)

    • The discrepancy between Figure 3 and Figure 7 has been addressed by ensuring that the interpretation of results is consistent throughout the paper.
    • We have verified the description of Figure 6 to ensure it aligns with the actual findings, particularly regarding disk usage in sharded and non-sharded environments.
    • Additional explanations have been provided to clarify why sharding results in a minimal reduction in transaction latency while improving overall system scalability and fault tolerance.

Reviewer 2 Report

Comments and Suggestions for Authors

The main question addressed by the research is: "Can dynamic sharding combined with Monte Carlo simulations enhance the resilience and scalability of post-quantum blockchains against Distributed Denial of Service (DDoS) attacks?"
The study explores whether dynamic sharding can optimize resource allocation and improve transaction throughput under high-intensity attack scenarios, and how Monte Carlo simulations can be used to assess the performance, security, and scalability of these adaptive blockchain systems.

The topic is both original and highly relevant to the field of blockchain security and post-quantum cryptography. In terms of:

  • Relevance: The intersection of post-quantum cryptography and blockchain security is a critical area of research, especially given the growing threats posed by quantum computing to traditional cryptographic methods. By focusing on enhancing resilience against DDoS attacks, this study addresses a significant and timely challenge in blockchain security.
  • Originality and Gap Addressed: The research uniquely combines dynamic sharding with Monte Carlo simulations, which is a novel approach to studying blockchain resilience. It not only addresses the vulnerability of post-quantum blockchains to DDoS attacks but also explores adaptive sharding techniques for scalable transaction processing. This fills a gap in the literature where static sharding has been widely studied, but dynamic sharding in post-quantum contexts remains underexplored.

Compared with other published material this article adds the following contributions:

  1. Novel Integration of Techniques: It uniquely combines dynamic sharding with Monte Carlo simulations, offering a probabilistic and adaptive approach to blockchain resilience that has not been thoroughly explored in existing literature.
  2. Focus on Post-Quantum Resilience: It specifically addresses the resilience of post-quantum blockchains (using the Falcon signature scheme) to classical DDoS attacks, thus bridging the gap between quantum-resistant cryptography and traditional network security challenges.
  3. Comprehensive Performance Evaluation: The paper presents a thorough evaluation of sharded versus non-sharded architectures under simulated DDoS attacks and Monte Carlo scenarios, providing detailed insights into latency, transaction success rates, and resource utilization.
  4. Scalability and Efficiency Analysis: It demonstrates how dynamic sharding can enhance scalability and resource efficiency, particularly under high-stress network conditions, which is crucial for future quantum-resilient blockchain systems.

 

Regarding methodology, I suggest that the authors make several improvements to further enhance its rigor and reproducibility:

  1. More detailed simulation parameters: The paper could benefit from a more detailed description of the Monte Carlo simulation parameters, including the distribution types, iteration counts, and variability in attack patterns.
  2. Justification of metrics: While the metrics used (latency, transaction success rates, CPU, memory, and disk usage) are appropriate, the authors should provide a stronger justification for their selection and discuss their relevance to real-world blockchain deployments.
  3. Comparative analysis with other sharding techniques: Including a comparative analysis with other sharding techniques, such as static sharding or hybrid models, would provide a more comprehensive evaluation of dynamic sharding's effectiveness.
  4. Statistical significance testing: Incorporating statistical tests to assess the significance of performance differences between sharded and non-sharded architectures would strengthen the validity of the results.
  5. Reproducibility: Providing the source code or pseudocode for the Monte Carlo simulations and DDoS attack scenarios would enhance the reproducibility of the study.

The conclusions are consistent with the evidence and arguments presented in the paper. The conclusions logically follow the experimental findings, highlighting the superior performance of dynamic sharding in terms of transaction success rates, latency reduction, and resource efficiency under DDoS attacks. The paper effectively links the results to the main research question, demonstrating that dynamic sharding can indeed enhance the scalability and resilience of post-quantum blockchains. The conclusions also appropriately acknowledge the limitations of the study, including the reliance on simulation environments and the need for further research on inter-shard communication and synchronization challenges. The recommendations for future work, particularly on enhancing shard synchronization and exploring other quantum-resistant cryptographic schemes, are well-grounded in the identified challenges and limitations of the current study.

 

The references are appropriate and up-to-date, covering relevant studies from both the blockchain and post-quantum cryptography domains.

The tables and figures effectively illustrate the experimental results and enhance the overall presentation of the study. However, some improvements are also recommended to the authors:

  1. Clarity and readability: The figures, particularly those comparing resource utilization (CPU, memory, and disk usage), would benefit from clearer labeling, consistent color schemes, and higher resolution to improve readability.
  2. More informative graphs: Including confidence intervals or error bars on performance metrics would provide a better understanding of the variability and reliability of the results.
  3. Visual comparison: A side-by-side visual comparison of sharded versus non-sharded architectures using bar charts or box plots would make the performance differences more apparent.
  4. Consistency and labeling: Ensure consistent labeling and numbering of tables and figures for easier cross-referencing within the text.

 

The writing is generally clear and follows an academic tone suitable for a scientific journal.

Author Response

Dear Reviewers,

We sincerely appreciate the time and effort you dedicated to reviewing our manuscript. Your valuable feedback has greatly contributed to improving the quality and clarity of our work. Below, we provide detailed responses to each of your comments and describe the modifications made accordingly.


Response to Reviewer 2

  1. Detailed Description of Monte Carlo Simulation Parameters
    • [Monte Carlo Simulation Parameters

To assess the resilience and scalability of dynamic sharding in post-quantum blockchain systems under Distributed Denial-of-Service (DDoS) attacks, we employ Monte Carlo simulations to model various attack scenarios and system responses. To improve the analysis, we provide an in-depth examination of the Monte Carlo simulation parameters, ensuring reproducibility and clarity in the modeling of blockchain resilience. The simulation uses a variety of probability distributions to realistically model blockchain processes under various attack scenarios.

  1. Number of iterations:

The Monte Carlo simulation runs 10,000 iterations to reach statistical significance. This threshold was determined through empirical convergence analysis, which showed that additional iterations have no substantial impact on the projected results.

  2. Input parameter distributions:

 To effectively represent real-world blockchain dynamics, we employed the following probability distributions for important parameters:

  • Transaction Interarrival Time: In blockchain systems, transaction arrivals follow an exponential distribution with a rate parameter of λ = 0.05 transactions per millisecond, reflecting the bursty character of transaction flows. This distribution is used to accurately model both normal operating conditions and periods of congestion caused by DDoS attacks.
  • Attack Intensity (DDoS Load): Represented by a Poisson distribution with mean values μ = {50, 100, 500} requests per second, simulating different attack magnitudes.
  • Shard Workload Distribution: Assumed to follow a normal distribution (μ = 500 transactions, σ = 100), dynamically adjusted through load-balancing mechanisms.
  • Transaction Size: Modeled with a log-normal distribution (μ = 250 bytes, σ = 50), reflecting variations in transaction complexity.
  • Validator Processing Time: Defined using a gamma distribution (k = 2, θ = 30 ms), based on empirical benchmarks from blockchain validation processes.
  3. Experimental Conditions:

The simulation evaluates blockchain performance under multiple DDoS attack intensities and sharding configurations:

  • Baseline Scenario: Normal transaction volume, no attack.
  • Moderate Attack: 100 malicious transactions per second targeting random shards.
  • Severe Attack: 500 malicious transactions per second directed at a specific shard.
  • Adaptive Sharding Response: Dynamic sharding activated with real-time workload redistribution.

Each scenario is analyzed based on transaction success rates, network latency, and resource utilization.

  4. Probabilistic Modeling:

A Markov Chain model is used to simulate state transitions between different network conditions:

  • S₀ (Normal Operation): Shards operate optimally with balanced workloads.
  • S₁ (Mild Congestion): Transaction queues increase due to sudden transaction surges.
  • S₂ (Severe Congestion): Validation delays grow significantly, affecting throughput.
  • S₃ (Shard Overload): One or more shards become unresponsive due to excessive load.

Transition probabilities between these states are computed based on historical blockchain stress-test data, ensuring that the model reflects real-world system behavior.

  5. Transaction Failure Handling:

The simulation accounts for failed transactions due to:

  • Network congestion: Queue overflow probability p = 0.15 under attack conditions.
  • Shard overload: Probability p = 0.10 in worst-case scenarios where the shard exceeds its maximum capacity.
  • Timeout failures: Transactions exceeding 5x the normal processing time are considered lost.

Failed transactions are either retried (if failure is transient) or discarded if congestion remains high.

 

  6. Validation and Real-World Comparison:

To ensure the accuracy and relevance of the simulation results, they are compared with:

  • Empirical data from Ethereum 2.0 testnets, particularly sharding proposals.
  • Historical DDoS attack logs on blockchain networks, providing real-world insights into attack patterns and mitigations.
  • Theoretical models of blockchain scalability, validating the consistency of the Monte Carlo results with prior research.

The Monte Carlo simulation supplements DDoS testing by expanding the scope of study to include dynamic and probabilistic scenarios. Together, these techniques provide a comprehensive understanding of how sharding affects the performance, scalability, and resilience of post-quantum blockchains in real-world scenarios.

]
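
As an illustrative sketch (not the authors' simulation code), the input distributions listed above could be sampled with NumPy roughly as follows; the log-normal parameterization is one plausible reading of the stated mean and spread:

import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # Monte Carlo iterations, as stated above

# Transaction interarrival time: exponential, rate λ = 0.05 tx/ms → mean 20 ms.
interarrival_ms = rng.exponential(scale=1 / 0.05, size=N)

# DDoS attack intensity: Poisson with mean 50, 100, or 500 requests/s.
attack_load = rng.poisson(lam=100, size=N)          # moderate-attack scenario

# Shard workload: normal(μ = 500 tx, σ = 100), clipped at zero.
shard_load = np.clip(rng.normal(loc=500, scale=100, size=N), 0, None)

# Transaction size: log-normal chosen so the mean is ~250 bytes with spread ~50.
tx_size = rng.lognormal(mean=np.log(250), sigma=0.2, size=N)

# Validator processing time: gamma(k = 2, θ = 30 ms).
validation_ms = rng.gamma(shape=2, scale=30, size=N)

print(interarrival_ms.mean(), attack_load.mean(), validation_ms.mean())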

    • We have expanded the methodology section to provide an explicit breakdown of the Monte Carlo simulation parameters. This now includes details on the types of probability distributions used, the iteration counts, and variability in attack patterns.
  2. Justification of Selected Performance Metrics

The selection of performance measures is an important part of evaluating blockchain resilience under different settings. The success of sharding-based security methods cannot be evaluated without establishing measurable metrics of both network stability and transaction efficiency.

This study uses the following critical performance metrics:

  • Transaction Success Rate: This metric measures the percentage of legitimate transactions that are successfully processed. A high transaction success rate indicates resilience to congestion and DDoS attacks, whereas a low success rate reflects system inefficiencies and weaknesses [43].
  • Transaction Latency: This metric measures the time it takes for a transaction to be confirmed and is a direct indicator of blockchain scalability. Increased delay during attacks reveals inefficiencies in resource management and network congestion [44].
  • Resource Consumption (CPU, Memory, and Disk Usage): These metrics capture the computational cost associated with different blockchain topologies. High CPU and memory consumption indicate processing inefficiencies, whereas efficient resource allocation demonstrates the effectiveness of dynamic sharding in reducing attack-induced bottlenecks [45].

Together, these measures provide a comprehensive picture of blockchain resilience, showing how well sharded and non-sharded post-quantum blockchains perform in simulated DDoS attack scenarios. By combining these metrics with Monte Carlo-based probabilistic modeling, the study ensures a thorough examination of blockchain security and scalability.

 

    • We now provide a stronger justification for why we chose latency, transaction success rates, and resource consumption (CPU, memory, and disk usage) as our primary performance metrics. These metrics align with real-world blockchain challenges and are critical for evaluating resilience under adversarial conditions.
  3. Comparative Analysis with Other Sharding Techniques

Comparative Analysis of Sharding Strategies

While sharding improves blockchain performance, sharding solutions vary in efficiency, adaptability, and security. This study examines three main approaches to sharding: static, hybrid, and dynamic.

Table 2: Comparison of Static, Hybrid, and Dynamic Sharding

Criteria                   | Static Sharding | Hybrid Sharding    | Dynamic Sharding
Scalability                | High            | Moderate-High      | Very High
DDoS Resilience            | Low             | Moderate           | High
Resource Allocation        | Fixed           | Partially Adaptive | Fully Adaptive
Synchronization Overhead   | Low             | Moderate           | High
Implementation Complexity  | Low             | Moderate           | High

 

  • Static Sharding (e.g., Ethereum 2.0) pre-allocates transactions to fixed shards, which improves scalability but exposes individual shards to targeted DDoS attacks.
  • Hybrid Sharding introduces limited adaptability, which reduces the risk of targeted attacks but still incurs synchronization delays.
  • Dynamic Sharding constantly redistributes transactions and validator assignments, preventing adversaries from predicting shard allocations. While dynamic sharding improves security and performance, it requires more computational resources.

This study focuses on dynamic sharding because it can mitigate DDoS attacks while improving scalability and load balancing. The approach used for evaluating the resilience of dynamic sharding versus a non-sharded blockchain architecture under adversarial conditions is described in the sections that follow.
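
A minimal sketch of the assignment difference (hypothetical, not the implementation evaluated in the paper): static sharding maps each transaction to a fixed shard, whereas dynamic sharding routes it to the currently least-loaded shard.

import hashlib

N_SHARDS = 4
loads = [0] * N_SHARDS  # current pending-transaction count per shard

def static_shard(tx_id: str) -> int:
    """Static sharding: the shard is a fixed function of the transaction,
    so an attacker can predict and flood a single shard."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return digest[0] % N_SHARDS

def dynamic_shard(tx_id: str) -> int:
    """Dynamic sharding (simplified): route to the least-loaded shard,
    so load is rebalanced and targeted flooding is harder."""
    shard = min(range(N_SHARDS), key=lambda s: loads[s])
    loads[shard] += 1
    return shard

for tx in ("tx-1", "tx-2", "tx-3", "tx-4"):
    print(tx, "static →", static_shard(tx), "dynamic →", dynamic_shard(tx))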

 

    • A new comparative table has been introduced to contrast dynamic sharding with static and hybrid sharding techniques. This analysis demonstrates the advantages and trade-offs associated with each approach.
  4. Inclusion of Statistical Significance Testing
    Statistical Significance Tests:

To confirm the validity of our experimental findings, we applied statistical significance tests to determine whether the observed differences between the sharded and non-sharded blockchain designs are statistically significant. This section describes the methods used for hypothesis testing, the findings, and their implications for the study.

  • Methodology for Statistical Testing:

To determine if the reported improvements in the sharded blockchain are statistically significant, we conducted the following tests:

  • Shapiro-Wilk test for normality: performed to examine whether the performance metrics (transaction success rate, latency, and resource usage) follow a normal distribution.
  • Student's t-test: where the normality assumption held, we used an independent two-sample t-test to compare the means of the sharded and non-sharded blockchain designs.
  • Mann-Whitney U test: where normality could not be established, we used this non-parametric test to determine whether one distribution consistently produces greater values than the other.
  • Binomial exact test: given the large imbalance in successful transactions between the two architectures, we used an exact binomial test to determine whether the difference is statistically significant.
    • Results of Statistical Tests

The outcomes of our statistical significance tests are summarized below:

  • Transaction Success Rate: The binomial exact test showed a p-value < 0.00001, indicating a substantial increase in transaction success rates in the sharded architecture compared to the non-sharded version.
  • CPU Usage: The Shapiro-Wilk test showed that the CPU utilization data is not normally distributed (p < 0.0001). The sharded blockchain significantly reduced CPU utilization, as confirmed by the Mann-Whitney U test (p < 0.0001).
  • Memory Usage: Similarly, the Mann-Whitney U test showed that memory consumption is significantly lower in the sharded blockchain (p < 0.0001), suggesting more efficient resource allocation.
  • Disk Usage: The Shapiro-Wilk normality test showed that disk usage values in both the sharded and non-sharded architectures are constant (p = 1.000 for both distributions), indicating no variance in the dataset. Given this, neither the t-test nor the Mann-Whitney U test could be meaningfully applied. Although no statistical tests could be performed due to this lack of variation, the findings show that sharded architectures require additional disk space to support scalability and decentralized processing.
    • Interpretation and Implications:

The statistical analysis demonstrates that the sharded blockchain design performs substantially better than the non-sharded architecture in terms of transaction success rates, CPU efficiency, and memory consumption. These findings support the claim that dynamic sharding improves blockchain scalability and resilience in adversarial environments, notably against DDoS attacks.

Disk utilization was higher in absolute terms in the sharded design, but no meaningful statistical tests could be performed due to the dataset's lack of variation. This is an expected outcome of sharding, as each shard maintains a portion of the blockchain state independently, increasing total storage requirements. While this increases storage overhead, it has no negative effect on performance because disk access remains efficient across shards.

These statistical insights lend credibility to our experimental findings, demonstrating that the observed advantages in scalability and security are not the result of random fluctuations but are inherent to the dynamic sharding method.
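
For illustration, a sketch of how these tests could be run with SciPy on hypothetical arrays of per-run measurements (not the actual experimental data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-run CPU-usage samples for the two architectures.
sharded = rng.normal(35, 5, size=100)
non_sharded = rng.normal(60, 8, size=100)

# 1. Shapiro-Wilk test for normality of each sample.
p_norm_sharded = stats.shapiro(sharded).pvalue
p_norm_non = stats.shapiro(non_sharded).pvalue

# 2. If both samples look normal, compare means with an independent t-test;
#    otherwise fall back to the non-parametric Mann-Whitney U test.
if min(p_norm_sharded, p_norm_non) > 0.05:
    p_diff = stats.ttest_ind(sharded, non_sharded, equal_var=False).pvalue
else:
    p_diff = stats.mannwhitneyu(sharded, non_sharded, alternative="two-sided").pvalue

# 3. Exact binomial test on transaction successes (hypothetical counts):
#    e.g. 9,200 successes out of 10,000 against a 50% reference rate.
p_binom = stats.binomtest(9200, n=10000, p=0.5).pvalue

print(p_norm_sharded, p_norm_non, p_diff, p_binom)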

 

    • To enhance the validity of our findings, we have incorporated statistical tests (Shapiro-Wilk normality test, Student’s t-test, Mann-Whitney U test, and Binomial Exact test). These tests confirm that the performance improvements in dynamic sharding are statistically significant.
  5. Enhancing Reproducibility
Pseudocode 1: Dynamic Sharding Adjustment

    Input: transaction_load, shard_threshold, max_shards, congestion_level
    Output: updated shard allocations

    Initialize shard_count ← initial number of shards
    Monitor real-time transaction_load and congestion_level
    While system is running do:
        If transaction_load > shard_threshold AND shard_count < max_shards then:
            Increase shard_count by 1 (create new shard)
            Redistribute active transactions among all shards
        If congestion_level < predefined_safe_limit AND shard_count > 1 then:
            Merge least active shards to optimize resource utilization
        Update shard allocations and broadcast changes to nodes
        Repeat process periodically

 

Pseudocode 2: DDoS Attack Simulation

    Input: attack_duration, transaction_rate, blockchain_mode
    Output: transaction success/failure rate, resource utilization

    Initialize transaction_success ← 0
    Initialize transaction_failure ← 0
    Start attack_timer ← 0
    While attack_timer < attack_duration do:
        Generate incoming_transactions ← transaction_rate per second
        If blockchain_mode = "Sharded" then:
            Distribute transactions across shards
            Process transactions in parallel
        Else:
            Process all transactions in a single validation pool
        For each transaction in incoming_transactions:
            If network congestion or node overload then:
                transaction_failure ← transaction_failure + 1
            Else:
                transaction_success ← transaction_success + 1
        Record CPU, memory, and disk usage at this timestep
        Increment attack_timer
    Compute success_rate ← transaction_success / (transaction_success + transaction_failure)
    Return success_rate, resource_metrics

 


Pseudocode 3: Monte Carlo Transaction Simulation

    Input: total_iterations = 10000
    Output: success rate and failure rate of transactions

    Initialize transaction_success ← 0
    Initialize transaction_failure ← 0
    For i from 1 to total_iterations do:
        Generate transaction_load ∈ [low_load, high_load]
        If system_mode = "Sharded" then:
            Assign transactions to shards dynamically
            Execute transactions in parallel
        Else:
            Process all transactions in a single validation pool
        For each transaction:
            If network congestion or shard overload then:
                transaction_failure ← transaction_failure + 1
            Else:
                transaction_success ← transaction_success + 1
    Compute success_rate ← transaction_success / total_iterations

 

    • To facilitate reproducibility, we have included pseudocode for our dynamic sharding mechanism, DDoS attack scenarios, and Monte Carlo simulation. These additions provide greater transparency regarding our methodology.
  6. Improvements to Figures for Clarity
  • Thank you for your recommendation. We have enhanced the readability of the figures by improving their resolution and ensuring consistent labeling.

 

Conclusion

We appreciate the constructive feedback from both reviewers, which has led to substantial improvements in our manuscript. We believe that the revised version more clearly articulates the significance of our contributions and enhances the clarity, rigor, and reproducibility of our study.

Thank you for your valuable insights and for considering our revised submission.

Sincerely,
Hajar Dahhak, Nadia Afifi, and Imane Hilal

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The paper has been improved significantly but there are still several areas that require improvement or could benefit from some changes. 

- The discussion about DDoS attacks can be improved with the addition of figures describing the alternative attacks.

- A new font should be adopted when outlining all algorithms.

- Figure 3 should be converted into a bar chart.

- The difference between "Advanced Cumulative Success Analysis" and "Cumulative Transaction Success" should be explained, and a more comprehensive justification for the lowering of the cumulative success rate should be provided.

Author Response

Comment 1: The discussion about DDoS attacks can be improved with the addition of figures describing the alternative attacks.

We appreciate this valuable suggestion. In response, we have enhanced the discussion on DDoS attacks in the “Related Work” section by adding a new figure (Figure 2) that presents a taxonomy of alternative DDoS attack types targeting blockchain systems. This visual classification distinguishes between volumetric attacks, protocol-based attacks (e.g., Sybil, Eclipse), and resource-depletion attacks, thereby providing a clearer and more comprehensive understanding of the different vectors and their potential impact on blockchain infrastructures.

 

Comment 2: A new font should be adopted when outlining all algorithms.

Thank you for this valuable suggestion. In response, we have updated the formatting of all algorithmic blocks by adopting a monospaced font (Courier New), which is commonly used for presenting pseudocode and improves overall readability. Additionally, we ensured consistent indentation and alignment throughout the algorithms to enhance clarity.

Comment 3: Figure 3 should be converted into a bar chart.

Thank you for your suggestion. We agree that a bar chart provides a more appropriate visual comparison of average latencies between the sharded and non-sharded infrastructures. Accordingly, Figure 3 has been revised and replaced with a bar chart that clearly highlights the latency difference, with an adjusted Y-axis scale to enhance visual perception of the variation. The updated figure improves clarity and supports the analysis more effectively.

Comment 4: The difference between "Advanced Cumulative Success Analysis" and "Cumulative Transaction Success" and a more comprehensive justification regarding the lowering of the cumulative success rate should be provided.

Thank you for this insightful comment. We have clarified the conceptual distinction between the two metrics in Section 5.2.4. While Cumulative Transaction Success reflects the overall ratio of validated transactions throughout the experiment, the Advanced Cumulative Success Analysis offers a time-aware, fine-grained evaluation of how success rates evolve under dynamic network conditions, such as congestion or attack-induced overload. To address the second part of the comment, we have also added a detailed explanation of the observed decrease in cumulative success rates, particularly under DDoS conditions. The revised paragraph discusses how factors like transaction queue overflow, temporary node saturation, and processing delays contribute to reduced performance in the non-sharded infrastructure, highlighting the advantage of dynamic sharding in maintaining higher success rates.

The basic Cumulative Transaction Success measure reflects the overall ratio of successfully completed transactions, but it does not account for changes in network behavior over time. In contrast, the Advanced Cumulative Success Analysis provides a more detailed and dynamic evaluation by documenting how success rates change in response to shifting network loads and congestion conditions. This metric captures not just the final outcome, but also the system's stability and consistency throughout the simulation.

Several factors might explain the observed decrease in success rates, notably for the non-sharded architecture. These include temporary resource saturation, queue overflows, and longer transaction times owing to processing delays. Because the non-sharded approach routes all transactions through a single validation pool, it is more prone to performance deterioration as network demand increases. In contrast, the sharded design benefits from distributed processing, allowing it to sustain high throughput and consistent success rates even under pressure. This comparison underscores the need for scalable, adaptive infrastructures, such as dynamic sharding, to ensure long-term blockchain stability.
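
To make the distinction concrete, a small sketch with hypothetical per-interval success/failure counts (not data from the study): the cumulative metric aggregates everything seen so far, while the time-aware view tracks the success rate within each interval, exposing degradation during congestion.

# Hypothetical (success, failure) counts per time interval; congestion hits
# around intervals 4-6 and drags the per-interval rate down.
per_interval = [(95, 5), (94, 6), (90, 10), (60, 40), (40, 60), (55, 45), (85, 15)]

cum_success = cum_total = 0
for i, (ok, fail) in enumerate(per_interval, start=1):
    cum_success += ok
    cum_total += ok + fail

    # Cumulative Transaction Success: overall ratio since the start of the run.
    cumulative_rate = cum_success / cum_total

    # Advanced (time-aware) view: success rate within this interval only.
    interval_rate = ok / (ok + fail)

    print(f"t={i}  interval={interval_rate:.0%}  cumulative={cumulative_rate:.0%}")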
