Adaptive Ensemble Machine Learning Framework for Proactive Blockchain Security
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This paper presents significant innovations in proactive blockchain security through the application of machine learning, while also highlighting critical areas for improvement, particularly in terms of empirical validation and the exploration of advanced security protocols. Further research is essential to transition from theoretical frameworks to practical, resilient solutions in real-world blockchain implementations.
Strong Points:
- This paper introduces a proactive defense mechanism against blockchain attacks, addressing the gap in existing literature that primarily focuses on detection rather than prevention.
- The authors leverage machine learning techniques, specifically supervised models such as Random Forest and Gradient Boosting, which have demonstrated high accuracy in anomaly detection. This novel integration enhances the capability of blockchain systems to respond to evolving threats.
- By comparing both class-balanced and imbalanced datasets, the research contributes valuable insights into how dataset representation affects the performance of anomaly detection models.
- This work emphasizes the need for validation in real-world blockchain systems, advocating for a shift from simulated environments to practical applications, which is crucial for the technology's adoption.
Weakness Points:
- The findings are based on simulated datasets, which may not accurately represent the complexities and dynamic conditions of real blockchain environments. This limitation could affect the generalizability of the results.
- While the paper discusses the potential for real-world application, it does not provide sufficient empirical data from actual blockchain networks to validate the proposed framework, leaving a critical gap in its applicability.
- The study’s hyperparameter searches were constrained for practicality, suggesting that broader exploration could yield better performance and robustness in the models.
- As blockchain technology evolves, there is a growing necessity to consider quantum computing threats. The paper lacks a thorough discussion on how to incorporate quantum-resistant strategies within the proposed framework.
Author Response
Response to Reviewer 1’s Comment
Strong Points:
Comments 1: This paper introduces a proactive defense mechanism against blockchain attacks, addressing the gap in existing literature that primarily focuses on detection rather than prevention.
Response 1: Thank you for your positive feedback.
Comments 2: The authors leverage machine learning techniques, specifically supervised models such as Random Forest and Gradient Boosting, which have demonstrated high accuracy in anomaly detection. This novel integration enhances the capability of blockchain systems to respond to evolving threats.
Response 2: Thank you for your constructive feedback.
Comments 3: By comparing both class-balanced and imbalanced datasets, the research contributes valuable insights into how dataset representation affects the performance of anomaly detection models.
Response 3: Thank you for your positive comment.
Comments 4: This work emphasizes the need for validation in real-world blockchain systems, advocating for a shift from simulated environments to practical applications, which is crucial for the technology's adoption.
Response 4: We appreciate this acknowledgement of our work.
Weakness Points:
Comments 1: The findings are based on simulated datasets, which may not accurately represent the complexities and dynamic conditions of real blockchain environments. This limitation could affect the generalizability of the results.
Response 1: Thank you for pointing this out. We added an explicit statement in the Abstract and Conclusions sections. The Abstract now states: “The results are based on simulated environment, and should be considered as preliminary until the experiment is performed in real blockchain environment.” (Abstract, page 1, paragraph 1, lines 27-29). In the Conclusions, we noted real-world complexity and the need for validation: “This study is limited by the exclusive use of simulated dataset A and B,…. we consider all performance to be based on simulation and subject to validation in real blockchain environment.” (Conclusions, page 20, paragraph 2, lines 582-588).
Comments 2: While the paper discusses the potential for real-world application, it does not provide sufficient empirical data from actual blockchain networks to validate the proposed framework, leaving a critical gap in its applicability.
Response 2: Thank you for your suggestion. We added a specific future-work plan in the Conclusions covering our operational-node plan and variance-sensitivity evaluation: “In future research, we plan to reuse the 10x5 CV methodologies with similar seeds on real blockchain data, use an operational node to gather transaction and block level data collection, and evaluate variance sensitivity under controlled disturbance such as noise injections and class prior changes.” (Conclusions, page 20, paragraph 2, lines 589-593).
Comments 3: The study’s hyperparameter searches were constrained for practicality, suggesting that broader exploration could yield better performance and robustness in the models.
Response 3: Thank you for your valuable feedback. We note in the Conclusions that our hyperparameter searches were bounded for practicality; the actual parameters are summarized in Table 2 (ML Models and Hyperparameter) (Methodology, page 7, line 241). The Conclusions state that “hyperparameter searches were bounded for practicality, and broader sweeps or Auto ML could yield further improvements.” (Conclusions, page 20, paragraph 2, lines 588-589).
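For context, the kind of broader sweep envisaged could look like the following (a minimal sketch assuming scikit-learn; the grid and data are illustrative placeholders, not the bounded search reported in Table 2):

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Illustrative stand-in data (9 features, as in the study's corrected feature count).
X, y = make_classification(n_samples=500, n_features=9, random_state=1)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=1),
    param_distributions={"n_estimators": randint(100, 1000), "max_depth": randint(3, 30)},
    n_iter=50, scoring="f1", cv=5, random_state=1,  # wider than a bounded manual grid
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```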
Comments 4: As blockchain technology evolves, there is a growing necessity to consider quantum computing threats. The paper lacks a thorough discussion on how to incorporate quantum-resistant strategies within the proposed framework.
Response 4: Thank you for your valuable feedback. We have added a concise integration sentence and pointed to where quantum-resilient controls fit without changing the ML pipeline. In the Discussion/Conclusions, we now state: “future enhancement will combine detection with quantum safeguard such as adoption adaptive confirmation polices and post quantum signature schemes.” (Conclusions, page 20, paragraph 3, lines 602-604). In the Literature Review, we also indicate compatibility, stating: “Using quantum resilient methods…will be compatible with our framework without changing the learning and anomaly model’s pipeline.” (Literature Review, page 4, paragraph 1, lines 149-152). These additions clarify where protocol-level quantum defenses attach (the protocol/consensus layer) while leaving the learning pipeline intact.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
This article proposes an "Adaptive Ensemble ML Framework for Proactive Blockchain Security" and compares models (RF, GB, LR, KNN, OCSVM, IF, autoencoder) on two simulated datasets (A, B), in balanced/imbalanced variants, with 10x5-fold cross-validation. The authors declare that oversampling was used only in the training datasets, and report metrics (F1, Accuracy, AD%) with per-attack cross-sections and threshold sensitivity analysis for unsupervised models. Additionally, they present an Adaptive Security Response layer, which is intended to combine detection with defensive actions and, according to the tables, significantly increase F1 (e.g., RF from 0.498 to ≈0.961).
It is an interesting direction (combining ML detection with response). However, empirical evidence is based solely on simulated data, and the ASR description does not allow for distinguishing the "impact of detection" from the "impact of actions changing data/test." In its current form, the novelty is declarative.
There are numerical and methodological inconsistencies, including extreme results in the base tables and ambiguous validation of the ASR module. Some figures appear clichéd/very abbreviated; descriptions are missing or difficult to read. References are generally up-to-date, but lack verifiable references to specific attack variants (e.g., BBEDSA) and proprietary tools/data.
- Inconsistencies in the baseline results
- Table 4 (Imbalanced) displays outliers (Acc/Prec/Rec/F1 = 1.00 for most models; 0.00 for LR/KNN), which strongly suggest an error in split/metric preparation or reporting. It is inconsistent with the subsequent, stable CV results of ≈0.90 F1. Please diagnose and correct (including providing scripts generating Tables 3-4).
- In the CV section, the results appear reasonable (RF/GB/LR F1 = 0.90 ± 0.06), which is in stark contrast to the "baseline" and the "ensemble without ASR ≈0.54" estimate. Explain why the baseline/ensemble values are so low, given that the same models in CV achieve ≈0.90 (are these different sets, different features, a different split?). Standardise the pipeline and clearly describe the exact set used to generate Tables 13-14.
- ASR does not change the ground truth. If classification metrics increase after "enabling ASR," it means that: (a) the set/test-labelling has changed, (b) you are calculating metrics after interventions (which changes the event distribution), or (c) the metrics are not comparable. Authors need to define precisely what was measured. In the current description, the jump in RF F1 from ≈0.498 to ≈0.961 is implausible without details and control on the same test. Propose a separate "policy effect" metric (e.g., loss/risk reduction) and leave the classic F1 before the reaction.
- The number of classes per attack (A and B, before/after SMOTE/ADASYN), the exact set sizes, and the splitting rules (folds/seeds), as well as the list of features (7?) and their scale, are missing. The text mentions "batch 181x7" and "7 features"; this should be consolidated and described in the Materials and Methods section. The authors write that SMOTE/ADASYN was only used for training (correct), but please clearly show the flow (diagram + pseudocode) and random seed dumps.
- Data Availability: "Data is available upon request" is insufficient. MDPI requires a specific statement with a link to the repository (Zenodo/OSF/figshare/GitHub, with a link to Zenodo) or a clear justification for the restrictions. Please provide A/B sets (or simulation generators/configurations, seed files, scripts) to meet data policies and MDPI guidelines.
- Authors declare paired tests (t-test/Wilcoxon) and 95% CIs, but there are no full tables of values/statistics at the model/attack level (although there are partial tables, which are inconsistent). I request: (i) CI for key differences (balanced vs. imbalanced), (ii) multiple control (e.g., Holm), (iii) publication of curves (PR/ROC) and decision threshold analyses for the supra models.
- The text contains placeholders and generic captions ("Figure 1. Research Model") without details, as well as minor inconsistencies in numbering/descriptions. Please complete/verify the completeness of figures (1-6) and their legends, and correct the citation order.
- Please clarify the definitions and sources of variants (e.g., BBEDSA, "Denial-of-Chain"), standardise names ("Eclispse" → "Eclipse"), and provide the primary works for the taxonomy/implementation. References to [17], [18], and [25] appear in several places - check the consistency of the bibliography and cited pages.
Minor:
- Section 3.7 - Computing Environment: Typo "6 BG VRAM" → "6 GB VRAM"; ensure that the library versions and the exact Python 3.13.7 are actually used and reproducible (`requirements.txt`, `pip freeze` files).
- Sections should be organised as Abstract, Keywords, Introduction, Materials and Methods, Results, Discussion, Conclusions (without "Section II/III" with IEEE notation).
- Numerous minor errors ("Eclispse", "transforms", capitalisation after a period, subject-verb agreement: "This study address" → "addresses"). Please provide complete proofreading. Examples appear in the literature and methodology review.
- AD% has a definition similar to TPR for the "anomaly" class - it is worth clearly indicating which class is positive in individual experiments and standardising the vocabulary (anomaly vs. attack).
- It is worth moving the "Computational Efficiency" section to the Supplement and adding variance and data scales (number of samples/features) to provide context for the time readings.
- Add full legends (what kind of data, what scales, how many folds/seeds), ensure zoomability (vector PDF/SVG), and add n/counts in the tables.
- Strongly emphasise the limitations of "simulation-only" and outline the real-world validation (ablation: without Finney/with Finney; admixture of network noise; drift).
- Add a public link to the data/generator + code (e.g., Zenodo/OSF; code repo + DOI). "On request" does not meet MDPI practices.
- Formal sections are included; however, please ensure full compliance with COPE and MDPI policies.
Author Response
Comments 1: Table 4 (Imbalanced) displays outliers (Acc/Prec/Rec/F1 = 1.00 for most models; 0.00 for LR/KNN), which strongly suggest an error in split/metric preparation or reporting. It is inconsistent with the subsequent, stable CV results of ≈0.90 F1. Please diagnose and correct (including providing scripts generating Tables 3-4).
Response 1: Thank you for your valuable feedback. The revised manuscript already explains that the single-split analysis is retained only as a descriptive baseline performed prior to the 10×5 CV study, and that CV is used for all findings (page 9, paragraph 4, lines 324–325). It also explicitly states that “Table 4’s perfect/imperfect scores indicate imbalance-driven overfitting, which motivates the cross-validated analysis” (Table 4 caption, page 10). Scripts and data are publicly available with a reproduction readme (Data Availability, page 21).
Comments 2: In the CV section, the results appear reasonable (RF/GB/LR F1 = 0.90 ± 0.06), which is in stark contrast to the "baseline" and the "ensemble without ASR ≈0.54" estimate. Explain why the baseline/ensemble values are so low, given that the same models in CV achieve ≈0.90 (are these different sets, different features, a different split?). Standardise the pipeline and clearly describe the exact set used to generate Tables 13-14.
Response 2: Thank you for your observational feedback. The baseline/ensemble values come from the single-split baseline in 4.1, not from the 10×5 CV protocol, so they are lower. The ≈0.90 F1 figures are from 10×5 CV (averaging folds and seeds, with balancing applied to training folds only). To standardize and avoid confusion, we clarify near Tables 13-14 that “the same single-split baseline test fold is used there, and we report ASR outcomes separately from pre-intervention metrics on that same set” (page 16, paragraph 1, lines 465-468).
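For illustration, the leakage-safe 10×5 protocol can be sketched as follows (a minimal sketch assuming scikit-learn and imbalanced-learn; the dataset and parameters are illustrative placeholders, not our exact configuration):

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # sampler steps run only during fit, i.e., on training folds
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Illustrative imbalanced data standing in for dataset A/B (9 features).
X, y = make_classification(n_samples=500, n_features=9, weights=[0.9, 0.1], random_state=42)

# Placing SMOTE inside the pipeline guarantees the held-out fold is never resampled.
pipe = Pipeline([("smote", SMOTE(random_state=42)),
                 ("clf", RandomForestClassifier(random_state=42))])

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=42)  # the 10x5 protocol
scores = cross_val_score(pipe, X, y, scoring="f1", cv=cv)
print(f"F1 = {scores.mean():.2f} ± {scores.std():.2f}")  # averaged over all 50 fold-repeat runs
```

Because the pipeline refits (and resamples) per fold, the reported mean averages 50 runs, whereas the Section 4.1 baseline is a single fixed partition.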
Comments 3: ASR does not change the ground truth. If classification metrics increase after "enabling ASR," it means that: (a) the set/test-labelling has changed, (b) you are calculating metrics after interventions (which changes the event distribution), or (c) the metrics are not comparable. Authors need to define precisely what was measured. In the current description, the jump in RF F1 from ≈0.498 to ≈0.961 is implausible without details and control on the same test. Propose a separate "policy effect" metric (e.g., loss/risk reduction) and leave the classic F1 before the reaction.
Response 3: Thank you for your constructive feedback. In the manuscript, we keep classic F1 fixed on the original test fold and report ASR as a separate policy effect on the same split. The Table 14 caption states that “The same single baseline configuration outlined in section 4.1, is used for evaluating the ensemble and ASR improved models, as seen in Table 13 and 14” (page 15, lines 448-449), and page 16 (paragraph 1, lines 465-468) states that “ASR results are not directly equivalent to the other classification metrics obtained on the original test fold. To evaluate the effect of the intervention on subsequent improvement, we present ASR effects separately from the previous metrics, evaluated on the same split test used for Table 13-14”. Hence, the RF jump from 0.498 to 0.961 is the ASR policy effect, not pre-intervention F1.
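The separation of the two measurements can be illustrated with a short sketch (the unit-cost loss model is a hypothetical placeholder, not our implementation; only the separation of the two quantities is the point):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=9, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
pred = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)

# 1) Classic detection quality: F1 on the untouched test fold, before any intervention.
f1_pre = f1_score(y_te, pred, pos_label=1)  # 1 = attack/anomaly class

# 2) Separate "policy effect" on the same split: fraction of attack cost averted when
#    flagged transactions are isolated (unit cost per attack is a hypothetical stand-in).
averted = np.sum((y_te == 1) & (pred == 1))   # true attacks the policy would isolate
policy_effect = averted / np.sum(y_te == 1)   # share of total attack loss averted
print(f"pre-intervention F1 = {f1_pre:.3f}, policy effect = {policy_effect:.3f}")
```

With unit costs the policy effect coincides with recall; the point is only that it is reported separately from, and never replaces, the pre-intervention F1.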
Comments 4: The number of classes per attack (A and B, before/after SMOTE/ADASYN), the exact set sizes, and the splitting rules (folds/seeds), as well as the list of features (7?) and their scale, are missing. The text mentions "batch 181x7" and "7 features"; this should be consolidated and described in the Materials and Methods section. The authors write that SMOTE/ADASYN was only used for training (correct), but please clearly show the flow (diagram + pseudocode) and random seed dumps.
Response 4: Thank you for your structured feedback. We have corrected a minor inconsistency (7 → 9 features). Figures 2–3 show the end-to-end flow. Exact per-attack counts, files, and runnable scripts are in the repository linked in the Data Availability statement. We will address this feedback more fully in our follow-up study using real, live blockchain data.
Comments 5: Data Availability: "Data is available upon request" is insufficient. MDPI requires a specific statement with a link to the repository (Zenodo/OSF/figshare/GitHub, with a link to Zenodo) or a clear justification for the restrictions. Please provide A/B sets (or simulation generators/configurations, seed files, scripts) to meet data policies and MDPI guidelines.
Response 5: Thank you for your feedback. Data availability has been addressed. We provide a public repository link with a readme for reproduction (Data Availability, page 21).
Comments 6: Authors declare paired tests (t-test/Wilcoxon) and 95% CIs, but there are no full tables of values/statistics at the model/attack level (although there are partial tables, which are inconsistent). I request: (i) CI for key differences (balanced vs. imbalanced), (ii) multiple control (e.g., Holm), (iii) publication of curves (PR/ROC) and decision threshold analyses for the supra models.
Response 6: Thank you for your valuable feedback. We already report means ± SD, 95% CIs, and paired tests. To keep this study and revision focused, we will include Holm-adjusted multiple comparisons and publish full PR/ROC curves with decision-threshold analyses for the supra models in the future practical study using real blockchain data.
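For reference, the planned Holm-adjusted comparison could follow a pattern like this (a sketch assuming scipy and statsmodels; the per-fold F1 arrays below are synthetic placeholders, not our CV outputs):

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Placeholder per-fold F1 scores (50 = 10 folds x 5 repeats), balanced vs. imbalanced training.
scores = {m: (rng.normal(0.90, 0.03, 50), rng.normal(0.85, 0.05, 50)) for m in ("RF", "GB", "LR")}

pvals = [wilcoxon(bal, imb).pvalue for bal, imb in scores.values()]     # paired test per model
reject, p_holm, _, _ = multipletests(pvals, alpha=0.05, method="holm")  # family-wise control
for model, p, sig in zip(scores, p_holm, reject):
    print(f"{model}: Holm-adjusted p = {p:.4f}, significant = {sig}")
```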
Comments 7: The text contains placeholders and generic captions ("Figure 1. Research Model") without details, as well as minor inconsistencies in numbering/descriptions. Please complete/verify the completeness of figures (1-6) and their legends, and correct the citation order.
Response 7: Thank you for your valuable feedback. We have renamed all necessary figure captions and verified that the figure numbering is in the correct order, that figure details are described in the text, and that the in-text citation order is correct.
Comments 8: Please clarify the definitions and sources of variants (e.g., BBEDSA, "Denial-of-Chain"), standardise names ("Eclispse"--Eclipse), and provide the primary works for the taxonomy/implementation. References to [17], [18], and [25] appear in several places - check the consistency of the bibliography and cited pages.
Response 8: Thank you for your observational feedback. Naming is standardized to “Eclipse,” and the primary sources for each attack are cited where the attacks are introduced and compared. Specifically, the Literature Review cites the Denial of Chain attack [17] and the Black Bird 51% hashrate attack [18] in context (page 3, paragraph 1, lines 90-99), and the comparison table later cross-references Black Bird and BBEDSA (BAR mechanism) with [18] and [25], respectively (page 19, paragraph 1, lines 558-559). We also checked that in-text uses of [17], [18], and [25] are consistent with the bibliography.
Minor
Comments 1: Section 3.7 - Computing Environment: Typo "6 BG VRAM" - "6 GB VRAM"; ensure that the library versions and the exact Python 3.13.7 are actually used and reproducible (`requirements.txt`, `pip freeze` files).
Response 1: Thank you for the valuable feedback. The typo has been corrected (Computer Configuration, page 9, paragraph 2, lines 310–315). We will also deposit all necessary files and scripts in the repository referenced in the Data Availability Statement for exact reproducibility.
Comments 2: Sections should be organised as Abstract, Keywords, Introduction, Materials and Methods, Results, Discussion, Conclusions (without "Section II/III" with IEEE notation).
Response 2: Thank you for your constructive feedback. The sections have been reorganised as recommended, and the Roman-numeral section labels ("Section II/III", etc.) have been removed.
Comments 3: Numerous minor errors ("Eclispse", "transforms", capitalisation after a period, subject-verb agreement: "This study address" - "addresses"). Please provide complete proofreading. Examples appear in the literature and methodology review.
Response 3: Thank you for your corrections. We have fixed typos and grammar. Examples: “tranforms” → “transforms” (page 7, paragraph 2, line 257), “deploy ability” → “deployability” (page 14, paragraph 1, line 408), plus capitalization and subject-verb corrections in the literature review and methodology sections.
Comments 4: AD% has a definition similar to TPR for the "anomaly" class - it is worth clearly indicating which class is positive in individual experiments and standardising the vocabulary (anomaly vs. attack).
Response 4: Thank you for your observational feedback. We now indicate explicitly which class is positive in each experiment and have standardised the anomaly/attack vocabulary accordingly (page 8, lines 297 and 303).
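Concretely, since AD% behaves like recall (TPR) on the positive class, fixing the positive label removes the ambiguity; an illustrative snippet assuming scikit-learn (not the manuscript's code):

```python
from sklearn.metrics import recall_score

y_true = [0, 1, 1, 0, 1]  # 1 = anomaly/attack, the positive class in all experiments
y_pred = [0, 1, 0, 0, 1]
ad_pct = 100 * recall_score(y_true, y_pred, pos_label=1)  # AD% == TPR for the anomaly class
print(f"AD% = {ad_pct:.1f}")  # 66.7
```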
Comments 5: It is worth moving the "Computational Efficiency" section to the Supplement and adding variance and data scales (number of samples/features) to provide context for the time readings.
Response 5: Thank you for your suggestion. To keep this revision focused, we retain a short efficiency summary in the Results and already provide the data-scale context there (n = 15, 9 features), with timing dispersion noted (page 14, paragraph 1, line 409). A fuller breakdown will be part of the practical follow-up study on live blockchain data, rather than expanding the present manuscript.
Comments 6: Add full legends (what kind of data, what scales, how many folds/seeds), ensure zoomability (vector PDF/SVG), and add n/counts in the tables.
Response 6: Thank you for your feedback. Figures 1–6 now have complete legends, and in-text statements provide all necessary details. Vector PDFs are supplied in the submission package so captions remain readable when zoomed.
Comments 7: Strongly emphasise the limitations of "simulation-only" and outline the real-world validation (ablation: without Finney/with Finney; admixture of network noise; drift).
Response 7: Thank you for your valuable feedback. The Abstract and Conclusions now strongly emphasize the simulation-only limitation and outline the real-data validation plan. The Abstract now states: “The results are based on simulated environment, and should be considered as preliminary until the experiment is performed in real blockchain environment.” (Abstract, page 1, paragraph 1, lines 27-29). In the Conclusions, we noted real-world complexity and the need for validation: “This study is limited by the exclusive use of simulated dataset A and B,…. we consider all performance to be based on simulation and subject to validation in real blockchain environment.” (Conclusions, page 20, paragraph 2, lines 582-588).
Comments 8: Add a public link to the data/generator + code (e.g., Zenodo/OSF; code repo + DOI). "On request" does not meet MDPI practices.
Response 8: Thank you for your feedback. A public repository link (data/generator, seeds, scripts, README) is provided in the Data Availability statement (Data Availability, page 21).
Comments 9: Formal sections are included; however, please ensure full compliance with COPE and MDPI policies.
Response 9: Thank you for your recommendation. All necessary formal sections are present and aligned with MDPI guidance (Data Availability, Conflicts of Interest, Author Contributions). We confirm adherence to COPE/MDPI policies.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
The article can be accepted.
Comments for author File: Comments.pdf
Author Response
Positive Aspects:
Comments 1: The authors substantiate the inherent security of blockchain technology, which faces a growing number of attacks, necessitating the development of integrated, proactive defense mechanisms.
Response 1: Thank you for your encouraging feedback.
Comments 2: The authors' work aims to fill a gap in existing research, which often focuses on isolated detection methods and lacks validation in real-world conditions.
Response 2: We appreciate your valuable feedback.
Comments 3: The research is based on simulating attacks on a blockchain system and subsequently analyzing the resulting data using various machine learning models.
Response 3: Thank you for your feedback.
Comments 4: The authors created two independent datasets (A and B), simulating different types of attacks, including DoC, Black Bird, Sybil, Eclipse, and Finney. This allowed them not only to train models but also to verify their generalizability on new data.
Response 4: We appreciate this acknowledgement of our work.
Comments 5: The authors justify that for anomaly detection, both supervised (Random Forest, Gradient Boosting, Logistic Regression, K-Nearest Neighbors) and unsupervised (One-Class SVM, Isolation Forest, Autoencoder) models were used.
Response 5: Thank you for your encouraging feedback.
Comments 6: The experiment presented in the article was designed to compare the performance of the models on two types of data: balanced (using SMOTE/ADASYN) and imbalanced.
Response 6: Thank you for your feedback.
Comments 7: The authors of the article also integrated the Adaptive Security Response (ASR) framework, which links anomaly predictions with proactive measures, such as transaction isolation.
Response 7: Thank you for your structured feedback.
Comments 8: The article presents the use of two independent datasets (A and B), which confirmed that models trained on one set generalize well to the other, demonstrating resilience to changing conditions.
Response 8: Thank you for your valuable feedback.
Negative Aspects:
Comments 1: The authors conclude that their integrated approach can be effective. However, the article lacks detailed information on comparisons with other similar studies and a discussion of the limitations of the methods used.
Response 1: Thank you for pointing this out. We provide a direct comparison to prior work in Table 15 and the surrounding narrative, and we already discuss methodological limits. Comparisons appear in the Discussion: “We compare our study in relation to prior studies on blockchain attacks (Table 15). While earlier studies often focused on a single attack type, proposed protocol-level defences with no ML benchmark” (pages 18-20, lines 550-560).
Comments 2: It would be beneficial to test the framework on real-world data, not just simulated data.
Response 2: Thank you for your feedback. We added to the Conclusions a plan for validation on real, live blockchain data, reusing the 10×5 CV methodology with similar seeds, stating that “In future research, we plan to reuse the 10x5 CV methodologies with similar seeds on real blockchain data, use an operational node to gather transaction and block level data collection, and evaluate variance sensitivity under controlled disturbance such as noise injections and class prior changes” (page 20, paragraph 2, lines 589-593).
Comments 3: More details are needed on how exactly the ASR framework integrates with the models and how it implements countermeasures.
Response 3: Thank you for your feedback. The paper already outlines the ASR actions and their evaluation in the Results, with Tables 13-14 and Figure 6 (pages 15-16, lines 436-481). To clarify the integration, we also describe the linkage of ASR to the models (page 7, paragraph 2, lines 251-258). See Table 13 (ensembles), Table 14 (models and ASR), Figure 6 (with vs. without ASR), and the sentence stating that ASR results are not directly equivalent to pre-intervention classification metrics (page 16, paragraph 1, lines 465-468).
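To make the linkage concrete, the decision logic can be sketched as follows (a hedged illustration; the thresholds and action names are hypothetical placeholders, not the exact countermeasures in the manuscript):

```python
def asr_respond(tx_features, model, threshold=0.8):
    """Map a model's anomaly score to a graded ASR countermeasure (illustrative only)."""
    score = model.predict_proba([tx_features])[0, 1]  # probability of the attack class
    if score >= threshold:
        return "isolate_transaction"          # proactive containment of the suspect transaction
    if score >= threshold / 2:
        return "require_extra_confirmations"  # softer, reversible mitigation
    return "allow"
```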
Comments 4: Given that the research topic is related to security, it would have been worthwhile to include a discussion of the potential ethical risks associated with using such systems.
Response 4: Thank you for pointing this out. For this manuscript, we defer the ethics analysis until we evaluate the framework under practical conditions with real, live blockchain data. To state this, we added a clarifying sentence, “assess and evaluate ethical risk under practical conditions using real live blockchain data”, to the Conclusions (Conclusions, page 20, paragraph 3, lines 609-610).
Comments 5: The article does not specify the categories of attacks that affect the system’s implementation.
Response 5: Thank you for your feedback. The Methodology section (Blockchain Attacks Used) lists the attack categories: “consensus-level (Black Bird variants and DoC), network-level (Sybil, Eclipse), and a temporal anomaly (Finney)” (page 4, paragraph 3, lines 169-170).
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The paper presents a relevant contribution to the field of blockchain security, but some structural and content improvements could enhance its clarity and impact.
1. Ensure all technical terms are defined when first introduced.
2. The literature review should more clearly delineate the gaps in current research.
3. When discussing the methodologies, consider providing a flowchart that visually represents the research process.
4. Ensure that all references follow a consistent citation style as per the journal's guidelines.
Author Response
Comments 1: Ensure all technical terms are defined when first introduced.
Response 1: Thank you for your observation. We have revised the manuscript to define all necessary technical terms at their first appearance, spelling out each abbreviation in full once and using the short form thereafter to avoid cluttering the manuscript. For example, “Synthetic Minority Oversampling Technique (SMOTE)” (Abstract, page 1, paragraph 2, line 23) and “Initial Coin Offerings (ICOs)” (Literature Review, page 2, paragraph 2, line 85).
Comments 2: The literature review should more clearly delineate the gaps in current research.
Response 2: Thank you for your suggestion. We have added a sentence that more clearly delineates the gaps in current research: “As a result of the comprehensive literature review, prior research shows the lack of preventive frameworks validation in real world blockchain, evaluations are limited to single attack variants, preventing generalization across blockchain attack vectors, and emerging threats such as post quantum attacks doesn’t have much attention.” (Literature Review, page 4, paragraph 1, lines 146-149).
Comments 3: When discussing the methodologies, consider providing a flowchart that visually represents the research process.
Response 3: Thank you for your suggestion. We have updated Figure 1 to present a clearer flowchart of the research methodology, from data simulation to model evaluation and comparative analysis; this addition makes the methodology more transparent and easier to follow (Methodology, page 4, paragraph 3, lines 161-166).
Comments 4: Ensure that all references follow a consistent citation style as per the journal's guidelines.
Response 4: Thank you for highlighting this. We have carefully revised all references to match the MDPI reference and citation guidelines (References, page 22, lines 633-700).