Article
Peer-Review Record

Generative AI-Driven Smart Contract Optimization for Secure and Scalable Smart City Services

Smart Cities 2025, 8(4), 118; https://doi.org/10.3390/smartcities8040118
by Sameer Misbah 1, Muhammad Farrukh Shahid 1,*, Shahbaz Siddiqui 1,*, Tariq Jamil S. Khanzada 2,3, Rehab Bahaaddin Ashari 2, Zahid Ullah 4 and Mona Jamjoom 5
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 14 May 2025 / Revised: 4 July 2025 / Accepted: 6 July 2025 / Published: 16 July 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This timely and relevant article presents a compelling study that proposes and evaluates the performance benefits of using generative AI to optimize smart contracts within smart city environments that leverage blockchain technology for enhanced security. The study compares two distinct blockchain systems and their respective smart contract programming languages.

However, despite its comprehensive nature, the manuscript requires significant organizational and presentation revisions. The article is lengthy and contains a substantial amount of information and results, making it difficult to follow. While the English is generally acceptable, careful revision would improve clarity and correctness.

Some aspects to address in the revised version are as follows:

  • The proposed optimization solutions should be evaluated in the context of an existing smart contract–based architecture for smart city service interoperation. Throughout the article, references to this system are often ambiguous (e.g., “existing proposed system” or “framework”). The authors should clearly define the system and refer to it consistently as the “existing system,” “existing framework,” “base system,” or “base framework.” Likewise, the system that incorporates the proposed and evaluated solutions should be referred to consistently using terms such as "improved system," "optimized system," or "enhanced system."
  • Most tables and figures appear several pages ahead of their discussion, often before they are introduced in the text. This disrupts the flow and significantly hampers readability.
  • The section structure requires careful revision. Currently, there are 12 main sections, making the organization appear fragmented. Several sections should be reformulated as subsections of broader categories. For instance, Section 6 should become 5.1, Section 7 should become 5.2, and Section 7.1 should become 5.2.1. Section 8 should become Section 6; Section 9 should become Section 6.3; and the current Section 11 should become the final subsection of the newly structured Section 6 (i.e., Section 6.4). Additionally, some section titles should be improved for clarity and coherence.
  • In several instances, enumerated lists (e.g., bulleted lists) are presented without an introductory sentence ending in a colon (":"). This must be corrected.
  • When listing phases or ordered elements, use numbered lists instead of bullet points.
  • Some parts of the text contain confusing or poorly constructed sentences that need to be rephrased for clarity and grammatical accuracy.
  • Acronyms should be spelled out the first time they appear and used consistently thereafter.
  • There are recurring issues with missing whitespace after punctuation marks (e.g., periods, commas, colons, and semicolons) that should be addressed.
  • T5 and Codex are mentioned in Table 1 and should be briefly explained within the text to ensure clarity for readers unfamiliar with them.
  • The section titled "Related Review" should be renamed "Related Work" to align with standard academic terminology.

I look forward to reviewing a new version of this article with significant improvements to its organization and presentation.

Author Response

Original Manuscript ID: smartcities-3672081

 

Original Article Title: “Generative AI Driven Smart Contract Optimization for Secure and Scalable Smart City Services”

 

To: The Editor of Smart Cities (MDPI)

Subject: Response to reviewers’ comments

 

Dear Editor,


We are sincerely grateful to you and the reviewers for your thoughtful and constructive feedback on our manuscript titled “Generative AI Driven Smart Contract Optimization for Secure and Scalable Smart City Services” (Manuscript ID: smartcities-3672081). We have carefully revised the manuscript in accordance with the comments provided.



Please find below our point-by-point responses to each of the reviewer’s comments. We have also submitted the following documents:
(a) a detailed point-by-point response to the reviewers’ comments,
(b) a revised manuscript with track changes,
(c) a clean version of the updated manuscript.


We hope that the revised version of our manuscript meets the expectations of the reviewers and the editorial team. Thank you for your time and consideration.

Best regards,
Sameer et al

Response to Reviewer Comments

 

 

Reviewer-1 Point 1: The proposed optimization solutions should be evaluated in the context of an existing smart contract–based architecture for smart city service interoperation. Throughout the article, references to this system are often ambiguous (e.g., “existing proposed system” or “framework”). The authors should clearly define the system and refer to it consistently as the “existing system,” “existing framework,” “base system,” or “base framework.” Likewise, the system that incorporates the proposed and evaluated solutions should be referred to consistently using terms such as "improved system," "optimized system," or "enhanced system."

 

Author Response: We sincerely thank the reviewer for this valuable and insightful comment. We agree that consistent terminology is essential for improving the clarity and coherence of the manuscript, especially when discussing different versions of the system architecture.

 

Author Action: To address this, we have revised the manuscript throughout to ensure uniform usage of terminology:

  • The term “existing system” is now consistently used to refer to the original smart contract–based architecture for smart city service interoperation (i.e., the base architecture described before applying our optimization methods).
  • The term “optimized system” is now used consistently to describe the architecture after applying the proposed generative AI–based smart contract optimization strategies.

We carefully reviewed the manuscript and replaced all previously ambiguous references such as “existing proposed system,” “framework,” or “architecture” with the appropriate term—either existing system or optimized system—based on context. These changes improve clarity and align the structure of the paper with standard academic expectations.

The updated terminology is now reflected consistently in the abstract, methodology, results, and discussion sections.

 

Reviewer-1 Point 2: Most tables and figures appear several pages ahead of their discussion, often before they are introduced in the text. This disrupts the flow and significantly hampers readability.

 

Author Response: We thank the reviewer for highlighting this important issue regarding the placement of tables and figures. We fully agree that tables and figures should appear as close as possible to their first reference in the text to maintain narrative flow and improve readability.

Author Action: In response to this comment, we have carefully revised the manuscript to ensure that all tables and figures now appear immediately following their introduction or discussion in the text. We adjusted the LaTeX figure placement commands to enforce proximity to relevant content and verified the layout across all sections. These changes significantly improve the logical flow of the paper and align with standard academic publishing practices.

 

Reviewer-1 Point 3: The section structure requires careful revision. Currently, there are 12 main sections, making the organization appear fragmented. Several sections should be reformulated as subsections of broader categories. For instance, Section 6 should become 5.1, Section 7 should become 5.2, and Section 7.1 should become 5.2.1. Section 8 should become Section 6; Section 9 should become Section 6.3; and the current Section 11 should become the final subsection of the newly structured Section 6 (i.e., Section 6.4). Additionally, some section titles should be improved for clarity and coherence.

 

Author Response: We sincerely thank the reviewer for their constructive feedback on the manuscript’s structure. We agree that the previous version of the section hierarchy could appear fragmented and may hinder the logical progression of the content.

 

Author Action: In response, we have carefully reviewed the entire section structure of the manuscript and reorganized it to ensure a more coherent and logical hierarchy. Several sections have been restructured into appropriate subsections, and titles have been revised to improve clarity and thematic consistency. These changes enhance the overall flow and readability of the manuscript.

 

Reviewer-1 Point 4: In several instances, enumerated lists (e.g., bulleted lists) are presented without an introductory sentence ending in a colon (":"). This must be corrected. When listing phases or ordered elements, use numbered lists instead of bullet points. Some parts of the text contain confusing or poorly constructed sentences that need to be rephrased for clarity and grammatical accuracy. Acronyms should be spelled out the first time they appear and used consistently thereafter. There are recurring issues with missing whitespace after punctuation marks (e.g., periods, commas, colons, and semicolons) that should be addressed.

 

Author Response: We thank the reviewer for pointing out these important editorial and formatting issues. We have thoroughly reviewed the manuscript and made the necessary corrections to ensure clarity and consistency.

 

Author Action: Specifically, we have: (1) added introductory sentences with colons before all bulleted lists; (2) converted unordered lists to numbered lists where sequential phases or steps were described; (3) rephrased unclear or grammatically incorrect sentences for better readability; (4) ensured that all acronyms are spelled out upon first use and used consistently thereafter; and (5) corrected spacing issues following punctuation marks throughout the document. These revisions enhance the professionalism and readability of the manuscript.

 

 

Reviewer-1 Point 5: T5 and Codex are mentioned in Table 1 and should be briefly explained within the text to ensure clarity for readers unfamiliar with them. The section titled "Related Review" should be renamed "Related Work" to align with standard academic terminology.

 

Author Response: We thank the reviewer for this helpful suggestion. In response:

 

Author Action: We have updated the manuscript to include brief descriptions of both T5 and Codex in Section 2, highlighted in red. Additionally, as shown in Figure 1 below, we have renamed the section title from "Related Review" to "Related Work" to reflect standard academic conventions and improve clarity.

Figure 1

 

 

 

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

This paper mainly explores how to optimize smart contracts through generative artificial intelligence (such as GPT-2, GPT-3, GPT-4) to solve the performance and security problems caused by the complexity of smart contracts in the interoperability services of smart cities. But the following problems still exist:

1. The paper proposes to combine generative AI with blockchain to optimize smart contracts, but the existing literature has already touched upon the integration of AI and blockchain. Please clarify the core differences between this research and existing technologies, such as the uniqueness in model selection (why focus on the GPT series rather than other generative models like Codex), optimization objectives (such as the limitations of only optimizing for loops and variables), or application scenarios (interoperability in smart cities).

2. The paper proposes to use GPT-2/3/4 to optimize the smart contract code in order to reduce redundant computations and loop complexity. However, in the dataset construction section, the determination of optimized labels relies on the static analysis results of external tools. How can we ensure that the “redundancy” identified by these tools will inevitably lead to performance overhead in the blockchain execution environment? Has it been verified that the optimized code indeed reduces gas consumption or execution time in the actual blockchain environment? If the code that the tool determines as “optimized” does not show performance improvement on the real chain, will it affect the effectiveness of generative model training?

3. The paper emphasizes the enhanced security of the optimized smart contract but does not mention whether the security of the generated code is verified through formal verification or vulnerability detection tools. So how do you ensure that the code generated by GPT is free of common vulnerabilities such as reentrancy attacks and overflows?

4. The paper does not disclose the dataset and the complete experimental code, nor does it explain the detailed parameters for fine-tuning the GPT model. How is domain expertise in smart contracts handled (e.g., whether domain-expert annotations or industry-standard contracts are introduced as training data)? Please supplement these details to enhance reproducibility.

5. The experiment compared private chains and public chains. However, the performance advantage of private chains may stem from their centralized nature, while smart cities usually need to balance decentralization and performance. Please discuss whether the applicability of this optimization method would be affected by the consensus mechanism in an environment with a higher degree of decentralization.

6. In terms of performance indicators, the paper uses the BLEU score, throughput, execution time, etc. But how applicable is the BLEU score to code optimization? Are there other more relevant indicators, such as code complexity and gas consumption?

Author Response

Original Manuscript ID: smartcities-3672081

 

Original Article Title: “Generative AI Driven Smart Contract Optimization for Secure and Scalable Smart City Services”

 

To: The Editor of Smart Cities (MDPI)

Subject: Response to reviewers’ comments

 

Dear Editor,


We are sincerely grateful to you and the reviewers for your thoughtful and constructive feedback on our manuscript titled “Generative AI Driven Smart Contract Optimization for Secure and Scalable Smart City Services” (Manuscript ID: smartcities-3672081). We have carefully revised the manuscript in accordance with the comments provided.



Please find below our point-by-point responses to each of the reviewer’s comments. We have also submitted the following documents:
(a) a detailed point-by-point response to the reviewers’ comments,
(b) a revised manuscript with track changes,
(c) a clean version of the updated manuscript.


We hope that the revised version of our manuscript meets the expectations of the reviewers and the editorial team. Thank you for your time and consideration.

Best regards,
Sameer et al

Response to Reviewer Comments

 

Reviewer-2 Point 1: The paper proposes to combine generative AI with blockchain to optimize smart contracts, but the existing literature has already touched upon the integration of AI and blockchain. Please clarify the core differences between this research and existing technologies, such as the uniqueness in model selection (why focus on the GPT series rather than other generative models like Codex), optimization objectives (such as the limitations of only optimizing for loops and variables), or application scenarios (interoperability in smart cities).

Author Response: We are grateful to the reviewer for this insightful observation. In response, we have expanded the discussion in the manuscript to clearly highlight the novel contributions and how our approach differs from existing literature:

  • Model Selection Justification: We have clarified why we focused on the GPT series (GPT-2, GPT-3, GPT-4) instead of Codex or other models. Specifically, our choice is motivated by the availability of open-source fine-tuning pipelines, model interpretability, and better token-to-logic alignment when generating structurally valid and context-aware smart contract code, which is essential for regulatory interoperability in smart cities. This explanation is now included in the revised manuscript.
  • Optimization Objectives: While many AI-based smart contract optimizations exist, we focused on loop unrolling, variable reuse, and redundant operation minimization because these are empirically shown to have a direct and measurable impact on gas efficiency and execution latency. We have acknowledged the scope of these objectives and discussed the potential to expand optimization targets (e.g., storage access, branching logic) in future work; a minimal before-and-after sketch of the kind of transformation targeted is shown after this list.
  • Application Novelty: Our work uniquely applies these AI-optimized smart contracts within a decentralized, interoperable smart city architecture using a dual-chain Multichain system. Existing work does not couple AI-driven optimization with policy-enforced interoperability in a blockchain system tailored to municipal services. We have emphasized this application-level uniqueness more explicitly in the revised Introduction and Discussion sections.
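The following is a hypothetical before-and-after pair, written in Python (one of the two contract languages considered in the paper), illustrating the kind of transformation these optimization objectives target. It is an illustrative sketch, not a sample drawn from the paper's dataset.

    # Hypothetical illustration of the stated optimization targets: loop
    # simplification, variable reuse, and redundant-operation removal.

    # Non-optimized variant: index-based loop, a repeatedly recomputed bound,
    # and a temporary variable that is used only once.
    def total_consumption_unoptimized(readings):
        total = 0
        for i in range(0, len(readings)):
            value = readings[i]
            scaled = value * 1000
            total = total + scaled
        return total

    # Optimized variant: a single pass with no index arithmetic or one-shot
    # temporaries, i.e., the form an optimized sample would take.
    def total_consumption_optimized(readings):
        return sum(r * 1000 for r in readings)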

Reviewer-2 Point 2: The paper proposes to use GPT-2/3/4 to optimize the smart contract code in order to reduce redundant computations and loop complexity. However, in the dataset construction section, the determination of optimized labels relies on the static analysis results of external tools. How can we ensure that the “redundancy” identified by these tools will inevitably lead to performance overhead in the blockchain execution environment? Has it been verified that the optimized code indeed reduces gas consumption or execution time in the actual blockchain environment? If the code that the tool determines as “optimized” does not show performance improvement on the real chain, will it affect the effectiveness of generative model training?

 

 

Author Response: We sincerely thank the reviewer for raising this important point regarding the validity of optimization labels and their correlation with real-world performance improvements. This concern is both valid and critical for the reliability of training generative models in blockchain environments.

To address this, we have carefully revised the dataset construction section to explicitly explain how we validated the impact of labeled “optimized” and “non-optimized” smart contract code. Specifically:

  • We now clarify that the static analysis tools (e.g., PyLint for Python and Slither for Solidity) were used as initial filters, but not as the sole basis for labeling.
  • We conducted empirical validation of a representative subset of the labeled dataset by deploying the contracts on a private Ethereum testnet.
  • Only code variants that showed statistically significant improvements in execution efficiency were retained as “optimized” examples in the dataset used for model training.

Author Action: We have rewritten the dataset construction section to clearly incorporate these details and address the reviewer’s concern. The revised explanation highlights how we:

  • Dataset pipelining for optimizing smart contracts with generative AI;
  • Validation of the optimizations through deployment;
  • Fine-tuning of the GPT variants with the synthetic dataset;
  • Discussion of the results of the trained models for both the Python and Solidity cases.

These changes are clearly marked in the revised manuscript and highlighted in blue for easy identification, as shown in Figure 2. A minimal sketch of the filter-then-validate labeling flow described above is given below.
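The sketch below is illustrative only: the redundancy marker strings, the assumed 5% gain threshold, and the helper names are assumptions for exposition, not the exact tools or statistical criteria used in the study.

    # Sketch of the two-stage labeling flow: (1) a static-analysis filter
    # proposes candidates, (2) testnet measurements confirm the label.
    def static_filter(findings: list[str]) -> bool:
        """Stage 1: treat a contract variant as a *candidate* 'optimized'
        sample only if the static analyser (e.g., PyLint or Slither) reports
        no redundancy-related findings for it."""
        redundancy_markers = ("unused-variable", "duplicate-code", "redundant")
        return not any(m in f for f in findings for m in redundancy_markers)

    def empirically_improved(gas_before: list[int], gas_after: list[int],
                             min_gain: float = 0.05) -> bool:
        """Stage 2: keep the 'optimized' label only if the mean gas cost
        measured on the private testnet drops by at least min_gain (5% here,
        an assumed threshold)."""
        mean_before = sum(gas_before) / len(gas_before)
        mean_after = sum(gas_after) / len(gas_after)
        return (mean_before - mean_after) / mean_before >= min_gain

    # Example: a clean candidate that also saves ~12% gas on-chain is retained.
    keep = static_filter([]) and empirically_improved([83000, 81000], [72000, 71500])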

Reviewer-2 Point 3: The paper emphasizes the enhanced security of the optimized smart contract but does not mention whether the security of the generated code is verified through formal verification or vulnerability detection tools. So how do you ensure that the code generated by GPT is free of common vulnerabilities such as reentrancy attacks and overflows?

Author Response:

We thank the reviewer for this important observation. Ensuring the security of AI-generated smart contract code is indeed a critical aspect. In the current version of our work, we acknowledge that comprehensive formal security verification was not performed. However, we did verify post-optimization performance in terms of throughput, as shown in Table 27.

This decision was made intentionally, as the base system we built upon did not incorporate formal vulnerability analysis, and our primary motivation was to explore and demonstrate the potential of integrating generative AI (GPT-2/3/4) for smart contract optimization, particularly with respect to performance and code efficiency.

However, we fully recognize the importance of static security code analysis, especially to detect vulnerabilities such as reentrancy attacks, overflows, and improper access control. We have explicitly mentioned in the updated manuscript that:

  • Our future work will focus on integrating static security analysis and formal verification frameworks (e.g., Slither, Mythril, Scribble) into the GenAI optimization pipeline.
  • This will allow us to validate security properties post-generation and potentially guide the model to avoid unsafe patterns during training.

These additions are now clearly discussed in the revised version of the manuscript and marked for clarity.

Author Action: We have updated the Conclusion section to highlight the above points; the additions are marked in blue in the updated manuscript, as shown in the image below.

 

 

 

Reviewer-2 Point 4: The paper does not disclose the dataset and the complete experimental code, nor does it explain the detailed parameters for fine-tuning the GPT model. How is domain expertise in smart contracts handled (e.g., whether domain-expert annotations or industry-standard contracts are introduced as training data)? Please supplement these details to enhance reproducibility.

Author Response: We sincerely thank the reviewer for highlighting these essential points related to reproducibility and domain alignment. We fully agree that transparency in dataset, code, and fine-tuning parameters is critical to the rigor and reusability of research in smart contract optimization.

Author Action: To address this, we have taken the following actions:

  • We have publicly released the full experimental code, including both the base system and our modified framework used for GPT-based smart contract optimization. The repositories are accessible at the links below:
    • https://github.com/Shahbazdefender/-Smart-Contract-based-Security-Architecture-For-Collaborative-Municipal-System2
    • Modified GPT-Based Optimization Framework
  • We used the CodeOcean reproducibility engine to generate a reproducibility badge, confirming that the experimental setup can be independently replicated.
  • Base code: https://codeocean.com/capsule/1678096/tree
  • Code of optimization: https://github.com/Sameer18-Dev/generative-ai-driven-smart-contract-optimization-for-secure-and-scalable-smart-city-services
  • The fine-tuning parameters for GPT-2/3/4 (including learning rate, batch size, optimizer, epochs, and tokenization methods) are now detailed in a dedicated subsection within the revised manuscript; an illustrative configuration sketch is shown after this list.
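The sketch below illustrates such a fine-tuning setup for GPT-2 in the Hugging Face Transformers style. The hyperparameter values, the serialization of (non-optimized, optimized) pairs, and the output directory name are assumptions for illustration only; the actual values are those reported in the manuscript's dedicated subsection.

    # Illustrative fine-tuning configuration (assumed values, not the reported ones).
    from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token      # GPT-2 has no pad token by default
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # One toy (non-optimized -> optimized) pair serialized as a single
    # causal-LM training sequence, purely for illustration.
    pairs = ["### INPUT:\nfor i in range(0, len(x)): s = s + x[i]\n"
             "### OUTPUT:\ns = sum(x)"]
    enc = tokenizer(pairs, truncation=True, padding=True)
    train_dataset = [{k: v[0] for k, v in enc.items()}]

    args = TrainingArguments(
        output_dir="gpt2-contract-optimizer",      # illustrative name
        num_train_epochs=3,                        # assumed
        per_device_train_batch_size=4,             # assumed
        learning_rate=5e-5,                        # assumed
        weight_decay=0.01,                         # assumed
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()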

Reviewer-2 Point 5: The experiment compared private chains and public chains. However, the performance advantage of private chains may stem from their centralized nature, while smart cities usually need to balance decentralization and performance. Please discuss whether the applicability of this optimization method would be affected by the consensus mechanism in an environment with a higher degree of decentralization.

Author Response: We sincerely thank the reviewer for this thoughtful and relevant observation. We fully acknowledge that the observed performance gains in private blockchain environments are influenced by their relatively centralized consensus architecture. In contrast, smart cities often require a delicate balance between decentralization (for trust and transparency) and performance (for real-time service delivery).

Author Action: To address this important concern, we have added a new dedicated section titled "Discussion on Customized Consensus Mechanism Performance", placed before the Conclusion. An image of this section is attached below.

 

Reviewer-2 Point 6: In terms of performance indicators, the paper uses the BLEU score, throughput, execution time, etc. But how applicable is the BLEU score to code optimization? Are there other more relevant indicators, such as code complexity and gas consumption?

Author Response: We thank the reviewer for this valuable observation. In our work, we used the BLEU score as a learning-driven similarity index, which serves as an effective attribute to evaluate how well the generative model (GPT-2/3/4) has captured the structural and syntactic features of optimized smart contracts during training.

Our intention was not to use BLEU as a direct measure of runtime efficiency, but rather as an indirect indicator of learning alignment, ensuring that the generated code closely mirrors reference-optimized samples in terms of token sequences and transformation patterns. This is especially relevant in fine-tuning scenarios where the objective is to teach the model to replicate desirable optimizations (e.g., loop unrolling, redundant variable removal).
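As a concrete illustration of this usage, the snippet below computes a sentence-level BLEU score between a generated contract fragment and its reference-optimized counterpart using NLTK. Whitespace tokenization and the smoothing choice are simplifying assumptions, not the paper's exact evaluation setup.

    # BLEU as a learning-alignment proxy: token-level similarity between the
    # generated code and the reference-optimized code (not a runtime metric).
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = "total = sum(r * 1000 for r in readings)".split()
    generated = "total = sum(x * 1000 for x in readings)".split()

    score = sentence_bleu([reference], generated,
                          smoothing_function=SmoothingFunction().method1)
    print(f"BLEU (alignment with the reference optimization): {score:.3f}")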

 

 

 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

First, I note that I did not have access to a clean version of the revised manuscript. I only had access to the version with track changes. My previous questions were only marginally addressed. Overall, the paper still needs significant improvement in terms of its structure, presentation, and writing. Despite a significant effort to improve it, most of the previously mentioned issues still persist.

Lack of white space:

  • Line 149: "[19,20].Figure-3"
  • Line 160: "Data Integrity [21]:Within"
  • Line 185: "interactions.Operating" 
  • Line 311: "The Table-1below highlights", which should be "Table-1 highlights".
  • Etc.

  Placement of figures and tables:

  • Page 6: Figure 4 appears before it is mentioned in the text and before the first item of a numbered list. It must be moved to after the list.
  • Issues with tables appearing ahead of their mention in the text remain (e.g., Tables 2 and 3). The same applies to figures, such as Figure 5.
  • Etc.

  Presentation and writing issues:

  • Line 264 is incorrectly indented and must be shifted to the left. The same goes for the paragraph about Codex starting on line 303.
  • Line 308: There is a numbered list with a single item (base model of Codex).
  • Line 365: "High gas fees one of the burning issues is concerning high gas fees associated with blockchain transactions" is an example of an incorrect and confusing sentence.
  • Some sentences lack final punctuation.
  • The list that starts on line 543 has no introductory sentence.
  • Etc.

  The structure must be improved further.

  • I think there are still too many main sections.
  • Sections "2.4. Generative Artificial Intelligence" and "5. Generative AI for Smart Contract Optimization" should be combined into one section (i.e., 2.4). Both sections are about background on the same topic.
  • On page 22, the results are presented with both a table and a graphic, which is redundant. The authors must choose one or the other. The same applies to other instances, such as tables 19 and 20 and figures 15 and 16.
  • The caption of Table 7 is incorrect because the results are about the Solidity case.
  • Etc.

Author Response

Author Response and Action

Reviewer Comment:
First, I note that I did not have access to a clean version of the revised manuscript. I only had access to the version with track changes. My previous questions were only marginally addressed. Overall, the paper still needs significant improvement in terms of its structure, presentation, and writing. Despite a significant effort to improve it, most of the previously mentioned issues still persist.

Author Response:

We sincerely thank the reviewer for their detailed and constructive feedback, which has been invaluable in improving the quality of our manuscript. We appreciate the time and effort spent in reviewing our work and fully acknowledge the issues raised regarding formatting, presentation, and structural consistency.

In response to your comments, we have carefully reviewed the entire manuscript and incorporated all the mentioned corrections. Specifically:

  • Spacing and punctuation issues such as missing white space after periods and incorrect inline formatting (e.g., "[19,20].Figure-3", "Data Integrity [21]:Within") have been corrected throughout the document.
  • All figures and tables have been appropriately repositioned to appear after their first mention in the text to ensure logical flow and readability.
  • Indentation inconsistencies and formatting anomalies (e.g., lines 264 and 303) have been fixed.
  • We have rephrased unclear or awkward sentences, including the one on high gas fees, which now reads: "High gas fees remain one of the critical challenges associated with blockchain transactions, affecting scalability and user adoption."
  • Lists with single items have been reformatted, and introductory sentences have been added to all itemized lists.
  • Structural improvements have been made by merging Sections 2.4 and 5 into a unified discussion on Generative AI, removing redundancy.
  • Redundant visualizations (e.g., both table and figure for the same result) have been consolidated to avoid repetition.
  • Captions have been corrected, such as for Table 7, which now accurately reflects the Solidity case results.

Furthermore, we have now submitted both a clean version and a tracked-changes version of the revised manuscript to facilitate your review.

We hope that these comprehensive revisions meet your expectations and enhance the clarity and quality of our work. Thank you once again for your valuable insights.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

I consider that the authors have properly addressed my feedback. The models, analyses, and findings are intriguing. Thus, I suggest that this paper be published.

Author Response

No comments received from the reviewer. 

Round 3

Reviewer 1 Report

Comments and Suggestions for Authors

After having reviewed the second revised version of the manuscript, I am satisfied with the changes that have been made. The authors have incorporated most of the suggestions, and I believe the paper is now almost ready for publication. There are still a few minor issues with the presentation and writing that can easily be addressed during the final editing process. For example:

  • On line 299, the sentence “1. Codex (Base Model): This version is fine-tuned from GPT-3 using a large corpus of public source code.' It supports a variety of programming languages, including Python, JavaScript, and Solidity, and powers tools such as GitHub Copilot” should not be presented as a numbered list. The numbering should be removed.
  • Tables 6 and 7: the captions for these tables should be placed above them.

I believe the manuscript will be ready for final publication once these and similar adjustments have been made.

Comments on the Quality of English Language

There are still a few minor issues with the presentation and writing that can easily be addressed during the final editing process. For example:

  • On line 299, the sentence “1. Codex (Base Model): This version is fine-tuned from GPT-3 using a large corpus of public source code.' It supports a variety of programming languages, including Python, JavaScript, and Solidity, and powers tools such as GitHub Copilot” should not be presented as a numbered list. The numbering should be removed.
  • Tables 6 and 7: the captions for these tables should be placed above them.

Author Response

Response to Reviewer Comments

 

 

Reviewer-1 Point1: On line 299, the sentence “1. Codex (Base Model): This version is fine-tuned from GPT-3 using a large corpus of public source code.' It supports a variety of programming languages, including Python, JavaScript, and Solidity, and powers tools such as GitHub Copilot” should not be presented as a numbered list. The numbering should be removed.

 

Author Response: We thank the reviewer for pointing this out. We agree that presenting Codex as a numbered item is inconsistent, given that it does not have multiple variants like the other models discussed. To address this, we have removed the numbering and integrated the description of Codex into a single paragraph to improve consistency and clarity in Section 2.4.

 

Author Action: Revised the description of Codex in Section 2.4 by removing the numbered list format and merging it into a single descriptive paragraph.

 

In previous revision:

In current revision:

 

Reviewer-1 Point2: Tables 6 and 7: the captions for these tables should be placed above them.

 

Author Response: We appreciate the reviewer’s observation. We have updated the placement of the captions for Tables 6 and 7 to appear above the tables, in accordance with standard formatting guidelines. Furthermore, we have ensured this formatting is applied consistently across all tables in the manuscript, including those not initially highlighted.

 

Author Action: In response to this comment, we have carefully revised the manuscript to ensure that the captions for Tables 6 and 7 appear above the tables. We also reviewed and updated all other tables in the manuscript to ensure consistent caption placement above each table.

 

In previous revision:

In current revision:

 

In addition to incorporating the highlighted points, we have also corrected grammatical errors, fixed missing spaces and periods, and resolved spacing and indentation issues to improve the overall formatting and readability of the document.

 

 

Author Response File: Author Response.pdf
