Article
Peer-Review Record

Creating Automated Microsoft Bicep Application Infrastructure from GitHub in the Azure Cloud

by Vladislav Manolov, Daniela Gotseva and Nikolay Hinov *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Future Internet 2025, 17(8), 359; https://doi.org/10.3390/fi17080359
Submission received: 16 July 2025 / Revised: 2 August 2025 / Accepted: 5 August 2025 / Published: 7 August 2025
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities, 2nd Edition)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This study presents an automated infrastructure deployment solution leveraging Azure Bicep and GitHub Actions. The following concerns must be addressed before it can be considered for publication.

  1. Section 1 (Page 3): While architectural diagrams and parameter files have been presented, there is a lack of visual illustrations regarding the specific execution results. It is recommended to include screenshots of GitHub Actions workflow runs, thereby enabling readers to more intuitively grasp the feasibility and practicality of the methodology proposed in this paper.
  2. Line 131 (Page 3): The definition of Infrastructure as Code (IaC) is good but lacks context in terms of its evolution. Consider adding a few sentences explaining how IaC has evolved from traditional manual configuration to automated solutions like Bicep and Terraform.
  3. Section 2.3 (Page 6): The discussion on alternative Infrastructure as Code tools is informative but could benefit from a summarized comparison table highlighting key differences in language paradigms, cloud support, state management, and learning curves. This will improve readability and allow readers to grasp the distinctions more quickly.
  4. Figure 2 (Page 10): The font size in the chart is relatively small. It is advisable to appropriately enlarge the font size of the text and labels, and adjust the image background to a light color, so as to improve its aesthetic appeal and clarity.
  5. Section 6 (Page 28): It is recommended to include performance comparison charts, such as the comparison of average time consumption between manual deployment and automated deployment (bar chart or line chart), and the comparison of failure rate or configuration deviation rate before and after automated deployment (column chart), so as to intuitively present their performance differences.
  6. Code examples are detailed but very long. Consider moving lengthy code into an appendix or supplementary material to keep the main text concise.
  7. Section 7.1 (Page 35): Nakkawita and Pushpakumara (2024) demonstrate that modeling historical patterns enables predictive maintenance and enhances long-term system safety. Similarly, this study leverages historical deployment patterns through version-controlled Bicep templates and parameterization to ensure infrastructure reliability and prevent "configuration drift incidents." Citing their research (e.g., DOI: 10.1016/j.jsasus.2024.09.001) underscores that learning from historical data—whether in physical infrastructure or infrastructure as code—is critical to achieving sustainable system resilience.
  8. Section 7.3 (Page 36): Kumar et al. (2024) optimized industrial processes by converting low-value iron ore into high-efficiency agglomerates, minimizing waste and enhancing sustainability. Their approach parallels the requirements for resource-efficient Infrastructure as Code pipelines, where automated cost control ensures that cloud infrastructure is "lean" and environmentally responsible. As emphasized in their study (e.g., DOI: 10.1016/j.jsasus.2024.11.001), process efficiency is critical to operational sustainability and reducing resource footprints, whether in manufacturing or cloud automation.

Author Response

First, I would like to thank you for your thorough review of our paper “Creating Automated Microsoft Bicep Application Infrastructure from GitHub in the Azure Cloud” (futureinternet-3791985) and for your helpful comments to improve it.

 


To Reviewer 1:

            Thank you very much for your review and valuable remarks.

 

  1. Section 1 (Page 3): While architectural diagrams and parameter files have been presented, there is a lack of visual illustrations regarding the specific execution results. It is recommended to include screenshots of GitHub Actions workflow runs, thereby enabling readers to more intuitively grasp the feasibility and practicality of the methodology proposed in this paper.

Response 1:  We appreciate the reviewer’s observation regarding the lack of visual illustrations related to the actual execution results of the proposed automation workflows. In response to this valuable suggestion, we have enhanced Section 6.3 by incorporating two representative screenshots directly from the GitHub Actions user interface—one for the deployment pipeline (Figure 4) and one for the deletion pipeline (Figure 5). These additions provide visual evidence of the successful execution of the CI/CD workflows and reinforce the practical applicability and operational feasibility of the proposed methodology. We believe that this visual supplement significantly improves the comprehensibility and completeness of the presented DevOps automation solution.
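For readers who wish to reproduce the setup, a minimal sketch of a deployment workflow of this kind is given below. All names (workflow file, resource group, template and parameter paths, secret name) are hypothetical placeholders, not the exact configuration from the manuscript.

    # .github/workflows/deploy.yml -- illustrative sketch, hypothetical names
    name: deploy-infrastructure
    on:
      push:
        branches: [main]
      workflow_dispatch:

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          # Fetch the repository containing the Bicep templates
          - uses: actions/checkout@v4

          # Authenticate to Azure with a service-principal secret
          - uses: azure/login@v2
            with:
              creds: ${{ secrets.AZURE_CREDENTIALS }}

          # Compile and deploy the Bicep template to a resource group
          - name: Deploy Bicep template
            run: |
              az deployment group create \
                --resource-group rg-demo-dev \
                --template-file ./infra/main.bicep \
                --parameters ./infra/parameters/dev.bicepparam

The deletion pipeline shown in Figure 5 can follow the same structure, with the deployment step replaced by a teardown command.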

  2. Line 131 (Page 3): The definition of Infrastructure as Code (IaC) is good but lacks context in terms of its evolution. Consider adding a few sentences explaining how IaC has evolved from traditional manual configuration to automated solutions like Bicep and Terraform.

Response 2:  We thank the reviewer for the insightful comment regarding the contextualization of Infrastructure as Code (IaC). To address this recommendation, we have expanded Section 2.1 by adding historical context on the evolution of IaC practices. Specifically, we describe the transition from traditional manual infrastructure configuration—typically performed via graphical user interfaces or command-line scripts—towards modern, declarative, and version-controlled automation approaches enabled by tools such as Bicep and Terraform. This addition clarifies the paradigm shift that IaC represents in cloud engineering and supports a more comprehensive understanding of its significance in DevOps practices.
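To make the paradigm shift concrete, the following minimal Bicep fragment (hypothetical names and values) illustrates the declarative style: the desired end state is described once, and the platform converges the environment toward it, in contrast to imperative portal clicks or shell scripts.

    // Illustrative sketch only: a storage account declared, not scripted
    param location string = resourceGroup().location
    param storageName string = 'stdemodev001'

    resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
      name: storageName
      location: location
      sku: { name: 'Standard_LRS' }
      kind: 'StorageV2'
    }

Re-running such a template is idempotent: resources that already match the declaration are left unchanged.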

  3. Section 2.3 (Page 6): The discussion on alternative Infrastructure as Code tools is informative but could benefit from a summarized comparison table highlighting key differences in language paradigms, cloud support, state management, and learning curves. This will improve readability and allow readers to grasp the distinctions more quickly.

Response 3:  We appreciate the reviewer’s constructive suggestion to enhance Section 2.3 with a more accessible comparison of the discussed Infrastructure as Code (IaC) tools. In response, we have added a structured comparison table within the same section, which presents the key differences between Azure Bicep, Terraform, Pulumi, and AWS CloudFormation. The table includes a concise juxtaposition of language paradigms, supported cloud platforms, state management mechanisms, modularity, learning curves, and ideal use cases. We believe this addition improves the clarity and readability of the section, enabling readers to quickly compare the capabilities and trade-offs of each tool in a systematic manner.

  4. Figure 2 (Page 10): The font size in the chart is relatively small. It is advisable to appropriately enlarge the font size of the text and labels, and adjust the image background to a light color, so as to improve its aesthetic appeal and clarity.

Response 4:  We thank the reviewer for pointing out the visual limitations in Figure 2. In response, we have revised the diagram in Section 4.1 by increasing the font size of all labels and annotations to enhance legibility. Additionally, the background has been adjusted to a light, neutral tone to improve both visual clarity and aesthetic appeal. These modifications aim to ensure that the architecture diagram is more accessible and reader-friendly across various formats and screen types.

  5. Section 6 (Page 28): It is recommended to include performance comparison charts, such as the comparison of average time consumption between manual deployment and automated deployment (bar chart or line chart), and the comparison of failure rate or configuration deviation rate before and after automated deployment (column chart), so as to intuitively present their performance differences.

Response 5:  We appreciate the reviewer’s valuable recommendation to include visual performance comparisons between manual and automated deployments. In response, we have updated Section 6.1 to include two comparative charts: (1) a bar chart illustrating the average time required for manual versus automated deployment, and (2) a column chart comparing the observed failure/configuration deviation rates. These visualizations offer a clear and intuitive representation of the operational benefits achieved through Infrastructure as Code and CI/CD automation. We believe that this enhancement significantly strengthens the empirical aspect of the study and supports the practicality of the proposed methodology.

  6. Code examples are detailed but very long. Consider moving lengthy code into an appendix or supplementary material to keep the main text concise.

Response 6:  We thank the reviewer for highlighting the importance of maintaining conciseness in the main body of the text. In response, we have decided to relocate the full-length code examples to a dedicated appendix section. This change preserves the technical completeness of the work while improving the readability and flow of the main narrative. The main text now includes only the most relevant code excerpts, with references directing the reader to the full listings in the appendix for further inspection.

  7. Section 7.1 (Page 35): Nakkawita and Pushpakumara (2024) demonstrate that modeling historical patterns enables predictive maintenance and enhances long-term system safety. Similarly, this study leverages historical deployment patterns through version-controlled Bicep templates and parameterization to ensure infrastructure reliability and prevent "configuration drift incidents." Citing their research (e.g., DOI: 10.1016/j.jsasus.2024.09.001) underscores that learning from historical data—whether in physical infrastructure or infrastructure as code—is critical to achieving sustainable system resilience.

Response 7:  We appreciate the reviewer’s effort in suggesting a relevant reference to support the discussion on the importance of leveraging historical deployment patterns in achieving infrastructure reliability. However, after carefully reviewing the referenced article (“Development of a rating model for assessing the condition of steel railway bridges,” https://doi.org/10.1016/j.jsasus.2024.09.001, available at https://www.sciencedirect.com/science/article/pii/S2949926724000295), we found that its content focuses on predictive maintenance and physical asset modeling, without a direct connection to Infrastructure as Code or cloud automation frameworks such as Bicep.

It appears there may have been a misreference or an unintended mismatch in context. Nonetheless, we fully agree with the reviewer’s broader point regarding the value of historical patterns in enhancing resilience and reproducibility, and this perspective is already reflected in our discussion on configuration drift prevention and version-controlled deployments.

  8. Section 7.3 (Page 36): Kumar et al. (2024) optimized industrial processes by converting low-value iron ore into high-efficiency agglomerates, minimizing waste and enhancing sustainability. Their approach parallels the requirements for resource-efficient Infrastructure as Code pipelines, where automated cost control ensures that cloud infrastructure is "lean" and environmentally responsible. As emphasized in their study (e.g., DOI: 10.1016/j.jsasus.2024.11.001), process efficiency is critical to operational sustainability and reducing resource footprints, whether in manufacturing or cloud automation.

Response 8:  We thank the reviewer for referring to the work of Kumar et al. (2024) and drawing a parallel between industrial process optimization and resource-efficient Infrastructure as Code (IaC) pipelines. However, upon examination of the referenced article associated with DOI 10.1016/j.jsasus.2024.11.001, we were unable to locate content that directly relates to resource-efficient IaC or cloud automation workflows. It appears that the focus of the paper lies mainly in industrial material processing rather than cloud-native CI/CD or infrastructure cost control.

Therefore, while the conceptual parallel regarding operational efficiency and minimizing resource footprints is appreciated, we were unable to identify a substantive thematic overlap to warrant explicit citation. Nonetheless, we fully agree with the broader principle that cost‑awareness and resource optimization are essential in IaC pipelines. This principle is indeed reflected throughout our manuscript — for example, in sections discussing automated cost control, lean infrastructure provisioning, and environmental sustainability practices.

We remain open to considering alternative recommendations or a more thematically aligned reference should the reviewer wish to suggest one.

 

Thank you very much for your remarks and comments. They were very helpful in emphasizing the main tasks and contributions of the manuscript and in focusing the readers’ attention on its new and unique elements.

Reviewer 2 Report

Comments and Suggestions for Authors

The paper is generally well written and structured.

While the paper describes the engineering integration of two matching technologies, the scientific contribution is not made explicit and a solid evaluation for a journal is missing.

More detailed comments below:

  • the numerous code parts should use syntax highlighting, smaller font and be labelled as Listings and just important ones kept
  • Figure 1 has to be improved, Figure 2 as well - why not draw it?
  • 2.3. needs a more systematic approach to a comparison
  • 2.4. as well including more tools
  • 3.3. might need some weights for each goal for later evaluation
  • Fig 3 => listings!
  • 4.2. is standard usage imo, so focus on the unique bits and pieces there
  • 5 and 6 are engineering chapters, where is the science?
  • chapter 7 misses some evaluation, e.g. goal fulfilment AND performance/cost considerations

In short, a solid technical report but not a scientific paper!

Author Response


First, I would like to thank you for your thorough review of our paper “Creating Automated Microsoft Bicep Application Infrastructure from GitHub in the Azure Cloud” (futureinternet-3791985) and for your helpful comments to improve it.

 


To Reviewer 2:

Thank you for your review and valuable remarks.

 

  1. While the paper describes the engineering integration of two matching technologies, the scientific contribution is not made explicit and a solid evaluation for a journal is missing.

Response 1:  We thank the reviewer for their insightful comment. We understand the importance of explicitly articulating the scientific contribution of the manuscript, particularly in the context of academic publishing.

In response, we have revised the manuscript to better highlight the original scientific contribution, which lies in the formalization and implementation of a modular, parameterized, and security-hardened Infrastructure as Code (IaC) framework for Microsoft Azure, using Bicep templates and GitHub Actions. Specifically, our contribution includes:

  • A reusable, modular Bicep architecture that supports secure and environment-specific provisioning of Azure resources, with strict adherence to CI/CD and DevSecOps principles;
  • A multi-environment automation pipeline that is scalable and audit-ready, supporting full lifecycle operations (deployment and teardown) in less than 15 minutes, reducing manual setup time from hours to minutes (as shown in Section 6);
  • A structured evaluation of the approach, comparing performance and reliability between manual and automated deployments (Section 6), including empirical metrics on time reduction (from multiple hours to ~15 minutes) and error rate (from 30% to <1%).

We have made these contributions more explicit in the Abstract, Introduction (final paragraph), and Section 6 (GitHub Actions CI/CD Pipelines), including quantified comparisons and a clearer statement of the added scientific and practical value.

We hope that these clarifications help reinforce the manuscript's relevance and originality for the journal’s readership.

  2. the numerous code parts should use syntax highlighting, smaller font and be labelled as Listings and just important ones kept.

Response 2:  We thank the reviewer for their valuable comment regarding the presentation of code in the manuscript.

To improve readability and conform to academic standards, we have carefully revised the formatting of the code examples:

  • The most relevant and illustrative code fragments are now retained in the main body of the paper, clearly marked as Listings, numbered consecutively, and accompanied by concise captions.
  • Non-essential or lengthy code blocks have been moved to the Appendix, as suggested, to maintain the flow of the main text while preserving the technical depth for interested readers.
  • Due to constraints in the document formatting environment, advanced syntax highlighting and further font size reduction were not fully feasible, but we ensured consistent styling and logical indentation for clarity.

We believe this restructuring improves the balance between technical depth and readability, and we thank the reviewer once again for highlighting this aspect.

  3. Figure 1 has to be improved, Figure 2 as well - why not draw it?

Response 3:  Regarding Figure 1 (repository structure), we have considered the suggestion to draw it instead of using a screenshot or code-like layout. However, due to its inherently hierarchical and textual nature, we have opted to preserve the current representation, while improving formatting, alignment, and labeling for better readability. Figure 2 has been redrawn with larger fonts and a light background, as described in our response to Reviewer 1.

We hope that these visual enhancements meet the expectations for clarity and academic presentation.

  4. 2.3. needs a more systematic approach to a comparison.

Response 4:  We thank the reviewer for pointing out the need for a more systematic comparison in Section 2.3.

In response, we have significantly revised Section 2.3 by introducing a structured comparison table (Table 1). The table systematically contrasts the key characteristics of the most widely used Infrastructure-as-Code tools—Azure Bicep, Terraform, Pulumi, and AWS CloudFormation—across criteria such as language type, cloud provider support, integration level, modularity, state management, and ideal use cases.

This structured format improves clarity and enables readers to quickly grasp the strengths and limitations of each tool in a side-by-side manner, thereby addressing the reviewer’s concern directly.

  5. 2.4. as well including more tools.

Response 5:  We thank the reviewer for the suggestion to include additional tools in Section 2.4.

In the revised version, Section 2.4 already includes a comparative overview of the most widely adopted CI/CD tools relevant to GitHub Actions, including Azure DevOps, Jenkins, and GitLab CI. These tools were selected based on their widespread use, relevance to Azure deployments, and their differing characteristics in terms of integration, complexity, and operational models.

While there are other automation platforms available, we believe that the current selection offers a representative and balanced comparison, aligned with the scope and focus of this work. Expanding further could dilute the clarity of the section without significant additional value to the reader.

Nonetheless, we remain open to including specific tools if the reviewer has particular suggestions.

  6. 3.3. might need some weights for each goal for later evaluation.

Response 6:  We appreciate the reviewer’s thoughtful suggestion regarding the need to assign relative weights to each goal in Section 3.3 to enable future evaluation and prioritization.

In response, we have updated Section 3.3 to include a weighted prioritization of the project goals based on their criticality to successful cloud infrastructure automation. Each goal is now accompanied by a rationale and a proposed relative weight (on a scale from 1 to 5) reflecting its importance for reliability, maintainability, security, and scalability.

This addition allows readers to better understand the trade-offs and relative emphasis placed on different aspects of the design and facilitates potential quantitative evaluation in future work.
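Purely as an illustration of how these weights could drive a later quantitative evaluation (a sketch, not the exact scheme adopted in the manuscript), a normalized weighted goal-fulfilment score could be computed as

    S = \frac{\sum_{i=1}^{n} w_i \, g_i}{\sum_{i=1}^{n} w_i}, \qquad w_i \in \{1, \dots, 5\}, \; g_i \in [0, 1],

where w_i is the weight assigned to goal i, g_i is the assessed degree to which that goal is met, and S = 1 indicates full achievement of all goals.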

  7. Fig 3 => listings!

Response 7:  We thank the reviewer for the helpful remark regarding Figure 3.

In response, Figure 3 has been reclassified and reformatted as a Listing, since it primarily contains structured parameter file definitions rather than graphical content. This change aligns with academic formatting standards and improves the logical consistency of the manuscript.

We have also ensured that all parameter files and their values are fully documented and explained in the surrounding text, so that readers can understand their role and customize them as needed for multi-environment deployments.
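As an illustration of the listing format, an environment-specific parameter file of this kind might look as follows; the parameter names and values are hypothetical, not those used in the manuscript.

    // dev.bicepparam -- illustrative sketch with hypothetical values
    using './main.bicep'

    param location = 'westeurope'
    param environmentName = 'dev'
    param appServiceSku = 'B1'

A sibling file such as prod.bicepparam would then override only the values that differ, which is what makes multi-environment deployment a matter of swapping one file.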

  8. 4.2. is standard usage imo, so focus on the unique bits and pieces there.

Response 8:  We thank the reviewer for their observation regarding Section 4.2.

While we acknowledge that certain elements of the described architecture—such as the use of Azure Resource Manager (ARM), GitHub Actions, and parameterized Bicep modules—reflect common industry practices, we would like to clarify that the implementation presented in Section 4.2 is tailored to a highly specific, production-ready scenario involving multi-environment deployment, security hardening, and DevSecOps alignment for Microsoft Azure.

This architecture:

  • integrates multiple reusable Bicep modules with environment-specific parameters;
  • implements secrets isolation, Key Vault integration, and conditional logic (a minimal sketch of this pattern is shown below);
  • follows a strict modularization and naming convention policy to support traceability and rollback.
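The secrets-isolation pattern can be sketched as follows; the vault, module, and parameter names are hypothetical, not the manuscript's actual modules. In Bicep, getSecret() may only feed a module parameter decorated with @secure(), so the secret value never appears in logs or deployment history.

    // Illustrative sketch: passing a Key Vault secret into a module
    resource kv 'Microsoft.KeyVault/vaults@2023-07-01' existing = {
      name: 'kv-demo-dev'
    }

    module app './modules/appService.bicep' = {
      name: 'appDeployment'
      params: {
        // appService.bicep declares: @secure() param dbConnectionString string
        dbConnectionString: kv.getSecret('dbConnectionString')
      }
    }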

Therefore, while some components are standard, the specific combination, parameter structuring, and deployment flow provide a unique and replicable example for secure and scalable infrastructure automation.

We have revised the introductory paragraph of Section 4.2 to emphasize this uniqueness and its limitations as a “template” rather than a universal blueprint. We hope this clarification aligns with the reviewer’s expectations.

  9. 5 and 6 are engineering chapters, where is the science?

Response 9:  Indeed, Sections 5 and 6 describe the engineering implementation and automation logic in detail, but we respectfully argue that this is not merely descriptive. These chapters also carry scientific value by demonstrating:

- A novel, modular Infrastructure as Code (IaC) design pattern using Bicep and GitHub Actions, combining best practices in a reusable and secure manner;

- A replicable methodology for implementing environment-specific cloud infrastructure with parameterized templates and conditional deployment logic;

- A performance comparison between manual and automated deployment methods, offering empirical insights into time savings and error reduction (as detailed in Section 6.3);

- A practical demonstration of DevSecOps integration, including secrets management and workflow hardening, which extends beyond theoretical exposition.

To better clarify this contribution, we have revised the conclusions of Sections 5 and 6 and added explicit statements of scientific relevance in the introductory paragraphs.

We hope this addresses the concern and clarifies the added academic value of these implementation-focused sections.

  10. chapter 7 misses some evaluation, e.g. goal fulfilment AND performance/cost considerations.

Response 10:  We thank the reviewer for their valuable suggestion regarding Section 7.

We acknowledge the importance of including a concise goal fulfillment assessment and performance/cost evaluation in the concluding part of the paper. In response, we have expanded Chapter 7 to include:

  • A brief summary of goal achievement, referring back to the weighted priorities established in Section 3.3;
  • A discussion of performance metrics (deployment time, error rate) obtained in Section 6, highlighting the benefits of automation;
  • A cost-related reflection, explaining how the design enables resource optimization (e.g., ephemeral environments and teardown automation; a minimal teardown sketch is shown after this list) and provides foundations for future cost modeling.
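The teardown automation referenced above can be sketched as a second workflow (hypothetical names; not the exact pipeline from the manuscript) that removes the entire environment in one step:

    # .github/workflows/teardown.yml -- illustrative sketch
    name: teardown-infrastructure
    on:
      workflow_dispatch:   # triggered manually to avoid accidental deletion

    jobs:
      destroy:
        runs-on: ubuntu-latest
        steps:
          - uses: azure/login@v2
            with:
              creds: ${{ secrets.AZURE_CREDENTIALS }}
          - name: Delete the resource group and all resources in it
            run: az group delete --name rg-demo-dev --yes --no-wait

Because the whole environment can be recreated from the templates, deleting idle resource groups in this way is what makes ephemeral, cost-efficient environments practical.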

While Section 7 is primarily focused on synthesizing findings from earlier sections, we agree that a brief evaluative component enhances its value. These additions aim to meet the reviewer’s expectations while keeping the section concise.

  11. In short, a solid technical report but not a scientific paper!

Response 11:  We acknowledge the reviewer’s final assessment and appreciate their recognition of the technical strength of the manuscript.

While the paper indeed presents a practical and implementation-focused study, we respectfully argue that it also meets the standards of a scientific contribution through:

- The formalization of a modular, secure, and reproducible IaC-based architecture, which can serve as a reference model in both academia and industry;

- The introduction of structured evaluation criteria, including weighted goal assessment and empirical performance metrics;

- The provision of a generalizable methodology for DevSecOps automation on Azure, which is applicable across different cloud environments with appropriate adaptation.

We have revised several sections to better highlight these scientific aspects and make the contribution more explicit. We hope that with these clarifications, the manuscript will be seen as both technically robust and scientifically relevant.

 

Thank you very much for your remarks and comments. I greatly appreciate your thorough and detailed engagement with my manuscript! Your remarks were very helpful in emphasizing the main tasks and contributions of the manuscript and in focusing the readers’ attention on its new and unique elements.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

It can be accepted in its present form.

Reviewer 2 Report

Comments and Suggestions for Authors

The paper has been improved according to the reviewers’ feedback.
