Peer-Review Record

From Presence to Performance: Mapping the Digital Maturity of Romanian Municipalities

Adm. Sci. 2025, 15(4), 147; https://doi.org/10.3390/admsci15040147
by Catalin Vrabie
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 23 March 2025 / Revised: 10 April 2025 / Accepted: 12 April 2025 / Published: 17 April 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper on the digital maturity of Romanian municipalities presents a highly relevant and timely analysis of local governments' adaptability in the face of technological change. The study's focus on a significant cohort of Romanian city halls provides valuable insights into the current state of digital governance at the municipal level.

The methodology employed in this research is robust and well-presented, leading to targeted and meaningful conclusions. The authors have effectively utilized a defined scale to rate each municipality's web portal, providing a comprehensive overview of the digital landscape across 103 Romanian municipalities.

While the current study offers a snapshot of digital governance, a longitudinal survey conducted over several years would undoubtedly enhance our understanding of the evolving digital transformation in Romanian local administrations. This approach would allow for a more nuanced view of the changes and adaptations occurring over time.

The authors' findings highlight the importance of further investigating the differences in quality and availability of digital services. Future research should focus on examining the governance and steering logic of political entities, as well as the involvement of citizens and businesses in the digital transformation process. This would provide deeper insights into the factors influencing the success of e-government initiatives.

To build upon this valuable work, it would be beneficial to develop a conceptual framework that includes benchmarking for public institutions' adaptability. Such a framework could help ensure continuity and efficiency in service delivery while providing a standardized method for assessing and comparing digital maturity across municipalities.

In conclusion, this paper makes a significant contribution to our understanding of digital governance in Romanian municipalities and lays a strong foundation for future research in this critical area.

Author Response

Firstly, the author would like to thank the reviewer for his/her kind message and valuable comments. Regarding the suggested framework and benchmarking approach for assessing public institutions' adaptability: while the current study focuses specifically on the evaluation of municipalities' official web portals, our ongoing research extends to examining citizens' and other stakeholders' involvement by analyzing official municipal social media profiles. Due to the scope limitations of the present article, we will address your recommendation comprehensively in a forthcoming dedicated article. That subsequent study will propose a detailed conceptual framework tailored explicitly to the Romanian e-government and Smart Cities ecosystem, providing an integrated approach to assessing digital maturity and stakeholder engagement.

Reviewer 2 Report

Comments and Suggestions for Authors

This manuscript addresses an essential and timely topic regarding digital maturity in local governance, particularly focusing on Romanian municipalities. The authors have conducted a comprehensive assessment of digital maturity through an empirical analysis of official web portals across municipalities, applying a detailed set of indicators. Overall, the paper is well-structured and contributes valuable insights. However, there are critical areas that require improvement. The following are my detailed comments:
Major comments:
•    While the manuscript references Layne & Lee’s E-Government Maturity Model and the UN EGDI, it does not fully integrate these frameworks into the analysis. The study claims most municipalities are at the “Transactional” stage (e-Gov 2.0) with some transitioning to “Integrated Services” (e-Gov 3.0), but this assertion is not substantiated with specific examples or a clear mapping of indicators to these stages.
•    The selection process for the 23 indicators (from an initial 48) is outlined, but the rationale for excluding the other 25 is vague (“not focusing on digitalization efforts”). Similarly, the scoring system (e.g., fixed-point scale, conversion to 1-5 relative scores) lacks detail on how binary (yes/no) indicators were weighted versus subjective ones (e.g., “Pleasant design”). Please provide a supplementary table listing all 48 initial indicators, with a brief justification for inclusion/exclusion. Clarify the scoring methodology—e.g., were all indicators equally weighted? How were subjective assessments (C51, C52) standardized across evaluators? Consider discussing inter-rater reliability if multiple researchers scored the websites.
•    The study evaluates the availability of digital services but does not assess their usage (e.g., adoption rates by citizens) or effectiveness (e.g., reduced administrative burden, citizen satisfaction). This is acknowledged as a limitation, but it weakens the claim of “mapping digital maturity,” which typically encompasses performance outcomes.
•    The manuscript frequently mentions e-Gov 3.0 and AI as the next frontier, but the analysis lacks concrete evidence of AI adoption in Romanian municipalities. The absence of AI-specific indicators in the framework undermines claims about Romania’s proximity to e-Gov 3.0.

Minor comments:
•    Table 3 (p. 8) could be enhanced with a column comparing 2014 and 2024 percentages to visually emphasize growth (e.g., e-petitions from 43 to 88 municipalities).
•    Figure 2 (p. 9) is referenced but not provided in the document. 

Author Response

Firstly, we would like to thank the reviewer for his valuable comments and suggestions. Our answers appear below. At the same time, we made the appropriate adjustments in the paper in order to increase its value to other researchers.

This manuscript addresses an essential and timely topic regarding digital maturity in local governance, particularly focusing on Romanian municipalities. The authors have conducted a comprehensive assessment of digital maturity through an empirical analysis of official web portals across municipalities, applying a detailed set of indicators. Overall, the paper is well-structured and contributes valuable insights. However, there are critical areas that require improvement. The following are my detailed comments:
Major comments:
•    While the manuscript references Layne & Lee’s E-Government Maturity Model and the UN EGDI, it does not fully integrate these frameworks into the analysis. The study claims most municipalities are at the “Transactional” stage (e-Gov 2.0) with some transitioning to “Integrated Services” (e-Gov 3.0), but this assertion is not substantiated with specific examples or a clear mapping of indicators to these stages.

We acknowledge that while we reference Layne & Lee’s E-Government Maturity Model and the UN EGDI, the manuscript currently lacks a detailed mapping of our indicators to these frameworks. To address this, we revised the manuscript to clearly demonstrate how each indicator aligns with specific stages of these maturity models. We also included concrete examples from our dataset to substantiate our assertion about the municipalities' positions within the “Transactional” and “Integrated Services” stages.

This explicit mapping and concrete evidence clarify the relationship between our empirical findings and existing theoretical maturity frameworks, strengthening the analytical rigor of the study.

These modifications provide stronger theoretical grounding and empirical evidence to support our analysis.


•    The selection process for the 23 indicators (from an initial 48) is outlined, but the rationale for excluding the other 25 is vague (“not focusing on digitalization efforts”). Similarly, the scoring system (e.g., fixed-point scale, conversion to 1-5 relative scores) lacks detail on how binary (yes/no) indicators were weighted versus subjective ones (e.g., “Pleasant design”). Please provide a supplementary table listing all 48 initial indicators, with a brief justification for inclusion/exclusion. Clarify the scoring methodology—e.g., were all indicators equally weighted? How were subjective assessments (C51, C52) standardized across evaluators? Consider discussing inter-rater reliability if multiple researchers scored the websites.

We added the following explanation to the text (thank you for pointing this out): The remaining 25 indicators, while valuable for assessing broader aspects of good governance and local administrative performance, focused primarily on traditional governance concerns rather than digital maturity. Examples of these excluded indicators include measures such as reporting the presence of potholes in city roads, assessing whether public transportation adequately serves citizens' needs, particularly in newly developed neighborhoods, evaluating the cleanliness of public spaces, or the availability and quality of recreational facilities. Such governance-focused indicators, although important, did not directly contribute to evaluating the municipalities' digital maturity and were therefore set aside for a separate, governance-centered analysis.

Moreover, we added further information regarding the scoring methodology and addressed the C51/C52 limitations in the dedicated section of the article.
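To make the scoring conversion concrete, below is a minimal Python sketch of one plausible way raw fixed-point totals could be mapped onto the 1-5 relative scale; the exact rule used in the paper may differ, so the function name and example values here are purely illustrative.

# Hypothetical sketch: min-max scaling of raw fixed-point indicator totals
# onto a 1-5 relative scale across all municipalities. The paper's actual
# conversion rule may differ.
def to_relative_scale(raw_scores: list[float]) -> list[float]:
    """Map raw totals onto a 1-5 scale relative to the observed range."""
    lo, hi = min(raw_scores), max(raw_scores)
    if hi == lo:  # degenerate case: all municipalities scored identically
        return [3.0] * len(raw_scores)
    return [1.0 + 4.0 * (s - lo) / (hi - lo) for s in raw_scores]

# Example with three hypothetical raw totals over the 23 indicators
print(to_relative_scale([12.0, 30.0, 21.0]))  # -> [1.0, 5.0, 3.0]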
•    The study evaluates the availability of digital services but does not assess their usage (e.g., adoption rates by citizens) or effectiveness (e.g., reduced administrative burden, citizen satisfaction). This is acknowledged as a limitation, but it weakens the claim of “mapping digital maturity,” which typically encompasses performance outcomes.

Thank you for this important observation. We fully agree that digital maturity ideally includes not just the availability of services, but also their adoption and effectiveness. Due to methodological constraints—particularly the lack of access to internal municipal data and real-time analytics—this study focused on the supply side of digital maturity (service presence and website functionality). We acknowledge that this limits our ability to fully “map” maturity in a performance-based sense.

To address this, we have clarified the scope of our analysis in the manuscript and revised the terminology to specify that we are assessing structural digital readiness or digital service infrastructure maturity, rather than comprehensive digital performance. Additionally, we have noted plans for future research that will include user-level indicators such as adoption rates, satisfaction surveys, and service utilization metrics.
•    The manuscript frequently mentions e-Gov 3.0 and AI as the next frontier, but the analysis lacks concrete evidence of AI adoption in Romanian municipalities. The absence of AI-specific indicators in the framework undermines claims about Romania’s proximity to e-Gov 3.0.

Thank you for pointing out this critical gap in our current analysis. Indeed, while the manuscript frequently refers to the emergence of e-Gov 3.0 and the transformative role of AI in public administration, we acknowledge that the present study does not incorporate AI-specific indicators. This omission reflects a deliberate methodological choice; we addressed it at the end of the Findings section. Thank you for the important comment.

Minor comments:
•    Table 3 (p. 8) could be enhanced with a column comparing 2014 and 2024 percentages to visually emphasize growth (e.g., e-petitions from 43 to 88 municipalities).
– If we did so, we argue that other researchers/reviewers would ask for a complete analysis over time, which is not the intention of the present article. We hope the reviewer understands this (one solution would be to remove the information entirely, but we consider it important). Please advise.
•    Figure 2 (p. 9) is referenced but not provided in the document. – we addressed that, thank you.

Thank you once again for the valuable comments. We hope that the updated version of the article will prove more relevant for the scientific community around the e-government and smart cities concepts. Wishing you all the best!

Reviewer 3 Report

Comments and Suggestions for Authors

The paper under review is a recent contribution that examines the application of novel methods and methodologies to extract selected attributes related to city halls, primarily sourced from their respective websites. The methodological framework is appropriate, employing sufficient techniques, and the data sources have been selected with due consideration. Key highlights, including the impact of the COVID-19 pandemic as a catalyst for the expansion of online services (e.g., Page 11, lines 348–350), are accurately presented.

However, to enhance the robustness and reproducibility of the study, further elaboration on the website selection criteria is necessary. A critical aspect of such research is ensuring that the sample frame (i.e., the list of URLs) is rigorously linked to the corresponding administrative units (city halls). Many studies fail in this regard, thereby compromising the reliability of their findings. Specifically, the following details should be clarified:
- Selection criteria for city halls – What demographic, administrative, or geographic parameters were applied?
- URL collection methodology – Were automated scrapers, manual searches, or official registries used?
- Quality assurance mechanisms – Was an annotation exercise conducted to manually validate a subset of the data?
- Indicator calculation – How were the metrics, particularly those subdivided into classes (e.g., C51 and C52), computed?

Notably, the criteria for C51 and C52 subclasses appear highly subjective. A more detailed explanation of the measures taken to ensure objectivity and reliability in classification would strengthen the methodological rigor.

Additionally, the paper should clarify whether automated data extraction tools (e.g., web crawlers or scrapers) were employed. For instance, the detection of social media profiles (e.g., Facebook or X) could be efficiently conducted using automated retrieval techniques, yet the methodology section does not explicitly address this.

The treatment of communication metrics (Page 11, line 367) requires refinement. Merely documenting the presence of social media accounts does not sufficiently capture communication efficacy. A more meaningful analysis would incorporate indicators such as update frequency, audience engagement, or content relevance.

Regarding technological assessment (Page 13, line 441), the assertion that platform choice (e.g., Joomla, WordPress) inherently reflects design quality is problematic. Some scholars argue that newer technologies may introduce inefficiencies (e.g., excessive code redundancy), whereas legacy systems might still function optimally. A more nuanced discussion is warranted.

The limitations section (Page 13, line 451) cites the dynamic nature of web data as a constraint. While this is a well-recognized challenge in digital research (aligned with the velocity attribute in Big Data literature), its inclusion does not necessitate extensive justification.

If Pearson correlation is applied (Page 15, line 561), the analysis should first confirm linearity in data distribution and address potential outliers. Explicitly stating these checks would reinforce the statistical validity of the findings.

General Recommendations for Improvement

Beyond the specific methodological concerns, several structural enhancements would elevate the paper’s academic rigor:

1) Explicit Research Questions (RQs) and Hypotheses – The paper references "relevance to the research question" (Page 5, line 213), yet no clearly enumerated RQs or testable hypotheses are provided. A structured presentation (e.g., RQ1, RQ2, RQ3…) would improve clarity and reproducibility.
2) Research Gap vs. Contribution – While the research gap is well-identified, the absence of explicitly defined RQs diminishes the paper’s utility for future scholars.

In summary, addressing these methodological and structural concerns would significantly enhance the study’s validity, transparency, and scholarly impact.

Author Response

We would firstly like to thank the reviewer for his/her valuable comments. Below are our answers – also addressed in the paper.

The paper under review is a recent contribution that examines the application of novel methods and methodologies to extract selected attributes related to city halls, primarily sourced from their respective websites. The methodological framework is appropriate, employing sufficient techniques, and the data sources have been selected with due consideration. Key highlights, including the impact of the COVID-19 pandemic as a catalyst for the expansion of online services (e.g., Page 11, lines 348–350), are accurately presented.

However, to enhance the robustness and reproducibility of the study, further elaboration on the website selection criteria is necessary. A critical aspect of such research is ensuring that the sample frame (i.e., the list of URLs) is rigorously linked to the corresponding administrative units (city halls). Many studies fail in this regard, thereby compromising the reliability of their findings. Specifically, the following details should be clarified:
- Selection criteria for city halls – What demographic, administrative, or geographic parameters were applied?
- URL collection methodology – Were automated scrapers, manual searches, or official registries used?
- Quality assurance mechanisms – Was an annotation exercise conducted to manually validate a subset of the data?
- Indicator calculation – How were the metrics, particularly those subdivided into classes (e.g., C51 and C52), computed?

We thank the reviewer for this pertinent observation regarding the rigor and transparency of our data collection and validation process. We have expanded the “Research Methodology” section of the manuscript to clarify the criteria used for selecting municipalities (we included all 103 municipalities), detail the URL collection process, explain the mechanisms used to ensure data quality, and describe the specific methodology for computing each class of indicators—particularly the subjective ones (C51 and C52).

These clarifications aim to enhance both the robustness and reproducibility of the study. We believe these additions will help improve transparency and support future research replications.

Notably, the criteria for C51 and C52 subclasses appear highly subjective. A more detailed explanation of the measures taken to ensure objectivity and reliability in classification would strengthen the methodological rigor.

Additionally, the paper should clarify whether automated data extraction tools (e.g., web crawlers or scrapers) were employed. For instance, the detection of social media profiles (e.g., Facebook or X) could be efficiently conducted using automated retrieval techniques, yet the methodology section does not explicitly address this.

Thank you for this helpful observation. We confirm that automated tools were indeed used during the data collection process. Specifically, the ParseHub API was employed to extract structured information from municipal websites, including the detection of features such as contact sections, downloadable forms, and social media links. Although our broader study extends to analyzing citizen engagement on social media, we did not include that information in the present article, since it would have made the analysis far too extensive. However, another article, focused fully on citizen interaction, will follow shortly. Furthermore, we clarified how these automated results were validated to ensure accuracy.
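For illustration only, the following Python sketch shows the general technique of detecting social media links on a portal page, using the requests and BeautifulSoup libraries; it is a simplified stand-in for, not a reproduction of, the ParseHub pipeline actually used, and the commented-out URL is hypothetical.

import requests
from bs4 import BeautifulSoup

# Domains counted as a social media presence in this illustrative check
SOCIAL_DOMAINS = ("facebook.com", "twitter.com", "x.com", "youtube.com")

def find_social_links(url: str) -> set[str]:
    """Return the social media domains linked from a page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    found = set()
    for anchor in soup.find_all("a", href=True):
        for domain in SOCIAL_DOMAINS:
            if domain in anchor["href"]:
                found.add(domain)
    return found

# A real run would iterate over the 103 portal URLs and manually validate
# a subset of the results, as described in the methodology.
# print(find_social_links("https://www.example-municipality.ro"))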

The treatment of communication metrics (Page 11, line 367) requires refinement. Merely documenting the presence of social media accounts does not sufficiently capture communication efficacy. A more meaningful analysis would incorporate indicators such as update frequency, audience engagement, or content relevance.

We appreciate this important observation. The current study focused on the structural presence of communication tools (e.g., social media accounts), which, while useful as an initial proxy for digital outreach, indeed does not capture the full spectrum of communication efficacy.

We acknowledge that metrics such as content relevance, update frequency, and citizen engagement would provide deeper insights. Due to resource and scope limitations, these dimensions were not included in this iteration. However, we fully agree with their value and plan to integrate such dynamic indicators in the next phase of this longitudinal research.

We have revised the manuscript to acknowledge this limitation explicitly and to frame the current communication assessment as a foundational layer for more sophisticated analyses to come.

Regarding technological assessment (Page 13, line 441), the assertion that platform choice (e.g., Joomla, WordPress) inherently reflects design quality is problematic. Some scholars argue that newer technologies may introduce inefficiencies (e.g., excessive code redundancy), whereas legacy systems might still function optimally. A more nuanced discussion is warranted.

We thank the reviewer for this thoughtful observation. Indeed, the original statement may have inadvertently suggested a deterministic link between the use of specific content management systems (CMS) such as Joomla or WordPress and inferior design quality.

We acknowledge that the design quality of a website is less dependent on the platform itself and more on the expertise of the developers and the customization applied. Furthermore, it is true that modern CMSs—while offering powerful features—may introduce unnecessary complexity or code inefficiencies if not implemented properly. Conversely, some legacy systems can deliver clean, efficient interfaces if maintained with care.

We have revised the manuscript to reflect a more balanced and nuanced view, emphasizing that it is not the platform but its implementation and design choices that determine user experience and performance.

The limitations section (Page 13, line 451) cites the dynamic nature of web data as a constraint. While this is a well-recognized challenge in digital research (aligned with the velocity attribute in Big Data literature), its inclusion does not necessitate extensive justification.

Thank you for your observation. We have revised the corresponding paragraph to acknowledge the limitation briefly and succinctly, without overemphasizing its implications.

If Pearson correlation is applied (Page 15, line 561), the analysis should first confirm linearity in data distribution and address potential outliers. Explicitly stating these checks would reinforce the statistical validity of the findings.

We appreciate the reviewer’s important point regarding the assumptions underlying the use of Pearson correlation. Prior to applying the Pearson coefficient, we visually inspected the distributions of the dataset using scatterplots and conducted preliminary linearity checks. No significant outliers were observed, and the relationships between the classes of analysis and the final scores exhibited reasonably linear patterns. Nonetheless, we agree that explicitly stating these methodological checks in the paper will enhance the rigor and credibility of the statistical analysis. The manuscript (discussions and conclusion section) has been updated accordingly.
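As an illustration of the checks described, the following minimal Python sketch (using hypothetical scores, not the paper's data) pairs a z-score outlier screen and a scatterplot inspection with the Pearson computation.

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical class scores and final scores for eight municipalities
class_scores = np.array([2.1, 3.4, 2.8, 4.0, 3.1, 4.6, 1.9, 3.8])
final_scores = np.array([2.5, 3.6, 3.0, 4.2, 3.3, 4.8, 2.0, 4.0])

# Outlier screen: flag points more than 3 standard deviations from the mean
z = np.abs(stats.zscore(class_scores))
print("outlier indices:", np.where(z > 3)[0])  # expect none for this sample

# Visual linearity check via scatterplot
plt.scatter(class_scores, final_scores)
plt.xlabel("class score")
plt.ylabel("final score")
plt.show()

r, p = stats.pearsonr(class_scores, final_scores)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")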

General Recommendations for Improvement

Beyond the specific methodological concerns, several structural enhancements would elevate the paper’s academic rigor:

1) Explicit Research Questions (RQs) and Hypotheses – The paper references "relevance to the research question" (Page 5, line 213), yet no clearly enumerated RQs or testable hypotheses are provided. A structured presentation (e.g., RQ1, RQ2, RQ3…) would improve clarity and reproducibility.

2) Research Gap vs. Contribution – While the research gap is well-identified, the absence of explicitly defined RQs diminishes the paper’s utility for future scholars.

We thank the reviewer for highlighting the need for explicitly defined research questions. While the original manuscript referred to a general research aim, we agree that clearly structured Research Questions (RQs) enhance the clarity, focus, and reproducibility of the study. We have now included three explicit research questions that align with the scope, methodology, and findings of the paper. These have been added at the end of the Introduction section.

In summary, addressing these methodological and structural concerns would significantly enhance the study’s validity, transparency, and scholarly impact.

Thank you once again for the valuable comments. We hope that the updated version of the article will prove more relevant for the scientific community around the e-government and smart cities concepts. Wishing you all the best!

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Thank you for addressing my comments. 
