Article
Peer-Review Record

Urban Projects and the Policy-Making Cycle: Indicators for Effective Governance

Sustainability 2025, 17(14), 6305; https://doi.org/10.3390/su17146305
by Francesca Abastante and Beatrice Mecca *
Reviewer 1: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 10 June 2025 / Revised: 4 July 2025 / Accepted: 7 July 2025 / Published: 9 July 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The article tackles an important but underdeveloped question: translating the high-level SDG 11 monitoring architecture into an operational toolbox to inform decisions at district- and building-scale urban projects. Combining a PRISMA-guided review, analysis of existing Italian and European SATs, and a small expert elicitation, the authors derive a consolidated shortlist of 27 indicators to support all four stages of the policy-making cycle. The manuscript is well organised, the literature coverage is generally comprehensive, and the proposed workflow offers a replicable template for other jurisdictions. In its present form, however, the paper still exhibits methodological, editorial, and conceptual gaps that must be addressed before it can be recommended for publication.
Drawbacks:
1. The expert survey is restricted to three Italian regions involved in the GLOSSA project. This raises transferability concerns because housing, transport, and risk management data regimes differ markedly even within the EU.
2. Only three experts are consulted, and their qualitative judgements are reported verbatim without intercoder reliability statistics or rank-aggregation logic.
3. Nearly half of the shortlisted indicators are still labelled scenario-based or qualitative and depend on bespoke project information.
4. The abstract and Discussion promise an “open-source toolbox”, yet no architecture diagram, software stack, or governance model is supplied.
5. There are technical mistakes in lines 366 and 398.
Recommendations:
1. Expand the validation panel to include stakeholders from at least one northern and one eastern European city, or include a sensitivity analysis showing how the final list would change under different regional data availability scenarios.
2. Either enlarge the sample (≥10 experts) or justify why three domain specialists suffice statistically. Provide Fleiss’ κ or Krippendorff’s α to demonstrate agreement levels and explain how conflicting ratings were reconciled.
3. For full reproducibility, supply explicit formulas, data sources, spatial resolution, and recommended update frequency in an online appendix or the open-source toolbox.
4. Add at least one worked example (real or hypothetical) that shows how the “Outdoor common areas” metric would inform problem framing, how it would steer option appraisal, and how it would be monitored in the subsequent phases of the policy-making cycle.
5. Demonstrate real-world usability through a brief case study or pilot deployment.

Author Response

COMMENT: The article tackles an important but underdeveloped question: translating the high-level SDG 11 monitoring architecture into an operational toolbox to inform decisions at district- and building-scale urban projects. Combining a PRISMA-guided review, analysis of existing Italian and European SATs, and a small expert elicitation, the authors derive a consolidated shortlist of 27 indicators to support all four stages of the policy-making cycle. The manuscript is well organised, the literature coverage is generally comprehensive, and the proposed workflow offers a replicable template for other jurisdictions. In its present form, however, the paper still exhibits methodological, editorial, and conceptual gaps that must be addressed before it can be recommended for publication.

ANSWER: Thank you for your appreciation and suggestions. We have significantly improved the paper in this new version.

 

COMMENT: The expert survey is restricted to three Italian regions involved in the GLOSSA project. This raises transferability concerns because housing, transport, and risk management data regimes differ markedly even within the EU.

ANSWER: Thank you for your valuable comment. We acknowledge that data regimes related to housing, transport, and risk management can differ significantly across European countries, and we fully agree that this can affect transferability when comparing across national contexts.

However, the primary objective of our work is not to develop a model directly transferable at the European scale. Rather, the paper aims to define a set of operative sustainability indicators starting from the strategic ones and explore how these can be made operational at the national and local scales within the Italian context. The methodological framework was therefore designed to reflect both national-level priorities and the diversity of regional specificities across Italy.

The selection of the three Italian regions involved in the GLOSSA project, one from the North (Piedmont), one from the Centre-South (Campania), and one from the Islands (Sardinia), was intentional. This geographical distribution was meant to capture the socio-economic, infrastructural, and environmental diversity present within Italy, thus enhancing the internal generalizability of the model and indicator set across different national contexts. While the findings are specific to Italy, we believe the methodological approach and operationalization process can offer valuable insights for similar exercises in other countries.

We have clarified this point in the revised manuscript (Section 3.1, lines 215–221, and Section 5, lines 517–519) to avoid misunderstandings regarding the scope and intended applicability of the study. Moreover, we added a map providing a visual representation of the three regions involved in the study (Section 3.1).

 

COMMENT: Only three experts are consulted, and their qualitative judgements are reported verbatim without intercoder reliability statistics or rank-aggregation logic.

ANSWER: We thank the reviewer for raising this important point. The selection of the three experts was carefully made to ensure both thematic and territorial relevance to the study. All experts are senior academics with consolidated expertise in real estate appraisal, indicator-based evaluation methods, and urban planning. Moreover, each expert is affiliated with one of the three Italian regions analyzed (Piedmont, Campania, and Sardinia), which was essential to capture context-specific insights on the measurability and territorial relevance of the proposed indicators. Given the stakeholder-oriented and exploratory nature of this empirical study, the primary goal was not to perform statistical generalization, but rather to elicit informed, context-sensitive judgments to support the operationalization of strategic indicators. Therefore, standard intercoder reliability measures were neither applicable nor appropriate in this case. Nevertheless, the collected data were aggregated using basic statistical synthesis procedures, with particular reference to the use of the mode as a measure of central tendency to represent the distribution of observations. This clarification has been included in the revised manuscript in Section 3.1, lines 241–246 and 259–261.
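For illustration only, the following is a minimal sketch of the mode-based aggregation described above, using hypothetical ratings rather than the actual GLOSSA survey data (the indicator names and the ordering of the qualitative scale are assumptions):

```python
from statistics import multimode

# Assumed ordinal ordering of the qualitative scale used by the experts;
# the ratings below are illustrative placeholders, not the study's data.
SCALE = ["unknown", "none", "low", "medium", "high"]

ratings = {
    "Outdoor common areas": {"relevance": ["high", "high", "medium"],
                             "calculability": ["high", "medium", "medium"]},
    "Energy Sustainability": {"relevance": ["high", "high", "high"],
                              "calculability": ["medium", "medium", "high"]},
}

def aggregate_by_mode(values):
    """Return the modal rating; ties are broken towards the higher level."""
    return max(multimode(values), key=SCALE.index)

for indicator, dimensions in ratings.items():
    summary = {dim: aggregate_by_mode(vals) for dim, vals in dimensions.items()}
    print(indicator, summary)
```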

 

COMMENT: Nearly half of the shortlisted indicators are still labelled scenario-based or qualitative and depend on bespoke project information.

ANSWER: Thank you for this observation, which we consider accurate. Many of the indicators are qualitative in nature and based on scenario-based measurements, involving different procedures depending on the specifics of the project. These indicators are well-established and derived from evaluation protocols recognized at both the national and European levels (BREEAM, GBC, and ITACA), as well as from assessment practices used nationally for evaluating housing quality (PINQUA 2022). Therefore, although they are not quantitative indicators, we regard them as valid, useful, and fundamental for the design process.

 

COMMENT: The abstract and Discussion promise an “open-source toolbox”, yet no architecture diagram, software stack, or governance model is supplied.

ANSWER: Thank you for this comment; we acknowledge the need for greater clarity. The open-source toolbox represents the overall output of the GLOSSA project, whereas the objective of this paper focuses on one of the initial stages of the research, specifically the development of the set of indicators underpinning the open-source toolbox. These points have been clarified in greater detail in the abstract and introduction.

 

COMMENT: There are technical mistakes in lines 366 and 398.

ANSWER: Thank you for pointing these out; we have corrected those errors.

 

COMMENT: Expand the validation panel to include stakeholders from at least one northern and one eastern European city, or include a sensitivity analysis showing how the final list would change under different regional data availability scenarios.

ANSWER: We appreciate the reviewer’s suggestion to expand the validation panel or include a sensitivity analysis involving Northern or Eastern European cities. However, we respectfully argue that this recommendation falls outside the scope and objectives of the present study.

As clarified in the manuscript, our research is explicitly framed within the Italian national context. The purpose is not to produce a pan-European comparative analysis, but rather to develop and test a methodology for translating strategic sustainability goals into a context-sensitive, operational indicator framework applicable at both the national and local levels within Italy.

In this regard, the selection of three regions from different parts of the country (North, Centre-South, and Islands) was intended to ensure internal diversity and territorial representativeness across the Italian context. Including stakeholders from other European cities, while certainly valuable in a broader comparative perspective, would not align with the empirical design or the intended application of the study.

What we believe can be generalized and transferred beyond the Italian case is the theoretical and methodological approach, particularly the multi-scalar reasoning and the use of expert-informed operationalization of strategic indicators. However, the specific set of indicators and their operational thresholds are inevitably bound to national and regional governance frameworks, data availability, and socio-territorial conditions.

We have clarified these points in the revised manuscript to prevent potential misunderstandings regarding the scope and intended applicability of the study. In line with our response to your first comment, we explain this issue in Section 3.1, lines 215–221, and Section 5, lines 517–519.

 

COMMENT: Either enlarge the sample (≥10 experts) or justify why three domain specialists suffice statistically. Provide Fleiss’ κ or Krippendorff’s α to demonstrate agreement levels and explain how conflicting ratings were reconciled.

ANSWER: We appreciate the reviewer’s suggestion; however, we respectfully argue that this recommendation falls outside the scope and objectives of the present study. We have clarified the rationale for selecting the three experts in Section 3.1, lines 217–222.
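Should the panel be enlarged in future work, agreement could be reported with a standard statistic such as the one the reviewer names. A minimal sketch of Fleiss’ κ follows; the rating counts are hypothetical and do not reflect the study’s data:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories matrix of rating counts.

    counts[i, j] = number of raters assigning subject i to category j;
    assumes every subject is rated by the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    # per-subject observed agreement
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # chance agreement from the marginal category proportions
    p_j = counts.sum(axis=0) / counts.sum()
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 5 indicators rated by 3 experts on a
# 3-level relevance scale (low / medium / high).
example = [
    [0, 1, 2],
    [0, 0, 3],
    [1, 1, 1],
    [0, 2, 1],
    [0, 0, 3],
]
print(round(fleiss_kappa(example), 3))
```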

 

COMMENT: For full reproducibility, supply explicit formulas, data sources, spatial resolution, and recommended update frequency in an online appendix or the open-source toolbox.

ANSWER: We thank the reviewer for the suggestion. We have provided the Excel file containing the proposed indicators.

 

COMMENT: Add at least one worked example (real or hypothetical) that shows how the “Outdoor common areas” metric would inform problem framing, how it would steer option appraisal, and how it would be monitored in the subsequent phases of the policy-making cycle.

ANSWER: Thank you for this suggestion. We have included examples in Section 4, specifically on lines 348–356 and 360–366.

 

COMMENT: Demonstrate real-world usability through a brief case study or pilot deployment.

ANSWER: We thank the reviewer for the insightful suggestion regarding the demonstration of real-world usability. While we fully agree on the importance of case study applications, we would like to clarify that the primary contribution of this paper is the development of a theoretical-methodological framework for the operationalization of urban sustainability indicators in the Italian context.

The study combines a review of academic studies with a structured empirical validation through expert consultation. These steps were designed to test the methodological soundness and contextual relevance of the proposed indicator set.

The deployment and testing of the indicators in real-world planning processes is part of the next phase of the GLOSSA project and is currently under development. As such, the present paper lays the conceptual and methodological groundwork for these future applications, which will allow for more in-depth exploration of implementation challenges and usability in diverse urban contexts.

We have clarified this positioning in Section 5, lines 504–509, to better communicate the scope and expected follow-up of the research.

Reviewer 2 Report

Comments and Suggestions for Authors

This manuscript focuses on assessing sustainability in urban projects. From an engineering, architectural, energy and 3D modelling perspective, in order to make the manuscript more understandable even for those who are not specifically from the sector, I suggest:        

  • In line 67, the phrase “not entirely appropriate” is mentioned. I recommend explaining technically at this stage why established tools are not appropriate in this context.
  • In line 213, the phrase “indicators that were not relevant...” is mentioned. I recommend specifying technically and scientifically the factors that lead to the conclusion of “irrelevance” for the district. Explain what the relevant factors are, so as to give the reader a better understanding of the factors governing the choices.
  • In line 240, there is a reference to “the ITACA protocol...”. I recommend explaining what the ITACA Protocol consists of, given the concentration of fundamental sustainability issues at the building level, including rainwater reuse, the construction of star-shaped electrical systems (to combat electromagnetic fields), and the study of the building's positioning in relation to the cardinal points to maximise solar energy. Remember its pioneering role in the study of sustainability.
  • At line 293, reference is made to “...Protocollo Itaca...but it has been excluded”. I recommend specifying in great detail the reasons for the exclusion of a tool that has been used for years as a means of assessing sustainability. Among the award points are the use of renewable energy sources, the use of local materials and the life cycle of materials. Provide a more technical and substantial explanation of the reasons for excluding the Protocol.
  • In addition, in order to improve the manuscript with useful ideas, I suggest reading the following articles:

10.3390/su15129788

10.3390/su11030813

Author Response


COMMENT: In line 67, the phrase “not entirely appropriate” is mentioned. I recommend explaining technically at this stage why established tools are not appropriate in this context.

ANSWER: Thank you for this comment. We clarified that the underlying reason is that the ISTAT and NSDS indicators operate at the national scale, whereas those proposed by the RSSD operate at the regional scale. As such, they are not fully appropriate for the urban and building scales.

 

COMMENT: In line 213, the phrase “indicators that were not relevant...” is mentioned. I recommend specifying technically and scientifically the factors that lead to the conclusion of “irrelevance” for the district. Explain what the relevant factors are, so as to give the reader a better understanding of the factors governing the choices.

ANSWER: Thank you for this suggestion. We have specified the reasons for excluding irrelevant indicators in section 3.2, lines 247-252.

 

COMMENT: In line 240, there is a reference to “the ITACA protocol...”. I recommend explaining what the ITACA Protocol consists of, given the concentration of fundamental sustainability issues at the building level, including rainwater reuse, the construction of star-shaped electrical systems (to combat electromagnetic fields), and the study of the building's positioning in relation to the cardinal points to maximise solar energy. Remember its pioneering role in the study of sustainability.

ANSWER: We thank the reviewer for this suggestion. We acknowledge the pioneering role of the ITACA protocol; however, since we also considered and mentioned other sustainability assessment tools—which we believe hold equal relevance and importance—we have included a clarifying sentence on their role and significance in promoting sustainable building and urban development in Section 3, lines 235–238.

 

COMMENT: At line 293, reference is made to “...Protocollo Itaca...but it has been excluded”. I recommend specifying in great detail the reasons for the exclusion of a tool that has been used for years as a means of assessing sustainability. Among the award points are the use of renewable energy sources, the use of local materials and the life cycle of materials. Provide a more technical and substantial explanation of the reasons for excluding the Protocol.

ANSWER: We thank the reviewer for this comment. However, we wish to clarify that the ITACA protocol was not excluded from our research. The study considered three protocols—ITACA, GBC, and BREEAM—to enrich the indicator set with operational measures applicable at the building and neighborhood scales. What we state in line 293 refers specifically to the exclusion of certain indicators, such as the “percentage reduction of the non-renewable energy performance index,” which was replaced by proxy indicators. For instance, although the “percentage reduction of the non-renewable energy performance index” is a highly sophisticated measure, it adopts a performance-based perspective rather than a prescriptive one and does not specify which sustainable and renewable technologies should be implemented in a project. Consequently, this indicator was substituted with the “Energy Sustainability” indicator, derived from the National Innovative Program for Housing Quality.

Therefore, no revisions or additions were made to the paper regarding this specific issue.

COMMENT: In addition, in order to improve the manuscript with useful ideas, I suggest reading the following articles: 10.3390/su15129788; 10.3390/su11030813

ANSWER: We thank the reviewer for this suggestion. We have supplemented the manuscript with additional references concerning the use of indicators in urban governance, including the second reference suggested (lines 137–139).

Reviewer 3 Report

Comments and Suggestions for Authors

Introduction and General Context: The manuscript under review addresses a timely and important topic – integrating sustainability into the urban policy-making cycle by proposing a set of indicators for effective governance. The introduction of the article lays out the context and rationale for the study, highlighting the need for tools to assess the sustainability of urban projects at the local level. The authors clearly explain the significance of the public policy cycle (phases (i)-(iv)) and the fact that existing global indicators (the 2030 Agenda, SDG11) do not sufficiently cover the stages of alternative formulation and implementation (phases (ii) and (iii) of the cycle). Thus, the gap that this study aims to fill is well articulated. The objective of the work is clearly stated: the development of a framework to localize and adapt global sustainability indicators to the scale of urban projects. The introduction is well structured and provides theoretical background (including the policy-making cycle concept and the relevance of SDG11) with appropriate references. It could be suggested, however, that the authors more explicitly emphasize the originality of their approach relative to existing literature. For example, a final paragraph in the introduction summarizing the novel contribution of the study (what new insight or tool this indicator framework provides compared to other urban sustainability assessment studies) would be useful. Additionally, the authors might explicitly mention in the introduction the geographic scope (European/Italian context) of the research to set appropriate expectations regarding the applicability of the results.

Theoretical Framework and Literature Review: Section 2 provides an overview of indicators for urban and architectural sustainability and their connection to the phases of the policy cycle. The authors cite relevant literature on the definition and role of indicators (e.g., references [23], [24] for the definition of indicators). They also integrate the policy cycle concept (ref. [20]) and explain how indicators can support different stages of this cycle. This theoretical framework is well constructed and helps the reader understand the context. It might be improved by including a few additional concrete examples or prior studies that have used indicators in urban governance, to better illustrate the state of the art. Also, the degree of self-citation in this section appears moderate – the authors cite some of their own work (e.g., [27], [28]) to support their points. This is acceptable if those works are directly relevant, but it would also be beneficial to incorporate other external sources to balance the perspective.

Methodology (Research Design): Section 3 describes the research design, which is divided into two main phases: (i) a systematic literature review (SLR) and (ii) indicator identification from the collected literature. The methodology is complex, combining an inductive approach (PRISMA-guided SLR) with a deductive analysis (expert surveys). The use of the PRISMA 2020 protocol is commendable, indicating rigor in the literature selection process. The authors mention that they applied a search filter in Scopus with terms such as “Indicator”, “SDG11”, etc., limited to 2015–2024 and the European context. The selection process led to 29 sources (scientific papers and technical documents) being included in the final qualitative analysis. It would be helpful for the text to specify more clearly the inclusion/exclusion criteria and the exact number of initial results versus final included studies (these details likely appear in the PRISMA Figure 2, but mentioning them in the text would increase transparency). Additionally, the inclusion of institutional documents (national and regional sustainability strategies, sustainability assessment tool manuals such as ITACA, GBC, BREEAM) is appropriate for the study’s goals, though it should be highlighted that these are not traditional academic sources but rather “grey literature.” This methodological choice is justified by the need to gather all relevant indicators, but the authors could explicitly acknowledge it.

After the literature review, the authors proceeded to indicator identification. They aggregated indicators from four main sources: the official SDG11 indicators monitored by ISTAT (32 initial indicators), national and regional sustainable development strategies (121 indicators), sustainability assessment tool protocols (ITACA, GBC, BREEAM – 109 indicators), and relevant scientific articles (18 articles contributed 79 indicators). This process yielded an impressive preliminary set of 341 indicators, organized into 17 thematic categories corresponding to 7 SDG11 targets. The authors also explain the exclusion of SDG11 targets 11.a, 11.b, and 11.c from the study’s scope, noting that these pertain to broad urban policies beyond the scale of individual projects (urban-rural linkages, policy implementation, etc.) and thus fall outside the design-oriented focus. This decision is reasonable and clearly justified in the text.

Subsequently, the methodology involves multiple filtering steps for the collected indicators. The first filtering was the removal of duplicates, reducing the number of indicators from 341 to 244 (Table 2). Then, the authors applied a relevance filter with respect to the study’s scale and context, removing indicators that were not pertinent to project evaluation at district/building level or that were overly complex/difficult to operationalize. This step also involved replacing some complicated indicators with more practical proxies – for example, the complex “Global Non-Renewable Primary Energy” indicator from the ITACA protocol was dropped and replaced with a more operational proxy indicator “Energy Sustainability,” which measures the number of renewable energy systems used. After this analysis, an intermediate set of 57 indicators was obtained, organized into 13 thematic categories. We note that some initial categories were eliminated due to lack of indicators after filtering (e.g., the categories “Expenditure on cultural goods or services,” “Deaths or injuries from disasters,” “Emissions,” and “Assaults and harassment” were removed). It would be useful for the authors to comment on the implications of removing these categories; for instance, dropping the “Emissions” category (related to environmental impact) is noteworthy and warrants a brief justification given the importance of emissions indicators for urban sustainability.

The next methodological step was the validation of the 57-indicator set through expert consultation. The authors administered a questionnaire to three academic experts (one for each region involved in the GLOSSA project: Piedmont, Campania, Sardinia) specializing in urban and architectural evaluation. The experts were asked to evaluate each of the 57 indicators in terms of its importance/relevance and calculability, using a qualitative scale (high, medium, low, none, unknown). The outcome of this assessment is clearly presented: all 57 indicators were deemed valid by the experts, though with varying levels of relevance and ease of calculation (lines 718-726). Specifically, 15 indicators were rated as “highly relevant and easily calculable,” another 15 as “highly relevant but with average calculability,” 4 as “moderately relevant and easily calculable,” 14 as “moderately relevant with average calculability,” and 9 were considered “not relevant and/or difficult to calculate”. This classification is useful for understanding the experts’ perspective on each indicator.

Finally, the authors derived the proposed final set of 27 indicators based on the expert evaluations. They first selected the 15 indicators deemed highly relevant and easy to calculate. Then they examined the distribution of these across the 13 categories, aiming to have each thematic category covered by at least one monitoring and one evaluation indicator, at both building and district scales. For categories where this “full package” was incomplete (i.e., missing indicators for certain scales or phases), they added indicators from the next tier – that is, those rated “highly relevant but moderately calculable” – to fill the gaps. Even so, the authors note that it was not possible to cover every category with all combinations (monitoring and evaluation at both levels) because the initial set did not contain suitable indicators in some cases. This process is described transparently and demonstrates a thoughtful approach in synthesizing a balanced set of indicators. However, it would be helpful to explicitly state in the text how many indicators were added beyond the initial 15 to reach the total of 27, and from which categories they came (presumably 12 additional indicators from the “high relevance, moderate calculability” group). Clarifying this would give the reader a complete picture of how the final set was constructed.
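To make the tier-based selection described above concrete, the following is a minimal sketch of the gap-filling logic; the records, tier labels, and field names are hypothetical illustrations rather than the authors’ actual data:

```python
# Hypothetical indicator records (illustrative only): each entry carries the
# expert rating tier, its thematic category, spatial scale, and policy-cycle use.
indicators = [
    {"name": "Outdoor common areas", "tier": "high/easy",
     "category": "Public space", "scale": "district", "use": "evaluation"},
    {"name": "Energy Sustainability", "tier": "high/medium",
     "category": "Energy", "scale": "building", "use": "monitoring"},
    # ... further indicator records would follow here
]

def select_final_set(indicators):
    """Start from the 'high relevance / easy calculability' tier, then fill
    uncovered (category, scale, use) slots from the next tier."""
    selected = [i for i in indicators if i["tier"] == "high/easy"]
    covered = {(i["category"], i["scale"], i["use"]) for i in selected}
    for i in indicators:
        if i["tier"] == "high/medium":
            slot = (i["category"], i["scale"], i["use"])
            if slot not in covered:
                selected.append(i)
                covered.add(slot)
    return selected

final_set = select_final_set(indicators)
print(len(final_set), [i["name"] for i in final_set])
```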

Overall, the research design is solid and well explained. The multi-method approach (SLR + expert consultation) is appropriate and lends robustness to the results. The methodological validity is high, though there are inherent limitations which the authors acknowledge (for example, the relatively small number of experts and the limited geographical scope). We still suggest that the authors provide more details, if possible, on the criteria for selecting the experts (how the three experts were chosen and whether their representation is sufficient), and whether the evaluation process was consistent (did all experts use the same definitions for “relevance” and “calculability”?). Addressing these points would further strengthen confidence in the methodology.

Results and Discussion: Section 4 presents the results, i.e. the proposed set of indicators for monitoring and evaluating urban projects in line with SDG11. The results are presented in text as well as through tables and figures, which aids clarity. Table 1 summarizes the thematic categories defined for each SDG11 target, giving the reader a clear understanding of how the authors organized the indicators by domain. Table 2 shows the numerical distribution of indicators before and after duplicate removal, highlighting how certain targets (11.3, 11.4, 11.5, 11.7) have fewer available indicators – an interesting finding that is also discussed in the conclusions. Figure 3 illustrates the classification of the 57 indicators by spatial scale (building/district) and evaluation stage (ex-ante, in-itinere, ex-post), and Figure 4 presents the characteristics of the final set of 27 indicators, highlighting the fragmentation of the set, general relevance, and issues related to data availability. The graphical presentation is useful, although in the review PDF the figure quality is likely reduced; for the final publication, it is recommended that they be provided at good resolution and in color for optimal readability.

The authors discuss the results appropriately. It is appreciated that they mention the fragmentation of the indicator set and problems regarding data availability for some indicators. This frankness about the weaknesses of the proposed set adds credibility to the study. Additionally, the results are contextualized: for example, the authors highlight that the least developed aspects in current urban indicators correspond to targets 11.3, 11.4, 11.5, 11.7, confirming the authors’ hypothesis about the still limited use of indicators outside official statistical frameworks.

Conclusions and Final Remarks: Section 5 combines the discussion with the conclusions and highlights the contributions of the research. The authors reiterate how the final set of 27 indicators fills an operational gap in the SDG11 indicators by enabling the evaluation of projects at building and district scales in the implementation and monitoring stages, which adds value for urban governance. The original contribution of the work is clear: integrating indicators from non-institutional sources (scientific literature, assessment tools) with official ones, positioning them strategically within the urban policy-making cycle, and proposing a framework that is replicable for other contexts. The potential impact of the study is significant, especially for local authorities and planners seeking to align urban projects with the Sustainable Development Goals in a concrete, measurable way.

The authors also enumerate the limitations of the study, which is highly appreciated. They acknowledge that the literature filtering and strong focus on European and national sources may have excluded some useful indicators. They also acknowledge that the expert validation was limited to knowledge from three regions (Piedmont, Campania, Sardinia), meaning the set of 27 indicators is somewhat specific to the context of those regions. This limitation implies that the results should be generalized with caution. In addition, the authors mention that they did not, in this paper, delve into the details of the units of measurement or calculation methods for each indicator in the final set, noting that some indicators from SATs involve scenario-based calculations that may not be relevant in some contexts. All of these limitations are valid and it is very good that the authors openly discuss them. As a suggestion, it might be helpful for the conclusion to explicitly underline that the proposed indicator set is a starting point that needs to be adapted to different local contexts, and that it complements rather than replaces official indicators. Also, explicitly mentioning any plans to extend the study (which the authors do – they outline future steps such as applying the framework to a national scale, broader consultations with other public authorities, and field-testing the open-source tool) gives clear directions for follow-up, which is a positive aspect.

Quality of Writing and Technical Issues: The article is written in English at a generally good level. The text is mostly clear and well organized. However, there are a few minor language or typographical errors that should be corrected upon revision:

  • For example, the text shows “2030Agenda” without a space, whereas it should be written “2030 Agenda”.
  • The phrase “the Italian Nation as well as the Italian Regions define their positioning...” reads somewhat oddly; I recommend rephrasing for clarity, e.g., “both the national and regional authorities in Italy assess their positioning...”.
  • In Table 1, the word “pulic” is a typo and should be corrected to “public” (“Presence and quality of green areas or public spaces”).
  • In the reference list, reference [42] has “11 Novemebr 2024”, which is a spelling error (“November”). Such minor spelling mistakes should be fixed.
  • Additionally, the clause “led to the a priori exclusion of useful indicators” is stylistically awkward. It would be clearer to say “could have led to the exclusion a priori of some useful indicators” or rephrase to avoid the “the a priori” construction.

References and Self-Citation: The reference list is extensive (42 entries) and includes relevant and recent sources, including documents from 2023-2024, indicating that the scientific context is up-to-date. The references cover both academic literature (ISI articles such as Sustainability, Smart Cities, etc.) and institutional reports (OECD, UN, ISTAT) and standards (PRISMA guidelines, BREEAM manual), reflecting the interdisciplinary nature of the topic. The degree of self-citation is noticeable; the authors cite their own previous works multiple times (at least 5-6 out of 42 sources, e.g., references 2, 27, 28, 34, 35, 36 include the authors of this paper). This is not necessarily problematic since those works appear relevant to the subject (e.g., the authors’ earlier studies on SDG11 indicators in the Italian context). However, I recommend that the authors ensure each self-citation is well justified and essential to the argument, and avoid over-reliance on their own publications at the expense of independent sources. Overall, the bibliography seems adequate and balanced, combining theoretical and practical sources. It might be helpful to add a few references to similar international studies that have proposed urban indicator frameworks (if any recent ones exist), to more clearly position the article’s contribution in relation to them.

Originality, Relevance, and Applicability: The study makes an original contribution by proposing an integrated set of specialized indicators for assessing the sustainability of urban projects at the local level, addressing a practical need in urban governance. In the literature, there are many studies on urban sustainable development indicators, but this work stands out through its combination of global indicators with local specificity (Italian/regional context) and its direct alignment with the policy-making cycle. The originality also lies in the synthesis method (systematically combining diverse sources and expert validation) and the focus on SDG11 at building/district scale – a scale at which official indicators are not typically defined. The practical relevance of the results is high: the set of 27 indicators can potentially be used by local public authorities or urban planners to monitor and evaluate the impact of urban projects in line with sustainability objectives. However, as the authors themselves acknowledge, the direct applicability of the set is currently limited to the studied context (three regions in Italy). To increase international impact, further adaptations and validations in other contexts would be needed. The fact that the authors indicate the possibility of replicating the methodological framework in other regions and scaling it up nationally is promising, as it suggests the potential for generalization. In my opinion, once revised according to the suggestions above, this paper will be a valuable contribution to the discourse on sustainable cities and data-driven governance tools.

Conclusion and Overall Recommendation: In conclusion, the manuscript is well-structured and addresses a contemporary topic with a rigorous methodology, presenting results that are useful to both practitioners and researchers. The comments above are intended to improve the clarity and quality of the work, but they do not detract from its underlying scientific value. I encourage the authors to incorporate these suggestions to ensure the final version is as strong as possible. Overall, the paper has clear merit for publication, pending the minor revisions noted.

Author Response

COMMENT: Introduction and General Context: The manuscript under review addresses a timely and important topic – integrating sustainability into the urban policy-making cycle by proposing a set of indicators for effective governance. The introduction of the article lays out the context and rationale for the study, highlighting the need for tools to assess the sustainability of urban projects at the local level. The authors clearly explain the significance of the public policy cycle (phases (i)-(iv)) and the fact that existing global indicators (the 2030 Agenda, SDG11) do not sufficiently cover the stages of alternative formulation and implementation (phases (ii) and (iii) of the cycle). Thus, the gap that this study aims to fill is well articulated. The objective of the work is clearly stated: the development of a framework to localize and adapt global sustainability indicators to the scale of urban projects. The introduction is well structured and provides theoretical background (including the policy-making cycle concept and the relevance of SDG11) with appropriate references. It could be suggested, however, that the authors more explicitly emphasize the originality of their approach relative to existing literature. For example, a final paragraph in the introduction summarizing the novel contribution of the study (what new insight or tool this indicator framework provides compared to other urban sustainability assessment studies) would be useful. Additionally, the authors might explicitly mention in the introduction the geographic scope (European/Italian context) of the research to set appropriate expectations regarding the applicability of the results.

ANSWER: We thank the reviewer for this suggestion, which we have addressed and detailed in the introduction, lines 92–99.

COMMENT: Theoretical Framework and Literature Review: Section 2 provides an overview of indicators for urban and architectural sustainability and their connection to the phases of the policy cycle. The authors cite relevant literature on the definition and role of indicators (e.g., references [23], [24] for the definition of indicators). They also integrate the policy cycle concept (ref. [20]) and explain how indicators can support different stages of this cycle. This theoretical framework is well constructed and helps the reader understand the context. It might be improved by including a few additional concrete examples or prior studies that have used indicators in urban governance, to better illustrate the state of the art. Also, the degree of self-citation in this section appears moderate – the authors cite some of their own work (e.g., [27], [28]) to support their points. This is acceptable if those works are directly relevant, but it would also be beneficial to incorporate other external sources to balance the perspective.

ANSWER: We thank the reviewer for the positive feedback regarding the structure and the moderate level of self-citation supporting our arguments. We have taken into account the suggestion to integrate additional references concerning the use of indicators in urban governance (lines 137–139).

COMMENT: Methodology (Research Design): Section 3 describes the research design, which is divided into two main phases: (i) a systematic literature review (SLR) and (ii) indicator identification from the collected literature. The methodology is complex, combining an inductive approach (PRISMA-guided SLR) with a deductive analysis (expert surveys). The use of the PRISMA 2020 protocol is commendable, indicating rigor in the literature selection process. The authors mention that they applied a search filter in Scopus with terms such as “Indicator”, “SDG11”, etc., limited to 2015–2024 and the European context. The selection process led to 29 sources (scientific papers and technical documents) being included in the final qualitative analysis. It would be helpful for the text to specify more clearly the inclusion/exclusion criteria and the exact number of initial results versus final included studies (these details likely appear in the PRISMA Figure 2, but mentioning them in the text would increase transparency).

ANSWER: We thank the reviewer for the helpful observation. We acknowledge the importance of providing greater transparency regarding the exclusion process within the PRISMA-based literature review. The papers were excluded based on a combination of content-related and methodological criteria. We have added a summary table in Section 3.1. This addition enhances the transparency of our review process and supports reproducibility for future research.

 

COMMENT: Additionally, the inclusion of institutional documents (national and regional sustainability strategies, sustainability assessment tool manuals such as ITACA, GBC, BREEAM) is appropriate for the study’s goals, though it should be highlighted that these are not traditional academic sources but rather “grey literature.” This methodological choice is justified by the need to gather all relevant indicators, but the authors could explicitly acknowledge it.

ANSWER: We thank the reviewer for this remark. The requested clarification has been incorporated into Section 3.1, lines 208-213.

COMMENT: After the literature review, the authors proceeded to indicator identification. They aggregated indicators from four main sources: the official SDG11 indicators monitored by ISTAT (32 initial indicators), national and regional sustainable development strategies (121 indicators), sustainability assessment tool protocols (ITACA, GBC, BREEAM – 109 indicators), and relevant scientific articles (18 articles contributed 79 indicators). This process yielded an impressive preliminary set of 341 indicators, organized into 17 thematic categories corresponding to 7 SDG11 targets. The authors also explain the exclusion of SDG11 targets 11.a, 11.b, and 11.c from the study’s scope, noting that these pertain to broad urban policies beyond the scale of individual projects (urban-rural linkages, policy implementation, etc.) and thus fall outside the design-oriented focus. This decision is reasonable and clearly justified in the text. Subsequently, the methodology involves multiple filtering steps for the collected indicators. The first filtering was the removal of duplicates, reducing the number of indicators from 341 to 244 (Table 2). Then, the authors applied a relevance filter with respect to the study’s scale and context, removing indicators that were not pertinent to project evaluation at district/building level or that were overly complex/difficult to operationalize. This step also involved replacing some complicated indicators with more practical proxies – for example, the complex “Global Non-Renewable Primary Energy” indicator from the ITACA protocol was dropped and replaced with a more operational proxy indicator “Energy Sustainability,” which measures the number of renewable energy systems used. After this analysis, an intermediate set of 57 indicators was obtained, organized into 13 thematic categories. We note that some initial categories were eliminated due to lack of indicators after filtering (e.g., the categories “Expenditure on cultural goods or services,” “Deaths or injuries from disasters,” “Emissions,” and “Assaults and harassment” were removed). It would be useful for the authors to comment on the implications of removing these categories; for instance, dropping the “Emissions” category (related to environmental impact) is noteworthy and warrants a brief justification given the importance of emissions indicators for urban sustainability.

ANSWER: We thank the Reviewer for the valuable suggestion, with which we fully agree. The removal of this category is undoubtedly a relevant aspect. However, the set of indicators collected and proposed in this paper is based on existing sources and was developed according to a specific methodology. We believe that this aspect could represent a valuable direction for future research aimed at developing new indicators to address current gaps in the measurement of sustainability at both the district and building scales. We have addressed this issue in the conclusions, lines 486-495.

COMMENT: The next methodological step was the validation of the 57-indicator set through expert consultation. The authors administered a questionnaire to three academic experts (one for each region involved in the GLOSSA project: Piedmont, Campania, Sardinia) specializing in urban and architectural evaluation. The experts were asked to evaluate each of the 57 indicators in terms of its importance/relevance and calculability, using a qualitative scale (high, medium, low, none, unknown). The outcome of this assessment is clearly presented: all 57 indicators were deemed valid by the experts, though with varying levels of relevance and ease of calculation (lines 718-726). Specifically, 15 indicators were rated as “highly relevant and easily calculable,” another 15 as “highly relevant but with average calculability,” 4 as “moderately relevant and easily calculable,” 14 as “moderately relevant with average calculability,” and 9 were considered “not relevant and/or difficult to calculate”. This classification is useful for understanding the experts’ perspective on each indicator. Finally, the authors derived the proposed final set of 27 indicators based on the expert evaluations. They first selected the 15 indicators deemed highly relevant and easy to calculate. Then they examined the distribution of these across the 13 categories, aiming to have each thematic category covered by at least one monitoring and one evaluation indicator, at both building and district scales. For categories where this “full package” was incomplete (i.e., missing indicators for certain scales or phases), they added indicators from the next tier – that is, those rated “highly relevant but moderately calculable” – to fill the gaps. Even so, the authors note that it was not possible to cover every category with all combinations (monitoring and evaluation at both levels) because the initial set did not contain suitable indicators in some cases. This process is described transparently and demonstrates a thoughtful approach in synthesizing a balanced set of indicators. However, it would be helpful to explicitly state in the text how many indicators were added beyond the initial 15 to reach the total of 27, and from which categories they came (presumably 12 additional indicators from the “high relevance, moderate calculability” group). Clarifying this would give the reader a complete picture of how the final set was constructed. Overall, the research design is solid and well explained. The multi-method approach (SLR + expert consultation) is appropriate and lends robustness to the results. The methodological validity is high, though there are inherent limitations which the authors acknowledge (for example, the relatively small number of experts and the limited geographical scope). We still suggest that the authors provide more details, if possible, on the criteria for selecting the experts (how the three experts were chosen and whether their representation is sufficient), and whether the evaluation process was consistent (did all experts use the same definitions for “relevance” and “calculability”?). Addressing these points would further strengthen confidence in the methodology.

ANSWER: We thank the Reviewer for these comments. We have added relevant information regarding the experts in Section 3.1, lines 240–245. All experts involved were informed of the objective to observe and evaluate the indicators in terms of their relevance and calculability with respect to the Italian context and their specific regional background (added in Section 3.1, lines 246–248). This procedure was designed to ensure consistency across the evaluations. Nevertheless, we acknowledge that some degree of variability among evaluators is inevitable.

COMMENT: Results and Discussion: Section 4 presents the results, i.e. the proposed set of indicators for monitoring and evaluating urban projects in line with SDG11. The results are presented in text as well as through tables and figures, which aids clarity. Table 1 summarizes the thematic categories defined for each SDG11 target, giving the reader a clear understanding of how the authors organized the indicators by domain. Table 2 shows the numerical distribution of indicators before and after duplicate removal, highlighting how certain targets (11.3, 11.4, 11.5, 11.7) have fewer available indicators – an interesting finding that is also discussed in the conclusions. Figure 3 illustrates the classification of the 57 indicators by spatial scale (building/district) and evaluation stage (ex-ante, in-itinere, ex-post), and Figure 4 presents the characteristics of the final set of 27 indicators, highlighting the fragmentation of the set, general relevance, and issues related to data availability. The graphical presentation is useful, although in the review PDF the figure quality is likely reduced; for the final publication, it is recommended that they be provided at good resolution and in color for optimal readability.

ANSWER: We thank the Reviewer for the suggestion. We have checked the quality and resolution of the images to ensure they are appropriate.

COMMENT: The authors discuss the results appropriately. It is appreciated that they mention the fragmentation of the indicator set and problems regarding data availability for some indicators. This frankness about the weaknesses of the proposed set adds credibility to the study. Additionally, the results are contextualized: for example, the authors highlight that the least developed aspects in current urban indicators correspond to targets 11.3, 11.4, 11.5, 11.7, confirming the authors’ hypothesis about the still limited use of indicators outside official statistical frameworks.

COMMENT: Conclusions and Final Remarks: Section 5 combines the discussion with the conclusions and highlights the contributions of the research. The authors reiterate how the final set of 27 indicators fills an operational gap in the SDG11 indicators by enabling the evaluation of projects at building and district scales in the implementation and monitoring stages, which adds value for urban governance. The original contribution of the work is clear: integrating indicators from non-institutional sources (scientific literature, assessment tools) with official ones, positioning them strategically within the urban policy-making cycle, and proposing a framework that is replicable for other contexts. The potential impact of the study is significant, especially for local authorities and planners seeking to align urban projects with the Sustainable Development Goals in a concrete, measurable way.

The authors also enumerate the limitations of the study, which is highly appreciated. They acknowledge that the literature filtering and strong focus on European and national sources may have excluded some useful indicators. They also acknowledge that the expert validation was limited to knowledge from three regions (Piedmont, Campania, Sardinia), meaning the set of 27 indicators is somewhat specific to the context of those regions. This limitation implies that the results should be generalized with caution. In addition, the authors mention that they did not, in this paper, delve into the details of the units of measurement or calculation methods for each indicator in the final set, noting that some indicators from SATs involve scenario-based calculations that may not be relevant in some contexts. All of these limitations are valid and it is very good that the authors openly discuss them. As a suggestion, it might be helpful for the conclusion to explicitly underline that the proposed indicator set is a starting point that needs to be adapted to different local contexts, and that it complements rather than replaces official indicators. Also, explicitly mentioning any plans to extend the study (which the authors do – they outline future steps such as applying the framework to a national scale, broader consultations with other public authorities, and field-testing the open-source tool) gives clear directions for follow-up, which is a positive aspect.

ANSWER: We thank the reviewer for this suggestion. As a result, we have added and clarified in the Conclusions (lines 513-515) that the proposed set of indicators does not replace the official SDG 11 framework, but rather complements it, serving as a starting point that can be further developed and expanded.

COMMENT: Quality of Writing and Technical Issues: The article is written in English at a generally good level. The text is mostly clear and well organized. However, there are a few minor language or typographical errors that should be corrected upon revision:

For example, the text shows “2030Agenda” without a space, whereas it should be written “2030 Agenda”.

The phrase “the Italian Nation as well as the Italian Regions define their positioning...” reads somewhat oddly; I recommend rephrasing for clarity, e.g., “both the national and regional authorities in Italy assess their positioning...”.

In Table 1, the word “pulic” is a typo and should be corrected to “public” (“Presence and quality of green areas or public spaces”).

In the reference list, reference [42] has “11 Novemebr 2024”, which is a spelling error (“November”). Such minor spelling mistakes should be fixed.

Additionally, the clause “led to the a priori exclusion of useful indicators” is stylistically awkward. It would be clearer to say “could have led to the exclusion a priori of some useful indicators” or rephrase to avoid the “the a priori” construction.

ANSWER: We thank the Reviewer for these revisions. We have corrected and addressed them in the manuscript.

 

COMMENT: References and Self-Citation: The reference list is extensive (42 entries) and includes relevant and recent sources, including documents from 2023-2024, indicating that the scientific context is up-to-date. The references cover both academic literature (ISI articles such as Sustainability, Smart Cities, etc.) and institutional reports (OECD, UN, ISTAT) and standards (PRISMA guidelines, BREEAM manual), reflecting the interdisciplinary nature of the topic. The degree of self-citation is noticeable; the authors cite their own previous works multiple times (at least 5-6 out of 42 sources, e.g., references 2, 27, 28, 34, 35, 36 include the authors of this paper). This is not necessarily problematic since those works appear relevant to the subject (e.g., the authors’ earlier studies on SDG11 indicators in the Italian context). However, I recommend that the authors ensure each self-citation is well justified and essential to the argument, and avoid over-reliance on their own publications at the expense of independent sources. Overall, the bibliography seems adequate and balanced, combining theoretical and practical sources. It might be helpful to add a few references to similar international studies that have proposed urban indicator frameworks (if any recent ones exist), to more clearly position the article’s contribution in relation to them.

ANSWER: We thank the reviewer for the thorough analysis and positive feedback regarding the relevance of the cited sources. We have carefully re-examined all the self-citations in the manuscript, confirming their pertinence, and have committed to incorporating additional sources as suggested in the previous comments.

COMMENT: Originality, Relevance, and Applicability: The study makes an original contribution by proposing an integrated set of specialized indicators for assessing the sustainability of urban projects at the local level, addressing a practical need in urban governance. In the literature, there are many studies on urban sustainable development indicators, but this work stands out through its combination of global indicators with local specificity (Italian/regional context) and its direct alignment with the policy-making cycle. The originality also lies in the synthesis method (systematically combining diverse sources and expert validation) and the focus on SDG11 at building/district scale – a scale at which official indicators are not typically defined. The practical relevance of the results is high: the set of 27 indicators can potentially be used by local public authorities or urban planners to monitor and evaluate the impact of urban projects in line with sustainability objectives. However, as the authors themselves acknowledge, the direct applicability of the set is currently limited to the studied context (three regions in Italy). To increase international impact, further adaptations and validations in other contexts would be needed. The fact that the authors indicate the possibility of replicating the methodological framework in other regions and scaling it up nationally is promising, as it suggests the potential for generalization. In my opinion, once revised according to the suggestions above, this paper will be a valuable contribution to the discourse on sustainable cities and data-driven governance tools.

COMMENT: Conclusion and Overall Recommendation: In conclusion, the manuscript is well-structured and addresses a contemporary topic with a rigorous methodology, presenting results that are useful to both practitioners and researchers. The comments above are intended to improve the clarity and quality of the work, but they do not detract from its underlying scientific value. I encourage the authors to incorporate these suggestions to ensure the final version is as strong as possible. Overall, the paper has clear merit for publication, pending the minor revisions noted.

Reviewer 4 Report

Comments and Suggestions for Authors

1. The paper mentions that the research mainly focuses on three Italian regions (Piedmont, Campania, and Sardinia). It is suggested that the reasons for choosing these regions be explained in the methods section, for example whether they represent different types of cities. It could also be mentioned how the method might be extended to other places in the future, such as other European countries or even worldwide.
2. Although the PRISMA method was used to screen the literature, the details of how papers were excluded are insufficient. Why were 441 papers excluded? Was it because the theme did not match or the data quality was poor? It is suggested to add a table briefly listing the main reasons for exclusion. This would give readers a clearer understanding of how the research gradually narrowed its scope and would make it easier for others to replicate the study.
3. The paper states that three experts were invited to verify the indicators, but it does not specify who these experts were (scholars, government officials, or industry practitioners?), nor does it report the specific questions in the questionnaire. It is suggested to supplement the experts' background information, such as their professional fields and work experience, and to briefly describe the content of the questionnaire (e.g., scoring items or open-ended questions). It would also be worth discussing whether the experts' opinions might be biased, for example whether they pay more attention to certain types of indicators.
4. It is suggested to add some details in the discussion section, such as whether some otherwise suitable indicators were abandoned due to data issues. Are there alternative solutions, such as using alternative data sources or simplifying the calculation methods? This would give readers a clearer understanding of whether these indicators can be used in practice.
5. In the end, 27 indicators were selected, but none is identified as more important than the others. In reality, decision-makers may need to make trade-offs, for example between "air quality" and "transportation convenience". It is suggested to address this in the discussion or future research section. For instance, expert scoring or statistical methods (such as AHP) could be used to prioritize indicators, which would make the set more convenient for practical application.

Author Response

COMMENT: The paper mentions that the research mainly focuses on three Italian regions (Piedmont, Campania, and Sardinia). It is suggested that the reasons for choosing these regions be explained in the methods section, for example whether they represent different types of cities. It could also be mentioned how the method might be extended to other places in the future, such as other European countries or even worldwide.

ANSWER: Thank you for your valuable comment. We have clarified this point in the revised manuscript (Section 3.1, lines 215-221, and Section 5, lines 517-519).


COMMENT: Although the PRISMA method was used to screen the literature, the details of how papers were excluded are insufficient. Why were 441 papers excluded? Was it because the theme did not match or the data quality was poor? It is suggested to add a table briefly listing the main reasons for exclusion. This would give readers a clearer understanding of how the research gradually narrowed its scope and would make it easier for others to replicate the study.

ANSWER: We thank the reviewer for this helpful observation, and we acknowledge the importance of providing greater transparency regarding the exclusion process within the PRISMA-based literature review. The papers were excluded based on a combination of content-related and methodological criteria. As suggested, we have added a summary table in Section 3.1. This addition enhances the transparency of our review process and supports reproducibility for future research.
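For illustration only, the minimal sketch below shows one way such an exclusion summary could be produced reproducibly, by tallying screening decisions per reason; the reason labels and the records are hypothetical placeholders, not the actual screening data behind the 441 excluded papers.

```python
# Hypothetical sketch: tallying PRISMA exclusion reasons into a summary table.
# The reason labels and screened records below are illustrative only.
from collections import Counter

screened = [
    {"id": "rec001", "excluded": True,  "reason": "off-topic (not SDG 11 / urban projects)"},
    {"id": "rec002", "excluded": True,  "reason": "no indicator-level detail"},
    {"id": "rec003", "excluded": False, "reason": None},
    {"id": "rec004", "excluded": True,  "reason": "scale not building/district level"},
    {"id": "rec005", "excluded": True,  "reason": "off-topic (not SDG 11 / urban projects)"},
]

# Count excluded records by reason and print a simple two-column summary.
reasons = Counter(rec["reason"] for rec in screened if rec["excluded"])
print(f"{'Exclusion reason':<45}{'N'}")
for reason, n in reasons.most_common():
    print(f"{reason:<45}{n}")
```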


COMMENT: The paper states that three experts were invited to verify the indicators, but it does not specify who these experts were (scholars, government officials, or industry practitioners?), nor does it report the specific questions in the questionnaire. It is suggested to supplement the experts' background information, such as their professional fields and work experience, and to briefly describe the content of the questionnaire (e.g., scoring items or open-ended questions). It would also be worth discussing whether the experts' opinions might be biased, for example whether they pay more attention to certain types of indicators.

ANSWER: Thank you for these suggestions. We believe that the experts, as academics working in the relevant research field, can provide an impartial judgment free from conflicts of interest.

We have added information about the experts' backgrounds and about the content of the questionnaire in Section 3.2, lines 241-249.
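For illustration only, the minimal sketch below shows one possible way to aggregate scored questionnaire responses from three experts, retaining an indicator when its median rating meets a relevance threshold; the indicator names, scores, and threshold are hypothetical assumptions, not the questionnaire data reported in Section 3.2.

```python
# Hypothetical sketch: aggregating three experts' 1-5 relevance scores per
# indicator and retaining indicators whose median score is at least 4.
# Names, scores, and the threshold are illustrative assumptions only.
from statistics import median

expert_scores = {
    "Outdoor common areas":      [5, 4, 4],
    "Air quality":               [5, 5, 4],
    "Scenario-based energy use": [3, 2, 4],
}

THRESHOLD = 4
retained = {ind for ind, scores in expert_scores.items()
            if median(scores) >= THRESHOLD}

for ind, scores in expert_scores.items():
    status = "retained" if ind in retained else "discarded"
    print(f"{ind:<28} median = {median(scores)}  -> {status}")
```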


COMMENT: It is suggested to add some details in the discussion section, such as whether some otherwise suitable indicators were abandoned due to data issues. Are there alternative solutions, such as using alternative data sources or simplifying the calculation methods? This would give readers a clearer understanding of whether these indicators can be used in practice.

ANSWER: Thank you for this suggestion. We have clarified this issue and provided an illustrative example in Section 5, lines 486-491.


COMMENT: In the end, 27 indicators were selected, but none is identified as more important than the others. In reality, decision-makers may need to make trade-offs, for example between "air quality" and "transportation convenience". It is suggested to address this in the discussion or future research section. For instance, expert scoring or statistical methods (such as AHP) could be used to prioritize indicators, which would make the set more convenient for practical application.

ANSWER: We thank the reviewer for this thoughtful observation. We fully agree that in real-world decision-making processes, trade-offs and prioritization among indicators are necessary.

However, the set of 27 indicators proposed in this study is not intended to be interpreted as a ranked list. Rather, it represents a core, non-reducible framework of indicators that are considered essential for selecting, designing, and monitoring urban areas in line with sustainability goals.

That said, we recognize the need for prioritization tools in context-specific applications. For this reason, a ranking phase based on the Best Worst Method (BWM) is foreseen in the next steps of the GLOSSA project. This method will be used to assess the relative importance of the 27 indicators in specific case studies, acknowledging that the weight and relevance of each indicator may vary significantly depending on the territorial context, planning goals, and stakeholder perspectives.

Therefore, a universal ranking of the indicators is neither feasible nor desirable within the scope of this paper, which focuses on establishing a shared and adaptable indicator set. We have included this clarification in the Discussion section (lines 524-526) of the revised manuscript and outlined the planned implementation of context-sensitive weighting as part of future research.
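For illustration, a minimal sketch of the linear Best Worst Method (BWM) model is shown below for a toy subset of three indicators; the indicator names and pairwise comparison values are hypothetical assumptions for this example and do not reflect GLOSSA project data or case studies.

```python
# Minimal sketch of the linear BWM: minimize the consistency slack xi subject to
# |w_best - a_Bj * w_j| <= xi and |w_j - a_jW * w_worst| <= xi, with weights
# summing to 1. Indicator names and comparison vectors are hypothetical.
import numpy as np
from scipy.optimize import linprog

indicators = ["Air quality", "Transportation convenience", "Outdoor common areas"]
best, worst = 0, 2                 # indices the expert judges best and worst
best_to_others = [1, 2, 5]         # a_Bj: preference of the best over indicator j
others_to_worst = [5, 3, 1]        # a_jW: preference of indicator j over the worst

n = len(indicators)
c = np.zeros(n + 1)
c[-1] = 1.0                        # decision variables: [w_1..w_n, xi]; minimize xi

A_ub, b_ub = [], []
for j in range(n):
    v = np.zeros(n); v[best] += 1.0; v[j] -= best_to_others[j]    # w_best - a_Bj*w_j
    A_ub += [np.append(v, -1.0), np.append(-v, -1.0)]; b_ub += [0.0, 0.0]
    u = np.zeros(n); u[j] += 1.0; u[worst] -= others_to_worst[j]  # w_j - a_jW*w_worst
    A_ub += [np.append(u, -1.0), np.append(-u, -1.0)]; b_ub += [0.0, 0.0]

A_eq = [np.append(np.ones(n), 0.0)]                               # weights sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + 1))

for name, w in zip(indicators, res.x[:n]):
    print(f"{name:<28} weight = {w:.3f}")
print(f"consistency slack xi = {res.x[-1]:.3f}")
```

In a real application the comparison vectors would come from the experts' questionnaires, and the resulting weights (and the consistency slack) would be reported per case study rather than as a single universal ranking, in line with the answer above.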
