Monitoring Postfire Biodiversity Dynamics in Mediterranean Pine Forests Using Acoustic Indices
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
See attached file.
Comments for author File: Comments.pdf
Author Response
Dear reviewer,
Thank you for the insightful comments. Please find below the responses to each comment made.
General Comments
This paper focuses on an important and promising method of evaluating biodiversity in altered landscapes. There are many analyses presented, and I had trouble getting to the point of some of them. There seemed to be errors (omissions) in some, and I had questions about some of the presentations. The paper would benefit from having a series of figures showing scatterplots of values for the five acoustic indices against values for species richness and Shannon Diversity Index. Specifics are provided below.
Response
Thank you for taking the time to review our manuscript. We truly appreciate your suggestions.
Comment 1
Keywords. I would add “Mediterranean” in front of “pine forests.”
Response 1
Thank you. The keyword is added.
Comment 2
Section 1, line 79. Provide a reference for BirdNet.
Response 2
Thank you. A new reference is added:
“Wood, C.M.; Kahl, S.; Chaon, P.; Peery, M.Z.; Klinck, H. Survey Coverage, Recording Duration and Community Composition Affect Observed Species Richness in Passive Acoustic Surveys. Methods in Ecology and Evolution 2021, 12, 885–896, doi:10.1111/2041-210X.13571.”
Comment 3
Section 1, line 83. Some readers may not know about eBird or eBird hotspots. Perhaps add text such as “in programs such as eBird, administered by the Cornell Laboratory of Ornithology” after the mention of citizen science participation in line 75. Provide a reference.
Response 3
Thank you for your comment. We agree. We have added the following text and reference:
As citizen science in bird-watching and monitoring evolves, tools like eBird, managed by the Cornell Lab of Ornithology, support professional scientists and naturalists in contributing valuable data on bird distributions and species diversity (Sullivan et al., 2014).
Sullivan, B.L.; Aycrigg, J.L.; Barry, J.H.; Bonney, R.E.; Bruns, N.; Cooper, C.B.; Damoulas, T.; Dhondt, A.A.; Dietterich, T.; Farnsworth, A.; et al. The eBird Enterprise: An Integrated Approach to Development and Application of Citizen Science. Biological Conservation 2014, 169, 31–40, doi:10.1016/j.biocon.2013.11.003.
Comment 4
Section 1, lines 127-129. How do these indices differ? Do they emphasize different components of spectrograms? I see details are provided in Section 2.3 but stating that these indices emphasize or de-emphasize different characteristics or components of the spectrogram would be useful here.
Response 4
Thank you for your comment. We agree. We have added a brief description of the acoustic indices in section 1 and included references.
Comment 5
Section 1, lines 137-139. These objectives do not get proper emphasis in Section 3.2. A lot of analysis results are presented, but definitive information on which index is best and over what time period is not.
Response 5
Thank you for your comment. We have included key takeaways in lines 555-567 in order to make our results clearer. More specifically, we added the following text:
“In the case of the best performing indicators in our study, the Bioacoustic Index (BI) and the Normalized Difference Soundscape Index (NDSI), the optimal times to monitor biodiversity in post-fire Mediterranean pine forests are summer dawns and spring dawns, respectively. At these periods, variation among vocal bird species is at its maximum and the activity of an acoustically dominant species, the European nightjar, is lower. While our exclusion of files dominated by European nightjar calls yielded only subtle improvements in the predictive performance of the two acoustic indicators, the results show that this is an area that should be explored further in future studies assessing the effectiveness of acoustic indicators as a tool for rapid biodiversity assessment. In summary, our results show both the potential of BI and NDSI and the additional research needed before they can be used to rapidly and effectively monitor pine forest restoration and, ultimately, inform post-fire management strategies in these habitats.”
Comment 6
Section 2.1. More detail should be provided regarding the placement of sensors. One important consideration is how near the sensors are to the boundaries of each site because of the influence of offsite conditions and recording of offsite bird sounds. Was an attempt made to place all sensors away from boundaries? How much variability in vegetation and terrain was there within and among sites that could influence the results.
Response 6
We agree that we can improve the description of how sensors were placed away from habitat edges (e.g., between burnt and unburnt pine forest, or between pine forest and other habitats) and roads. The sensors were placed over 200 m away from edges and a minimum of 50 m from forest roads. We worked with the assumption that most bird calls recorded originated within a 50 m radius (with exceptions being larger birds such as raptors, owls and corvids). The Aleppo pine forest across the Sithonia peninsula is similar in its understory vegetation. The "unburnt", "burnt 2001", and "burnt 2009" sites are part of a contiguous Aleppo pine forest and differ only in their fire history. The "burnt 2018" site is further away, but it was selected because (a) it was listed as Aleppo pine forest in the forest/habitat maps, and (b) the unburnt pine forests at the periphery of the burnt patch had understory vegetation similar to that of the unburnt site.
We included the following text in lines 202-207: “As the Aleppo pine forests across the Sithonia peninsula are similar in forest structure, constituting in effect a continuous forest, we consider the selected sites to differ only in their burning history. Also, based on field observations, we consider most bird calls recorded as having originated within a 50 m radius of the AUs, with the exception of a few larger species (e.g., raptors, owls, and corvids).”
Comment 7
Section 2.3, line 216 and Figure 3. What is the difference between NDSIa and NDSIb? It seems the default and adjusted values are the same only for NDSIb.
Response 7
We are grateful for this comment. We neglected to mention this vital difference. The a and b in NDSI refer to the anthropophony and biophony components of the recording. More specifically, a lower frequency range is used for man-made sounds and a higher one for biophonic sounds. For this research, we adjusted the default frequency range of the biophony component (NDSIb) from 2–11 kHz to 2–10 kHz, according to the frequency range of the bird species identified.
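For illustration, a minimal Python sketch of this band-ratio logic is given below. The 1–2 kHz anthropophony band is an assumed default, the test signal is synthetic, and the function only demonstrates the idea; the indices reported in the study were computed with established acoustic-index software.

```python
# Minimal sketch of the NDSI band-ratio logic on a synthetic signal. The
# 1-2 kHz anthropophony band is an assumed default; the biophony band shows
# the adjustment from the 2-11 kHz default to 2-10 kHz described above.
import numpy as np
from scipy.signal import welch

def ndsi(signal, sr, anthro_band=(1000, 2000), bio_band=(2000, 10000)):
    """NDSI = (biophony - anthropophony) / (biophony + anthropophony)."""
    freqs, psd = welch(signal, fs=sr, nperseg=4096)
    anthro = psd[(freqs >= anthro_band[0]) & (freqs < anthro_band[1])].sum()
    bio = psd[(freqs >= bio_band[0]) & (freqs < bio_band[1])].sum()
    return (bio - anthro) / (bio + anthro)

# Synthetic 10 s clip: a birdsong-like tone at 4 kHz plus a weaker 1.5 kHz hum.
sr = 44100
t = np.arange(0, 10, 1 / sr)
clip = np.sin(2 * np.pi * 4000 * t) + 0.3 * np.sin(2 * np.pi * 1500 * t)

print(ndsi(clip, sr, bio_band=(2000, 10000)))   # positive: biophony dominates
```

Narrowing the biophony band to 2–10 kHz simply changes which spectral bins are summed in the numerator and denominator.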
Comment 8
Section 2.4, line 267. Superscript the 2 in R2 here and throughout.
Response 8
You are absolutely correct. Done!
Comment 9
Section 2.4, line 273-281. I would be concerned that eliminating recordings with European nightjars from the analysis would eliminate sites, times of day, and portions of seasons from the analysis, generally reducing the power of the analysis.
Response 9
Thank you for your comment. The concern is justifiable. Nevertheless, it is customary to remove dominant species from such analyses. Removing the recordings that contained nightjars was done as a secondary analysis, in order to identify whether the results are affected. We concluded that there is a statistical difference, but the overall results remain the same. We performed this analysis to demonstrate how a dominant species could affect the performance of acoustic indices in predicting species richness and diversity, as this is a common concern regarding the limits of acoustic indices. Several researchers have indicated that a large proportion of calls from dominant species could affect their performance. Here, we exclude only one dominant species and test our models for its effect.
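As an illustration of this secondary analysis, the sketch below flags and removes recordings dominated by European nightjar calls; the tiny call table, the column names, and the 50% dominance threshold are hypothetical, not the exact rule used in the study.

```python
# Sketch of the nightjar-exclusion step on a tiny synthetic call table. The
# 50% dominance threshold and column names are hypothetical, not the exact
# rule applied in the study.
import pandas as pd

calls = pd.DataFrame({
    "file_id": ["f1"] * 6 + ["f2"] * 6,
    "species": ["Caprimulgus europaeus"] * 5 + ["Erithacus rubecula"]
             + ["Erithacus rubecula", "Phylloscopus collybita"] * 3,
})

per_file = (calls.groupby(["file_id", "species"]).size()
                 .rename("n_calls").reset_index())
per_file["share"] = (per_file["n_calls"]
                     / per_file.groupby("file_id")["n_calls"].transform("sum"))

dominated = per_file.loc[(per_file["species"] == "Caprimulgus europaeus")
                         & (per_file["share"] > 0.5), "file_id"]
filtered = calls[~calls["file_id"].isin(dominated)]   # drop nightjar-dominated files
print(sorted(filtered["file_id"].unique()))           # ['f2']
```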
Comment 10
Section 2.4, line 274. Provide the scientific name for this and all other species mentioned in the text. Note that here and elsewhere you refer to this species as the European nightjar but starting on line 477, you refer to it as the Eurasian nightjar. Be consistent.
Response 10
Thank you for this comment. We made alterations throughout the text and fixed the wrong name.
Comment 11
Section 2.4, line 278. What was the “first most abundant species in terms of calls?”
Response 11
Thank you for your comment. Acoustic biodiversity is measured by the number of identified calls, whilst traditional biodiversity indicators utilize the number of species. The most abundant species in terms of vocally identified calls were the Sardinian Warbler, European Nightjar, European Robin and the Common Chiffchaff.
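To make the distinction concrete, the short sketch below derives species richness and the Shannon Diversity Index from identified call counts; the counts, and the use of call counts as the abundance proxy, are purely illustrative.

```python
# Sketch deriving species richness and the Shannon Diversity Index from
# identified call counts; the counts below are hypothetical examples.
import numpy as np

call_counts = {"Sylvia melanocephala": 120,    # Sardinian warbler
               "Caprimulgus europaeus": 95,    # European nightjar
               "Erithacus rubecula": 60,       # European robin
               "Phylloscopus collybita": 40}   # common chiffchaff

counts = np.array(list(call_counts.values()), dtype=float)
richness = int((counts > 0).sum())        # number of species detected
p = counts / counts.sum()                 # proportion of identified calls per species
shannon = float(-(p * np.log(p)).sum())   # H' = -sum(p_i * ln p_i)
print(richness, round(shannon, 3))
```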
Comment 12
Section 3.1, line 296. Change “As can be seen in figure 4, four species accounted for 73.7%” to “The four species shown in Figure 4 accounted for 73.7%”.
Response 12
Thank you! Done!
Comment 13
Section 3.2. In all tables in this section, the Burnt 2001 area and Time Period: Dawn are missing. I could find no text explaining this, so I assume it is an error.
Response 13
Thank you for this comment. The intercept represents the expected outcome when all predictors are at their reference levels. For example, if "Time Period: Dawn" and "Burnt 2001 area" are the reference categories, the intercept would indicate the expected outcome under those conditions. The coefficients for other categories are then interpreted as deviations from this intercept. For continuous predictors, their values are typically assumed to be zero when evaluating the intercept, unless the variables have been centered or scaled.
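A simplified sketch of this treatment (reference-level) coding is shown below; it uses synthetic data and an ordinary least squares model as a stand-in for the GLMMs reported in the manuscript, purely to show how the reference categories are absorbed into the intercept. The "Noon" and "Dusk" time-period levels are hypothetical placeholders.

```python
# Simplified sketch of treatment (reference-level) coding with synthetic data.
# An OLS model stands in for the GLMMs used in the paper; the "Noon" and
# "Dusk" time-period levels are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "BI": rng.normal(10, 2, n),
    "area": rng.choice(["Unburnt", "Burnt 2001", "Burnt 2009", "Burnt 2018"], n),
    "time_period": rng.choice(["Dawn", "Noon", "Dusk"], n),
    "season": rng.choice(["Autumn", "Spring", "Summer"], n),
})
df["species_richness"] = rng.poisson(5, n)

model = smf.ols(
    "species_richness ~ BI"
    " + C(area, Treatment(reference='Burnt 2001'))"
    " + C(time_period, Treatment(reference='Dawn'))"
    " + C(season, Treatment(reference='Autumn'))",
    data=df,
).fit()

# 'Burnt 2001', 'Dawn' and 'Autumn' do not appear as separate rows: they are
# absorbed into the intercept, and the other levels are deviations from it.
print(model.params)
```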
Comment 14
Section 3.2, line 315. What does “τηε” mean?
Response 14
Thank you for pointing out this mistake. The correct word is “the”.
Comment 15
Section 3.2, Figure 5. Because this figure presents correlation coefficients segregated into different areas, seasons, and times of day, it is hard to determine how well the indices predict the two biodiversity metrics overall. I think a better approach would be to present a series of figures showing scatterplots of values for the five acoustic indices against values for species richness and the Shannon Diversity Index—ten graphs in all that could be assembled into two figures each representing one of the biodiversity metrics. If you also want to keep Figure 5, add a row that shows the overall correlation coefficient for each index as the first or last row of the figure.
Response 15
Thank you. The heatmap in Figure 5 was produced in order to identify the subgroup category that best reveals the correlation between acoustic indicators and diversity metrics. It was intended as a screening step to highlight the best-performing indices and categories for further analysis.
This analysis also helped us confirm that there is significant variation across areas, times of day, and seasons prior to the GLMM.
In Appendix D, we have included scatter plots as an additional way to showcase the resulting correlations.
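For completeness, a minimal sketch of how such a panel of scatterplots (two biodiversity metrics against five acoustic indices) can be assembled is given below; the data are synthetic and only the layout is indicative of the Appendix D figures.

```python
# Sketch of a 2 x 5 scatterplot panel (two biodiversity metrics against five
# acoustic indices) with synthetic data; only the layout is indicative of the
# Appendix D figures.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
indices = {name: rng.normal(size=200) for name in ["ACI", "ADI", "AEI", "BI", "NDSI"]}
metrics = {"Species richness": rng.poisson(5, 200),
           "Shannon diversity": rng.gamma(2.0, 1.0, 200)}

fig, axes = plt.subplots(2, 5, figsize=(15, 6), sharey="row")
for i, (m_name, m_vals) in enumerate(metrics.items()):
    for j, (i_name, i_vals) in enumerate(indices.items()):
        axes[i, j].scatter(i_vals, m_vals, s=8, alpha=0.5)
        axes[i, j].set_xlabel(i_name)
    axes[i, 0].set_ylabel(m_name)
fig.tight_layout()
fig.savefig("index_vs_diversity_scatterplots.png", dpi=300)  # export at high dpi
```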
Comment 16
Section 3.2, lines 334-335. What in Table 2 indicates that “the BI model however was clearly the best supported candidate model?”
Response 16
Thank you for your comment. The BI model had the lowest Akaike information criterion (AIC) relative to the other candidate models, whose AIC values are presented in parentheses.
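A hedged sketch of this kind of AIC-based comparison is shown below; the candidate formulas and data are synthetic, and plain negative binomial GLMs stand in for the GLMMs used in the study.

```python
# Hedged sketch of AIC-based model comparison with synthetic data; plain
# negative binomial GLMs stand in for the GLMMs reported in the manuscript,
# and the candidate formulas are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "BI": rng.normal(10, 2, n),
    "NDSI": rng.uniform(-1, 1, n),
    "season": rng.choice(["Autumn", "Spring", "Summer"], n),
})
df["species_richness"] = rng.poisson(np.exp(0.8 + 0.08 * df["BI"]))

candidates = {
    "baseline": "species_richness ~ season",
    "BI":       "species_richness ~ BI + season",
    "NDSI":     "species_richness ~ NDSI + season",
}
fits = {name: smf.glm(formula, data=df,
                      family=sm.families.NegativeBinomial()).fit()
        for name, formula in candidates.items()}

# The best-supported candidate model is the one with the LOWEST AIC.
for name, fit in sorted(fits.items(), key=lambda kv: kv[1].aic):
    print(f"{name:8s} AIC = {fit.aic:8.1f}")
```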
Comment 17
Section 3.2, lines 340-341. What in Table 3 indicates that “the NDSI model was clearly the best supported candidate model?”
Response 17
Thank you for your comment. The NDSI model had the lowest Akaike information criterion (AIC) among the candidate models.
Comment 18
Section 3.2, line 345. Table numbering is off. This table should be labelled Table 2.
Response 18
Thank you for pointing this out. We fixed this error.
Comment 19
Section 3.2, Table 2 (labelled Table 1). I am having difficulty with the way the data are presented in this and other tables in this section. I would think the variables presented should be error (random variation among sensors), intercept, season (not broken out into individual seasons as done here), area (not broken out into individual areas as done here), and time period (not broken out into individual times as done here). What is the “Acoustic index” effect shown here. How does this make sense if there is only a single index being measured in each GLM?
Response 19
Season and Time are categorical variables, and hence by default they are represented in the results as multiple beta coefficients, whose values should be examined relative to the one (reference) category contained in the intercept. For instance, the beta coefficient of 0.063 for Spring in the baseline model (not including any acoustic index) should be interpreted as Species Richness being higher in Spring compared to Fall (which is contained in the intercept); given that the SE for this effect is 0.037, however, it is not a significant one (i.e., the 95% CI would include 0).
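As a worked check of this example, the approximate 95% confidence interval for the Spring coefficient can be computed as beta ± 1.96 × SE:

```python
# Worked check: approximate 95% confidence interval for the Spring coefficient.
beta, se = 0.063, 0.037
lower, upper = beta - 1.96 * se, beta + 1.96 * se
print(f"95% CI: ({lower:.3f}, {upper:.3f})")   # (-0.010, 0.136) -> includes 0,
                                               # so the Spring effect is not
                                               # significantly different from
                                               # the Autumn/Fall baseline
```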
Comment 20
Section 3.2, line 347. What is the random variable “site” mentioned here? I assume this should be “sensor.”
Response 20
Thank you for this comment. You are correct. We replaced the “site” with “sensor” in all tables.
Comment 21
Section 3.2, lines 347-348. More clearly state what values are presented for each variable. I would suggest changing “The estimate coefficients st error and p value significance *** ( p < 0.001) **(p < 0.01) * (p < 0.05). (p < 0.1)” to “For each variable, the estimated coefficient, standard error of the estimate (in parentheses), and statistical significance of the estimate are presented; *** indicates p < 0.001, ** p < 0.01, * p < 0.05, and NS p > 0.05.” Add NS to the table as appropriate.
Response 21
Thank you for this comment. We added nonsignificance to our table and changed the description accordingly. Indeed, this was a valuable change, not only for making the table more understandable, but this also helped in organizing cells.
Comment 22
Section 3.2, line 353. Table numbering is off. This table should be labelled Table 3.
Response 22
Thank you for noticing this. It is done.
Comment 23
Section 3.2, lines 355-356. Make changes as suggested for Table 2.
Response 23
Thank you it is done.
Comment 24
Section 3.2, line 363. Here you say “site (i.e., sensor).” In Section 2.1, you refer to your test areas as “sites,” but elsewhere as “areas.” In Section 2.1, lines 167-169, you say three sensors were used in most areas. These statements lead to confusion as to what you mean in Section 3.2. Is site equivalent to area (4 of them) or sensor (11 of them)?
Response 24
Thank you for this comment. By using the term "Sites," we refer to specific points where the sensors were placed in each study area. We changed "sites" to "area" in Section 2.1. The total number of sensors is 11.
Comment 25
Section 3.2, line 364 and 365. Table numbering is off. You should be referring to Table 4.
Response 25
Thank you, it is done.
Comment 26
Section 3.2, line 366-368. Why is the comparison being made between the two indices in different seasons. A more valid comparison would be between the two indices in the same season. The difference could be caused by differences in the number of birds vocalizing in different seasons (more species being detected during migration vs fewer species in summer) and have nothing to do with the index.
Response 26
Thank you for this comment. The reason for the comparison between different seasons directly corresponds to the research questions of this work. More specifically the goal was to identify the ideal season to use the acoustic indices more effectively.
Comment 27
Section 3.2, line 371. This is the only place in the ms that your study areas are referred to as age classes.
Response 27
Thank you for pointing this out. We changed the “age classes” to “burn history”.
Comment 28
Section 3.2, line 374. Table numbering is off. This table should be labelled Table 4.
Response 28
Thank you. Done!
Comment 29
Section 3.2, lines 375-376. Make changes as suggested for Table 2.
Response 29
Thank you, it is done.
Comment 30
Section 4, lines 387 and 394. Italicize “Pinus halepensis.”
Response 30
Thank you. Done.
Comment 31
Section 4, line 452. Change NSDI to NDSI. Check throughout for consistency.
Response 31
Thank you. It is done.
Comment 32
Section 4, line 492. What are the “single-acoustic indicators?”
Response 32
Thank you for this comment. By “single-acoustic indicators” we mean the models that included individual acoustic indicators, instead of a combination.
Comment 33
Section 4, line 523. Delete the second “of.”
Response 33
Thank you. It is done.
Comment 34
Appendix A. Provide a title to the appendix. What are the values presented in this table?
Response 34
Thank you for this comment. We have provided a title for this table: “Appendix A: Summary of detected Bird Species Calls per Area”. The values represent the number of detected calls for each bird species in each area.
Comment 35
Appendix B. Provide a more informative title to the table that includes a description of what the values represent.
Response 35
Thank you. It is done. “Appendix B: GLMM results for Species Richness models Excluding European Nightjar”.
Comment 36
Appendix C. Provide a more informative title to the table that includes a description of what the values represent.
Response 36
Thank you. It is done. “Appendix C: GLMM results for Shannon Diversity models Excluding European Nightjar”.
Author Response File: Author Response.docx
Reviewer 2 Report
Comments and Suggestions for Authors
The study evaluates how acoustic indices can effectively monitor bird biodiversity in Mediterranean pine forests after fires, highlighting the BI and NDSI indices as the most promising.
The inclusion of acoustic monitoring technologies offers a promising avenue to study biodiversity in a non-invasive and efficient manner. However, it is crucial to validate these methods and recognize their limitations.
Some comments:
a) It is not explicitly mentioned whether interactions were studied in the generalized linear models (GLMs) used to analyze the relationship between acoustic indices, species richness and the Shannon diversity index (SDI). However, it is mentioned that area, season and time period were included as covariates in the fixed effects models, indicating that variability among these variables was considered in the analysis. Why interactions were not included should be justified.
b) Have possible outliers been studied? The proposed models are sensitive to this.
c) Temporal and Spatial Effects: In an ecological study, temporal and spatial effects are important. GLMs may not adequately capture spatial and temporal variability without the inclusion of specific terms, which may limit their effectiveness in the analysis of longitudinal or geospatial data. Has an alternative been tried? If not, why not? What were those data like? This should be described to better understand the use of this methodology.
Author Response
Dear reviewer,
Thank you for taking the time to provide us with a useful review. We have responded to each one of your comments. We greatly appreciate your insightful comments and suggestions.
General comment:
The study evaluates how acoustic indices can effectively monitor bird biodiversity in Mediterranean pine forests after fires, highlighting the BI and NDSI indices as the most promising.
The inclusion of acoustic monitoring technologies offers a promising avenue to study biodiversity in a non-invasive and efficient manner. However, it is crucial to validate these methods and recognize their limitations.
Response:
Thank you again for your valuable comments. We agree. We have provided more information regarding the limitations of this study.
Comment 1
It is not explicitly mentioned whether interactions were studied in the generalized linear models (GLMs) used to analyze the relationship between acoustic indices, species richness and the Shannon diversity index (SDI). However, it is mentioned that area, season and time period were included as covariates in the fixed effects models, indicating that variability among these variables was considered in the analysis. Why interactions were not included should be justified.
Response 1
Thank you for the detailed feedback. In our study, we indeed incorporated area, season, and time period as covariates to control for variability in these factors and improve the accuracy of predictions for species richness and Shannon Diversity Index (SDI) based on acoustic indices.
Including area, season, and time period as covariates aimed to account for their main effects on acoustic diversity. These factors were found to significantly impact bird species richness and SDI in prior studies (e.g., dawn time recordings often capture higher bird diversity). Furthermore, we aimed at model simplicity. Adding interactions among area, season, and time period would increase model complexity, possibly complicating interpretation and increasing the risk of overfitting.
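To illustrate the parameter-count argument, the sketch below compares the design matrix of a main-effects specification with that of a fully interacted one; the factor levels shown are an illustrative subset, and the "Noon" and "Dusk" time-period labels are hypothetical.

```python
# Sketch of how the parameter count grows once full interactions are added.
# The factor levels are an illustrative subset; "Noon" and "Dusk" are
# hypothetical time-period labels, not necessarily those used in the study.
import pandas as pd
from patsy import dmatrix

df = pd.DataFrame({
    "area": ["Unburnt", "Burnt 2001", "Burnt 2009", "Burnt 2018"] * 9,
    "season": ["Autumn"] * 12 + ["Spring"] * 12 + ["Summer"] * 12,
    "time_period": ["Dawn", "Noon", "Dusk"] * 12,
})

main = dmatrix("area + season + time_period", df)
full = dmatrix("area * season * time_period", df)
print(main.shape[1], "model columns with main effects only")
print(full.shape[1], "model columns with all two- and three-way interactions")
```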
Comment 2
Have possible outliers been studied? The proposed models are sensitive to this.
Response 2
Thank you for this comment. In our research, we conducted a preliminary assessment of potential outliers to ensure robust model results. We inspected box plots and histograms to identify extreme values and assessed basic statistical metrics to detect data points deviating significantly from central tendencies. Outliers making up the top 5% of deviations were excluded prior to conducting non-parametric tests, which are better suited to handling such data.
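A minimal sketch of this kind of percentile-based screening is given below; the synthetic values, the chosen index column, and the median-deviation rule are illustrative assumptions rather than the exact procedure applied.

```python
# Sketch of percentile-based outlier screening on synthetic index values; the
# median-deviation rule and the 95th-percentile cut-off illustrate the idea
# rather than reproduce the exact procedure used in the study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({"BI": np.concatenate([rng.normal(10, 1, 95),
                                         rng.normal(25, 1, 5)])})  # 5 extreme values

deviation = (df["BI"] - df["BI"].median()).abs()
cleaned = df[deviation <= deviation.quantile(0.95)]   # keep the central 95%
print(len(df) - len(cleaned), "values flagged as extreme and excluded")
```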
We tailored the GLMM model specifications to the characteristics of each response variable, carefully choosing distributions and link functions. For species richness (SR), an overdispersed count variable, we used a negative binomial distribution with a log link, so that SR is modelled on the log scale. For the Shannon Diversity Index (SDI), we selected a Poisson distribution with a log link to maintain consistency in count-based interpretations.
We have included in our references the article “Generalized linear mixed models: A practical guide for ecology and evolution” by Bolker et al., 2009 as it provides insights into choosing distributions and discusses preliminary diagnostics like residual analysis.
In lines 300-302 we have included the following text:
“As indicated in Bolker et al., 2009, the potential outliers were assessed through visual inspections and statistical metrics, with extreme outliers (top 5% deviations) excluded to improve model fit and accuracy.”
Bolker, B. M., Brooks, M. E., Clark, C. J., Geange, S. W., Poulsen, J. R., Stevens, M. H. H., & White, J.-S. S. (2009). Generalized linear mixed models: A practical guide for ecology and evolution. Trends in Ecology & Evolution, 24(3), 127–135. https://doi.org/10.1016/j.tree.2008.10.008
Comment 3
Temporal and Spatial Effects: In an ecological study, temporal and spatial effects are important. GLMs may not adequately capture spatial and temporal variability without the inclusion of specific terms, which may limit their effectiveness in the analysis of longitudinal or geospatial data. Has an alternative been tried? If not, why not? What were those data like? This should be described to better understand the use of this methodology.
Response 3
Thank you for highlighting this important point. We acknowledge that generalized linear models (GLMs) have limitations in capturing complex spatial and temporal variability, which is particularly significant in ecological datasets. In this study, we applied GLMMs to analyze general post-fire regeneration trends in Aleppo pine (Pinus halepensis) communities.
Our choice of this modelling approach was guided by the relatively uniform structure of post-fire regeneration patterns in Aleppo pine, which typically exhibit predictable temporal stages.
More specifically, as indicated in
“Kazanis, D.; Spatharis, S.; Kokkoris, G.D.; Dimitrakopoulos, P.G.; Arianoutsou, M. Drivers of Pinus halepensis Plant Community Structure across a Post-Fire Chronosequence. Fire 2024, 7, 331. https://doi.org/10.3390/fire7090331”
high initial seedling density immediately post-fire is followed by a gradual reduction and stabilization of density over the subsequent years. Furthermore, species composition remains largely consistent with that of mature unburned forests from the first year, with major changes occurring primarily in species abundance rather than composition.
This stability in the underlying ecological structure allowed us to capture these predictable post-fire trends effectively with the chosen models, as our data did not exhibit the high spatiotemporal heterogeneity that might require more complex approaches.
While we considered more elaborate mixed-effects structures to account for finer spatial or temporal random effects, the largely predictable trends and limited complexity of our data structure led us to retain the simpler specification, which provided adequate insight for the scope of this analysis.
Author Response File: Author Response.docx
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
General Comments
The authors made significant changes to the manuscript and addressed most of my comments. Original comments that I feel were not adequately addressed are provided below.
Comments not Adequately Addressed
· Original comment: Section 3.2. In all tables in this section, the Burnt 2001 area and Time Period: Dawn are missing. I could find no text explaining this, so I assume it is an error.
My response: An adequate response was provided, but no change was made to the ms. I would like to see some additional text added that provides the explanation provided in the response.
· Original comment: Section 3.2, Figure 5. Because this figure presents correlation coefficients segregated into different areas, seasons, and times of day, it is hard to determine how well the indices predict the two biodiversity metrics overall. I think a better approach would be to present a series of figures showing scatterplots of values for the five acoustic indices against values for species richness and the Shannon Diversity Index—ten graphs in all that could be assembled into two figures each representing one of the biodiversity metrics. If you also want to keep Figure 5, add a row that shows the overall correlation coefficient for each index as the first or last row of the figure.
My response: Scatterplots were added, but they are very low resolution and are currently unreadable. Provide the overall correlation coefficient for each index to Figure 5 as requested in my original comment.
· Original comment: Section 3.2, lines 334-335. What in Table 2 indicates that “the BI model however was clearly the best supported candidate model?”
My response: Add “as indicated by its low AIC score.”
· Original comment: Section 3.2, lines 340-341. What in Table 3 indicates that “the NDSI model was clearly the best supported candidate model?”
My response: Add “as indicated by its low AIC score.”
· Original comment: Section 3.2, line 363. Here you say “site (i.e., sensor)” In Section 2.1, you refer to your test areas as “sites,” but elsewhere as “areas.” In Section 2.1, lines 167-169, you say three sensors were used in most areas. These statements lead to confusion as to what you mean in Section 3.2. Is site equivalent to area (4 of them) or sensor (11 of them)?
My response: The authors claim to have made a change to Section 2.1, but I do not see one.
Author Response
Dear reviewer,
Thank you for taking the time to review our manuscript again. We greatly appreciate your suggestions. Please find below the responses to your comments.
General Comments:
The authors made significant changes to the manuscript and addressed most of my comments. Original comments that I feel were not adequately addressed are provided below.
Response:
Thank you again. Your comments greatly improved our manuscript.
Comments not Adequately Addressed
Original comment: Section 3.2. In all tables in this section, the Burnt 2001 area and Time Period: Dawn are missing. I could find no text explaining this, so I assume it is an error.
My response: An adequate response was provided, but no change was made to the ms. I would like to see some additional text added that provides the explanation provided in the response.
Response:
Thank you for this comment. The reference levels for the categorical variables were set as the Burnt 2001 area for 'Area' and Dawn for 'Time Period'. The intercept therefore represents the expected outcome when all predictors are at these reference levels, while the coefficients for the other categories indicate their deviations from this baseline. For the same reason, the Autumn season is not present, as it is the reference level of the categorical variable 'Season'; the coefficients for the other seasons are expressed relative to that Autumn baseline.
We updated all table captions with the phrase “Burnt 2001, Dawn, and Autumn are incorporated in the intercept”. Also, we included the following text in lines 365-370: “The intercept in this model represents the expected biodiversity metric values when predictors are at their baseline levels: Burnt 2001 for area, dawn for time period and Autumn for season. Coefficients for other categories thus indicate deviations from this baseline, while unscaled continuous predictors default to zero values, potentially limiting interpretability of the intercept if not centered”.
Original comment: Section 3.2, Figure 5. Because this figure presents correlation coefficients segregated into different areas, seasons, and times of day, it is hard to determine how well the indices predict the two biodiversity metrics overall. I think a better approach would be to present a series of figures showing scatterplots of values for the five acoustic indices against values for species richness and the Shannon Diversity Index—ten graphs in all that could be assembled into two figures each representing one of the biodiversity metrics. If you also want to keep Figure 5, add a row that shows the overall correlation coefficient for each index as the first or last row of the figure.
My response: Scatterplots were added, but they are very low resolution and are currently unreadable. Provide the overall correlation coefficient for each index to Figure 5 as requested in my original comment.
Response:
Thank you for this comment. The dpi of the scatterplots was increased in order to provide images of better quality. Additionally, as suggested, a table with the overall correlations between the acoustic indices and the biodiversity metrics was added in the appendix, so that general patterns can be assessed. Furthermore, before presenting the correlation results for the grouped variables, we added a passage referring to this appendix to give a clear view of the overall correlations.
To identify optimal times and seasons for applying acoustic indicators, we focused on generating a representative measure for each group of data. Rather than using all raw data points, we aggregated values by calculating the mean for each variable across groups defined by area, time of day, and season. This approach allowed us to create a model that better reflects the broader trends within each time period or season, helping highlight which conditions (time and season) best correspond with variations in acoustic indices and biodiversity metrics.
Similarly to Budka et al. (2023), we used the mean instead of total numbers in our analysis in order to improve the effectiveness of our models.
(https://doi.org/10.1016/j.ecolind.2023.110027)
In order to make this clearer, we added new text in Section 2.4 of the methodology (lines 290-295):
“Having confirmed significant variation across areas, times of day, and seasons, we calculated mean values for each acoustic index (ACI, ADI, AEI, NDSI, BIO) at the level of each 10-minute file. This grouping by mean across season, area, and time of day allowed us to examine the predictive value of these indices for bird diversity metrics (species richness-SR and Shannon diversity index-SDI), while accounting for all variation explained by season, area (burning history), and time of day.”
And in lines 348-350: “The acoustic indices (ACI, ADI, AEI, BI, NDSI) and biodiversity metrics (Species Richness SR, Shannon Diversity Index SDI) grouped by mean per hour and sensor (n=1056) did not follow a normal distribution”.
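A hedged sketch of this grouping step and the subsequent normality screening is given below; the synthetic data, column names, and the Shapiro-Wilk test are illustrative assumptions (the manuscript reports only that the grouped values did not follow a normal distribution).

```python
# Sketch of the grouping step with synthetic data: mean values per hour and
# sensor, followed by a normality screen. Column names and the Shapiro-Wilk
# test are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy.stats import shapiro

rng = np.random.default_rng(4)
n = 20000
raw = pd.DataFrame({
    "sensor": rng.choice([f"S{i:02d}" for i in range(1, 12)], n),  # 11 sensors
    "hour": rng.integers(0, 24, n),
    "BI": rng.gamma(4.0, 2.0, n),
    "species_richness": rng.poisson(4, n),
})

# Mean per hour and sensor, mirroring the grouping described in Section 2.4.
grouped = (raw.groupby(["sensor", "hour"], as_index=False)
              [["BI", "species_richness"]].mean())

# In the study the grouped values did not follow a normal distribution,
# motivating the use of non-parametric tests; here we simply print the test.
print(shapiro(grouped["BI"]))
```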
Original comment: Section 3.2, lines 334-335. What in Table 2 indicates that “the BI model however was clearly the best supported candidate model?”
My response: Add “as indicated by its low AIC score.”
Response:
Thank you for this comment. Done.
Original comment: Section 3.2, lines 340-341. What in Table 3 indicates that “the NDSI model was clearly the best supported candidate model?”
My response: Add “as indicated by its low AIC score.”
Response:
Thank you for this comment. Done.
Original comment: Section 3.2, line 363. Here you say “site (i.e., sensor)” In Section 2.1, you refer to your test areas as “sites,” but elsewhere as “areas.” In Section 2.1, lines 167-169, you say three sensors were used in most areas. These statements lead to confusion as to what you mean in Section 3.2. Is site equivalent to area (4 of them) or sensor (11 of them)?
My response: The authors claim to have made a change to Section 2.1, but I do not see one.
Response:
Thank you for highlighting this. We changed "sites" to "area" in Section 2.1. We have now highlighted these changes (lines: 180-184, 202-207 and 476-492).
Reviewer 2 Report
Comments and Suggestions for Authors
Dear authors, thank you for the effort. This version is much improved.
Thanks for the answers and clarifications.
I accept the work on this version.
The best
Author Response
Dear reviewer,
We are grateful for your valuable comments and insights.
Our manuscript has been substantially improved thanks to your contribution.
Best regards,
Dimitrios Spatharis, Aggelos Tsaligopoulos, Yannis Matsinos, Ilias Karmiris, Magdalini Pleniou, Elisabeth Navarrete, Eleni Boikou and Christos Astaras