Centering Communities in Biodiversity Monitoring and Conservation: Preliminary Insights from a Citizen Science Initiative in Kalimantan, Indonesia
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
General Comments
General questions to help guide your review report for research articles:
- Is the manuscript clear, relevant for the field and presented in a well-structured manner? Yes
- Are the cited references mostly recent publications (within the last 5 years) and relevant? Yes (though some omitted relevant citations are older and still relevant). Does it include an excessive number of self-citations? No.
- Is the manuscript scientifically sound and is the experimental design appropriate to test the hypothesis? Yes
- Are the manuscript’s results reproducible based on the details given in the methods section? Yes, insofar as any studies of human beings are replicable.
- Are the figures/tables/images/schemes appropriate? Yes. Do they properly show the data? Are they easy to interpret and understand? Is the data interpreted appropriately and consistently throughout the manuscript? Please include details regarding the statistical analysis or data acquired from specific databases. This is all done well.
- Are the conclusions consistent with the evidence and arguments presented? Yes.
- Please evaluate the ethics statements and data availability statements to ensure they are adequate. Excellent, though I raise an important issue in this region. The people actually LIKE to have their knowledge acknowledged; in my research, communities have wanted me to mention their community and their names because they wanted to claim ownership and associate pride.
General questions to help guide your review report for review articles:
- Is the review clear, comprehensive and of relevance to the field? Is a gap in knowledge identified? Yes to both.
- Was a similar review published recently and, if yes, is this current review still relevant and of interest to the scientific community? Not that I know of.
- Are the cited references mostly recent publications (within the last 5 years) and relevant? Are any relevant citations omitted? Does it include an excessive number of self-citations? Already answered.
- Are the statements and conclusions drawn coherent and supported by the listed citations? Yes, very well, except in one case.
- Are the figures/tables/images/schemes appropriate? Do they properly show the data? Are they easy to interpret and understand? Yes to all.
Rating
- Novelty: The question is original in this context, though citizen science has been common in the US. The questions are clear and well-defined and the results advance knowledge.
- Scope: Does the work fit the journal scope*? Yes.
- Significance: The results are very significant, and suggest new ways of conducting conservation research that are less expensive, make use of indigenous knowledge, and empower local communities as well as probably being more sustainable.
- Quality: The paper is very well written, well organized, and succinct (if anything, a bit too succinct).
- Scientific Soundness: The approach and conduct of the research appears scientifically very sound.
- Interest to the Readers: This will be of interest to many readers!
- Overall Merit: This paper addresses an issue that has troubled conservation workers for a long time, and incorporates more recent thinking about the involvement and roles of local communities in conservation.
- English Level: English is excellent.
Specific Comments
Lines 115-117 and 122-126 “Anecdotal data was primarily collected through our local facilitator, who lives near the participating villages and maintains regular informal contact with community members, both in person and through WhatsApp. Through open-ended conversations, the facilitator collected insights on wildlife conditions, village priorities, forest use, and views on the program.” It is interesting that there is little reference to the considerable published information and grey literature on this region (Danau Sentarum National Park generally). I’m curious as to why the authors seem to rely primarily on ‘anecdotal data’. No doubt it’s also important, to get very location-specific info. Examples of useful existing material on social issues there include work by Wadley, Yuliani, Colfer, Dudley, Valentinus Heri, Aglionby, Yasmi, Harwell, and others.
Lines 134-136: “Further anecdotal information was obtained through field visits by a core team member, who observed how monitoring tools were used and engaged in informal discussions with villagers.” It would be good to have some sense of the disciplinary background of the core team member who made these visits – to assess his/her likely skills in community work. I know at least three of the authors have extensive and long-lasting experience in this region. That would be important to describe briefly, to strengthen readers’ confidence in the results.
Lines 168-169: “Data privacy was maintained by anonymizing observer identities and generalizing sensitive location data in reports and publications.” I do understand that this is the global standard on protecting participants, but I’ve found that the people I’ve worked with in Kalimantan tend to like to have their names and photos published. They are proud of their knowledge, want to share it, and want to get credit for it, in general. Just something to think about…
Lines 205-206: “However, traditional practices and subsistence needs occasionally lead to the continued hunting of certain species, such as pigs.” I’m intrigued that pigs are protected (which I infer from this sentence). Although my own experience is out of date, pigs were sometimes extremely common (especially when there was a mast fruiting). And of course seriously central to and desired in Iban diets.
Lines 253-255 “By June 2025, the cumulative total had reached 14,085 observations in phase 3. This increase was partly attributed to the expansion of the list of reward-eligible species to include taxa of cultural, national, and global conservation significance.” I would think it might also have to do with increased sensitivity on the part of the community observers, as they got used to noting them and also gained skills perhaps at spotting these species and their indicators.
Lines 256-7: “These trends highlight a clear positive correlation between the strength of incentives and community engagement.” It seems worthwhile to experiment a bit (in other places) with lower levels of recompense, to see what makes the most sense (given the likely shortage of conservation funds generally). I’m sure it would vary greatly from place to place…
Lines 258-59: “showing the effectiveness of phased, adaptive motivation strategies in CS within rural communities.” – especially the adaptive element in your design!
Lines 292-3: “Some participants even noted that recording wildlife was more rewarding than a day of manual labor, which typically pays around USD 9.” I can well imagine it would be! Does ‘rewarding’ refer to receipt of money or feelings of satisfaction?
Lines 318-319: “USD 401 per unit, covering hardware, memory cards, and batteries [10]. Deployment and maintenance costs equate to hundreds of thousands of Rupiah” I would suggest sticking to USD, since this is for an int’l audience.
Lines 334-5: “they are generally accepting decisions by the verification team” – should be “…accepting of decisions…”
Lines 354-6: “By integrating traditional ecological knowledge with modern tools such as ESRI’s Survey123 and Field Maps, the project has been able to monitor species distribution.” I think this is a wonderful attempt. I would add a little on what specific traditional ecological knowledge these folks are using – just to give readers a better sense of their contribution (beyond their labor). I’m thinking things like knowledge of wildlife habitats, seasons and behavior; ability to identify species more easily than outsiders; perhaps finely honed observational and auditory skills; practice in tracking…etc.
Line 374 “varying quality in observers’ quality of” – I’d say you only need the first ‘quality’
General: I’m very curious to learn a little more about the demographics of the respondents. Were the villages Iban or Melayu? Did that make a difference (I’d imagine the Iban would have more ecological knowledge related to land-based species, perhaps more interest in conservation??, though it would be interesting to know). Were there relevant differences in gender or age of observers?
Future Directions: The pride that local folks feel in their own knowledge can be used also, in sharing that knowledge globally (especially if they are recognized explicitly).
Lines 415-7: “This transformation in mindset is a crucial step toward sustainable coexistence, ensuring that conservation efforts are not just about preserving biodiversity but also about reshaping how communities perceive and interact with their natural environment.” I think this sentence includes an assumption that there are no conservation feelings extant already in these communities. It subtly reinforces the idea, common among many conservationists, that villagers are ignorant and even enemies of conservation. Whereas my own experience, and obviously yours, is that they have a different knowledge set than the western conservationists, but also (as your research has shown) have a lot to contribute. I would just suggest substituting “strengthening” or ‘building on’ rather than “reshaping”. I guess “reshaping” makes me think of an external party acting on the people rather than with them, as you seem to have done very successfully.
Comments for author File: Comments.pdf
Author Response
Comment 1: The paper is very well written, well organized, and succinct (if anything, a bit too succinct).
Response 1: Thank you for your thoughtful review of our paper. We have taken your suggestions and comments on board. With your help, we have been able to elaborate on our findings and provide supporting evidence from other studies. We hope that we have adequately addressed the points raised in your review.
Comment 2: Lines 115-117 and 122-126 “Anecdotal data was primarily collected through our local facilitator, who lives near the participating villages and maintains regular informal contact with community members, both in person and through WhatsApp. Through open-ended conversations, the facilitator collected insights on wildlife conditions, village priorities, forest use, and views on the program.” It is interesting that there is little reference to the considerable published information and grey literature on this region (Danau Sentarum National Park generally). I’m curious as to why the authors seem to rely primarily on ‘anecdotal data’. No doubt it’s also important, to get very location-specific info. Examples of useful existing material on social issues there include work by Wadley, Yuliani, Colfer, Dudley, Valentinus Heri, Aglionby, Yasmi, Harwell, and others.
Response 2: Thank you for your comment and suggestions for studies we could explore to strengthen our research. We have added several relevant references in that regard. In response, we have made several changes to our manuscript. Firstly, we have rephrased and elaborated on all mentions of ‘anecdotal’ evidence to highlight that these constitute semi-structured interviews held by our local facilitator, observation studies of app use and program implementation, and informal discussions led by our core team member to gauge community engagement and barriers to access, which informed the development of our program.
Comment 3: Lines 134-136: “Further anecdotal information was obtained through field visits by a core team member, who observed how monitoring tools were used and engaged in informal discussions with villagers.” It would be good to have some sense of the disciplinary background of the core team member who made these visits – to assess his/her likely skills in community work. I know at least three of the authors have extensive and long-lasting experience in this region. That would be important to describe briefly, to strengthen readers’ confidence in the results.
Response 3: Thank you for your question. The local facilitator is a key position in this project, and he has ideal expertise for the role. He has worked on community conservation and livelihood projects in the area for the past 25 years. He is not an outsider, and all these communities know him from previous projects. He speaks both Iban and Melayu. We have added a sentence to section 2.2. Local Engagement and Contextual Knowledge, regarding his suitability.
Comment 4: Lines 168-169: “Data privacy was maintained by anonymizing observer identities and generalizing sensitive location data in reports and publications.” I do understand that this is the global standard on protecting participants, but I’ve found that the people I’ve worked with in Kalimantan tend to like to have their names and photos published. They are proud of their knowledge, want to share it, and want to get credit for it, in general. Just something to think about…
Response 4: Thank you for your comment. At this early stage in the project, we have chosen to maintain the anonymity of the villages involved. For internal communication, which is visible to all participating villages, we publicise the outcomes in a newsletter. At a later stage, we intend to ask participants if they would like the villages to be named in external communication.
Comment 5: Lines 205-206: “However, traditional practices and subsistence needs occasionally lead to the continued hunting of certain species, such as pigs.” I’m intrigued that pigs are protected (which I infer from this sentence). Although my own experience is out of date, pigs were sometimes extremely common (especially when there was a mast fruiting). And of course seriously central to and desired in Iban diets.
Response 5: Thank you for highlighting this. You are correct that the Bearded pig is not protected under Indonesian law. We have corrected this statement in the text.
Comment 6: Lines 253-255 “By June 2025, the cumulative total had reached 14,085 observations in phase 3. This increase was partly attributed to the expansion of the list of reward-eligible species to include taxa of cultural, national, and global conservation significance.” I would think it might also have to do with increased sensitivity on the part of the community observers, as they got used to noting them and also gained skills perhaps at spotting these species and their indicators.
Response 6: Thank you for the comment. We have gathered testimonies from participants on their reasons for participating. In addition, we have just completed a baseline social survey, which will give us further insights into perception change. We have now mentioned the social baseline assessment throughout the manuscript, where relevant, to demonstrate our method of determining social benefits and perception changes.
Comment 7: Lines 256-7: “These trends highlight a clear positive correlation between the strength of incentives and community engagement.” It seems worthwhile to experiment a bit (in other places) with lower levels of recompense, to see what makes the most sense (given the likely shortage of conservation funds generally). I’m sure it would vary greatly from place to place…
Response 7: Thank you for your comments. We do vary the financial reward from time to time in response to data flow based on factors such as spatial saturation of certain areas. We also respond to requests from the villages. Our model is to move away from ‘conservation funds’ or philanthropic funds to long-term sustainable finance sources to support this project and other villages far into the future. We have now described these intentions in section 4.3. Future Directions.
Comment 8: Lines 258-59: “showing the effectiveness of phased, adaptive motivation strategies in CS within rural communities.” – especially the adaptive element in your design!
Response 8: Thank you for highlighting this point. We also found the adaptive element to be very important, as it allowed us to account for context-specific circumstances that were particularly clear in the rural community setting. This flexibility was essential for ensuring that the motivation strategies remained relevant and effective throughout implementation.
Comment 9: Lines 292-3: “Some participants even noted that recording wildlife was more rewarding than a day of manual labor, which typically pays around USD 9.” I can well imagine it would be! Does ‘rewarding’ refer to receipt of money or feelings of satisfaction?
Response 9: Thank you for the question; in this case, it was mainly financially rewarding. We have updated the narrative accordingly.
Comment 10: Lines 318-319: “USD 401 per unit, covering hardware, memory cards, and batteries [10]. Deployment and maintenance costs equate to hundreds of thousands of Rupiah” I would suggest sticking to USD, since this is for an int’l audience.
Response 10: We have removed the mention of Rupiah, as it was for descriptive purposes, rather than providing information on a concrete value, and have updated the text to reflect this.
Comment 11: Lines 334-5: “they are generally accepting decisions by the verification team” – should be “…accepting of decisions…”
Response 11: We have made this correction. Thank you.
Comment 12: Lines 354-6: “By integrating traditional ecological knowledge with modern tools such as ESRI’s Survey123 and Field Maps, the project has been able to monitor species distribution.” I think this is a wonderful attempt. I would add a little on what specific traditional ecological knowledge these folks are using – just to give readers a better sense of their contribution (beyond their labor). I’m thinking things like knowledge of wildlife habitats, seasons and behavior; ability to identify species more easily than outsiders; perhaps finely honed observational and auditory skills; practice in tracking…etc.
Response 12: You have identified a valid point which we have not elaborated on sufficiently. We have found that observers know exactly where to find certain species, such as orangutans. They have deep knowledge of seasonality in species presence, and deeply understand tracks and signs. We have added some additional text to section 4.1. Lessons Learned.
Comment 13: Line 374 “varying quality in observers’ quality of” – I’d say you only need the first ‘quality’
Response 13: We have removed the repeated use of ‘quality’. Thank you.
Comment 14: General: I’m very curious to learn a little more about the demographics of the respondents. Were the villages Iban or Melayu? Did that make a difference (I’d imagine the Iban would have more ecological knowledge related to land-based species, perhaps more interest in conservation??, though it would be interesting to know). Were there relevant differences in gender or age of observers?
Response 14: Thank you for your question. Initial findings suggest that we are seeing a difference between the Iban and Melayu communities in terms of uptake of the system. The highest-performing village is Iban and the lowest-performing Melayu; however, forest type and accessibility are also factors. Other factors we are starting to observe are the value of other livelihood options in the village, which can vary seasonally. For instance, during August 2025, when Iban communities were busy making ladang and Melayu communities were busy fishing, the number of observations declined as people were occupied with ‘gotong royong’ activities. We do not yet have a long enough time series to evaluate what causes the differences between villages; thus, we have not included details in our manuscript.
Comment 15: Future Directions: The pride that local folks feel in their own knowledge can be used also, in sharing that knowledge globally (especially if they are recognized explicitly).
Response 15: We agree that taking pride in their knowledge can play an important role in community involvement, which we have also observed in other applications of our citizen science programmes. We have, therefore, highlighted this point in the future directions section, in relation to long-term programme sustainability and ongoing community engagement.
Comment 16: Lines 415-7: “This transformation in mindset is a crucial step toward sustainable coexistence, ensuring that conservation efforts are not just about preserving biodiversity but also about reshaping how communities perceive and interact with their natural environment.” I think this sentence includes an assumption that there are no conservation feelings extant already in these communities. It subtly reinforces the idea, common among many conservationists, that villagers are ignorant and even enemies of conservation. Whereas my own experience, and obviously yours, is that they have a different knowledge set than the western conservationists, but also (as your research has shown) have a lot to contribute. I would just suggest substituting “strengthening” or ‘building on’ rather than “reshaping”. I guess “reshaping” makes me think of an external party acting on the people rather than with them, as you seem to have done very successfully.
Response 16: Thank you for raising this point. A very important element of this programme is that we work with communities and not against them. This is a main reason why we have a local implementor, rather than members of our core team, as we want to avoid presenting as an organisation that comes in to ‘force’ our ideologies upon communities. We have amended the narrative to reflect this point and have, instead, stated that we wish to strengthen the already existing positive relationships that communities have with the natural environment.
Author Response File: Author Response.docx
Reviewer 2 Report
Comments and Suggestions for Authors
General:
This is an interesting manuscript. I find the topic important, and I think that publication of such studies, even if rather descriptive and methodological, is important in order to further the discussion and application of citizen science and community engagement in conservation research, particularly in rural/traditional settings.
I am not an expert on any of the topics of this paper – not tropical research, not great apes’ research, and not citizen science per se, even though I have touched upon each of these to some minor extent at some point. Please take my comments with a grain of salt, and forgive me if I am missing meaningful points.
I appreciate the clear writing and presentation, and – of course – the endeavor itself and the intimate involvement and engagement with the local community.
I have several comments that don’t preclude publication, but would probably require some serious discussion and perhaps also some addition of data/analysis of the data collected.
My main concern is that the ms does not engage with, and seems nearly oblivious to, the major challenges posed by the kind of data collected here in comparison to data from structured surveys / transects / camera traps. Namely, CS of this kind introduces a huge array of biases in the data that are very challenging to deal with and must be addressed (or at least seriously discussed). These include –
- Biases of places searched and places less-searched (nearer to the village / agricultural plots, near/along trails, in more open and less vegetation-dense habitat, perhaps less-extreme slopes, if topography is challenging at places)
- Biases re time-of-day
- Biases with respect to sampling effort: certain days (more/less on weekends?) / seasons / weather (less search during rain, for example)
- Hard-to-interpret absence of observations: is it because no one searched there, or because there are fewer target organisms at these places?
- No record of sampling effort: whether on a certain day 2 people or 20 people walked along a trail searching for animals would make a huge difference, both with respect to how many observations occurred and with respect to the influence on animal behavior.
- Bias in favor of observations in habitat where vision is less impaired by vegetation density
Transects or other structured surveys have their own drawbacks, but at least don’t suffer from most of these biases. It doesn’t mean I’m in favor of structured surveys instead of CS, but these challenges need to be considered, discussed, and – to the extent possible – accommodate/compensated-for in the analysis.
One (partial) solution is to carry out both and then use the results to validate each method and correct it for biases that don’t occur in the other.
A second set of challenges arises from the nature of the CS implementation: once participants have strong incentives to report observations, apart from falsification (which you are aware of) there are other biases which will be introduced and need to be addressed: people will return to the same sites repeatedly day after day or week after week, to re-observe the same individual/group, leading to confirmation bias / pseudo-replication / other biases; people might track the same individual/group through the day and report them multiple times as they are active at different sites/activities (which is fine – but this information needs to be considered in the analysis); people might be tempted to actively create situations that would increase observations, like leave food/food-leftovers/peels that the animals might be attracted to.
Pseudo-replication, where observations are not really independent of one another and the same individuals are sighted repeatedly, is a serious concern.
You probably can’t currently treat all these biases – but it is at least important to consider and discuss them. If you can also carry out analysis that deals with some of these it would be even better.
How did you estimate population sizes? Do you have means of identifying the same individuals and using a capture-recapture analysis? If so – what are the details? (I may have missed this, in which case I apologize)
- It might be interesting for the reader to see some more statistics about the observations and which/why/how some were excluded due to inconsistency or low quality. This could demonstrate nicely your ability to control data quality.
- Perhaps provide a set of example-observations (recordings, pictures, videos)? This could give the reader a feeling of what the data looks like (again, perhaps this is in the supp material that I couldn’t access from my phone)
Important questions regarding the CS approach; again – worthy at least of some mention/discussion:
Is the approach sustainable? Is it reasonable to plan to continue payment-for-observations in the future in the longer run (perhaps yes; I’m not implying otherwise, just noting that this needs to be considered/mentioned/discussed).
Ethics: is it a good idea to pay sums of money comparable to those gained from traditional work? Might it attract some people to observation work at the expense of traditional sources of income? Is this OK, or does it create a risk that a problem will emerge if the program is stopped in the future? In other words – would we be creating a problematic reliance on this scheme? Similarly, is there a risk that some necessary but not-very-rewarding agricultural resources would be neglected? (e.g. when international conglomerates recruited local farmers in Ethiopia to grow plants for oil production, a local shortage of traditional foods like sorghum was created, costs increased, and some people went hungry).
Since the income is comparable to that from agriculture, we must ask about the societal impact: does it promote certain parts of society more than others (e.g. those that have smartphones, increasing society gaps)? Is there a risk that traditional kin/economic structures will be broken? (e.g. in some traditional societies in which traditional sources of income are commanded by the head of the family/clan, the introduction of paid labor led to societal restructuring, since young people get their own salaries and can choose to not adhere to the elders’ instructions/requests). These are questions that need to be considered when introducing a monetary scheme that is non-negligible compared to traditional rates of income. Perhaps in the specific society you dealt with these aren’t problems at all – I don’t know – but these are points to consider; they should be discussed (or at least mentioned) with respect to their relevance or irrelevance in this society, and highlighted as things to consider if the approach is to be adopted in other places/societies in the world.
These questions relate to the specific setting, as well as to the broader implementation of the approach which is recommended in the manuscript.
We must ask what the long-term consequences for the population are – positive and negative – and what measures we use to assess these (again, for this population and for others in which the approach might be implemented).
What is the end-game in this respect – for how long is the scheme reasonably sustainable, and what might replace it eventually? (should be mentioned/discussed)
Finally, to play the devil’s advocate for a moment, one might point out: well, what you did was pay for wildlife observations; what did you expect would happen, other than a huge increase in the number of observations? I mean, isn’t this what you’d expect, in almost any scenario? What’s new / not-obvious here?
I would think the challenge is in demonstrating that the data accumulated in this way can be helpful despite the many biases/challenges that might be involved (highlighted above). If so, a major effort in the paper should be put into highlighting these challenges and discussing how the data you collected can be useful despite the drawbacks. Merely the possibility of collecting observations via payment per-observations to local residents is in itself not particularly impressive.
Minor:
I couldn’t see the reference list; perhaps because I’m in the field and viewing the manuscript through my phone.
There’s some inconsistency in the currency used with respect to costs of analogous sampling/monitoring (USD and rupiah, I think).
==
As I wrote, I think this ms is worthy of publication, and I applaud the effort put into the project; but I’d suggest that quite some improvement can be gained by discussing – and perhaps addressing – the really challenging aspects of using CS to derive reliable conclusions for wildlife monitoring.
Author Response
Overall comment: This is an interesting manuscript. I find the topic important, and I think that publication of such studies, even if rather descriptive and methodological, is important in order to further the discussion and application of citizen science and community engagement in conservation research, particularly in rural/traditional settings.
I am not an expert on any of the topics of this paper – not tropical research, not great apes’ research, and not citizen science per se, even though I have touched upon each of these to some minor extent at some point. Please take my comments with a grain of salt, and forgive me if I am missing meaningful points.
I appreciate the clear writing and presentation, and – of course – the endeavor itself and the intimate involvement and engagement with the local community.
I have several comments that don’t preclude publication, but would probably require some serious discussion and perhaps also some addition of data/analysis of the data collected.
Response: We thank the reviewer for their thoughtful and encouraging overall assessment of our manuscript. We appreciate the recognition of the importance of publishing descriptive and methodological studies to advance citizen science and community engagement in conservation research, particularly in rural and traditional contexts. We also value the acknowledgement of our clear presentation and the close collaboration with local communities, which was central to the study.
While we understand that the reviewer does not consider themselves a specialist in all areas covered by the paper, we are grateful for the constructive feedback provided. We have carefully addressed each of the comments raised, adding further discussion and, where possible, incorporating additional data and analyses to strengthen the clarity and robustness of the manuscript.
Comment 1: My main concern is that the ms does not engage with, and seems nearly oblivious to, the major challenges of the kind of data that is collected here in comparison to data from structured surveys / transects / camera traps. Namely, that CS of this kind introduces a huge array of biases into the data that are very challenging to deal with and must be addressed (or at least seriously discussed). These include –
- Biases of places searched and places less-searched (nearer to the village / agricultural plots, near/along trails, in more open and less vegetation-dense habitat, perhaps less-extreme slopes, if topography is challenging at places)
- Biases re time-of-day
- Biases with respect to sampling effort: certain days (more/less on weekends?) / seasons / weather (less search during rain, for example)
- Hard-to-interpret absence of observations: is it because no one searched there, or because there are fewer target organisms at these places?
- No record of sampling effort: whether on a certain day 2 people walked along a trail searching for animals or 20 people did would make a huge difference, both with respect to how many observations occurred and with respect to the influence on animal behavior.
- Bias in favor of observations in habitat where vision is less impaired by vegetation density
Transects or other structured surveys have their own drawbacks, but at least don’t suffer from most of these biases. It doesn’t mean I’m in favor of structured surveys instead of CS, but these challenges need to be considered, discussed, and – to the extent possible – accommodated/compensated for in the analysis.
One (partial) solution is to carry out both and then use the results to validate each method and correct it for biases that don’t occur in the other.
Response 1: This is a valid point and indeed a general concern about unstructured and opportunistic citizen science-based biodiversity monitoring. Statistical methods have, however, advanced significantly, and there are now many robust ways to deal with these unstructured and opportunistic datasets, which have proven to be as accurate as structured surveys. A good example of such methods is described here: van Strien, A.J., van Swaay, C.A.M., Termaat, T., 2013. Opportunistic citizen science data of animal species produce reliable estimates of distribution trends if analysed with occupancy models. Journal of Applied Ecology 50, 1450-1458. As such, we have noted the risk of bias in our discussion.
As we are in the early stages of data collection, we haven’t decided yet how we will analyze the data, but most likely we will use Bayesian occupancy analyses similar to what we used in our 6-year citizen science pilot study in oil palm (see here for a technical write-up: https://www.biorxiv.org/content/10.1101/2025.07.11.664290v1.full). Furthermore, we aim to ground-truth the species occurrence patterns derived from the citizen science methods with traditional structured surveys, in this case, drone surveys for orangutans and gibbons. These are planned in year 2 of our study (we are currently in year 1), allowing for sufficient accumulation of citizen science data and resulting spatial distribution patterns of target species before we ground-truth them.
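To illustrate the core logic behind such occupancy analyses (a generic maximum-likelihood sketch, not the project's actual Bayesian pipeline; all parameter values and the grid-search fitting are invented for illustration), a minimal single-season occupancy model separates the probability that a site is occupied (psi) from the probability that an occupied site yields a detection on any one visit (p). This separation is what allows repeat visits, including visits with no detections, to carry information:

```python
import math
import random

random.seed(42)

# Hypothetical parameters: psi = occupancy probability, p = per-visit detection probability
PSI_TRUE, P_TRUE, N_SITES, N_VISITS = 0.6, 0.3, 100, 5

# Simulate detection histories: z_i ~ Bernoulli(psi); y_ij ~ Bernoulli(z_i * p)
histories = []
for _ in range(N_SITES):
    occupied = random.random() < PSI_TRUE
    histories.append([1 if (occupied and random.random() < P_TRUE) else 0
                      for _ in range(N_VISITS)])

def neg_log_lik(psi, p):
    ll = 0.0
    for h in histories:
        d, k = sum(h), len(h)
        if d > 0:
            # Detected at least once: site certainly occupied
            ll += math.log(psi) + d * math.log(p) + (k - d) * math.log(1 - p)
        else:
            # All-zero history: occupied-but-missed OR genuinely unoccupied
            ll += math.log(psi * (1 - p) ** k + (1 - psi))
    return -ll

# Crude grid search over (psi, p); a real analysis would use an optimizer or MCMC
grid = [i / 50 for i in range(1, 50)]
psi_hat, p_hat = min(((a, b) for a in grid for b in grid),
                     key=lambda ab: neg_log_lik(*ab))
print(f"psi_hat={psi_hat:.2f}  p_hat={p_hat:.2f}")
```

The key point for opportunistic data is the all-zero branch: a site with no detections is not assumed empty, which is exactly the correction for the "absence or no search?" ambiguity the reviewer raises.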
Comment 2: A second set of challenges arises from the nature of the CS implementation: once participants have strong incentives to report observations, apart from falsification (which you are aware of) there are other biases which will be introduced and need to be addressed: people will return to the same sites repeatedly day after day or week after week, to re-observe the same individual/group, leading to confirmation bias / pseudo-replication / other biases; people might track the same individual/group through the day and report them multiple times as they are active at different sites/activities (which is fine – but this information needs to be considered in the analysis); people might be tempted to actively create situations that would increase observations, like leave food/food-leftovers/peels that the animals might be attracted to.
Pseudo-replication, where observations are not really independent of one another and the same individuals are sighted repeatedly, is a serious concern.
You probably can’t currently treat all these biases – but it is at least important to consider and discuss them. If you can also carry out analysis that deals with some of these it would be even better.
Response 2: Again, we share the reviewer’s concern about duplication and pseudo-replication, and we, in fact, encounter this frequently in the data: 1) a group of people observe an orangutan nest, and every member of the group reports the same nest (despite our clear instructions not to do this); 2) the same observer reporting the same object multiple times on the same day. These issues were particularly prevalent in one village, not in the other three where we operate. We have developed methods to identify these problems and only financially reward the first unique sighting, and replication rates dropped very quickly. We have now added text in the revised manuscript about these methodological problems and how we addressed them. We have added some examples of pseudo-replication in the supplementary materials to demonstrate how we dealt with this challenge.
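As an illustration of the kind of de-duplication rule involved (a hypothetical sketch; the distance and time thresholds, field names, and coordinates are invented, not the project's actual criteria), same-species reports falling within a short distance and time window of an already-accepted report can be flagged as duplicates so that only the first unique sighting is rewarded:

```python
import math
from datetime import datetime, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS fixes, in metres
    R = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def first_unique(observations, radius_m=50, window=timedelta(days=1)):
    """Keep only the first report of each (species, place, time-window) cluster.

    `observations` is a list of dicts with keys observer, species, lat, lon, time.
    A report is a duplicate if an already-accepted report of the same species
    lies within `radius_m` metres and `window` of it (hypothetical thresholds).
    """
    accepted = []
    for obs in sorted(observations, key=lambda o: o["time"]):
        dup = any(a["species"] == obs["species"]
                  and obs["time"] - a["time"] <= window
                  and haversine_m(a["lat"], a["lon"], obs["lat"], obs["lon"]) <= radius_m
                  for a in accepted)
        if not dup:
            accepted.append(obs)
    return accepted

reports = [
    {"observer": "A", "species": "orangutan_nest", "lat": 0.1000, "lon": 112.0000,
     "time": datetime(2025, 8, 1, 8, 0)},
    {"observer": "B", "species": "orangutan_nest", "lat": 0.1001, "lon": 112.0001,
     "time": datetime(2025, 8, 1, 8, 5)},   # ~16 m away, 5 min later: same nest
    {"observer": "A", "species": "orangutan_nest", "lat": 0.1200, "lon": 112.0300,
     "time": datetime(2025, 8, 1, 9, 0)},   # kilometres away: a different nest
]
payable = first_unique(reports)
print(len(payable))  # only the two distinct nests are rewarded
```

Sorting by timestamp before filtering is what makes "first" well defined, so the earliest reporter of each object is the one paid.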
The Bayesian occupancy models that we use depend on repeat visits to the same area in order to estimate detection and non-detection rates for different species. So, revisiting an area in itself is not a problem and, in fact, a statistical requirement. We are, however, concerned about the non-independence of our observations and how this is affected by variable payments for different target species. On the other hand, if we end up generating tens of thousands of wildlife observations per village over several years, we may end up using different methods. In these Big Data contexts, machine learning can replace classical statistical models, because computation and predictive accuracy become more important than exact inference. At this stage, we don’t know yet exactly how we will address this challenge, but we believe that there is an answer. The current paper does not yet seek to provide that answer, though.
One final point to make in defence of our approach is that collecting large data volumes is not the primary objective. Instead, what we aim to do is for local people to re-evaluate their relationship to wildlife from a more consumerist view to a guardian perspective in which wildlife is an asset to be managed and maintained. How statistically robust the monitoring methods are may then become a secondary consideration if the primary objective of stabilizing threatened wildlife populations is achieved (although we recognize that this also requires evidence).
Comment 3: How did you estimate population sizes? Do you have means of identifying the same individuals and using a capture-recapture analysis? If so – what are the details? (I may have missed this, in which case I apologize)
Response 3: We do not estimate population sizes of target wildlife species. Instead, we use the citizen science data to generate spatial occupancy estimates. While occupancy generally correlates strongly with abundance (i.e. the likelihood of a species being present in a given area increases if it is more abundant), that correlation is not perfect (for example, species can be, and often are, spatially clumped). As pointed out above, specifically for orangutans (and maybe gibbons), we will conduct thermal drone surveys that should be able to count every individual of those species in a sample area. This will allow us to correlate our citizen science-based occupancy estimates with actual counts, and if that correlation is strong, we could extrapolate the abundance estimate across the entire village area. As we mentioned, this work is still in the planning phase.
Comment 4: It might be interesting for the reader to see some more statistics about the observations and which/why/how some were excluded due to inconsistency or low quality. This could demonstrate nicely your ability to control data quality. Perhaps provide a set of example-observations (recordings, pictures, videos)? This could give the reader a feeling of what the data looks like (again, perhaps this is in the supp material that I couldn’t access from my phone)
Response 4: We have added a figure in the supplementary file showing evidence of observations that are valid and those that are rejected due to no identifiable species or blurry photos.
Comment 5: Is the approach sustainable? Is it reasonable to plan to continue payment-for-observations in the longer run? (perhaps yes; I’m not implying otherwise, just noting that this needs to be considered/mentioned/discussed).
Response 5: This is a question that we often get asked. We are very cognizant of the fact that we don’t want to raise expectations only for the project to stop in a few years. We are actively pursuing avenues to secure long-term funding either through performance-based green finance or linking maintenance of biodiversity (as measured through the wildlife observation system and associated metrics) to existing government village funds. We have added some text on this in the manuscript in the future directions section.
Comment 6: Ethics: is it a good idea to be paying sums of money that are comparable to those gained from traditional work? It may attract some people to work on observations and set aside traditional sources of income. Is this OK, or does it create a risk that a problem will emerge if the program is stopped in the future? In other words – would we be creating a problematic reliance on this scheme? Similarly, is there a risk that some necessary but not-very-rewarding agricultural resources would be neglected? (e.g. when international conglomerates recruited local farmers in Ethiopia to grow plants for oil production, a local shortage of traditional foods like sorghum was created, costs increased, and some people went hungry).
Since the income is comparable to that from agriculture, we must ask about the societal impact: does it promote certain parts of society more than others (e.g. those that have smartphones, widening societal gaps)? Is there a risk that traditional kin/economic structures will be broken? (e.g. in some traditional societies in which traditional sources of income are commanded by the head of the family/clan, the introduction of paid labor led to societal restructuring, since young people get their own salaries and can choose not to adhere to the elders’ instructions/requests). These are questions that need to be considered when introducing a monetary scheme that is non-negligible compared to traditional rates of income. Perhaps in the specific society you dealt with these aren’t problems at all – I don’t know – but these are points to consider; they should be discussed (or at least mentioned) with respect to their relevance or irrelevance in this society, and highlighted as things to consider if the approach is to be adopted in other places/societies in the world.
These questions relate to the specific setting, as well as to the broader implementation of the approach, which is recommended in the manuscript.
Response 6: We are cognizant of the potential impacts that introducing a monetary reward system can have on other livelihood strategies, and are implementing longer-term socio-economic impact studies to understand this better. On the other hand, we understand that the four villages in which we operate have been fully integrated into local and international economies for decades, with illegal logging generating major income opportunities in this area (see references below). Villages are provided with annual development assistance from the national government (Dana Desa) based on village planning, and villagers make rational decisions about which opportunities provide them with the immediate benefits they seek. Our financial incentive scheme seeks to increase the value of wildlife with the objective that people start considering wildlife to be an asset worth protecting. The hope is that this will spur other economic activities that build on forest protection, for example, the production and sale of forest commodities with certified positive wildlife impacts. While all these new opportunities may indeed affect existing income or production opportunities (such as in the Ethiopian example mentioned by the reviewer), we don’t think this is a reason not to introduce our scheme. It is very obvious from our many decades of experience working in this region that without some form of financial benefit, forest and wildlife protection by villagers is unlikely to outcompete opportunities such as the conversion of forest to oil palm or rubber.
Furthermore, we see anecdotal evidence that our system is not displacing existing agricultural activities (i.e., we are not paying so much that all other work is halted). The wildlife observations vary seasonally in relation to the need for agricultural labour input. For instance, during August 2025, when Iban communities were busy clearing swidden land and Melayu communities were busy fishing, the number of observations declined sharply as people attended to communal village activities. Participation rates per village as a percentage of the total population are currently as follows: Village A 41.46%; Village B 14.42%; Village C 18.8%; Village D 7.04%.
Casson, A., Obidzinski, K., 2002. From New Order to Regional Autonomy: Shifting Dynamics of “Illegal” Logging in Kalimantan, Indonesia. World Development 30, 2133-2151.
Obidzinski, K., 2004. Illegal Logging and the Fate of Indonesia's Forests in Times of Regional Autonomy.
Comment 7: We must ask what the long-term consequences for the population are – positive and negative – and what measures we use to assess these (again, for this population and for others in which the approach might be implemented).
Response 7: Thank you for your question. This is a very valid point, and we take this aspect very seriously. We have just had a social baseline survey conducted by external consultants, and this will be followed up with another social survey in year 3 of the current funding. This is the empirical means by which we will understand the consequences of this approach on the communities involved. In addition, we monitor and respond to any informal feedback from the observers or the community as a whole with regard to the program. The local project coordinator visits all villages once a month and documents any positive and negative feedback he receives.
Comment 8: What is the end-game in this respect – for how long is the scheme reasonably sustainable, and what might replace it eventually? (should be mentioned/discussed)
Response 8: The end-game is that we secure long-term financial support to pay for observations, which are used to calculate an annual metric for biodiversity that is a measure of how well the community is maintaining its biodiversity values (based on species occupancy against a baseline). We are in the process of seeking private investment (e.g. orangutan bonds) or Indonesian government funds (Dana Desa/Village Funds) that will provide communities with capital for developing economic activities as well as financing the ongoing wildlife monitoring and management. We have highlighted this in our future directions section.
Comment 9: Finally, to play the devil’s advocate for a moment, one might point out: well, what you did was pay for wildlife observations; what did you expect would happen, other than a huge increase in the number of observations? I mean, isn’t this what you’d expect, in almost any scenario? What’s new / not-obvious here?
Response 9: Thank you for the thoughtful critique. We agree that an incentive should increase the number of observations; however, our data show that the response is not predictable. The increase varied markedly across the four villages and continued to fluctuate month-to-month despite a constant, readily available payment. For example, in August, when competing priorities arose, observations fell sharply even though payments were unchanged; similar dips occurred during major cultural ceremonies. These patterns indicate that incentives interact with local opportunity costs, social calendars, and capacity constraints, rather than simply buying a uniform surge in observations.
Comment 10: I would think the challenge is in demonstrating that the data accumulated in this way can be helpful despite the many biases/challenges that might be involved (highlighted above). If so, a major effort in the paper should be put into highlighting these challenges and discussing how the data you collected can be useful despite the drawbacks. Merely the possibility of collecting observations via payment per-observations to local residents is in itself not particularly impressive.
Response 10: Thank you for this helpful comment. We have revised the manuscript to foreground likely biases and explain their usefulness despite them, and we clarify that this is an early-stage pilot. We are prioritising data verification and accuracy now and will begin statistical validation in the next phase. We also note the novelty in Indonesia: to our knowledge, incentive-based, community-led wildlife observation has not been tested with rural communities at scale. The conventional approach typically involves hiring villagers as daily labour on scientist-led transects; our approach compensates community-originated, verified observations, extending spatial/temporal coverage and building local monitoring capacity. We believe this model has strong potential and appreciate the guidance to keep strengthening these components.
Comment 11: I couldn’t see the reference list; perhaps because I’m in the field and viewing the manuscript through my phone.
Response 11: We have checked the reference list and updated it according to the requirements of the journal.
Comment 12: There’s some inconsistency in the currency used with respect to costs of analogous sampling/monitoring (USD and rupiah, I think).
Response 12: Thank you, we have corrected all descriptions of cost to ensure that we have consistent and uniform use of USD.
Author Response File: Author Response.docx
Reviewer 3 Report
Comments and Suggestions for Authors
The topic is interesting and the paper is very well structured, presenting valuable information in this field of work. The article is adequately related to the relevant literature. The paper needs improvements to the data that are presented.
- Add supplementary data at the introduction (see comments in the text)
- Insert the table S1 and figure S1 in the main text
- In order to improve the quality of the manuscript, please provide graphs and additional data (see comments in the text)
The cited references are relevant and support any claims made in the article.
Comments for author File: Comments.pdf
Author Response
Overall Comment: The topic is interesting and the paper is very well structured, presenting valuable information in this field of work. The article is adequately related to the relevant literature. The paper needs improvements to the data that are presented.
- Add supplementary data at the introduction (see comments in the text)
- Insert the table S1 and figure S1 in the main text
- In order to improve the quality of the manuscript, please provide graphs and additional data (see comments in the text)
The cited references are relevant and support any claims made in the article.
Response: We thank the reviewer for their positive assessment of our manuscript, including the recognition of the relevance of the topic, the clear structure, and the value of the field data presented. We also appreciate the acknowledgement that the cited references were appropriate and supportive of the article’s claims.
In response to the suggestions for improvement, we have added supplementary data in the introduction as indicated, and we have moved Table S1 and Figure S1 into the main text. Furthermore, we have provided additional graphs and data as recommended in the detailed comments, which we believe improve the clarity and quality of the manuscript.
Comment 1: Add ‘sustainable management’ to keywords
Response 1: We appreciate this suggestion and agree that this term is relevant to our study. We have, therefore, added it to our list of keywords.
Comment 2: Abstract - “CS initiatives have been applied across a wide range of disciplines, including biodiversity monitoring, climate change studies, and conservation biology. These projects” - realign text and add references
Response 2: We have added references to support this statement.
Comment 3: Figure 1 - add a graph showing the top observers for each of the four villages
Response 3: Thank you. We have added a figure showing the top 5 observers for each village.
Comment 4: Under Figure 1 - provide a list with all species/each village (as supplementary file) and a graph with the number of species identified in each phase of the project (in the main text)
Response 4: We have included the species list for each village in the supplementary file. We have also added a graph showing the number of species identified per village per phase.
Comment 5: “stickers and focus group discussions (FGD), incorporating” - change to ‘FGDs’
Response 5: We have corrected all uses of abbreviations.
Comment 6: “Among these observers, 49 observers have recorded more than 100 observations, and the top 10 contributors have individually recorded between 1023 and 463 observations, underscoring the dedication of key individuals in biodiversity monitoring (Figure 1).” - “Re-write the data in the text. In Figure 1 you presented the top 20 observers with more than 100 records”
Response 6: Thank you, we have updated the text to reflect what is shown in Figure 1.
Comment 7: “The most frequently reported are orangutan nests (9,766 observations), white-rumped shamas (Copsychus malabaricus, 1,814 observations), sun bears (Helarctos malayanus, 467 observations), stingless bees (Trigona spp., 370 observations), Bornean orangutans (Pongo pygmaeus, 316 observations), long-tailed macaques (Macaca fascicularis, 305 observations), and Oriental magpie-robin (Copsychus saularis, 247 observations).” - “As an example case study you should provide figures with the distribution of orangutan nests across the four villages, or other important species (rare, endangered)”
Response 7: Thank you, we have provided the maps of the four villages showing the distribution of orangutan sightings.
Comment 8: “Scientific field teams conducting transects typically incur costs estimated at approximately USD 95 per transect [9], which is 107 times higher than our CS-based model.” - “Previously, you mentioned that the cost was 0.85 USD/observation. Is that the price per transect? Because there is a difference between a transect and an observation.”
Response 8: Thank you for highlighting that the cost comparison is not directly equivalent. We have revised the paragraph to clarify that our 0.85 USD per observation estimate is not directly comparable to per-transect or per camera trap cost, due to differences in scale and methodology. The text now emphasizes that, despite these differences, community-based approaches can collect large volumes of data at relatively low cost compared with traditional monitoring methods.
Comment 9: for the uniformity of data, provide the cost in USD.
Response 9: Thank you for noting this inconsistency in our values. While the reference to Rupiah was used in a generic statement, rather than a precise description of costs, we acknowledge that mixing two currencies in our discussion undermines the strength of our narrative. We have therefore converted all cost figures to USD for uniformity.
Comment 10: Discussion - discuss results in relation to similar studies in the literature and “As community members engaged in wildlife observation, some began reporting rare and endangered species more frequently, enriching biodiversity data and contributing to the identification of conservation priorities.” - “provide data in the text. How many species/village/phase?”
Response 10: Thank you for this suggestion. We have added a figure showing the unique species count per village per phase, and also the data in the main text.
Comment 11: Re-write the references according to the author's guidelines
Response 11: Thank you. We have revised all references to comply with the journal’s author guidelines.
Author Response File: Author Response.docx
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
- The revised manuscript provides almost no detail regarding possible biases, except noting that biases are expected. Since biases are perhaps THE main problem with CS data, it seems odd not to discuss this in more detail, in a paper whose point is basically a call to the community to use more CS.
- In particular, as I noted, the specific form that you are proposing of CS (pay-for-observations) is expected to create unique biases that are NOT characteristic of many/most CS datasets (see my earlier comments). These must be highlighted, if the point of the manuscript is to argue for the utilization of the approach you suggest.
The avoidance of this discussion (even after I pointed out its importance) seems as though there’s some misunderstanding regarding the role of a paper like this: it should be a service to the community, helping researchers and conservationists realize the utility of your approach and how to implement it. A sincere service to the community would highlight the drawbacks and discuss pitfalls and challenges, not only point out the good things and “hide under the rug” the problems; doing so would be a disservice.
- I would expect at least some engagement with the nitty-gritty details of the problems related to biases in the data, if only to demonstrate to the readers that this is a major issue in such data; outlining the specific biases that are known and how they might be treated, pointing out biases that may be unique to the system you propose (e.g. the problem of extreme pseudo-replication, the problem that people will start setting up attractors like food that would attract wildlife creating count-bias in the short term and behavioral change and population-dynamics’ change in the longer run) and how these might be addressed.
The current mention of these in section 4.2 is very minimal, and the changes in the revised version are surprisingly minor. (I mean, you added the sentence “We also recognise that citizen science can present a risk of bias due to the unstructured nature of sampling and observation.” and responded “there, we fixed it”.) The elaboration later on doesn't provide much more, except to highlight again that you think occupancy models can overcome some of the biases, which you do not detail; even in my brief comments there's more detail regarding which biases might come up and why they are a problem.
- The occupancy modelling that you plan to rely on is definitely more robust than naive models (as in many cases, where presence/absence is more reliable than quantity measurements), but I’m not completely sure that lessons learned from butterfly patch occupancy necessarily hold for orangutans – this requires testing, and – again – the utter reliance on this as a solve-it-all for the multiple expected biases makes me worry that you haven’t fully considered these challenges. Perhaps I’m wrong, of course.
- Note that one well-known bias in CS data is that you generally don’t know which sites were surveyed and did not have observations in them; that you don’t know surveying effort and intensity, etc.; this can readily be overcome in your system, but may require collection of data that you currently might not be collecting – the trajectory in time/space of each of the smartphones participating. You can get GPS readings that would tell you how much time was spent in the forest and where exactly the surveyors walked. This is potentially a powerful advantage of the scheme you’re setting up.
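For what such effort data could look like (a hypothetical sketch; the grid-cell size, logging interval, and coordinates are invented), binning regularly logged GPS fixes into grid cells yields a per-cell measure of minutes searched, which is exactly the effort covariate the reviewer describes:

```python
from collections import Counter
from datetime import datetime

def effort_by_cell(track, cell_deg=0.01, interval_s=60):
    """Approximate minutes of survey effort per grid cell from a GPS track.

    `track` is a list of (timestamp, lat, lon) fixes logged every `interval_s`
    seconds (a hypothetical logging rate); each fix credits interval_s seconds
    of effort to the cell nearest to it.
    """
    effort = Counter()
    for _, lat, lon in track:
        cell = (round(lat / cell_deg), round(lon / cell_deg))  # nearest cell centre
        effort[cell] += interval_s / 60  # accumulate minutes
    return effort

# A 30-minute walk logged once per minute, drifting slowly north within one cell
track = [(datetime(2025, 8, 1, 8, i), 0.1000 + 0.0001 * i, 112.0000)
         for i in range(30)]
minutes = effort_by_cell(track)
print(sum(minutes.values()))  # 30 one-minute fixes -> 30 minutes of effort
```

Cells with zero accumulated minutes can then be treated as "not surveyed" rather than "species absent" in downstream occupancy analyses.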
Minor: I don’t know the system, but I doubt that drone surveys provide ground truth free of biases. Most methods have biases, and typically no single method can provide a full assessment of the monitored species/population. I’d use the methods to validate one another and to complement one another, not treating either as “ground truth”. In this spirit, I’d aim to use multiple complementary methods, including transects, cameras, etc – and not only drone survey. (by the way, if drone surveys are anywhere close to providing full data – the future of monitoring in this system should go in that direction, because drones are becoming really cheap).
- You respond to comment 3 that you didn’t calculate population sizes, but Table 1 seems to provide population estimates. Can you elaborate in the text how you reach these? Can you add error bars / confidence intervals?
- Response 6-7-8: your responses are very system-specific (along the lines of “in our case this isn’t a problem because...”), but that misses the point of what my comment was about: the potential general problems of the approach you’re suggesting. As noted earlier – a ms like this is intended to be a service to the scientific community that suggests that a certain approach be adopted. For this to be a service, you should aim to highlight points that would carry over to other instances and similar – but also different – systems. If I point out a potential problem in the approach you suggest, saying “this isn’t a problem in this specific system” doesn’t obviate the need to point it out as a potential problem and discuss it (perhaps suggesting that people use a setup similar to yours that overcomes the problems, and use tests of whether it is a problem similar to those that you used, for example). Again – the point of a paper isn’t to report what you did and defend why it was OK, but rather to use it as a case study to make general points/discussion of the possibilities and challenges of the sort of thing you’re suggesting. In my opinion.
Author Response
Comment 1: The revised manuscript does not provide almost any detail regarding possible biases, except noting that biases are expected. Since biases are perhaps THE main problem with CS data, it seems odd not to discuss this in more detail, in a paper whose point is basically a call to the community to use more CS. In particular, as I noted, the specific form that you are proposing of CS (pay-for-observations) is expected to create unique biases that are NOT characteristic of many/most CS datasets (see my earlier comments). These must be highlighted, if the point of the manuscript is to argue for the utilization of the approach you suggest.
Response 1: Thank you for pointing out this oversight. Our main objective for the current paper was to describe the uptake of the citizen science approach in Kalimantan under different conditions (voluntary, incentive-based, paid observations), rather than to present data analyses. The ultimate goal of our program, however, is to generate robust statistics about wildlife populations, in addition to changing community perspectives about managing wildlife. We therefore agree that adding text on dealing with data biases is indeed required, and we have added a new text section (3.3) that, hopefully, addresses the reviewer’s concerns. We also added a line in the Abstract stating that “While the current manuscript does not provide statistical analyses of the wildlife data, we describe how we plan to overcome data biases that are inherent to opportunistic, unstructured survey efforts.” First, in the new text section, we point out that, while citizen science data are often opportunistic and result from unstructured survey efforts, they can nevertheless yield statistically robust occupancy estimates. This relies on Bayesian statistics in which the detection and non-detection of species other than the target species provide prior information that indirectly indicates survey effort for the target species. We work closely with a statistical expert who discusses these statistical models in van Strien et al. (2013), and we have tested these models in a previous 6-year citizen science pilot program elsewhere in Kalimantan (van Strien et al. 2025; Maharani et al. 2025). We are currently evaluating whether the same statistical methods that we used in oil palm citizen science can be employed in our current community program, but we are not yet in a position to publish the data in the current paper.
van Strien, A.J., van Swaay, C.A.M., Termaat, T., 2013. Opportunistic citizen science data of animal species produce reliable estimates of distribution trends if analysed with occupancy models. Journal of Applied Ecology 50, 1450–1458.
van Strien, A.J., Meijaard, E., Nardiyono, Suief, S., Zaini, S., 2025. Occupancy estimation of wild species in a palm oil plantation using unstructured data. bioRxiv, 2025.07.11.664290.
Maharani, N., Nardiyono, Munthe, C.R., Swayanuar, P., Ni’Matullah, S., Sueif, S., Zaini, S., Supriatna, J., Kusrini, M., Dennis, R., van Balen, B., van Strien, A., Meijaard, E., 2025. A novel citizen science-based wildlife monitoring and management tool for oil palm plantations. bioRxiv, 2025.01.12.632638.
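To make the mechanism concrete for readers, the core idea can be sketched in a few lines of code. This is not the model of van Strien et al. (2013), which is a Bayesian occupancy model; it is a deliberately simplified, hypothetical illustration (single season, constant detection probability, maximum likelihood by grid search) of the key point: any record of any species marks a grid cell as having been visited on that occasion, so the absence of a target-species record on that visit becomes an informative non-detection rather than missing data.

```python
import math
from collections import defaultdict

def detection_histories(records, target):
    """Build per-cell detection histories from opportunistic records.

    records: iterable of (cell, visit_id, species). Any record, of any
    species, proves the cell was surveyed on that visit; the target is
    scored 1 if reported on that visit, else 0 (informative absence).
    """
    visits = defaultdict(set)
    for cell, visit, species in records:
        visits[(cell, visit)].add(species)
    hists = defaultdict(list)
    for (cell, visit), spp in sorted(visits.items()):
        hists[cell].append(1 if target in spp else 0)
    return list(hists.values())

def log_likelihood(psi, p, hists):
    """Single-season occupancy log-likelihood (MacKenzie-style)."""
    ll = 0.0
    for h in hists:
        k, d = len(h), sum(h)
        if d > 0:  # cell certainly occupied
            ll += math.log(psi) + d * math.log(p) + (k - d) * math.log(1 - p)
        else:      # either occupied-but-missed, or truly empty
            ll += math.log(psi * (1 - p) ** k + (1 - psi))
    return ll

def fit(hists, step=0.01):
    """Crude maximum-likelihood fit by grid search over (psi, p)."""
    grid = [i * step for i in range(1, int(1 / step))]
    return max(((psi, p) for psi in grid for p in grid),
               key=lambda pp: log_likelihood(pp[0], pp[1], hists))
```

With ten cells surveyed three times each and the target reported in six, the fitted occupancy exceeds the naive 6/10, because some of the four "empty" cells are likely occupied but missed; this is the correction that makes unstructured data usable.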
Comment 2: The avoidance of this discussion (even after I pointed out its importance) suggests some misunderstanding regarding the role of a paper like this: it should be a service to the community, helping researchers and conservationists realize the utility of your approach and how to implement it. A sincere service to the community would highlight the drawbacks and discuss pitfalls and challenges, not only point out the good things while sweeping the problems under the rug; doing the latter would be a disservice. I would expect at least some engagement with the nitty-gritty details of the problems related to biases in the data, if only to demonstrate to the readers that this is a major issue in such data; outlining the specific biases that are known and how they might be treated, and pointing out biases that may be unique to the system you propose (e.g. the problem of extreme pseudo-replication, or the problem that people will start setting up attractors like food that would attract wildlife, creating count bias in the short term and behavioural and population-dynamic change in the longer run) and how these might be addressed. The current mention of these in section 4.2 is very minimal, and the changes in the revised version are surprisingly minor. (I mean, you added the sentence “We also recognise that citizen science can present a risk of bias due to the unstructured nature of sampling and observation” and responded “there, we fixed it”. The elaboration later on doesn’t provide much more, except to highlight again that you think occupancy models can overcome some of the biases, which you do not detail; even in my brief comments there is more detail regarding which biases might come up and why they are a problem.)
Response 2: As mentioned above, our new text section 3.3. addresses these concerns more transparently. While we do not yet provide a full analysis of the biases in citizen science data – this is beyond the scope of the current paper – we provide some preliminary data analysis and references to statistical methods for addressing biases in citizen science-based biodiversity monitoring data.
Comment 3: The occupancy modelling that you plan to rely on is definitely more robust than naive models (as in many cases, presence/absence is more reliable than quantity measurements), but I’m not completely sure that lessons learned from butterfly patch occupancy necessarily hold for orangutans – this requires testing, and – again – the utter reliance on this as a solve-it-all for the multiple expected biases makes me worry that you haven’t fully considered these challenges. Perhaps I’m wrong, of course.
Response 3: This is correct. The validity of occupancy analyses on the current citizen science data requires testing. On the other hand, we do know that citizen science data can robustly estimate the occupancy of low-density species, such as orangutans, as we showed in our previous oil palm citizen science studies (see references above). What we don’t know yet is whether our current paid observation system undermines the assumptions of occupancy modelling when relative detection rates between species are influenced by observer effort (i.e., observers focus on detecting the species they are paid most for). We believe that we can overcome these challenges, but we are not yet in a position to share evidence of this, as data collection and analyses (and decisions on which statistical models to run, and with which covariates) are still ongoing. We hope that our additional explanations in the new text section give the reviewer enough confidence that biases can be overcome.
Comment 4: Note that one well-known bias in CS data is that you generally don’t know which sites were surveyed and had no observations in them, and that you don’t know surveying effort and intensity, etc. This can readily be overcome in your system, but may require collecting data that you currently might not be collecting: the trajectory in time/space of each of the participating smartphones. You can get GPS readings that would tell you how much time was spent in the forest and where exactly the surveyors walked. This is potentially a powerful advantage of the scheme you’re setting up.
Response 4: Again, this is correct, but we refer to our comments above and the methods that are used to model survey effort in Bayesian statistical frameworks. Our total number of observations in the study area currently stands at 44,000; we expect to reach 75,000 observations in the first year, and some 250,000–300,000 observations by the end of our three-year study period. Because we can provide financial incentives for conducting surveys in grid cells that are more remote or harder to reach, we expect that ultimately we will have nearly full coverage of the entire forest area under the management of the four villages. That coverage should include repeat surveys for each grid cell, and species encountered during the visits will provide insight into detection and non-detection rates of our target species. While we haven’t fully tested the above methodological assumptions, we believe that robust occupancy patterns can be derived from the large data volume generated through the program.
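The reviewer's suggestion of deriving per-cell survey effort from smartphone GPS tracks can be illustrated with a short sketch. This is a hypothetical illustration, not our implementation: it assumes GPS fixes have already been projected to metres, uses an assumed 500 m grid, and credits each interval between consecutive fixes to the cell of the earlier fix.

```python
from collections import defaultdict

CELL_SIZE_M = 500  # assumed grid resolution (illustrative)

def cell_of(x, y, size=CELL_SIZE_M):
    """Map a projected coordinate (metres) to a grid-cell index."""
    return (int(x // size), int(y // size))

def effort_per_cell(fixes):
    """Sum dwell time (seconds) per grid cell from one GPS track.

    fixes: list of (t_seconds, x_m, y_m), sorted by time. The interval
    between consecutive fixes is credited to the cell of the first fix.
    """
    effort = defaultdict(float)
    for (t0, x0, y0), (t1, _, _) in zip(fixes, fixes[1:]):
        effort[cell_of(x0, y0)] += t1 - t0
    return dict(effort)
```

Aggregated per cell and day, these dwell times yield exactly the effort covariate that unstructured records lack, and they distinguish cells with zero effort from cells that were visited but produced no records.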
Comment 5: I don’t know the system, but I doubt that drone surveys provide ground truth free of biases. Most methods have biases, and typically no single method can provide a full assessment of the monitored species/population. I’d use the methods to validate one another and to complement one another, not treating either as “ground truth”. In this spirit, I’d aim to use multiple complementary methods, including transects, cameras, etc – and not only drone survey. (by the way, if drone surveys are anywhere close to providing full data – the future of monitoring in this system should go in that direction, because drones are becoming really cheap).
Response 5: The latest developments in orangutan drone surveys, especially those employing thermal as well as optical cameras, can provide a complete picture of orangutans in a study area, i.e., all orangutans present are counted. This is done in the early evening or very early morning, when orangutans are on their sleeping platforms (or “nests”) and the difference between body and ambient temperature is at its maximum. Because of limitations in drone capacity and flight distance, drone surveys realistically still require extrapolation from sampled areas to the overall study site. Nevertheless, we aim to ground-truth our citizen science-based occupancy estimates with drone surveys, at least for the drone sampling areas. Because drones count actual orangutans, rather than the orangutan nests normally used as an indicator of orangutan density, drone counts are much more accurate than nest counts, which also suffer from high uncertainty in nest decay and building rates. Where we disagree with the reviewer is that drones should be used instead of citizen science observations, unless the drones are owned and used by communities, with data and information going back to communities. Otherwise, these new technologies disempower communities rather than empower them. This, for us, is a key concern when bringing technologies into communities: who owns the technology, and who benefits from its use? This is why the citizen science methods are so important to us. Not only do they generate large data volumes, but the data are collected by those who are ultimately responsible for translating data into management. Furthermore, even though drone prices are dropping, thermal and high-resolution optical cameras remain expensive, with standard models currently used for orangutan surveys costing around USD 18,000 per unit. Conducting a survey with drone experts could therefore easily cost USD 25,000 or more. For that money, we can keep citizen science programs in one village going for two years, so for the time being, drones are not a cost-effective monitoring tool. They should be used in a complementary way, to verify that citizen science-based estimates are indeed precise and reliable.
Comment 6: You respond to comment 3 that you didn’t calculate population sizes, but Table 1 seems to provide population estimates. Can you elaborate in the text how you reach these? Can you add error bars / confidence intervals?
Response 6: As stated in the Methods section, “Targeted inquiries, such as a participatory orangutan (Pongo pygmaeus) population survey using the Pooled Local Expert Opinion (PLEO) method (van der Hoeven et al. 2004), enriched our understanding of wildlife perceptions across different age groups.” This is an interview-based method for gaining a very general understanding of past population trends from the pooled opinions of villagers. We have added some text to explain how the method works. We now provide the standard deviation of the estimates in Table 1. Note that these values are low, because the method uses the natural logarithm (e-log) of individual estimates before pooling them. While this method provides an indication of perceived population trends, it is not a reliable population estimate. Nevertheless, we believe that the perceived trend reflects reality, and that orangutans in the study area have indeed declined significantly over the past decades (see, for example, Yuliani et al. 2023).
van der Hoeven, C.A., de Boer, W.F., Prins, H.H.T., 2004. Pooling local expert opinions for estimating mammal densities in tropical rainforests. Journal for Nature Conservation 12, 193–204.
Yuliani, E.L., Bakara, D.O., Ilyas, M., Russon, A.E., Salim, A., Sammy, J., Sunderland-Groves, J.L., Sunderland, T.C.H., 2023. Bornean orangutan Pongo pygmaeus pygmaeus population estimate within and around Danau Sentarum National Park, Kapuas Hulu, West Kalimantan. Conservation Science and Practice 5, e12916.
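For readers unfamiliar with PLEO, the pooling step can be illustrated in a few lines. This is a simplified reading of the log-transform idea in van der Hoeven et al. (2004), not their full procedure: each informant's estimate is log-transformed, the mean and standard deviation are taken on the log scale, and the pooled estimate is back-transformed (i.e., it is a geometric mean). This is also why log-scale standard deviations look small next to the estimates themselves.

```python
import math
import statistics

def pleo_pool(estimates):
    """Pool informants' population estimates on the natural-log scale.

    Returns (pooled_estimate, sd_log): the back-transformed mean of the
    log-estimates (a geometric mean) and the SD of the log-estimates.
    """
    logs = [math.log(e) for e in estimates]
    return math.exp(statistics.mean(logs)), statistics.stdev(logs)
```

For three informants guessing 10, 20 and 40 orangutans, the pooled estimate is 20 (the geometric mean) with a log-scale SD of about 0.69, whereas the arithmetic mean, 23.3, would be pulled upward by the high guess; log-pooling damps exactly this kind of skew in expert opinion.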
Comment 7: Response 6-7-8: your responses are very system-specific (along the lines of “in our case this isn’t a problem because...”), but that misses the point of my comment: the potential general problems of the approach you’re suggesting. As noted earlier, a manuscript like this is intended to be a service to the scientific community that suggests that a certain approach be adopted. For this to be a service, you should aim to highlight points that would carry over to other instances and similar – but also different – systems. If I point out a potential problem in the approach you suggest, saying “this isn’t a problem in this specific system” doesn’t obviate the need to point it out as a potential problem and discuss it (perhaps suggesting that people use a setup similar to yours that overcomes the problem, and test whether it is a problem using checks similar to those that you used, for example). Again, the point of a paper isn’t to report what you did and defend why it was OK, but rather to use it as a case study for general points and discussion of the possibilities and challenges of the sort of thing you’re suggesting. In my opinion.
Response 7: Our main objective in writing this manuscript is to provide the wildlife conservation community with new ideas about low-cost wildlife monitoring methods that deeply engage the communities who, in community-based conservation, will ultimately be responsible for managing threats to wildlife. These methods are new and experimental, and we are learning on a day-to-day basis what works, what doesn’t, what constraints exist and what errors we made. We are also learning that each village in our sample of four reacts differently to incentives. Currently, we have only a limited understanding of how this approach would work in other socio-ecological settings, although our near-future scaling-up plans will provide important insights. So, while we want to promote this new way of monitoring wildlife and would like others to become as enthused about the approach as we are, we cannot yet claim that our approach applies more generally. In the manuscript we have tried to strike a balance between promoting the general applicability of the idea and ensuring that the reader understands that we are describing the very specific conditions of our study villages. Ultimately it is up to the individual reader to take key insights from our manuscript and see how they apply to the specific context of their own wildlife management challenge. Does that sufficiently address your point?
Reviewer 3 Report
Comments and Suggestions for Authors
The supplementary file - Species list - must be improved. Provide data regarding rare/endangered species from this area (see comments in the text). Improve the data from discussions (see comments in the text). Add supplementary data for Figures 5 and 6 (see comments in the text).
Comments for author File: Comments.pdf
Author Response
Comment 1: The supplementary file - Species list - must be improved. Provide data regarding rare/endangered species from this area (see comments in the text). Improve the data from discussions (see comments in the text). Add supplementary data for Figure 5 and 6 (see comments in the text).
Response 1: We have revised the supplementary file as requested. The species list has been updated to include information on class, rare and endangered species from the study area. We have also incorporated the additional data highlighted in the Discussion section and provided supplementary data for Figures 5 and 6.
Comment 2: In Figure 2, reorganise the list with all species/each village (as supplementary file) in excel file. Insert a new column mentioning the class as a major taxonomic unit. The information is redundant in the supplementary file. In the Species name column, either you mention only the genus, or the scientific name of the species, or the order.
Response 2: Thank you for your suggestion. We have reorganized the list with all species, and added a column for class. We have also fixed the redundancy of the data.
Comment 3: In Figure 5, why did you insert four more photos? They are similar to the ones above. Mention the village (A, B, C, D) for each photo.
Response 3: Thank you for the suggestion. We have updated the figures with labels. Figures 4 and 5 show the distribution of wildlife observations and orangutan observations, illustrating that these records include rare, endangered, and critically endangered species.
Comment 4: In Figure 6, mention the village (A, B, C, D) for each photo.
Response 4: Thank you for your suggestion. The figures have been updated with labelling.
Comment 5: In the discussion, discuss your results in relation to similar data in the literature. Present other case studies on involving local communities in biodiversity studies.
Response 5: We have added a paragraph to the discussion section on similar case studies involving local communities, which explore whether financial incentives lead to conservation outcomes.
Comment 6: In the discussion, where you state ‘Rare and endangered species were detected throughout the study, demonstrating that guided community participation can generate valuable biodiversity data.”, please provide data in the text. how many rare/endangered species/village/phase.
Response 6: Thanks for your suggestion. We have added in Figure 3 the number of rare and endangered species per village per phase.
Response to remaining in-text comments: Thank you for your thorough review of our manuscript. We have addressed all suggested changes, including updating the figure numbers throughout the text.