Climate Resilience and Adaptation in West African Oyster Fisheries: An Expert-Based Assessment of the Vulnerability of the Oyster Crassostrea tulipa to Climate Change

Round 1
Reviewer 1 Report
No comments.
Author Response
There were no comments from this reviewer.
Reviewer 2 Report
The authors describe the importance of C. tulipa in the area and how various factors such as climate change may affect the status of the resource.
The manuscript is fairly clearly written, and the methods and results are clearly reported, but details of the vulnerability assessment framework's estimates are missing.
The text with some minor revisions is attached.
Comments for author File: Comments.pdf
Author Response
We have provided further details on the vulnerability assessment framework, with a newly inserted Figure 1 now summarizing the methodology. All changes suggested by the reviewer have been made and are highlighted in yellow in the text.
Reviewer 3 Report
This paper used an 'expert assessment' method to understand the vulnerability of an oyster species, Crassostrea tulipa, to climate change. The topic is timely, given the high interest in bivalve mariculture and its potential: understanding the biggest threats and the vulnerability of the species is therefore of paramount importance. The authors do a good job of highlighting the need for the study in the introduction, a well-written section of the manuscript on which I don't have many suggestions.
What I am struggling with is the methods (and therefore the results) sections. First of all, I have to say I had no access to the supplementary materials; they were nowhere to download and not at the end of the MS. Maybe they would have helped my understanding; on the other hand, the supplementary materials shouldn't hold such a big role in the understanding of the manuscript.

I would suggest expanding on the way the experts conducted the assessment. If there was a species profile (which, as I understand, is unpublished: is it getting published elsewhere? Otherwise it would be nice to include it), and there are scenarios, why rely on 'expert assessments'? I am also wondering how the expert answers were kept independent, especially as the experts met (the MS mentions the answers could be changed after the meeting, but perhaps the meeting could still have influenced the answers). I am no expert in these types of assessment, but it does not seem independent to me. Some profiles of the experts could also be useful. Was nine enough? Were they diverse enough, spanning all levels of expertise? Regarding the quality table: surely if data were available then all of the experts would have answered the same (since the scores are based on data vs. judgement); should these be kept separate somehow?
The results suffer from the difficulty of understanding the methodology.
The discussion is well written, but here we have a lot of information about the influence of temperature on the physiology, etc., that seems somewhat out of place and should perhaps be given space in the methods or results as a species profile.
Author Response
Comment
What I am struggling with is the methods (and therefore the results) sections. First of all, I have to say I had no access to the supplementary materials; they were nowhere to download and not at the end of the MS. Maybe they would have helped my understanding; on the other hand, the supplementary materials shouldn't hold such a big role in the understanding of the manuscript.
Response
Supplementary Sheets 1, 2, and 3 were submitted together with the manuscript; it appears they were not made available to the reviewer. We agree that supplementary materials should not play such a big role in the understanding of the manuscript. However, the supplementary information consists mainly of large tables that would take up too much space in the manuscript itself. Supplementary Sheet 1 describes each climatic and biological attribute, including the criteria used for scoring it. Sheet 2 is the blank table used by each expert for scoring. Sheet 3 is a large Excel file containing the complete raw dataset obtained from the experts. We have added text to the manuscript (highlighted in yellow) to guide readers on what information the supplementary sheets hold.
Comment
I would suggest expanding on the way the experts conducted the assessment.
Response
We have provided further clarification of the assessment process by including a figure that summarizes the assessment framework. Please see Figure 1.
Comment
If there was a species profile (which, as I understand, is unpublished: is it getting published elsewhere? Otherwise it would be nice to include it)
Response
We have now provided a summary of the species profile in the text. Please see Table 2.
Comment
Why rely on 'expert assessments'? Why nine experts?
Response
Expert elicitation/judgement is a structured approach to systematically consulting experts on uncertain issues. It is most often used to quantify ranges for poorly known parameters, but it can also be useful for developing qualitative issues such as definitions, assumptions, or conceptual (causal) models. Thorough preparation and the systematic design and execution of an expert elicitation process may increase the validity of its outcomes and the transparency and trustworthiness of its conclusions. Various expert elicitation protocols and methods exist; however, these are often not universally applicable and need customization to suit the needs of a specific study. In this paper, we adapted the methodology for assessing the vulnerability of marine fish and shellfish species to a changing climate used by Morrison et al. (2015). We have now provided a justification for using the expert assessment approach, including a figure that summarizes the framework to bring more clarity to the reader. Please see lines 152-166 for this new information.
The number of experts varies with the application (time and money, number of available experts, etc.). There is no optimum number of experts, but the literature suggests that a range of four to seven is appropriate (Linstone and Turoff, 2002; Angus et al., 2003).
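For concreteness, the short Python sketch below shows one way expert attribute scores of this kind can be aggregated into a component score and a qualitative rank, in the spirit of Morrison et al. (2015). It is an illustration only, not the scoring rules applied in this study (those are described in Supplementary Sheet 1); the attribute names, the scores, and the category thresholds are all assumed for the example.

    # Illustrative sketch only -- not the study's actual scoring rules, which
    # are described in Supplementary Sheet 1. Attribute names, scores, and
    # category thresholds below are assumptions made for the example.
    from statistics import mean

    # Hypothetical sensitivity scores: each of nine experts assigns each
    # attribute a value from 1 (low) to 4 (very high).
    sensitivity_scores = {
        "temperature_tolerance": [2, 3, 3, 2, 3, 2, 3, 2, 3],
        "salinity_tolerance":    [3, 3, 4, 3, 2, 3, 3, 4, 3],
    }

    def component_score(attribute_scores):
        """Mean of the per-attribute means across all experts."""
        return mean(mean(scores) for scores in attribute_scores.values())

    def rank(score):
        """Map a numeric component score to a qualitative category
        using assumed, illustrative thresholds."""
        if score < 2.0:
            return "Low"
        if score < 2.5:
            return "Moderate"
        if score < 3.0:
            return "High"
        return "Very High"

    s = component_score(sensitivity_scores)
    print(f"Sensitivity: {s:.2f} ({rank(s)})")  # -> Sensitivity: 2.83 (High)

In Morrison et al. (2015)-style assessments, the same aggregation is applied to the exposure attributes, and overall vulnerability is then derived by combining the two component ranks.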
Comment
I am also wondering how the expert answers were kept independent, especially as the experts met (the MS mentions the answers could be changed after the meeting, but perhaps the meeting could still have influenced the answers). I am no expert in these types of assessment, but it does not seem independent to me. Some profiles of the experts could also be useful. Was nine enough? Were they diverse enough, spanning all levels of expertise? Regarding the quality table: surely if data were available then all of the experts would have answered the same (since the scores are based on data vs. judgement); should these be kept separate somehow?
Response
The goal of the in-person meeting was to offer experts the chance to explain the rationale behind their scores and to change their scores based on new information provided during the workshop. The process is not intended to force consensus among the group or to compromise the independence of the scoring; rather, it helps identify and fix errors, reduce individual bias, encourage buy-in from the experts, and increase the precision of the final scores. We have incorporated this explanation into the text. Experts may or may not change their scores after the meeting.
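To illustrate what 'precision' means here: the spread of scores across experts for an attribute can narrow after the workshop round without the group being pushed to a single consensus value. The numbers in the short sketch below are invented for illustration and are not data from this study.

    # Invented scores for a single attribute, before and after the workshop,
    # to illustrate "precision": the revision round can narrow the spread of
    # scores without forcing a single consensus value.
    from statistics import mean, stdev

    before = [1, 2, 4, 3, 2, 4, 1, 3, 2]  # independent first-round scores (9 experts)
    after  = [2, 2, 3, 3, 2, 3, 2, 3, 2]  # revised scores after discussion

    for label, scores in (("before", before), ("after", after)):
        print(f"{label}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}")

In this invented example the mean is unchanged, but the standard deviation falls from about 1.13 to about 0.53: the experts agree more closely without having been forced to a common score.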
Comment
The results suffer from the difficulty of understanding the methodology.
Response
We believe that, having addressed the preceding concerns, the results will now be clearer to our readers.
Comment
The discussion is well written, but here we have a lot of information about the influence of temperature on the physiology, etc., that seems somewhat out of place and should perhaps be given space in the methods or results as a species profile.
Response
We have revised the opening text on temperature in the discussion to situate it within the context of the study's findings.