Peer-Review Record

Generative AI in Healthcare: Insights from Health Professions Educators and Students

Int. Med. Educ. 2025, 4(2), 11; https://doi.org/10.3390/ime4020011
by Chaoyan Dong 1,*, Derrick Chen Wee Aw 1,2, Deanna Wai Ching Lee 3, Siew Ching Low 1,4 and Clement C. Yan 1,5
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 6 March 2025 / Revised: 14 April 2025 / Accepted: 16 April 2025 / Published: 18 April 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This article examines the perceptions and adoption of Generative AI tools among health professions educators and students at a single academic hospital, employing a mixed-methods approach. While it offers initial insights into AI's educational potential, the study is limited by a narrow sample, a lack of student interviews, and an overreliance on ChatGPT for qualitative analysis. The findings emphasise the need for institutional guidelines and training, but fall short in methodological depth and novelty.

  • The study is based on data from a single hospital, Sengkang General Hospital, which restricts the generalizability of the findings to diverse healthcare and educational institutions.
  • There is a significant imbalance between students (n = 83) and educators (n = 26), as only educators were interviewed, resulting in a lack of deeper qualitative insights from students.
  • Quantitative data were analysed using only Microsoft Excel for frequencies and percentages, with no inferential statistics (e.g., chi-square, t-test) to support the claims.
  • Thematic analysis of qualitative data was initially conducted by ChatGPT-4, raising concerns about methodological rigor, depth, and reproducibility.
  • The article does not mention any steps taken to validate the survey instrument, such as piloting or calculating internal consistency (e.g., Cronbach’s alpha).
  • Though TAM and UTAUT are cited, the application of these models in survey design and result interpretation is shallow, lacking depth in conceptual integration.
  • The lack of figures or graphical summaries weakens the communication of data. Charts summarising adoption rates or comparative perceptions could enhance clarity.
  • Multiple points are repeated across sections without offering new insights (e.g., the need for institutional guidelines is mentioned in the Introduction, Results, Discussion, and Conclusion).
  • The research questions are not explicitly stated. While the study’s aim is implied, formal and testable research questions are absent.
  • Claims such as “our findings contribute to ongoing discourse” are not strongly supported by the novelty of the study or its limited scale.
  • There is no explanation of the sampling methods (e.g., convenience, purposive sampling), raising concerns about potential selection bias.
  • Although some citations are used, critical engagement with the global literature on AI in medical education is limited, often missing key systematic reviews or global comparisons.

Author Response

Comments 1: The study is based on data from a single hospital, Sengkang General Hospital, which restricts the generalizability of the findings to diverse healthcare and educational institutions.

[Response: We acknowledge that the study was conducted at a single site. However, the intent of this study was not to produce generalizable findings, but rather to provide in-depth insights from a real-world setting where educators and students across multiple health professions interact. The hospital hosts clinical placements for students from three medical schools, four nursing schools, and two allied health institutions, and the educators surveyed represent a wide range of professional backgrounds. This diversity within a single institution enhances the relevance and richness of the findings, even as we remain cautious in interpreting them beyond similar contexts.]

Comments 2: There is a significant imbalance between students (n = 83) and educators (n = 26), as only educators were interviewed, resulting in a lack of deeper qualitative insights from students.

[Response: We disseminated the survey to all clinical educators at the hospital, as well as to students who were on clinical placements during the study period. In practice, students responded to the survey invitation in greater numbers, while many clinical educators did not participate, likely due to competing clinical and teaching commitments.

The interviews were conducted between December 2024 and January 2025, several months after the survey period (March–May 2024). By the time interviews began, the students who completed the survey had already rotated to other institutions, making it logistically challenging to conduct follow-up interviews with them. As such, the qualitative component focused on educators. We acknowledge this as a limitation and suggest future studies incorporate student interviews to capture their perspectives more deeply.]

Comments 3: Quantitative data were analysed using only Microsoft Excel for frequencies and percentages, with no inferential statistics (e.g., chi-square, t-test) to support the claims.

[Response: As this is an exploratory study, the primary objective was to capture the current usage and perspectives regarding GenAI use among educators and students. Descriptive analysis using frequencies and percentages was appropriate to answer our research questions. Given the study’s focus on identifying patterns rather than testing hypotheses, inferential statistics were not applied. Future studies with larger and more balanced samples could incorporate inferential analysis to explore relationships and differences in greater depth.]

Comments 4: Thematic analysis of qualitative data was initially conducted by ChatGPT-4, raising concerns about methodological rigor, depth, and reproducibility.

[Response: As outlined in Section 2.4 (Data Analysis), we used OpenAI’s ChatGPT-4o for preliminary coding to support thematic analysis, followed by manual verification by the research team to ensure accuracy and alignment with participant responses. Each research team member independently coded assigned transcripts and cross-checked them against the AI-generated coding. Any discrepancies were reviewed collaboratively to ensure consistency and rigor in theme development.

We also cited a recent peer-reviewed study (Bijker et al., 2024) to support the use of ChatGPT for qualitative analysis, acknowledging both its potential and limitations. This hybrid approach aimed to enhance transparency, reproducibility, and depth while maintaining methodological rigor.]

Comments 5: The article does not mention any steps taken to validate the survey instrument, such as piloting or calculating internal consistency (e.g., Cronbach’s alpha).

[Response: As described in Section 2.2 (Study Design), Phase 1 involved a structured cross-sectional survey (Appendix A) guided by the TAM and UTAUT frameworks to explore perceptions of usefulness, ease of use, and concerns. The survey instrument was developed specifically for this study and did not undergo formal validation procedures such as piloting or internal consistency testing (e.g., Cronbach’s alpha). We acknowledge this as a limitation and noted it in the manuscript accordingly. Future research could build on this pilot by refining and validating the instrument for broader application.]

Comments 6: Though TAM and UTAUT are cited, the application of these models in survey design and result interpretation is shallow, lacking depth in conceptual integration

[Response: As previously mentioned, the survey questions were developed based on the key constructs of the TAM and UTAUT frameworks, such as perceived usefulness, ease of use, performance expectancy, effort expectancy, social influence, and facilitating conditions.

To address the comment on conceptual integration, we have revised the Discussion section to interpret the findings through the lens of both models more explicitly.]

Comments 7: The lack of figures or graphical summaries weakens the communication of data. Charts summarising adoption rates or comparative perceptions could enhance clarity.

[Response: We appreciate the suggestion. The manuscript includes four detailed tables summarising key quantitative and qualitative findings. We believe these tables convey the data in a clear and organized manner. Given space considerations and the descriptive nature of the study, we present data in tabular rather than graphical form.]

Comments 8: Multiple points are repeated across sections without offering new insights (e.g., the need for institutional guidelines is mentioned in the Introduction, Results, Discussion, and Conclusion).

[Response: We acknowledge the reviewer’s observation regarding the repetition of points, particularly around the need for institutional guidelines. The Introduction, however, does not mention institutional guidelines.

This theme emerged consistently across multiple data sources, including survey responses, interview findings, and thematic analysis, and was therefore reflected in several sections of the manuscript to represent participants’ perspectives accurately. We restructured the Results section (Sections 3.2 and 3.4) to reduce redundancy and ensure that each mention contributes a distinct insight or builds upon previous points.]

Comments 9: The research questions are not explicitly stated. While the study’s aim is implied, formal and testable research questions are absent.

[Response: Thank you. We reformatted the text; the research questions can now be found in Lines 98–103.]

Comments 10: Claims such as “our findings contribute to ongoing discourse” are not strongly supported by the novelty of the study or its limited scale.

[Response: We acknowledge that the study’s scale and single-site design limit the generalizability of the findings. However, as an exploratory study, our intention was not to generalize the findings, but to offer early, practice-based insights into the adoption and perceptions of GenAI within a clinical education setting. The study contributes to the ongoing discourse by highlighting profession-specific perspectives, differences in adoption between students and educators, and the need for institutional support.]

Comments 11: There is no explanation of the sampling methods (e.g., convenience, purposive sampling), raising concerns about potential selection bias.

[Response: We added the following to Section 2.3: “The convenience sampling strategy was used.”]

Comments 12: Although some citations are used, critical engagement with the global literature on AI in medical education is limited, often missing key systematic reviews or global comparisons.

[Response: Thank you for this helpful comment. We have revised the Introduction to include a more critical engagement with the global literature on AI in medical education. Specifically, we have incorporated insights from a recent comprehensive scoping review published as BEME Guide No. 84, which synthesizes global trends in GenAI adoption across knowledge acquisition, assessment, and emerging areas such as clinical reasoning and self-regulated learning. ]

 

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

Review Reports

A brief summary

This study used mixed methods to assess educators’ and students’ perceptions of generative artificial intelligence (GenAI) in health professions education. The results revealed favorable responses to GenAI alongside some concerns, such as limited AI literacy and potential negative effects on critical thinking skills. The study included educators from a range of clinical specialties, which yielded a correspondingly wide range of responses.

General concept comments

Overall, the manuscript is presented well. However, the authors could improve some points in the manuscript (see specific comments).

 

Specific comments

“1.1 Theoretical Framework...in integrating GenAI into HPE.” (page 2, line 58 - page 3, line 96)

This part could be moved to the 2.2 Research Design section (page 3, line 110-) in the Materials and Methods. It seems more natural to summarize the explanations of the Technology Acceptance Model and the Unified Theory of Acceptance and Use of Technology in one place.

 

“A total of...” (page 5, line 175-)

I recommend adding the numbers of clinicians and students you emailed for the online survey and its response rate (i.e., the percentage of people who responded to a survey) at the beginning of the Results section.

 

“Table 4. Interview findings by combining the data from 3 professions” (page 21, line 620-)

I think each theme is a bit long. As Braun and Clarke described, concise themes are better (Braun, V., & Clarke, V. (2006). Qualitative Research in Psychology, 3(2), 77–101). Could you make each theme more concise?

Author Response

Comments 1:

“1.1 Theoretical Framework...in integrating GenAI into HPE.” (page 2, line 58 - page 3, line 96)

This part could be moved to the 2.2 Research Design section (page 3, line 110-) in the Materials and Methods. It seems more natural to summarize the explanations of the Technology Acceptance Model and the Unified Theory of Acceptance and Use of Technology in one place.

[Response: Thank you for the suggestion. We appreciate the perspective on reorganizing the theoretical frameworks. However, presenting the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) in Section 1.1 provides important conceptual grounding early in the manuscript. This allows readers to understand the rationale for our study and how these models inform the research questions and methodological choices. We have also referred to these models in Section 2.2 (Study Design) to demonstrate their role in guiding the survey development. We hope this structure maintains clarity while strengthening the theoretical coherence of the study.]

Comments 2:

“A total of...” (page 5, line 175-)

I recommend adding the numbers of clinicians and students you emailed for the online survey and its response rate (i.e., the percentage of people who responded to a survey) at the beginning of the Results section.

[Response: Thank you. We added the total number of educators and students invited to take the survey, and the respective response rates, to the beginning of Section 3 (Results).]

Comments 3:

“Table 4. Interview findings by combining the data from 3 professions” (page 21, line 620-)

I think each theme is a bit long. As Braun and Clarke described, concise themes are better (Braun, V., & Clarke, V. (2006). Qualitative Research in Psychology, 3(2), 77–101). Could you make each theme more concise?

[Response: Thank you for sharing this article. We have shortened the themes in Table 4.]

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

This manuscript addresses a highly relevant and timely topic: the integration of Generative Artificial Intelligence (GenAI) in health professions education (HPE). It offers a useful snapshot of educators’ and students’ perceptions within a clinical training setting. The mixed-methods approach and the use of established theoretical models (TAM and UTAUT) are appropriate and relevant. However, the manuscript would benefit from revisions to improve clarity, depth, and structure. The following suggestions are intended to support the authors in strengthening the manuscript.

Introduction and theoretical framework

  • The introduction provides helpful context but would benefit from a clearer articulation of the study’s objectives or research questions. As it stands, the aim is implied rather than explicitly stated.
  • While the TAM and UTAUT frameworks are briefly introduced, their integration into the study design, analysis, and discussion is limited. A more detailed explanation of how these models guided the research would be useful.
  • The review of previous literature could be expanded, especially in relation to GenAI in clinical education, to better position the study within current academic debates.

Methodology

  • The mixed-methods design is appropriate, and the study includes a relatively large and diverse sample.
  • It is not clear whether the survey instruments were developed specifically for this study or adapted from existing validated tools. More detail on item development and design would be helpful.
  • Although the survey questions are structured around key constructs from TAM and UTAUT, it is not clear whether the items were adapted from existing validated instruments or developed specifically for this study.
  • No information is provided on whether the internal consistency of conceptually related items was assessed. Including basic reliability measures (e.g. Cronbach’s alpha) or explaining why they were not applicable would strengthen the methodological rigour.

Discussion

  • The integration of quantitative and qualitative findings could be more explicitly discussed, for example by highlighting where the two strands converge or diverge.
  • The discussion could reconnect more explicitly to the study’s stated frameworks, reflecting on how the findings confirm or challenge existing assumptions in TAM and UTAUT.
  • This section would benefit from clearer structure, using subheadings such as:
    • Theoretical implications (e.g. insights about TAM/UTAUT in HPE)
    • Practical implications (e.g. institutional policy, faculty development)
    • Limitations (the current section is brief; issues such as generalizability, self-report bias, and the use of AI tools for coding should be further acknowledged)
    • Future research directions (e.g. comparative studies, longitudinal follow-ups)

Conclusions

  • Consider ending with a concise summary of the study’s key contributions and takeaways.
  • Emphasize the value of including both educator and student perspectives, and the potential for GenAI to enhance, but also challenge, clinical education.

Author Response

Comments 1:

  • The introduction provides helpful context but would benefit from a clearer articulation of the study’s objectives or research questions. As it stands, the aim is implied rather than explicitly stated.

    [Response: Thank you for your comment. A similar suggestion was made by Reviewer 1. In response, we have clarified the research questions, which can now be found in Lines 98–101 of the revised manuscript.]

Comments 2:

  • While the TAM and UTAUT frameworks are briefly introduced, their integration into the study design, analysis, and discussion is limited. A more detailed explanation of how these models guided the research would be useful.

    [Response: Thank you. Reviewer 1 raised a similar suggestion. Please refer to our replies to Reviewer 1:

    As previously mentioned, the survey questions were developed based on the key constructs of the TAM and UTAUT frameworks, such as perceived usefulness, ease of use, performance expectancy, effort expectancy, social influence, and facilitating conditions.

    To address the comment on conceptual integration, we have revised the Discussion section to interpret the findings through the lens of both models more explicitly.]

Comments 3:

  • The review of previous literature could be expanded, especially in relation to GenAI in clinical education, to better position the study within current academic debates.

    [Response: Thank you for the helpful suggestion. A similar point was raised by Reviewer 1. In response, we have expanded the Introduction to include the 2024 BEME Guide No. 84, a comprehensive scoping review that offers a global perspective on GenAI in medical education, including its use in clinical learning contexts. Given the word limit, we hope this addition helps strengthen our study's positioning within the current academic discourse.]

Comments 4:

Methodology

  • The mixed-methods design is appropriate, and the study includes a relatively large and diverse sample.
  • It is not clear whether the survey instruments were developed specifically for this study or adapted from existing validated tools. More detail on item development and design would be helpful.
  • Although the survey questions are structured around key constructs from TAM and UTAUT, it is not clear whether the items were adapted from existing validated instruments or developed specifically for this study.
  • No information is provided on whether the internal consistency of conceptually related items was assessed. Including basic reliability measures (e.g. Cronbach’s alpha) or explaining why they were not applicable would strengthen the methodological rigour.

    [Response: The last three points under Methodology were about the survey design. Reviewer 1 made similar suggestions. Please refer to our reply to Reviewer 1:

    As described in Section 2.2 (Study Design), Phase 1 involved a structured cross-sectional survey (Appendix A) guided by the TAM and UTAUT frameworks to explore perceptions of usefulness, ease of use, and concerns. The survey instrument was developed specifically for this study and did not undergo formal validation procedures such as piloting or internal consistency testing (e.g., Cronbach’s alpha). We acknowledge this as a limitation and noted it in the manuscript accordingly. Future research could build on this pilot by refining and validating the instrument for broader application.]

Discussion

Comments 5:

  • The integration of quantitative and qualitative findings could be more explicitly discussed, for example by highlighting where the two strands converge or diverge.
  • The discussion could reconnect more explicitly to the study’s stated frameworks, reflecting on how the findings confirm or challenge existing assumptions in TAM and UTAUT.

    [Response: Reviewer 1 raised a similar suggestion. We have revised the Discussion accordingly. Please refer to Sections 4.1, 4.2, and 4.4 for the revision.]

Comments 6:

 

  • This section would benefit from clearer structure, using subheadings such as:
    • Theoretical implications (e.g. insights about TAM/UTAUT in HPE)
    • Practical implications (e.g. institutional policy, faculty development)
    • Limitations (the current section is brief; issues such as generalizability, self-report bias, and the use of AI tools for coding should be further acknowledged)
    • Future research directions (e.g. comparative studies, longitudinal follow-ups)

[Response: Thank you for the valuable suggestions. We revised the Discussion by following the structure you suggested.]

Comments 7:

Conclusions

  • Consider ending with a concise summary of the study’s key contributions and takeaways.
  • Emphasize the value of including both educator and student perspectives, and the potential for GenAI to enhance, but also challenge, clinical education.

[Response: Thank you for the valuable suggestions. We revised the Conclusion in line with your suggestions.]

Author Response File: Author Response.docx

Reviewer 4 Report

Comments and Suggestions for Authors

Thank you for allowing me to review this article; please find my comments below.
Abstract:
Clarify when and where the data was collected.
Introduction:
I suggest adding a paragraph about the history of AI adoption in healthcare education and previous educational technologies, such as simulation-based learning, and how these may inform current approaches to AI in health education. You may also discuss the ethical and professional challenges in AI-driven education.
Methods.
Elaborate more on the sampling approach by clarifying the following issues:
1- Specifying if convenience sampling, purposive sampling, or other methods were used.
2- Discussing how representative this sample is of broader educator/student populations in healthcare education and the limitations to generalizability.
Data analysis needs to cover validity, reliability, and limitations of AI-assisted qualitative analysis, and how manual verification was conducted to ensure accuracy and avoid potential bias introduced by AI-based coding.
Sampling needs to explain exactly how participants (students and educators) were identified and selected, and what criteria were used to include/exclude participants.
Clarify if the survey items were adapted from validated scales or were newly developed, and whether they were validated if developed by the authors.
How were participants chosen for qualitative interviews from the initial survey respondents?
Justify why ChatGPT-4o was chosen over traditional qualitative software or manual coding.

Comments on the Quality of English Language

English is fine

Author Response

Comment 1:

Abstract:
Clarify when and where the data was collected.

[Response: We added “a tertiary hospital in Singapore” in the Abstract.]

Comment 2:

Introduction:

I suggest adding a paragraph about the history of AI adoption in healthcare education and previous educational technologies, such as simulation-based learning, and how these may inform current approaches to AI in health education. You may also discuss the ethical and professional challenges in AI-driven education.

[Response: Thank you for the thoughtful suggestions. Due to word count constraints, we could not include a full paragraph detailing the history of AI adoption and previous educational technologies such as simulation-based learning. However, we acknowledge the relevance of this context and have aimed to position our study within current developments in GenAI integration. Regarding the ethical and professional challenges of AI-driven education, we have addressed these concerns in both the Results and Discussion sections, particularly in relation to critical thinking, plagiarism, data accuracy, and institutional responsibility.]

Comment 3:

Methods.


Elaborate more on the sampling approach by clarifying the following issues:
1- Specifying if convenience sampling, purposive sampling, or other methods were used.
2- Discussing how representative this sample is of broader educator/student populations in healthcare education and the limitations to generalizability.

[Response: Thank you for your comment. A similar point was raised by previous reviewers. In response, we have clarified the sampling strategy in the revised manuscript. Please refer to Line 137, where we specify the use of convenience sampling. Additionally, we have addressed the representativeness of the sample and the limitations related to generalizability in the study’s limitations section (Lines 389–394). We hope these revisions sufficiently address your concerns.]

Comment 4:
Data analysis needs to cover validity, reliability, and limitations of AI-assisted qualitative analysis, and how manual verification was conducted to ensure accuracy and avoid potential bias introduced by AI-based coding.

[Response: Thank you for this valuable suggestion. In the Data Analysis section (Lines 177–187), we described how manual verification was conducted to ensure the accuracy and integrity of the preliminary codes generated by the AI tool. This included cross-checking by multiple researchers to minimize bias and enhance trustworthiness. Additionally, we acknowledge the limitations of using AI-assisted qualitative analysis in the study’s Limitations section (Lines 407–412), including potential concerns around validity and interpretive depth.]

Comment 5:
Sampling needs to explain exactly how participants (students and educators) were identified and selected, and what criteria were used to include/exclude participants.

[Response: Thank you for your comment. We have clarified the sampling approach in Lines 136–141 of the revised manuscript, where we describe how participants (both students and educators) were identified and invited to participate in the study. This section outlines the inclusion criteria based on their involvement in clinical placements or educational roles at the study site during the research period. We hope this provides sufficient detail regarding participant selection.]

Comment 6:
Clarify if the survey items were adapted from validated scales or were newly developed, and whether they were validated if developed by the authors.

[Response: Thank you for your comment. As noted in Section 2.2 (Study Design), the survey items were developed by the authors based on key constructs from the Technology Acceptance Model (TAM) and Unified Theory of Acceptance and Use of Technology (UTAUT). The items were not adapted from previously validated scales, and the instrument did not undergo formal validation procedures. We acknowledge this as a limitation of the study and have noted it in the Limitations section.]

Comment 7:
How were participants chosen for qualitative interviews from the initial survey respondents?
[Response: Thank you for the suggestion. Please refer to Lines 153–156.]

Comment 8:

Justify why ChatGPT-4o was chosen over traditional qualitative software or manual coding.

[Response: Thank you. ChatGPT-4o was used for preliminary coding as part of an exploratory approach to examine the potential of AI-assisted qualitative analysis. This was not a replacement for traditional methods; rather, it served as a support tool. The research team subsequently conducted manual verification and cross-checking of the AI-generated codes to ensure accuracy and rigor. Our choice reflects the growing interest in using GenAI in research, especially in the context of a study exploring GenAI adoption. We have also acknowledged the limitations of this approach in the manuscript’s Limitations section (Lines 407–412).]

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

Although the authors addressed some comments superficially, they failed to address the main concern, which was the limited literature review. Without a review of similar work/literature, the novelty of the work cannot be justified.
This work is a simple survey and a summary of the survey, without any specific novelty.

Author Response

Thank you for highlighting the need to strengthen the literature review and justify the novelty of our study. In the previously submitted revision, we updated the Introduction and Discussion sections to integrate more of the existing literature on GenAI in health professions education (HPE). For example:

  • We included studies demonstrating the capabilities of ChatGPT in medical licensing exams (USMLE, MRCP) (Page 1, Lines 38 - 44).
  • We included literature on the educational benefits and risks of GenAI to position our study within the broader discourse (Page 2, Lines 54 - 63; Page 9, Lines 305 - 308; Page 9, Lines 334 - 340; Page 10, Lines 367-369).
  • We cited AMEE Guide 172, which includes guidance on preparing for AI integration in HPE (Page 11, Lines 387 - 389).

We have also clarified the novel contributions of our work, including:

  1. Conducting a mixed-methods study of GenAI adoption involving both clinical educators and students in a tertiary teaching hospital setting (Page 9, Lines 309 - 320).
  2. Applying the TAM and UTAUT frameworks to interpret technology adoption in HPE (Pages 9 - 10, Lines 321 - 352).
  3. Highlighting differences between educators and students in GenAI usage, motivation, and concerns, providing practical insights for institutional planning, faculty development, and ethical policy design (Page 10, Lines 354 - 361).

These revisions have already been made; as a result, we propose that no additional changes are necessary at this time.

Reviewer 2 Report

Comments and Suggestions for Authors

The authors appropriately revised the manuscript. The manuscript is acceptable for publication.

Author Response

Thank you very much for reviewing our manuscript and providing constructive feedback for us to improve the manuscript.
