Systematic Review
Peer-Review Record

In-Bed Monitoring: A Systematic Review of the Evaluation of In-Bed Movements Through Bed Sensors

Informatics 2024, 11(4), 76; https://doi.org/10.3390/informatics11040076
by Honoria Ocagli, Corrado Lanera, Carlotta Borghini, Noor Muhammad Khan, Alessandra Casamento and Dario Gregori *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 1 July 2024 / Revised: 26 September 2024 / Accepted: 1 October 2024 / Published: 22 October 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This manuscript is a review on the use of machine learning methods to evaluate patient in-bed movements. Some issues should be considered before acceptance:

1. Please add "monitoring" and "systematic review" to the keywords.

2. Please mention the "information sources ....." section before the "eligibility criteria".

3. Does item 3 of the exclusion criteria refer to different forms of articles, such as letters, commentaries, reviews, etc.?

4. The search strategy that has been shown in the supplementary file is only for PubMed. What was the search strategy for the other databases?

5. The reason that has been mentioned for the assessments of risk of bias is not reasonable.

6. The PRISMA flowchart needs to be edited. For example, the number of retrieved articles from each database should be mentioned.

Author Response

We thank the reviewer for their valuable comments and insightful suggestions, which have helped us improve the quality of our manuscript. We have carefully considered each point raised and have made the necessary revisions to address the concerns. Below, we provide detailed responses to each comment, highlighting the changes made in the revised version of the manuscript.
This manuscript is a review on the use of machine learning methods to evaluate patient in-bed movements. Some issues should be considered before acceptance:

  1. Please add "monitoring" and "systematic review" to the keywords.
    Thanks for the suggestion.
  2. Please mention the "information sources ....." section before the "eligibility criteria".
    Done.
  3. Does item 3 of the exclusion criteria refer to different forms of articles, such as letters, commentaries, reviews, etc.?
    Thanks for pointing this out; clarified.
  4. The search strategy that has been shown in the supplementary file is only for PubMed. What was the search strategy for the other databases?
    Thanks for the suggestion. We have included the search strategies for all sources.
  5. The reason that has been mentioned for the assessments of risk of bias is not reasonable.
    Thank you for highlighting this issue. After careful consideration, we agree with the reviewer’s suggestion to include a quality assessment using the PROBAST tool, even though it may not be the most ideal option for our study.
  6. The PRISMA flowchart needs to be edited. For example, the number of retrieved articles from each database should be mentioned.
    Thanks, revised.

 

Submission Date

01 July 2024

Date of this review

05 Aug 2024 07:38:48

Reviewer 2 Report

Comments and Suggestions for Authors

Interesting literature review that aims to evaluate and synthesize the growing number of studies on the use of machine learning (ML) techniques to characterise patient in-bed movements and bedsore development.

The review covers a significant number of studies (56 papers) and ML models (76 models).

The assortment of ML models encompassed artificial neural networks, deep learning architectures, and multimodal sensor integration approaches.

The methodology is clear and relevant. Results were presented and discussed.

The review has identified some gaps in the literature.

 

There is a lack of assessment of risk of bias.

Although the review recognizes heterogeneity in ML models, it does not discuss in detail what such variability may imply for the results or the comparability of diverse studies.

How would you judge the generalizability of the findings in terms of the studies included in the review?

Did you take into consideration selection bias introduced by the PICO criteria and the chance that it could have excluded relevant studies?

Comments on the Quality of English Language

Overall, very good.

Author Response

We thank the reviewer for their valuable comments and insightful suggestions, which have helped us improve the quality of our manuscript. We have carefully considered each point raised and have made the necessary revisions to address the concerns. Below, we provide detailed responses to each comment, highlighting the changes made in the revised version of the manuscript.

Interesting literature review that aims to evaluate and synthesize the growing number of studies on the use of machine learning (ML) techniques to characterise patient in-bed movements and bedsore development.

The review covers a significant number of studies (56 papers) and ML models (76 models).

The assortment of ML models encompassed artificial neural networks, deep learning architectures, and multimodal sensor integration approaches.

The methodology is clear and relevant. Results were presented and discussed.

The review has identified some gaps in the literature.

 

  • There is a lack of assessment of risk of bias.

Thank you for highlighting this issue. After careful consideration, we agree with the reviewer’s suggestion to include a quality assessment using the PROBAST tool, even though it may not be the most ideal option for our study.

  • Although the review recognizes heterogeneity in ML models, it does not discuss in detail what such variability may imply for the results or the comparability of diverse studies.

We recognize the heterogeneity in the machine learning models included in the review, which encompasses differences in data inputs, preprocessing methods, and model architectures. This variability impacts the comparability of results and limits the generalizability of the findings. We will include a more detailed discussion about how such heterogeneity influences the robustness and application of the results.

 

  • How would you judge the generalizability of the findings in terms of the studies included in the review?

The generalizability of the findings is limited by the fact that many studies were conducted in controlled environments or with small sample sizes. Furthermore, few studies involved real-world clinical settings, which restricts the applicability of the findings to broader patient populations. Future research should aim to validate these findings in more diverse, real-world settings.

 

  • Did you take into consideration selection bias introduced by the PICO criteria and the chance that it could have excluded relevant studies?

Thanks for pointing this out.

The PICO framework allowed us to maintain a focused approach to the research objectives, selecting studies that were clearly relevant to the review’s questions. Our inclusion criteria were deliberately broad, encompassing both patients and volunteers, to ensure a comprehensive view and minimize the risk of selection bias. This approach allowed us to capture a wide range of relevant studies, reducing the likelihood of excluding important works due to the PICO criteria.

Comments on the Quality of English Language

Overall, very good.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

Thank you for your effort to respond to the comments. I think it can be accepted for publication.

Reviewer 2 Report

Comments and Suggestions for Authors

None

Comments on the Quality of English Language

None
