Article
Peer-Review Record

Clinical Evaluation of the ButterfLife Device for Simultaneous Multiparameter Telemonitoring in Hospital and Home Settings

Diagnostics 2022, 12(12), 3115; https://doi.org/10.3390/diagnostics12123115
by Francesco Salton 1,*, Stefano Kette 1, Paola Confalonieri 1, Sergio Fonda 2, Selene Lerda 3, Michael Hughes 4, Marco Confalonieri 1 and Barbara Ruaro 1
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 30 October 2022 / Revised: 1 December 2022 / Accepted: 2 December 2022 / Published: 10 December 2022

Round 1

Reviewer 1 Report

The study was scientifically sound. Consecutive patients were recruited and the reference device was appropriate. Daily data collection for a suitable duration ensured a clear signal. I also felt the writing emphasized the correct points, drawing out what is perhaps the main finding: that the system was usable and largely accurate, except for RR.

My question is really around RR: "which was deemed to be clinically acceptable". I do not understand how the clinicians reached this conclusion, given that the setting was a respiratory ward. A more suitable approach would have been to pre-define the acceptable accuracy level or to cite a study establishing the clinically meaningful error in RR. Given that the mean RR was 17.8 breaths/min, an error of 26% (roughly 4.6 breaths/min) seems clinically significant to me. Perhaps the clinical team did not trust the reference device.

I think expounding on this may be useful to readers: Why did the clinicians deem this level of error to be clinically acceptable?

Also, it would be good to understand whether the errors occurred at the higher end of the RR spectrum or across all ranges. We have done these studies before, and it is useful to identify the range of RR values for which the system is accurate.

I commend the authors for a robust study with encouraging results and a well-written paper.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

No specific comments.

Author Response

We would like to thank the Reviewer for taking part in the review process of our paper.

Reviewer 3 Report

Dear Authors,

The paper presents a study in the context of practical and current issues in mHealth. Its aim is to evaluate the ButterfLife device.

Please describe the knowledge gap in mHealth studies and the aim of your paper in the introduction, and add a research methodology with methods appropriate to the research aims.

Additionally, describe the limitations, and consider comparing your research findings to previous studies in the field. Please present the background of the study with appropriate references from the field of artificial intelligence in health care; the context and the scope of the research need to be added.

Best regards

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 3 Report

Dear Authors,


Thank you for your answers.

Please add the knowledge gap and the knowledge implications of your research.

Additionally, instead of practical issues, the theoretical background of the study should be described.

Best regards

Author Response

Please see the attachment.

Author Response File: Author Response.docx
