Innovative Approach in Nursing Care: Artificial Intelligence-Assisted Incentive Spirometry
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors

I reviewed your article “Innovative Approach in Nursing Care: Artificial Intelligence-Assisted Incentive Spirometry”. In this study, the authors developed an AI-assisted diagnostic and monitoring system to address the lack of feedback in traditional mechanical spirometers. The study is quite interesting, and the authors demonstrate considerable effort. An examination of the abstract reveals a sufficient flow and reflects the study’s content.

The introduction is well-founded, but only a reference to the literature comparison table is made. It would be appropriate to add a comment regarding Table 1. It would also be helpful to detail the relevant section to clarify the originality of the study and its contribution.

The term “simulated scenario” should be explained in the Materials and Methods section. If real patient data were not used, this should be clearly stated.

The significant imbalance in performance classes is outlined, but its impact on model generalizability is not discussed. SMOTE has been implemented, but the risks of excessive synthetic data generation should also be briefly discussed. The “low” class, in particular, contains only one record. If data augmentation is performed for this class, there will be a risk of excessive synthetic data generation. This should be explained.

Multithreading and 33 ms performance are mentioned, but no information is provided about the hardware used. Providing this information would be appropriate for feasibility purposes.

It is appropriate to present performance results in tables and graphs in the Results section. However, if similar studies exist in the literature, a comparison table would be helpful. This will make comparing the model's success more realistic.

The conclusion section is detailed and appropriate. Revisions made in line with the suggestions will increase the scientific quality of the study and contribute significantly to the literature.
Author Response
Comments 1:
I reviewed your article “Innovative Approach in Nursing Care: Artificial Intelligence-Assisted Incentive Spirometry”. In this study, the authors developed an AI-assisted diagnostic and monitoring system to address the lack of feedback in traditional mechanical spirometers. The study is quite interesting, and the authors demonstrate considerable effort. An examination of the abstract reveals a sufficient flow and reflects the study’s content.
The introduction is well-founded, but only a reference to the literature comparison table is made. It would be appropriate to add a comment regarding Table 1. It would also be helpful to detail the relevant section to clarify the originality of the study and its contribution.
Response 1:
Table 1 presents a comparative analysis of AI and digital spirometry approaches. While existing solutions offer portability or clinical decision support, they often rely on additional hardware or specialized sensors, which can increase costs and limit accessibility. In contrast, the system proposed in this study utilizes a standard tablet camera and powerful image processing algorithms, providing a low-cost, software-based solution without requiring any additional hardware. The developed system is designed for use in both hospital and home environments. By combining real-time image processing, machine learning classification, and a user-friendly interface within a single platform, it offers a solution that contributes to nursing care and supports patient management.
Comments 2: The term “simulated scenario” should be explained in the Materials and Methods section. If real patient data were not used, this should be clearly stated.
Response 2: The dataset used in the study included clinical data and spirometric measurements from 250 patients based on simulated scenarios. It is important to note that this dataset was simulated and not collected from real patients. The simulated data were generated based on clinical parameters and spirometry patterns observed in standard postoperative care protocols.
Comments 3: The significant imbalance in performance classes is outlined, but its impact on model generalizability is not discussed. SMOTE has been implemented, but the risks of excessive synthetic data generation should also be briefly discussed. The low class, in particular, contains only one record. If data augmentation is performed for this class, there will be a risk of excessive synthetic data generation. This should be explained.
Response 3:
The original class distribution exhibited significant imbalance, with the “low” performance class containing only a single sample (Table 2). Such an imbalance can bias machine learning models toward the majority classes (“nice” and “perfect”) and hinder generalization to classes that are underrepresented in real-world settings. To mitigate this, the SMOTE algorithm was used to synthetically oversample the minority classes. The inherent limitations of this approach, particularly for the “low” class, were acknowledged: generating many synthetic samples from a single data point introduces the risk of creating artifacts that do not accurately reflect the true variance within that class. Consequently, while SMOTE improves the model's ability to recognize the possibility of a “low” performance classification, its generalizability for this class across a diverse clinical population remains to be confirmed through future validation with a larger, real-world patient dataset. The primary goal of data augmentation was to prevent the model from completely ignoring these classes, rather than to perfectly model the complex features of the minority classes.
Comments 4: Multithreading and 33ms performance are mentioned, but no information is provided about the hardware used. Providing this information would be appropriate for feasibility purposes.
Response 4: A multi-threaded architecture was used for real-time performance optimization. The image processing pipeline was designed with a pipelined architecture, ensuring real-time performance by keeping the processing time per frame below 33 ms [26]. All performance measurements and timing results were obtained on a tablet PC equipped with a mid-range Intel Core i5 series processor and 16 GB of RAM, demonstrating the system's applicability on consumer-grade hardware.
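The producer/consumer split behind the multi-threaded, pipelined design described above can be sketched as follows. This is an illustrative sketch only: the frames are dummy integers and the processing step is a placeholder, not the study's actual image-processing code, but it shows how a bounded queue decouples capture from processing while the per-frame latency is checked against the ~33 ms real-time budget.

```python
import queue
import threading
import time

FRAME_BUDGET_S = 0.033  # ~33 ms per frame for real-time (~30 fps) operation

def capture(frames: "queue.Queue", n_frames: int) -> None:
    """Producer thread: pushes raw frames (dummy integers here) to the queue."""
    for i in range(n_frames):
        frames.put(i)
    frames.put(None)  # sentinel: end of stream

def process(frames: "queue.Queue", latencies: list) -> None:
    """Consumer thread: times the per-frame processing stage."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        start = time.perf_counter()
        _ = frame * 2  # placeholder for thresholding/classification work
        latencies.append(time.perf_counter() - start)

frames: "queue.Queue" = queue.Queue(maxsize=8)  # bounded queue applies back-pressure
latencies: list = []
t1 = threading.Thread(target=capture, args=(frames, 100))
t2 = threading.Thread(target=process, args=(frames, latencies))
t1.start(); t2.start(); t1.join(); t2.join()

assert all(dt < FRAME_BUDGET_S for dt in latencies)  # every frame within budget
```

The bounded queue is the key design choice: if processing falls behind, capture blocks instead of accumulating stale frames, which keeps the displayed level current.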
Comments 5: It's appropriate to present performance results in tables and graphs in the Results section. However, if similar studies exist in the literature, a comparison table would be helpful. This will make comparing the model's success more realistic.
Response 5:
To illustrate the performance of the developed models, a comparative analysis with similar studies in the field is presented in Table 7. While direct comparisons are difficult due to differences in datasets, specific tasks (e.g., validity checking and performance classification), and evaluation criteria, the ensemble models achieved highly competitive performance benchmarks. This comparison highlights the unique contribution of the study to providing high accuracy in an integrated, low-cost, nurse-centered solution designed for practical application in a variety of care settings.
Table 7. Comparative analysis of model performance with similar studies in the literature.
| Study | Technology Used | Primary Task | Best Reported Accuracy / R² | Key Differentiator of This Study |
|---|---|---|---|---|
| Viswanath et al. [14] | Smartphone + ML | Spirometry validity | ~95% (Accuracy) | Camera-based low-cost alternative; classifies performance, not just validity. |
| Burns et al. [15] | Digital Spirometer + Data Log | Data capture & usability | N/A (Feasibility Study) | Software-only solution; no additional hardware required. |
| Al-Anazi et al. [10] | AI-supported monitoring | Clinical decision support | High (Review) | Adds real-time, image-based monitoring and patient-facing feedback for adherence. |
| This Study | Tablet camera + ML | Validity check & performance classification | 100% (Accuracy), 1.0 (R²) | Integrated, low-cost solution for both clinical and home use, reducing nursing workload. |
Author Response File:
Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors

Dear authors, I read your manuscript with interest.
Here are my concerns:
-The introduction is well structured, but it is not clear whether the problem the study aims to address is that nurses must visually supervise the spirometry to be sure it is properly conducted. If so, I would add a more concise knowledge-gap definition.
-Figure 1 texts are not of sufficient quality to be intelligible. Please fix this.
-line 130: to be reproducible, the simulated dataset should be included as supplementary materials. Moreover, it is not clear whether a real person simulated the data or whether another real dataset acted as the source.
-lines 130 and 140-146: the correct methodological way to conduct the analysis is to use SMOTE only on the train set and obviously not on the entire dataset, even if simulated. That is because class imbalance is definitely real, and the performance must be evaluated against reality. From the text this aspect is not clear; please give some assurance that this methodology has been followed.
-line 149: Figure 1 may not be a correct reference, please check this.
-lines 265, 303, 340... these parts are redundant. My suggestion is to build a separate section titled "metrics" in which to explain the performance metrics, which should be equal for all the algorithms.
-usually when dealing with ML algorithms, such a small dataset (250) is a matter of concern. I think this aspect deserves a deep discussion.
Kind Regards
Author Response
Comments 1: The introduction is well structured, but it is not clear whether the problem the study aims to address is that nurses must visually supervise the spirometry to be sure it is properly conducted. If so, I would add a more concise knowledge-gap definition.
Response 1: The effectiveness of incentive spirometry depends on accurate and consistent application. This has traditionally required direct or one-on-one visual supervision by nurses to achieve correct technique [4,8]. This manual monitoring model is very difficult to maintain in modern healthcare settings characterized by high nurse-to-patient ratios and clinical demands. It creates a significant workload for nurses and limits the frequency and quality of supervision, ultimately compromising patient outcomes. This study addresses this gap by developing an AI-powered system that automates the supervision and feedback process. This system aims to augment the nurse's oversight role (ensuring correct execution of exercises) while also optimally utilizing nursing time for other critical tasks, potentially improving both care efficiency and patient compliance.
Comments 2: Figure 1 texts are not of sufficient quality to be intelligible. Please fix this.
Response 2: Necessary corrections were made in Figure 1.
Comments 3: -line 130: to be reproducible, the simulated dataset should be included as supplementary materials. Moreover, it is not clear whether a real person simulated the data or whether another real dataset acted as the source.
Response 3: The following explanations have been added to the relevant section.
No real patient data were used as a direct source, and no human subjects were involved in data generation.
Comments 4: -lines 130 and 140-146: the correct methodological way to conduct the analysis is to use SMOTE only on the train set and obviously not on the entire dataset, even if simulated. That is because class imbalance is definitely real, and the performance must be evaluated against reality. From the text this aspect is not clear; please give some assurance that this methodology has been followed.
Response 4: The following explanations have been added to the relevant section.
To ensure a realistic performance evaluation while preventing data leakage, all data preprocessing steps, including the application of the SMOTE (Synthetic Minority Oversampling Technique) algorithm to address class distribution imbalances, were applied only to the training set.
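The split-first, oversample-later workflow described above can be illustrated with a minimal numpy sketch. The interpolation below captures the core SMOTE idea (synthetic points placed between minority samples) but is not the imbalanced-learn implementation, and the toy data, class sizes, and random seed are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def smote_oversample(X_min: np.ndarray, n_new: int, rng) -> np.ndarray:
    """Minimal SMOTE-style interpolation: each synthetic point lies on the
    segment between one minority sample and another randomly chosen one.
    Sketch of the idea only, not the imbalanced-learn implementation."""
    idx = rng.integers(0, len(X_min), size=n_new)
    nbr = rng.integers(0, len(X_min), size=n_new)
    gap = rng.random((n_new, 1))  # interpolation factor in [0, 1)
    return X_min[idx] + gap * (X_min[nbr] - X_min[idx])

# Toy imbalanced data: 40 majority points, 4 minority points.
X_maj = rng.normal(0.0, 1.0, size=(40, 2))
X_min = rng.normal(3.0, 0.5, size=(4, 2))

# Split FIRST, then oversample only the training portion. The held-out set
# keeps its original, imbalanced distribution so evaluation reflects reality
# and no synthetic information leaks into the test data.
X_min_train, X_min_test = X_min[:3], X_min[3:]
X_synth = smote_oversample(X_min_train, n_new=37, rng=rng)
X_train = np.vstack([X_maj[:30], X_min_train, X_synth])
```

Note that if the minority training partition contained only one sample, every synthetic point would coincide with it, which is exactly the artifact risk discussed for the “low” class.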
Comments 5: -line 149: Figure 1 may not be a correct reference, please check this.
Response 5: The following explanations have been added to the relevant section.
In this study, a real-time data recording and analysis process was designed to determine the levels of a white, moving object on the spirometer device (as shown in the system flowchart, Figure 1).
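As an illustration of the level-reading step, the sketch below assumes that color thresholding (in the described pipeline, an HSV white-range segmentation) has already produced a binary mask, and reads the marker level as the topmost masked row. The mask dimensions and marker position are synthetic assumptions, not values from the study.

```python
import numpy as np

def marker_level(mask: np.ndarray) -> float:
    """Return the marker height as a fraction of the frame (1.0 = top).

    `mask` is a boolean image where True marks pixels classified as the
    white moving object after thresholding."""
    rows = np.flatnonzero(mask.any(axis=1))  # row indices containing any mask pixel
    if rows.size == 0:
        return 0.0  # no marker detected in this frame
    top_row = rows[0]  # image row 0 is the top of the frame
    return 1.0 - top_row / mask.shape[0]

# Synthetic 100-row frame with the white marker occupying rows 40-49.
mask = np.zeros((100, 20), dtype=bool)
mask[40:50, 5:15] = True

level = marker_level(mask)  # 1 - 40/100 = 0.6, i.e. 60% of the tube height
```

In a real frame the mask would come from something like OpenCV's `cv2.inRange` on an HSV-converted image, with morphological cleanup to suppress glare speckles before the row scan.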
Comments 6: -lines 265, 303, 340... these parts are redundant. My suggestion is to build a separate section titled "Metrics" in which to explain the performance metrics, which should be equal for all the algorithms.
Response 6:
The following explanations have been added to the relevant section.
2.10. Performance Metrics
To provide a comprehensive comparison of the performance of all machine learning models, the following metrics were used:
- Mean Squared Error (MSE): Measures the average of the squares of the errors between actual and predicted values.
- Root Mean Squared Error (RMSE): The square root of MSE, providing error in the same units as the target variable.
- Coefficient of Determination (R²): Indicates the proportion of the variance in the dependent variable that is predictable from the independent variables.
For the classification task ("low", "good", "perfect"), the models were evaluated as follows:
- Accuracy: The proportion of total correct predictions.
- Precision, Recall, and F1-Score: Metrics that provide a more nuanced view of performance, especially for imbalanced classes.
- Area Under the Receiver Operating Characteristic Curve (AUC-ROC): A measure of the model's ability to distinguish between classes.
A 10-fold cross-validation strategy was employed to assess model robustness and generalization performance.
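For reference, the regression and classification metrics listed above can be computed by hand on toy data; in the study these would presumably come from a library such as scikit-learn, and the values below are illustrative only.

```python
import numpy as np

# Toy regression targets and predictions (invented for illustration).
y_true = np.array([2.0, 3.0, 5.0, 7.0])
y_pred = np.array([2.5, 3.0, 4.5, 7.0])

# MSE: mean of squared errors; RMSE: same units as the target.
mse = float(np.mean((y_true - y_pred) ** 2))
rmse = float(np.sqrt(mse))

# R²: 1 minus residual sum of squares over total sum of squares.
ss_res = float(np.sum((y_true - y_pred) ** 2))
ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot

# Classification accuracy on toy labels.
labels_true = ["good", "perfect", "low", "good"]
labels_pred = ["good", "perfect", "good", "good"]
accuracy = sum(t == p for t, p in zip(labels_true, labels_pred)) / len(labels_true)
# mse = 0.125, accuracy = 0.75
```

Precision, recall, F1, and AUC-ROC follow the same pattern but are computed per class, which is why they are the more informative metrics for the imbalanced "low" class.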
Comments 7: -usually when dealing with ML algorithms, such a small dataset (250) is a matter of concern. I think this aspect deserves a deep discussion.
Response 7:
The following explanations have been added to the "Limitations" section of the Discussion section.
The relatively small size of the dataset (n = 250) is a known limitation in machine learning. While the use of simulated data allows for controlled scenario creation and initial proof-of-concept, a larger dataset would provide greater statistical power and increase the robustness and generalizability of the models to the full spectrum of real-world patient variability. This concern was addressed with several strategies: (a) using robust cross-validation to maximize the utility of the available data and obtain reliable performance estimates; (b) using ensemble methods such as Random Forest and XGBoost, which are known to perform well on smaller datasets by reducing overfitting through averaging or boosting; and (c) applying regularization techniques. However, future studies should prioritize the collection of larger, real-world clinical datasets to validate and further develop these models.
Author Response File:
Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors

The section on the use of AI in healthcare should be further developed and explained in the introduction, as it is currently unclear. In particular, it should specify what type of AI is being used (generative, etc.).
The training data are simulated, not from real patients, which may limit clinical validity; greater emphasis is recommended on the need for validation in real populations and clinical trials, as well as discussing possible biases.
The discussion on technical limitations of image processing is somewhat brief; it is suggested to expand the robustness section with respect to real-world conditions (low light, motion, diversity of devices).
Points to clarify or reinforce: How are data privacy and security aspects handled? The article mentions little about this, which is essential in digital health.
Providing more detail about the user interface to assess actual usability and clinical acceptance would be helpful.
Delve deeper into strategies for integration with existing clinical systems (EHR).
Improve the comparative discussion with similar works to clearly position the novel contribution.
Review issues related to figures, for example Figure 4.
This section should be explained better: “Instant feedback and automatic data logging can improve adherence and clinical follow-up, contributing to reducing postoperative pulmonary complications.” How exactly does it help improve adherence?
This study should be considered a pilot, and this should be made clear.
Author Response
Comments 1:
The section on the use of AI in healthcare should be further developed and explained in the introduction, as it is currently unclear. In particular, it should specify what type of AI is being used (generative, etc.).
Response 1:
The following explanations have been added to the relevant section.
This study specifically leverages machine learning (ML), a subset of AI that enables systems to learn and make predictions from data without being explicitly programmed for every rule. It is important to distinguish this from generative AI, which creates new content; our work focuses on discriminative and predictive AI for analysis and classification.
Comments 2:
The training data are simulated, not from real patients, which may limit clinical validity; greater emphasis is recommended on the need for validation in real populations and clinical trials, as well as discussing possible biases.
Response 2:
The following explanations have been added to the "Limitations" section of the Discussion section.
The use of a simulated dataset was clearly recognized as a fundamental limitation affecting the immediate clinical validity of the models. While simulation allows for controlled proof-of-concept development, it may not capture the full complexity, noise, and heterogeneity of real-world patient data. This can create spectrum bias, where the model performs well in simulated scenarios but fails to generalize to more challenging, real-world cases. Therefore, it was clearly stated that this study should be considered a pilot study or proof-of-concept. The next important step is external validation with a large, prospectively collected dataset of real-world patients across diverse demographics and clinical settings. Consequently, randomized controlled trials are necessary to definitively assess the system's impact on challenging clinical outcomes such as pulmonary complication rates, length of hospital stay, and nursing workload measures.
Comments 3:
The discussion on technical limitations of image processing is somewhat brief; it is suggested to expand the robustness section with respect to real-world conditions (low light, motion, diversity of devices).
Response 3:
The following explanations have been added to the "Limitations" section of the Discussion section.
- The performance of existing image processing algorithms can be sensitive to suboptimal real-world conditions. These include variable illumination (e.g., low light or glare), unwanted camera motion (motion blur), and the physical characteristics of different spirometer models or tablet cameras. While this study implemented techniques such as histogram equalization and robust HSV color-space thresholding, future iterations will utilize more advanced approaches. Integrating deep learning-based object detection models (e.g., YOLO, SSD) can significantly increase robustness to these variables by learning invariant features from a larger and more diverse training set of images. Furthermore, developing a standardized calibration protocol that accounts for different device models and environmental factors will be vital for widespread deployment [14].
Comments 4:
Points to clarify or reinforce: How are data privacy and security aspects handled? The article mentions little about this, which is essential in digital health.
Response 4:
The following explanations have been added to the "Limitations" section of the Discussion section.
- Data security and patient privacy are paramount in digital health solutions, but they are not the primary focus of this study. In the current prototype, data is stored locally on the device. However, a deployable system must incorporate robust security measures such as end-to-end encryption for both data at rest and in transit, compliance with regulations such as GDPR and HIPAA, and secure user authentication protocols. Any cloud integration for data synchronization with Electronic Health Records (EHRs) requires a rigorously secure API framework. Addressing these issues is an essential prerequisite for clinical implementation and will be the focus of future development.
Comments 5:
Providing more detail about the user interface to assess actual usability and clinical acceptance would be helpful.
Response 5:
The following explanations have been added to the relevant section.
Figure 4 illustrates the user interface of the developed AI-powered incentive spirometry application, which was designed with a focus on usability and clinical acceptance. The interface provides: (1) a large, real-time visual display of the spirometer level and target, offering clear visual feedback akin to a video game, which is known to improve patient engagement; (2) on-screen instructions and encouragement to guide the patient through the exercise correctly; (3) immediate post-session summary with performance classification ('good', 'excellent') and simple graphs, reinforcing positive behavior; and (4) a clean, intuitive layout with minimal buttons to reduce cognitive load, making it suitable for elderly or less tech-savvy patients. This design, which runs on a familiar tablet, is intended to lower the barrier to use for both patients in home settings and nurses in clinical wards, thereby facilitating higher adoption rates.
Comments 6:
Delve deeper into strategies for integration with existing clinical systems (EHR).
Response 6:
The following explanations have been added to the relevant section.
For seamless integration into clinical workflows, future versions of the system will need to interface with existing Electronic Health Record (EHR) systems. This could be achieved through the development of standardized data export protocols (e.g., generating HL7 FHIR resources) that summarize a patient's spirometry session, including performance classification, volume trends, and adherence metrics. This data could then be securely transmitted to the EHR, either via dedicated middleware or a secure API. Such integration would allow the spirometry data to become a part of the patient's official medical record, enabling physicians and nurses to track progress over time, identify at-risk patients efficiently, and make more informed clinical decisions without needing to access a separate application.
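A minimal sketch of such an export might build an HL7 FHIR Observation resource as JSON. The field choices, patient reference, session values, and free-text coding below are illustrative assumptions for a proof of concept, not a validated FHIR profile or codes from a terminology server.

```python
import json

def session_to_fhir(patient_id: str, volume_ml: int, performance: str) -> dict:
    """Summarize one spirometry session as a FHIR Observation resource.

    All identifiers and the free-text coding are placeholders; a production
    system would use proper LOINC/SNOMED codes and a validated profile."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": "Incentive spirometry session"},  # placeholder coding
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": volume_ml, "unit": "mL"},
        "note": [{"text": f"AI performance classification: {performance}"}],
    }

resource = session_to_fhir("example-123", volume_ml=1500, performance="good")
payload = json.dumps(resource)  # body for a POST to the EHR's FHIR endpoint
```

In practice this payload would be POSTed over TLS to the EHR's FHIR API (or handed to middleware), making each session part of the official record.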
Comments 7:
Improve the comparative discussion with similar works to clearly position the novel contribution.
Response 7:
The following explanations have been added to the relevant section.
Focusing on the existing literature (Table 1), the novel contribution of this study becomes clear. While previous studies have examined digital spirometry, smartphone-based analysis, or artificial intelligence for clinical decision support, the developed system is unique in its integration of three key elements: (1) a hardware-independent, cost-effective approach that utilizes a standard tablet camera, eliminating the need for specialized sensors; (2) a real-time, automated feedback loop that directly addresses nursing supervision burden, a critical gap in traditional care; and (3) a design that supports a continuum of care model, explicitly designed for both hospital and home settings. This combination aims not only to measure but also to actively enhance and scale the nursing care process for respiratory rehabilitation.
Comments 8:
Review issues related to figures, for example Figure 4.
Response 8:
Necessary corrections were made in Figure 4.
Comments 9:
This section should be explained better: “Instant feedback and automatic data logging can improve adherence and clinical follow-up, contributing to reducing postoperative pulmonary complications.” How exactly does it help improve adherence?
Response 9:
The following explanations have been added to the relevant section.
The system's immediate feedback and automatic data recording are hypothesized to improve adherence through several mechanisms. First, immediate visual feedback (e.g., reaching the target level on the screen) and positive reinforcement (e.g., a 'good performance' message) make exercise more engaging. Second, automatic monitoring reduces a significant barrier to implementation by removing the burden of manual recordkeeping from both patients and nurses. Patients can view their information graphically, which increases motivation. For clinicians, objective data allows them to easily monitor a patient's adherence and progress remotely, enabling them to implement timely interventions if adherence declines.
Comments 10:
This study should be considered a pilot, and this should be made clear.
Response 10:
The following explanations have been added to the relevant section.
This pilot study demonstrates the technical feasibility and potential of this AI-driven approach, laying the groundwork for future clinical validation.
In conclusion, this study successfully developed and tested a proof-of-concept for an AI-assisted incentive spirometry system. The findings from this study are promising.
Author Response File:
Author Response.pdf
Reviewer 4 Report
Comments and Suggestions for Authors

Thank you for the well-written manuscript. However, the claims need to be toned down significantly. The main issue is with the data used to reach the conclusions. The study's technical innovation in image processing has merit, but requires validation against real clinical data and established spirometry standards before any meaningful conclusions about its clinical impact can be drawn. You have asserted that the system "reduces the time nurses spend at the bedside" and that it is expected to increase motivation and exercise compliance, without providing quantitative evidence from real clinical settings measuring actual time savings, workflow improvements, or patient adherence rates. As for the Methods section, there is only one patient (0.4%) classified as "low" performance out of the 250 total sample, which represents a critical statistical flaw that fundamentally compromises the model reliability. Another problem is the exclusive reliance on simulated patient data for training and validating machine learning models that are intended for clinical use. The perfect 100% accuracy achieved by the current method suggests these models are learning deterministic patterns in the simulation rather than clinically meaningful relationships. My suggestion is to start by revising the title to accurately reflect the prototype nature and simulation-based validation, such as "Development of an AI-Assisted Incentive Spirometry Prototype: Simulation-Based Feasibility Study." Then include the methodology-related limitations in the discussion.
Author Response
Comments 1:
Thank you for the well-written manuscript. However, the claims need to be toned down significantly. The main issue is with the data used to reach the conclusions. The study's technical innovation in image processing has merit, but requires validation against real clinical data and established spirometry standards before any meaningful conclusions about its clinical impact can be drawn.
Response 1:
The explanations have been added to the relevant section.
Comments 2:
You have asserted that the system "reduces the time nurses spend at the bedside" and that it is expected to increase motivation and exercise compliance, without providing quantitative evidence from real clinical settings measuring actual time savings, workflow improvements, or patient adherence rates.
Response 2:
The following explanations have been added to the relevant section.
All assertions regarding the system's impact on nursing workload, workflow efficiency, and patient adherence are hypothetical and based on the system's designed functionality, not on empirical evidence. This study did not measure actual time savings for nurses or conduct long-term adherence trials with patients. These claimed benefits remain unproven and constitute key objectives for future validation studies and randomized controlled trials.
Comments 3:
As for the methods section, there is only one patient (0.4%) classified as "low" performance out of the 250 total sample, which represents a critical statistical flaw that fundamentally compromises the model reliability. Another problem is the exclusive reliance on simulated patient data for training and validating machine learning models that are intended for clinical use.
Response 3:
The following explanations have been added to the relevant section.
The original class distribution exhibited significant imbalance, with the “low” performance class containing only a single sample (Table 2). Such an imbalance can bias machine learning models toward the majority classes (“nice” and “perfect”) and hinder generalization to classes that are underrepresented in real-world settings. To mitigate this, the SMOTE algorithm was used to synthetically oversample the minority classes. The inherent limitations of this approach, particularly for the “low” class, were acknowledged: generating many synthetic samples from a single data point introduces the risk of creating artifacts that do not accurately reflect the true variance within that class. Consequently, while SMOTE improves the model's ability to recognize the possibility of a “low” performance classification, its generalizability for this class across a diverse clinical population remains to be confirmed through future validation with a larger, real-world patient dataset. The primary goal of data augmentation was to prevent the model from completely ignoring these classes, rather than to perfectly model the complex features of the minority classes.
Comments 4:
The perfect 100% accuracy achieved by the current method suggests these models are learning deterministic patterns in the simulation rather than capturing clinically meaningful relationships.
Response 4:
The following explanations have been added to the relevant section.
The primary limitation of this study is its reliance on simulated patient data. The perfect (100%) classification accuracy achieved by the models strongly suggests that they learned deterministic rules and patterns embedded in the simulation, rather than capturing the complex, noisy, and non-deterministic relationships found in real physiological data. Therefore, these results should be interpreted as an indication of technical feasibility on a controlled dataset, not as evidence of clinical efficacy. Such performance does not guarantee similar results in real patients, underscoring that this study is a proof of concept.
Comments 5:
My suggestion is to start revising the title to accurately reflect the prototype nature and simulation-based validation, such as "Development of an AI-Assisted Incentive Spirometry Prototype: Simulation-Based Feasibility Study." Then include the methodology related limitations in the discussion.
Response 5:
The following explanations have been added to the relevant section.
In conclusion, this study successfully developed a prototype and demonstrated its technical feasibility in a simulated environment. The findings suggest that an AI-assisted approach to incentive spirometry is a promising avenue for future research. However, it is crucial to emphasize that this work represents a pilot study. The limitations of simulated data and class imbalance preclude any definitive conclusions about clinical impact. The essential next steps include validating the system against established spirometry standards, conducting studies with real patient populations to assess clinical validity, and performing randomized controlled trials to measure its true effect on nursing workload, patient adherence, and, ultimately, clinical outcomes such as postoperative pulmonary complications.
Author Response File:
Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
I congratulate the authors for carefully considering all the feedback mentioned in the previous peer review and improving the work significantly. In the revised version, the original aspects and contributions of the study were clearly highlighted, and comparisons with the literature, especially Tables 1 and 7, were made clear and understandable. Explanations regarding the use of simulated data are sufficient in terms of methodological transparency. Additionally, discussion of the limitations of SMOTE application and its possible effects on generalizability increased the scientific depth of the study. The hardware information and explanation of the multi-threaded architecture also concretized the feasibility of the system. The scientific aspect of the revised version has increased significantly and is at a level that can make a significant contribution to the literature.
Author Response
Comments 1:
I congratulate the authors for carefully considering all the feedback mentioned in the previous peer review and improving the work significantly. In the revised version, the original aspects and contributions of the study were clearly highlighted, and comparisons with the literature, especially Tables 1 and 7, were made clear and understandable. Explanations regarding the use of simulated data are sufficient in terms of methodological transparency. Additionally, discussion of the limitations of SMOTE application and its possible effects on generalizability increased the scientific depth of the study. The hardware information and explanation of the multi-threaded architecture also concretized the feasibility of the system. The scientific aspect of the revised version has increased significantly and is at a level that can make a significant contribution to the literature.
Response 1:
Thank you for your support throughout the process.
Author Response File:
Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
Dear Authors,
After the amendments, I now consider the manuscript of sufficient quality to be published.
Kind Regards
Author Response
Comments 1:
Dear Authors,
After the amendments, I now consider the manuscript of sufficient quality to be published.
Kind Regards
Response 1:
Thank you for your support throughout the process.
Author Response File:
Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
It would be helpful to specify which algorithms were used, or at least include a sentence such as “the employed models were X, Y.” This information should appear in both the Introduction and Methods sections, and the corresponding sources should be cited in the references.
Please consider adding concise, concrete validation plans—for instance, the intended sample size, variables to be collected, and strategies to mitigate bias (e.g., demographic stratification, multicenter data acquisition). If these details cannot yet be provided, it would be advisable to indicate this explicitly.
More information should also be included regarding whether ethical review or approval from an ethics committee was obtained for data collection (even if simulated). It would be important to clarify whether the simulated data are linked to real identifiers, whether an anonymization process was applied, and whether local data security is ensured. At least one specific statement could be included in the Methods or Limitations section, such as: “The study received ethical approval X (IRB no. …) / No real data were processed / The data were anonymized / Data were stored using AES-256 encryption (if applicable).”
Author Response
Comments 1:
It would be helpful to specify which algorithms were used, or at least include a sentence such as “the employed models were X, Y.” This information should appear in both the Introduction and Methods sections, and the corresponding sources should be cited in the references.
Please consider adding concise, concrete validation plans—for instance, the intended sample size, variables to be collected, and strategies to mitigate bias (e.g., demographic stratification, multicenter data acquisition). If these details cannot yet be provided, it would be advisable to indicate this explicitly.
More information should also be included regarding whether ethical review or approval from an ethics committee was obtained for data collection (even if simulated). It would be important to clarify whether the simulated data are linked to real identifiers, whether an anonymization process was applied, and whether local data security is ensured. At least one specific statement could be included in the Methods or Limitations section, such as: “The study received ethical approval X (IRB no. …) / No real data were processed / The data were anonymized / Data were stored using AES-256 encryption (if applicable).”
Response 1:
Information about the algorithms used is provided in the article. The simulated data are not linked to real patients.
Author Response File:
Author Response.pdf
Reviewer 4 Report
Comments and Suggestions for Authors
Thank you for adding the needed explanation. It is now presented much more clearly.
The title change could also help. I had suggested this title: “Development of an AI-Assisted Incentive Spirometry Prototype: Simulation-Based Feasibility Study.”
Author Response
Comments 1:
Thank you for adding the needed explanation. It is now presented much more clearly.
The title change could also help. I had suggested this title: “Development of an AI-Assisted Incentive Spirometry Prototype: Simulation-Based Feasibility Study.”
Response 1:
Thank you for your support throughout the process.
Author Response File:
Author Response.pdf