Article
Peer-Review Record

Privacy-Preserving Federated IoT Architecture for Early Stroke Risk Prediction

Electronics 2026, 15(1), 32; https://doi.org/10.3390/electronics15010032
by Md. Wahidur Rahman 1,2,*, Mais Nijim 1, Md. Habibur Rahman 1, Kaniz Roksana 2, Talha Bin Abdul Hai 3, Md. Tarequl Islam 3 and Hisham Albataineh 4
Reviewer 1:
Reviewer 3: Anonymous
Submission received: 22 November 2025 / Revised: 17 December 2025 / Accepted: 19 December 2025 / Published: 22 December 2025
(This article belongs to the Section Artificial Intelligence)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

 

  1. Could the authors explain the limitations of using this dataset and how its synthetic or self-reported nature affects the reliability and clinical relevance of the results?
  2. Several models—particularly PSO-optimized XGBoost and PCA/LDA-enhanced SVM—achieve accuracy values above 98–99%. Could the authors clarify what steps were taken to rule out overfitting, data leakage, or overly simplistic labeling patterns in the dataset?

  3. The federated learning evaluation appears to be simulated locally using stratified splits. Could the authors elaborate on how this setup reflects real federated systems, which typically involve heterogeneous, non-IID data, device constraints, and communication delays?

  4. The feature selection and optimization pipeline integrates RFC, XTC, mRMR, PSO, PCA, and LDA. What is the rationale behind combining so many methods, and did the authors conduct any ablation study to determine which steps meaningfully contribute to performance?

  5. The IoT system for temperature asymmetry and SpO₂ is described separately from the FL stroke-risk model. Could the authors clarify how these two components are integrated in practice? Are the physiological IoT signals used in model training, or are they only used for post-diagnostic alerts?

  6. The manuscript uses thresholds such as ≥3 °C temperature asymmetry and ≤90% SpO₂ for early stroke detection. Could the authors provide medical references validating these thresholds for stroke risk in home-monitoring environments?
  7. The implementation details for several components—such as communication protocols between nodes in FL, hardware calibration for sensors, and the data-streaming workflow—are not fully described. Could the authors provide more information to enable reproducibility?

  8. The manuscript briefly mentions limitations in the final section, but a deeper discussion would strengthen the paper. Could the authors elaborate on real-world challenges such as:

  • sensor noise and calibration drift,
  • heterogeneous hospital data distributions,
  • patient privacy constraints,
  • deployment costs, and
  • the need for clinical validation studies?

Author Response

Response to the Reviewer 01

The authors are very grateful to the reviewer for the valuable insights. Our responses are as follows:

Comment #1: Could the authors explain the limitations of using this dataset and how its synthetic or self-reported nature affects the reliability and clinical relevance of the results?

Response #1: Thank you for this important comment. We agree that the dataset characteristics affect the reliability and clinical relevance of the results. The dataset used in this study is based on self-reported symptom/risk-factor entries and an associated risk label, which may not fully represent real clinical workflows. In addition, the label (stroke risk / at-risk) may be derived from predefined scoring rules rather than confirmed medical diagnosis, which can inflate performance and limit generalization. To address this, we have added an explicit limitations paragraph in the Conclusion.

Action: Added a brief limitations paragraph in the Conclusion.

Comment #2: Several models—particularly PSO-optimized XGBoost and PCA/LDA-enhanced SVM—achieve accuracy values above 98–99%. Could the authors clarify what steps were taken to rule out overfitting, data leakage, or overly simplistic labeling patterns in the dataset?

Response #2: Thank you for this important comment. We reduced overfitting by using a strict train/test split, cross-validation on the training set, and early stopping/regularization (plus limiting feature/model complexity). We also checked for data leakage by ensuring all preprocessing was fit on training data only and by excluding any label-derived fields (e.g., direct risk-score/threshold features) that could make the label trivial.

Action: No.
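As an illustrative sketch of the leakage-safe protocol described above (not the paper's exact implementation), placing the preprocessing inside a scikit-learn Pipeline guarantees that it is re-fit on the training portion of every fold; the data below is a synthetic stand-in for the stroke dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the symptom/risk-factor features and binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Strict train/test split; the test fold is never seen during fitting.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# The scaler inside the Pipeline is re-fit on the training part of every
# CV fold, so no test-fold statistics leak into preprocessing.
model = Pipeline([("scale", StandardScaler()),
                  ("clf", LogisticRegression(max_iter=1000))])
cv_scores = cross_val_score(model, X_tr, y_tr, cv=5)

model.fit(X_tr, y_tr)
test_acc = model.score(X_te, y_te)  # the held-out set is touched exactly once
```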

Comment #3: The federated learning evaluation appears to be simulated locally using stratified splits. Could the authors elaborate on how this setup reflects real federated systems, which typically involve heterogeneous, non-IID data, device constraints, and communication delays?

Response #3: We thank the reviewer. Our FL setup is a local simulation using stratified client splits, so it does not fully capture real FL issues such as non-IID (heterogeneous) clients, device constraints, and communication delays. We will clearly state this as a limitation and add extra experiments with non-IID client partitions plus basic communication/round cost reporting to better reflect real deployments.

Action: Yes, we have modified Section 3.1 and added a limitation in the Conclusion.
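As a hedged sketch of the planned non-IID experiments, one common way to create heterogeneous client partitions is a Dirichlet split over class labels; the client count, alpha, and class balance below are illustrative, not the paper's settings:

```python
import numpy as np

def dirichlet_partition(labels, n_clients=10, alpha=0.5, seed=0):
    """Split sample indices across clients with per-class proportions drawn
    from a Dirichlet(alpha) prior; smaller alpha -> more heterogeneous clients."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))  # share per client
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return [np.array(c) for c in clients]

# Imbalanced labels, loosely mimicking a rare at-risk class.
labels = np.array([0] * 900 + [1] * 100)
parts = dirichlet_partition(labels, n_clients=10, alpha=0.3)
```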

Comment #4: The feature selection and optimization pipeline integrates RFC, XTC, mRMR, PSO, PCA, and LDA. What is the rationale behind combining so many methods, and did the authors conduct any ablation study to determine which steps meaningfully contribute to performance?

Response #4: Thank you for these valuable remarks. We included RFC, XTC, mRMR, PSO, PCA, and LDA as alternative feature-optimization options, since each targets a different type of feature redundancy/noise. In the paper we did not stack all methods together; instead, we ran a comparative ablation-style study by pairing each technique with the same six classifiers and then reported the best model per technique along with its complexity.

Action: No
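The pairing scheme described above can be sketched as a grid over (technique, classifier) combinations; here SelectFromModel importance filters stand in for the tree-based selectors, the data is synthetic, and only two classifiers are shown for brevity:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# One entry per feature-optimization option; each is paired with every
# classifier, never stacked together.
optimizers = {
    "RFC-importance": SelectFromModel(RandomForestClassifier(random_state=0)),
    "XTC-importance": SelectFromModel(ExtraTreesClassifier(random_state=0)),
    "PCA": PCA(n_components=5),
    "LDA": LinearDiscriminantAnalysis(n_components=1),
}
classifiers = {"SVM": SVC(), "KNN": KNeighborsClassifier()}

results = {}
for opt_name, opt in optimizers.items():
    for clf_name, clf in classifiers.items():
        pipe = Pipeline([("opt", opt), ("clf", clf)])
        results[(opt_name, clf_name)] = cross_val_score(pipe, X, y, cv=3).mean()

best = max(results, key=results.get)  # best (technique, classifier) pair
```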

Comment #5: The IoT system for temperature asymmetry and SpO₂ is described separately from the FL stroke-risk model. Could the authors clarify how these two components are integrated in practice? (Figure 4) Are the physiological IoT signals used in model training, or are they only used for post-diagnostic alerts (Figure 2)?

Response #5: We thank the reviewer for highlighting this critical aspect of system integration. We acknowledge that the initial description appeared to separate the static risk prediction from the real-time monitoring. In our proposed approach, the static dataset (Kaggle) is augmented with synthetic physiological features corresponding to the IoT sensors: temperature asymmetry and oxygen saturation (SpO₂). As the original Kaggle dataset lacks these specific real-time telemetry markers, we generate simulated values for these two features based on probabilistic distributions found in the clinical literature (e.g., assigning higher temperature-asymmetry probabilities to 'Stroke'-labeled samples).

This process creates a unified training dataset in which each patient record consists of both static clinical history (from Kaggle) and a simulated physiological snapshot (representing IoT data, conceptual for each client). As a result, the FL model is trained in this combined feature space. In the practical deployment (Figure 4), the real-time values from the IoT sensors (MAX30205 and O2Ring) are fed directly into this unified model as input features, allowing the model to predict risk based on the immediate physiological state combined with the patient's history.

Action: We have updated the methodology section in subsection 2.2.1 and Figure 2 to reflect this integrated training workflow.
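The augmentation described above can be sketched as follows; the distribution parameters, prevalence, and feature counts here are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
at_risk = rng.random(n) < 0.2  # stand-in for the binary AtRisk label

# Simulated IoT features: at-risk records draw from distributions with a
# higher chance of >= 3 degC asymmetry and <= 90% SpO2 (assumed parameters).
temp_asym = np.where(at_risk,
                     rng.normal(3.2, 1.0, n),   # degC, skewed high when at risk
                     rng.normal(0.8, 0.5, n))
spo2 = np.where(at_risk,
                rng.normal(90.0, 3.0, n),
                rng.normal(97.0, 1.5, n)).clip(70, 100)

# Unified feature space: static record columns + simulated physiological
# snapshot, so the FL model trains on both at once.
static_features = rng.normal(size=(n, 8))  # placeholder clinical history
X_unified = np.column_stack([static_features, temp_asym, spo2])
```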

Comment #6: The manuscript uses thresholds such as ≥3 °C temperature asymmetry and ≤90% SpO₂ for early stroke detection. Could the authors provide medical references validating these thresholds for stroke risk in home-monitoring environments?

Response #6: We appreciate the reviewer’s suggestion to substantiate our physiological thresholds with clinical evidence. In the revised manuscript, we have incorporated medical literature to justify the selection of ≥3 °C for temperature asymmetry and ≤90% for SpO₂ as critical indicators for the proposed home-monitoring system. We have addressed it with the following references:

  • 1: Association of Early Oxygenation Levels with Mortality in Acute Ischemic Stroke – A Retrospective Cohort Study [Link: https://www.sciencedirect.com/science/article/abs/pii/S1052305719306500]
  • 2: Thermography analysis as a tool for assessing thermal asymmetries and temperature changes after therapy in patients with stroke: a pilot study [Link: https://pubmed.ncbi.nlm.nih.gov/40895052/]
  • 3: Evaluation of body temperature in individuals with stroke [Link: https://journals.sagepub.com/doi/abs/10.3233/NRE-161397]
  • 4: Body temperature and esthesia in individuals with stroke [Link: https://www.nature.com/articles/s41598-021-89543-3]

Action: We have cited these in Section 2.1.4.

Comment #7: The implementation details for several components—such as communication protocols between nodes in FL, hardware calibration for sensors, and the data-streaming workflow—are not fully described. Could the authors provide more information to enable reproducibility?

Response #7: Thank you for this valuable comment. We agree that the current version does not provide enough implementation detail for full reproducibility. In the revised manuscript, we added a dedicated “Implementation and Reproducibility Details” subsection that explains (i) the communication flow used in our FL simulation (client–server rounds, model update exchange, and aggregation), (ii) the IoT communication stack (BLE/Bluetooth to the smartphone and HTTPS to the cloud), (iii) sensor calibration/validation steps for temperature and SpO₂, and (iv) the data-streaming workflow from the sensing layer onward.

Action: Yes. We have added a new subsection 2.2.1 addressing the reviewer's concern and added a limitation to the Conclusion.

Comment #8: The manuscript briefly mentions limitations in the final section, but a deeper discussion would strengthen the paper. Could the authors elaborate on real-world challenges such as:

  • sensor noise and calibration drift,
  • heterogeneous hospital data distributions,
  • patient privacy constraints,
  • deployment costs, and
  • the need for clinical validation studies?

Response #8: Thank you for the suggestion. We agree that the limitations section should better reflect real-world deployment issues. In the revised manuscript, we expanded the Limitations and Future Work discussion.

Action: Added the limitations as per the reviewer’s valuable concerns in the conclusion.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Overall, the manuscript needs more work and adjustment towards its novelty; however, you have mentioned a few of the key contributions of the research work in conducting STROKE management in healthcare innovation. To enhance its impact, consider elaborating on these contributions and providing clearer examples of how they address current gaps in the field. Additionally, incorporating feedback from peers could further refine the manuscript’s focus and clarity. Some wearable devices, including data from IoT, smartwatches, and other key EKGs and EHRs from patients, could be potential data sources that can support your novel model.

You have employed the federated learning (FL), machine learning (ML), and Internet of Things (IoT) approaches, which are combined in this paper to predict and detect potential brain strokes early on while maintaining privacy in real-time application. Moreover, in your research, you have utilized distributed stroke-related data, and the work employs feature optimization techniques (including feature importance, selection, and reduction) to identify the most relevant critical elements while preserving local raw patient data. This research appears to lack novelty. Please include some output results to illustrate the actual outcomes for patients facing challenges in stroke prediction compared to those in prevention cases.

You have also added a few ML methods, including the Extra Trees Classifier and Random Forest Classifier (RFC) and (XTC), where locally trained models are trained in a federated learning pipeline to determine the number of patients who are having a stroke or symptoms of acute cases. These models have shown promising accuracy in identifying high-risk patients, allowing for timely interventions. By leveraging diverse data sources and patient histories, healthcare providers can personalize prevention strategies and improve overall patient outcomes. Please add a few more citations and reduce the similarity checks to 10%. Please specify more accuracy in the output section to support your research. Figure-6, the visualization of the three best-performing federated learning pipelines needs to be clear and picture quality should be readable. In the section of conclusion and future research studies, please add more datasets as well as clinical specimens to support the study.



Author Response

Response to the Reviewer 02

Comment #1: Overall, the manuscript needs more work and adjustment towards its novelty; however, you have mentioned a few of the key contributions of the research work in conducting STROKE management in healthcare innovation. To enhance its impact, consider elaborating on these contributions and providing clearer examples of how they address current gaps in the field. Additionally, incorporating feedback from peers could further refine the manuscript’s focus and clarity. Some wearable devices, including data from IoT, smartwatches, and other key EKGs and EHRs from patients, could be potential data sources that can support your novel model.

Response #1: Thank you for the constructive feedback. We agree that the novelty and practical contribution need to be stated more clearly. In the revised manuscript, we expanded the “Contributions/Novelty” part in the Introduction and added concrete examples showing how our IoT + FL workflow addresses gaps in stroke-risk monitoring (privacy-aware learning, remote monitoring, and scalable deployment). We also revised the text for clarity based on internal peer feedback. Finally, we added a Future Work paragraph discussing additional real-world data sources such as wearables and EHRs, as suggested.

Action: Yes, we strongly agree with the reviewer and have modified Section 1 (Introduction) and the Conclusion (Section 4).

Comment #2: You have employed the federated learning (FL), machine learning (ML), and Internet of Things (IoT) approaches, which are combined in this paper to predict and detect potential brain strokes early on while maintaining privacy in real-time application. Moreover, in your research, you have utilized distributed stroke-related data, and the work employs feature optimization techniques (including feature importance, selection, and reduction) to identify the most relevant critical elements while preserving local raw patient data. This research appears to lack novelty. Please include some output results to illustrate the actual outcomes for patients facing challenges in stroke prediction compared to those in prevention cases.

Response #2: Thank you for the detailed feedback. We respectfully clarify that our novelty is not in using FL/ML/IoT separately but in integrating them into a single end-to-end privacy-aware stroke monitoring workflow (training → global model → IoT-side risk inference → alerting) and in systematically evaluating feature optimization within the FL setting. In the revised manuscript, we strengthened the novelty statement by clearly listing what is new compared to prior FL-only or IoT-only stroke works and by adding a brief comparison paragraph in the Introduction/Related Work. Also, we did not include a clinical evaluation of patient outcomes because ethical clearance is still in progress; once it is obtained, we will address these issues in future work.

Action: Yes, we have added a paragraph at the end of the Introduction section.

Comment #3: You have also added a few ML methods, including the Extra Trees Classifier and Random Forest Classifier (RFC) and (XTC), where locally trained models are trained in a federated learning pipeline to determine the number of patients who are having a stroke or symptoms of acute cases. These models have shown promising accuracy in identifying high-risk patients, allowing for timely interventions. By leveraging diverse data sources and patient histories, healthcare providers can personalize prevention strategies and improve overall patient outcomes. Please add a few more citations and reduce the similarity checks to 10%. Please specify more accuracy in the output section to support your research. Figure-6, the visualization of the three best-performing federated learning pipelines needs to be clear and picture quality should be readable. In the section of conclusion and future research studies, please add more datasets as well as clinical specimens to support the study.

Response #3: Thank you for your valuable insights. We agree with your remarks. We have focused our reported outcomes on model performance rather than clinical practice. Following your concern, we have added new citations to reduce the similarity score and regenerated Figure 6 directly from the code at higher quality. We have also updated the Conclusion with limitations and future work.

Action: Yes, we have updated the Conclusion and Figure 6, and added a few citations to the References.

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This paper presents a privacy-preserving, federated learning–enabled IoT architecture for early stroke risk prediction, combining feature-optimized machine learning on a tabular stroke dataset with a wearable monitoring system that measures limb temperature and blood oxygen saturation to raise alerts based on a discrete “RiskCode.” The core ML pipeline uses multiple feature optimization strategies (RFC/XTC feature importance, mRMR, PSO, PCA, LDA) and several classifiers (SVM, KNN, RF, DT, NB, XGB), while the federated setup conceptually distributes training over 10 clients whose model outputs are aggregated via a FedAvg-like procedure. An IoT prototype architecture is described using MAX30205 temperature sensors, an O2Ring pulse oximeter, and an ESP32 microcontroller, and the best reported configuration (PSO + XGB under “global FL”) achieves very high test performance (99.44% accuracy and 99.57% F1) on a Kaggle stroke-risk dataset. 


However, the paper in its current form leaves several critical aspects of the methodology and its claimed privacy-preserving properties insufficiently specified and validated. On the federated side, the FL setup is simulated by partitioning a single public dataset into 10 folds with StratifiedKFold and then averaging clients’ predicted probabilities at the server, rather than aggregating model parameters or gradients as in standard FL; this raises questions about whether the method is truly an FL system or simply an ensemble over stratified splits. Important details such as client heterogeneity, communication rounds, local epochs, learning dynamics, and privacy threats (e.g., membership inference or model inversion) are not addressed; the term “privacy-preserving” is asserted but not rigorously analyzed or demonstrated experimentally. The feature optimization block is quite complex (stacking RFC/XTC, mRMR, PSO, PCA, LDA), yet there is no clear ablation study to show which components are necessary, nor is there a strong justification for aggregating local feature scores with the same weighting as FedAvg.


The experimental evaluation also needs to be significantly strengthened. All performance results come from a single Kaggle dataset; there is no comparison to a purely centralized baseline trained on the full dataset, to existing stroke-risk prediction methods on the same data, or to alternative FL schemes (e.g., different numbers of clients, non-IID splits, or realistic client imbalances). Reported metrics are almost unrealistically high (near-perfect accuracy and MCC for several configurations), but only a single train–test split appears to be used; repeated runs with cross-validation, confidence intervals, and careful leakage checks are required to rule out overfitting or inadvertent test contamination. The IoT monitoring system is described in detail at the hardware level, but its evaluation is limited to a very small number of hand-crafted scenarios in Table 8, without on-device latency, battery consumption, reliability under noise, or experiments with real patients. The claimed integration between the federated model and the embedded system is largely conceptual, and it is not clear whether the global model has actually been deployed and run on resource-constrained hardware.

In addition, the current manuscript lacks discussion about other wireless sensing solutions for healthcare monitoring. Please add discussions about the following studies that use other energy-efficient methods for detecting health status of human beings.

Zhang, D., Zhang, X., Xie, Y., Zhang, F., Yang, H. and Zhang, D., 2024. From single-point to multi-point reflection modeling: Robust vital signs monitoring via mmwave sensing. IEEE Transactions on Mobile Computing.

Ni, T., Sun, Z., Han, M., Xie, Y., Lan, G., Li, Z., Gu, T. and Xu, W., 2024, October. Rehsense: Towards battery-free wireless sensing via radio frequency energy harvesting. In Proceedings of the Twenty-Fifth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing (pp. 211-220).

Finally, the manuscript would benefit from substantial editing for clarity, consistency, and structure. There are inconsistencies in the description of the dataset (e.g., different numbers and types of features, a confusion between symptom-based attributes and demographic variables like gender and work_type), occasional notation issues in the FL and feature-optimization formulas, and some repetition in the Introduction and Related Work sections. The contributions list partially restates standard FL properties without clearly delineating what is novel compared to existing FL-based healthcare frameworks or stroke-monitoring systems.

Author Response

Response to the Reviewer 03

Comment #1: This paper presents a privacy-preserving, federated learning–enabled IoT architecture for early stroke risk prediction, combining feature-optimized machine learning on a tabular stroke dataset with a wearable monitoring system that measures limb temperature and blood oxygen saturation to raise alerts based on a discrete “RiskCode.” The core ML pipeline uses multiple feature optimization strategies (RFC/XTC feature importance, mRMR, PSO, PCA, LDA) and several classifiers (SVM, KNN, RF, DT, NB, XGB), while the federated setup conceptually distributes training over 10 clients whose model outputs are aggregated via a FedAvg-like procedure. An IoT prototype architecture is described using MAX30205 temperature sensors, an O2Ring pulse oximeter, and an ESP32 microcontroller, and the best reported configuration (PSO + XGB under “global FL”) achieves very high test performance (99.44% accuracy and 99.57% F1) on a Kaggle stroke-risk dataset.

Response #1: Thank you for the careful summary of our work.

Action: No.

Comment #2: However, the paper in its current form leaves several critical aspects of the methodology and its claimed privacy-preserving properties insufficiently specified and validated. On the federated side, the FL setup is simulated by partitioning a single public dataset into 10 folds with StratifiedKFold and then averaging clients’ predicted probabilities at the server, rather than aggregating model parameters or gradients as in standard FL (Figure 1); this raises questions about whether the method is truly an FL system or simply an ensemble over stratified splits. Important details such as client heterogeneity, communication rounds, local epochs, learning dynamics, and privacy threats (e.g., membership inference or model inversion) are not addressed; the term “privacy-preserving” is asserted but not rigorously analyzed or demonstrated experimentally. The feature optimization block is quite complex (stacking RFC/XTC, mRMR, PSO, PCA, LDA), yet there is no clear ablation study to show which components are necessary, nor is there a strong justification for aggregating local feature scores with the same weighting as FedAvg.

Response #2: Thank you for the detailed feedback. We agree with the reviewer and have revised the paper accordingly. In the revised manuscript, we clarify that the FL study is a controlled local simulation on a public dataset, with clients created via stratified splitting to preserve class balance. Importantly, we do not average predicted probabilities; instead, training follows standard parameter-level FedAvg: at each round, the server broadcasts the current global model, clients train locally for a fixed number of epochs using gradient-based optimization, and the server aggregates the client parameters using Eq. (5) to obtain the next global model. We also report the full FL settings (rounds, local epochs, client fraction, learning rate, batch size, and optimizer) and summarize the procedure in Algorithm 3. We explicitly note that real-world FL effects (non-IID heterogeneity, dropouts, and communication delays) are not fully modeled in this local simulation and are planned for future evaluation.

Regarding privacy, we revised the wording to avoid over-claiming: the current contribution emphasizes data locality (no raw data sharing), while formal privacy defenses and attack-based validation (e.g., membership inference/model inversion) are identified as future extensions. Finally, we strengthened the feature-optimization section by adding an ablation-style justification (base → +mRMR → +PSO → +PCA/LDA → full) and by clarifying the rationale for aggregating client-level feature scores (including sample-size weighting and robust alternatives such as rank/median aggregation).

Action: We modified Section 2.2, Algorithm 3, and Table 2. We have also updated the description in Section 3.1 following these valuable insights.
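The parameter-level aggregation described above (weighting each client by its sample count, as in Eq. (5)) can be sketched in a few lines; the client parameter shapes and sample sizes below are hypothetical:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Parameter-level FedAvg: weighted average of per-client parameter
    lists (one array per layer), with client k weighted by n_k / n."""
    n = float(sum(client_sizes))
    n_layers = len(client_params[0])
    return [sum((nk / n) * params[layer]
                for nk, params in zip(client_sizes, client_params))
            for layer in range(n_layers)]

# Three hypothetical clients, each holding a weight vector and a bias.
clients = [
    [np.array([1.0, 2.0]), np.array([0.0])],
    [np.array([3.0, 4.0]), np.array([1.0])],
    [np.array([5.0, 6.0]), np.array([2.0])],
]
sizes = [100, 100, 200]                 # -> weights 0.25, 0.25, 0.5
global_params = fedavg(clients, sizes)  # [array([3.5, 4.5]), array([1.25])]
```

In a real round, the server would broadcast `global_params` back to the clients before the next local-training phase.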

Comment #3: The experimental evaluation also needs to be significantly strengthened. All performance results come from a single Kaggle dataset; there is no comparison to a purely centralized baseline trained on the full dataset, to existing stroke-risk prediction methods on the same data, or to alternative FL schemes (e.g., different numbers of clients, non-IID splits, or realistic client imbalances). Reported metrics are almost unrealistically high (near-perfect accuracy and MCC for several configurations), but only a single train–test split appears to be used; repeated runs with cross-validation, confidence intervals, and careful leakage checks are required to rule out overfitting or inadvertent test contamination. The IoT monitoring system is described in detail at the hardware level, but its evaluation is limited to a very small number of hand-crafted scenarios in Table 8, without on-device latency, battery consumption, reliability under noise, or experiments with real patients. The claimed integration between the federated model and the embedded system is largely conceptual, and it is not clear whether the global model has actually been deployed and run on resource-constrained hardware.

Response #3: Thank you for the detailed evaluation comments. In the revised version, we clarify that our FL experiments are conducted as a controlled local simulation on a single public Kaggle dataset, and we updated the FL procedure to follow standard parameter-level FedAvg across communication rounds, with the round-based protocol defined in Eq. (5) and Algorithm 3 (broadcast of the global model, local training for a fixed number of epochs, and weighted aggregation into the next global model). We also report the key FL hyperparameters in Table 2 (rounds, local epochs, client fraction, learning rate, batch size, and the Adam optimizer) and explicitly note that real communication effects (delay/dropouts) are not modeled in this simulation; non-IID and communication-aware settings are planned as future work. To address the concern about near-perfect scores, we emphasize strict train–test separation during preprocessing and note that cross-validation is used within the PSO cost evaluation, while acknowledging that additional repeated runs and broader validation are important extensions. For the IoT component, we position the system as a practical proof-of-concept workflow in which the smartphone fuses sensor readings with the model output to generate the RiskCode (Algorithm 2); the current evaluation remains scenario-based (Table 8), and we add clear limitations and future work toward real deployment evaluation (noise robustness, latency/energy profiling, and larger real-user/patient validation).

Action: Checked the results with unseen test data and added the limitations to the Conclusion.
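The smartphone-side fusion mentioned above can be sketched as a simple threshold combiner; the discrete 0–3 coding and the 0.5 model cutoff are illustrative assumptions, not Algorithm 2 itself, while the ≥3 °C and ≤90% thresholds come from the paper:

```python
def risk_code(temp_asym_c, spo2_pct, model_risk):
    """Fuse the IoT thresholds (>= 3 degC limb-temperature asymmetry,
    <= 90% SpO2) with the global model's risk probability into a discrete
    alert level. The 0-3 coding and 0.5 cutoff are illustrative only."""
    flags = (int(temp_asym_c >= 3.0)
             + int(spo2_pct <= 90.0)
             + int(model_risk >= 0.5))
    return flags  # 0 = all normal ... 3 = all indicators abnormal

normal = risk_code(0.5, 98.0, 0.10)   # no indicator fires
urgent = risk_code(3.4, 88.0, 0.92)   # all three indicators fire
```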

Comment #4: In addition, the current manuscript lacks discussion about other wireless sensing solutions for healthcare monitoring. Please add discussions about the following studies that use other energy-efficient methods for detecting health status of human beings.

Zhang, D., Zhang, X., Xie, Y., Zhang, F., Yang, H. and Zhang, D., 2024. From single-point to multi-point reflection modeling: Robust vital signs monitoring via mmwave sensing. IEEE Transactions on Mobile Computing.

Ni, T., Sun, Z., Han, M., Xie, Y., Lan, G., Li, Z., Gu, T. and Xu, W., 2024, October. Rehsense: Towards battery-free wireless sensing via radio frequency energy harvesting. In Proceedings of the Twenty-Fifth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing (pp. 211-220).

Response #4: We thank the reviewer for the comment. We have added discussions of these studies to the revised manuscript.

Action: Cited in the Introduction (Section 1).

Comment #5: Finally, the manuscript would benefit from substantial editing for clarity, consistency, and structure. There are inconsistencies in the description of the dataset (e.g., different numbers and types of features, a confusion between symptom-based attributes and demographic variables like gender and work_type), occasional notation issues in the FL and feature-optimization formulas, and some repetition in the Introduction and Related Work sections. The contributions list partially restates standard FL properties without clearly delineating what is novel compared to existing FL-based healthcare frameworks or stroke-monitoring systems.

Response #5: Thank you for the suggestion. We performed a careful clarity and consistency edit across the manuscript. First, we fixed the dataset-description inconsistency by ensuring the data section and preprocessing text consistently describe the symptom-based attributes (with age and stroke-risk score) and the binary AtRisk label, and we removed the leftover demographic-variable examples (e.g., gender, work_type) that could confuse the reader. Second, we standardized the notation used in the FL and feature-optimization parts (consistent symbols for clients, rounds, local epochs, and the FedAvg update in Eq. (5)) and corrected minor equation/algorithm references to avoid ambiguity. Third, we reduced repetition by merging overlapping background paragraphs in the Introduction/Related Work. Finally, we revised the contributions list to focus on what is novel in this study (feature optimization + parameter-level FedAvg + IoT RiskCode workflow) rather than restating general FL properties that are already well known.

Action: Modified in Section 2.1.1 and 2.1.2

 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

No further questions to the authors.

 

Reviewer 2 Report

Comments and Suggestions for Authors

Everything is covered, as per the last review's comments. This ensures that all aspects have been thoroughly evaluated and addressed. Moving forward, no additional feedback will be incorporated into the next round of revisions. It is good to go for publication.

Reviewer 3 Report

Comments and Suggestions for Authors

I sincerely appreciate the authors for their efforts in submission and revisions. All my previous concerns were addressed.
