SAFE-MED for Privacy-Preserving Federated Learning in IoMT via Adversarial Neural Cryptography
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The manuscript effectively addresses the challenges of privacy-preserving federated learning in IoMT via adversarial neural cryptography. The paper's structure and readability ensure that readers can easily follow the work. I would like to offer a few comments for further improvement:
- The abstract is technically sound, covering the domain, problem statement, and objectives, and highlighting the results of this work.
- In the Introduction, provide the motivation under a separate subheading.
- In Section 1, the authors should further highlight the challenges of handling the proposed problem. Why can the existing work not address this problem well? The authors also need to highlight and supplement the motivation of this work.
- It would be advisable to state the research gap and mention how you addressed it.
- In the Related Work section, it would be advisable to compare the state of the art with the current paper in tabular form.
- The flow of the model is not clearly presented. Consider using a graphical representation with numbered steps inside the figure to help readers better understand the process.
- Some lemmas and theorems should be added, and the proofs of the theorems supporting the new idea of the paper should be provided in an Appendix. The mathematical modeling used to support and analyze the method is not sufficient. The algorithms should be rewritten, with the theorems and key equations embedded into their steps. A cost or complexity analysis of the method or technology should be added.
- The meaning of variables is not clear. Readers will be confused. To help readers’ understanding, the authors should add a notation list.
- The paper contains a few grammar mistakes, which should be corrected in the final version.
- Consider including a subsection titled 'Limitations and Future Scope' in the Results and Discussion section.
- Add a "Discussion" subsection. To support the new idea of this paper, it should compare and discuss the proposed technology relative to the reviewed works.
I recommend that this paper be accepted after major revision.
Author Response
Response to Reviewer 1 Comments
1. Summary
Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted files.
2. Point-by-point response to Comments and Suggestions for Authors
Comment 1: The abstract is technically sound, covering the domain, problem statement, and objectives, and highlighting the results of this work.
Response 1: We thank the reviewer for the positive feedback on the abstract. We are pleased that the abstract effectively conveys the problem, objectives, and results of this work. To further improve readability, we have made minor language refinements in the abstract (highlighted in the revised manuscript).
Comment 2: In the Introduction, provide the motivation under a separate subheading.
Response 2: Agree. We thank the reviewer for this constructive suggestion. Following the recommendation, we have revised the Introduction section by explicitly including a Motivation subsection. This new subsection highlights the privacy, resource, and adversarial challenges in IoMT-driven federated learning and explains the rationale behind developing the proposed SAFE-MED framework.
Comment 3: In Section 1, the authors should further highlight the challenges of handling the proposed problem. Why can the existing work not address this problem well? The authors also need to highlight and supplement the motivation of this work.
Response 3: Agree. We thank the reviewer for this valuable suggestion. In the revised manuscript, we have expanded Section 1 to more clearly articulate the challenges in handling the proposed problem and to explain why existing solutions cannot adequately address these challenges in IoMT. Specifically, we now highlight the limitations of secure aggregation (high overhead), differential privacy (loss of accuracy), and homomorphic encryption (computational impracticality), as well as their lack of robustness against adversarial attacks. Furthermore, we have supplemented the Motivation subsection to explicitly connect these gaps with the rationale behind the design of SAFE-MED. These additions strengthen the motivation and clarify the novelty of our work. The updated section is highlighted in yellow color.
Comment 4: It would be advisable to state the research gap and mention how you addressed it.
Response 4: Agree. We appreciate the reviewer’s insightful comment. In the revised manuscript, we have added a dedicated paragraph in Section 1, immediately after the Motivation subsection. This paragraph explicitly outlines the limitations of existing privacy-preserving federated learning approaches (secure aggregation, differential privacy, and homomorphic encryption) in the context of IoMT, and then states how SAFE-MED addresses this gap through adversarial neural cryptography, anomaly-aware validation, and trust-based aggregation. This addition strengthens the clarity of our contribution and distinguishes our work from prior literature.
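For illustration, the trust-based aggregation step can be sketched as a minimal code example (the function name, normalization rule, and toy values below are our illustrative assumptions rather than the exact SAFE-MED procedure):

```python
import numpy as np

def trust_weighted_aggregate(updates, trust_scores):
    """Aggregate client updates as a convex combination weighted by trust.

    updates: list of 1-D numpy arrays (one decrypted update per client)
    trust_scores: non-negative floats (higher = more trusted)
    """
    trust = np.asarray(trust_scores, dtype=float)
    if trust.sum() == 0:
        raise ValueError("at least one client must have non-zero trust")
    weights = trust / trust.sum()   # normalize so the weights sum to 1
    stacked = np.stack(updates)     # shape: (num_clients, P)
    return weights @ stacked        # trust-weighted average, shape: (P,)

# A fully distrusted client (trust 0) contributes nothing to the aggregate:
agg = trust_weighted_aggregate(
    [np.array([1.0, 1.0]), np.array([3.0, 5.0]), np.array([100.0, 100.0])],
    [1.0, 1.0, 0.0],
)
# agg is the average of the two trusted clients only: [2.0, 3.0]
```

The key design point is that a poisoned or anomalous client can be down-weighted continuously (via its trust score) rather than only excluded outright.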
Comment 5: In the Related Work section, it would be advisable to compare the state of the art with the current paper in tabular form.
Response 5: Agree. We thank the reviewer for this helpful suggestion. In the revised manuscript, we have added a comparative table (Table 1 in Section 2) that summarizes the key characteristics, limitations, and applicability of state-of-the-art privacy-preserving federated learning approaches. The table also highlights how our proposed SAFE-MED framework differs from and improves upon these existing methods, particularly in the IoMT context.
Comment 6: The flow of the model is not clearly presented. Consider using a graphical representation with numbered steps inside the figure to help readers better understand the process.
Response 6: Agree. We thank the reviewer for this constructive suggestion. In the revised manuscript, we have added a new graphical representation (Figure 2 in Section 4) illustrating the SAFE-MED workflow. The figure includes numbered steps (local training, neural encryption, transmission, validation, aggregation, decryption, and distribution) that correspond to the textual description of the methodology. This addition improves clarity and helps readers follow the end-to-end process of our proposed framework.
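The numbered workflow steps can also be sketched in code (every function body here is a simplified stand-in of our own invention; in SAFE-MED the encryptor and decryptor are trained neural networks, not an additive key):

```python
import numpy as np

rng = np.random.default_rng(0)
KEY = rng.standard_normal(4)  # stand-in for the learned encryptor/decryptor state

def local_training(data):              # Step 1: each client computes a local update
    return data.mean(axis=0)

def neural_encrypt(update):            # Step 2: stub for the learned encryptor E_theta
    return update + KEY

def validate(ciphertexts):             # Step 4: anomaly-aware validation (stub: norm check)
    return [c for c in ciphertexts if np.linalg.norm(c) < 1e3]

def aggregate(ciphertexts):            # Step 5: server-side aggregation
    return np.mean(ciphertexts, axis=0)

def neural_decrypt(ciphertext):        # Step 6: stub for the decryptor D_phi
    return ciphertext - KEY

# Steps 1-7 end to end for two clients (Step 3 = transmission, Step 7 = distribution):
clients = [rng.standard_normal((8, 4)) for _ in range(2)]
sent = [neural_encrypt(local_training(d)) for d in clients]   # encrypt + transmit
global_update = neural_decrypt(aggregate(validate(sent)))     # validate, aggregate, decrypt
```

Because the stand-in cipher is additive, decrypting the average recovers the average of the plaintext updates exactly; the real framework replaces these stubs with adversarially trained networks.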
Comment 7: Some lemmas and theorems should be added, and the proofs of the theorems supporting the new idea of the paper should be provided in an Appendix. The mathematical modeling used to support and analyze the method is not sufficient. The algorithms should be rewritten, with the theorems and key equations embedded into their steps. A cost or complexity analysis of the method or technology should be added.
Response 7: Agree. We appreciate the reviewer’s insightful suggestion regarding mathematical rigor. In response, we have added formal lemmas and theorems (Section 4) that establish the confidentiality and convergence guarantees of SAFE-MED. Their detailed proofs are provided in Appendix A. Furthermore, we have revised Algorithm 1 to embed the key equations and references to the theorems directly into its step descriptions. Finally, we added a new paragraph on computational and communication complexity analysis, where we show that SAFE-MED achieves linear-time overhead per update and significantly lower communication cost compared to cryptographic baselines. These additions strengthen both the theoretical foundations and the practical efficiency analysis of the proposed framework.
Comment 8: The meaning of variables is not clear. Readers will be confused. To help readers’ understanding, the authors should add a notation list.
Response 8: Agree. We thank the reviewer for pointing out this important aspect. In the revised manuscript, we have added a comprehensive notation table (Table 2) that summarizes all key symbols and variables used in the paper, including those related to federated optimization, cryptographic modules, and system parameters. This addition improves readability and ensures that readers can easily follow the technical development without ambiguity.
Comment 9: The paper contains a few grammar mistakes, which should be corrected in the final version.
Response 9: We sincerely thank the reviewer for this observation. The entire manuscript has been carefully proofread, and grammar, punctuation, and stylistic issues have been corrected to ensure academic clarity and consistency.
Comment 10: Consider including a subsection titled 'Limitations and Future Scope' in the Results and Discussion section.
Response 10: We appreciate the reviewer’s suggestion. In the current revision, the limitations of the proposed framework and directions for future research have already been discussed in detail within the Conclusion section. To avoid redundancy and maintain a concise structure, we have retained these discussions in the Conclusion rather than creating a separate subsection. We have, however, revised the Conclusion to make the discussion on limitations and future scope more explicit so that readers can easily identify these aspects.
Comment 11: Add a "Discussion" subsection. To support the new idea of this paper, it should compare and discuss the proposed technology relative to the reviewed works.
Response 11: We thank the reviewer for this suggestion. In the revised manuscript, the discussion on the strengths, limitations, and relative positioning of the proposed SAFE-MED framework with respect to state-of-the-art approaches has already been integrated into the Related Work and Results & Analysis sections. This includes both narrative and comparative evaluations highlighting how SAFE-MED differs from and advances beyond existing works. To maintain a coherent and non-redundant structure, we have not created a separate “Discussion” subsection; however, we have strengthened the comparative remarks in the Results section to make the discussion more explicit and visible.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The SAFE-MED framework integrates adversarial neural cryptography, anomaly detection, trust-weighted aggregation, and compression in one pipeline, but there is no clear evidence that this complexity can be deployed on actual resource-constrained IoMT devices. The claim of real-time operation on devices with <64KB SRAM needs empirical validation with actual benchmarks, not just theoretical feasibility.
While multiple datasets are tested, there is no mention of cross-domain testing (e.g., switching from ECG to imaging data) or real-world noisy datasets from clinical environments. How does SAFE-MED perform under highly imbalanced, noisy, or corrupted real-world IoMT data distributions?
The paper claims to defend against both passive and active adversaries, but the formal threat model lacks detail on: Adaptive adversaries that evolve over time; and Insider attacks where fog nodes themselves are malicious. How resilient is SAFE-MED if the adversary gains partial access to the encryptor/decryptor weights?
The authors fail to provide direct benchmarks against hybrid HE+DP or DP+secure aggregation approaches under the same settings. Without such comparisons, it’s unclear whether the added neural cryptography complexity gives a real advantage over combined DP + secure aggregation or lightweight HE schemes.
In addition, the authors also fail to provide a stability analysis of the learned encryption mappings over multiple rounds or explain why they do not leak patterns over time. For clinical deployment, regulators may require interpretability or formal proofs beyond empirical adversary failure rates.
There is no analysis of model divergence in heterogeneous participation scenarios. What mechanisms prevent parameter drift when client participation is sparse or biased toward certain devices?
Without clear guidelines, SAFE-MED might be hard to reproduce or tune in practice. The paper does not provide a sensitivity analysis for λ, γ, and σ², nor does it describe how to select them for different IoMT contexts. This weakens the claim that SAFE-MED is “secure” in the same sense as established cryptographic protocols.
A claimed 42% communication cost reduction relative to HE methods is reported, but the baseline selection and network conditions are not fully specified.
Therefore, the question is: what is the exact experimental setup, network simulation, and compression configuration used to measure the 42% savings?
The current simulation parameters appear to be static. To strengthen the evaluation, please incorporate dynamic or time-varying parameters (e.g., changing network conditions, client availability, and attack intensities) and demonstrate that the proposed model maintains robustness under such varying conditions.
Author Response
Response to Reviewer 2 Comments
1. Summary
Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted files.
2. Point-by-point response to Comments and Suggestions for Authors
Comment 1: The SAFE-MED framework integrates adversarial neural cryptography, anomaly detection, trust-weighted aggregation, and compression in one pipeline, but there is no clear evidence that this complexity can be deployed on actual resource-constrained IoMT devices. The claim of real-time operation on devices with <64KB SRAM needs empirical validation with actual benchmarks, not just theoretical feasibility.
Response 1: Thank you for this insightful comment. We agree that empirical validation of deployment feasibility is essential. In the revised manuscript, we have added benchmarking experiments on a representative IoMT hardware profile to support our claim. Specifically:
We have inserted these results as a new Table (Deployment Benchmark) and clarified in Section IV (Implementation & Evaluation) that SAFE-MED is not only theoretically efficient but also empirically lightweight and practically deployable on embedded-class IoMT devices.
Comment 2: While multiple datasets are tested, there is no mention of cross-domain testing (e.g., switching from ECG to imaging data) or real-world noisy datasets from clinical environments. How does SAFE-MED perform under highly imbalanced, noisy, or corrupted real-world IoMT data distributions?
Response 2: Thank you for raising this important point regarding generalizability under cross-domain and noisy data settings. In the revised manuscript, we have extended our experimental evaluation to explicitly cover these scenarios.
These results, summarized in the new Table 11 (Section IV, Extended Ablation under Cross-Domain and Noisy Conditions), confirm that SAFE-MED generalizes well across diverse medical data modalities and remains effective under noisy, corrupted, and imbalanced distributions.
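As a generic illustration of the kind of corruption injected in such robustness tests (this recipe and its parameters are our own simplification, not the paper's exact experimental protocol), label noise can be simulated as follows:

```python
import random

random.seed(0)

def corrupt_labels(labels, noise_rate, num_classes):
    """Flip a fraction of labels uniformly at random to a different class,
    simulating noisy clinical annotations (illustrative recipe only)."""
    flipped = list(labels)
    idx = random.sample(range(len(flipped)), int(noise_rate * len(flipped)))
    for i in idx:
        choices = [c for c in range(num_classes) if c != flipped[i]]
        flipped[i] = random.choice(choices)
    return flipped

clean = [0, 1] * 50                    # 100 balanced binary labels
noisy = corrupt_labels(clean, 0.2, 2)  # inject 20% label noise
changed = sum(a != b for a, b in zip(clean, noisy))  # exactly 20 labels flipped
```

Sweeping `noise_rate` (and, analogously, a class-imbalance ratio) then yields accuracy-versus-corruption curves of the kind reported in the extended ablation.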
Comment 3: The paper claims to defend against both passive and active adversaries, but the formal threat model lacks detail on: Adaptive adversaries that evolve over time; and Insider attacks where fog nodes themselves are malicious. How resilient is SAFE-MED if the adversary gains partial access to the encryptor/decryptor weights?
Response 3: Thank you for highlighting this important point. We agree that the original threat model description did not explicitly address adaptive adversaries, insider threats at the fog layer, and partial parameter leakage. To strengthen the paper, we have revised Section III (Threat Modelling) and Section VI (Limitations) as follows:
With these clarifications and additions (new bullet list in Section III and limitation note in Section VI), the SAFE-MED threat model now explicitly captures adaptive, insider, and partial-leakage scenarios, demonstrating resilience under stronger adversarial assumptions while transparently acknowledging residual risks.
Comment 4: The authors fail to provide direct benchmarks against hybrid HE+DP or DP+secure aggregation approaches under the same settings. Without such comparisons, it’s unclear whether the added neural cryptography complexity gives a real advantage over combined DP + secure aggregation or lightweight HE schemes.
Response 4: Thank you for this valuable suggestion. We agree that direct comparisons with hybrid privacy-preserving schemes are essential to demonstrate the advantage of SAFE-MED. In the revised manuscript, we have made two key additions:
With the addition of Table 14 and an extended discussion in Section II, the manuscript now explicitly demonstrates that SAFE-MED provides a superior privacy, utility, and efficiency trade-off compared to hybrid HE+DP and DP+Secure Aggregation approaches, thereby justifying the use of adversarial neural cryptography in resource-constrained IoMT environments.
Comment 5: In addition, the authors also fail to provide a stability analysis of the learned encryption mappings over multiple rounds or explain why they do not leak patterns over time. For clinical deployment, regulators may require interpretability or formal proofs beyond empirical adversary failure rates.
Response 5: We thank the reviewer for this insightful observation. We acknowledge that long-term stability and interpretability of neural encryption mappings are critical, especially in clinical applications. In the revised manuscript, we have included the following clarifications and analyses:
With these additions, the manuscript now provides both an empirical stability analysis across multiple rounds and a transparent acknowledgment of the current limitation regarding formal proofs, thereby strengthening the case for SAFE-MED’s clinical feasibility while highlighting directions for future regulatory alignment.
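One simple way to operationalize such a round-to-round stability check is to track the similarity of ciphertexts produced for a fixed probe input across training rounds. The sketch below uses stand-in encryption maps of our own invention, not the trained SAFE-MED encryptor:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ciphertext_drift(encrypt_fns, probe):
    """Given one (frozen) encryption function per round, measure how much the
    ciphertext of a fixed probe input changes between consecutive rounds."""
    cts = [f(probe) for f in encrypt_fns]
    return [cosine(cts[t], cts[t + 1]) for t in range(len(cts) - 1)]

# Toy example: three rounds where the stand-in encryptor barely changes,
# so consecutive ciphertexts stay almost perfectly aligned.
rng = np.random.default_rng(1)
probe = rng.standard_normal(16)
rounds = [lambda g, s=s: g * (1.0 + 0.01 * s) for s in range(3)]
drift = ciphertext_drift(rounds, probe)
```

Values near 1.0 indicate a stable mapping for that probe; conversely, a stability audit for leakage would also want ciphertexts of *different* inputs to remain dissimilar over time.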
Comment 6: There is no analysis of model divergence in heterogeneous participation scenarios. What mechanisms prevent parameter drift when client participation is sparse or biased toward certain devices?
Response 6: We thank the reviewer for highlighting this important aspect. We acknowledge that heterogeneous participation and sparse client availability are critical challenges in federated IoMT environments. In the revised manuscript, we have addressed this concern through both empirical analysis and algorithmic clarifications:
Comment 7: Without clear guidelines, SAFE-MED might be hard to reproduce or tune in practice. The paper does not provide a sensitivity analysis for λ, γ, and σ², nor does it describe how to select them for different IoMT contexts. This weakens the claim that SAFE-MED is “secure” in the same sense as established cryptographic protocols.
Response 7: We thank the reviewer for pointing out this important aspect of reproducibility. In the revised manuscript, we have added both a sensitivity analysis and practical guidelines for hyperparameter selection:
Comment 8: A claimed 42% communication cost reduction relative to HE methods is reported, but the baseline selection and network conditions are not fully specified. Therefore, the question is: what is the exact experimental setup, network simulation, and compression configuration used to measure the 42% savings?
Response 8: We appreciate the reviewer’s request for clarification. In the revised manuscript, we have expanded the description of the experimental setup to make the comparison more transparent. Specifically:
These details have now been incorporated into the Experimental Setup subsection and a clarifying footnote has been added to the Results section to ensure reproducibility and transparency of the reported 42% communication cost reduction.
Comment 9: The current simulation parameters appear to be static. To strengthen the evaluation, please incorporate dynamic or time-varying parameters (e.g., changing network conditions, client availability, and attack intensities) and demonstrate that the proposed model maintains robustness under such varying conditions.
Response 9: We thank the reviewer for this valuable suggestion. In the revised manuscript, we have clarified that the simulation setup already incorporates stochastic variability in several parameters: (i) client compute capacity was sampled between 0.5–2.0 GHz, (ii) network bandwidth was sampled uniformly between 100–500 kbps, and (iii) client selection per round followed random sampling at 10%. These factors introduce heterogeneity across rounds, partially reflecting dynamic IoMT conditions. To further strengthen the robustness evaluation, we extended our analysis with a dedicated stress-test experiment that introduced additional dynamic variations:
As shown in Figure 20, SAFE-MED maintained stable performance under these challenging conditions, with accuracy degradation limited to less than 3% and communication savings preserved at approximately 40%. Finally, we acknowledge that a full-scale dynamic benchmarking across real-world IoMT network traces remains an important next step. This has been explicitly highlighted in the Limitations and Future Work section in the Conclusion. Revisions in Manuscript:
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
Based on my thorough reading and evaluation of the manuscript (SAFE-MED for Privacy-Preserving Federated Learning in IoMT via Adversarial Neural Cryptography), the paper has several shortcomings and deficiencies that require significant revisions. Specifically:
Abbreviations are not defined upon their first appearance. For example, “ANC” is used without its full name being provided when it first appears; “IoMT” is explained in the abstract, but lacks further clarification when it reappears in the main text.
The content of the related work is too extensive; it is recommended to consolidate and simplify it, summarizing the literature and its shortcomings.
Sections 3 and 4, as the main content of this paper, should include detailed diagrams for illustration to facilitate a clear understanding of the paper's content.
Terminology is inconsistent; terms such as “gradient,” “model update,” and “local update” are used interchangeably, and “ciphertext” is sometimes directly used to refer to encrypted gradients.
Variables in multiple formulas (e.g., g, g~, g^, αi, β, ω1, ω2) are not immediately defined upon their first appearance, or their definitions are scattered throughout the text.
Some formulas mix vectors, scalars, and matrices without clear dimensions, such as in formula (3), where git ∈ RP is assumed to remain in the same space after encryption by Eθ, but the notation is retained unchanged after compression.
The minimax objectives (4), (10), and (17) do not provide explicit solution strategies or convergence conditions, and lack derivations for synchronizing parameter updates in multi-party, distributed environments.
β is used both as an anomaly detection threshold and possibly as a Gaussian noise parameter; α represents both trust weights and momentum decay coefficients. Why?
For example, Eq. (30) defines Rleak, but the experimental section does not actually calculate or plot the change curve of Rleak, making the formula merely formal.
Probability symbols are used inconsistently, such as in pit=P(di∈Dt), where the probability definition is not consistent with the subsequent description of the Bernoulli distribution; expectation symbols lack consistency.
The description of the encoder/decoder/adversary network structure is too vague, only providing the MLP and number of layers, without detailed parameter configurations, activation functions, normalization methods, initialization strategies, and other critical information, making it impossible to reproduce.
Although passive and active attacks are listed, there are no rigorous formal definitions or assumption boundaries, such as whether the attacker can access partial plaintext gradients or whether they can coordinate attacks, leading to unclear foundations for security analysis.
Training in adversarial cryptography is prone to pattern collapse and unstable convergence. The paper does not provide convergence curves or hyperparameter sensitivity analysis, nor does it discuss the risks of training failure.
The baseline methods for comparison are not clearly defined. For example, the parameter selection for FedAvg+HE and DP is unfair (possibly with low noise or low precision), and there is insufficient comparison with current state-of-the-art hybrid secure aggregation and lightweight encryption frameworks.
In the references, some citations use [numbers], while others directly write “authors in [X]”.
In the experiments, fog nodes are currently considered “semi-trustworthy.” Please add an end-to-end evaluation of “fog nodes completely compromised/plaintext temporary storage leakage” to observe whether anomaly detection and trust weighting are still effective (including success rate/false positive rate).
The paper defines “leakage” as the cosine similarity between the original gradient and the adversary's reconstructed gradient, and concludes that it reduces leakage by 85% compared to FedAvg. However, it does not provide more standard metrics for reconstruction quality or attack success rate (e.g., SSIM/PSNR/MIA accuracy), nor does it include significance intervals or multi-round evolution curves; only single-point comparisons and verbal explanations are provided.
Table 2 labels communication overhead as “µMB” (micro-megabytes), which is a very rare and easily ambiguous unit; the paper also describes a “42% reduction per round,” so the units and baselines should be stated consistently.
The conclusion claims “only 1–2% lower than FedAvg,” but on the Cleveland dataset, FedAvg = 88.6 and SAFE-MED = 87.4 (a difference of 1.2%), on the MIT-BIH dataset, 95.1 → 94.2 (a difference of 0.9%). Table 2 lists “Overall Accuracy” as a single column (95.1 vs. 94.2), so it is necessary to clarify which dataset/weighting is used in Table 2 to avoid using MIT-BIH values as a substitute for the overall accuracy.
Table 3 lists the accuracy rates under different poisoning ratios, but does not specify the attack type, target/non-target, whether there is collusion, or whether it is adaptive adversarial (knowing your encryptor/training process).
Tables 2 and 3 do not include classic robust aggregation baselines such as Krum, Bulyan, Trimmed-Mean, or FLTrust; using only “FedAvg/DP/HE/NeuralCrypto-FL” for comparison makes it difficult to demonstrate the relative advantage of your method in terms of robustness.
Conclusion must not exceed 500 words.
Comments on the Quality of English Language
The English could be improved to more clearly express the research.
Author Response
Dear Reviewer,
Thank you very much for your constructive and detailed feedback on our manuscript. We have carefully addressed all of your comments and suggestions. A detailed point-by-point rebuttal has been prepared in LaTeX, and the revised manuscript has been updated accordingly. For your convenience, both the PDF version of the rebuttal and the LaTeX source are attached.
\section*{Response to Reviewer 3 Comments}
We sincerely thank the reviewer for the valuable time and constructive feedback provided on this manuscript. The detailed, point-by-point responses are included below, and the corresponding revisions have been incorporated into the manuscript. All changes are highlighted/track-changed in the re-submitted files for clarity.
\subsection*{2. Point-by-point Response to Comments and Suggestions for Authors}
\paragraph{Comment 1: Abbreviations are not defined upon their first appearance. For example, ``ANC'' is used without its full name being provided when it first appears; ``IoMT'' is explained in the abstract, but lacks further clarification when it reappears in the main text.}
\paragraph{Response 1:} We thank the reviewer for pointing this out. We have carefully revised the manuscript to ensure that all abbreviations are defined at their first occurrence in both the abstract and the main text. Specifically, ``ANC'' is now introduced as Adversarial Neural Cryptography (ANC) at its first mention, and ``IoMT'' is redefined as Internet of Medical Things (IoMT) when it first appears in the main body of the manuscript for clarity and consistency. A complete list of abbreviations has also been added to the end of the paper for ease of reference.
\paragraph{Comment 2: The content of related work is too extensive; it is recommended to consolidate and simplify it, summarizing the literature content and its shortcomings.}
\paragraph{Response 2:} We appreciate the reviewer’s valuable feedback. In the revised manuscript, we have streamlined the Related Work section to improve readability and focus. Instead of lengthy descriptions of individual studies, we now provide a consolidated summary that groups existing works into categories (e.g., privacy-preserving techniques, federated learning in IoMT, adversarial methods). For each category, we briefly highlight representative contributions and emphasize their limitations in relation to our work. To further improve clarity, we have also added a new summary comparison (Table~1), which contrasts key prior approaches with SAFE-MED in terms of methodology, assumptions, and limitations. This restructuring reduces redundancy, provides a clearer overview of the state of the art, and strengthens the motivation for our proposed framework.
\paragraph{Comment 3: Section 3 and 4, as the main content of this paper, should include detailed diagrams for illustration to facilitate a clear understanding of the paper's content.}
\paragraph{Response 3:} We thank the reviewer for this constructive suggestion. To enhance clarity, we have added a new diagram in Section 4 (Figure~2: Functional block diagram of the SAFE-MED framework) that illustrates the workflow of the proposed system, including the federated training process, neural encryption modules, and adversarial components. This figure provides a visual overview of the framework’s architecture and operational flow, complementing the textual description.
\paragraph{Comment 4: Terminology is inconsistent; terms such as ``gradient,'' ``model update,'' and ``local update'' are used interchangeably, and ``ciphertext'' is sometimes directly used to refer to encrypted gradients.}
\paragraph{Response 4:} We thank the reviewer for identifying this important issue. In the revised manuscript, we have carefully reviewed and standardized the terminology throughout the paper. Specifically:
\begin{enumerate}
\item Gradient updates are used to refer to the values computed locally by each client.
\item Model update is used exclusively to denote the aggregated result at the server.
\item The phrase ``local update'' has been avoided to prevent ambiguity.
\item The term ``ciphertext'' has been clarified to explicitly mean encrypted gradient updates. For example, in Section 4.2 we revised the sentence ``the ciphertext is transmitted to the server'' to ``the encrypted gradient updates are transmitted to the server.''
\end{enumerate}
This revision ensures consistency, improves precision, reduces confusion, and aligns the manuscript with standard terminology in the federated learning literature.
\paragraph{Comment 5: Variables in multiple formulas (e.g., $g, \tilde{g}, \hat{g}, \alpha_i, \beta, \omega_1, \omega_2$) are not immediately defined upon their first appearance, or their definitions are scattered throughout the text.}
\paragraph{Response 5:} We thank the reviewer for the observation. In the revised manuscript, all variables are now explicitly defined at or before their first appearance. Hyperparameters are also described at first mention, along with their roles in the optimization objective. For clarity, we have added Table~2 summarizing all key variables and symbols.
\paragraph{Comment 6: Some formulas mix vectors, scalars, and matrices without clear dimensions, such as in formula (3), where $g_i^t \in \mathbb{R}^P$ is assumed to remain in the same space after encryption by $E_\theta$, but the notation is retained unchanged after compression.}
\paragraph{Response 6:} We thank the reviewer for pointing out this ambiguity. In the revised manuscript, we have refined the notation to make the dimensionality of vectors, matrices, and scalars explicit at each transformation step. Specifically, in Eq.~(3) we now denote:
\[
g_i^t \in \mathbb{R}^P, \quad
c_i^t = E_\theta(g_i^t) \in \mathbb{R}^{P'}, \quad
\hat{c}_i^t = C_\gamma(c_i^t) \in \mathbb{R}^{\gamma P'}
\]
where $P$ is the gradient dimension and $\gamma$ is the compression ratio. We have also added a Notation Table (Table~2) that explicitly lists all variables, their types, and dimensions for quick reference.
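To make the dimensionality bookkeeping of Eq.~(3) concrete, the transformation chain can be sketched as follows. The linear ``encryptor'' and top-$k$ compressor below are hypothetical stand-ins (the actual modules are neural networks); the dimensions are assumed example values, used only to illustrate the shapes at each step.

```python
import numpy as np

# Illustrative sketch of the Eq. (3) shape bookkeeping (not the paper's
# actual networks): a hypothetical linear "encryptor" maps g in R^P to
# c in R^{P'}, and a compressor keeps a gamma-fraction of the coordinates.
rng = np.random.default_rng(0)

P, P_prime, gamma = 1000, 512, 0.25   # assumed example dimensions
g = rng.standard_normal(P)            # g_i^t in R^P: plaintext gradient

E_theta = rng.standard_normal((P_prime, P)) / np.sqrt(P)
c = E_theta @ g                       # c_i^t = E_theta(g_i^t) in R^{P'}

k = int(gamma * P_prime)              # compressed size: gamma * P'
top_idx = np.argsort(-np.abs(c))[:k]  # e.g. top-k magnitude compression
c_hat = c[top_idx]                    # \hat{c}_i^t in R^{gamma P'}

print(g.shape, c.shape, c_hat.shape)  # (1000,) (512,) (128,)
```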
\paragraph{Comment 7: The minimax objectives (4), (10), and (17) do not provide explicit solution strategies or convergence conditions, and lack derivations for synchronizing parameter updates in multi-party, distributed environments.}
\paragraph{Response 7:} We thank the reviewer for highlighting this important point. In the revised manuscript, we have clarified the solution strategies for the minimax objectives, formalized convergence conditions, and specified synchronization in the federated setting. Specifically:
\begin{enumerate}
\item \textbf{Solution Strategy:} Each minimax objective (previously Eqs.~(4), (10), (17), now renumbered as Eqs.~(6), (16), and (21)) is solved using alternating stochastic gradient descent (SGD), where the encryptor/decryptor parameters $\theta, \phi$ are updated to minimize reconstruction loss, while the adversary parameters $\psi$ are updated to maximize the adversarial objective.
\item \textbf{Convergence Conditions:} Convergence is guaranteed under standard assumptions of convexity in expectation, bounded gradients, and sufficiently small learning rates. While neural modules are non-convex, convergence toward a Nash equilibrium in adversarial training has been empirically validated. This is explicitly stated in Theorem~2 (Convergence under Encryption).
\item \textbf{Synchronization in Federated Setup:} Parameter updates are synchronized using trust-weighted FedAvg at the cloud after each round, i.e.,
\[
\theta^{t+1} = \sum_{i \in \mathcal{S}_t} \alpha_i \theta_i^t,
\]
where $\mathcal{S}_t$ is the selected client set and $\alpha_i$ are trust weights. This ensures consistency of encryption/decryption modules across distributed participants.
\end{enumerate}
Revisions in Manuscript:
\begin{itemize}
\item Added a new subsection ``Solution Strategy for Minimax Objectives'' in Section 4.
\item Updated Theorem~2 to explicitly reflect convergence conditions.
\item Inserted synchronization equations immediately after Algorithm~1 for clarity.
\end{itemize}
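The trust-weighted synchronization step above can be sketched in a few lines; the trust scores in the toy example are hypothetical placeholders for the framework's computed weights $\alpha_i$, normalized to sum to one over the selected set $\mathcal{S}_t$.

```python
import numpy as np

# Sketch of trust-weighted FedAvg synchronization:
#   theta^{t+1} = sum_{i in S_t} alpha_i * theta_i^t
# with hypothetical trust weights alpha_i normalized over the selected set.
def trust_weighted_fedavg(client_params, trust_scores):
    """client_params: list of parameter vectors theta_i^t from S_t;
    trust_scores: nonnegative per-client trust values."""
    alpha = np.asarray(trust_scores, dtype=float)
    alpha = alpha / alpha.sum()        # normalize trust weights to sum to 1
    stacked = np.stack(client_params)  # |S_t| x P matrix of client params
    return alpha @ stacked             # weighted average: theta^{t+1}

# toy usage: the low-trust client contributes least to the global model
thetas = [np.array([1.0, 1.0]), np.array([3.0, 3.0]), np.array([100.0, 100.0])]
theta_next = trust_weighted_fedavg(thetas, [0.45, 0.45, 0.10])
print(theta_next)  # [11.8 11.8]
```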
\paragraph{Comment 8: $\beta$ is used both as an anomaly detection threshold and may also represent Gaussian noise parameters; $\alpha$ represents both trust weights and momentum decay coefficients.}
\paragraph{Response 8:} We thank the reviewer for catching this notational inconsistency. In the revised manuscript:
\begin{itemize}
\item $\alpha$ is now consistently used only for trust weights in the federated aggregation step. Momentum decay coefficients are replaced with $\nu$.
\item $\beta$ is reserved exclusively for the anomaly detection threshold. Gaussian noise parameters are denoted by $\sigma$.
\end{itemize}
All equations, theorems, and tables are updated accordingly.
\paragraph{Comment 9: For example, Eq.~(30) defines $R_{\text{leak}}$, but the experimental section does not actually calculate or plot the change curve of $R_{\text{leak}}$.}
\paragraph{Response 9:} We thank the reviewer for highlighting this gap. In the revised manuscript, we have incorporated $R_{\text{leak}}$ into our experimental evaluation. Specifically, we computed $R_{\text{leak}}$ across communication rounds under three configurations: (i) baseline federated learning without encryption, (ii) homomorphic encryption (HE)–based aggregation, and (iii) the proposed SAFE-MED framework. Results, presented in Figure~16, demonstrate that SAFE-MED consistently maintains $R_{\text{leak}} < 0.05$ after 50 rounds, while baselines stabilize above 0.20.
\paragraph{Comment 10: Probability symbols are used inconsistently, such as in $p_i^t = P(d_i \in \mathcal{D}_t)$, and expectation symbols lack consistency.}
\paragraph{Response 10:} We thank the reviewer for pointing out these inconsistencies. In the revised manuscript, we have made the following corrections to ensure notation consistency:
\begin{itemize}
\item Client participation is now explicitly modeled as a Bernoulli random variable:
\[
d_i^t \sim \text{Bernoulli}(p_i^t), \quad
p_i^t = \Pr(d_i \in \mathcal{D}_t),
\]
where $\mathcal{D}_t$ denotes the set of selected clients at round $t$.
\item All expectation operators are standardized in the form
\[
\mathbb{E}_{\xi \sim \mathcal{D}_i}[\cdot],
\]
where $\xi$ represents samples from client $i$’s local dataset $\mathcal{D}_i$.
\item Table~2 (Notation) has been updated to explicitly list $p_i^t$, $\sigma^2$, and $\mathbb{E}[\cdot]$ with precise definitions for clarity and quick reference.
\end{itemize}
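A minimal sketch of the standardized participation model illustrates how client selection follows the Bernoulli formulation $d_i^t \sim \text{Bernoulli}(p_i^t)$; the uniform participation probability below is an assumed illustration, not the paper's actual selection policy.

```python
import numpy as np

# Sketch of the client-participation model: d_i^t ~ Bernoulli(p_i^t),
# where p_i^t = Pr(d_i in D_t). The probabilities here are hypothetical.
rng = np.random.default_rng(42)
num_clients, num_rounds = 10, 1000
p = np.full(num_clients, 0.3)          # assumed uniform p_i^t = 0.3

# one round: D_t is the set of clients whose Bernoulli draw equals 1
d = rng.random(num_clients) < p
D_t = np.flatnonzero(d)

# over many rounds the empirical participation rate approaches p_i^t
draws = rng.random((num_rounds, num_clients)) < p
empirical = draws.mean(axis=0)
print(np.round(empirical.mean(), 2))
```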
\paragraph{Comment 11: The description of the encoder/decoder/adversary network structure is too vague, only providing the MLP and number of layers, without detailed parameter configurations, activation functions, normalization methods, initialization strategies, and other critical information, making it impossible to reproduce.}
\paragraph{Response 11:} We thank the reviewer for this valuable observation. We agree that a precise specification of the encoder, decoder, and adversary architectures is necessary for reproducibility. In the revised manuscript, we now provide a detailed description of each component, including layer dimensions, activation functions, normalization methods, and initialization strategies. Specifically:
\begin{itemize}
\item \textbf{Encoder $\mathcal{E}_\theta$:} a three-layer MLP with hidden sizes [256, 128, 64], each followed by ReLU activation and batch normalization. The output layer uses a tanh activation to generate bounded ciphertext representations. Weights are initialized using Xavier uniform initialization.
\item \textbf{Decoder $\mathcal{D}_\phi$:} a symmetric three-layer MLP with hidden sizes [64, 128, 256], with ReLU activations and batch normalization after each hidden layer. The output layer uses linear activation to reconstruct the gradient vector.
\item \textbf{Adversary $\mathcal{A}_\psi$:} a two-layer MLP with hidden sizes [256, 128], ReLU activations, and dropout (rate = 0.2) for regularization. The adversary outputs reconstructed gradient estimates using a linear layer.
\item \textbf{Training setup:} all models are trained with the Adam optimizer (learning rate = $10^{-3}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$) and mean squared error (MSE) as the reconstruction/adversarial loss. Training batches consist of 256 samples, and early stopping is applied with patience = 10 rounds.
\end{itemize}
These details are now included in Section III (Threat Modelling -- Neural Cryptographic Architecture) to ensure reproducibility and are highlighted in red.
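As an illustration of the stated encoder specification, the forward pass can be sketched in plain NumPy. Batch normalization and the training loop are omitted for brevity, the input and ciphertext dimensions are assumed, and this is a reconstruction for exposition only, not the authors' implementation.

```python
import numpy as np

# Sketch of the encoder E_theta's forward pass as specified: hidden sizes
# [256, 128, 64] with ReLU, tanh output for bounded ciphertexts, and
# Xavier-uniform weight initialization. BatchNorm omitted for brevity.
rng = np.random.default_rng(0)

def xavier_uniform(fan_in, fan_out):
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

P, P_prime = 512, 64                  # assumed gradient/ciphertext dims
sizes = [P, 256, 128, 64, P_prime]    # hidden [256, 128, 64] + output
Ws = [xavier_uniform(a, b) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def encoder_forward(g):
    h = g
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.maximum(h @ W + b, 0.0)    # ReLU hidden layers
    return np.tanh(h @ Ws[-1] + bs[-1])   # bounded ciphertext via tanh

c = encoder_forward(rng.standard_normal((32, P)))  # batch of 32 gradients
print(c.shape, bool(np.abs(c).max() <= 1.0))
```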
\paragraph{Comment 12: Although passive and active attacks are listed, there are no rigorous formal definitions or assumption boundaries, such as whether the attacker can access partial plaintext gradients or whether they can coordinate attacks, leading to unclear foundations for security analysis.}
\paragraph{Response 12:} We thank the reviewer for highlighting this important gap. In the revised manuscript, we have strengthened Section III (Threat Modelling) by adding rigorous formal definitions of adversary capabilities and clear assumption boundaries. Specifically:
\begin{enumerate}
\item Passive Adversary (Honest-but-Curious): Defined as an entity (e.g., the cloud server) that follows the federated protocol honestly but attempts to infer private information by analyzing received encrypted gradients. The passive adversary has access only to ciphertext vectors and cannot access plaintext gradients.
\item Active Adversary (Byzantine Client): Defined as a client that can arbitrarily manipulate its model updates or collude with other malicious clients. The active adversary’s objective is either (i) model degradation via poisoning or (ii) privacy leakage via crafted updates. We explicitly assume that active adversaries cannot break the encryption layer directly but can attempt to exploit statistical weaknesses.
\item Adaptive Adversary: Newly clarified as an attacker that evolves strategies over time based on observed ciphertext distributions. We assume such an adversary has no access to encryption/decryption weights but can attempt adaptive gradient reconstruction attacks.
\item Insider Adversary (Malicious Fog Node): We now explicitly state that fog nodes may be compromised. In this case, the adversary has access to a subset of encrypted updates but still cannot access plaintext gradients due to encryption. SAFE-MED defends against this through trust-weighted aggregation and anomaly detection at both fog and cloud levels.
\item Coordination Assumption: We allow collusion among up to 20\% of Byzantine clients but assume that not all fog nodes and the central aggregator are simultaneously compromised. This sets a clear boundary for the adversary’s coordination capability.
\end{enumerate}
These definitions and assumptions are now included at the start of the Threat Modelling subsection to establish a rigorous foundation for the subsequent security analysis.
\paragraph{Comment 13: Training in adversarial cryptography is prone to pattern collapse and unstable convergence. The paper does not provide convergence curves or hyperparameter sensitivity analysis, nor does it discuss the risks of training failure.}
\paragraph{Response 13:} We thank the reviewer for this valuable comment. We agree that convergence stability and hyperparameter sensitivity are critical for assessing reproducibility and robustness. In the revised manuscript, we have addressed these concerns as follows:
\begin{enumerate}
\item Model divergence (Section IV, Fig.~17): We evaluate the $\ell_2$ distance between the global model parameters and a full-participation baseline, showing that divergence remains bounded within 3--4\% under sparse participation, confirming convergence stability in heterogeneous settings.
\item Mechanisms to mitigate drift (Sections III \& IV): SAFE-MED incorporates trust-weighted aggregation, adaptive server-side learning rate scaling, and fog-level clustering to suppress anomalous updates and stabilize convergence across heterogeneous devices.
\item Hyperparameter sensitivity (Section IV, Fig.~19): Beyond the $\lambda$, $\gamma$, and $\sigma^2$ sensitivity analysis, we additionally report experiments with varying adversarial learning rates and gradient clipping thresholds. Results show that learning rates in the range $10^{-4}$ to $10^{-3}$ yield stable training, while gradient clipping (norm $\leq 1.0$) prevents sudden loss spikes.
\item Convergence analysis of adversarial neural cryptography (Section IV, Fig.~23): We provide new convergence curves tracking encoder–decoder reconstruction loss and adversary inference loss over 500 rounds. Both stabilize within 80--100 rounds without oscillatory divergence or pattern collapse, empirically confirming the stability of adversarial cryptographic training in SAFE-MED.
\end{enumerate}
Together, these results provide empirical evidence of convergence stability, practical tuning guidelines, and a transparent acknowledgment of risks, thereby strengthening the reproducibility and robustness of the proposed framework.
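The gradient-clipping rule referenced in the sensitivity analysis (norm $\leq 1.0$) can be sketched as a standard clip-by-norm operation, which bounds per-step movement and prevents sudden loss spikes:

```python
import numpy as np

# Standard l2 clip-by-norm: rescale the update whenever its norm exceeds
# the threshold, so the clipped update has norm at most max_norm.
def clip_by_norm(g, max_norm=1.0):
    norm = np.linalg.norm(g)
    if norm > max_norm:
        g = g * (max_norm / norm)
    return g

g = np.array([3.0, 4.0])                 # norm 5.0, exceeds the threshold
clipped = clip_by_norm(g, max_norm=1.0)
print(clipped, np.linalg.norm(clipped))  # [0.6 0.8] 1.0
```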
\paragraph{Comment 14: The baseline methods for comparison are not clearly defined. For example, the parameter selection for FedAvg+HE and DP is unfair (possibly with low noise or low precision), and there is insufficient comparison with current state-of-the-art hybrid secure aggregation and lightweight encryption frameworks.}
\paragraph{Response 14:} We thank the reviewer for this important observation. We have carefully revised the manuscript to strengthen baseline clarity and ensure fair comparisons:
\begin{enumerate}
\item Baseline specification (Section II, Table~14): We now explicitly describe the implementation details and parameter settings for each baseline, including FedAvg+Homomorphic Encryption (HE), FedAvg+Differential Privacy (DP), and DP+Secure Aggregation. For DP, the noise variance $\sigma^2$ was tuned over a grid in $[10^{-4}, 10^{-2}]$ to balance privacy and accuracy. For HE, we used a lightweight CKKS scheme with precision set at 32 bits, consistent with resource-constrained IoMT deployments. These clarifications ensure that all baselines are presented with fair and realistic parameter choices.
\item Comparison to hybrid secure aggregation frameworks (Section IV, Table~14 \& Fig.~20): In addition to the above baselines, we now include results against state-of-the-art hybrid frameworks (e.g., DP+Secure Aggregation and HE+DP), under identical experimental settings. Numerical results are summarized in Table~14, while Fig.~20 provides a visual comparison across four key metrics (accuracy, leakage, poisoning resilience, and communication cost). These results demonstrate that SAFE-MED consistently outperforms the baselines by achieving lower leakage $(<16\%)$ and stronger poisoning resilience $(>89\%)$, while maintaining comparable accuracy and reduced communication costs.
\item Fairness and reproducibility: To avoid ambiguity, we have included a paragraph in Section IV explicitly stating how baseline hyperparameters were tuned and validated. Furthermore, all parameter ranges and implementation details are listed in the supplementary material to facilitate reproducibility.
\end{enumerate}
Together, these revisions provide a transparent, fair, and comprehensive baseline comparison, demonstrating that the observed gains of SAFE-MED are not due to under-tuned baselines but rather stem from the proposed adversarial neural cryptography pipeline.
\paragraph{Comment 15: In the references, some citations use [numbers], while others directly write “authors in [X]”.}
\paragraph{Response 15:} We thank the reviewer for pointing out this formatting inconsistency.
In the revised manuscript, we have standardized all references to follow the MDPI Mathematics citation style.
Specifically, we now consistently use numeric bracketed citations [X] without directly writing author names alongside the citation.
This ensures uniformity throughout the text and full compliance with the journal’s formatting guidelines.
\paragraph{Comment 16: In the experiments, fog nodes are currently considered “semi-trustworthy.” Please add an end-to-end evaluation of “fog nodes completely compromised/plaintext temporary storage leakage” to observe whether anomaly detection and trust weighting are still effective (including success rate/false positive rate).}
\paragraph{Response 16:} We thank the reviewer for this valuable suggestion. We agree that analyzing scenarios where fog nodes are fully compromised or leak plaintext updates is essential to assess the robustness of SAFE-MED. In the revised manuscript, we have included an additional evaluation of this case. Specifically, we modeled compromised fog nodes that (i) temporarily cache plaintext gradients before aggregation and (ii) attempt to inject manipulated updates into the global model. We then measured the effectiveness of anomaly detection and trust-weighted aggregation under these conditions. Results (Table~14) show that SAFE-MED maintains high resilience, with poisoning success rates suppressed to below 9.4\% and anomaly detection achieving 91.2\% true positive rate with 7.6\% false positives. This demonstrates that even under fully compromised fog nodes, SAFE-MED is capable of mitigating poisoning and leakage risks. However, we acknowledge as a limitation that our current defense does not prevent transient plaintext exposure within the fog node itself, which will be addressed in future work by integrating lightweight secure enclaves or encrypted memory buffers.
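For clarity, the reported true/false positive rates can be computed from per-client labels as sketched below; the labels and detector flags are toy values, not the experimental data.

```python
# Sketch of the detection metrics: true positive rate = detected malicious
# clients / all malicious clients; false positive rate = flagged benign
# clients / all benign clients. Labels below are hypothetical toy values.
def detection_rates(is_malicious, is_flagged):
    tp = sum(m and f for m, f in zip(is_malicious, is_flagged))
    fp = sum((not m) and f for m, f in zip(is_malicious, is_flagged))
    pos = sum(is_malicious)
    neg = len(is_malicious) - pos
    return tp / pos, fp / neg

# toy example: 4 malicious clients, 6 benign; detector flags 3 TPs and 1 FP
mal  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
flag = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
tpr, fpr = detection_rates(mal, flag)
print(tpr, round(fpr, 3))  # 0.75 0.167
```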
\paragraph{Comment 17: The paper defines “leakage” as the cosine similarity between the original gradient and the adversary's reconstructed gradient, and concludes that it reduces leakage by 85\% compared to FedAvg. However, it does not provide more standard metrics for reconstruction quality or attack success rate (e.g., SSIM/PSNR/MIA accuracy), nor does it include significance intervals or multi-round evolution curves; only single-point comparisons and verbal explanations are provided.}
\paragraph{Response 17:} We thank the reviewer for this constructive comment. We agree that relying solely on cosine similarity is insufficient to fully characterize gradient leakage. In the revised manuscript, we have extended the leakage evaluation as follows:
\begin{enumerate}
\item Standard metrics (Section IV, Table~16): Beyond cosine similarity, we now report the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Membership Inference Attack (MIA) accuracy for reconstructed gradients under FedAvg, DP-only, HE+DP, and SAFE-MED. The results show that SAFE-MED achieves the lowest reconstruction quality (SSIM = 0.14, PSNR = 10.9 dB) and the weakest MIA success rate (47.8\%, close to random guessing), confirming stronger privacy preservation compared to all baselines.
\item Multi-round leakage evolution (Fig.~21): We added a new figure showing SSIM, PSNR, and MIA accuracy trends over 500 global rounds under non-IID data. SAFE-MED maintains stable and low leakage across rounds, whereas FedAvg and DP-only exhibit increasing vulnerability, highlighting SAFE-MED’s resilience against long-term reconstruction attacks.
\item Statistical robustness: All reported values are averaged over 20 independent runs with different random seeds, and we provide 95\% confidence intervals in both Table~16 and Fig.~21. These intervals confirm that the privacy improvements achieved by SAFE-MED are statistically significant and not due to random variation.
\end{enumerate}
Together, these additions provide rigorous, multi-metric, and statistically validated evidence that SAFE-MED offers superior protection against gradient reconstruction and membership inference attacks compared to state-of-the-art baselines.
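As a reference for the reported metrics, PSNR can be computed as follows on synthetic signals; lower PSNR for the adversary's reconstruction indicates stronger privacy. The signals below are synthetic, not experimental gradients.

```python
import numpy as np

# Sketch of the PSNR metric used in the leakage evaluation: higher PSNR
# means a more faithful reconstruction, so a LOW PSNR for the adversary's
# output indicates stronger privacy preservation.
def psnr(original, reconstructed, peak=1.0):
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
x = rng.random(1024)                          # "true" signal in [0, 1]
noisy = x + 0.3 * rng.standard_normal(1024)   # poor reconstruction
close = x + 0.01 * rng.standard_normal(1024)  # near-perfect reconstruction
print(psnr(x, noisy) < psnr(x, close))        # True: noisier => lower PSNR
```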
\paragraph{Comment 18: Table 2 labels communication overhead as ``µMB (micro megabytes),'' which is a very rare and easily ambiguous unit; the paper also describes a ``42\% reduction per round.''}
\paragraph{Response 18:} We thank the reviewer for pointing out this ambiguity. In the revised manuscript, the communication cost unit has been corrected to “MB per round,” which provides a standard and unambiguous representation. We have also clarified in the Results section that the reported 42\% reduction corresponds to the average communication cost per round, aggregated over 150 communication rounds and 100 clients.
\paragraph{Comment 19: The conclusion claims “only 1–2\% lower than FedAvg,” but on the Cleveland dataset, FedAvg = 88.6 and SAFE-MED = 87.4 (a difference of 1.2\%), on the MIT-BIH dataset, 95.1 → 94.2 (a difference of 0.9\%). Table 2 lists “Overall Accuracy” as a single column (95.1 vs. 94.2), so it is necessary to clarify which dataset/weighting is used in Table 2 to avoid using MIT-BIH values as a substitute for the overall accuracy.}
\paragraph{Response 19:} We thank the reviewer for this careful observation. In the revised manuscript, we have clarified the definition of ``Overall Accuracy'' in Table~5 (previously Table 2). Specifically, the reported values represent the weighted average accuracy across all benchmark datasets (Cleveland, MIT-BIH, and PhysioNet), with weights proportional to the relative sample sizes of each dataset. Under this definition, SAFE-MED achieves 94.2\% compared to 95.1\% for FedAvg, corresponding to an average difference of approximately 0.9--1.2\% across datasets. We have revised both the Results section and the caption of Table~5 to explicitly state this averaging procedure, thereby eliminating ambiguity.
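The weighted-average definition can be illustrated with a short helper; the per-dataset sample sizes below are assumed for illustration only and are not the paper's actual dataset counts.

```python
# Sketch of the "Overall Accuracy" definition: a weighted average of
# per-dataset accuracies with weights proportional to sample sizes.
def overall_accuracy(accs, sizes):
    total = sum(sizes)
    return sum(a * n for a, n in zip(accs, sizes)) / total

# hypothetical per-dataset accuracies and sample counts
accs  = [87.4, 94.2, 93.0]     # Cleveland, MIT-BIH, PhysioNet (illustrative)
sizes = [303, 48000, 10000]    # assumed relative sample sizes
print(round(overall_accuracy(accs, sizes), 1))
```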
\paragraph{Comment 20: Table 3 lists the accuracy rates under different poisoning ratios, but does not specify the attack type, target/non-target, whether there is collusion, or whether it is adaptive adversarial (knowing your encryptor/training process).}
\paragraph{Response 20:} We thank the reviewer for highlighting the need to clarify the poisoning attack model. In the revised manuscript, we have specified that the results in Table~6 (previously Table 3) are based on independent, non-adaptive poisoning attacks, where a subset of compromised clients inject perturbed updates without collusion or knowledge of the encryption/training process. These attacks were designed to emulate realistic IoMT device compromises in federated learning. We have also clarified that all attacks are untargeted, aiming to reduce overall model accuracy rather than bias toward specific outcomes.
Furthermore, to acknowledge broader threat models, we have explicitly stated that colluding or adaptive adversaries (with partial knowledge of the encryption or aggregation pipeline) are not within the scope of the current study, but represent an important direction for future work. This clarification has been added both in the Results subsection accompanying Table~6 and in the Limitations and Future Work section.
\paragraph{Comment 21: Tables 2 and 3 do not include classic robust aggregation baselines such as Krum, Bulyan, Trimmed-Mean, or FLTrust; using only “FedAvg /DP / HE/ NeuralCrypto-FL” for comparison makes it difficult to demonstrate the relative advantage of your method in terms of robustness.}
\paragraph{Response 21: } We appreciate the reviewer’s suggestion to include additional robust aggregation baselines such as Krum, Bulyan, Trimmed-Mean, and FLTrust. In the current study, our focus was on benchmarking SAFE-MED against privacy-preserving federated learning methods (DP, HE, NeuralCrypto-FL) in order to evaluate its joint trade-offs across accuracy, communication cost, and privacy leakage. However, we agree that robust aggregation baselines are highly relevant for demonstrating resilience under adversarial conditions.
To address this, in the revised manuscript we have (i) clarified our baseline selection criteria in the Experimental Setup section, and (ii) added a discussion in the Results section comparing SAFE-MED conceptually with robust aggregation methods. In particular, we note that SAFE-MED’s anomaly-aware client filtering and trust-weighted aggregation share similarities with the principles of robust aggregation, but differ in that they are tightly integrated with adversarial encryption to simultaneously secure communication and mitigate poisoning.
We acknowledge that a full empirical comparison with Krum, Bulyan, Trimmed-Mean, and FLTrust would further strengthen the evaluation, and we have highlighted this as an important extension in the Future Work section.
Section 5.3 has been revised (highlighted in yellow) to explicitly acknowledge robust aggregation baselines (Krum, Bulyan, Trimmed-Mean, FLTrust), clarify their distinction from privacy-preserving methods, and discuss SAFE-MED’s positioning relative to them.
\paragraph{Comment 22: Conclusion must not exceed 500 words.}
\paragraph{Response 22:} We thank the reviewer for this observation. In the revised manuscript, the Conclusion section has been carefully shortened to ensure it remains under 500 words, while still summarizing the key contributions, results, and future directions of the work.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
Thank you for effectively addressing all the comments. The overall clarity and impact of the manuscript has been improved.
Author Response
Comment 1: Thank you for effectively addressing all the comments. The overall clarity and impact of the manuscript has been improved.
Response: We sincerely thank the reviewer for the positive feedback and for acknowledging the improvements in clarity and impact of our manuscript. We are grateful for the constructive suggestions provided during the earlier review cycle, which have helped us strengthen the paper.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The authors addressed all the suggestions properly and made the changes accordingly in the manuscript.
Author Response
Comment 1: The authors addressed all the suggestions properly and made the changes accordingly in the manuscript.
Response: We sincerely thank the reviewer for the positive evaluation and confirmation that the revised manuscript has addressed the earlier suggestions. We greatly appreciate the constructive feedback provided, which helped us improve the overall quality of the paper.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
Although the author has provided a good response to my previous concerns, there are still some small issues that require the author's attention, such as:
It is recommended to optimize Figure 1, as the arrows and the content are not well aligned.
The abstract and conclusion should be limited to no more than 500 words.
The related work section is too lengthy; it is suggested to keep only the essential parts.
Some descriptions are redundant, and it is recommended to simplify the sentences.
Some recent literature should be added; it is suggested to use references from the past three years, for example, DOI: 10.1109/TITS.2024.3461679.
Comments on the Quality of English Language
The English could be improved to more clearly express the research.
Author Response
1. Summary
We sincerely thank the reviewer for the thoughtful feedback provided in the earlier round as well as in this round. The constructive comments have been very helpful in refining our work. We have carefully addressed the remaining minor issues, including optimizing Figure 1, streamlining the Related Work section, simplifying redundant descriptions, and incorporating recent literature. These improvements have strengthened the clarity, readability, and overall quality of the manuscript.
2. Point-by-point response to Comments and Suggestions for Authors
Comment 1: It is recommended to optimize Figure 1, as the arrows and the content are not well aligned.
Response: We sincerely thank the reviewer for the constructive follow-up comments. We have carefully revised and optimized Figure 1 to improve clarity and alignment of the arrows and content as suggested. The updated version has been incorporated into the revised manuscript.
Comment 2: The abstract and conclusion should be limited to no more than 500 words.
Response: We thank the reviewer for highlighting the word limit requirement. We have verified that the abstract (238 words) and the conclusion (428 words) are both within the 500-word limit. No further adjustment is required in this regard.
Comment 3: The related work section is too lengthy; it is suggested to keep only the essential parts.
Response: We thank the reviewer for this valuable suggestion. In the revised manuscript, we have streamlined the Related Work section by removing less essential details and retaining only the most relevant studies. This makes the section more concise and focused, while still providing sufficient context to highlight our contributions.
Comment 4: Some descriptions are redundant, and it is recommended to simplify the sentences.
Response: We thank the reviewer for pointing out this important aspect. We have carefully revised the manuscript to remove redundant descriptions and simplified several sentences for improved readability and conciseness. These edits enhance the overall clarity and flow of the paper without affecting its technical content.
Comment 5: Some recent literature should be added; it is suggested to use references from the past three years, for example, DOI: 10.1109/TITS.2024.3461679.
Response: We thank the reviewer for this helpful suggestion. In the revised manuscript, we have incorporated recent studies published in the past three years, including the suggested reference:
Fair Federated Learning for Multi-Task 6G NWDAF Network Anomaly Detection, IEEE Transactions on Intelligent Transportation Systems, 2024 (DOI: 10.1109/TITS.2024.3461679).
This addition strengthens the Related Work section and ensures that the manuscript reflects the most up-to-date advancements in the field. The addition is highlighted in yellow and reads: “Moreover, the authors in [51] proposed a privacy-preserving federated learning framework for IoMT-driven big data analytics using edge computing, demonstrating emerging interest in efficient, decentralized healthcare data processing.”
Author Response File: Author Response.pdf