Peer-Review Record

Explainable AI for Federated Learning-Based Intrusion Detection Systems in Connected Vehicles

by Ramin Taheri 1, Raheleh Jafari 2,*, Alexander Gegov 1,3, Farzad Arabikhan 1 and Alexandar Ichtev 3
Reviewer 1:
Reviewer 2: Anonymous
Electronics 2025, 14(22), 4508; https://doi.org/10.3390/electronics14224508
Submission received: 25 October 2025 / Revised: 12 November 2025 / Accepted: 13 November 2025 / Published: 18 November 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The overall methodology of the paper is well defined and requires only minimal improvement. The following suggestions need to be addressed in the manuscript:

  • Declare each acronym only once, then use the acronym without its full form throughout the paper. Ensure consistency in the capitalization used when introducing acronyms; in the abstract, for example, acronyms are declared in two styles: “Internet of Vehicles (IoV)” and “intrusion detection system (IDS)”.
  • In the abstract, “AI” is used without its full form. The term is common, but I suggest providing the full form before using the acronym, for the reader’s benefit.
  • In the introduction section, the statement in the first paragraph, “Connected vehicles (CVs) rely on Internet of Things (IoT) technologies and Vehicle-to-Everything (V2X) communications to exchange data, enhance driving coordination, and improve road safety.”, must be supported by a citation.
  • Avoid repeating citations in the paper; proofread it again to catch minor issues.
  • The introduction section needs another proofreading pass: some acronyms are used without being declared, and the contributions and organization of the paper are combined into a single section (“Contribution and Organization”).
  • Add a summary table of related work; it gives the reader more clarity about which technique is applied to which dataset and what outcomes each approach achieves.
  • “Vehicle-to-Infrastructure (V2I)” and IoT are discussed in the following paper, which can be cited as an example reference for these two terms: https://doi.org/10.1155/2022/8259927.
  • Each equation must be referenced in the text (e.g., Equation 1) and then explained.
  • The evaluation metric formulas must be included in the Evaluation Metrics section (the standard definitions are sketched after this list for reference).
  • I suggest merging “Results and Discussion” into one section and describing every figure and table in the text before or after it appears.
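
For reference, the standard confusion-matrix definitions of the metrics an IDS evaluation typically reports are sketched below; these are the usual formulas only, and the manuscript should state whichever definitions it actually uses:

    Accuracy  = (TP + TN) / (TP + TN + FP + FN)
    Precision = TP / (TP + FP)
    Recall    = TP / (TP + FN)
    F1-score  = 2 · Precision · Recall / (Precision + Recall)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.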

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript presents an intrusion detection framework—XAI-FL-IDS—that integrates Explainable AI (XAI) mechanisms into a Federated Learning (FL) environment tailored for connected and autonomous vehicles. The system employs a deep neural network model on both CAN bus and IP-based communication data to detect intrusions. To enhance interpretability, the authors incorporate SHAP and LIME explainability techniques at the local client level and aggregate them at the server to build a global explanation map. The proposed system is evaluated and demonstrates competitive accuracy and robustness when compared with centralized and non-explainable FL baselines. The system aims to address challenges of model interpretability, privacy preservation, and robustness to poisoned updates in FL settings.

 

Strengths

The paper tackles the timely intersection of FL, XAI, and vehicular cybersecurity, offering a comprehensive end-to-end framework. Integrating XAI directly into the FL training loop rather than using it post hoc is an insightful design decision that allows for explainability-aware model updates. The use of SHAP and LIME across multiple levels (local, instance, and global) enhances the transparency and interpretability of the model. The experimental design is thorough, involving two realistic datasets with both in-vehicle and external attack vectors, which improves the generalizability of the conclusions. Additionally, the use of robust aggregation techniques like Krum to mitigate poisoning aligns well with real-world constraints in federated vehicular environments. The modular breakdown of the method, especially the inclusion of an attention mechanism, is another well-executed contribution.
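
For readers unfamiliar with Krum, the selection rule referred to above can be sketched in a few lines. The following is a minimal illustration of the standard formulation (flattened client updates, f tolerated Byzantine clients), not the authors’ implementation:

    import numpy as np

    def krum(updates, f):
        # updates: list of n flattened model-update vectors;
        # f: number of Byzantine clients to tolerate (requires n >= 2f + 3).
        n = len(updates)
        assert n >= 2 * f + 3, "Krum requires n >= 2f + 3"
        # Pairwise squared Euclidean distances between client updates.
        dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
        scores = []
        for i in range(n):
            # Sum of distances from update i to its n - f - 2 nearest peers.
            others = np.sort(np.delete(dists[i], i))
            scores.append(others[: n - f - 2].sum())
        # Select the update closest to its nearest neighbours.
        return updates[int(np.argmin(scores))]

    # Example: six honest clients plus one poisoned update, f = 1.
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=10) for _ in range(6)]
    poisoned = rng.normal(5.0, 0.1, size=10)
    selected = krum(honest + [poisoned], f=1)  # returns an honest update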

 

Weaknesses

 

The key contribution on “Federated Explainability Aggregation” lacks implementation detail and empirical validation. While the paper introduces a mechanism to aggregate SHAP and LIME explanations across clients, the method is described abstractly, without algorithmic or mathematical formalism. It is also unclear how conflicting or non-aligned feature-importance scores across clients are handled during aggregation. No specific analysis of the effectiveness of the aggregated explanations is provided, such as fidelity or user-interpretability assessments.
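
To make this point concrete, the kind of formalism expected here might look like the following hypothetical rule (my illustration, not the authors’ method): each client submits a normalized vector of mean absolute SHAP values per feature, and the server aggregates them with a coordinate-wise median, which also offers one simple answer to the conflicting-scores question, since outlying clients cannot dominate any feature’s global importance.

    import numpy as np

    def aggregate_explanations(client_importances):
        # client_importances: (n_clients, n_features) array; row k holds
        # client k's mean |SHAP| value per feature.
        imp = np.asarray(client_importances, dtype=float)
        # Normalize each client's vector so that different output scales
        # are comparable across clients.
        imp = imp / imp.sum(axis=1, keepdims=True)
        # Coordinate-wise median: a few conflicting or malicious clients
        # cannot dominate any single feature's global score.
        global_imp = np.median(imp, axis=0)
        return global_imp / global_imp.sum()

    # Four clients, five features; the last client disagrees sharply.
    clients = np.array([
        [0.5, 0.2, 0.1, 0.1,  0.1],
        [0.4, 0.3, 0.1, 0.1,  0.1],
        [0.5, 0.2, 0.2, 0.05, 0.05],
        [0.0, 0.0, 0.0, 0.0,  1.0],   # outlier explanation
    ])
    print(aggregate_explanations(clients))  # feature 0 still ranks first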

 

The SHAP and LIME methods are described as co-existing, but their specific roles and computational trade-offs are unclear. Although both techniques are mentioned as being applied at the client level, the paper does not justify why both are needed or whether one is more suitable than the other in FL contexts. Given that LIME is optional, a comparative evaluation of the computational cost or interpretability effectiveness between the two would be informative.
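
As a concrete starting point, such a comparison could be as simple as timing both explainers on the same instance; the harness below is purely illustrative (the model, data, and parameter choices are placeholders, with shap.KernelExplainer and lime.lime_tabular.LimeTabularExplainer as the standard model-agnostic entry points of the two packages):

    import time
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Illustrative stand-in for an IDS classifier and CAN/IP feature data.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    instance = X[0]

    # SHAP: model-agnostic KernelExplainer on a small background sample.
    t0 = time.perf_counter()
    shap_explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
    shap_values = shap_explainer.shap_values(instance)
    t_shap = time.perf_counter() - t0

    # LIME: local surrogate fitted around the same instance.
    t0 = time.perf_counter()
    lime_explainer = LimeTabularExplainer(X, mode="classification")
    lime_exp = lime_explainer.explain_instance(instance, model.predict_proba,
                                               num_features=10)
    t_lime = time.perf_counter() - t0

    print(f"SHAP: {t_shap:.2f}s  LIME: {t_lime:.2f}s")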

 

The security of the explanation aggregation process is not substantiated. The authors acknowledge the vulnerability of XAI to adversarial manipulation, yet no defense mechanism is introduced to detect or mitigate maliciously crafted explanations. Given that explanations are fed into the global model refinement process, this could represent a serious security gap.

 

Missing discussion of explainability methods. Another class of explainability methodology is post-hoc methods, as shown by “Trustworthy graph neural networks: Aspects, methods, and trends” and “A survey of trustworthy graph learning: Reliability, explainability, and privacy protection”. The authors are encouraged to discuss the potential of these methods in this paper and the limitations of this work.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
