
Review Reports

Computation 2026, 14(1), 2; https://doi.org/10.3390/computation14010002 (registering DOI)
by
  • Diego Armando Pérez-Rosero 1,*,
  • Santiago Pineda-Quintero 1 and
  • Juan Carlos Álvarez-Barreto 2
  • et al.

Reviewer 1: Anonymous
Reviewer 2: Sen Li
Reviewer 3: Piotr Miller

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors
  1. The framework is trained and tested exclusively on data from CHEC, a single Colombian utility. The model’s performance in networks with different topologies, regulatory frameworks, or climatic conditions remains unverified. This limitation could be noted as a future research direction.
  2. The performance of both the predictive (TabNet) and reasoning (Agentic RAG) modules is highly dependent on the quality and completeness of input data (e.g., outage logs, meteorological data, asset metadata). Missing or inaccurate data could bias predictions and undermine regulatory recommendations. Please address these potential challenges.

  3. A single hyperparameter configuration was used across all settings to ensure reproducibility and comparability. While practical, this may limit localized optimization.
  4. The RAG subsystem is evaluated using BERTScore and expert validation, but there is no quantitative measure of regulatory accuracy or compliance.

  5. While TabNet provides local and global interpretability, the paper does not compare its explanations against other XAI methods such as SHAP and LIME to validate the quality or reliability of the attention-based attributions.

Comments on the Quality of English Language

Please proofread the paper.

Author Response

See attached pdf

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

This manuscript introduces CRITAIR, a three-part approach for medium-voltage distribution reliability that couples interpretable prediction with regulation-aware reasoning and auditable artifacts. Using CHEC operational data enriched with weather, lightning, and vegetation signals, the authors report performance on par with strong tabular baselines and provide qualitative evidence that the RAG layer retrieves and cites relevant clauses. The topic is timely for utilities that increasingly need both prediction and policy grounding, and the overall framing is compelling. That said, the paper could be even stronger with a bit more clarity on data handling, evaluation design, and metrics for the regulation module. Accordingly, I have the following suggestions and questions for the authors.

  1. Introduction & related work (structure and synthesis): The research background and motivation are thorough, but the narrative is largely descriptive. If possible, please add one concept map figure and one comparison table to summarize prior approaches and highlight their features/limitations. This would make the gap and contributions easier to grasp at a glance.
  2. Data & splits (Section 3): Defining SAIFI and assembling a component-level dataset with 24-hour antecedent meteorology, lightning, and vegetation is a clear strength. Additionally, because the goal is to predict grid interruptions, would any non-weather factors (e.g., maintenance schedules, asset age, switching operations) materially influence outcomes? And if so, are they available or proxied? Meanwhile, since both weather and asset states drift, a time-aware split (train on earlier years, test on later periods), alongside the current random split, would reassure readers that performance holds under drift. A small sensitivity table/figure would suffice.
  3. Method stack & system view (Section 3; Figures 9–10): The reviews of regression/GBMs/TabNet and of LLM/Agentic RAG are helpful. For Figure 9 (criticality UI), is this a local application or a web interface? And in Figure 10 (architecture), which environments (PC, on-prem server, or cloud) do you target in practice? If available, it would be helpful to note additional RAG capabilities beyond regulation lookup—e.g., diagnostics or fault-pattern triage using MV-L2 data plus the document corpus.
  4. Reproducibility of the regulation agent (Section 4): The implementation details and repository pointer are appreciated. If possible, an appendix with the exact prompt template, retrieval parameters (chunk size/overlap), and any guardrails would make replication straightforward across proprietary and local models. For local models, listing typical latency and hardware next to token/context limits could guide on-prem deployment.
  5. Predictive tuning & presentation (Section 5; Figures 17–18): For reliability-indicator estimation, a short table of the main hyperparameters tuned for each model (and their final values/ranges) would be useful. In Figure 17, if available, please showcase a couple of side-tab features (e.g., Q&A, graphs/statistics) to convey end-user utility. For the interpretable reasoning graph in Figure 18, enlarging or splitting subfigures would improve readability (node/edge labels, evidence anchors).

Author Response

See attached pdf

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The manuscript presents CRITAIR, an integrated and interpretable analytical framework designed for reliability-oriented and regulation-aware decision support in medium-voltage (MV) distribution networks. The framework combines (i) TabNet-based predictive modeling with feature attribution, (ii) an Agentic RAG layer enabling multi-step regulatory reasoning based on RETIE, NTC 2050, and internal documents, and (iii) interpretable reasoning graphs providing end-to-end auditability.

The results demonstrate competitive predictive accuracy and robust regulatory-grounded recommendations.

Detailed Comments for the Authors

The article was very well prepared; I found only a few places with punctuation errors.

Line 430: “…determines which actions to execute…” → “…determines, which actions to execute…”

Line 558: “…and F1-which are calculated from…” → “…and F1, which are calculated from…”

Lines 144–145: “…regulated environments where interpretability…” → “…regulated environments, where interpretability…”

Author Response

See attached pdf

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Thank you for revising the manuscript in line with my earlier suggestions. In general, the revised paper reads clearly, presents strong technical detail, and is supported by excellent figures and tables. As a final step, it would be helpful to proofread for any existing grammatical or formatting inconsistencies before submission.

Author Response

See attached pdf

Author Response File: Author Response.pdf