
Review Reports

Energies 2025, 18(24), 6395; https://doi.org/10.3390/en18246395
by
  • Duncan Kibet,
  • Min Seop So* and
  • Jong-Ho Shin*

Reviewer 1: Anonymous Reviewer 2: Anonymous Reviewer 3: Hassan M. Farh

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This study presents a Transformer-based solar power forecasting model that uses only two key features yet achieves accuracy comparable to traditional multivariate models. The effectiveness of minimal feature selection in reducing complexity while maintaining predictive performance is highlighted. This paper is interesting and contains novelty in this field. The reviewer recommends a revision of the submission. Some comments:

  1. While solar irradiance and temperature are key, the exclusion of other potentially influential variables like cloud cover, humidity, or wind speed, without a more rigorous ablation study for all possible minimal sets, is a limitation.
  2. The dataset spans only three years (2019–2021). A longer time series would be necessary to capture long-term climate patterns and validate model robustness against decadal weather variations. For enriching the readership, the role of thermal storage in solar energy should be commented on, as in the recently published paper Applied Energy 390 (2025) 125817. Solar energy is intermittent and must be integrated with thermal storage for thermal energy applications.
  3. For the results discussions, it does not specifically discuss the model's performance during rare but impactful events like storms, heavy pollution days, or atypical cloud cover, where additional features might be critical.
  4. Could authors add sensitivity analysis to the choice of window size? Currently, the use of a simple sliding window of 5 hours may not optimally capture relevant temporal dynamics.
  5. Although the Transformer's self-attention mechanism is mentioned as a benefit for interpretability, authors should provide a concrete analysis of what specific temporal patterns the model learned. This is quite important to the mechanism understanding.
  6. The model is compared against other deep learning models but lacks comparison with simpler, well-established statistical benchmarks for time series (e.g., ARIMA) on the same minimal feature set, making the performance gain less contextualized.
  7. The 70/30 split for training and validation is mentioned, but it's unclear if this is a simple random split or a time-series-aware split. A random split can lead to data leakage and overoptimistic performance in time-series forecasting. Could authors add some explanations?

Author Response

Response to Reviewer 1 Comments

1. Summary

Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions highlighted in the re-submitted files.

2. Point-by-Point Response

Comment 1: “While solar irradiance and temperature are key, the exclusion of other influential variables without a rigorous ablation study is a limitation.”

Response: Thank you for the comment. The two input features were selected using a SHAP feature importance analysis, which showed that solar irradiance and temperature consistently had the largest marginal impact on model predictions. Other variables such as cloud cover, humidity, and wind speed exhibited low and inconsistent SHAP contributions and did not improve short-term forecasting accuracy in a meaningful way. Therefore, the minimal two-feature set was chosen based on data-driven evidence rather than arbitrary exclusion.

Revised:
The full ablation discussion is provided in Section 3.2, lines 601–612.

Comment 2: “The dataset covers only three years. A longer dataset is needed to capture long-term climate patterns. Also discuss thermal storage (Applied Energy 390 (2025) 125817).”

Response: Thank you for the important suggestion. The manuscript now clearly acknowledges that a three-year dataset does not capture long-term climatic cycles, and additional explanation has been added to describe how multi-year weather patterns could influence generalization. A short discussion on the role of thermal energy storage in mitigating the intermittency of solar power has also been included, with reference to the suggested Applied Energy article.

Revised:

An acknowledgement that a three-year dataset does not capture long-term climatic cycles and may require more years of data has been added in lines 952–959.
A short discussion on the role of thermal energy storage, referencing the Applied Energy article, has been added in lines 959–963.

 Comment 3: “Model performance during rare events (storms, pollution, atypical cloud cover) is not discussed.”

Response: Thank you for the helpful comment. Additional explanation has been added to the Results and Discussion section to clarify that the model may have reduced performance during rare or extreme conditions such as storms, heavy pollution episodes, and atypical cloud-cover patterns. These events introduce sudden fluctuations in irradiance that cannot be fully represented using only irradiance and temperature as inputs. This limitation has now been acknowledged, and a short paragraph has been included to describe why such conditions may require additional meteorological variables for improved performance.

Revised:
A sentence has been added in Section 4.3.1 (lines 870–878).

Comment 4: “Add sensitivity analysis for window size.”

Response: Thank you for the valuable suggestion. A window-size sensitivity analysis has now been conducted to examine how different temporal contexts affect forecasting performance. Sliding windows of 3, 5, 8, and 12 hours were considered. The 5-hour window achieved lower errors than the 3-hour window (3-hour: MAE = 1.3734, RMSE = 1.9682; 5-hour: MAE = 1.2260, RMSE = 1.7693), indicating that extending the look-back period helps capture short-term fluctuations more effectively. Window sizes of 8 and 12 hours were not evaluated, as longer look-back periods require a larger dataset to generate enough supervised training sequences. These results confirm that the 5-hour window provides a practical balance between temporal context and data availability.

 

Revised:
The description and analysis have been added in lines 508–516.
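The windowing trade-off described in this response can be sketched as follows. This is an illustrative sketch under assumptions (the toy series and function names are not the manuscript's code); it shows why longer look-back windows leave fewer supervised training sequences, which is the reason the 8- and 12-hour windows were impractical:

```python
import numpy as np

def make_windows(series, window):
    """Build supervised (X, y) pairs from a 1-D series with a sliding look-back window."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Toy hourly series (an assumption, not the manuscript's data): each extra
# hour of look-back removes one usable training sequence.
series = np.arange(100, dtype=float)
for w in (3, 5, 8, 12):
    X, y = make_windows(series, w)
    print(w, X.shape, y.shape)
```

On a small dataset this shrinkage compounds with the train/validation split, which is why the response favors the 5-hour window.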

 

Comment 5: “Provide concrete analysis of what the self-attention mechanism learned.”

Response: Thank you for this valuable suggestion. We have added a new explanatory paragraph at the end of Section 4.3 to describe the specific temporal patterns identified through the self-attention mechanism. The revised text clarifies how the model assigns higher attention to recent rapid changes in solar output such as early-morning increases, late-afternoon decreases, and short-term fluctuations caused by cloud movement, helping illustrate which parts of the input sequence most strongly influence the predictions. This addition enhances interpretability and addresses the reviewer’s concern regarding the model’s internal behavior.

 

Revised: The description has been added in lines 827–834.
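The attention-weight inspection described in this response can be illustrated with a minimal NumPy sketch of scaled dot-product attention, the standard formulation softmax(QK^T / sqrt(d_k)). The toy query/key matrices are assumptions for illustration, not the model's learned weights:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d_k))."""
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k))

# Toy matrices for a 5-step (e.g. 5-hour) input window: row i of W shows how
# strongly position i attends to each past hour, and each row sums to 1.
rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))
K = rng.normal(size=(5, 8))
W = attention_weights(Q, K)
print(W.shape)  # (5, 5)
```

Inspecting which columns of W receive the largest weights is the kind of analysis the added paragraph describes (e.g. higher attention on recent rapid changes in output).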

Comment 6: “Add comparison with simpler statistical baselines such as ARIMA.”

Response: Thank you for this valuable suggestion. We agree that including classical statistical baselines strengthens the contextual understanding of the model’s performance. In response, we conducted additional experiments using well-established time-series forecasting models including Linear Regression, Random Forest, XGBoost, LightGBM, KNN, Support Vector Regression (SVR), and a multilayer perceptron (MLP) all trained on the same minimal two-feature input (solar irradiance and temperature). These models represent both classical statistical approaches and widely used machine-learning benchmarks. The results show that the best-performing baseline (SVR) achieved an MAE of 1.2284, while other models produced MAE values between 1.24 and 1.34. In contrast, the proposed Transformer model achieved an MAE of 1.1325, outperforming all traditional and machine-learning baselines. This additional comparison confirms that the Transformer not only surpasses deep learning alternatives but also consistently outperforms simpler, well established statistical methods on the same feature set.

 

Revised: The description has been added in lines 879–886.

 

Comment 7: “Clarify if the 70/30 split is random or time-series aware.”

Response: Thank you for the comment. The manuscript has been updated to clarify that the 70/30 division is time-series aware. The revised text explains that the split follows the chronological order of the observations, ensuring that earlier data are used for training and later data are reserved exclusively for validation. This prevents information leakage and reflects a realistic forecasting scenario. The clarification has been added in lines 775–801.
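The chronological split described above can be sketched as follows (an illustrative sketch; variable names and the toy data are assumptions, not the authors' pipeline):

```python
import numpy as np

def chronological_split(X, y, train_frac=0.7):
    """Time-series-aware 70/30 split: earlier rows train, later rows validate.
    No shuffling, so no future information leaks into the training set."""
    cut = int(len(X) * train_frac)
    return X[:cut], y[:cut], X[cut:], y[cut:]

# Toy ordered observations for illustration.
X = np.arange(10).reshape(-1, 1)
y = np.arange(10)
X_tr, y_tr, X_va, y_va = chronological_split(X, y)
print(len(X_tr), len(X_va))  # 7 3
```

A random split would instead interleave future and past rows, which is exactly the leakage the reviewer warned about.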

 

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

Please find the attachment file.

Comments for author File: Comments.pdf

Author Response

Enhanced Solar Power Prediction Using Minimal Feature Selection

Summary: The research proposes a Transformer-based solar power forecasting model that needs only two key inputs: solar irradiance and temperature. These features were chosen using SHAP + XGBoost.

 

The minimal-feature Transformer gets almost the same accuracy (MAE 1.1325 vs. 1.14) as typical multivariate models, even though it uses a lot fewer variables.

 

The research demonstrates that meticulous feature selection can diminish complexity while preserving robust predictive efficacy.

 

But is it possible for a Transformer-based solar power forecasting algorithm to reach multivariate-level accuracy with just two carefully chosen input variables instead of a whole meteorological dataset?

 

More particularly, it looks into whether basic feature selection (solar irradiance + temperature, and prior generation if desired) can match or come close to the performance of traditional multivariate deep learning models.

 

Strengths of the Paper

  1. Clear incentive linked to the demand for renewable energy
  2. Uses a real-world solar dataset from 2019 to 2021
  3. Uses SHAP + XGBoost to choose features, which is a good methodological choice
  4. Shows that two variables can do as well as a full multivariate model
  5. A good comparison with RNN, GRU, and LSTM
  6. Shows that it is more efficient
  7. Gives graphs (loss curves, scatter plots, and predicted vs. actual)
  8. Strong correlation metrics (r = 0.925)
  9. Uses the sliding window method
  10. Easy to repeat with a thorough procedure
  11. Points out how this can help with energy management in real time
  12. A good discussion of the benefits of attention
  13. Clearly shows where future research should go
  14. Uses SHAP for interpretability, which is better than standard feature selection
  15. Shows generalisation (train and validation are in line)
  16. Gives performance benchmarks based on a previous study
  17. Using a small set of features lowers the possibility of overfitting
  18. The flow of the section is well-structured (background → techniques → outcomes)
  19. Each modelling option has a good reason for being chosen
  20. Presents a compelling case for model simplification in solar forecasting

 

To make these points more clear:

Forecasting solar power is a very significant and active area of research. To be able to use it in real time, save money, and make it easier to understand, it is vital to reduce the number of inputs.

 

Originality: The idea of cutting down on features isn't new, but utilising SHAP + XGBoost for feature selection and then testing a Transformer model with the fewest features is. Most current studies either employ numerous meteorological variables or fail to assess the comparative efficacy of limited features against extensive multivariate models. This work specifically addresses a gap: the lack of research into the minimal attributes required by a Transformer to maintain predictive accuracy. Most previous research presumes that "more features equate to enhanced performance" but seldom examines the minimum threshold of requisite input dimensionality. This paper disputes that assumption.

Criticism:

 

  • Tables 3 and 4 don't have the same formatting (columns that aren't lined up, borders that are missing). Because of broken line formatting, certain variables in Table 1 are cut off.

 

Table 6 shows strange multivariate MAE for GRU (3.84) and RNN (4.51). These results are higher than predicted, so they need to be explained.

 

  • Response:
    The formatting issues in Tables 1, 3, and 4 have been corrected by fully aligning all columns, restoring missing borders, and ensuring consistent table structure. The Output data column in Tables 3 and 4 has also been unified so that no values appear shifted or cut off. These adjustments resolve the visual inconsistencies noted by the reviewer.
  • Regarding the higher multivariate MAE values for GRU (3.84) and RNN (4.51): these result from the difficulty those architectures face when handling high-dimensional inputs. With 36 meteorological variables, both models struggle to isolate relevant features and are affected by vanishing-gradient limitations, which reduces their accuracy. The Transformer, however, uses an attention mechanism that more effectively selects important variables such as irradiance and temperature, leading to stronger multivariate performance (MAE 1.14). A brief explanation has been added after Table 6 (lines 555–566).

 

  • Figures 5–7 have useful information, but the captions could be better. The font sizes in the figures are inconsistent, and some of the figures are hard to see since they are low-resolution (particularly the predicted-vs-actual subplots).

 

The text doesn't go into much detail on Figure 3 (SHAP summary graphic).

 

Titles could be more descriptive. For example, "Scatter plot comparing predicted and actual values" should be capitalized. Figures are helpful, but they need to be polished and consistent in how they look.

 

  • Response: The captions for Figures 5–7 have been revised to be more descriptive, and the font sizes and visual styles have been standardized. All figures have been regenerated in higher resolution to improve clarity. Titles have also been adjusted for consistent formatting.
  • Additional explanation has been added for Figure 3 to describe the SHAP summary graphic in more detail (lines 589–600).

 

  • Do an ablation study. To validate the minimal feature technique, authors must evaluate: just irradiance just temperature irradiance and temperature with and without historical power generation. This would make it clear what each variable really adds.
  • Response: An ablation study has been conducted as requested. Four configurations were evaluated: (1) irradiance only, (2) temperature only, (3) irradiance + temperature, and (4) irradiance + temperature + historical power generation. These results have been added to the revised manuscript (Lines 601–612).

 

  • Talk about why you chose a sliding window length of 5 hours. The choice of window size is not justified; a sensitivity analysis is needed.
  • Response: Additional explanation has been added regarding the selection of the 5-hour sliding window. A window-size sensitivity analysis was performed using 3-, 5-, 8-, and 12-hour windows to evaluate how different temporal contexts affect forecasting performance. The 5-hour window achieved better accuracy than the 3-hour window, while longer windows (8 and 12 hours) were excluded due to insufficient data for generating stable training sequences. These details have been incorporated into the revised manuscript (Lines 508-516).
  • Explain why "past power generation (x36)" should be This isn't talked about enough. Is it always available in real-time apps?
  • Response: Additional explanation has been added to clarify the role of the past power generation variable (x₃₆). Solar power exhibits strong temporal continuity, and recent power output is a direct indicator of current system conditions such as irradiance, weather transitions, and panel temperature. Including the previous power value helps the model capture short-term dynamics that are not fully represented by meteorological variables. Past power measurements are standard in real-time PV monitoring systems, making this feature readily available in practical deployments. A more detailed description has been added to the manuscript (Lines 521 - 537).
  • Give information on tuning. There isn't enough information about the Transformer dimensions, heads, embedding size, dropout, and so on.
  • Response: Information about the Transformer hyperparameters and the tuning process has been added. The revised manuscript now specifies the head size, number of attention heads, feed-forward dimension, number of encoder blocks, dropout rates, and MLP structure, along with a brief explanation of how these values were selected based on validation performance (Lines 689-708).

 

  • Make the SHAP feature ranking consistent. The text talks about the temperature of the soil at 10 cm, yet the results just say “temperature.” This inconsistency has to be straightened out.
  • Response: All references to “temperature” in the SHAP discussion have been revised to explicitly state “soil temperature at 10 cm depth”, which matches the SHAP ranking in Figure 3 and the variable naming in the dataset. The description in Section 3.2 has also been updated to clearly emphasize that the second highest-ranking feature is specifically 10 cm soil temperature, not general air temperature.

 

  • Use k-fold or cross-validation by season. The dataset (2019–2021) has substantial seasonal impacts, so it needs to be split correctly to keep data from leaking.

  • Response: Seasonal cross-validation has been added to prevent temporal leakage and to evaluate model generalization across different climatic periods. The dataset was split into Spring, Summer, Fall, and Winter, and the model was tested on each season separately. The updated results, Spring (MAE 1.3166), Summer (MAE 1.3738), Fall (MAE 1.1544), and Winter (MAE 1.2541), demonstrate consistent performance and confirm that the model effectively adapts to seasonal variability. A detailed explanation and discussion have been included in the revised manuscript (lines 835–845).
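A hedged sketch of the seasonal hold-out grouping described in this response; the month-to-season mapping is an assumption, and the manuscript's exact grouping may differ:

```python
# Assumed Northern Hemisphere month-to-season mapping (illustrative only).
SEASONS = {
    "Spring": (3, 4, 5),
    "Summer": (6, 7, 8),
    "Fall": (9, 10, 11),
    "Winter": (12, 1, 2),
}

def season_of(month):
    """Map a calendar month (1-12) to its season name."""
    for name, months in SEASONS.items():
        if month in months:
            return name
    raise ValueError(f"invalid month: {month}")

# Group timestamped records by season so each season can serve as a
# separate held-out evaluation set.
records = [{"month": m} for m in range(1, 13)]
by_season = {}
for r in records:
    by_season.setdefault(season_of(r["month"]), []).append(r)
print({name: len(group) for name, group in by_season.items()})
```

Training on the remaining seasons and evaluating on each held-out group yields the per-season MAE figures quoted in the response.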

Overall:

The paper makes a useful and practical addition by showing that solar forecasting models can work well with very few input features when SHAP and XGBoost are used to choose the right ones. The key strength is the balance between simplicity and performance, and the Transformer model is well-supported.

 

I suggest publishing the paper after minor revision.

Author Response File: Author Response.doc

Reviewer 3 Report

Comments and Suggestions for Authors

The paper addresses an important problem in renewable energy forecasting, proposing a minimal feature selection strategy for Transformer architectures and providing a thorough comparison with conventional deep learning methods. To further strengthen the work, the reviewer invites the authors to reply and take these comments into consideration to modify the paper accordingly as follows:

  1. The reviewer suggested that the title can be improved to “Minimalist Deep Learning for Solar Power Forecasting: Transformer-Based Prediction Using Key Meteorological Features”
  2. The abstract should more clearly state the novelty compared to previous studies. Specify what new insights are gained from the minimal feature selection strategy and the significance of achieving comparable accuracy with fewer inputs. The abstract should be improved to include: 1) problem and motivation 2) the main objective of the study 3) the methodology 4) the main quantitative results and major conclusion. Condense the abstract to around 200–250 words, focusing on background, problem, objective and method, key results, and major conclusion.
  3. The introduction part should be improved and reorganized to cover three parts clearly and sequentially (To be easily understood for the reader) as follows: 1) motivation and incitement, and 2) literature review and research gaps: Consolidate previous works in a comparative table or a structured paragraph, clearly stating their gaps., and 3) Novelty/contributions and paper organization.
  4. Justify the selection of solar irradiance and temperature as the only input variables. Although SHAP values are mentioned, present a clearer feature selection pipeline and why other variables (e.g., humidity, wind speed, cloud cover) were excluded.
  5. The description of the Transformer architecture lacks detail. Provide a schematic diagram of the model structure (if already present, ensure it is labeled and referenced properly in the text), and include hyperparameters, input formatting, and training process explanation for reproducibility.
  6. The paper mentions using a sliding window approach, but window size selection is not justified. Explain the rationale for choosing a 5-hour window and provide sensitivity analysis regarding window size.
  7. The dataset description is detailed but lacks information about missing data, outlier removal, and possible selection biases (e.g., only from one region in South Korea).
  8. Results may not generalize beyond the specific location and conditions tested. Discuss limitations and propose plans for multi-region or multi-year datasets in future work.
  9. Model testing on truly unseen data should be clarified. Describe whether there is a temporal split (e.g., year-wise or seasonal hold-out) and how the validation set reflects realistic deployment.
  10. The performance comparison table (Table 6) needs more explanation. Are all models trained on identical data splits, preprocessing methods, and metric calculations? Clarify this for a fair comparison.
  11. Statistical significance of the performance differences should be discussed. Provide error bars or statistical tests if possible.
  12. The reasons for the minimal performance drop from multivariate to minimal Transformer input are hypothesized but not quantitatively analyzed. Add discussion or ablation results to support claims.
  13. The conclusions section lacks numerical results to support the findings and claims made throughout the paper. The absence of quantitative results makes it difficult to assess the actual impact of the different models.

Author Response

Response to Reviewer 3 Comments

1. Summary

Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions highlighted in the re-submitted files.

2. Point-by-Point Response

Comment 1: The reviewer suggested that the title can be improved to “Minimalist Deep Learning for Solar Power Forecasting: Transformer-Based Prediction Using Key Meteorological Features”

Response: The suggestion is acknowledged. The proposed title more clearly reflects the study’s focus on a minimal yet influential set of meteorological features for Transformer-based solar power forecasting. The manuscript title has been updated accordingly.

Revised: Minimalist Deep Learning for Solar Power Forecasting: Transformer-Based Prediction Using Key Meteorological Features. (TITLE)

Comment 2: The abstract should more clearly state the novelty compared to previous studies. Specify what new insights are gained from the minimal feature selection strategy and the significance of achieving comparable accuracy with fewer inputs. The abstract should be improved to include: 1) problem and motivation 2) the main objective of the study 3) the methodology 4) the main quantitative results and major conclusion. Condense the abstract to around 200–250 words, focusing on background, problem, objective and method, key results, and major conclusion.

Response: The abstract has been revised to clearly highlight the novelty of the minimal feature selection strategy and its significance. The revised version states the problem and motivation, study objective, methodology, key quantitative results, and major conclusion in the requested format.

Comment 3: The introduction part should be improved and reorganized to cover three parts clearly and sequentially (To be easily understood for the reader) as follows: 1) motivation and incitement, and 2) literature review and research gaps: Consolidate previous works in a comparative table or a structured paragraph, clearly stating their gaps., and 3) Novelty/contributions and paper organization.

Response:

1. A paragraph expressing the motivation and incitement of the study has been added within the Introduction to provide clearer context for the research direction (lines 106–116).

2. Section 2 has been renamed Literature Review and Research Gaps, and the previous works have been consolidated in a structured manner to highlight existing limitations more effectively.

3. In addition, a paragraph describing the novelty, contributions, and organization of the paper has been included at the end of Section 2 to present the study's unique aspects and overall structure in a clear and sequential format.

Comment 4: Justify the selection of solar irradiance and temperature as the only input variables. Although SHAP values are mentioned, present a clearer feature selection pipeline and why other variables (e.g., humidity, wind speed, cloud cover) were excluded.

Response: The feature selection process has been clarified by presenting a complete pipeline that explains how the key variables were identified and why other meteorological variables were excluded. XGBoost was used to evaluate feature importance, and SHAP values were applied to measure the individual contribution of each variable to the prediction output. The SHAP summary plot highlights a clear separation between influential and weak predictors. Solar irradiance and soil temperature at a depth of ten centimeters show the strongest effects on model output, while variables such as humidity, wind speed, cloud cover, and pressure exhibit SHAP values close to zero, indicating minimal influence on forecasting behavior. An ablation study was also included to verify the contribution of each variable through direct performance evaluation. These steps collectively justify the selection of irradiance and temperature as the primary inputs for the minimal feature Transformer model and provide a clear explanation for excluding the remaining variables. (in Section 3.2)
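The SHAP + XGBoost pipeline itself depends on those libraries; as a library-free analogue, permutation importance conveys the same idea of ranking features by their marginal effect on error. This is a sketch under assumptions (a synthetic dataset and an oracle model stand in for the real data and trained model), not the manuscript's method:

```python
import numpy as np

def mae(y, p):
    return np.abs(y - p).mean()

def permutation_importance(model_fn, X, y, rng):
    """Increase in MAE when one feature column is shuffled: a rough analogue
    of the SHAP-style ranking used to keep irradiance and temperature."""
    base = mae(y, model_fn(X))
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # destroy this feature's relationship to y
        scores.append(mae(y, model_fn(Xp)) - base)
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))      # columns: "irradiance", "temperature", noise
y = 2.0 * X[:, 0] + 0.5 * X[:, 1]  # target ignores the third feature
model_fn = lambda X: 2.0 * X[:, 0] + 0.5 * X[:, 1]  # oracle model for illustration
imp = permutation_importance(model_fn, X, y, rng)
print(imp.round(3))
```

The dominant first column and near-zero third column mirror the separation between influential and weak predictors that the SHAP summary plot shows.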

Comment 5: The description of the Transformer architecture lacks detail. Provide a schematic diagram of the model structure (if already present, ensure it is labeled and referenced properly in the text), and include hyperparameters, input formatting, and training process explanation for reproducibility.

 

Response: Additional details describing the Transformer architecture have been incorporated to address the reviewer’s concerns regarding clarity and reproducibility. The revised text expands Section 4.1 by specifying the number of encoder blocks, attention configuration, feedforward dimensions, embedding components, pooling strategy, and the final regression structure. The explanation now includes the complete set of hyperparameters considered during tuning, as well as the final values selected for training. Section 4.2 has been expanded to include mathematical definitions of the attention mechanism, positional encoding, and the feedforward network to clarify the internal operations of the model. A schematic diagram of the architecture has been added as Figure 4 and is referenced properly in the text. The input formatting process is also described in detail through the five-hour sliding window used to construct sequences of irradiance, temperature, and past power generation values.

 

Comment 6: The paper mentions using a sliding window approach, but window size selection is not justified. Explain the rationale for choosing a 5-hour window and provide sensitivity analysis regarding window size.

Response: The reason for selecting a five-hour sliding window has been clarified by providing a detailed explanation of the windowing strategy and including a sensitivity analysis. The revised text describes how the sliding window approach converts historical climate and power generation data into input output pairs for time series forecasting. A comparison of window sizes of three, five, eight, and twelve hours was conducted to evaluate the effect of different temporal contexts on model performance. The results show that the five-hour window offers lower error values than the three-hour window, indicating improved ability to capture short term variations in solar generation. Longer windows of eight and twelve hours were not suitable due to limited data available for sequence generation. These findings demonstrate that the five-hour window offers a practical balance between temporal coverage and data availability. (in line 508-516)

 

Comment 7: The dataset description is detailed but lacks information about missing data, outlier removal, and possible selection biases (e.g., only from one region in South Korea).

Response: Additional clarification has been included in the dataset section to address the reviewer’s concern. The revised text explains that the meteorological data was supplied in separate yearly datasets and later combined after confirming consistency in measurement formats. Missing values and irregular entries were handled by removing periods of system downtime and incomplete observations. Outliers due to sensor errors or abnormal weather reporting were examined through distribution checks, and values outside the realistic operating range of the photovoltaic system were excluded. (In line 431-439)

 

Comment 8: Results may not generalize beyond the specific location and conditions tested. Discuss limitations and propose plans for multi-region or multi-year datasets for future work.

Response: A discussion addressing the limitation of using data from a single location has been added. The revised text explains that the results may not fully generalize to other regions or climate conditions and outlines plans for future work that include incorporating multi-region and extended multi-year datasets. (In line 944–951)

 

Comment 9: Model testing on truly unseen data should be clarified. Describe whether there is a temporal split (e.g., year-wise or seasonal hold-out) and how the validation set reflects realistic deployment.

Response: Clarification of the model evaluation procedure has been added. The revised text explains that a temporal splitting strategy was used, in which earlier observations were assigned to the training set and later observations were reserved exclusively for validation and testing. (in line 775–783)

Comment 10: The performance comparison table (Table 6) needs more explanation. Are all models trained on identical data splits, preprocessing methods, and metric calculations? Clarify this for a fair comparison.

Response: Additional clarification has been added. The revised text explains that all models in Table 6 were trained and evaluated using identical preprocessing steps, temporal data splits, input window structures, and performance metrics. (in lines 870–895)

Comment 11: Statistical significance of the performance differences should be discussed. Provide error bars or statistical tests if possible.

Response: A discussion on statistical significance has been added. The revised text explains that performance variability was assessed using repeated training runs, and the resulting MAE and RMSE distributions were analyzed to determine whether the observed differences were meaningful. (in line 887–895)

Comment 12: The reasons for the minimal performance drop from multivariate to minimal Transformer input are hypothesized but not quantitatively analyzed. Add discussion or ablation results to support claims.

Response: An explanation has been added to clarify why the minimal-feature Transformer performs similarly to the multivariate model. The revised text incorporates SHAP-based feature importance and ablation results, demonstrating that solar irradiance and soil temperature at 10 cm depth account for most of the predictive signal while additional variables provide only marginal improvement. (in line 601–612)

Comment 13: The conclusions section lacks numerical results to support the findings and claims made throughout the paper. The absence of quantitative results makes it difficult to assess the actual impact of the different models.

Response: Numerical results have been added to the conclusions. These additions provide clear quantitative evidence for the findings presented in the paper. (in line 944–951)

 

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The revisions are well addressed for publication in Energies journal.

Reviewer 3 Report

Comments and Suggestions for Authors

The reviewer would like to thank the authors for considering all comments seriously where the paper has been modified accordingly. Only minor comment to reduce the keywords to be 6 or 7 instead of 10.