Article
Peer-Review Record

Solar and Wind Quantity 24 h—Series Prediction Using PDE-Modular Models Gradually Developed according to Spatial Pattern Similarity

Energies 2023, 16(3), 1085; https://doi.org/10.3390/en16031085
by Ladislav Zjavka
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 23 November 2022 / Revised: 14 January 2023 / Accepted: 16 January 2023 / Published: 18 January 2023
(This article belongs to the Special Issue Challenges and Research Trends of Computational Intelligence)

Round 1

Reviewer 1 Report

A very interesting paper, with clear objectives. My remarks are summarized as follows:

(1) Even though the author provides many graphs comparing his method to computational intelligence-based rivals, he does not provide a comparative analysis in terms of computational complexity.

(2) 8 out of 23 references are self-citations and at least 2 of them are cited with no apparent reason. For example, [16] is included in a trivial phrase: "the quality of the model approximation usually depends on the predetermined set of training and testing samples"

(3) It is mentioned that "new assigned day samples can be additionally ... uncertain variances". Did the author make use of this suggestion?

(4) No comparisons with previous models suggested by the author are included.

Author Response

A very interesting paper, with clear objectives. My remarks are summarized as follows:

(1) Even though the author provides many graphs comparing his method to computational intelligence-based rivals, he does not provide a comparative analysis in terms of computational complexity.

The computational complexity of both compared methods, DNN and DLT, is discussed at the end of Sec. 6, Experiment Evaluation (highlighted).

The DNN computational complexity is naturally higher, as the model is composed gradually from node-by-node evolved binomial tree structures that produce acceptable sum PDE components, which can be included one by one in the output sum to decrease errors. Several optimization algorithms must be synchronized: selection of the optimal 2-input nodes (to reduce combinatorial explosion) and of combinations of applicable modular components, parameter adaptation (the gradient method with back-propagation), selection of PDE-substitution functions in the model definition (rational, periodic, power), elimination of redundant PDE components (from the first included layers), etc. The DNN is self-optimizing in all these automatic procedures, i.e. it does not require predefinition of the training parameters or network structure. DLT uses a standard NN architecture and optimization, which requires a sophisticated hand-made design of the model structure, layer types, and parameters. As is usual in soft computing, the DNN model complexity corresponds to that of the data patterns, which allows modelling and representation of highly dynamic, chaotic, and uncertain (weather-dependent) systems. Model simplification can lead to strong failures in statistical AI prediction. However, the computational complexity can differ on each prediction day, according to the trainability of the applied patterns represented by the PDE models.
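The gradual composition described above — candidate components produced one by one and kept only if they decrease the output error — can be sketched as a greedy forward selection. This is a minimal illustrative sketch, not the author's implementation: the names (compose_model, candidates, target), the RMSE error measure, and the component limit are all assumptions for demonstration.

```python
import numpy as np

def compose_model(candidates, target, max_components=10):
    """Greedy forward selection: add a candidate component to the output
    sum only if it decreases the prediction error (illustrative sketch)."""
    selected = []
    best_err = np.inf
    for comp in candidates:
        trial = selected + [comp]
        pred = np.sum(trial, axis=0)                   # output = sum of components
        err = np.sqrt(np.mean((pred - target) ** 2))   # RMSE against the target series
        if err < best_err:                             # keep only error-reducing components
            selected = trial
            best_err = err
        if len(selected) >= max_components:
            break
    return selected, best_err
```

Under this scheme the model "starts empty" and its final size depends on how many candidates actually reduce the error, which mirrors the self-optimizing behaviour claimed for the DNN.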

(2) 8 out of 23 references are self-citations and at least 2 of them are cited with no apparent reason. For example, [16] is included in a trivial phrase: "the quality of the model approximation usually depends on the predetermined set of training and testing samples"

The redundant references were removed, and new relevant references were added to the list.

(3) It is mentioned that "new assigned day samples can be additionally ... uncertain variances". Did the author make use of this suggestion?

Iterative learning (considered for a future study), using the proposed PDE-modular concept, allows additional training of the previously formed daily models. It can accumulate previously learned knowledge and insert new PDE components into the model while removing useless ones, adapting the current combination / structure depending on various situations. It requires efficient searching of a larger spatial data archive to detect applicable training samples, and improved model / net-structure optimization (see the paragraph below Fig. 10).

(4) No comparisons with previous models suggested by the author are included.

The previous models, applied and published by the author, were based on similarity theory and application of the Buckingham PI-theorem [6]. The newly presented DNN extends and evolves the previous concept by applying the Laplace transformation to PDE derivatives and the inverse operation to restore the original node component functions (based on Operational Calculus). The model development was improved using several optimization algorithms (input selection, combinatorial evolution, and parameter adaptation). The model definition was also improved using several types of base PDE-substitution functions (rational, periodic, power). The DNN model is developed gradually by inserting adequate PDE components, produced one by one in a node-by-node evolved parallel PNN structure (starting empty), in contrast to the previous concept based on a fixed standard neural network architecture (extended and highlighted at the end of Sect. 1, Introduction).

Reviewer 2 Report

  • The title could be written in a more attractive way to reflect the value of the presented research.

  • The abstract needs to be re-written in a coherent way to reflect the research objectives, contributions, impact, and main results of the presented research, instead of beginning with what the method does not consider, etc.

  • Long paragraphs in the introduction; please reconsider the readability.

  • Although the author has put effort into the introduction section, many paragraphs are still vague; the author should focus more on the research gap analysis and on why and how this research differs from other research in the literature.

  • All the symbols of the equations need to be defined.

  • Please explain more about the main results and discussions.

  • A few English-related and typo errors appear in the text.

  • The reference list must be checked again; it is strange that the paper begins with ref. [5], then [15] and then [1]. That leads to mis-citation and mismatch between the presented info and the related references.

  • I see that a ref. is missing from Figs. 10 and 11, or it is a typo error.

Author Response

  • The title could be written in a more attractive way to reflect the value of the presented research

Corrected.

  • The abstract needs to be re-written in a coherent way to reflect the research objectives, contributions, impact, and main results of the presented research, instead of beginning with what the method does not consider, etc.

The Abstract was rewritten to reflect the applied methodology and innovations in statistical RE prediction, including the study contribution and main results. ‘What the method does not consider’ is not mentioned.

  • Long paragraphs in the Introduction; please reconsider the readability.

The Introduction and State-of-the-Art Sects. were rewritten and partitioned into shortened paragraphs.

  • Although the author has put effort into the introduction section, many paragraphs are still vague; the author should focus more on the research gap analysis and on why and how this research differs from other research in the literature.

The Introduction was improved in this way (highlighted). The novelty and contribution are presented, as compared to traditional soft computing.

  • All the symbols of the equations need to be defined

Corrected.

  • Please explain more about the main results and discussions

Sections 6 (Experiment Evaluation) and 7 (Discussion) were extended and improved in this manner.

  • A few English-related and typo errors appear in the text.

The manuscript English was thoroughly revised with two advanced academic writing and language correction tools based on AI (Writefull and Grammarly).

  • The reference list must be checked again; it is strange that the paper begins with ref. [5], then [15] and then [1]. That leads to mis-citation and mismatch between the presented info and the related references.

The references are numbered in sequence as they appear in the text.

  • I see that a ref. is missing from Figs. 10 and 11, or it is a typo error.

The correlation coefficient is a dimensionless parameter; the empty brackets [] are the standard notation for dimensionless units.

Round 2

Reviewer 1 Report

The author fully responded to my remarks.

Author Response

O.K.

Reviewer 2 Report

In the abstract, "The proposed improvements contribute to better performance of the AI modular concept of PDE models, able to represent the dynamics of complex systems. The results are compared with the recent deep learning strategy. Differential learning shows better approximation ability in ramping events, often in RE supply, and more stable predictions without undesirable alterations in day errors, namely in over-break situations." It would be better if the author presented these improvements with numbers or percentages (..%).

Author Response

The day average percentage approximation (similarity correlation) of the differential and deep learning models on real data is compared in the Abstract (highlighted).

Author Response File: Author Response.txt
