Article
Peer-Review Record

Training Feedforward Neural Networks Using an Enhanced Marine Predators Algorithm

Processes 2023, 11(3), 924; https://doi.org/10.3390/pr11030924
by Jinzhong Zhang and Yubao Xu *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 14 February 2023 / Revised: 13 March 2023 / Accepted: 16 March 2023 / Published: 17 March 2023

Round 1

Reviewer 1 Report

In this paper, the MPA is employed to train feedforward neural networks, and the purpose is not only to determine the best combination of connection weights and deviation values but also to acquire the global best solution for the given input values. Overall, the article is very interesting, but there are some problems in the contribution description, literature review and innovation. I agree to the publication of this paper only after it has been revised according to my comments. Some comments and suggestions are as follows:

1. It is recommended that the author summarize the contribution of the method proposed in this paper in the introduction to highlight the research advantages of this paper

2. In the article, the MPA is based on the marine predators' foraging strategy, utilizing a distinctive optimization mechanism of Lévy flight, Brownian motion and the optimal encounter rate policy to resolve the optimization problem. It is worth mentioning that both the MPA and the FNN are very mature methods, so where is the innovation of this article?

3. The literature review is limited to feedforward neural networks when analyzing neural networks. However, the article lacks a comprehensive analysis of recent work on DL-based neural networks, and it is recommended to analyze more recent work: A variational local weighted deep sub-domain adaptation network for remaining useful life prediction facing cross-domain condition; An Integrated Multitasking Intelligent Bearing Fault Diagnosis Scheme Based on Representation Learning Under Imbalanced Sample Condition; An integrated multi-head dual sparse self-attention network for remaining useful life prediction.

4. Based on Table 1 and Table 2, the MPA and the MPA-based feedforward neural networks seem to have many similarities.

5. Why does this article use the FNN as the basic network, given that the application of FNNs is very mature? Why not consider DL methods such as CNNs? These might work better.

6. The quality of Figures 3–36 is not sufficient, and they remain unclear after zooming in. It is recommended to correct the format of the figures to increase their clarity.

7. To establish the viability and suitability, the MPA is contrasted with other algorithms, including ALO, AVOA, DOA, FPA, MFO, SCA, SSA and SSO. Although the author compared many optimization methods, the basic network was not changed. It is recommended to compare with more advanced ML methods, such as: A parallel hybrid neural network with integration of spatial and temporal features for remaining useful life prediction in prognostics.

8. The conclusion part is too lengthy; it is recommended to streamline it.

In short, in its current form, the paper is not suitable for acceptance. The paper needs rewriting, by addressing the above-mentioned comments.

Author Response

Response to Reviewer 1 Comments

 

In this paper, the MPA is employed to train feedforward neural networks, and the purpose is not only to determine the best combination of connection weights and deviation values but also to acquire the global best solution for the given input values. Overall, the article is very interesting, but there are some problems in the contribution description, literature review and innovation. I agree to the publication of this paper only after it has been revised according to my comments. Some comments and suggestions are as follows:

 

Thank you for your kind comments and suggestions. Responses for your suggestions are provided as follows.

 

  1. It is recommended that the author summarize the contribution of the method proposed in this paper in the introduction to highlight the research advantages of this paper.

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We can understand your concern and have made the changes accordingly. The introduction section has been re-written to summarize the contribution of the proposed method and highlight the research advantages of this paper. The ranking-based mutation operator is added to the basic MPA, which not only determines the best search agent and elevates the exploitation ability but also delays premature convergence and accelerates the optimization process. The EMPA is used to train the FNNs, and the proposed method integrates exploration and exploitation to determine the best solution. The EMPA has certain effectiveness and feasibility to achieve a quicker convergence speed and greater calculation accuracy. For the convenience of reading, we have highlighted the relevant parts in blue in the Introduction of the revised manuscript.

The modified manuscript is as follows:

  1. Introduction

The MPA is derived from universal hunting and gathering mechanisms, particularly Lévy flight, Brownian motion, and the optimal encounter rate policy between predator and prey [42]. To enhance the availability and practicability, the ranking-based mutation operator is added to the basic MPA, which accelerates the calculation speed and enhances the exploitation by improving the selection probability to mitigate premature convergence. The EMPA is utilized to train the FNNs, and the objective is to attain the minimum classification, prediction and approximation errors by training the FNNs and modifying the connection weights and deviation values. The EMPA has the properties of a straightforward algorithm architecture, excellent control parameters, great traversal efficiency, strong stability and easy implementation. The EMPA integrates exploration and exploitation to determine the best solution. The experimental results demonstrate that the EMPA has certain effectiveness and feasibility to achieve a quicker convergence speed and greater calculation accuracy. Meanwhile, the EMPA has strong stability and robustness to achieve a higher classification rate.
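The ranking-based mutation operator mentioned above can be illustrated with a minimal sketch. It follows the general ranking-based selection idea (better-ranked agents are picked more often when building mutant vectors); the function name `ranked_selection` and the exact acceptance rule are our illustrative assumptions, not the authors' implementation:

```python
import random

def ranked_selection(fitness):
    """Pick a population index with probability increasing in its rank.

    Agents are ranked best-first by fitness (lower error = better); the
    i-th best of N agents gets rank N - i, and a uniformly drawn candidate
    is accepted with probability rank / N, so better agents are selected
    more often when forming mutant vectors.
    """
    n = len(fitness)
    order = sorted(range(n), key=lambda i: fitness[i])      # best first
    rank = {idx: n - pos for pos, idx in enumerate(order)}  # best gets rank n
    while True:
        cand = random.randrange(n)
        if random.random() < rank[cand] / n:
            return cand
```

With fitness values [3.0, 1.0, 2.0], index 1 (the best agent) is returned most often, biasing mutation toward promising regions while still occasionally visiting worse agents.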

 

2. In the article, the MPA is based on the marine predators' foraging strategy, utilizing a distinctive optimization mechanism of Lévy flight, Brownian motion and the optimal encounter rate policy to resolve the optimization problem. It is worth mentioning that both the MPA and the FNN are very mature methods, so where is the innovation of this article?

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We can understand your concern and have made the changes accordingly. The relevant sections have been re-written to explicitly mention the significant contributions of the manuscript and highlight the innovation of the paper. In this paper, an enhanced marine predators algorithm (MPA) based on the ranking-based mutation operator (EMPA) is presented to train FNNs. The ranking-based mutation operator not only determines the best search agent and elevates the exploitation ability but also delays premature convergence and accelerates the optimization process. The EMPA integrates exploration and exploitation to mitigate search stagnation and has sufficient stability and flexibility to acquire the finest solution. The experimental results demonstrate that the EMPA has a quicker convergence speed, greater calculation accuracy, a higher classification rate, and strong stability and robustness, and it is productive and reliable for training FNNs. For the convenience of reading, we have highlighted the relevant parts in blue in the Abstract, Introduction, Section 6 and Section 7 of the revised manuscript.

The modified manuscript is as follows:

Abstract: The input layer, hidden layer and output layer are three models of the neural processors that make up the feedforward neural networks (FNNs). Evolutionary algorithms have been extensively employed in training the FNNs, which can correctly actualize any finite training sample set. In this paper, an enhanced marine predators algorithm (MPA) based on the ranking-based mutation operator (EMPA) is presented to train FNNs, and the objective is to attain the minimum classification, prediction and approximation errors by modifying the connection weights and deviation values. The ranking-based mutation operator not only determines the best search agent and elevates the exploitation ability but also delays premature convergence and accelerates the optimization process. The EMPA integrates exploration and exploitation to mitigate search stagnation and has sufficient stability and flexibility to acquire the finest solution. To assess the significance and stability of the EMPA, a series of experiments on seventeen distinct datasets from the machine learning repository of the University of California-Irvine (UCI) are utilized. The experimental results demonstrate that the EMPA has a quicker convergence speed, greater calculation accuracy, a higher classification rate, and strong stability and robustness, and it is productive and reliable for training FNNs.

  1. Introduction

The MPA is derived from universal hunting and gathering mechanisms, particularly Lévy flight, Brownian motion, and the optimal encounter rate policy between predator and prey [42]. To enhance the availability and practicability, the ranking-based mutation operator is added to the basic MPA, which accelerates the calculation speed and enhances the exploitation by improving the selection probability to mitigate premature convergence. The EMPA is utilized to train the FNNs, and the objective is to attain the minimum classification, prediction and approximation errors by training the FNNs and modifying the connection weights and deviation values. The EMPA has the properties of a straightforward algorithm architecture, excellent control parameters, great traversal efficiency, strong stability and easy implementation. The EMPA integrates exploration and exploitation to determine the best solution. The experimental results demonstrate that the EMPA has certain effectiveness and feasibility to achieve a quicker convergence speed and greater calculation accuracy. Meanwhile, the EMPA has strong stability and robustness to achieve a higher classification rate.
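Training an FNN with a metaheuristic, as described above, amounts to treating each search agent as a flat vector of connection weights and deviation (bias) values and scoring it by the network's error. A minimal sketch under assumed choices (one hidden layer, tanh hidden activation, linear output; the function name and the parameter packing order are ours, not the paper's):

```python
import numpy as np

def fnn_mse(theta, X, y, n_hidden):
    """Decode a flat parameter vector into a 1-hidden-layer FNN, return MSE.

    theta packs, in order: input-to-hidden weights, hidden biases,
    hidden-to-output weights, output bias. Each search agent is one such
    vector; the fitness to minimize is the MSE on the training set.
    """
    n_in = X.shape[1]
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_hidden]; i += n_hidden
    b2 = theta[i]
    H = np.tanh(X @ W1 + b1)   # hidden layer (tanh activation assumed)
    y_hat = H @ W2 + b2        # linear output layer
    return float(np.mean((y_hat - y) ** 2))
```

An optimizer such as the EMPA would evolve a population of `theta` vectors and keep the one with the lowest returned error.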

  6. Experimental results and analysis

6.4. Results and analysis

Statistically, the EMPA is based on the marine predators' foraging strategy, imitating Lévy flight, Brownian motion and the optimal encounter rate policy to arrive at the overall best solution. The EMPA is employed to train the FNNs for the following reasons. First, the EMPA has the properties of a straightforward algorithm architecture, excellent control parameters, great traversal efficiency, strong stability and easy implementation. Second, the EMPA utilizes Lévy flight, Brownian motion and the optimal encounter rate policy to determine the best solution. The Lévy flight can increase the population diversity, expand the search space, enhance the exploitation ability and improve the calculation accuracy. The Brownian motion and optimal encounter rate policy can filter out the best solution, avoid search stagnation, enhance the exploration ability and accelerate the convergence speed. Third, the ranking-based mutation operator is introduced into the MPA. The EMPA not only balances exploration and exploitation to avoid falling into the local optimum and premature convergence but also utilizes a unique search mechanism to renew the position and identify the best solution. To summarize, the EMPA has a quicker convergence speed, greater calculation accuracy, a higher classification rate, and strong stability and robustness. The EMPA has a strong overall optimization ability to train the FNNs.
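The two step types contrasted above can be sketched as follows. The Lévy steps use the standard Mantegna algorithm with the exponent β = 1.5 commonly used in the MPA; the function names and the use of NumPy's `default_rng` are our assumptions, not the authors' code:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Heavy-tailed Lévy-flight step via the Mantegna algorithm.

    Occasional long jumps widen the search space and help escape local
    optima, matching the diversity role described above.
    """
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def brownian_step(dim, rng=None):
    """Brownian step: small local moves drawn from a standard normal."""
    rng = rng if rng is not None else np.random.default_rng()
    return rng.normal(0.0, 1.0, dim)
```

The heavy tail of the Lévy distribution produces far larger extreme steps than the Brownian draws of the same dimension, which is what gives the algorithm its mix of global jumps and local refinement.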

  7. Conclusions and future research

In this paper, an enhanced MPA based on the ranking-based mutation operator is presented to train the FNNs, and the objective is not only to determine the best combination of connection weights and deviation values but also to acquire the global best solution according to the given input values. The ranking-based mutation operator not only enhances the selection probability to filter out the optimal search agent but also mitigates search stagnation to accelerate the convergence speed. The EMPA utilizes the distinctive mechanisms of Lévy flight, Brownian motion, the optimal encounter rate policy and the ranking-based mutation operator to attain the minimum classification, prediction and approximation errors. The EMPA has strong robustness, parallelism and scalability to determine the best value. Compared with other algorithms, the EMPA has excellent reliability and superiority in training the FNNs. The experimental results demonstrate that the convergence speed, calculation accuracy and classification rate of the EMPA are superior to those of other algorithms. Furthermore, the EMPA has strong practicability and feasibility for training FNNs.

 

3. The literature review is limited to feedforward neural networks when analyzing neural networks. However, the article lacks a comprehensive analysis of recent work on DL-based neural networks, and it is recommended to analyze more recent work: A variational local weighted deep sub-domain adaptation network for remaining useful life prediction facing cross-domain condition; An Integrated Multitasking Intelligent Bearing Fault Diagnosis Scheme Based on Representation Learning Under Imbalanced Sample Condition; An integrated multi-head dual sparse self-attention network for remaining useful life prediction.

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We can understand your concern and have made the changes accordingly. The question you raised is very useful. We have utilized the following articles to make the introduction section more productive and have comprehensively analyzed the work on DL-based neural networks. For the convenience of reading, we have highlighted the relevant parts in blue in the Introduction and References of the revised manuscript.

The modified manuscript is as follows:

  1. Introduction

Zhang et al. presented a domain adaptation network for remaining useful life prediction; the proposed method had strong stability to determine the best results [38]. Zhang et al. utilized an integrated multitasking intelligent bearing fault diagnosis scheme to realize detection, classification and fault identification [39]. Zhang et al. proposed an integrated multi-head dual sparse self-attention network for remaining useful life prediction; this method had excellent superiority and robustness [40].

References

  1. Zhang, J.; Li, X.; Tian, J.; Jiang, Y.; Luo, H.; Yin, S. A Variational Local Weighted Deep Sub-Domain Adaptation Network for Remaining Useful Life Prediction Facing Cross-Domain Condition. Reliab. Eng. Syst. Saf. 2023, 231, 108986.
  2. Zhang, J.; Zhang, K.; An, Y.; Luo, H.; Yin, S. An Integrated Multitasking Intelligent Bearing Fault Diagnosis Scheme Based on Representation Learning Under Imbalanced Sample Condition. IEEE Trans. Neural Netw. Learn. Syst. 2023.
  3. Zhang, J.; Li, X.; Tian, J.; Luo, H.; Yin, S. An Integrated Multi-Head Dual Sparse Self-Attention Network for Remaining Useful Life Prediction. Reliab. Eng. Syst. Saf. 2023, 109096.

 

4. Based on Table 1 and Table 2, the MPA and the MPA-based feedforward neural networks seem to have many similarities.

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We can understand your concern and have made the changes accordingly. The question you raised is very useful. We have deleted Table 1. For the convenience of reading, we have highlighted the relevant parts in blue in Section 5 of the revised manuscript.

The modified manuscript is as follows:

  5. EMPA-based feedforward neural networks

Table 1. Correlation between issue scope and EMPA scope

Issue scope                                      EMPA scope
A set scheme to tackle the FNNs                  A marine predator population
The optimal scheme to obtain the best solution   The marine predator or search agent
The evaluation value of FNNs                     The fitness value of EMPA

 

5. Why does this article use the FNN as the basic network, given that the application of FNNs is very mature? Why not consider DL methods such as CNNs? These might work better.

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We can understand your concern, and the question you raised is very useful. This research work is very necessary and valuable, and the workload of the experiments is enormous, which is our future research work. The question you raised provides a new line of thinking for future work. In this paper, the ranking-based mutation operator is introduced into the basic MPA, which not only determines the best search agent and elevates the exploitation ability but also delays premature convergence and accelerates the optimization process. The EMPA integrates exploration and exploitation to mitigate search stagnation and has sufficient stability and flexibility to acquire the finest solution. The experimental results demonstrate that the EMPA has a quicker convergence speed, greater calculation accuracy, a higher classification rate, and strong stability and robustness, and it is productive and reliable for training FNNs. We will utilize DL methods such as CNNs to verify the stability and robustness of the proposed algorithm in the next paper. We have utilized the following article to make the introduction section more productive. The authors are thankful to the reviewer for the considerations, constructive reviews and beneficial suggestions on the manuscript. These comments have helped us in improving the attractiveness, readability and quality of the manuscript. We also appreciate the time and effort put in by the reviewer in reviewing the manuscript. For the convenience of reading, we have highlighted the relevant parts in blue in the Introduction, Section 7 and References of the revised manuscript.

The modified manuscript is as follows:

  1. Introduction

Zhang et al. presented a domain adaptation network for remaining useful life prediction; the proposed method had strong stability to determine the best results [38]. Zhang et al. utilized an integrated multitasking intelligent bearing fault diagnosis scheme to realize detection, classification and fault identification [39]. Zhang et al. proposed an integrated multi-head dual sparse self-attention network for remaining useful life prediction; this method had excellent superiority and robustness [40]. Zhang et al. designed a parallel hybrid neural network for remaining useful life prediction in prognostics; this method had better results [41].

  7. Conclusions and future research

In future research, we will utilize DL methods, ML methods and CNNs. We will modify the activation function, such as ReLU and sReLU. We will employ random forest, XGBOOST, KNN and FNNs learned with other optimization algorithms. The EMPA will be utilized to resolve complex optimization problems, such as intelligent vehicle path planning, intelligent temperature-controlled self-adjusting electric fans and sensor information fusion.

References

  1. Zhang, J.; Li, X.; Tian, J.; Jiang, Y.; Luo, H.; Yin, S. A Variational Local Weighted Deep Sub-Domain Adaptation Network for Remaining Useful Life Prediction Facing Cross-Domain Condition. Reliab. Eng. Syst. Saf. 2023, 231, 108986.
  2. Zhang, J.; Zhang, K.; An, Y.; Luo, H.; Yin, S. An Integrated Multitasking Intelligent Bearing Fault Diagnosis Scheme Based on Representation Learning Under Imbalanced Sample Condition. IEEE Trans. Neural Netw. Learn. Syst. 2023.
  3. Zhang, J.; Li, X.; Tian, J.; Luo, H.; Yin, S. An Integrated Multi-Head Dual Sparse Self-Attention Network for Remaining Useful Life Prediction. Reliab. Eng. Syst. Saf. 2023, 109096.
  4. Zhang, J.; Tian, J.; Li, M.; Leon, J.I.; Franquelo, L.G.; Luo, H.; Yin, S. A Parallel Hybrid Neural Network with Integration of Spatial and Temporal Features for Remaining Useful Life Prediction in Prognostics. IEEE Trans. Instrum. Meas. 2022, 72, 1–12.

 

6. The quality of Figures 3–36 is not sufficient, and they remain unclear after zooming in. It is recommended to correct the format of the figures to increase their clarity.

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We can understand your concern and have made the changes accordingly. We have modified all Figures 3–36 to enhance their quality and clarity. For the convenience of reading, we have highlighted the relevant parts in blue in Section 6 of the revised manuscript.

The modified manuscript is as follows:

  6. Experimental results and analysis

6.4. Results and analysis

Figs. 3–19. The convergent curves of Blood, Scale, Survival, Liver, Seeds, Wine, Iris, Statlog, XOR, Balloon, Cancer, Diabetes, Gene, Parkinson, Splice, WDBC and Zoo, respectively.

Figs. 20–36. The ANOVA tests of Blood, Scale, Survival, Liver, Seeds, Wine, Iris, Statlog, XOR, Balloon, Cancer, Diabetes, Gene, Parkinson, Splice, WDBC and Zoo, respectively.

7. To establish the viability and suitability, the MPA is contrasted with other algorithms, including ALO, AVOA, DOA, FPA, MFO, SCA, SSA and SSO. Although the author compared many optimization methods, the basic network was not changed. It is recommended to compare with more advanced ML methods, such as: A parallel hybrid neural network with integration of spatial and temporal features for remaining useful life prediction in prognostics.

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We can understand your concern, and the question you raised is very useful. This research work is very necessary and valuable, and the workload of the experiments is enormous, which is our future research work. The question you raised provides a new line of thinking for future work. In this paper, the ranking-based mutation operator is introduced into the basic MPA, which not only determines the best search agent and elevates the exploitation ability but also delays premature convergence and accelerates the optimization process. The EMPA integrates exploration and exploitation to mitigate search stagnation and has sufficient stability and flexibility to acquire the finest solution. The experimental results demonstrate that the EMPA has a quicker convergence speed, greater calculation accuracy, a higher classification rate, and strong stability and robustness, and it is productive and reliable for training FNNs. We will compare with more advanced ML methods in the next paper. We have utilized the following article to make the introduction section more productive. The authors are thankful to the reviewer for the considerations, constructive reviews and beneficial suggestions on the manuscript. These comments have helped us in improving the attractiveness, readability and quality of the manuscript. We also appreciate the time and effort put in by the reviewer in reviewing the manuscript. For the convenience of reading, we have highlighted the relevant parts in blue in the Introduction, Section 7 and References of the revised manuscript.

The modified manuscript is as follows:

  1. Introduction

Zhang et al. designed a parallel hybrid neural network for remaining useful life prediction in prognostics; this method had better results [41].

  7. Conclusions and future research

In future research, we will utilize DL methods, ML methods and CNNs. We will modify the activation function, such as ReLU and sReLU. We will employ random forest, XGBOOST, KNN and FNNs learned with other optimization algorithms. The EMPA will be utilized to resolve complex optimization problems, such as intelligent vehicle path planning, intelligent temperature-controlled self-adjusting electric fans and sensor information fusion.

References

  1. Zhang, J.; Tian, J.; Li, M.; Leon, J.I.; Franquelo, L.G.; Luo, H.; Yin, S. A Parallel Hybrid Neural Network with Integration of Spatial and Temporal Features for Remaining Useful Life Prediction in Prognostics. IEEE Trans. Instrum. Meas. 2022, 72, 1–12.

 

8. The conclusion part is too lengthy; it is recommended to streamline it.

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We can understand your concern and have made the changes accordingly. The conclusion has been re-written to streamline it. For the convenience of reading, we have highlighted the relevant parts in blue in Section 7 of the revised manuscript.

The modified manuscript is as follows:

  7. Conclusions and future research

In this paper, an enhanced MPA based on the ranking-based mutation operator is presented to train the FNNs, and the objective is not only to determine the best combination of connection weights and deviation values but also to acquire the global best solution according to the given input values. The ranking-based mutation operator not only enhances the selection probability to filter out the optimal search agent but also mitigates search stagnation to accelerate the convergence speed. The EMPA utilizes the distinctive mechanisms of Lévy flight, Brownian motion, the optimal encounter rate policy and the ranking-based mutation operator to attain the minimum classification, prediction and approximation errors. The EMPA has strong robustness, parallelism and scalability to determine the best value. Compared with other algorithms, the EMPA has excellent reliability and superiority in training the FNNs. The experimental results demonstrate that the convergence speed, calculation accuracy and classification rate of the EMPA are superior to those of other algorithms. Furthermore, the EMPA has strong practicability and feasibility for training FNNs.

 

In short, in its current form, the paper is not suitable for acceptance. The paper needs rewriting, by addressing the above-mentioned comments.

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

The authors are thankful to the anonymous reviewers for considerations, constructive reviews, and beneficial suggestions of the manuscript. These comments have helped us in improving the attractiveness, readability, and quality of the manuscript. We also appreciate the time and efforts put in by the editors and reviewers in reviewing the manuscript.

 

 

Summary

The authors are thankful to the editor and anonymous reviewers for considerations, constructive reviews, and beneficial suggestions of the manuscript. These comments have helped us in improving the attractiveness, readability, and quality of the manuscript. We also appreciate the time and efforts put in by the editors and reviewers in reviewing the manuscript.

In response to the comments, the authors have revised the final manuscript by addressing all the points mentioned below. The modifications are detailed in the individual replies to the anonymous reviewers and indicated with ‘blue highlight’ fonts in the revised manuscript. We hope the editors and anonymous reviewers find our responses and comments satisfactory.

 

Author Response File: Author Response.docx

Reviewer 2 Report

The paper describes the topic of training feedforward neural networks using the marine predators algorithm.

 

General comments:

Please add the list of abbreviations – there are many in the article

Table 3 – some datasets are too small e.g. XOR, balloon

No comparison of the effectiveness of the neural network with the MPA algorithm with another algorithm. It is difficult to verify whether the proposed solution is more effective than others, e.g. random forest, XGBOOST, KNN, FNN learned with other optimization algorithms.

 

Detailed comments:

Point 2: Mathematical modeling of FNNs. Activation functions are of different types (ReLU, sReLU, tanh and many others); usually the output layer has a linear activation function. Could you improve the information in this section?

Table 4: did you search for the optimal value of the FPA?

Author Response

Response to Reviewer 2 Comments

 

The paper describes the topic of training feedforward neural networks using the marine predators algorithm.

 

 Thank you for your kind comments and suggestions. Responses for your suggestions are provided as follows.

 

General comments:

Please add the list of abbreviations – there are many in the article

Table 3 – some datasets are too small, e.g. XOR, Balloon

No comparison of the effectiveness of the neural network with the MPA algorithm with another algorithm. It is difficult to verify whether the proposed solution is more effective than others, e.g. random forest, XGBOOST, KNN, FNN learned with other optimization algorithms.

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We can understand your concern and have made the changes accordingly. We have added the list of abbreviations. We have utilized a series of experiments on seventeen distinct datasets from the machine learning repository of the University of California-Irvine (UCI) to verify the stability and robustness of the proposed algorithm. The question you raised is very useful. This research work is very necessary and valuable, and the workload of the experiments is enormous, which is our future research work. The question you raised provides a new line of thinking for future work. In this paper, the ranking-based mutation operator is introduced into the basic MPA, which not only determines the best search agent and elevates the exploitation ability but also delays premature convergence and accelerates the optimization process. The EMPA integrates exploration and exploitation to mitigate search stagnation and has sufficient stability and flexibility to acquire the finest solution. The experimental results demonstrate that the EMPA has a quicker convergence speed, greater calculation accuracy, a higher classification rate, and strong stability and robustness, and it is productive and reliable for training FNNs. We will compare the effectiveness of the neural network trained by the EMPA with other methods, e.g. random forest, XGBOOST, KNN and FNNs learned with other optimization algorithms. The authors are thankful to the reviewer for the considerations, constructive reviews and beneficial suggestions on the manuscript. These comments have helped us in improving the attractiveness, readability and quality of the manuscript. We also appreciate the time and effort put in by the reviewer in reviewing the manuscript. For the convenience of reading, we have highlighted the relevant parts in blue in Section 7 of the revised manuscript.

The modified manuscript is as follows:

Abbreviations

The following abbreviations are used in this manuscript:

FNNs: Feedforward Neural Networks
MPA: Marine Predators Algorithm
EMPA: Enhanced Marine Predators Algorithm
UCI: University of California-Irvine
ALO: Ant Lion Optimization
AVOA: African Vultures Optimization Algorithm
DOA: Dingo Optimization Algorithm
FPA: Flower Pollination Algorithm
MFO: Moth Flame Optimization
SSA: Salp Swarm Algorithm
SSO: Sperm Swarm Optimization
MLP: Multilayer Perceptron
MSE: Mean Squared Error
N/A: Not Applicable

7. Conclusions and future research

In future research, we will utilize DL methods, ML methods, and CNNs. We will modify the activation function, for example to ReLU or sReLU. We will employ random forest, XGBoost, KNN, and FNNs trained with other optimization algorithms. The EMPA will also be applied to complex optimization problems such as intelligent vehicle path planning, intelligent temperature-controlled self-adjusting electric fans, and sensor information fusion.

 

Detailed comments:

Point 2: Mathematical modeling of FNNs. Activation functions come in different types (ReLU, sReLU, tanh, and many others), and usually the output layer has a linear activation function; could you improve the information in this section?

Table 4: did you search for the optimal parameter values of the FPA?

Response:

Thank you for your insightful suggestion. We have improved this issue in the following way:

We understand your concern and have revised the manuscript accordingly. We have added the parameters the FPA uses to search for the optimal value. In this paper, a ranking-based mutation operator is introduced into the basic MPA; it not only identifies the best search agent and strengthens the exploitation ability but also delays premature convergence and accelerates the optimization process. The EMPA balances exploration and exploitation to mitigate search stagnation, and it has sufficient stability and flexibility to acquire the finest solution. The experimental results demonstrate that the EMPA achieves faster convergence, greater calculation accuracy, a higher classification rate, and strong stability and robustness, making it productive and reliable for training FNNs. Modeling FNNs with different types of activation functions is very valuable, but the experimental workload is substantial, so we will address it in future work; your question provides a new direction for that work. The authors thank the reviewer for the constructive reviews and beneficial suggestions, which have helped us improve the attractiveness, readability, and quality of the manuscript, and we appreciate the time and effort put into the review. For the convenience of reading, we have highlighted the relevant parts in blue in Section 6.3 and Section 7 of the revised manuscript.
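The reviewer's point about activation functions can be illustrated with a minimal sketch of a one-hidden-layer FNN: an interchangeable nonlinear hidden activation (ReLU, tanh, sigmoid) feeding a linear output layer, with MSE as the fitness an optimizer such as the EMPA would minimize. Function and dictionary names here are our own, not the paper's:

```python
import numpy as np

# Common hidden-layer activation choices; sReLU and others could be added similarly.
ACTIVATIONS = {
    "relu":    lambda z: np.maximum(0.0, z),
    "tanh":    np.tanh,
    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z)),
}

def mlp_forward(X, W1, b1, W2, b2, hidden="sigmoid"):
    """One-hidden-layer FNN: nonlinear hidden activation, linear output layer."""
    h = ACTIVATIONS[hidden](X @ W1 + b1)
    return h @ W2 + b2          # linear output, as is usual for the output layer

def mse(y_pred, y_true):
    """Mean squared error: the fitness a training algorithm would minimize."""
    return float(np.mean((y_pred - y_true) ** 2))
```

Swapping the `hidden` argument is all it takes to study different activation types, which is the comparison the reviewer suggests for future work.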

The modified manuscript is as follows:

6.3. Parameter setting

Table 3. Initial parameters of all algorithms

ALO: Randomized number [0,1]; Constant number 5
AVOA: Randomized number [0,1]; Randomized number [0,1]; Randomized number [-1,1]; Randomized number [-2,2]; Randomized number [0,1]; Randomized number [0,1]; Randomized number [0,1]; Constant number 1.5
DOA: Randomized vector [0,1]; Randomized vector [0,1]; Coefficient vector (1,0); Coefficient vector (1,1); Randomized number (0,3)
FPA: Switch probability 0.8; Step size 1.5; Randomized number [0,1]
MFO: Constant number 1; Randomized number [-1,1]; Randomized number [-2,-1]
SSA: Randomized number [0,1]; Randomized number [0,1]
SSO: Velocity damping factor [0,1]; Randomized number [7,14]; Randomized number [7,14]; Randomized number [7,14]
MPA: Uniform randomized number [0,1]; Uniform randomized number [0,1]; Constant number 0.5; Probability of FADs effect 0.2; Binary vector [0,1]; Randomized number [0,1]
EMPA: Uniform randomized number [0,1]; Uniform randomized number [0,1]; Constant number 0.5; Probability of FADs effect 0.2; Binary vector [0,1]; Randomized number [0,1]; Scaling factor 0.7
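The fixed (non-randomized) EMPA settings listed in Table 3 can be grouped in a small configuration sketch. The class and field names below are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EMPAConfig:
    """Fixed EMPA settings from Table 3; field names are our own labels."""
    P: float = 0.5       # constant number used by the MPA step-size update
    FADs: float = 0.2    # probability of the FADs effect
    F: float = 0.7       # scaling factor of the ranking-based mutation
```

Keeping these values in one frozen object makes the experimental setup easy to reproduce and to report alongside the results.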

7. Conclusions and future research

In future research, we will utilize DL methods, ML methods, and CNNs. We will modify the activation function, for example to ReLU or sReLU. We will employ random forest, XGBoost, KNN, and FNNs trained with other optimization algorithms. The EMPA will also be applied to complex optimization problems such as intelligent vehicle path planning, intelligent temperature-controlled self-adjusting electric fans, and sensor information fusion.

 

 

 

Summary

The authors are thankful to the editor and anonymous reviewers for their consideration, constructive reviews, and beneficial suggestions on the manuscript. These comments have helped us improve the attractiveness, readability, and quality of the manuscript, and we appreciate the time and effort the editors and reviewers put into reviewing it.

In response to the comments, the authors have revised the manuscript by addressing all the points above. The modifications are detailed in the individual replies to the anonymous reviewers and indicated in 'blue highlight' font in the revised manuscript. We hope the editors and anonymous reviewers find our responses and comments satisfactory.

 

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Thanks to the authors for the revisions, the article has been revised very well, and I agree with the publication of this paper

Reviewer 2 Report

Thank you for answer and comments
