Peer-Review Record

A Learning-Based Particle Swarm Optimizer for Solving Mathematical Combinatorial Problems

by Rodrigo Olivares 1,*, Ricardo Soto 2,*, Broderick Crawford 2, Víctor Ríos 1, Pablo Olivares 1, Camilo Ravelo 1, Sebastian Medina 1 and Diego Nauduan 1
Reviewer 4: Anonymous
Submission received: 18 May 2023 / Revised: 21 June 2023 / Accepted: 26 June 2023 / Published: 28 June 2023

Round 1

Reviewer 1 Report

This paper proposes an auto-adaptive particle swarm optimizer using learning-based strategies. However, some descriptions are unclear, and revisions to the manuscript are necessary.

1. Please explain further which learning-based strategies are used to address the problems in PSO.

2. Please explain further how to prove that the proposed algorithm can find the optimal solution.

3. Please explain further why three different Q-learning methods are needed.

4. Q-learning involves uncertainty. Will this uncertainty affect the results of PSO?

5. In the paper, the authors have focused on adaptive parameter control methods through reinforcement learning. A comparison of different reinforcement learning methods needs to be analyzed to indicate the advantages of this work; see, for example:

[a] IEEE Transactions on Power Systems, vol. 37, no. 5, pp. 4067-4077, 2022

[b] IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 12, pp. 7778-7790, 2022

[c] IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 8, pp. 3612-3621, 2022

Minor editing of English language required.

Author Response

We thank Reviewer 1 for the suggestions and comments. We have submitted a point-by-point response file addressing each recommendation, together with the improved version of the manuscript.

Author Response File: Author Response.pdf

Reviewer 2 Report

While the paper is of good quality, I believe the following changes should be made:

- Revise grammatical errors in the figures; for example, Figure 1.

- Cite Sutton in the definition of Reinforcement Learning.

- Does the Q-table affect PSO only by determining particle motions?

- How is PSO included in each of the three variants of Q-learning?

Author Response

We thank Reviewer 2 for the suggestions and comments. We have submitted a point-by-point response file addressing each recommendation, together with the improved version of the manuscript.

Author Response File: Author Response.pdf

Reviewer 3 Report

Abstract: Using three control methods based on reinforcement learning, the authors present a set of adaptive parameter control methods to adjust the algorithm while it solves optimization problems.

Starting from nature-inspired optimization techniques, the level of adaptability, and the subset of swarm-intelligence metaheuristics, the goal and main contributions of the paper are described in the Introduction.

Chapter 2, Related work, presents a robust analysis of current related work on hybridizations between learning techniques and metaheuristics. Covering reinforcement learning applied to PSO and its new variations, the paper explains how Q-learning can be modified to provide better results.

Chapter 3, Preliminaries, describes the reinforcement-learning concept, the parameter settings used in the model-free Q-learning algorithm, and single-state Q-learning. Chapter 4, Developed solution, describes different ways to apply Q-learning to PSO and how the implementation improves its performance.
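For context, the single-state Q-learning mentioned in this summary presumably follows the standard tabular update rule, Q(s,a) ← Q(s,a) + α(r + γ·max_a′ Q(s′,a′) − Q(s,a)). A minimal sketch follows; the action set, reward, and parameter values are hypothetical illustrations, not taken from the manuscript:

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy single-state use ("SSQL"): one state, actions indexing three
# hypothetical candidate parameter values for the optimizer.
q = {0: [0.0, 0.0, 0.0]}
q_update(q, state=0, action=1, reward=1.0, next_state=0)
print(q[0][1])  # 0.1
```

In a parameter-control setting, the reward would typically reflect whether the chosen parameter value improved the swarm's best fitness in the last iteration.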

Chapter 5, Experimental setup, describes the phases of the experimental design, providing a descriptive and statistical analysis of the obtained results.

Chapter 6, Discussion, is well structured and well founded. It describes the testing phase for each instance of the proposed models in order to evaluate the approach's effectiveness on the obtained experimental results, giving an overall performance comparison on instances MKP06, MKP35, and MKP70 between NPSO, the modified Q-learning, and SSQL, compared to the classic version and the standard PSO.

The paper ends with conclusions and bibliography.

Finally, a possible recommendation would be to reduce the text-similarity percentage, taking into account the attached report.

Comments for author File: Comments.pdf

Minor editing of English language required.

Author Response

We thank Reviewer 3 for the suggestions and comments. We have submitted a point-by-point response file addressing each recommendation, together with the improved version of the manuscript.

Author Response File: Author Response.pdf

Reviewer 4 Report

The authors have introduced an auto-adaptive particle swarm optimizer based on reinforcement learning. It is an interesting research idea; however, the manuscript needs revision before publication.

1. The abstract must be clear and concise; highlight the contribution in the introduction.

2. The theoretical depth is very limited; the authors must provide a detailed theoretical analysis.

3. In Equation 1, use different notations for the new and old Q-values.

4. In Algorithm 1, what is S′? Define it before its first use.

5. As I understand it, the dimension of the solution vector is 'n'. What is 'N' in Algorithm 2?

6. You must compare the complexity of the algorithms.

7. In line 318, 'limited testing time' is mentioned. What time limit is set to terminate the algorithm?

8. The experimental analysis is limited; more state-of-the-art schemes must be added for comparison.


Moderate editing of English language required.

Author Response

We thank Reviewer 4 for the suggestions and comments. We have submitted a point-by-point response file addressing each recommendation, together with the improved version of the manuscript.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

All my suggestions were addressed.
