Multiple Learning Strategies and a Modified Dynamic Multiswarm Particle Swarm Optimization Algorithm with a Master Slave Structure
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This paper proposed multiple learning strategies and a modified dynamic multiswarm particle swarm optimization with a master-slave structure (MLDMS-PSO). The comparison results between MLDMS-PSO and nine other PSOs on the CEC2017 test suite demonstrated the better performance of MLDMS-PSO on different types of functions.
Comments:
1) In subsect. 4.3.2, Convergence process analysis, after analyzing the numerous Figs. 5-8 the authors concluded: “On composition functions, the convergence process of the algorithms are shown in Fig. 8. MLDMS-PSO obtains competitive results in f22, f23, f27, f29 and f30 but its advantages are not obvious. By further improving the algorithm, we can make it more accurate in solving the composite function and improve the practicability of the algorithm, which is a direction that needs to be constantly explored. In summary, MLDMS-PSO adopts a multiswarms architecture, allowing various subswarms to fully explore the solution space and demonstrate strong exploration ability in multimodal, hybrid and composition functions. However, during the local development process, the convergence process is relatively slow and the efficiency is low.” The authors should explicitly prove the better performance of their proposal against existing ones. This reviewer also thinks that, for better legibility of these figures, the parts of the images where the method names appear below the curves should be presented at a larger size; at present it is impossible to read the algorithm names.
2) In subsec. 4.3.3, Statistical Results of Solutions, the authors presented the results of statistical investigations in Tables 6 and 7. This reviewer did not find results for the MLDMS-PSO algorithm in Table 7, but in the text for the Wilcoxon rank sum test they wrote: “The results are shown in Table 6. The symbols "+", "-" and "=" indicate that MLDMS-PSO performs significantly better, worse and almost the same as the other comparison algorithm, respectively. The symbols "N+", "N-" and "N=" indicate the number that MLDMS-PSO is significantly better than, significantly worse than, and almost the same as the corresponding competitor algorithm, respectively. The comprehensive performance (CP) is equal to the sum of "N+" minus the sum of "N-". Table 6 shows that MLDMS-PSO is significantly better than the other PSO variants on most 10-D, 30-D and 50-D problems.” Please correct this mistake by presenting the results in the mentioned table. Another misunderstanding is that, although three Appendixes 1, 2 and 3 are presented in this manuscript, the authors never mention or analyze them. The authors should discuss or analyze all results presented in the paper, because without their comments it is impossible for a potential reader to understand the significance of this study.
3) This reviewer thinks that in the conclusions the authors could explicitly highlight the principal advantages and drawbacks (see comment 1) of their approach, as well as suggest possible ways to resolve them in future research.
Author Response
Comments 1: In subsect. 4.3.2, Convergence process analysis, after analyzing the numerous Figs. 5-8 the authors concluded: “On composition functions, the convergence process of the algorithms are shown in Fig. 8. MLDMS-PSO obtains competitive results in f22, f23, f27, f29 and f30 but its advantages are not obvious. By further improving the algorithm, we can make it more accurate in solving the composite function and improve the practicability of the algorithm, which is a direction that needs to be constantly explored. In summary, MLDMS-PSO adopts a multiswarms architecture, allowing various subswarms to fully explore the solution space and demonstrate strong exploration ability in multimodal, hybrid and composition functions. However, during the local development process, the convergence process is relatively slow and the efficiency is low.” The authors should explicitly prove the better performance of their proposal against existing ones. This reviewer also thinks that, for better legibility of these figures, the parts of the images where the method names appear below the curves should be presented at a larger size; at present it is impossible to read the algorithm names.
Response 1: Thank you for pointing this out. We agree with this comment and have therefore revised the description of the performance of the proposed algorithm. The proposed MLDMS-PSO shows significant advantages on multimodal functions and has a strong ability to escape from local optimum traps. However, the cost of powerful global search capability is the loss of some local development ability, which slows down its local development speed. Compared to the improvement in solution accuracy, the slight loss in convergence speed can be accepted.
These can be found in the revised manuscript—page 17, last paragraph, lines 2-4.
In addition, to improve the visual representation of the convergence graph, we have remade all the convergence curves and enlarged the local areas to present the convergence process of the algorithm more clearly.
These can be found in the revised manuscript—pages 17-19, Figures 6-9.
Revised:“However, the cost of powerful global search capability is the loss of some local development ability, which slows down its local development speed. Compared to the improvement in solution accuracy, the slight loss in convergence speed can be accepted.”
Comments 2: In subsec. 4.3.3, Statistical Results of Solutions, the authors presented the results of statistical investigations in Tables 6 and 7. This reviewer did not find results for the MLDMS-PSO algorithm in Table 7, but in the text for the Wilcoxon rank sum test they wrote: “The results are shown in Table 6. The symbols "+", "-" and "=" indicate that MLDMS-PSO performs significantly better, worse and almost the same as the other comparison algorithm, respectively. The symbols "N+", "N-" and "N=" indicate the number that MLDMS-PSO is significantly better than, significantly worse than, and almost the same as the corresponding competitor algorithm, respectively. The comprehensive performance (CP) is equal to the sum of "N+" minus the sum of "N-". Table 6 shows that MLDMS-PSO is significantly better than the other PSO variants on most 10-D, 30-D and 50-D problems.” Please correct this mistake by presenting the results in the mentioned table. Another misunderstanding is that, although three Appendixes 1, 2 and 3 are presented in this manuscript, the authors never mention or analyze them. The authors should discuss or analyze all results presented in the paper, because without their comments it is impossible for a potential reader to understand the significance of this study.
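As an aside for readers, the "+"/"-"/"=" marks and the comprehensive performance (CP) metric described in the quoted passage can be reproduced with a small amount of code. The sketch below is our own illustration, not the authors' implementation: it uses the normal approximation to the two-sided Wilcoxon rank-sum test (reasonable for the ~30 independent runs typical of CEC-style comparisons) and assumes minimization, so a significantly smaller mean error counts as "+". All function names are ours.

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (ties receive average ranks)."""
    n1, n2 = len(a), len(b)
    pooled = sorted(list(a) + list(b))
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # average rank of the tie group
        i = j
    w = sum(rank[v] for v in a)             # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return math.erfc(abs((w - mu) / sigma) / math.sqrt(2))

def compare(ours, other, alpha=0.05):
    """'+', '-' or '=' from our algorithm's viewpoint, assuming minimization."""
    if ranksum_p(ours, other) >= alpha:
        return "="
    return "+" if sum(ours) / len(ours) < sum(other) / len(other) else "-"

def tally(marks):
    """Given '+'/'-'/'=' marks over all test functions, return
    (N+, N-, N=, CP), where CP = N+ - N-."""
    n_plus, n_minus, n_eq = (marks.count(s) for s in "+-=")
    return n_plus, n_minus, n_eq, n_plus - n_minus
```

Applying `compare` per test function and `tally` over the resulting marks yields exactly the N+, N-, N= and CP columns the passage describes.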
Response 2: Thank you for pointing this out. We have corrected the error in the table number reference (changed to Table 4) and made modifications to the content of Table 4 to better express the comparison between the proposed MLDMS-PSO and the other algorithms.
These can be found in the revised manuscript—page 20, Table 4.
In Section 4.3.1 of this paper, a comparative analysis was conducted on the performance of MLDMS-PSO on 10-D, 30-D, and 50-D problems using data from Appendixes 1, 2, and 3. We have summarized the average ranking of the experimental results of MLDMS-PSO and nine PSO variants on 29 test functions in Table 3 of the revised manuscript to demonstrate the advantages of the algorithm in solving 10-D, 30-D, and 50-D problems. We have also ranked the algorithms comprehensively according to the average rank, demonstrating the advantages of MLDMS-PSO.
These can be found in the revised manuscript—page 16.
Comments 3: This reviewer thinks that authors in conclusions can explicitly highlight the principal advantages and drawbacks (see comment 1) of their approach as well as suggest possible ways to resolve them in future research.
Response 3: Thank you for pointing this out. We agree with this comment and have revised the relevant statements in the conclusion. Based on the analysis of relevant experiments in the previous section, we have analyzed the exploration and exploitation capabilities demonstrated during its evolution process, pointed out its advantages and disadvantages, and proposed future research prospects in response to the shortcomings of the algorithm.
These can be found in the revised manuscript—page 21, paragraphs 1 and 2.
Revised:“the proposed MLDMS-PSO algorithm has better performance in multimodal, hybrid and composition problems. During the evolution process, it can better maintain population diversity, drive continuous population evolution, strengthen the algorithm's exploration ability, and obtain more accurate solutions. To simplify the algorithm, a linear reduction was adopted to control the number of subswarms, without considering the impact of population number on evolutionary efficiency at different stages of evolution. This sacrifices its exploitation ability to some extent, slows down the convergence speed, and also makes its performance on single-mode problems not ideal.”
“A good population diversity can enhance the exploration ability of algorithms, but it also weakens their local exploitation ability, and vice versa. Therefore, based on the characteristics of the problem and the different stages of algorithm evolution, exploring an efficient control strategy for the number and distribution of subswarms to ensure appropriate diversity in the overall population is worth further research.”
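The linear reduction of the subswarm count mentioned in the revised conclusion can be illustrated with a minimal sketch. The schedule below is hypothetical: the paper's exact formula is not reproduced in this correspondence, and the default starting count of 12 is an arbitrary placeholder, not a value taken from the manuscript.

```python
def subswarm_count(t, t_max, m_start=12, m_end=1):
    """Linearly shrink the number of subswarms from m_start (at t = 0)
    to m_end (at t = t_max).  Hypothetical schedule: the paper's exact
    formula is not given in this review correspondence."""
    frac = t / t_max
    return max(m_end, round(m_start - (m_start - m_end) * frac))
```

In an actual multiswarm algorithm, each decrease in the count would trigger a merge or regrouping of the particles into the remaining subswarms, which is where the diversity-versus-exploitation trade-off discussed above arises.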
Author Response File: Author Response.docx
Reviewer 2 Report
Comments and Suggestions for Authors
The purpose of this work is to improve the particle swarm optimization (PSO) algorithm by introducing multiple learning strategies and a modified dynamic multiswarm structure with a master-slave configuration. The proposed algorithm, MLDMS-PSO, aims to enhance population diversity and exploration-exploitation balance during optimization. It integrates modified comprehensive learning, local dimension learning, and united learning strategies, managed by a reward and punishment mechanism. The performance of MLDMS-PSO is validated through extensive experiments on benchmark functions, demonstrating superior accuracy and efficiency compared to existing PSO variants.
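For context, the dynamic-multiswarm idea that MLDMS-PSO builds on can be sketched in a few dozen lines. The code below is a generic illustration only: it implements periodic random regrouping of independent subswarms, in the spirit of classic DMS-PSO, and omits the paper's master-slave structure, learning strategies, and reward-and-punishment mechanism; all names and parameter values are our assumptions.

```python
import random

def sphere(x):
    """Separable unimodal test function: f(x) = sum(x_i^2), minimum 0 at origin."""
    return sum(v * v for v in x)

def multiswarm_pso(f, dim=10, n_sub=4, sub_size=5, iters=200, seed=1):
    """Generic dynamic-multiswarm PSO sketch (not the authors' MLDMS-PSO).
    Subswarms evolve independently around their own local best and are
    randomly regrouped every 10 iterations to exchange information."""
    rng = random.Random(seed)
    n = n_sub * sub_size
    pos = [[rng.uniform(-100.0, 100.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    init_best = min(pbest_f)
    groups = list(range(n))
    w, c1, c2 = 0.729, 1.494, 1.494        # common constriction-style weights
    for t in range(iters):
        if t % 10 == 0:                    # periodic random regrouping
            rng.shuffle(groups)
        for g in range(n_sub):
            members = groups[g * sub_size:(g + 1) * sub_size]
            lbest = min(members, key=lambda i: pbest_f[i])  # subswarm best
            for i in members:
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (pbest[lbest][d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                fi = f(pos[i])
                if fi < pbest_f[i]:
                    pbest_f[i], pbest[i] = fi, pos[i][:]
    return init_best, min(pbest_f)
```

Running `multiswarm_pso(sphere)` returns the best fitness before and after the search; the regrouping step is what lets information flow between otherwise isolated subswarms.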
Major comments:
1) Clearly define the specific gap in the current PSO algorithms that your work addresses. That could help frame the importance of your proposed method more effectively.
2) The algorithm descriptions are detailed but could benefit from visual aids such as flowcharts or diagrams. For example, a flowchart for Algorithm 1 (MDMS_Phase) would help readers understand the process at a glance.
3) In Section 4.1, when discussing the parameter tuning results, provide more insight into why certain parameter values (like G=12 and Rc=0.1) were optimal. This could include a brief discussion on the observed effects of varying these parameters.
4) Consider summarizing key results in a table format for quicker reference. For instance, a table comparing the performance of MLDMS-PSO with other algorithms across different dimensions could provide a clear visual summary.
5) Condense the detailed results of the Wilcoxon rank sum test into a more readable format, highlighting the most significant findings.
6) When discussing convergence, include visual comparisons (e.g., graphs) of convergence rates between MLDMS-PSO and other algorithms to illustrate your points more effectively.
7) Include practical examples or case studies demonstrating the application of MLDMS-PSO in real-world scenarios. This can help illustrate the practical significance of your work.
8) Consider adding more performance metrics or benchmarks to provide a more comprehensive evaluation of the algorithm’s effectiveness.
Minor comments:
1) In the introduction section, when mentioning "many researchers have developed population-based nature-inspired search optimization algorithms," specify the unique contributions of genetic algorithms, ant colony optimization, etc.
2) The equation numbering in Section 2.1 (Equations 1 and 2) should be in line with standard formatting guidelines (aligned properly and numbered consecutively).
3)Ensure consistent use of terminology throughout the paper. For instance, "master-slave structure" should be consistently referred to without variation.
4) Ensure all references are formatted according to the journal’s guidelines. For instance, ensure consistency in the use of et al., capitalization, and punctuation.
5) Please rely on the journal's template.
Comments on the Quality of English Language
There are minor spelling and grammar issues that need correction.
Author Response
Please see the attachment
Author Response File: Author Response.docx
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The authors did not address one comment of this reviewer: Appendixes 1-4 are still not mentioned in the text of the manuscript.
Author Response
Please see the attachment
Author Response File: Author Response.docx
Reviewer 2 Report
Comments and Suggestions for Authors
I have no additional remarks on the revised version.
Comments on the Quality of English Language
Minor editing of English language required.
Author Response
Please see the attachment
Author Response File: Author Response.docx