Article
Peer-Review Record

Fuzzy Guiding of Roulette Selection in Evolutionary Algorithms

Technologies 2025, 13(2), 78; https://doi.org/10.3390/technologies13020078
by Krzysztof Pytel
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4:
Submission received: 28 December 2024 / Revised: 7 February 2025 / Accepted: 10 February 2025 / Published: 12 February 2025
(This article belongs to the Section Information and Communication Technologies)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper proposes a roulette selection method for evolutionary algorithms guided by fuzzy logic. The author claims that this method can address the challenge of balancing exploration and exploitation. Generally speaking, this paper has some value to be published in this journal, provided the following concerns are addressed.

1. The presentation is generally okay. I like the writing style, which makes the paper accessible to readers with a background in evolutionary algorithms and fuzzy logic. However, it would be better to provide a flowchart, pseudocode, or stepwise illustrations to explain the fuzzy logic controller and its integration.

2. Further, some paragraphs, particularly in Sections 2.1 and 2.2, can be improved by using bulleted lists or subheadings to enhance clarity.

3. The review of related works is somewhat limited, focusing mainly on earlier works by the author without a broader comparison to similar approaches in the field. The authors should include some relevant papers, especially those also addressing the balance between exploration and exploitation.

4. Moreover, the author should also be crystal clear about the unique contributions of this paper, especially in comparison with papers in the literature.

5. Some figures, such as Figure 1 (block diagram of the algorithm), are not sufficiently detailed, making it difficult to understand the exact workflow. Further, the captions for tables and figures are brief and do not provide enough context for standalone interpretation.

6. The rationale for excluding other popular test cases (e.g., multi-objective optimization) is not clear. While the author provides some discussion of multi-dimensional problems, it would be better to include further visualizations to interpret the results.

Comments on the Quality of English Language

See my above comments.

Author Response

Response to Reviewer 1 Comments

 

1. Summary

 

 

Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted files.

 

2. Questions for General Evaluation (Reviewer's Evaluation / Response and Revisions)

Does the introduction provide sufficient background and include all relevant references? Can be improved / Improved
Is the research design appropriate? Can be improved / Improved
Are the methods adequately described? Can be improved / Improved
Are the results clearly presented? Can be improved / Improved
Are the conclusions supported by the results? Yes

 

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1:

The presentation is generally okay. I like the writing style, which makes the paper accessible to readers with a background in evolutionary algorithms and fuzzy logic. However, it would be better to provide a flowchart, pseudocode, or stepwise illustrations to explain the fuzzy logic controller and its integration.

Response 1:

 

Therefore, I have added a new paragraph (page 7 line 284).

Algorithm 1 presents the pseudocode of the modified Evolutionary Algorithm.

 

Algorithm 1 Adaptive Parameter Evolutionary Algorithm

    generation ← 0
    initialize first population
    evaluate first population
    keep best in population

    // Main evolutionary loop
    while not termination condition do
        generation ← generation + 1

        // Adjust parameters if enough history is available
        if generation > history_size then
            evaluate estimators for FLC
            alter selection probability by FLC
        end if

        // Perform genetic algorithm operations
        select individuals to parent pool
        perform crossover and mutation
        evaluate population
        apply elitism
    end while
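As a complement to the pseudocode, the main loop can be sketched as a minimal runnable program. This is an illustrative sketch only: the OneMax fitness and the `adjust_pressure` heuristic are invented stand-ins for the paper's benchmarks and its actual Fuzzy Logic Controller.

```python
import random

HISTORY_SIZE = 5           # generations of history before adaptation starts
POP_SIZE, GENOME_LEN = 20, 16

def fitness(ind):
    # OneMax: number of 1-bits (a toy stand-in for the paper's benchmarks)
    return sum(ind)

def adjust_pressure(history, pressure):
    # Hypothetical stand-in for the FLC step: raise selection pressure
    # while average fitness keeps growing, relax it when progress stalls.
    return min(2.0, pressure + 0.1) if history[-1] > history[0] else max(0.5, pressure - 0.1)

def roulette_select(pop, weights):
    # Classic roulette wheel: pick proportionally to the given weights.
    pick = random.uniform(0, sum(weights))
    acc = 0.0
    for ind, w in zip(pop, weights):
        acc += w
        if acc >= pick:
            return ind
    return pop[-1]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
pressure, history, generation = 1.0, [], 0

while generation < 80 and max(map(fitness, pop)) < GENOME_LEN:
    generation += 1
    fits = [fitness(ind) for ind in pop]
    history.append(sum(fits) / len(fits))
    if generation > HISTORY_SIZE:                 # enough history: run the controller
        pressure = adjust_pressure(history[-HISTORY_SIZE:], pressure)
    weights = [f ** pressure for f in fits]       # controller output reshapes selection
    best = max(pop, key=fitness)                  # elitism: keep best in population
    children = []
    while len(children) < POP_SIZE - 1:
        a, b = roulette_select(pop, weights), roulette_select(pop, weights)
        cut = random.randrange(1, GENOME_LEN)     # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.05:                # point mutation
            i = random.randrange(GENOME_LEN)
            child[i] ^= 1
        children.append(child)
    pop = children + [best]

print(generation, max(map(fitness, pop)))
```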

 


Comments 2:

Further, some paragraphs, particularly in Sections 2.1 and 2.2, can be improved by using bulleted lists or subheadings to enhance clarity.

Response 2:

 

Therefore, I have divided Section 2.1 (page 4, line 177) into Subsections 2.1.1 (page 5, line 291), 2.1.2 (page 6, line 232), 2.1.3 (page 7, line 277), and 2.1.4 (page 8, line 200).

The text in the section has been revised.

 


Comments 3:

The review of related works is somewhat limited, focusing mainly on earlier works by the author without a broader comparison to similar approaches in the field. The authors should include some relevant papers, especially those also addressing the balance between exploration and exploitation.

 

Response 3:

Agree. I have modified the text (page 2, line 49).

 

A proper exploration-exploitation relationship (EER) is crucial for the convergence and effectiveness of an algorithm. The EER problem is discussed extensively in the literature. Paper [1] discusses the exploration-exploitation dilemma from the perspective of entropy. It determines the relationship between entropy and the dynamic adaptive process of exploration and exploitation. The authors propose an adaptive framework called AdaZero, which automatically decides whether to explore or exploit, as well as balances the two effectively. Pagliuca, in [2], suggests that finding an effective trade-off between exploration and exploitation depends on the specific task under consideration. Mutating a single gene enables the algorithm to discover adaptive modifications, ultimately leading to improved performance. Conversely, mutating multiple genes simultaneously in each iteration neutralizes the beneficial effects of modifying some gene values due to unfavorable changes in others. In paper [3], the authors designed and analyzed self-adjusting evolutionary algorithms (EAs) for multimodal optimization. They proposed a module called stagnation detection, which can be integrated into existing EAs, including those addressing unimodal problems, without altering their fundamental behavior. The stagnation detection module monitors the number of unsuccessful steps and increases the mutation rate based on statistically significant waiting times without improvement.
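The stagnation-detection idea summarized above for [3] can be sketched as follows. The threshold formula and reset policy here are illustrative simplifications, not the exact mechanism of [3]: the module counts unsuccessful steps and raises the mutation strength once improvement at the current strength has become statistically unlikely.

```python
import math

class StagnationDetector:
    """Simplified sketch of stagnation detection: after enough
    unsuccessful steps at the current mutation strength, the strength
    is increased. Threshold constants are illustrative only."""

    def __init__(self, genome_len):
        self.n = genome_len
        self.strength = 1          # number of bits flipped per mutation
        self.fails = 0

    def threshold(self):
        # Waiting time after which improvement at the current strength
        # is considered unlikely (illustrative simplified form).
        return math.comb(self.n, self.strength) * math.log(self.n) * 2

    def report(self, improved):
        if improved:
            self.fails = 0
            self.strength = 1      # reset to the smallest neighborhood
        else:
            self.fails += 1
            if self.fails > self.threshold():
                self.strength = min(self.strength + 1, self.n)
                self.fails = 0

sd = StagnationDetector(genome_len=10)
for _ in range(200):
    sd.report(improved=False)      # simulate a long run without progress
print(sd.strength)
```

After 200 unsuccessful steps on a 10-bit genome, the detector has escalated once and now flips two bits per mutation.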

New references added:

[1] R. Yan et al. The Exploration-Exploitation Dilemma Revisited: An Entropy Perspective. 2024. ArXiv preprint: https://arxiv.org/pdf/2408.09974

 

[2] P. Pagliuca. Analysis of the Exploration-Exploitation Dilemma in Neutral Problems with Evolutionary Algorithms. Journal of Artificial Intelligence and Autonomous Intelligence. 2024; 1(2):8.

 

[3] A. Rajabi, C. Witt. Self-Adjusting Evolutionary Algorithms for Multimodal Optimization. Algorithmica 84, 1694–1723. 2022. https://doi.org/10.1007/s00453-022-00933-z

 


Comments 4:

Moreover, the author should also be crystal clear about the unique contributions of this paper, especially in comparison with papers in the literature.

Response 4:

Agree. I have modified the text (page 3, line 153).

 

This paper provides the following contributions:

• A novel algorithm called FLC-EA is introduced.

• A thorough investigation and discussion of new estimators, which play a crucial role in fine-tuning the selection probability, is presented.

• An FLC that controls the selection probability in EAs was developed.

• Experiments on a wide set of benchmark functions and a real-life problem were conducted.

 


Comments 5:

Some figures, such as Figure 1 (block diagram of the algorithm), are not sufficiently detailed, making it difficult to understand the exact workflow. Further, the captions for tables and figures are brief and do not provide enough context for standalone interpretation.

Response 5:

Agree. I have added the pseudocode of the algorithm to complement the block diagram.

The captions have been modified as follows:

 

Page 7 line 282

Figure 1. The modified Evolutionary Algorithm's block diagram; the Fuzzy Logic Controller's block is shown in gray.

Page 9 line 320

Figure 3. FLC rules database for selection probability adaptation.

Page 13 line 495

Table 1. Results for the optimization of the f1 function: the number of generations and the running time, including the minimum, average, maximum, and standard deviation, respectively.

Page 14 line 509

Table 2. Results for the optimization of the CEC functions (CEC_f1, CEC_f2, and CEC_f3): the number of generations and the running time, including the minimum, average, maximum, and standard deviation.

Page 14 line 521

Table 3. Results for the optimization of difficult-to-optimize functions (Rastrigin, Styblinski-Tang, Rosenbrock, and Shubert): the number of generations and the running time, including the minimum, average, maximum, and standard deviation.

Page 14 line 548

Table 4. Optimization results for the f2 and f3 functions: the number of generations and the running time, including the minimum, average, maximum, and standard deviation.

Page 16 line 581

Table 5. Optimization results for ConFLP: the number of generations, including the minimum, average, maximum, and standard deviation.

Page 16 line 583

Table 6. Optimization results for ConFLP: the running time, including the minimum, average, maximum, and standard deviation.

Page 18 line 589

Figure 5. Results of the algorithms for ConFLP of different sizes: the number of generations until the stop criterion is met.

Page 18 line 592

Figure 6. Results of the algorithms for ConFLP of different sizes: the running time until the stop criterion is met.

Page 19 line 603

Figure 7. An example distribution of computer terminals and network nodes; large black circles represent cluster centroids: (a) task with 125 terminals, (b) task with 250 terminals, (c) task with 500 terminals, (d) task with 1000 terminals.

 


Comments 6:

The rationale for excluding other popular test cases (e.g., multi-objective optimization) is not clear. While the author provides some discussion of multi-dimensional problems, it would be better to include further visualizations to interpret the results.

 

Response 6:

Agree. ConFLP was chosen as an example of a complex optimization problem. This example demonstrates the potential of the proposed method for applications beyond function optimization. Future work includes conducting broader research and analyzing the effectiveness of the proposed method on real-life examples.

 

I have added new text (page 19, line 610).

3.7. Test of the algorithm's performance on a clustering problem

A dataset from The Fundamental Clustering Problems Suite [30] was selected as a benchmark. The experiment utilized two-dimensional tasks containing between 400 and 4096 objects and involving 2 to 3 clusters. The stopping criterion was based on minimizing the sum of the distances of objects from their respective centroids. This criterion value was determined for each task during preliminary experiments. Table 7 presents the outcomes from 30 runs of each algorithm for optimizing clustering problems, with sizes ranging from 400 to 4096 objects.

Table 7. The best results of all runs obtained by the algorithms in tests on selected benchmarks.

Problem name                               Lsun    TwoDiamonds   WingNut   EngyTime
Number of objects                          400     800           1070      4096
Number of clusters                         3       2             2         2
Number of dimensions                       2       2             2         2
Stop criterion                             600     820           1300      8395
Number of generations to stop criterion
    EA                                     22849   3851          11642     19621
    FLC-EA                                 6056    2916          2350      1350
Running time to stop criterion [s]
    EA                                     18.8    6.5           23.7      158.6
    FLC-EA                                 6.5     4.1           8.2       22.5

 

The FLC-EA algorithm requires fewer generations to converge to a solution and demonstrates reduced running times across all test sizes compared to the standard EA.

 


4. Response to Comments on the Quality of English Language

Point 1:

The English could be improved to more clearly express the research.

Response 1:

The text has been revised according to the suggestions of an English speaker.

5. Additional clarifications

 

 

Reviewer 2 Report

Comments and Suggestions for Authors

In this paper, a new evolutionary algorithm method combined with fuzzy logic is proposed, which shows strengths in theoretical innovation and algorithmic performance. However, the article still needs to be strengthened in several areas to improve its academic value, readability, and the credibility of the experimental results. The following are specific suggestions for improvement:

1. The introduction section summarizes the background and existing research on the combination of fuzzy logic and evolutionary algorithms rather sparsely, and lacks discussion of recent research progress. More high-quality related literature could be cited and the shortcomings of existing research could be analyzed.

2. The experiments only tested some functional optimization problems and specific real-world problems, and did not involve different types of complex optimization scenarios. It is suggested to add more experiments to verify the applicability of the algorithm on different problems.

3. This paper emphasizes the performance improvement of the algorithm, but its computational complexity is not described in sufficient detail. It is suggested to add related discussions.

4. Some of the charts are not clear enough, resulting in the inability to quickly understand the experimental results. It is suggested to optimize the design of the diagrams, such as using more intuitive colors or labels, and providing more detailed illustrations.

5. Some of the statements are lengthy, such as in the method section and experimental results, which are more verbose. It is suggested to simplify the language and improve the conciseness and logic of the content.

6. The experimental results are only compared with traditional genetic algorithms and some benchmark methods. It is suggested to introduce more cutting-edge optimization algorithms for comparison.

7. The conclusion section summarizes the results of the study, and the outlook for future research is rather simple. Further discussion on the strengths, weaknesses, and possible limitations of the methods is recommended.

8. The number of references is low and lacks references from the last two years; it is suggested to add the latest literature.

Comments on the Quality of English Language

Can be improved

Author Response

Response to Reviewer 2 Comments

 

1. Summary

 

 

Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted files.

 

2. Questions for General Evaluation (Reviewer's Evaluation / Response and Revisions)

Does the introduction provide sufficient background and include all relevant references? Can be improved / Improved
Is the research design appropriate? Can be improved / Improved
Are the methods adequately described? Can be improved / Improved
Are the results clearly presented? Can be improved / Improved
Are the conclusions supported by the results? Can be improved / Improved

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1:

The introduction section summarizes the background and existing research on the combination of fuzzy logic and evolutionary algorithms rather sparsely, and lacks discussion of recent research progress. More high-quality related literature could be cited and the shortcomings of existing research could be analyzed.

Response 1:

 

Therefore, I have added new text (page 3, line 99).

 

In paper [12], a fuzzy genetic algorithm is proposed for solving binary-encoded combinatorial optimization problems, such as the Multidimensional 0/1 Knapsack Problem. In this algorithm, a fuzzy logic controller is used to select the crossover operator and selection techniques based on population diversity. Population diversity is evaluated using the genotype and phenotype features of the chromosomes. Syzonov et al., in paper [13], propose two types of fuzzy genetic algorithms. In the first algorithm, EFGA, fuzzy rules are employed to update simulation parameters. The inputs to the fuzzy system include average fitness, best fitness, and average fitness change, while the outputs are the crossover rate, mutation rate, and reproductive group size. In the second algorithm, GFGA, the population is divided into two groups based on gender. A crossover between two individuals is permitted only if they belong to opposite genders. The selection rules vary for male and female genomes. Some male genomes are randomly chosen and paired with female genomes based on population diversity and the age of the male genomes. Selection is further governed by the maximum allowable age of an individual. If a genome's age exceeds this limit, it is removed from the population. Male genomes are selected randomly, and the age of potential female partners is estimated using fuzzy rules that consider the age of the selected males and population diversity. The main idea of paper [14] is to hybridize the genetic algorithm with a fuzzy logic controller to generate adaptive genetic operators. The proposed algorithm is designed to control the crossover and mutation rates by using GA performance measures, such as the minimum and average fitness values between two consecutive generations, as input variables. The outputs of the fuzzy logic controllers are the adjustments to the crossover and mutation rates. 
The proposed scheme employs two independent FLCs: a fuzzy-crossover controller and a fuzzy-mutation controller, which are implemented separately.

 

New references added:

[12] M. Jalali Varnamkhasti, L.S. Lee, M.R. Bakar, W.J. Leong. A genetic algorithm with fuzzy crossover operator and probability. 2012. Advances in Operations Research 956498.

[13] O. Syzonov, S. Tomasiello, N. Capuano. New Insights into Fuzzy Genetic Algorithms for Optimization Problems. Algorithms 2024, 17, 549. https://doi.org/10.3390/a17120549

[14] A. Saiful et al. Adaptive Fuzzy-Genetic Algorithm Operators for Solving Mobile Robot Scheduling Problem in Job-Shop FMS Environment. 2024. Robotics and Autonomous Systems 176.104683

 


Comments 2:

 

The experiments only tested some functional optimization problems and specific real-world problems, and did not involve different types of complex optimization scenarios. It is suggested to add more experiments to verify the applicability of the algorithm on different problems.

Response 2:

Agree. Subsection 2.3.2 has been added (page 12, line 472).

 

The clustering problem involves assigning objects to groups of similar objects, called clusters, based on their features. Each object is described in d-dimensional space. The goal is to group the objects into k clusters such that similar objects are assigned to the same cluster. Typically, the Euclidean distance between pairs of objects is used as a similarity measure, although other measures can also be applied. Clustering must satisfy the following constraints: each object can be assigned to only one cluster at a time, and each cluster must contain at least one object.
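Under these constraints, assigning each object to its nearest centroid and summing the Euclidean distances gives the cost that the algorithms minimize. A minimal sketch (the data points below are illustrative, not taken from the benchmark suite):

```python
import math

def clustering_cost(objects, centroids):
    """Sum of Euclidean distances from each object to its nearest
    centroid; assigning each object to exactly one (nearest) cluster
    satisfies the constraints stated above."""
    total = 0.0
    for obj in objects:
        total += min(math.dist(obj, c) for c in centroids)
    return total

# Illustrative 2-D data: two tight groups of two points each.
objects = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
good = clustering_cost(objects, [(0.05, 0.0), (5.05, 5.0)])  # centroids inside the groups
bad = clustering_cost(objects, [(2.5, 2.5), (7.0, 7.0)])     # centroids placed poorly
print(good < bad)
```

Well-placed centroids yield a lower cost, which is exactly the signal the stopping criterion in the experiment is based on.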

 

Subsection 3.7 has been added (page 19, line 611).

 

A dataset from The Fundamental Clustering Problems Suite [30] was selected as a benchmark. The experiment utilized two-dimensional tasks containing between 400 and 4096 objects and involving 2 to 3 clusters. The stopping criterion was based on minimizing the sum of the distances of objects from their respective centroids. This criterion value was determined for each task during preliminary experiments. Table 7 presents the outcomes from 30 runs of each algorithm for optimizing clustering problems, with sizes ranging from 400 to 4096 objects.

Table 7. The best results of all runs obtained by the algorithms in tests on selected benchmarks.

Problem name                               Lsun    TwoDiamonds   WingNut   EngyTime
Number of objects                          400     800           1070      4096
Number of clusters                         3       2             2         2
Number of dimensions                       2       2             2         2
Stop criterion                             600     820           1300      8395
Number of generations to stop criterion
    EA                                     22849   3851          11642     19621
    FLC-EA                                 6056    2916          2350      1350
Running time to stop criterion [s]
    EA                                     18.8    6.5           23.7      158.6
    FLC-EA                                 6.5     4.1           8.2       22.5

 

The FLC-EA algorithm requires fewer generations to converge to a solution and demonstrates reduced running times across all test sizes compared to the standard EA.

 

A new reference added

 

[30] The Fundamental Clustering Problems Suite. https://www.uni-marburg.de/fb12/datenbionik/data/

 


Comments 3:

 

This paper emphasizes the performance improvement of the algorithm, but its computational complexity is not described in sufficient detail. It is suggested to add related discussions.

Response 3:

Agree. I have added a subsection (page 21, line 684).

The complexity of the FLC-EA algorithm is determined by its two most computationally expensive operations. The first is the fuzzy inference system, which has polynomial complexity, as stated in [31]. The second is the evolutionary mechanism, whose complexity depends on factors such as the genetic operators, their implementation, the representation of individuals, the population size, and the fitness function. For point mutation, one-point crossover, and roulette wheel selection, the complexity of the evolutionary algorithm (EA) is O(g(nm + nm + n)), where g is the number of generations, n is the population size, and m is the size of individuals. Simplifying this expression, the complexity is on the order of O(gnm). This estimate, however, ignores the complexity of the fitness function, which varies depending on the problem being solved. Experiments have confirmed that FLC-EA requires fewer generations than the standard EA, so its overall computational cost is lower. The total running time still depends on the fitness function: the more expensive the fitness evaluation, the larger the time saving obtained from the reduced number of generations.

A new reference added

[31] L. T. Kóczy, Computational complexity of various fuzzy inference algorithms, in: Annales Univ. Sci. Budapest, Sect. Comp, Vol. 12, 1991, pp. 151-158.

 


Comments 4:

 

Some of the charts are not clear enough, resulting in the inability to quickly understand the experimental results. It is suggested to optimize the design of the diagrams, such as using more intuitive colors or labels, and providing more detailed illustrations.

Response 4:

Agree. Figures 2, 4, 5, and 7 have been improved: colors introduced.

 


Comments 5:

 

Some of the statements are lengthy, such as in the method section and experimental results, which are more verbose. It is suggested to simplify the language and improve the conciseness and logic of the content.

Response 5:

Agree. I have revised the text in Sections 2 and 3 according to the suggestions of an English speaker.

revised text (page 4 line 179)

Various variants of EA have been developed, each employing different strategies for mutation, selection, and parameter tuning. Key algorithm parameters, such as selection and mutation probabilities, as well as population size, play a crucial role in shaping the optimization process. Incorporating information from previous generations allows the algorithm to make more informed decisions, improving its ability to adapt to complex optimization landscapes. Dynamic parameter adjustment during the execution of an EA enhances its efficiency by facilitating faster convergence while reducing the risk of entrapment in suboptimal solutions. This adaptivity enables the algorithm to balance the trade-off between exploration and exploitation effectively, improving its performance across a wide range of optimization problems. Given that historical knowledge of optimization progress is often descriptive, uncertain, and imprecise, an FLC can be employed to model and utilize this knowledge efficiently. Using descriptive rules in the form of IF-THEN statements, the FLC can dynamically adjust parameters based on both historical and current optimization progress. Selection probability and fitness value are critical factors in determining an individual’s potential to act as a parent and produce offspring. While well-adapted individuals are typically favored, weaker individuals are also given an opportunity to contribute to the genetic pool. This approach ensures genetic diversity and helps prevent premature convergence. By integrating FLC into the EA selection process, a more balanced and adaptive approach to parental selection can be achieved, ultimately enhancing the algorithm’s overall performance.

revised text (page 6 line 233)

A new selection probability is calculated by FLC for each individual in every generation, based on the following rules:

1.     Enlarge selection probability for high-quality individuals:

·         Individuals with fitness values above the generation's average will have their selection probability increased.

·         This prioritization allows such individuals to contribute more offspring, enhancing the algorithm's ability to exploit existing good solutions.

2.     Enlarge selection probability for populations with many positive reproductions:

·         Individuals that produce offspring with better fitness than their own will gain higher selection probabilities.

·         This encourages the exploitation of successful reproduction patterns.

3.     Enlarge selection probability for populations with a high historical growth ratio:

·         If the average fitness value in the current population exceeds the historical average fitness, it suggests evolutionary progress.

·         Increasing selection probability under these conditions enhances exploitation and accelerates convergence.

4.     Enlarge selection probability when the relative distance to the best solution is small:

·         As the algorithm progresses, the gap between the population’s average fitness and the best-found fitness decreases.

·         Increasing selection probability in this final phase focuses the algorithm on refining the best solutions, improving exploitation and convergence.

5.     Diminish selection probability for low-quality individuals:

·         Individuals with fitness values below the generation's average will have their selection probability reduced.

·         This limits their contribution to the parent pool, promoting exploitation of better solutions.

6.     Diminish selection probability for populations with few positive reproductions:

·         Individuals that produce offspring with lower fitness than their own will have their selection probability diminished.

·         This encourages exploration by reducing focus on unsuccessful reproduction patterns.

7.     Diminish selection probability for populations with a low historical growth ratio:

·         If the population's average fitness is below the historical average, it indicates a lack of evolutionary progress.

·         Reducing selection probability in such cases focuses resources on better exploitation strategies.

8.     Diminish selection probability when the relative distance to the best solution is large:

·         In the initial stages, the average fitness of the population is often far from the best-found fitness.

·         Reducing selection probability during this phase promotes genetic diversity, enhancing the algorithm’s exploratory capabilities.

revised text (page 6 line 278)

Figure 1 presents the block diagram of the modified EA, with the FLC block highlighted in gray. Figure 2 illustrates the input and output linguistic variables and their corresponding membership functions used by the FLC.

revised text (page 8 line 292)

Estimators describing an individual’s quality and the population's historical behaviors are calculated using formulas (1–4), based on data from the current generation and several preceding generations. The tuning of selection probabilities begins with the first generation where all required historical data are available. The number of generations used for these calculations depends on the history size, a parameter adjustable by the user to tailor the algorithm to the specific problem. The FLC computes the selection probability ratio for all individuals using a predefined formula.

revised text (page 8 line 304)

The value of the selection probability ratio is restricted to 20% of its initial value to prevent rapid fluctuations. The Fuzzy Logic Toolbox on the MATLAB platform was utilized to develop, simulate, and test the FLC. The FLC is based on the Mamdani model and employs the “center of gravity” method for defuzzification. During initial experiments, data such as the number of successful reproductions, individual fitness values, and the population’s average fitness were collected. This statistical information was used to define the Membership Functions (MFs) of the input and output Linguistic Variables (LVs), including their number, shape, and characteristics. The rule database, which is depicted in Figure 3, enables the integration of expert knowledge into the optimization process. It comprises a set of simple rules that allow users to tailor the EA to specific problems by adjusting the selection probability. Additionally, other parameters, such as gene representation, population size, and mutation probability, can also be programmed. Users can customize the history size, the number of MFs, and their shape and properties. The algorithm was implemented in C++ using standard libraries and designed to run on the Windows operating system.
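As a rough illustration of how a Mamdani-style controller turns an input estimator into a selection-probability ratio, the toy one-input controller below uses triangular membership functions and a discrete center-of-gravity defuzzification. The membership functions, rule outputs, and the single input (fitness relative to the population average) are invented for illustration; they are not the rule database of Figure 3.

```python
def tri(x, a, b, c):
    # Triangular membership function with peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def flc_ratio(rel_fitness):
    """Toy one-input Mamdani controller: input is the individual's
    fitness relative to the population average (1.0 = average); output
    is a selection-probability ratio, defuzzified by a discrete
    center-of-gravity over the rule output singletons."""
    # Fuzzify the input into three linguistic terms.
    low = tri(rel_fitness, 0.0, 0.5, 1.0)
    avg = tri(rel_fitness, 0.5, 1.0, 1.5)
    high = tri(rel_fitness, 1.0, 1.5, 2.0)
    # Rules: low fitness -> diminish (0.9), average -> keep (1.0),
    # high -> enlarge (1.1); aggregate by membership-weighted centroid.
    num = low * 0.9 + avg * 1.0 + high * 1.1
    den = low + avg + high
    return num / den if den else 1.0

print(flc_ratio(0.6), flc_ratio(1.0), flc_ratio(1.4))
```

Below-average individuals get a ratio under 1.0 and above-average individuals a ratio over 1.0, mirroring the enlarge/diminish rules listed earlier; the output range here also stays well within a 20% band around the neutral value.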

revised text (page 10 line 345)

In a Function Optimization Problem (FOP), the objective is to determine the best parameter or a set of optimal parameters that maximize or minimize a given objective function while adhering to specified constraints. FOPs, owing to their simplicity and the availability of well-documented test functions, are often used as benchmarks to evaluate and compare the performance of various optimization methods, including Evolutionary Algorithms, metaheuristic algorithms, and other optimization techniques. Several popular approaches for solving FOPs are discussed in the literature, for example, in [13–15]. FOPs, with their diverse features encompassing various complexities and characteristics, provide an ideal environment for assessing the impact of tuning the selection probability in EAs. The set of test functions used in the experiments includes functions with a broad range of complexity, differing in difficulty, dimensionality, and the number of local or global optima. These functions include:

revised text (page 11 line 389)

In a Multiple-Objective Optimization Problem (MOP), multiple objective functions must be optimized simultaneously, often with competing or conflicting objectives. Improving one objective may result in the degradation of another. The goal of MOP is to identify a solution or a set of solutions such that no other solution is superior in all objectives simultaneously. This solution set, commonly referred to as the Pareto optimal set, represents the best achievable trade-offs among all the objective functions. Each solution in the Pareto set is non-dominated, meaning that no other solution in the set is better across all objectives.
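Pareto dominance, on which the notion of a non-dominated set rests, can be checked in a few lines (minimization assumed; the objective vectors are illustrative):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization): a is no
    worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Keep the non-dominated solutions: those for which no other
    # solution is better in all objectives simultaneously.
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]

# Illustrative (cost, distance) pairs for a two-objective problem.
points = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 5), (7, 1)]
print(pareto_front(points))
```

Each surviving pair trades lower cost for higher distance or vice versa, which is exactly the trade-off structure described above.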

revised text (page 11 line 398)

The Connected Facility Location Problem (ConFLP) [6,17] serves as a practical example of a MOP. The primary goal of ConFLP is to determine the optimal placement of facilities (or network nodes) and establish the connections between these facilities and demand points (e.g., computer terminals). ConFLP's objectives include minimizing construction or operational costs while maximizing coverage or minimizing total service distance. The problem is inherently complex due to the need to balance multiple conflicting goals and objective functions, with each objective contributing differently to the overall quality of the solution. Additionally, the correlation between network node placement and terminal connectivity further complicates the problem. Traditional single-objective optimization methods are inadequate for solving ConFLP; EAs, however, offer a robust approach to handling such problems. The ConFLP model integrates three classical problems fundamental to designing an efficient computer network topology:

·         Network Nodes Location Problem: network nodes must be placed within a given geographic area to minimize the total length of communication links between nodes and terminals. Placing network nodes closer to computer terminals reduces the total communication distance, leading to decreased latency and improved overall network performance. The placement must also consider real-world constraints such as physical barriers, existing infrastructure, and geographic boundaries.

·         Network Nodes Number Problem: the number of network nodes should be minimized to reduce infrastructure costs and simplify network management, while ensuring effective coverage and connectivity. A smaller number of nodes may lead to longer distances between terminals and nodes, potentially increasing communication delays and causing network congestion.

·         Terminal Assignment Problem: each computer terminal must be assigned to its nearest network node to minimize the total link length between nodes and terminals. This assignment involves solving the nearest-neighbor problem efficiently while ensuring proper load balancing to prevent overloading specific nodes.
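As a small illustration of the Terminal Assignment subproblem above, the following sketch greedily connects each terminal to its nearest node by Euclidean distance. The coordinates are hypothetical, and the paper's EA evolves assignments rather than computing them greedily.

```python
# Hypothetical illustration of the Terminal Assignment subproblem: connect
# each terminal to its nearest network node by Euclidean distance and sum
# the resulting link lengths. Coordinates are invented for the example.
from math import dist  # Python 3.8+

def assign_terminals(terminals, nodes):
    """Return, for each terminal, the index of its nearest network node."""
    return [min(range(len(nodes)), key=lambda i: dist(t, nodes[i]))
            for t in terminals]

nodes = [(0.0, 0.0), (10.0, 10.0)]
terminals = [(1.0, 2.0), (9.0, 9.0), (4.0, 4.0)]
assignment = assign_terminals(terminals, nodes)
# total link length under this assignment
total = sum(dist(t, nodes[i]) for t, i in zip(terminals, assignment))
```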

revised text (page 12 line 439)

In a given service area A, with n network nodes and known locations of computer terminals (x,y), the objective is to determine the optimal placement of network nodes and assign terminals to these nodes to minimize the total link length between terminals and nodes. Designing computer networks presents several practical challenges. Cisco's hierarchical model for local networks divides the network into three layers: core, distribution, and access. This model emphasizes reducing the number of network nodes between the network's extreme points. The Spanning Tree Protocol (STP) was developed and implemented to prevent loops in switched networks with redundancy. To ensure stability, STP mandates that no more than seven nodes exist between any two endpoints. This limitation must be respected when determining network node placement and terminal associations. The constraints introduced by Cisco's hierarchical design model and STP add significant complexity to the network design process.
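The STP constraint described above (no more than seven nodes between any two endpoints) can be checked mechanically on a loop-free topology. A sketch with a hypothetical chain topology:

```python
# Hedged sketch of the STP depth check described above: in a loop-free
# (tree) topology, verify that no more than `limit` nodes lie between any
# two endpoints. The chain topology below is invented for the example.
from collections import deque
from itertools import combinations

def nodes_between(adj, u, v):
    """Count intermediate nodes on the unique tree path from u to v (BFS)."""
    prev = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            break
        for y in adj[x]:
            if y not in prev:
                prev[y] = x
                q.append(y)
    # walk back from v, counting nodes strictly between u and v
    count, x = 0, prev[v]
    while x is not None and x != u:
        count, x = count + 1, prev[x]
    return count

def stp_ok(adj, endpoints, limit=7):
    return all(nodes_between(adj, u, v) <= limit
               for u, v in combinations(endpoints, 2))

# chain A-1-2-3-B: three intermediate nodes between endpoints A and B
adj = {"A": ["1"], "1": ["A", "2"], "2": ["1", "3"], "3": ["2", "B"], "B": ["3"]}
```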

revised text (page 12 line 451)

The proposed Evolutionary Algorithm can find feasible solutions that not only minimize costs but also comply with the operational and protocol-driven constraints of the network. To achieve this, an additional constant C is introduced to transform the decreasing fitness function into an increasing one. This transformation ensures that solutions with lower total link lengths correspond to higher fitness function values. The constant C must be sufficiently large to ensure that the fitness value remains positive. Both the value of C and the stopping criterion are determined experimentally, depending on the problem's complexity and the desired solution quality. Potential solutions to the problem are encoded using two tables, representing both node positions and terminal-node associations:

·         Node positions by geographical coordinates: this table contains the coordinates of each network node. The number of genes in an individual equals the number of network nodes (n). Each gene consists of two real numbers, representing the x and y coordinates of a node.

·         Terminal-node associations: this table assigns each terminal to a network node using integers. The number of genes in an individual equals the number of terminals. A value i at position k indicates that terminal k is connected to node i.

Different mutation probabilities are set experimentally for the two tables to control the balance between exploration and exploitation, depending on the task size.
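A minimal sketch of the fitness transformation and two-table encoding described above; the value of C, the coordinates, and the assignment are illustrative placeholders, not the paper's tuned values.

```python
# Minimal sketch of the fitness transformation described above: the constant
# C turns the decreasing total-link-length objective into an increasing
# fitness, so shorter networks score higher. C and the encoding values here
# are illustrative placeholders, not the paper's tuned settings.
from math import dist

C = 1000.0  # must be large enough to keep fitness positive

def fitness(node_positions, terminal_assignment, terminals):
    """C minus total link length: lower total length -> higher fitness."""
    total = sum(dist(t, node_positions[i])
                for t, i in zip(terminals, terminal_assignment))
    return C - total

# two-table encoding of one individual, as in the text:
node_positions = [(2.0, 3.0), (8.0, 8.0)]        # table 1: (x, y) per node
terminal_assignment = [0, 0, 1]                  # table 2: node index per terminal
terminals = [(1.0, 3.0), (2.0, 5.0), (8.0, 9.0)]
```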

revised text (page 13 line 481)

A set of computational experiments was conducted to evaluate the efficiency, performance, and convergence of the proposed FLC-EA algorithm compared to a standard evolutionary algorithm (EA). The experiments, divided into several stages, were performed on a system with the following specifications: an Intel i7 processor supported by 8 GB of RAM. During the initial phase, the algorithm’s parameters—such as population size, selection probability, and mutation probability—were fine-tuned to best fit the test tasks.

revised text (page 13 line 488)

In the first stage of the experiments, the primary goal was to assess the efficiency of the algorithms in finding the exact optimal value. A simple monotonic function f1 with 5, 10, and 20 variables was selected. The results of the proposed FLC-EA algorithm, including performance and efficiency, were compared with those of the standard EA algorithm [2]. Table 1 presents the outcomes from 30 runs of each algorithm (FLC-EA and EA), including the number of generations, the algorithm's runtime until the stopping criterion was met, and their standard deviations.

revised text (page 14 line 498)

Results from the first-stage experiments on easy-to-optimize functions confirm that the FLC-EA algorithm requires 43% to 81% fewer generations and 38% to 82% less runtime compared to the standard Evolutionary Algorithm. Although the inclusion of FLC introduces additional computational effort, this overhead is expected to be offset when applied to computationally complex functions in multidimensional environments, where further performance improvements are anticipated.

revised text (page 14 line 505)

In the second stage of experiments, a set of high-dimensional benchmark functions from CEC2015 was analyzed. Table 2 presents the outcomes from 30 runs of each algorithm (FLC-EA and EA) for optimizing the CEC_f1, CEC_f2, and CEC_f3 functions, along with their standard deviations.

revised text (page 14 line 513)

Tests on multidimensional CEC2015 functions confirm the superiority of the proposed FLC-EA algorithm over the standard EA, both in terms of the number of generations (38% to 81% fewer) and runtime (42% to 81% faster).

revised text (page 14 line 517)

In the third stage of experiments, a set of complex, non-trivial, and difficult-to-optimize functions was analyzed. Table 3 presents the outcomes from 30 runs of each algorithm for optimizing the Rastrigin, Styblinski-Tang, Rosenbrock, and Shubert functions, along with their standard deviations.

revised text (page 15 line 525)

Tests on computationally complex, variable-dimension functions confirm the effectiveness of the FLC-EA algorithm for such tasks. The proposed algorithm completes execution after significantly fewer generations: approximately 60% to 76% fewer for the Rastrigin function, 40% to 78% fewer for the Styblinski-Tang function, 54% fewer for the Rosenbrock function, and 56% fewer for the Shubert function. However, for the Styblinski-Tang function in a two-dimensional domain, the algorithm’s runtime is longer. This may be due to the standard EA handling this function efficiently, with the reduced number of generations failing to offset the computational overhead introduced by the additional FLC mechanism. In all other tasks, the FLC-EA algorithm demonstrated shorter runtimes: approximately 50% to 78% faster for the Rastrigin function, 68% to 73% faster for the Styblinski-Tang function, 7% faster for the Rosenbrock function, and 18% faster for the Shubert function, compared to the standard EA.

revised text (page 15 line 538)

In the fourth stage of the experiments, the performance of the algorithm was evaluated on multi-modal functions f2 and f3. We assessed the ability of the algorithms to escape local optima in these multi-modal functions. The f3 function is particularly challenging due to the complexity of its landscape. The algorithm begins with an initial population centered around a local maximum at point (5, 5) with a value of 1.5. The optimization process requires the algorithm to navigate through valleys and bypass multiple local maxima (values 1.0 and 2.0) in order to reach and populate the global maximum region at point (50, 50) with a value of 2.5. Table 4 presents the outcomes from 30 runs of the FLC-EA and EA algorithms, including the number of generations and the runtime until the stopping criterion is met, as well as their standard deviations.

revised text (page 16 line 551)

The experiments on the f2 and f3 functions confirm the improved exploration capability of the FLC-EA algorithm compared to the standard EA. The proposed algorithm completes tasks after 61% and 62% fewer generations, and in approximately 31% and 50% shorter running times for the f2 and f3 functions, respectively.

revised text (page 14 line 556)

The fifth stage of the experiments focuses on evaluating the FLC-EA algorithm's convergence to global optima using the Rastrigin function in a 30-dimensional space. To validate the performance of FLC-EA, its results were compared with three established optimization methods: the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), as detailed in publication [18]; the Grey Wolf Optimizer (GWO), as introduced in publication [19]; and the Particle Swarm Optimization (PSO), as described in publication [20]. For this comparison, implementations of CMA-ES, GWO, and PSO within the MATLAB environment were utilized, ensuring a consistent and fair evaluation across all algorithms. Figure 4 presents the convergence behaviors of the algorithms, showing the best individual's fitness value after a fixed number of generations.

 


Comments 6:

 

The experimental results are only compared with traditional genetic algorithms and some benchmark methods. It is suggested to introduce more cutting-edge optimization algorithms for comparison.

Response 6:

Agree.

 

The traditional genetic algorithm was used as a benchmark due to its stability and well-documented position in the optimization field. Due to the short revision time of the paper, a comparison with other more advanced optimization methods is planned as future work.

Comments 7:

 

The conclusion section summarizes the results of the study and the outlook for future research is rather simple. Further discussion on the strengths, weaknesses and possible limitations of the methods is recommended.

Response 7:

I have added text (page 20 line 651)

Exploration and exploitation are conflicting objectives in the search process, leading to certain weaknesses and limitations of the FLC-EA algorithm. Parameter tuning in EAs is a critical aspect of their design, and the following challenges are particularly notable:

·         Time-Consuming Development Process: tuning FLC parameters, such as the rule database and the number and shape of the membership functions, is a complex task. This complexity arises from the high dimensionality of the problem, the size of the dataset, and the intricate interactions between the various parameters.

·         Computational Expense: FLC parameter tuning often requires extensive experimentation, which can be both time-consuming and resource-intensive. Developers must evaluate numerous configurations and rely on trial-and-error methods, particularly when using manual tuning approaches. This can delay project timelines and significantly increase costs of software development.

·         High Resource Requirements for Automated Tuning: while automated tuning techniques are generally more efficient than manual methods, they often demand substantial computational resources. This is especially true for complex problems, making such methods expensive and potentially inaccessible in resource-constrained environments.

·         Lack of Reproducibility: the parameter tuning process can introduce variability in model performance, making it challenging to reproduce results consistently. This variability can hinder efforts to replicate models or develop similar projects.

·         Complexity of Parameter Interactions: interactions between parameters are often complex and non-linear, making it difficult to predict how adjustments to one parameter will impact overall model performance. This complexity can lead to suboptimal tuning decisions, as developers may lack a full understanding of the implications of their choices.

·         FLC parameter tuning is computationally expensive: a standard EA can often be more effective, particularly in problems where the fitness function calculation is computationally simple.

 


Comments 8:

The number of references is low and lacks references from the last two years, it is suggested to add the latest literature.

Response 8:

Agree. 11 new references added to text

[1] R. Yan et al. The Exploration-Exploitation Dilemma Revisited: An Entropy Perspective. 2024. ArXiv preprint: https://arxiv.org/pdf/2408.09974

[2] P. Pagliuca. Analysis of the Exploration-Exploitation Dilemma in Neutral Problems with Evolutionary Algorithms. Journal of Artificial Intelligence and Autonomous Intelligence. 2024; 1(2):8.

[3] A. Rajabi, C. Witt. Self-Adjusting Evolutionary Algorithms for Multimodal Optimization. Algorithmica 84, 1694–1723. 2022. https://doi.org/10.1007/s00453-022-00933-z

[12] M. Jalali Varnamkhasti, L.S. Lee, M.R. Bakar, W.J. Leong. A genetic algorithm with fuzzy crossover operator and probability. 2012. Advances in Operations Research 956498.

[13] O. Syzonov, S. Tomasiello, N. Capuano. New Insights into Fuzzy Genetic Algorithms for Optimization Problems. Algorithms 2024, 17, 549. https://doi.org/10.3390/a17120549

[14] A. Saiful et al. Adaptive Fuzzy-Genetic Algorithm Operators for Solving Mobile Robot Scheduling Problem in Job-Shop FMS Environment. 2024. Robotics and Autonomous Systems 176, 104683.

[15] A. Santiago et al. Micro-Genetic algorithm with fuzzy selection of operators for multi-Objective optimization: μFAME. Swarm and Evolutionary Computation 61 (2021), DOI: 10.1016/j.swevo.2020.100818.

[16] S. Im, J. Lee. Adaptive crossover, mutation and selection using fuzzy system for genetic algorithms. Artificial Life and Robotics, 13 (2008), DOI: https://doi.org/10.1007/s10015-008-0545-1

[17] X. Gao et al. Predicting human body composition using a modified adaptive genetic algorithm with a novel selection operator. PloS ONE 15.7 (2020). https://api.semanticscholar.org/CorpusID:220608776

[30] The Fundamental Clustering Problems Suite. https://www.uni-marburg.de/fb12/datenbionik/data/

[31] L. T. Kóczy, Computational complexity of various fuzzy inference algorithms, in: Annales Univ. Sci. Budapest, Sect. Comp, Vol. 12, 1991, pp. 151-158.

 


4. Response to Comments on the Quality of English Language

Point 1:

The English could be improved to more clearly express the research.

Response 1:  

Text revised according to the suggestions of an English speaker

5. Additional clarifications

 

 

Reviewer 3 Report

Comments and Suggestions for Authors

 This article addresses an Evolutionary Algorithm (EA) method based on Fuzzy Logic Controller (FLC) in the selection of individuals, using individuals and historical data.

It presents comparative performance analysis (number of generations, computational time and convergence) between the proposed method (Fuzzy Logic Controller + Evolutionary Algorithms - FLC-EA) and others (standard EA, Covariance Matrix Adaptation Evolution Strategy - CMA-ES, Grey Wolf Optimizer - GWO, Particle Swarm Optimization - PSO, Sorting Genetic Algorithm or Search Group Algorithm - SGA), when applied in function optimization problems (CEC2015) and connected facility location problems (Connected Facility Location Problem - ConFLP).

The article presents the method in detail, as well as the instances used for comparison. Analyses and comments on the results obtained are presented.

I have a few comments.

- What criteria were adopted in choosing the values associated with the membership functions in Figure 2? It seems to me that introducing an intelligent and flexible procedure in the definition of membership functions, as well as in the definition of the rule base, can improve the proposed algorithm.

-  What criteria were adopted in choosing standard EA, CMA-ES, GWO, PSO, SGA methods to compare with FLC-EA?

-  Figure 4: Using the Rastrigin function in a 30-dimensional space, the GWO algorithm showed much better efficiency than FLC-EA in terms of convergence to global optimization. How would the CMA-ES, GWO, PSO algorithms behave when applied to the ConFLP?

-  What criteria were used in choosing SGA to compare with FLC-EA? Aren't there more efficient algorithms than SGA?

- From Figures 5 and 6, although the SGA method presents a larger number of generations than the FLC-EA method, the average running time is quite similar, except in the case of 500 computer terminals. I believe it is interesting to present results with more cases of Problem Size.

-  I believe it greatly improves the article by making references to the work involving: ​​Evolutionary Algorithms and Artificial Intelligence, Reinforcement Learning (RL) applied to Evolutionary Algorithms (EAs), Integration of machine learning techniques, Hybrid approaches combining with neural networks, Deep Reinforcement Learning (DRL) to enhance RL-EAs applied to optimization in complex environments.

Author Response

For research article

 

 

Response to Reviewer 3 Comments

 

1. Summary

 

 

Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted files

 

2. Questions for General Evaluation

Reviewer’s Evaluation

Response and Revisions

Does the introduction provide sufficient background and include all relevant references?

Yes

 

Is the research design appropriate?

Can be improved

improved

Are the methods adequately described?

Yes

 

Are the results clearly presented?

Yes

 

Are the conclusions supported by the results?

Can be improved

improved

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1:

What criteria were adopted in choosing the values associated with the membership functions in Figure 2? It seems to me that introducing an intelligent and flexible procedure in the definition of membership functions, as well as in the definition of the rule base, can improve the proposed algorithm.

Response 1:

The basic criteria were low computational effort and simplicity of the system. Using an intelligent and flexible procedure to define the membership functions, as well as the rule base, could improve the proposed algorithm, but it could also increase the computational effort. Since the procedure is run for each individual in each generation, it can significantly affect the efficiency of the algorithm. The aim of this first stage of the work was to check whether the proposed method brings a positive effect. Future plans include optimizing the method as well as the FLC itself, including the estimators, the rule base, and the membership functions.

Comments 2:

What criteria were adopted in choosing standard EA, CMA-ES, GWO, PSO, SGA methods to compare with FLC-EA?

Response 2:

These algorithms are widely cited in various fields, including operations research and applied mathematics, emphasizing their versatility and effectiveness in various applications.

Comments 3:

Figure 4: Using the Rastrigin function in a 30-dimensional space, the GWO algorithm showed much better efficiency than FLC-EA in terms of convergence to global optimization. How would the CMA-ES, GWO, PSO algorithms behave when applied to the ConFLP?

Response 3:

The analysis of this problem is beyond the time limit for the revision of this article; it will be addressed in future work.

Comments 4:

What criteria were used in choosing SGA to compare with FLC-EA? Aren't there more efficient algorithms than SGA?

Response 4:

The traditional genetic algorithm was used as a benchmark due to its stability and well-documented position in the optimization field. Due to the short revision time of the paper, a comparison with other more advanced optimization methods is planned as future work.

Comments 5:

From Figures 5 and 6, although the SGA method presents a larger number of generations than the FLC-EA method, the average running time is quite similar, except in the case of 500 computer terminals. I believe it is interesting to present results with more cases of Problem Size.

Response 5:

Agree. I have modified text and figure 7 (page 18 line 594)

Figure 7 presents example distributions of computer terminals and network nodes after optimization for tasks with 125 to 1000 terminals. In the task with 500 terminals, the standard SGA performed significantly worse than the FLC-EA algorithm. This disparity may be due to the specific terminal layout and the setting of the stopping criterion near the optimal value. The FLC-EA algorithm completed the tasks significantly faster and required fewer generations, primarily due to its ability to adapt parameters dynamically.


Figure 7. An example distribution of computer terminals and network nodes; large black circles represent cluster centroids: a) task with 125 terminals, b) task with 250 terminals, c) task with 500 terminals, d) task with 1000 terminals.

 


Comments 6:

I believe it greatly improves the article by making references to the work involving: Evolutionary Algorithms and Artificial Intelligence, Reinforcement Learning (RL) applied to Evolutionary Algorithms (EAs), Integration of machine learning techniques, Hybrid approaches combining with neural networks, Deep Reinforcement Learning (DRL) to enhance RL-EAs applied to optimization in complex environments.

Response 6:

Agree. I have added 11 new references to text

 

[1] R. Yan et al. The Exploration-Exploitation Dilemma Revisited: An Entropy Perspective. 2024. ArXiv preprint: https://arxiv.org/pdf/2408.09974

[2] P. Pagliuca. Analysis of the Exploration-Exploitation Dilemma in Neutral Problems with Evolutionary Algorithms. Journal of Artificial Intelligence and Autonomous Intelligence. 2024; 1(2):8.

[3] A. Rajabi, C. Witt. Self-Adjusting Evolutionary Algorithms for Multimodal Optimization. Algorithmica 84, 1694–1723. 2022. https://doi.org/10.1007/s00453-022-00933-z

[12] M. Jalali Varnamkhasti, L.S. Lee, M.R. Bakar, W.J. Leong. A genetic algorithm with fuzzy crossover operator and probability. 2012. Advances in Operations Research 956498.

[13] O. Syzonov, S. Tomasiello, N. Capuano. New Insights into Fuzzy Genetic Algorithms for Optimization Problems. Algorithms 2024, 17, 549. https://doi.org/10.3390/a17120549

[14] A. Saiful et al. Adaptive Fuzzy-Genetic Algorithm Operators for Solving Mobile Robot Scheduling Problem in Job-Shop FMS Environment. 2024. Robotics and Autonomous Systems 176, 104683.

[15] A. Santiago et al. Micro-Genetic algorithm with fuzzy selection of operators for multi-Objective optimization: μFAME. Swarm and Evolutionary Computation 61 (2021), DOI: 10.1016/j.swevo.2020.100818.

[16] S. Im, J. Lee. Adaptive crossover, mutation and selection using fuzzy system for genetic algorithms. Artificial Life and Robotics, 13 (2008), DOI: https://doi.org/10.1007/s10015-008-0545-1

[17] X. Gao et al. Predicting human body composition using a modified adaptive genetic algorithm with a novel selection operator. PloS ONE 15.7 (2020). https://api.semanticscholar.org/CorpusID:220608776

[30] The Fundamental Clustering Problems Suite. https://www.uni-marburg.de/fb12/datenbionik/data/

[31] L. T. Kóczy, Computational complexity of various fuzzy inference algorithms, in: Annales Univ. Sci. Budapest, Sect. Comp, Vol. 12, 1991, pp. 151-158.

 


4. Response to Comments on the Quality of English Language

Point 1:

The quality of English does not limit my understanding of the research.

Response 1:

Text revised according to the suggestions of an English speaker

5. Additional clarifications

 

 

Reviewer 4 Report

Comments and Suggestions for Authors

The paper deals with using a Fuzzy-based guidance for roulette selection in an evolutionary algorithm. The guidance is described and tested in numerical experiments using benchmark test functions. The results presented show that the proposed scheme has some advantages over standard roulette selection and are a promising addition to the selection mechanisms. In general, I think this is an interesting paper. There are just two minor points. A first is that the genetic operators used are not described. I think this is needed to make the findings reproducible. A second concern is that recent works on fuzzy selection are not considered, for instance: Santiago et al. Micro-Genetic algorithm with fuzzy selection of operators for multi-Objective optimization: μFAME. Swarm and Evolutionary Computation 61 (2021), or Im & Lee. Adaptive crossover, mutation and selection using fuzzy system for genetic algorithms. Artificial Life and Robotics, 13 (2008), or Gao et al. Predicting human body composition using a modified adaptive genetic algorithm with a novel selection operator. Plos one 15.7 (2020). I think this should be done. 

 

Author Response

For research article

 

 

Response to Reviewer 4 Comments

 

1. Summary

 

 

Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions/corrections highlighted/in track changes in the re-submitted files

 

2. Questions for General Evaluation

Reviewer’s Evaluation

Response and Revisions

Does the introduction provide sufficient background and include all relevant references?

Yes

 

Is the research design appropriate?

Yes

 

Are the methods adequately described?

Yes

 

Are the results clearly presented?

Yes

 

Are the conclusions supported by the results?

Yes

 

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1:

A first is that the genetic operators used are not described. I think this is needed to make the findings reproducible.

Response 1:

Therefore, I have added a new text (page 9 line 321)

In our experiments, we utilize EAs to explore the search space within an n-dimensional domain. The genes of individuals are represented as fixed-length vectors of real numbers. The fitness function f(x), computed for each individual x, indicates the quality of that individual:

f: R^n → R                                                                           (6)

A mutation is performed by adding a randomly generated number to the value of each gene in an individual:

x'_i = x_i + N(0, σ)                                                                                     (7)

The mutation process includes mechanisms to ensure that the resulting gene values remain within acceptable domain ranges. The mutation size (σ in formula 7) is a parameter of the algorithm that can be adjusted by the user depending on the problem being addressed. The algorithm employs a single-point crossover with a randomly selected crossover point. During this process, all genes (represented as real numbers) of the individuals are exchanged.

In the algorithm, the roulette wheel method is used to select individuals for the parent pool. Other algorithm parameters, such as selection and mutation probabilities, were determined experimentally during preliminary experiments for each benchmark task. The algorithm terminates when the best individual achieves a predetermined fitness value.

The parameters common to all experiments include:

·         population size: 25,

·         crossover probability: 0.8,

·         mutation probability was selected depending on the number of variables in the optimized function so that at least one of the individual's genes was modified.
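The operators and parameters listed above can be sketched as follows. The domain bounds and mutation size σ are illustrative placeholders, while the population size (25) and crossover probability (0.8) follow the text.

```python
# Sketch of the operators described above: roulette-wheel selection
# proportional to fitness, single-point crossover at a random point, and
# additive random mutation of every gene clamped to the domain. LOW, HIGH,
# and SIGMA are illustrative placeholders; POP_SIZE and P_CROSS follow
# the parameters stated in the text.
import random

random.seed(1)
LOW, HIGH, SIGMA = -5.0, 5.0, 0.1
POP_SIZE, P_CROSS = 25, 0.8

def roulette_select(pop, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    r = random.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for ind, f in zip(pop, fitnesses):
        acc += f
        if r <= acc:
            return ind
    return pop[-1]

def crossover(a, b):
    """Single-point crossover: exchange all genes after a random point."""
    p = random.randrange(1, len(a))
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(ind):
    """Add a random N(0, sigma) value to each gene, clamped to the domain."""
    return [min(HIGH, max(LOW, g + random.gauss(0.0, SIGMA))) for g in ind]

pop = [[random.uniform(LOW, HIGH) for _ in range(5)] for _ in range(POP_SIZE)]
p1 = roulette_select(pop, [1.0] * POP_SIZE)   # uniform fitness for the demo
p2 = roulette_select(pop, [1.0] * POP_SIZE)
if random.random() < P_CROSS:                 # crossover probability 0.8
    p1, p2 = crossover(p1, p2)
child = mutate(p1)
```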


Comments 2:

A second concern is that recent works on fuzzy selection are not considered, for instance: Santiago et al. Micro-Genetic algorithm with fuzzy selection of operators for multi-Objective optimization: μFAME. Swarm and Evolutionary Computation 61 (2021), or Im & Lee. Adaptive crossover, mutation and selection using fuzzy system for genetic algorithms. Artificial Life and Robotics, 13 (2008), or Gao et al. Predicting human body composition using a modified adaptive genetic algorithm with a novel selection operator. Plos one 15.7 (2020). I think this should be done. 

Response 2:

 

Agree. I have revised text (page 3 line 93)

 

This paper proposes a new model of an Evolutionary Algorithm. The model incorporates a Fuzzy Logic Controller to analyze the fitness of individuals in the current generation and the evolutionary history of previous generations. This analysis is used to fine-tune the probability of an individual's selection for the parent pool. Many EAs controlled by FLCs have been presented and discussed in the literature. An overview of these methods, as well as their classification, can be found in [10]. The application of FLCs for adaptive adjustment of EA parameters is described, for example, in [11]. Santiago et al. [15] present µFAME, a Micro Genetic Algorithm designed for multi-objective optimization problems. In this algorithm, a Fuzzy Inference System (FIS) is used to select evolutionary operators. The FIS has two inputs, which are updated every windowSize iterations. The FIS output determines the desirability of using each operator for the next time interval. Paper [16] discusses the use of a Fuzzy System to adaptively control crossover, mutation, and selection in Genetic Algorithms. Fuzzy Logic is not the only method for adapting GA parameters. Paper [17] discusses notable shortcomings of the roulette selection strategy and proposes a selection operator that combines roulette selection with an optimal retention strategy to improve the adaptability and diversity of individual selection.

The newly developed algorithm differs from previous methods, including the author's earlier works [18-21] in several key aspects:

 

A new references added

[1] R. Yan et al. The Exploration-Exploitation Dilemma Revisited: An Entropy Perspective. 2024. ArXiv preprint: https://arxiv.org/pdf/2408.09974

[2] P. Pagliuca. Analysis of the Exploration-Exploitation Dilemma in Neutral Problems with Evolutionary Algorithms. Journal of Artificial Intelligence and Autonomous Intelligence. 2024; 1(2):8.

[3] A. Rajabi, C. Witt. Self-Adjusting Evolutionary Algorithms for Multimodal Optimization. Algorithmica 84, 1694–1723. 2022. https://doi.org/10.1007/s00453-022-00933-z

[12] M. Jalali Varnamkhasti, L.S. Lee, M.R. Bakar, W.J. Leong. A genetic algorithm with fuzzy crossover operator and probability. 2012. Advances in Operations Research 956498.

[13] O. Syzonov, S. Tomasiello, N. Capuano. New Insights into Fuzzy Genetic Algorithms for Optimization Problems. Algorithms 2024, 17, 549. https://doi.org/10.3390/a17120549

[14] A. Saiful et al. Adaptive Fuzzy-Genetic Algorithm Operators for Solving Mobile Robot Scheduling Problem in Job-Shop FMS Environment. 2024. Robotics and Autonomous Systems 176, 104683.

[15] A. Santiago et al. Micro-Genetic algorithm with fuzzy selection of operators for multi-Objective optimization: μFAME. Swarm and Evolutionary Computation 61 (2021), DOI: 10.1016/j.swevo.2020.100818.

[16] S. Im, J. Lee. Adaptive crossover, mutation and selection using fuzzy system for genetic algorithms. Artificial Life and Robotics, 13 (2008). https://doi.org/10.1007/s10015-008-0545-1

[17] X. Gao et al. Predicting human body composition using a modified adaptive genetic algorithm with a novel selection operator. PloS ONE 15.7 (2020). https://api.semanticscholar.org/CorpusID:220608776

[30] The Fundamental Clustering Problems Suite. https://www.uni-marburg.de/fb12/datenbionik/data/

[31] L. T. Kóczy, Computational complexity of various fuzzy inference algorithms, in: Annales Univ. Sci. Budapest, Sect. Comp, Vol. 12, 1991, pp. 151-158.

 


4. Response to Comments on the Quality of English Language

Point 1:

The quality of English does not limit my understanding of the research.

Response 1:

Text revised according to the suggestions of an English speaker

5. Additional clarifications

 

 

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The paper is revised. In my opinion, there are still some defects,

1. The main problems of the Roulette Selection. Why do you use fuzzy?

2. In the experiments section, it is necessary to compare your works with the latest variants, such as https://doi.org/10.1016/j.swevo.2025.101844, so that your work can be validated.

 

Comments on the Quality of English Language

Can

Author Response

Comments 1:

The main problems of the Roulette Selection. Why do you use fuzzy?

Response 1:

I have added new text (page 8 line 292)

 

Roulette wheel selection is a popular probability-based technique in EAs, used to randomly select parents for reproduction based on their fitness scores. The selection probability of an individual depends directly on its fitness. Fuzzy Logic can leverage expert knowledge and information from previous generations to adjust selection pressure and guide the evolutionary process toward desirable regions of the solution space. Fuzzy Logic was chosen to control the probability of selection due to its advantages:

  • Human-like decision-making: FL-based systems model human decision-making, making the control model easier to understand.
  • Simplified modeling of complex problems: FL allows for modeling complex systems without requiring precise mathematical formulations.
  • Robustness to uncertainty: FL-based systems are resistant to small changes in input data or noise, ensuring stable operation under uncertain conditions.
  • Scalability and flexibility: These systems can be easily expanded by adding new rules without requiring a complete redesign, making them adaptable to the problem at hand.
  • Real-time efficiency: FL works well in real-time systems due to its low computational requirements.
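As a concrete illustration, standard fitness-proportionate (roulette wheel) selection can be sketched as follows. The population, fitness values, and function name here are hypothetical, and the fuzzy adjustment of selection probabilities proposed in the paper is deliberately omitted:

```python
import random

def roulette_select(population, fitness, k):
    """Select k parents with probability proportional to fitness
    (assumes non-negative fitness values, as in maximization)."""
    total = sum(fitness)
    if total == 0:
        # degenerate case: all fitness values zero, fall back to uniform choice
        return random.choices(population, k=k)
    probs = [f / total for f in fitness]
    return random.choices(population, weights=probs, k=k)

# hypothetical population of candidate solutions and their fitness values
pop = ["A", "B", "C", "D"]
fit = [1.0, 2.0, 3.0, 4.0]
parents = roulette_select(pop, fit, k=2)  # two parents, fitter ones more likely
```

An FLC as described in the paper would sit in front of this step and rescale the selection probabilities before sampling.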

Comments 2:

In the experiments section, it is necessary to compare your works with the latest variants, such as https://doi.org/10.1016/j.swevo.2025.101844, so that your work can be validated.

Response 2:

I have added new text (page 16 line 570 – page 17 line 606)

 

3.5. Comparison of the proposed method to a reinforcement learning-based algorithm

Reinforcement Learning (RL) [28, 29] is a branch of machine learning where an intelligent agent learns to make decisions by interacting with an environment. Unlike supervised learning, RL does not require labeled input-output pairs to be presented to the agent. Instead, the agent learns by receiving feedback in the form of rewards or penalties based on its actions. The agent's goal is to learn a policy that maximizes cumulative rewards over time. This policy can be either deterministic (always choosing the best action) or stochastic (choosing actions probabilistically). A significant challenge in RL is balancing exploration (trying new actions to discover rewards) and exploitation (using known actions to maximize immediate rewards).  RL algorithms are highly effective and widely applied in various domains:

  • Gaming: Training artificial agents for games like chess, Go, and video games.
  • Robotics and Autonomous Vehicles: Teaching robots or self-driving cars to navigate and perform tasks.
  • Healthcare: Optimizing treatment plans and aiding in drug discovery.
  • Finance: Portfolio optimization and automated trading strategies.

Reinforcement Learning (RL) has been successfully applied to solve numerous real-world technical problems. For example, it has been used for parameter estimation of photovoltaic models to enhance the efficiency and performance of solar energy systems [30]. Teaching-Learning-based Optimization (TLBO) is a population-based metaheuristic algorithm inspired by the teaching and learning interactions in a classroom [31]. In this approach, a "teacher" guides the "students" (candidate solutions) to improve their performance, while the students further refine their understanding through mutual interactions. TLBO, for which a MATLAB implementation is available in [32], will serve as a benchmark for evaluating the convergence of the FLC-EA algorithm, providing a standard for comparison in terms of convergence to the optimum.
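A minimal sketch of one TLBO generation (teacher phase followed by learner phase) for a minimization problem may help fix ideas; the sphere objective, population size, bounds, and loop counts below are illustrative assumptions and are not taken from the paper or from the implementation in [32]:

```python
import random

def tlbo_step(pop, f, rng):
    """One generation of Teaching-Learning-based Optimization (minimization):
    a teacher phase followed by a learner phase, with greedy acceptance."""
    n, dim = len(pop), len(pop[0])
    # teacher phase: learners move toward the best solution (the teacher)
    teacher = min(pop, key=f)
    mean = [sum(x[d] for x in pop) / n for d in range(dim)]
    for i in range(n):
        tf = rng.choice([1, 2])  # teaching factor
        cand = [pop[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        if f(cand) < f(pop[i]):  # keep the candidate only if it improves
            pop[i] = cand
    # learner phase: each learner interacts with a random classmate,
    # moving toward a better one and away from a worse one
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        sign = 1 if f(pop[i]) < f(pop[j]) else -1
        cand = [pop[i][d] + sign * rng.random() * (pop[i][d] - pop[j][d])
                for d in range(dim)]
        if f(cand) < f(pop[i]):
            pop[i] = cand
    return pop

sphere = lambda x: sum(v * v for v in x)  # toy objective, minimum at origin
rng = random.Random(1)
pop = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
for _ in range(50):
    pop = tlbo_step(pop, sphere, rng)
best = min(pop, key=sphere)
```

Because acceptance is greedy, the best fitness in the population never worsens from one generation to the next.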

The experiment aims to compare the convergence of the FLC-EA and TLBO algorithms in solving FOP tasks. Table 5 presents the results from 30 runs of the FLC-EA and TLBO algorithms, including the number of generations required to meet the stopping criterion and their corresponding standard deviations.

Table 5. The convergence of the algorithms: the number of generations to the stopping criterion, including the minimum, average, maximum, and standard deviation (σ).

| Algorithm | Function | Size | Min | Average | Max | σ |
|---|---|---|---|---|---|---|
| FLC-EA | f1 | 2 | 2114 | 6188.1 | 14732 | 3840.1 |
| | | 5 | 6790 | 13363.1 | 21539 | 5001.2 |
| | | 10 | 37795 | 45190.7 | 58356 | 7074.3 |
| | CEC_f1 | 50 | 15537 | 17815.7 | 18904 | 1134.8 |
| | | 100 | 22079 | 25037.3 | 27040 | 1344.6 |
| | CEC_f2 | 50 | 159626 | 203278.7 | 259870 | 30706.9 |
| | | 100 | 157404 | 167989.9 | 188238 | 10955.1 |
| | CEC_f3 | 50 | 15859 | 18180.4 | 20146 | 1370.4 |
| | | 100 | 25334 | 28594.4 | 30701 | 1646.9 |
| | Rastrigin | 2 | 419 | 3040.4 | 9632 | 2785.2 |
| | | 5 | 6597 | 19526.4 | 53574 | 15501.5 |
| | | 10 | 4696 | 31862.5 | 56953 | 16612.2 |
| | Styblinski-Tang | 2 | 472 | 1215.1 | 2599 | 647.6 |
| | | 5 | 324 | 2238.7 | 4036 | 1238.3 |
| | | 10 | 4924 | 10863.9 | 19091 | 4289.8 |
| | Rosenbrock | 2 | 322 | 2333.5 | 4973 | 1406.6 |
| | Shubert | 2 | 1343 | 3924.2 | 7654 | 2115.6 |
| | f2 | 2 | 248 | 1482.8 | 3466 | 1022.0 |
| | f3 | 2 | 594 | 13942.1 | 32880 | 9477.1 |
| TLBO | f1 | 2 | 19 | 24.9 | 31 | 2.4 |
| | | 5 | 52 | 70.6 | 90 | 12.0 |
| | | 10 | 201 | 248.3 | 339 | 37.8 |
| | CEC_f1 | 50 | 48 | 50.7 | 53 | 1.4 |
| | | 100 | 54 | 54.9 | 56 | 0.8 |
| | CEC_f2 | 50 | 30 | 31.4 | 33 | 1.1 |
| | | 100 | 33 | 34.4 | 36 | 1.2 |
| | CEC_f3 | 50 | 33 | 35.8 | 38 | 1.4 |
| | | 100 | 36 | 38.2 | 40 | 1.5 |
| | Rastrigin | 2 | 4 | 15.4 | 74 | 3.6 |
| | | 5 | 145 | 784.2 | 4646 | 1310.3 |
| | | 10* | 220 | 1659.7 | 4715 | 1443.8 |
| | Styblinski-Tang | 2 | 10 | 15.4 | 21 | 3.6 |
| | | 5* | 21 | 40.4 | 81 | 18.4 |
| | | 10* | - | - | - | - |
| | Rosenbrock | 2 | 36 | 64.9 | 97 | 19.9 |
| | Shubert | 2 | 32 | 40.8 | 48 | 5.1 |
| | f2 | 2 | 26 | 42.2 | 75 | 13.1 |
| | f3 | 2 | 10 | 17.6 | 31 | 6.5 |

* Algorithm failed to complete the task at least once

** Algorithm failed to complete the task

 

The convergence of the FLC-EA algorithm depends on the type of problem being addressed. For function optimization problems, TLBO generally requires far fewer generations across all considered functions. However, in some tasks, TLBO became stuck in local optima.

 

I have changed text (page 20 line 656 – page 21 line 681)

 

3.8. Evaluation of the algorithm's convergence to global optima

The fifth stage of the experiments focuses on evaluating the FLC-EA algorithm's convergence to global optima using the Rastrigin function in a 30-dimensional space. To validate the performance of FLC-EA, its results were compared with four established optimization methods: the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), as detailed in publication [34]; the Grey Wolf Optimizer (GWO), as introduced in publication [35]; Particle Swarm Optimization (PSO), as described in publication [36]; and Teaching-Learning-based Optimization (TLBO), as described in publication [32]. For this comparison, MATLAB implementations of CMA-ES, GWO, PSO, and TLBO were utilized, ensuring a consistent and fair evaluation across all algorithms. Figure 4 presents the convergence behavior of the algorithms, showing the best individual's fitness value after a fixed number of generations.

 

Figure 4. The convergence of the algorithms FLC-EA, CMA-ES, GWO, PSO and TLBO respectively

a)

b)

c)

d)

Figure 5. The convergence of the algorithms FLC-EA, CMA-ES, GWO, PSO and TLBO: a) ConFLP with 125 terminals, b) ConFLP with 125 terminals, c) clustering problem with 400 objects, d) clustering problem with 4096 objects

 

On the 30-dimensional Rastrigin function, the convergence of the FLC-EA algorithm is better than that of CMA-ES and PSO and worse than that of GWO and TLBO. However, for real-world problems such as ConFLP and clustering tasks, which involve significantly more complex function optimization, FLC-EA converges faster overall, although it is slightly slower than PSO in clustering tasks. In the initial generations, CMA-ES, GWO, PSO, and TLBO demonstrate better convergence than FLC-EA; however, FLC-EA outperforms them in later generations.

 

New references added:

 

[28] L.P. Kaelbling, M.L. Littman, A.W.  Moore, Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research. 4: pp. 237–285 (1996) https://api.semanticscholar.org/CorpusID:1708582.

[29] S. Lu, S. Han, W. Zhou, J. Zhang, Recruitment-imitation mechanism for evolutionary reinforcement learning, Inf. Sci. 553 (Apr. 2021) 172–188, https://doi.org/10.1016/j.ins.2020.12.017.

[30] H. Wang, X. Yu, Y. Lu, A reinforcement learning-based ranking teaching-learning-based optimization algorithm for parameters estimation of photovoltaic models, Swarm and Evolutionary Computation, vol.93, (2025) https://doi.org/10.1016/j.swevo.2025.101844

[31] R.V. Rao, V.J. Savsani, D.P. Vakharia, Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems, Computer-Aided Design, Volume 43, Issue 3, pp. 303-315, (2011) https://doi.org/10.1016/j.cad.2010.12.015.

[32] M. K. Heris, Teaching-Learning-based Optimization in MATLAB. https://yarpiz.com/83/ypea111-teaching-learning-based-optimization, Yarpiz, 2015.

Reviewer 3 Report

Comments and Suggestions for Authors

The author has addressed most of my comments.

Although I do not fully agree with the responses to my comments 2 and 3, I believe that this is an article that presents a proposal using EA. I think this proposal needs to be analyzed in greater depth.

Even so, I believe that the article can be approved for publication.

My comments:

- Response 2: These algorithms are widely cited in various fields, including operations research and applied mathematics, emphasizing their versatility and effectiveness in various applications.

This is true. But there are many other bioinspired algorithms, each one better suited to certain scenarios. In addition, algorithms such as SGA, PSO, and standard EA have been improved in previous works.

Response 3: The analysis of this problem is beyond the time limit for the revision of this article. This problem will be analyzed as future works.

It is advisable to emphasize this issue.

Author Response

Comments 1:

These algorithms are widely cited in various fields, including operations research and applied mathematics, emphasizing their versatility and effectiveness in various applications.

This is true. But there are many other bioinspired algorithms, each one better suited to certain scenarios. In addition, algorithms such as SGA, PSO, and standard EA have been improved in previous works.

 

Response 1:

I have added a new comparison to the TLBO algorithm in terms of convergence and the number of generations to the stopping criterion. Details and text are in Response 2.

Comments 2:

Figure 4: Using the Rastrigin function in a 30-dimensional space, the GWO algorithm showed much better efficiency than FLC-EA in terms of convergence to global optimization. How would the CMA-ES, GWO, PSO algorithms behave when applied to the ConFLP?

The analysis of this problem is beyond the time limit for the revision of this article. This problem will be analyzed as future works.

It is advisable to emphasize this issue.

Response 2:

I have added new text (page 16 line 570 – page 17 line 606)

 

3.5. Comparison of the proposed method to a reinforcement learning-based algorithm

Reinforcement Learning (RL) [28, 29] is a branch of machine learning where an intelligent agent learns to make decisions by interacting with an environment. Unlike supervised learning, RL does not require labeled input-output pairs to be presented to the agent. Instead, the agent learns by receiving feedback in the form of rewards or penalties based on its actions. The agent's goal is to learn a policy that maximizes cumulative rewards over time. This policy can be either deterministic (always choosing the best action) or stochastic (choosing actions probabilistically). A significant challenge in RL is balancing exploration (trying new actions to discover rewards) and exploitation (using known actions to maximize immediate rewards).  RL algorithms are highly effective and widely applied in various domains:

  • Gaming: Training artificial agents for games like chess, Go, and video games.
  • Robotics and Autonomous Vehicles: Teaching robots or self-driving cars to navigate and perform tasks.
  • Healthcare: Optimizing treatment plans and aiding in drug discovery.
  • Finance: Portfolio optimization and automated trading strategies.

Reinforcement Learning (RL) has been successfully applied to solve numerous real-world technical problems. For example, it has been used for parameter estimation of photovoltaic models to enhance the efficiency and performance of solar energy systems [30]. Teaching-Learning-based Optimization (TLBO) is a population-based metaheuristic algorithm inspired by the teaching and learning interactions in a classroom [31]. In this approach, a "teacher" guides the "students" (candidate solutions) to improve their performance, while the students further refine their understanding through mutual interactions. TLBO, for which a MATLAB implementation is available in [32], will serve as a benchmark for evaluating the convergence of the FLC-EA algorithm, providing a standard for comparison in terms of convergence to the optimum.

The experiment aims to compare the convergence of the FLC-EA and TLBO algorithms in solving FOP tasks. Table 5 presents the results from 30 runs of the FLC-EA and TLBO algorithms, including the number of generations required to meet the stopping criterion and their corresponding standard deviations.

Table 5. The convergence of the algorithms: the number of generations to the stopping criterion, including the minimum, average, maximum, and standard deviation (σ).

| Algorithm | Function | Size | Min | Average | Max | σ |
|---|---|---|---|---|---|---|
| FLC-EA | f1 | 2 | 2114 | 6188.1 | 14732 | 3840.1 |
| | | 5 | 6790 | 13363.1 | 21539 | 5001.2 |
| | | 10 | 37795 | 45190.7 | 58356 | 7074.3 |
| | CEC_f1 | 50 | 15537 | 17815.7 | 18904 | 1134.8 |
| | | 100 | 22079 | 25037.3 | 27040 | 1344.6 |
| | CEC_f2 | 50 | 159626 | 203278.7 | 259870 | 30706.9 |
| | | 100 | 157404 | 167989.9 | 188238 | 10955.1 |
| | CEC_f3 | 50 | 15859 | 18180.4 | 20146 | 1370.4 |
| | | 100 | 25334 | 28594.4 | 30701 | 1646.9 |
| | Rastrigin | 2 | 419 | 3040.4 | 9632 | 2785.2 |
| | | 5 | 6597 | 19526.4 | 53574 | 15501.5 |
| | | 10 | 4696 | 31862.5 | 56953 | 16612.2 |
| | Styblinski-Tang | 2 | 472 | 1215.1 | 2599 | 647.6 |
| | | 5 | 324 | 2238.7 | 4036 | 1238.3 |
| | | 10 | 4924 | 10863.9 | 19091 | 4289.8 |
| | Rosenbrock | 2 | 322 | 2333.5 | 4973 | 1406.6 |
| | Shubert | 2 | 1343 | 3924.2 | 7654 | 2115.6 |
| | f2 | 2 | 248 | 1482.8 | 3466 | 1022.0 |
| | f3 | 2 | 594 | 13942.1 | 32880 | 9477.1 |
| TLBO | f1 | 2 | 19 | 24.9 | 31 | 2.4 |
| | | 5 | 52 | 70.6 | 90 | 12.0 |
| | | 10 | 201 | 248.3 | 339 | 37.8 |
| | CEC_f1 | 50 | 48 | 50.7 | 53 | 1.4 |
| | | 100 | 54 | 54.9 | 56 | 0.8 |
| | CEC_f2 | 50 | 30 | 31.4 | 33 | 1.1 |
| | | 100 | 33 | 34.4 | 36 | 1.2 |
| | CEC_f3 | 50 | 33 | 35.8 | 38 | 1.4 |
| | | 100 | 36 | 38.2 | 40 | 1.5 |
| | Rastrigin | 2 | 4 | 15.4 | 74 | 3.6 |
| | | 5 | 145 | 784.2 | 4646 | 1310.3 |
| | | 10* | 220 | 1659.7 | 4715 | 1443.8 |
| | Styblinski-Tang | 2 | 10 | 15.4 | 21 | 3.6 |
| | | 5* | 21 | 40.4 | 81 | 18.4 |
| | | 10* | - | - | - | - |
| | Rosenbrock | 2 | 36 | 64.9 | 97 | 19.9 |
| | Shubert | 2 | 32 | 40.8 | 48 | 5.1 |
| | f2 | 2 | 26 | 42.2 | 75 | 13.1 |
| | f3 | 2 | 10 | 17.6 | 31 | 6.5 |

* Algorithm failed to complete the task at least once

** Algorithm failed to complete the task

 

The convergence of the FLC-EA algorithm depends on the type of problem being addressed. For function optimization problems, TLBO generally requires far fewer generations across all considered functions. However, in some tasks, TLBO became stuck in local optima.

 

I have changed text (page 20 line 656 – page 21 line 681)

 

3.8. Evaluation of the algorithm's convergence to global optima

The fifth stage of the experiments focuses on evaluating the FLC-EA algorithm's convergence to global optima using the Rastrigin function in a 30-dimensional space. To validate the performance of FLC-EA, its results were compared with four established optimization methods: the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), as detailed in publication [34]; the Grey Wolf Optimizer (GWO), as introduced in publication [35]; Particle Swarm Optimization (PSO), as described in publication [36]; and Teaching-Learning-based Optimization (TLBO), as described in publication [32]. For this comparison, MATLAB implementations of CMA-ES, GWO, PSO, and TLBO were utilized, ensuring a consistent and fair evaluation across all algorithms. Figure 4 presents the convergence behavior of the algorithms, showing the best individual's fitness value after a fixed number of generations.

 

 

Figure 4. The convergence of the algorithms FLC-EA, CMA-ES, GWO, PSO and TLBO respectively

a)

b)

c)

d)

Figure 5. The convergence of the algorithms FLC-EA, CMA-ES, GWO, PSO and TLBO: a) ConFLP with 125 terminals, b) ConFLP with 125 terminals, c) clustering problem with 400 objects, d) clustering problem with 4096 objects

 

On the 30-dimensional Rastrigin function, the convergence of the FLC-EA algorithm is better than that of CMA-ES and PSO and worse than that of GWO and TLBO. However, for real-world problems such as ConFLP and clustering tasks, which involve significantly more complex function optimization, FLC-EA converges faster overall, although it is slightly slower than PSO in clustering tasks. In the initial generations, CMA-ES, GWO, PSO, and TLBO demonstrate better convergence than FLC-EA; however, FLC-EA outperforms them in later generations.

 

New references added:

 

[28] L.P. Kaelbling, M.L. Littman, A.W.  Moore, Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research. 4: pp. 237–285 (1996) https://api.semanticscholar.org/CorpusID:1708582.

[29] S. Lu, S. Han, W. Zhou, J. Zhang, Recruitment-imitation mechanism for evolutionary reinforcement learning, Inf. Sci. 553 (Apr. 2021) 172–188, https://doi.org/10.1016/j.ins.2020.12.017.

[30] H. Wang, X. Yu, Y. Lu, A reinforcement learning-based ranking teaching-learning-based optimization algorithm for parameters estimation of photovoltaic models, Swarm and Evolutionary Computation, vol.93, (2025) https://doi.org/10.1016/j.swevo.2025.101844

[31] R.V. Rao, V.J. Savsani, D.P. Vakharia, Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems, Computer-Aided Design, Volume 43, Issue 3, pp. 303-315, (2011) https://doi.org/10.1016/j.cad.2010.12.015.

[32] M. K. Heris, Teaching-Learning-based Optimization in MATLAB. https://yarpiz.com/83/ypea111-teaching-learning-based-optimization, Yarpiz, 2015.

 

Round 3

Reviewer 2 Report

Comments and Suggestions for Authors

The paper is revised. The paper can be further improved by revising introduction. The research gap can be presented in detail. 

Comments on the Quality of English Language

Can be improved. 

Author Response

Comment 1:

The paper is revised. The paper can be further improved by revising introduction. The research gap can be presented in detail. 

 

Response:

I have added new text (page 3 line 133)

Although the adaptation of EA parameters is widely discussed in the literature, many issues in this field still require clarification and further research. Due to the lack of strict adaptation rules that can be expressed through mathematical formulas, the use of fuzzy logic appears to be a promising (though not the only possible) method. The key challenges in this area include the analysis, selection, and optimization of the fuzzy controller structure. To achieve this, it is necessary to:

- define estimators used as input values that best characterize the course of evolution,

- determine the number and shape of membership functions for each estimator,

- design the fuzzy controller, such as deciding the number of input estimators, building a fuzzy rule base, and selecting a defuzzification method,

- choose which parameters of the evolutionary algorithm should be adapted and determine how they will be modified.
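The design steps listed above can be illustrated with a minimal fuzzy controller sketch. The input estimator, the triangular membership functions, the two rules, and the output scaling values below are illustrative assumptions for exposition only; they are not the actual FLC structure used in the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_selection_pressure(rel_fitness):
    """Map a normalized fitness estimator in [0, 1] to a selection-probability
    scaling factor via two rules and weighted-average defuzzification."""
    # fuzzification: membership in the 'low' and 'high' fitness sets
    low = tri(rel_fitness, -0.5, 0.0, 1.0)
    high = tri(rel_fitness, 0.0, 1.0, 1.5)
    # rule base: low fitness -> weaken selection (output 0.5),
    #            high fitness -> strengthen selection (output 1.5)
    num = low * 0.5 + high * 1.5
    den = low + high
    return num / den if den > 0 else 1.0  # neutral factor if no rule fires

factor = fuzzy_selection_pressure(0.8)  # mostly 'high', so factor > 1
```

A real controller would add more estimators, more membership functions per estimator, a larger rule base, and a proper defuzzification method, exactly the design choices enumerated above.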

Another important aspect is the performance optimization of the proposed method. While parameter adaptation can improve the algorithm's convergence, it also increases computational overhead. The additional computational effort associated with parameter adaptation must be compensated for by the algorithm's improved convergence. Furthermore, the adaptation method should enhance the algorithm's robustness against getting trapped in local optima.

Optimization using EAs applies to various types of problems, both artificial and real-world. Each of these problems requires an individual approach, with the adaptation method tailored to the specific problem. The newly developed algorithm tries to address these challenges. It differs from previous methods, including the author's earlier works [18-22], in several key aspects:
