Harris Hawks Optimization with Multi-Strategy Search and Application

Abstract: The basic HHO algorithm chooses between its different search methods with symmetric probability: each branch is selected with probability of about 0.5 over the interval (0, 1). The optimal solution from the previous iteration affects the current solution, searching for prey in a linear way yields a single search result, and the overall number of updates of the optimal position is low. These factors limit the Harris Hawks optimization algorithm: it falls into local optima easily, and its convergence efficiency is low. Inspired by the prey hunting behavior of Harris's hawk, a multi-strategy search Harris Hawks optimization algorithm is proposed, and the least squares support vector machine (LSSVM) optimized by the proposed algorithm is used to model the reactive power output of a synchronous condenser. Firstly, we select the best method, Gauss chaotic mapping, from seven commonly used chaotic mapping population initialization methods to improve accuracy. Secondly, an optimal neighborhood perturbation mechanism is introduced to avoid premature convergence. Simultaneously, an adaptive weight and a variable spiral search strategy are designed to simulate the prey hunting behavior of Harris's hawk, improving the convergence speed and enhancing the global search ability of the improved algorithm. Numerical experiments are conducted on the classical 23 test functions and the CEC2017 test function set. The results show that the proposed algorithm outperforms the Harris Hawks optimization algorithm and other intelligent optimization algorithms in convergence speed, solution accuracy, and robustness, and that the synchronous condenser reactive power output model established by the improved-algorithm-optimized LSSVM has good accuracy and generalization ability.


Introduction
Along with the significant increase in the processing power of computer hardware and software, a large number of excellent meta-heuristics have been created in the intelligent computing field [1][2][3][4][5]. Meta-heuristics are a large class of algorithms developed in contrast to exact optimization and heuristics. Exact optimization algorithms are dedicated to finding the optimal solution to a problem, but they are often difficult to apply because of the intractability of the problem [6,7]. Heuristic algorithms are customized through intuitive experience and problem-specific information, but are often difficult to generalize due to their specialized nature. Compared with these two classes, meta-heuristic algorithms are more general and do not require deep adaptation to the problem. Although they do not guarantee optimal solutions, they can generally obtain near-optimal solutions under acceptable spatial and temporal budgets, even though the degree of deviation from the optimum is difficult to estimate [8][9][10][11][12].
The main optimization strategies of meta-heuristic algorithms can be summarized as follows: (1) diversification, exploring over a wide range to pursue the global optimal solution; and (2) intensification, exploiting a local range to obtain a solution as close to the optimum as possible [13]. The main difference between the various meta-heuristic algorithms is how they strike a balance between the two. Almost all meta-heuristic algorithms share the following characteristics: (1) they are inspired by phenomena in nature, such as physics, biology, and biological behavior; (2) they use stochastic strategies; (3) they do not use gradient information of the objective function; (4) they have several parameters that need to be adapted to the problem; and (5) they support parallel and autonomous exploration well. Meta-heuristic algorithms have been widely used in many aspects of social production and life. Many related research papers are published every year in the fields of production scheduling [14,15], engineering computing [16,17], management decision-making [18,19], machine learning (ML) [20,21], system control [22], and many other disciplines. Meta-heuristics are categorized into four main groups [23,24]: (1) simulated physical process algorithms, (2) evolutionary algorithms, (3) simulated swarm intelligence algorithms, and (4) algorithms simulating human behavior [25][26][27]. Algorithms simulating physical processes include simulated annealing (SA) [28]; the gravitational search algorithm, which simulates Earth's gravity [29]; the artificial chemical reaction optimization algorithm [30]; heat transfer search, which simulates the heat transfer process in thermodynamics [31]; Gases Brownian motion optimization, which simulates the phenomenon of Brownian motion in physics [32]; and Henry gas solubility optimization, which simulates the Henry gas solubility process [33].
Evolutionary algorithms: in 1975, the American professor Holland proposed the genetic algorithm (GA) [34] based on Darwinian evolutionary theory and the survival-of-the-fittest mechanism in nature; other examples include evolution strategies [35], differential evolution [36], genetic programming [37], and Biogeography-Based Optimization [38]. Simulated swarm intelligence algorithms: the Artificial Bee Colony (ABC) algorithm [39] based on honey bee foraging mechanisms, the Firefly Algorithm based on the flickering behavior of fireflies [40], the Beetle Antennae Search algorithm based on the foraging principle of longhorn beetles [41], the Grey Wolf Optimization (GWO) algorithm inspired by the hierarchy and predatory behavior of gray wolf packs [42], and the Virus Colony Search algorithm [43], based on the proliferation and infection strategies viruses use to survive and reproduce in the cellular environment through host cells. Algorithms simulating human behavior: Tabu Search [44], Socio Evolution and Learning Optimization [45], Teaching-Learning-Based Optimization [46], and the Imperialist Competitive Algorithm [47].
Harris Hawks Optimization (HHO) [24] is a swarm intelligence optimization algorithm proposed by Heidari et al. in 2019, simulating the prey hunting process of Harris's hawks in nature. The algorithm is inspired by the three phases of Harris's hawks' predatory behavior: search, search-exploitation conversion, and exploitation. The algorithm has a simple principle, few parameters, and good global search capability. Therefore, HHO has been applied in image segmentation [48], neural network training [49], motor control [50], and other fields. However, like other swarm intelligence optimization algorithms, HHO suffers from slow convergence, low optimization accuracy, and a tendency to fall into local optima when solving complex optimization problems. Several improvements have been proposed. The literature [51] used an information exchange mechanism to enhance population diversity, thus improving convergence speed; the resulting HHO with information exchange (IEHHO) algorithm is, however, limited by the difficulty of setting its parameters. Zhang et al. [52] introduced an exponentially decreasing strategy to update the energy factor, increasing the exploration and exploitation capability obtained from relatively higher values of the escaping energy. Elgamal et al. [53] made two improvements: (1) they applied chaotic mapping in the initialization phase of HHO, and (2) they used the SA algorithm on the current best solution to improve HHO exploitation. Song et al. [54] applied Gaussian mutation and the dimension decision strategy of the cuckoo search method to increase HHO's performance: the cuckoo search mechanism improved the convergence speed of the search agents and the exploitation of solutions in the search area, while the Gaussian mutation strategy increased accuracy and helped the algorithm jump out of local optima.
However, according to the no free lunch theory [55], one meta-heuristics algorithm cannot always perform as the best on all operations. The original HHO method could not fully balance the exploration and exploitation phases, which resulted in insufficient global search capability and slow convergence of the HHO method. To alleviate these adverse effects, we propose an improved algorithm model called chaotic multi-strategy search HHO (CSHHO), which introduces chaotic mapping and global search strategy, to solve single-objective optimization problems efficiently.
Here, the initialization phase of HHO is replaced by chaotic mapping, which allows the population initialization phase to be evenly distributed in the upper and lower bounds to enhance the population diversity, simultaneously enabling the population to approach the prey location faster, which accelerates the convergence speed of the algorithm. The adaptive weights are added to the position update formula in the exploration phase of HHO to dynamically adjust the influence of the global optimal solution. In the update phase of the HHO, the optimal neighborhood perturbation strategy is introduced to prevent the algorithm from falling into the local optimal solution and to solve the premature aging phenomenon.
To verify the performance of the CSHHO algorithm, this experiment first tests the effect of common chaotic mappings on the HHO algorithm's performance. The selected chaotic mappings are Sinusoidal, Tent, Kent, Cubic, Logistic, Gauss, and Circle, and the experimental results show that Gauss chaotic mapping improves the accuracy of the HHO algorithm to the greatest extent. Second, the HHO algorithm based on Gauss chaotic mapping with multi-strategy search is tested. Then, it is compared with other classic and state-of-the-art algorithms on the 23 classic test functions and the 30 IEEE CEC2017 competition functions, and its significant superiority over the other algorithms is verified by the Friedman test and the Bonferroni-Holm corrected Wilcoxon signed-rank test. Finally, CSHHO is applied to model the reactive power output of a synchronous condenser based on LSSVM. The complete results show that the proposed optimizer is more effective than the other models in the experiment.
The remainder of this paper is organized as follows: Section 2 introduces the basic theory and structure of the original HHO algorithm. Section 3 introduces the chaotic operator and the global search strategies and integrates them into the original optimizer. Section 4 conducts a full range of experiments on the proposed method, presents the experimental results, and discusses the proposed method in light of them. Section 5 applies the proposed method to the LSSVM-based synchronous condenser reactive power output problem. Finally, Section 6 summarizes the study and proposes directions for future research.

Harris Hawks Optimization Algorithm
The HHO algorithm is a swarm intelligence optimization algorithm that is widely used in solving optimization problems. The main idea of the algorithm is derived from the cooperative behavior and chasing strategy of Harris's hawks when catching prey in nature [24]. In the process of prey capture, the HHO algorithm is divided into two segments according to the physical energy E of the prey at the time of escape: the exploration and exploitation phases, as shown in Figure 1. During the exploration phase, Harris's hawks randomly select a perching location to observe and monitor their prey:

X(t + 1) = X_rand(t) − r1 |X_rand(t) − 2 r2 X(t)|, q ≥ 0.5
X(t + 1) = (X_rabbit(t) − X_m(t)) − r3 (lb + r4 (ub − lb)), q < 0.5 (1)

where X_rabbit(t) and X_rand(t) denote the position of the prey and a randomly selected individual position at time t, respectively, r1, r2, r3, and r4 are random numbers in (0, 1), and q is a random number in (0, 1) that selects between the two strategies. The average individual position is

X_m(t) = (1/N) Σ_{i=1}^{N} X_i(t) (2)

where X_i(t) is the position of the i-th hawk and N is the population size. As the physical capacity of the prey decreases, the exploration phase changes into the exploitation phase; the prey's physical energy factor E is

E = 2 E0 (1 − t/T) (3)

where E0 is the initial energy of the prey and T is the maximum number of iterations. In the exploitation phase, the Harris's hawk launches a surprise attack on the target prey found in the exploration phase, and the prey tries to escape when it encounters danger.
Let the randomly generated prey escape probability be r: when r < 0.5, the prey successfully escapes; when r ≥ 0.5, it does not. According to the magnitudes of r and |E|, four different location update strategies are used in the exploitation phase (see Table 1).
According to the position update conditions of the HHO algorithm, the position of the Harris's hawk is updated continuously and its fitness value is calculated from its position. If the fitness threshold is reached, the algorithm terminates; otherwise it continues to execute, and once the maximum number of iterations is reached, the algorithm terminates and the optimal solution is returned (see Figure 2).
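The phase and strategy selection described above can be sketched in a few lines of Python; the energy update follows the standard HHO formulation E = 2 E0 (1 − t/T), and the function names here are ours, not the paper's.

```python
def escaping_energy(E0, t, T):
    """Prey escaping energy of HHO: E = 2*E0*(1 - t/T)."""
    return 2.0 * E0 * (1.0 - t / T)

def select_strategy(E, r):
    """Choose the HHO phase/strategy from |E| and the escape chance r (Table 1)."""
    if abs(E) >= 1:
        return "exploration"
    if r >= 0.5 and abs(E) >= 0.5:
        return "soft besiege"
    if r >= 0.5 and abs(E) < 0.5:
        return "hard besiege"
    if r < 0.5 and abs(E) >= 0.5:
        return "soft besiege with progressive rapid dives"
    return "hard besiege with progressive rapid dives"
```

Because |E| shrinks linearly toward 0 as t approaches T, the algorithm naturally drifts from exploration to increasingly tight besiege strategies.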

Table 1. Location update strategies in the exploitation phase.

Strategy                                     | Values of r and |E|
Soft besiege                                 | r ≥ 0.5 and |E| ≥ 0.5
Hard besiege                                 | r ≥ 0.5 and |E| < 0.5
Soft besiege with progressive rapid dives    | r < 0.5 and |E| ≥ 0.5
Hard besiege with progressive rapid dives    | r < 0.5 and |E| < 0.5

Reasons for Improving the Basic HHO Algorithm
Harris's hawks generally gather high in trees to hunt for prey. While hunting, they often hover in a spiral to locate prey; when approaching prey, they rush toward it at high speed until the distance to the prey is small, then slow down and adjust their body posture to increase the probability of a catch [56][57][58]. This mechanism is important in the HHO algorithm. The exploration phase of the basic HHO algorithm uses Equations (1)-(3), in which the optimal solution from the previous iteration affects the current solution and can cause the algorithm to fall into a local optimum. Searching for prey in a linear way leads to a single search result. Over all iterations of the algorithm, the optimal position is updated only when the algorithm finds a better solution than the current one, so the overall number of updates of the optimal position is low, which reduces the efficiency of the search. In reality, when a Harris's hawk chases its prey, it hovers and descends in a spiral manner to catch its prey adaptively, showing better agility when hunting.
Here, the optimal neighborhood disturbance strategy is introduced to enhance the convergence speed of the algorithm and avoid premature convergence. The adaptive weighting and variable spiral position update strategies are introduced to enhance the global search capability of the algorithm by simulating the predation process of Harris's hawks in nature. To make the initial solutions generated in the population initialization phase of the HHO algorithm cover the solution space as fully as possible, we select the chaotic mapping method best suited to HHO among seven commonly used chaotic mapping population initialization methods and use it as the population initialization method of the improved algorithm. Together, these four methods improve the global search capability of the HHO algorithm and increase the speed of the Harris's hawk's search for the optimal solution.

Chaotic Mapping
Chaos is deterministic, random-like behavior found in non-periodic, non-convergent, bounded nonlinear dynamic systems. Mathematically, chaos is the randomness of a simple deterministic dynamic system, and a chaotic system can be considered a source of randomness. Chaos appears random and unpredictable, yet it also possesses regularity [59].
As an important part of the algorithm, population initialization directly affects the convergence speed and solution quality [60,61]. For example, a uniform distribution covers the solution space more completely than a random distribution, making it easier to obtain good initial solutions. The classical HHO algorithm uses random population initialization, which cannot cover the whole solution space. A chaotic sequence has ergodicity, randomness, and regularity within a certain range. Compared with random search, a chaotic sequence explores the search space more thoroughly with higher probability, which enables the algorithm to escape local optima and maintain population diversity. Based on the above analysis, to obtain good initial solution positions and speed up the convergence of the population, seven common chaotic mappings, Sinusoidal, Tent, Kent, Cubic, Logistic, Gauss, and Circle [62][63][64][65][66][67][68][69], were selected and used to initialize the population of the HHO algorithm. The results were analyzed, and the mapping best suited to the HHO algorithm was selected as the population initialization method for the improved algorithm. The mathematical formulas of the seven chaotic mappings were as follows:
(1) Sinusoidal chaotic mapping:
x_{n+1} = P x_n^2 sin(π x_n) (14)
where P was the control parameter; here P = 2.3 and x_0 = 0.7, and Equation (14) was simplified as
x_{n+1} = sin(π x_n) (15)
(2) Tent chaotic mapping:
x_{n+1} = 2 x_n, 0 ≤ x_n < 0.5; x_{n+1} = 2 (1 − x_n), 0.5 ≤ x_n ≤ 1 (16)
(3) Kent chaotic mapping:
x_{n+1} = x_n / µ, 0 < x_n ≤ µ; x_{n+1} = (1 − x_n) / (1 − µ), µ < x_n < 1 (17)
The control parameter µ ∈ (0, 1); when µ = 0.5 the system was in a short-period state, so µ = 0.5 was not used here. When using this chaotic mapping, the initial value x_0 could not equal the system parameter µ, otherwise the system evolved into a periodic one. Here, we took µ = 0.4.
(4) Cubic chaotic mapping: the standard Cubic chaotic mapping function was expressed as
x_{n+1} = b x_n^3 − c x_n (18)
where b and c were the influence factors of the chaotic mapping. The range of the Cubic chaotic mapping differed for different values of b and c. When c = 3, the sequence generated by the Cubic mapping was chaotic; moreover, when b = 1, x_n ∈ (−2, 2), and when b = 4, x_n ∈ (−1, 1).
Step 2: Iterate each dimension of each Harris hawk M times, resulting in M Harris hawks.
Step 3: After all Harris hawk iterations were completed, the mapping of Equation (22) was applied to transform the chaotic variables into the solution space.
x_id = lb + y_id (ub − lb) (22)
where ub and lb were the upper and lower bounds of the exploration space; y_id, the d-th-dimensional chaotic coordinate of the i-th Harris hawk, was generated using Equations (14)-(21), and x_id, the coordinate of the i-th Harris hawk in the d-th dimension of the exploration space, was generated using Equation (22). Here, we first propose the HHO algorithm based on seven different chaotic initialization strategies, called the chaotic initialization Harris hawks optimization (CIHHO) algorithm. The implementation of CIHHO is essentially the same as that of HHO, except that the initialization in Step 2 generates M individual Harris hawks using Equations (14)-(21) and then maps the positions of these M Harris hawks into the search space of the population using Equation (22).
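The CIHHO initialization steps above can be sketched as follows. This is a minimal illustration under our own naming, using the Logistic map for the chaotic iteration (the paper ultimately selects the Gauss map); the final line plays the role of the inverse mapping of Equation (22).

```python
import numpy as np

def logistic_map(x):
    # Logistic chaotic map, one of the seven maps considered: x_{n+1} = 4*x_n*(1 - x_n)
    return 4.0 * x * (1.0 - x)

def chaotic_init(n_hawks, dim, lb, ub, chaos=logistic_map, x0=0.7):
    """Sketch of chaotic initialization: iterate a chaotic map to fill
    chaotic variables y in [0, 1], then map them into [lb, ub]."""
    y = np.empty((n_hawks, dim))
    x = x0
    for i in range(n_hawks):
        for d in range(dim):
            x = chaos(x)
            y[i, d] = x
    # inverse mapping into the search space: x_id = lb + y_id * (ub - lb)
    return lb + y * (ub - lb)

# e.g., 30 hawks in a 5-dimensional search space bounded by [-100, 100]
pop = chaotic_init(30, 5, lb=-100.0, ub=100.0)
```

Swapping `chaos` for a different map function is all that is needed to compare the seven initialization variants.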

Adaptive Weight
Inspired by the Harris's hawk hunting strategy, we added an adaptive weight, varying with the number of iterations, to the position update of the Harris's hawk. In the early stage of the exploration phase of HHO, the influence of the optimal Harris's hawk position on the current individual position adjustment is weakened to improve the early global search ability of the algorithm. As the number of iterations increases, the influence of the optimal Harris's hawk position gradually increases, so that the other Harris's hawks can converge quickly to the optimal position, improving the convergence speed of the whole algorithm. According to the variation of the number of updates in the HHO algorithm, an adaptive weight composed of the iteration number t was chosen. This adaptive weight w(t) varies nonlinearly within [0, 1], owing to the variation of the cos function over [0, π/2]: the weight is small at the beginning of the exploration phase but changes relatively quickly, while at the end of the exploration phase its value is larger but its rate of change slows down, so that the convergence of the algorithm is fully guaranteed. The position update of the improved HHO algorithm dynamically adjusts the weight as the number of iterations increases, so that the randomly selected Harris's hawk position X_rand(t) and the difference between the optimal and average positions, X_rabbit(t) − X_m(t), guide individual Harris's hawks differently at different times. As the number of iterations increases, the Harris's hawk population moves closer to the optimal position, and the larger weight speeds up the movement of Harris's hawk positions, accelerating the convergence of the algorithm.
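Since Equation (23) itself is not reproduced above, the sketch below shows one plausible cos-based weight with the stated properties, growing nonlinearly from 0 to 1, changing faster early in the run and more slowly near the end; the paper's actual formula may differ.

```python
import math

def adaptive_weight(t, T):
    """An assumed cos-based adaptive weight (illustration only, not the
    paper's Equation (23)): w(0) = 0, w(T) = 1, with the rate of change
    fastest at the start of the run and slowest at the end."""
    return math.cos((math.pi / 2.0) * (1.0 - t / T))
```

Any weight with these monotonicity and curvature properties would produce the behavior described: weak early guidance by the best position, strong late guidance.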

Variable Spiral Position Update
In the search phase of the HHO algorithm, the Harris's hawk randomly searches for prey using two equal-probability strategies based on the target location and its own location. However, in nature, Harris's hawks generally hover in a spiral to search for prey. To simulate the real process of prey search in nature, we introduced a variable spiral position update strategy into the search phase of the HHO algorithm, so that the Harris's hawk adjusts the distance of each position update according to the spiral shape between the target position and its own position (see Figure 3). In the exploration phase of the HHO algorithm, Equation (1), a parameter b was introduced to control the shape of the spiral; if this parameter were set to a constant, each position update of the Harris's hawk would follow a fixed spiral arc for speed adjustment.
However, if b were set to a constant value, the spiral movement of the Harris's hawk when searching for prey would be too uniform: it would approach the target along a fixed spiral line every time, which would easily trap it in a local optimum and weaken the global exploration ability of the algorithm. To address this, we introduced the idea of a variable spiral search, enabling the Harris's hawk to follow more diverse search paths during position updates. The parameter b was designed as a variable that changes with the number of iterations, dynamically adjusting the shape of the spiral as the Harris's hawk explores and increasing its ability to explore unknown areas, thereby improving the global search capability of the algorithm. After combining the adaptive weights, the new spiral position update was created (see Equation (25)).
The parameter b was designed based on the mathematical model of the spiral: by introducing the iteration number into the original spiral model, the spiral shape was dynamically adjusted so that it shrank from large to small as the number of iterations increased. Early in the exploration phase of the HHO algorithm, the Harris's hawk searches for the target with a large spiral shape, exploring the global optimal solution as widely as possible to improve the global search capability of the algorithm; later in the exploration phase, the Harris's hawk searches for the target with a small spiral shape to improve the search accuracy of the algorithm.
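As a hedged sketch of this idea (the paper's Equation (25) also folds in the adaptive weight and may differ in detail), a variable-spiral move with a shape parameter b that shrinks over the iterations could look like the following; the linear schedule and the bounds `b_max`, `b_min` are our assumptions.

```python
import math
import random

def spiral_shape(t, T, b_max=2.0, b_min=0.5):
    # Assumed schedule: b shrinks linearly with t, so the spiral is wide
    # early (global search) and tight late (local refinement).
    return b_max - (b_max - b_min) * t / T

def spiral_step(x, x_rabbit, t, T):
    """One spiral move toward the prey position x_rabbit (1-D illustration)."""
    b = spiral_shape(t, T)
    l = random.uniform(-1.0, 1.0)        # random point on the spiral
    d = abs(x_rabbit - x)                # distance to the prey
    return d * math.exp(b * l) * math.cos(2.0 * math.pi * l) + x_rabbit
```

With a fixed b, every approach would trace the same family of arcs; letting b decay diversifies the paths exactly as the paragraph above argues.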

Optimal Neighborhood Disturbance
When updating its position, the Harris's hawk generally takes the current optimal position as the target of the iteration. Over the whole run, the optimal position is updated only when a better position is found; thus, the total number of updates is small, which makes the algorithm's search inefficient. In this regard, an optimal neighborhood disturbance strategy was introduced to search the neighborhood of the optimal position randomly for a better global value, which not only improves the convergence speed of the algorithm but also avoids premature convergence. A random disturbance was applied to the optimal position to increase the search of the nearby space, where h and g were random numbers uniformly generated in [0, 1] and X(t) was the newly generated position. For the generated neighborhood position, a greedy strategy was used to decide whether to keep it, where f(x) was the fitness value at position x. If the generated position was better than the original position, it replaced the original position and became the global optimum; otherwise, the optimal position remained unchanged.
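Because Equation (27) is not shown above, the perturbation form in the sketch below is only an assumption built from the stated ingredients (two uniform random numbers h and g acting on the best position); the greedy acceptance mirrors the selection rule just described.

```python
import random

def disturb_best(x_best):
    """Assumed multiplicative jitter around the best position (illustration
    only; the paper's exact Equation (27) may differ)."""
    h, g = random.random(), random.random()   # h, g ~ U[0, 1]
    return x_best + (2.0 * h - 1.0) * g * x_best

def greedy_keep(x_best, x_new, f):
    """Greedy selection: keep the perturbed point only if it improves the
    fitness f (minimization assumed)."""
    return x_new if f(x_new) < f(x_best) else x_best
```

The greedy step guarantees the disturbance can never worsen the incumbent best, so the strategy adds exploration around the optimum at no risk to convergence.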

Computational Complexity
The computational complexity of the population initialization process of the classical HHO algorithm is O(N), and the computational complexity of its update mechanism is O(T × N) + O(T × N × D), so the computational complexity of the classical HHO algorithm is O(N × (T + TD + 1)), where T is the maximum number of iterations and D the dimension of the specific problem. The computational complexity of the population initialization process of the CSHHO algorithm is O(ND), and the computational complexity of its update mechanism is the same as that of the classical HHO algorithm, so the computational complexity of the CSHHO algorithm is O(N × (T + TD + D)).

Algorithm Procedure
Algorithm 1 shows the procedure of the CSHHO optimization algorithm.

Algorithm 1: CSHHO algorithm
Input: The population size N, maximum number of iterations T.
Output: The location of the rabbit and its fitness value.
Initialize the population with the chaotic map: generate chaotic variables y_i^k ∈ [0, 1], k = 1, 2, . . . , M (M is the initial population dimension), via Equations (14)-(21);
Map the chaotic vectors into the corresponding solution space via the inverse mapping of Equation (22);
while (stopping condition is not met) do
    Calculate the fitness values of the hawks;
    Set X_rabbit as the best location (rabbit);
    for (each X_i) do
        Update E using Equation (3);
        if (|E| ≥ 1) then
            Update the vector X_i using Equations (25) and (2);
        end
        if (|E| < 1) then
            Randomly generate the parameter r;
            if (r ≥ 0.5 and |E| ≥ 0.5) then
                Update the vector X_i using Equations (4)-(6);
            else if (r ≥ 0.5 and |E| < 0.5) then
                Update the vector X_i using Equation (7);
            else if (r < 0.5 and |E| ≥ 0.5) then
                Update the vector X_i using Equations (8)-(11);
            else if (r < 0.5 and |E| < 0.5) then
                Update the vector X_i using Equations (12) and (13);
            end
        end
    end
    Apply the optimal neighborhood disturbance using Equations (27) and (28);
end
Return X_rabbit;

Experiments and Discussion
In this section, different categories of experiments were designed to verify the performance of the proposed optimizer, CSHHO. Because of the randomness of the HHO algorithm, this section used a sufficiently broad set of test functions to ensure that the superior results of the CSHHO algorithm did not occur by accident. Therefore, two different benchmark suites were used: the classical 23 well-known benchmark functions [70,71] and the standard IEEE CEC 2017 suite [72]. The experiments were as follows:
Experiment 1: First, seven chaotic mappings were used, separately, as the initialization method of the HHO population and tested. Second, the seven sets of results were analyzed, and the optimal chaotic mapping was selected as the population initialization method of the improved algorithm.
Experiment 2: First, on the basis of Experiment 1, a combination test of the adaptive weighting mechanism, the variable spiral position update, and the optimal neighborhood disturbance mechanism was executed. Second, we compared the CSHHO algorithm with other recently proposed meta-heuristic algorithms such as HHO [24], WOA [38], SCA [73], and Chicken Swarm Optimization (CSO) [74]. Third, we compared the CSHHO algorithm with developed advanced variants: HHO with dimension decision logic and Gaussian mutation (GCHHO) [54], the Hybrid PSO Algorithm with Adaptive Step Search (DEPSOASS) [75], the Gravitational search algorithm with a linearly decreasing gravitational constant (Improved GSA) [76], and the Dynamic Generalized Opposition-based Learning Fruit Fly Algorithm (DGOBLFOA) [77]. Fourth, based on the third step, IEEE CEC 2017 was used to perform an accuracy scalability test with test dimensions D = 50 and D = 100.
To ensure the fairness of the experiments, all algorithms were evaluated with the same parameters: the population size N was set to 30, the dimension D to 30, and each algorithm was run 50 times independently on each test instance. In each run, the function error value was log(F(x) − F(x*)), where F(x) was the mean best value found over the runs and F(x*) the optimal value recorded for the 23 benchmark functions. The average error (Mean) and standard deviation (Std) of the function error values were used as the two performance metrics for evaluating the algorithms over all runs. Experimental environment: Intel(R) Xeon(R) CPU E5-2680 v3 (2.50 GHz), 16.00 GB RAM, MATLAB R2019b.
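The Mean/Std error metrics described above can be computed as in the following sketch (function and variable names are ours):

```python
import statistics

def error_metrics(best_values, f_star):
    """Mean and (population) standard deviation of the function error values
    F(x) - F(x*) over a set of independent runs."""
    errors = [fx - f_star for fx in best_values]
    return statistics.mean(errors), statistics.pstdev(errors)

# e.g., three hypothetical run results against a known optimum of 0
mean_err, std_err = error_metrics([1e-3, 2e-3, 3e-3], f_star=0.0)
```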

Benchmark Functions Verification
All experiments used the classical 23 test functions [70,71] to test each algorithm's convergence speed and search accuracy. These benchmark functions were divided into two categories: unimodal (UM) and multi-modal (MM) functions. F1-F7 were the UM functions, which have a unique global optimum and were used to test the exploitation performance of the optimization algorithms. F8-F23 were the MM functions, which were used to test the exploration performance and local optimum (LO) avoidance potential of the optimization algorithms. As the complexity of the test functions increased, the tested algorithms became more likely to fall into local optima, and together the test functions evaluated the performance of the tested algorithms in various respects. The convergence curves and test values of the corresponding test functions are given. Appendix A lists the classical 23 test functions.
IEEE CEC 2017 functions were also used in Experiment 2 to evaluate the scalability of CSHHO, the other meta-heuristic algorithms, and the advanced HHO variants. The IEEE CEC 2017 benchmark functions are classified into four categories: three UM functions (F1-F3), seven MM functions (F4-F10), ten hybrid functions (F11-F20), and ten composite functions (F21-F30). To evaluate the scalability of each algorithm more comprehensively, the dimensions of the benchmark functions were set to D = 50 and D = 100; Table 2 records the corresponding accuracy values together with the function formulas for IEEE CEC 2017.
In addition, to compare the performance of the various algorithms, a rank was assigned by ordering the mean values of all algorithms in the simulation experiments from lowest to highest: the lower the rank, the better the algorithm performed relative to the others. The Wilcoxon signed-rank test [78] was used to detect whether there was a significant performance difference among the algorithms, with p-values corrected by the Bonferroni-Holm procedure [79]; moreover, the Friedman test [80] was used to rank the overall superiority of the algorithms. The Friedman test values were used to rank all the compared algorithms; when test values were tied, the rankings were averaged. Here, the Friedman test was performed on the classical 23 test functions, and the test values are recorded in the average ranking value (ARV) column.
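The statistical protocol above (pairwise Wilcoxon signed-rank tests with Bonferroni-Holm correction, plus a Friedman test over all functions) can be sketched with SciPy on hypothetical per-function results; the synthetic data and the step-down loop below are illustrative assumptions, not the paper's scripts.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-function mean errors for three algorithms on 23 functions.
alg_a = rng.normal(0.0, 1.0, 23)
alg_b = alg_a + rng.normal(0.5, 0.2, 23)   # consistently worse than A
alg_c = alg_a + rng.normal(1.0, 0.2, 23)   # worse still

# Pairwise Wilcoxon signed-rank tests of A against each competitor.
p_vals = [stats.wilcoxon(alg_a, other).pvalue for other in (alg_b, alg_c)]

# Bonferroni-Holm step-down correction of the pairwise p-values.
order = np.argsort(p_vals)
m = len(p_vals)
adjusted = np.empty(m)
prev = 0.0
for rank, idx in enumerate(order):
    prev = max(prev, (m - rank) * p_vals[idx])  # enforce monotone step-down
    adjusted[idx] = min(prev, 1.0)

# Friedman test ranks the algorithms jointly over all functions.
friedman = stats.friedmanchisquare(alg_a, alg_b, alg_c)
```

An adjusted p-value below 0.05 would then be read as a significant pairwise difference, and the Friedman statistic as evidence that the algorithms' rankings differ overall.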

Efficiency Analysis of the Improvement Strategy
First, in the population initialization phase, we selected the Gauss mapping, which had the greatest positive impact on the accuracy of the HHO algorithm among seven commonly used chaotic mappings, as the population initialization method of CSHHO. Second, a global optimization strategy consisting of three components was used to improve the HHO algorithm: the adaptive weight strategy, the variable spiral update strategy, and the optimal neighborhood disturbance strategy. To verify the performance improvement these two modifications bring to the HHO algorithm, six algorithms were used for comparison.
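Since the update formulas are not reproduced at this point, the following sketch only illustrates the general shape of such mechanisms: an iteration-decaying weight, a spiral term whose shape factor varies with the iteration, and a bounded random disturbance around the best solution. Every formula and name here is an illustrative assumption, not the paper's definition.

```python
import numpy as np

def spiral_update(x, x_best, t, t_max, rng):
    """Move x toward x_best along a logarithmic spiral whose shape
    factor varies with the iteration count (illustrative formulas)."""
    w = 1.0 - t / t_max                   # adaptive weight decays over iterations
    b = 1.0 + np.cos(np.pi * t / t_max)   # variable spiral shape factor
    l = rng.uniform(-1.0, 1.0)            # random point on the spiral
    return x_best + w * np.abs(x_best - x) * np.exp(b * l) * np.cos(2.0 * np.pi * l)

def neighborhood_disturbance(x_best, radius, rng):
    """Perturb the current best within a small neighborhood; in the
    algorithm, the perturbed point would replace x_best only if its
    fitness improves (greedy selection)."""
    return x_best + radius * (2.0 * rng.random(x_best.shape) - 1.0)

rng = np.random.default_rng(2)
x = np.full(4, 5.0)
x_best = np.zeros(4)
x_new = spiral_update(x, x_best, t=10, t_max=100, rng=rng)
x_pert = neighborhood_disturbance(x_best, radius=0.1, rng=rng)
```

The intent is that the spiral drives exploitation around the current best while the disturbance keeps injecting small, greedy perturbations that help escape a stagnating optimum.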

Influence of Seven Common Chaotic Mappings on HHO Algorithm
To select the most effective of seven well-known chaotic mapping methods, i.e., the one that yields the best initial solution positions and speeds up the convergence of the Harris hawk population, the Sinusoidal, Tent, Kent, Cubic, Logistic, Gauss, and Circle chaotic mappings were each used to initialize the population of the HHO algorithm, forming Sinusoidal-HHO, Tent-HHO, Kent-HHO, Cubic-HHO, Logistic-HHO, Gauss-HHO, and Circle-HHO, and the accuracy of these seven algorithms was compared. Table 3 presents the results of the seven algorithms on the 23 classical test functions, including the Best, Worst, Mean, Rank, and Std of 50 independent runs of each algorithm, with the better experimental results shown in bold. Table 4 shows the Bonferroni-Holm-corrected probability values p obtained from the Wilcoxon signed-rank test for the seven chaotic-mapping HHO algorithms; the symbols "+\=\−" denote the number of functions on which an algorithm was better than, similar to, or worse than Gauss-HHO. The ARV column in Table 5 gives the Friedman test values for the seven chaotic-mapping HHO algorithms. Analyzing the experimental results, we conclude that under the UM functions (F1-F7), Sinusoidal chaotic mapping achieved optimal results on the F3, F5, and F7 test functions, Circle chaotic mapping achieved optimal results on the F1 and F4 test functions, and Sinusoidal chaotic mapping had the greatest influence on the HHO algorithm, followed by Gauss and Circle chaotic mapping. Under the MM functions (F8-F23), Gauss chaotic mapping had the greatest influence on the HHO algorithm.
Circle, Sinusoidal, Tent, and Kent chaotic mapping obtained the best results on the F21, F15, F20, F13, and F23 test functions; comparing the results of the seven chaotic mappings over the 23 test functions, Gauss chaotic mapping obtained the most optimal solutions. Table 4 shows the Bonferroni-Holm-corrected p-values of the Wilcoxon signed-rank test at the 5% significance level, where "+\=\−" indicates whether Gauss-HHO was worse than, consistent with, or better than Circle-HHO, Sinusoidal-HHO, Tent-HHO, Kent-HHO, Cubic-HHO, and Logistic-HHO. Analyzing the corrected p-values and the "+\=\−" counts in each row of the table, Gauss-HHO obtained better results over the 23 test functions. The experimental results of the seven chaotic-mapping HHO algorithms were also evaluated comprehensively with the Friedman test in Table 5: compared with the other six chaotic-mapping population initialization methods, Gauss-HHO obtained the best average ranking. This indicates that Gauss chaotic mapping not only retains the randomness, ergodicity, and initial-value sensitivity of chaotic mappings in general, but also yields better optimization accuracy when used to initialize the population of the HHO algorithm.
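As an illustration of chaotic population initialization, a common form of the Gauss (also called Gauss/mouse) map is x_{k+1} = (1/x_k) mod 1; the sketch below scales such a chaotic sequence into the search bounds. The map form, the seed value, and the function name are our assumptions, since the paper's exact parameterization is not reproduced here.

```python
import numpy as np

def gauss_map_init(pop_size, dim, lb, ub, x0=0.7):
    """Population initialization with the Gauss (mouse) chaotic map
    x_{k+1} = (1/x_k) mod 1; a sketch of the chaotic-initialization
    idea, not the paper's exact parameterization."""
    seq = np.empty(pop_size * dim)
    x = x0
    for k in range(seq.size):
        x = 0.0 if x == 0.0 else (1.0 / x) % 1.0  # iterate the chaotic map
        seq[k] = x
    chaos = seq.reshape(pop_size, dim)
    return lb + chaos * (ub - lb)                 # scale chaos into [lb, ub]

# Example: a 30-hawk population in 30 dimensions over [-100, 100].
pop = gauss_map_init(30, 30, -100.0, 100.0)
```

Compared with uniform random initialization, the ergodicity of the chaotic sequence is what is credited with spreading the initial population more evenly over the search space.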

Comparison with Conventional Techniques
The Gauss mapping was used for population initialization, and then the adaptive weight mechanism, variable spiral position update mechanism, and optimal neighborhood disturbance mechanism were introduced to form the CSHHO algorithm. To verify the effectiveness of the CSHHO algorithm against swarm intelligence optimization algorithms that have emerged in recent years, in this subsection the CSHHO algorithm is compared with recently published meta-heuristics, including HHO [24], WOA [38], SCA [73], and CSO [74], and the average precision (Mean) and stability (Std) of each algorithm are calculated.
The performance of CSHHO was compared against the other optimization algorithms using a nonparametric test, the Bonferroni-Holm-corrected Wilcoxon signed-rank test. Finally, another nonparametric test, the Friedman test, was used to calculate the ARV values of all participating algorithms and rank them together. As in Experiment 1, this experiment was based on the set of 23 classical test functions (see Table 1). The details of the experiment were consistent with the description at the beginning of this section, and Table 6 shows the detailed experimental results. Additionally, Table 7 gives the corrected Wilcoxon signed-rank test results at the 5% significance level, where the "+\=\−" value is the number of test functions (each run 50 times) on which the CSHHO results were worse than, similar to, or better than those of the comparison algorithm; the Friedman test results are given in Table 8.
The optimal results of the tested algorithms under each benchmark function are marked in bold. Analyzing the data in Table 6, CSHHO had a strong optimization capability compared to the traditional optimization algorithms. Under the MM functions (F8-F23), CSHHO obtained the best optimization results on benchmarks F9-F13, F15-F19, and F21-F23; it explored the optimal regions of these benchmarks and outperformed the other compared optimization algorithms in exploration performance, tying for first place on benchmarks F9 and F11. This showed that CSHHO had strong exploration ability and LO avoidance potential. Table 7 was analyzed to determine whether there was a significant difference between the other algorithms and CSHHO. The "+\=\−" column indicates the number of test functions (each run 50 times) on which HHO, WOA, SCA, and CSO obtained results worse than, similar to, or better than CSHHO: CSHHO achieved better results than HHO on 21 test functions, better than WOA on 22, better than SCA on 23, and better than CSO on 22. The corrected Wilcoxon signed-rank test results at the 5% significance level were then analyzed: if the p-value was greater than 0.05, the algorithm was considered equivalent to CSHHO; otherwise, it was considered significantly different. In most cases the Wilcoxon signed-rank p-values were below 0.05, indicating that the CSHHO algorithm differed significantly from the other compared algorithms; all p-values were adjusted by the Bonferroni-Holm correction. In Table 8, analysis of the Friedman test values showed that the value for CSHHO was 2.57, lower than those of the traditional optimization algorithms.
The CSHHO algorithm's performance was therefore better than that of the other meta-heuristic algorithms. The function error value was defined as log(F(x) − F(x*)), where F(x) was the mean value found over the iterations and F(x*) was the optimum recorded for the 23 benchmark functions. Under the UM functions (F1-F7), the CSHHO algorithm converged with higher accuracy and converged faster than the other algorithms, indicating that its exploitation performance was improved; under the MM functions (F8-F23), the CSHHO algorithm likewise converged with higher accuracy and faster, indicating that its exploration performance was improved. Over F8-F23 the CSHHO algorithm did not fall into a local optimum region from which it could not escape; CSHHO converged faster on F9, F11, and F16 as well as F12, F13, F17, F19, and F21-F23. Although the convergence curve of the CSHHO algorithm was smoother and converged more slowly in the initial iterations, it improved as the search progressed. On F10 and F15, the CSHHO algorithm not only explored the dominant region with a faster convergence speed but also led the other algorithms in search accuracy. Therefore, the CSHHO algorithm benefits from the Gauss chaotic mapping that enhances population initialization, as well as from the adaptive weighting mechanism, variable spiral position update mechanism, and optimal neighborhood disturbance mechanism that enhance the exploration and exploitation abilities of the algorithm: CSHHO is less likely to become trapped in the current search region and has an increased ability to jump out of local optimum regions.

Comparison with HHO Variants
To verify the effectiveness of the CSHHO algorithm against HHO variants proposed in recent years, this subsection compares the CSHHO algorithm with the recently published advanced variants GCHHO, DEPSOASS, Improved GSA, and DGOBLFOA, observing the mean accuracy (Mean) and stability (Std) of each algorithm. Table 9 presents the experimental results. Next, two nonparametric tests, the Wilcoxon signed-rank test and the Friedman test, were used to comprehensively assess the performance differences between CSHHO and GCHHO, DEPSOASS, Improved GSA, and DGOBLFOA. The configuration of the experiments is the same as in Section 4.2.2; "+\=\−" is the number of test functions (each run 50 times) on which the CSHHO results were worse than, similar to, or better than those of the comparison algorithm. Table 10 presents the Bonferroni-Holm-corrected Wilcoxon signed-rank test results, and Table 11 presents the Friedman test results.
Analyzing the data in Table 9, CSHHO has clear advantages over the advanced HHO variants: under the UM functions (F1-F7), CSHHO outperforms the other algorithms on F1-F4, F6, and F7, which indicates that CSHHO has a further enhanced global best-finding capability compared to the advanced variants. Under the MM functions (F8-F23), CSHHO outperforms the remaining algorithms on F9, F10, F11, F16, F17, F18, F19, and F21, which indicates that CSHHO can explore the peak information deeply and effectively, preventing the algorithm from entering a local optimum. In summary, CSHHO has good exploitation and exploration abilities and avoids local optima. Table 10 shows the results of the corrected Wilcoxon signed-rank test at the 5% significance level for CSHHO against GCHHO, DEPSOASS, Improved GSA, and DGOBLFOA. The p-values indicate whether the numerical results of the compared algorithms differ significantly from CSHHO: if the p-value is greater than 0.05, the algorithm's results are considered the same as CSHHO's; otherwise, they are considered significantly different. Analyzing the Bonferroni-Holm-corrected p-values in Table 10, over the 23 classical test functions only GCHHO shows little difference from CSHHO, and only on the F9-F11 and F16-F18 test functions; on the rest the differences are significant. Moreover, analyzing the Friedman test results in Table 11, the value for CSHHO is 1.70, lower than the others, indicating that CSHHO has an optimization advantage over the above algorithms. Figure 5 shows the convergence curves of CSHHO and the variant algorithms GCHHO, DEPSOASS, Improved GSA, and DGOBLFOA on the 23 benchmark functions, covering the UM functions (F1-F7) and MM functions (F8-F23).
The function error value is defined as log(F(x) − F(x*)), where F(x) is the mean value found over the iterations and F(x*) is the optimum recorded for the 23 benchmark functions. Under the UM functions (F1-F7), the CSHHO algorithm has improved convergence accuracy on F1-F4, F6, and F7 compared to the other algorithms, and its convergence speed is better on F1-F7; under the MM functions (F8-F23), the CSHHO algorithm does not fall into a local optimum region from which it cannot escape. On F9, F10, F11, F16, F17, F18, F19, and F21, CSHHO explores the dominant region well and is ahead of the other algorithms in search accuracy. On F9, F10, F11, F12, F13, F21, F22, and F23, CSHHO has smooth convergence curves and a faster convergence speed. On F16-F20, although the convergence speed of the CSHHO algorithm is slow, the convergence curves do not fluctuate strongly, which indicates that the CSHHO algorithm has good search ability and does not become trapped in a local optimum it cannot escape. In summary, the performance of the CSHHO algorithm is further improved compared to the GCHHO algorithm and the other advanced algorithms; compared to these advanced variants, the CSHHO algorithm is effective.

Scalability Test on CSHHO
Dimensionality is an important basis for analyzing how the number of factors to be optimized influences an algorithm, and the purpose of the scalability test is to further verify the overall performance and stability of the optimization model. The experimental subjects in this section are CSHHO and HHO; the 29 CEC 2017 functions [72] are used for scalability experiments at 50 and 100 dimensions, respectively. The experimental parameters and environment are consistent with the previous experiments except for the dimensionality settings, and Table 12 reports the experimental results as Mean and Std.
The better numerical results of CSHHO and HHO are set in bold; when the two are equivalent, both are in bold. Under the UM functions (F1-F2), the numerical results of CSHHO are overall better than those of HHO, and CSHHO maintains its advantage as the number of dimensions increases. Under the MM functions (F3-F9), CSHHO performs better than HHO in both 50 and 100 dimensions, notably in F6 and F7 at 50 dimensions and in F4 at 100 dimensions. In the hybrid functions (F10-F19), CSHHO performs better on all test sets except F11 and F12, where it performs worse than HHO. In the composite functions (F20-F29), it still performs well: its accuracy on F21, F22, and F26 is higher than HHO's, its results on F22, F23, F24, and F26 match HHO's, and its results on F23 (50 dimensions) and F26 (100 dimensions) are lower than HHO's. In general, compared with HHO, CSHHO better balances the exploration and exploitation processes as the number of dimensions increases.

Engineering Application
In this chapter, the proposed CSHHO is applied to model the reactive power output of a synchronous condenser. Due to the large number of UHV DC transmission projects in the power system, to ensure DC to DC power consumption and peaking demand, the scale of conventional units on the receiving AC grid is reduced, the dynamic reactive power support capability of the system is weakened, and the voltage stability margin is reduced [81]. This requires dynamic reactive power compensation devices to have instantaneous reactive power support characteristics in case of system failure, and the synchronous condenser to have reactive power output characteristics to meet the dynamic reactive power compensation requirements of the grid [82]. Modeling the reactive power support capability of a synchronous condenser is of great theoretical significance and practical value for the reactive power control of converter stations in high-voltage DC transmission systems with synchronous condenser.
The existing research methods for modeling the reactive power output of a synchronous condenser are the mathematical analytical model calculation method and the experimental result fitting method [83][84][85][86]; both require large computational effort and have low accuracy, yet few papers have studied the application of LSSVM to this modeling problem. The advantages of the least squares support vector machine (LSSVM) are that it is less likely to fall into local minima and has high generalization ability [87]. Researchers have used various intelligent optimization algorithms to find the optimal kernel function and regularization parameters, including GA [88], Particle Swarm Optimization (PSO) [89], the Free Search algorithm (FS) [90], Ant Colony Optimization (ACO) [91], the ABC algorithm [92], the GWO algorithm [93], and the Backtracking Search optimization Algorithm (BSA) [94]. However, traditional swarm optimization algorithms are prone to falling into local optima and to low convergence accuracy during the search. According to the results above, CSHHO not only reduces the probability of falling into a local optimum and improves convergence accuracy, but also retains the advantages of the basic Harris hawks optimization algorithm: 1. the steadiness of the searching cores; 2. the fruitfulness in the initial iterations; 3. the progressive selection scheme [5].
This paper proposes a CSHHO-LSSVM-based reactive power modeling method based on the numerical characteristics and global search capability of CSHHO. The optimal values of the penalty parameters, kernel function parameters, and loss function parameters of the LSSVM are found by using CSHHO to build the CSHHO-LSSVM model for the reactive power output of the synchronous condenser.
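A hedged sketch of how such a parameter search might be wired up: the fitness minimized by an optimizer like CSHHO would be a cross-validated prediction error of the model as a function of (γ, σ). Below, a plain kernel ridge fit (closely related to LSSVM, but without the bias term) stands in for the LSSVM, synthetic data stand in for the condenser measurements, and a random search stands in for the CSHHO loop; all of these are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def krr_cv_error(X, y, gamma, sigma, folds=5):
    """Mean squared cross-validation error of an RBF kernel ridge fit;
    a stand-in fitness for tuning LSSVM-style parameters (gamma, sigma)."""
    n = len(y)
    idx = np.arange(n)
    errs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        d2 = ((X[train][:, None] - X[train][None, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2.0 * sigma ** 2))
        # Ridge term I/gamma plays the role of the LSSVM regularization.
        alpha = np.linalg.solve(K + np.eye(len(train)) / gamma, y[train])
        d2t = ((X[test][:, None] - X[train][None, :]) ** 2).sum(-1)
        pred = np.exp(-d2t / (2.0 * sigma ** 2)) @ alpha
        errs.append(np.mean((pred - y[test]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (60, 2))           # hypothetical inputs (e.g. excitation current/voltage)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2   # hypothetical target (e.g. reactive power)

# Random search stands in for the CSHHO loop over (gamma, sigma).
candidates = [(10.0 ** g, 10.0 ** s) for g, s in rng.uniform(-1, 2, (20, 2))]
best_err, (best_gamma, best_sigma) = min(
    (krr_cv_error(X, y, g, s), (g, s)) for g, s in candidates)
```

In the paper's setup, CSHHO would replace the random search and the LSSVM (with its loss-function parameter S) would replace the kernel ridge stand-in; the fitness-evaluation structure is the same.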

Principle of LSSVM
Support Vector Machine (SVM) is an ML method based on statistical learning theory, with the kernel function as its core: it implicitly maps the data from the original space to a high-dimensional feature space and then finds a linear relationship in that space [87].
LSSVM is a regression algorithm that extends the basic SVM. Compared with the SVM algorithm, LSSVM requires fewer parameters and is more stable. LSSVM simplifies the complex constraints, which makes the improved SVM more capable of handling data. Moreover, by setting the error sum of squares as the loss function of the algorithm, LSSVM enhances the performance of regression prediction and improves prediction accuracy. Simultaneously, the complexity of the algorithm is reduced, which shortens processing time and provides more flexibility. LSSVM uses a nonlinear model on the basis of SVM. The input data were (x_i, y_i), i = 1, ..., l, where x_i ∈ R^d denoted the input elements, d the dimension, y_i ∈ R the expected output value, and l the total number of inputs; φ(x) denoted the mapping function. The LSSVM optimization objective was

min J(ω, e) = (1/2) ω^T ω + (γ/2) Σ_{i=1}^{l} e_i^2, subject to y_i = ω^T φ(x_i) + b + e_i, (29)

where e_i denoted the error, whose magnitude determined the prediction accuracy, e ∈ R^{l×1} the error vector, and γ the regularization parameter, which determined the penalty on the error. Adding Lagrangian multipliers λ ∈ R^{l×1} to Equation (29), the Lagrangian (30) was expressed as

L(ω, b, e, λ) = J(ω, e) − Σ_{i=1}^{l} λ_i (ω^T φ(x_i) + b + e_i − y_i). (30)

From the KKT conditions, we obtained ω = Σ_i λ_i φ(x_i), Σ_i λ_i = 0, λ_i = γ e_i, and ω^T φ(x_i) + b + e_i − y_i = 0. By eliminating the slack variables e_i and the weight vector ω, the optimization problem was linearized:

[0, Q^T; Q, K + I/γ] [b; A] = [0; Y],

where A = [α_1, α_2, ..., α_l]^T collects the Lagrange multipliers (α_i = λ_i), Q = [1, 1, ..., 1]^T was an l × 1 column vector, and Y = [y_1, y_2, ..., y_l]^T. According to the Mercer condition, K denoted a kernel function: K(x_i, x_j) = φ(x_i)^T φ(x_j), i, j = 1, 2, ..., l. The Radial Basis Function kernel was chosen for the model: K(x_i, x_j) = exp(−||x_i − x_j||^2 / (2σ^2)). Therefore, the nonlinear prediction model was expressed by Equation (34):

y(x) = Σ_{i=1}^{l} α_i K(x, x_i) + b. (34)

When predicting with the least squares support vector regression model, the penalty factor and the radial basis kernel function parameter were the two parameters to be solved.
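The linear system obtained from the KKT conditions can be implemented directly. The following is a minimal sketch of LSSVM regression with the RBF kernel; the function names and the toy data are our own, while the matrix structure follows the equations in this section.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LSSVM KKT system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    l = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((l + 1, l + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(l) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]           # bias b, multipliers alpha

def lssvm_predict(X_new, X, alpha, b, sigma):
    """Prediction model y(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(X_new, X, sigma) @ alpha + b

# Toy usage: fit a smooth 1-D curve.
X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X).ravel()
b, alpha = lssvm_fit(X, y, gamma=100.0, sigma=0.5)
pred = lssvm_predict(X, X, alpha, b, sigma=0.5)
```

Note that training reduces to one dense linear solve, which is why LSSVM is cheaper to fit than a standard SVM with its quadratic program; the price is that the solution is not sparse.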

Simulation and Verification
The reactive power regulation results of a synchronous condenser obtained with the PSCAD/EMTDC simulation software were used as the training and test samples. Table 11 shows the main parameters. The data with serial numbers 9, 14, 26, and 35 in Table 11 were taken as the test samples, and the rest were the training samples.
First, the data were preprocessed, and CSHHO was used to find the optimal penalty parameter, kernel function parameter, and loss function parameter (γ, σ, S) for training the LSSVM; the test samples were then applied to the trained LSSVM, and the regression-fitted prediction model was output. The algorithm flow is shown in Figure 6. Figure 7 compares the outputs of the LSSVM model and the CSHHO-LSSVM model with the test samples, together with the errors of the two models; the output regression of the CSHHO-LSSVM model fitted better and had higher accuracy. To verify the generalization ability of the CSHHO-LSSVM model, it was evaluated by absolute deviation, as shown in Table 13.
The table shows that the absolute deviation of the CSHHO-LSSVM model ranged from 0.0123 to 0.989, indicating that the accuracy of the CSHHO-LSSVM model was high. From the reactive power and system voltage simulation results of the synchronous condenser in Tables 14-16, the maximum absolute error of the reactive power simulation of the CSHHO-LSSVM model was 0.989 Mvar, and the maximum absolute error of the system voltage simulation was 0.0415 kV, both smaller than those of the LSSVM model, indicating that the CSHHO-LSSVM model had higher accuracy and better regression-fitting performance.

Conclusions
Here, we analyzed the shortcomings of the basic HHO algorithm and applied the chaotic mapping population initialization, adaptive weighting, variable spiral position update, and optimal neighborhood disturbance mechanisms to the classical HHO algorithm. The Gauss chaotic mapping population initialization increased the coverage of the solution space by the algorithm's initial solutions, the adaptive weighting mechanism sped up the movement of the Harris hawk population toward the optimal solution, and the variable spiral position update enhanced the search ability of the Harris hawk population. The optimal neighborhood disturbance mechanism increased the algorithm's global search capability and avoided premature convergence. To verify the performance of the four strategies, the experiments were separated into two groups.
First, seven commonly used chaotic mappings were selected for the population initialization of the HHO algorithm: the Sinusoidal, Tent, Kent, Cubic, Logistic, Gauss, and Circle mappings. The HHO algorithm's performance after population initialization with each of these seven mappings was evaluated; with Gauss mapping, its solution accuracy was significantly better than with the other mappings. Second, based on the results of the first set of experiments, the Gauss mapping was used for population initialization, and the adaptive weight, variable spiral position update, and optimal neighborhood disturbance mechanisms were introduced into the initialized algorithm. Next, CSHHO was compared with other classical algorithms, including WOA, SCA, and CSO, and advanced algorithms, including GCHHO, DEPSOASS, Improved GSA, and DGOBLFOA, on the 23 classical test functions, and the means and standard deviations of all algorithms were analyzed. Subsequently, each algorithm's performance was evaluated comprehensively using the Friedman test and the Bonferroni-Holm-corrected Wilcoxon signed-rank test at the 5% significance level, from which the numerical analysis concluded that CSHHO outperformed the other algorithms. In detail, in the population initialization phase, Gauss chaotic mapping had the best results on the F2, F6, F12, F17, and F23 test functions and, compared with the remaining six chaotic mappings, obtained the most optimal solutions; the CSHHO algorithm outperformed HHO on 17 of the 23 classical test functions, WOA on 21, SCA on 23, and CSO on 22, and outperformed GCHHO on 9.
Meanwhile, in the statistical experiments with the advanced and classical meta-heuristics, the ARVs obtained by CSHHO were 1.70 and 2.57, respectively, lower than the values obtained by the other meta-heuristics in the same groups of experiments. Additionally, dimensional scalability tests were conducted for CSHHO on the IEEE CEC 2017 dataset at 50 and 100 dimensions, and the results showed that the improved optimizer effectively handled high-dimensional data with good stability.
Here, the CSHHO algorithm was also applied to the engineering problem of modeling the reactive power output of the synchronous condenser. In view of the heavy computation and low accuracy of the traditional modeling methods, CSHHO-LSSVM was used to model the reactive power output, combining the advantages of LSSVM, which is less likely to fall into local minima and has strong generalization ability, with the high search accuracy and strong global search ability of CSHHO. The excitation current and excitation voltage of the synchronous condenser were used as the inputs of the LSSVM model, and the reactive power and system voltage were used as its outputs. CSHHO was used to find the optimal values of the penalty parameter, kernel function parameter, and loss function parameter of the LSSVM. The experiments showed that the CSHHO-LSSVM model had better accuracy and better regression-fitting performance than LSSVM.
In future work, we will try to improve the convergence speed and search accuracy of the algorithm and balance its exploration and exploitation phases to obtain better search performance. The next step will be to investigate how CSHHO can be used to solve multi-objective optimization problems. In addition, CSHHO can also be used for evolutionary ML, such as parameter tuning of extreme learning machines and convolutional neural networks. Other problems include grid scheduling and 3D multi-objective tracking.

Institutional Review Board Statement:
The study did not involve humans or animals.

Informed Consent Statement:
The study did not involve humans or animals.

Data Availability Statement:
In the paper, all the data generation information has been given in detail in the related chapters.