Abstract
Optimization is a broad field in which researchers develop new algorithms for solving various types of problems, and several popular techniques are continually being improved. Grey wolf optimization (GWO) is one such algorithm: it is efficient, simple to use, and easy to implement. However, GWO has several drawbacks: it can become stuck in local optima, has a low convergence rate, and exhibits poor exploration. Several attempts have been made recently to overcome these drawbacks, and this paper discusses some strategies that can be applied to GWO for this purpose. This article proposes a novel algorithm that enhances the convergence rate, which was poor in GWO, and compares it with other optimization algorithms. GWO also has the limitation of becoming stuck in local optima when applied to complex functions or large search spaces, and these issues are addressed as well. A remarkable factor is that GWO depends heavily on its initialization constraints, such as the population size and the initial positions of the wolves. This study demonstrates improved wolf positions obtained by applying the proposed strategies with the same population size. As a result, the novel algorithm has enhanced exploration capability compared with the other algorithms presented, and statistical results are provided to demonstrate its superiority.
Keywords:
swarm intelligence (SI); metaheuristic algorithm; local search; global search; multilevel inverter; total harmonic distortion (THD)
MSC:
68W99; 65K10
1. Introduction
In the last few decades, the importance of swarm intelligence and metaheuristic algorithms has grown in the optimization community. Swarm intelligence denotes the population-based approach, in which a population of agents converges to produce optimal outcomes. The output of a metaheuristic changes over time and is problem-specific.
Studying swarm intelligence involves looking at the behavior of a decentralized system. It involves a large number of search agents communicating with each other and their environment to accomplish a shared objective [1]. This interdisciplinary field draws on concepts from biology, computer science, and physics to understand how complex behavior can emerge from the interactions of many simple individuals. The study of swarm intelligence has many potential applications, including robotics, distributed computing, and optimization. The concept of swarm intelligence originated from the work of pioneering biologists, such as William Morton Wheeler, who studied the behavior of ants in the early 20th century [2]. However, it was not until the 1950s and 1960s that computer scientists and engineers explored the potential of decentralized systems for solving complicated problems. Gerardo Beni and Jing Wang originally used the term “swarm intelligence” in 1989 to describe the collective behavior of simple agents [3]. Swarm intelligence can be observed in a broad range of natural systems, such as ant colonies, bird flocks, and fish schools [4]. Ant colonies, for example, can efficiently search for food and build complex nests through the interactions of individual ants. Bird flocks can perform coordinated maneuvers, such as turning in unison, through the interactions of individual birds. Fish schools can evade predators and locate food through the interactions of the individual fish.
Metaheuristic algorithms are classified into two classes, i.e., population-based and single solution-based. Optimization algorithms known as population-based metaheuristic algorithms use a population of potential solutions to iteratively search for the best response [5]. These algorithms frequently draw their inspiration from organic phenomena, including evolution, swarm behavior, and immune systems. Some of the population-based metaheuristic algorithms are the genetic algorithm (GA) [6], particle swarm optimization (PSO) [7], ant colony optimization (ACO) [8], differential evolution (DE) [9], and cultural algorithm (CA) [10]. Single solution-based metaheuristic algorithms start with a single candidate solution and then iteratively refine it by making small changes until the optimal solution is found or a stopping criterion is met. These algorithms often employ randomization and probabilistic search techniques to bypass local optima and thoroughly explore the search space. These algorithms frequently take their cues from physical processes such as annealing or movements in space. Some examples of single solution-based metaheuristic algorithms are simulated annealing (SA) [11], tabu search (TS) [12], and harmony search (HS) [13].
A class of optimization approaches known as metaheuristics is used to solve complex problems. They are called metaheuristics because they operate at a higher level than ordinary heuristics: they are sets of heuristics that guide the search for a solution [14]. These algorithms are known for their ability to find approximate solutions to problems that are difficult or impossible to solve exactly. They are extensively utilized in disciplines including operations research, computer science, and engineering. The roots of modern optimization algorithms can be traced to the 1940s and 1950s and the work of George Dantzig, who developed the simplex algorithm, which is still widely used today for solving linear programming problems [15]. Today, metaheuristic algorithms are widely used in several fields. The most popular applications include:
- Engineering: Metaheuristic algorithms are used in engineering to optimize the design of structures, such as bridges and buildings.
- Computer science: Metaheuristic algorithms are used in computer science to optimize the performance of algorithms and systems, such as scheduling and routing.
- Operations research: Metaheuristic algorithms are used in operations research to solve problems in areas such as logistics and supply chain management.
This paper proposes a new metaheuristic algorithm, Levy flight-based improved grey wolf optimization [16], to overcome the limitations faced by grey wolf optimization. GWO has several drawbacks: it can become stuck in local minima, has a low convergence rate, and has poor exploration. To overcome these drawbacks, improved grey wolf optimization (IGWO) was proposed recently, but even IGWO did not provide satisfactory results, especially for complex engineering problems. Therefore, an improved version of IGWO, termed LF-IGWO, is presented in this article, with statistical analysis proving its superiority over the most popular optimization techniques proposed recently.
Section 2 discusses what optimization is and the different techniques of optimization. Section 3 presents grey wolf optimization and improved grey wolf optimization and introduces the newly proposed technique, LF-IGWO. Section 4 implements the technique on benchmark functions and evaluates the results in comparison to other approaches. Section 5 presents the implementation of an engineering problem, the 31-level inverter.
2. Optimization
Optimization is the process of finding the best solution to a problem from a set of possible alternatives [16]. It is a fundamental concept in fields such as engineering, computer science, and operations research. There are many different optimization techniques, each with its own qualities and characteristics. A few well-known techniques include mathematical programming, gradient-based methods [17], and metaheuristic algorithms.
Optimization first emerged from the work of mathematicians and engineers in the 19th century. The first optimization methods were based on calculus and were used to solve problems in engineering and physics. However, optimization did not start to be used in other disciplines, including economics and computer science, until the 20th century.
There are many different optimization strategy types, each with unique characteristics. The most popular types include:
- Mathematical programming: These techniques are based on the use of mathematical models to represent the problem and constraints. The solution is then found by solving the mathematical equations.
- Gradient-based methods: These techniques are based on the use of gradients, which are the directions of the steepest ascent or descent. The solution is found by moving in the direction of the gradient until a local or global optimum is reached.
- Metaheuristic algorithms: These techniques are based on the use of metaheuristics, which are rules of thumb, to guide the search for a solution. The solution is found by iteratively improving an initial solution over time.
Many researchers are moving toward the optimization field because of the no free lunch (NFL) theorem. The NFL theorem states that no single technique or algorithm is suitable for all kinds of problems [18]. Therefore, there is a wide area in which to work, as each problem may lead toward the development of new techniques.
In the optimization field, nature-inspired optimization techniques play a significant role. They are defined as problem-solving methodologies that take inspiration from nature or a natural process [19]. These nature-inspired optimization techniques are further classified, as shown in Figure 1, and a summary of nature-inspired algorithms is given in Table 1.
Figure 1.
The hierarchical distribution of nature-inspired algorithms.
Table 1.
Literature Review.
3. Optimization Algorithm and New Proposed Algorithms
The optimization algorithms used in swarm intelligence place a strong emphasis on the social relationships and interactions that occur between members of the swarms as they search for and pursue food sources. Several SI algorithms have been created and suggested over the past few years. One of the most well-known and frequently employed SI-based methods is Grey Wolf Optimization (GWO). The grey wolf’s natural behavior of looking for the most effective way to pursue prey served as the model for the GWO algorithm, and this leads to a good exploration–exploitation balance. A limitation, however, is that GWO suffers from poor performance in global search. To eliminate this limitation, improved grey wolf optimization (IGWO) was proposed with the help of the dimension learning-based hunting (DLH) search strategy. With DLH, GWO performs better on multi-dimensional functions but is not very efficient on uni-dimensional functions. To resolve this uni-dimensional issue, a new method is proposed in this paper: Levy flight-based improved grey wolf optimization (LF-IGWO). In this section, GWO, IGWO, and LF-IGWO are discussed.
3.1. Grey Wolf Optimization
The metaheuristic grey wolf optimization (GWO) technique was inspired by the communal hunting behavior of grey wolves [36]. To identify a given problem’s global optimum, the algorithm imitates the command hierarchy and hunting style of grey wolves. GWO has been employed to address several optimization problems, including those involving machine learning, image processing, and function optimization.
There are three basic steps in the GWO algorithm: initialization, iteration, and update. The initial population is produced at random during the initialization step. Every member of this population is assigned a fitness value during the iteration phase. In the update step, the individuals are updated based on their fitness values: leaders are selected from those with the highest fitness, and the others follow them to update their positions.
One of the advantages of GWO is its simplicity. In contrast to many other optimization algorithms, GWO requires few parameters to be pre-set. Additionally, compared to other methods, such as differential evolution (DE) and particle swarm optimization (PSO), GWO has been demonstrated to converge more quickly and produce better solutions.
Numerous optimization issues, such as function optimization, image processing, and machine learning, have been tackled with GWO. In function optimization, GWO has been used to optimize benchmark functions. In image processing, GWO has been used for image enhancement, segmentation, and compression. In machine learning, GWO has been used for feature selection, classification, and clustering.
Despite its successes, GWO is not without its limitations. One limitation is that GWO is sensitive to the initial population and can converge to a local optimum if the population is not sufficiently diverse. Another limitation is that GWO may not perform well on problems with a high number of variables or constraints.
Figure 2 shows the hierarchy levels of the grey wolves. As a population-based algorithm, GWO operates on a group of wolves, and they are divided into different levels based on their roles or tasks.
Figure 2.
The hierarchy distribution of the grey wolf.
The dominant wolves are the alphas, and they can be regarded as the group’s leaders. They provide the GWO algorithm’s best-fitting solution. These wolves make choices regarding hunting, planning, and delegating specific tasks to the other wolves in their pack [36]. They are considered the strongest in their pack.
Beta is the group’s next level. These wolves provide the second-best solution after the alpha. This group’s responsibility is to support the alpha group’s decision-making and to lead the two groups below it.
Omega is the lowest category. They play the role of the scapegoat. Their solutions have no bearing on the GWO algorithm’s final result [37]. They are given their opportunities at the end of the task; for example, after hunting, they are permitted to eat only after all the other wolves have finished, because they are at the bottom of the pack and must follow the orders of the dominant wolves. The pseudo code for GWO [37] is given in Algorithm 1:
| Algorithm 1 GWO Algorithm |
| Form the grey wolf population. Evaluate the fitness of every solution. For each iteration: for every grey wolf in the population, generate a new solution, evaluate its fitness, and update the wolf’s location according to the new solution; sort the grey wolves by fitness; update each grey wolf’s location based on the positions of the other grey wolves in the population. Return the best solution as the result. |
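The steps of Algorithm 1 can be sketched in Python as follows. This is an illustrative sketch, not the paper's implementation: the function names, the clamping rule, and the choice of test function are assumptions.

```python
import random

def gwo(fitness, dim, bounds, n_wolves=30, max_iter=200):
    """Minimize `fitness` with a basic grey wolf optimizer."""
    lo, hi = bounds
    wolves = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(max_iter):
        # Sort by fitness; the three best wolves lead the pack.
        wolves.sort(key=fitness)
        alpha, beta, delta = (w[:] for w in wolves[:3])
        a = 2 - 2 * t / max_iter              # decreases linearly from 2 to 0
        for w in wolves:
            for d in range(dim):
                new = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a
                    C = 2 * random.random()
                    D = abs(C * leader[d] - w[d])   # distance to leader
                    new += leader[d] - A * D        # move relative to leader
                w[d] = min(max(new / 3, lo), hi)    # average and clamp
    return min(wolves, key=fitness)
```

For example, `gwo(lambda x: sum(v * v for v in x), 2, (-10.0, 10.0))` searches for the minimum of the sphere function near the origin.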
3.2. Improved Grey Wolf Optimization
In GWO, the omega wolves are led by the α, β, and δ wolves to the areas of the search space where the optimal solution is most likely to be found. This behavior can trap the search in a locally optimal solution. A decline in population diversity, which causes GWO to fall into a local optimum, is another adverse effect. Improved grey wolf optimization was introduced to address this problem.
IGWO algorithms typically involve modifications to the original GWO algorithm in one or more areas such as initialization, updating steps, and adaptation.
In the initialization phase, IGWO algorithms may use a different method for initializing the population, for example, combining random initialization with other seeding strategies [6].
Movement phase: An additional movement strategy that is part of I-GWO is the dimension learning-based hunting (DLH) search method [38]. In DLH, each wolf treats the other wolves in its neighborhood as candidates for its new position [39].
Dimension learning-based hunting (DLH) search strategy: In the original GWO, the three pack-leader wolves are used to produce a new position for each wolf. As a result, the population loses diversity too soon, GWO displays delayed convergence, and the wolves become caught in local optima. The proposed DLH search approach takes these flaws into account and includes an individual hunting step in which each wolf also learns from its neighbors.
The IGWO procedure, which addresses these shortcomings of the original GWO algorithm, is summarized in Algorithm 2:
| Algorithm 2 IGWO Algorithms |
| Initialize the grey wolf population (solutions). Evaluate the fitness of every solution and sort the solutions in decreasing fitness order. Assign the best solution to the α wolf, the second best to the β wolf, and the third best to the δ wolf. For each iteration: generate new solutions for each grey wolf; evaluate the fitness of each new solution; update each grey wolf’s location according to its new solution, using a combination of the α, β, and δ wolves’ solutions together with the neighborhood information; evaluate the fitness of the updated solutions; sort the grey wolves by their latest fitness; update the α, β, and δ wolves based on the new sorting. Return the best solution (the α wolf) as the result. |
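The DLH step can be sketched as below. This is one illustrative reading of the strategy (the function name, the fallback rule for an empty neighborhood, and the learning formula are assumptions, not taken from the paper): each wolf builds a neighborhood whose radius is its distance to its GWO candidate, then learns each dimension from a randomly chosen neighbor.

```python
import math
import random

def dlh_candidate(xi, gwo_cand, population):
    """Dimension learning-based hunting: build a candidate for wolf `xi`."""
    # Neighborhood radius: distance between the wolf and its GWO candidate.
    radius = math.dist(xi, gwo_cand)
    neighbours = [w for w in population
                  if w is not xi and math.dist(xi, w) <= radius]
    if not neighbours:                    # fall back to the rest of the pack
        neighbours = [w for w in population if w is not xi]
    xn = random.choice(neighbours)        # a neighbour to learn from
    xr = random.choice(population)        # a random wolf from the pack
    # Dimension learning: shift each coordinate by a scaled difference.
    return [xi[d] + random.random() * (xn[d] - xr[d])
            for d in range(len(xi))]
```

I-GWO then keeps whichever of the GWO candidate and the DLH candidate has the better fitness, which is what preserves diversity.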
3.3. Levy Flight Improved Grey Wolf Optimization
Levy flight is a type of random walk in which the step sizes are drawn from a Levy distribution, a probability distribution with heavy tails. This means that the distribution has a high probability of generating large steps, and these large steps occur more frequently than they would under a normal distribution [40].
The Levy distribution is characterized by a power-law tail; for large deviations from the location parameter, its probability density function behaves as
f(x; μ, σ, γ) ∝ σ^γ / |x − μ|^(1 + γ),
where:
- μ is the location parameter (the center of the distribution);
- σ is the scale parameter (the spread of the distribution);
- γ is the tail index parameter (controls the shape of the distribution).
The tail index parameter γ determines the shape of the distribution. When γ is between 0 and 2 [36], the distribution has infinite variance, which means that the variance of the distribution is undefined. When γ is greater than 2, the distribution has finite variance, and when γ is less than or equal to 1, the distribution also has an infinite mean.
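In practice, Levy-distributed step lengths are commonly generated with Mantegna's algorithm; a minimal sketch follows. The tail index is written `beta` here and plays the role of γ above; the function name is illustrative.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-flight step length via Mantegna's algorithm."""
    # Scale for the numerator Gaussian, chosen so u / |v|^(1/beta)
    # approximately follows a heavy-tailed Levy-stable distribution.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

The heavy tail means most steps are small but occasional steps are very large, which is exactly what gives the flight its exploration power.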
With the help of this technique, a random position is chosen and updated to improve the exploration capability. The Levy flight position update for a wolf can be written as:
xi(t + 1) = xi(t) + s ⊕ Levy(γ),
where s is a step-size scaling factor and ⊕ denotes entry-wise multiplication.
With the help of these equations, the new position is initialized and given to the wolves. The algorithm updates each wolf’s position after each iteration using the following (Equation (7)) [41]:
For the alpha wolf:
xi(t + 1) = xα(t) − Aα · Dα
Similarly, the next position with respect to the β, δ, and ω wolves is calculated using the corresponding distance vector D. Here, xi(t) is the current position of the ith wolf at time t, xi(t + 1) is the updated position of the ith wolf at time t + 1, Aj is the scaling factor for the jth wolf, and Dj is the distance vector between the ith wolf and the jth wolf. The scaling factors Aj are updated in each iteration as follows:
Aj = 2a · u − a
where u is a uniform random number in the range [0, 1], a is a control coefficient that decreases linearly from 2 to 0 over the iterations, and j corresponds to the alpha, beta, delta, and omega wolves.
The distance vector Dj is calculated as follows:
Dj = |Cj · xj(t) − xi(t)|
where Cj is a random vector in the range [0, 1] that is generated for each wolf in each iteration.
The algorithm terminates when a stopping criterion is met, such as a maximum number of iterations or a minimum level of improvement in the objective function [42]. The pseudo code for the algorithm is given below in Algorithm 3:
| Algorithm 3 Proposed LF-IGWO Algorithm |
| Initialize the grey wolf population (solutions). Evaluate the fitness of every solution and sort the solutions in decreasing fitness order. Assign the best solution to the α wolf, the second best to the β wolf, and the third best to the δ wolf. Evaluate the new positions of the wolves and update them from the initial positions. For each iteration: generate new solutions for each grey wolf using Levy flight; evaluate the fitness of every new solution; update each grey wolf’s location according to its new solution; update the α wolf’s location based on a combination of its solution and the solutions of the β and δ wolves; evaluate the fitness of the updated solutions; sort the grey wolves by their latest fitness; update the α, β, and δ wolves based on the new sorting. Return the best solution (the α wolf) as the result. |
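A compact sketch of the main LF-IGWO loop is given below, assuming the Levy step simply perturbs each GWO candidate before a greedy accept/reject. The helper name, the step scale, the clamping rule, and the greedy-selection detail are illustrative assumptions, not the paper's exact procedure.

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed step length.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def lf_igwo(fitness, dim, bounds, n_wolves=30, max_iter=200, step_scale=0.01):
    lo, hi = bounds
    wolves = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(max_iter):
        wolves.sort(key=fitness)
        leaders = [w[:] for w in wolves[:3]]     # alpha, beta, delta
        a = 2 - 2 * t / max_iter                 # linearly decreasing
        for w in wolves:
            cand = []
            for d in range(dim):
                x = 0.0
                for ldr in leaders:
                    A = 2 * a * random.random() - a
                    D = abs(2 * random.random() * ldr[d] - w[d])
                    x += ldr[d] - A * D
                # Levy perturbation on top of the averaged GWO move.
                x = x / 3 + step_scale * levy_step() * (hi - lo)
                cand.append(min(max(x, lo), hi))
            # Greedy selection: keep the candidate only if it improves.
            if fitness(cand) < fitness(w):
                w[:] = cand
    return min(wolves, key=fitness)
```

The greedy selection keeps the occasional very large Levy steps from destroying good solutions while still letting them discover new basins.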
4. Results and Discussion
The suggested approach is validated in this section by solving classical benchmark problems. A total of 23 well-known functions make up the classical benchmark set, in addition to the CEC 2017 benchmark functions. In this classical set, F1 through F7 are uni-modal functions, F8–F13 are multi-modal functions, and the last ten functions, F14–F23, are fixed-dimensional functions [43]. Detailed information on the classical benchmark functions, with their dimensions and ranges, is given in Table 2, Table 3 and Table 4. The proposed novel algorithm seeks the minima of these 23 classical benchmark functions. The same experiments are performed with other well-known algorithms, such as PSO, GA, GWO, and I-GWO, for comparison with the LF-IGWO algorithm.
Table 2.
Unimodal benchmark function equations.
Table 3.
Multimodal benchmark function equations.
Table 4.
Fixed dimensional benchmark function equations.
For a fair comparison among all these techniques, the population size used is 30 and each algorithm is run 20 times. Table 5 provides the selected parameters. Table 6 compares the mean values obtained by these strategies on the benchmark functions; the best value for each function is highlighted in bold. The functions F1 to F7 are used to test exploitation capability: apart from F5 and F7, the new technique LF-IGWO performs better than the other algorithms on these. Functions F8 to F13 have many local minima, so a technique must have good exploration capability: apart from F8, the new technique LF-IGWO performed better than the other techniques on these. Additionally, on functions F14 to F23, all the algorithms gave almost the same result, except on F14 and F15. Therefore, it can be said that the overall performance of the new technique is better than that of the others. The benchmark function models and results are depicted in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 24 and Figure 25.
Table 5.
Selection of parameters for different algorithms [35,41].
Table 6.
Comparison of mean values obtained from classical benchmark functions.
Figure 3.
(a) Search space for unimodal function F01 (b) Comparison of convergence curve for F01.
Figure 4.
(a) Search space for unimodal function F02 (b) Comparison of convergence curve for F02.
Figure 5.
(a) Search space for unimodal function F03 (b) Comparison of convergence curve for F03.
Figure 6.
(a) Search space for unimodal function F04 (b) Comparison of convergence curve for F04.
Figure 7.
(a) Search space for unimodal function F05 (b) Comparison of convergence curve for F05.
Figure 8.
(a) Search space for unimodal function F06 (b) Comparison of convergence curve for F06.
Figure 9.
(a) Search space for unimodal function F07 (b) Comparison of convergence curve for F07.
Figure 10.
(a) Search space for multimodal function F08 (b) Comparison of convergence curve for F08.
Figure 11.
(a) Search space for multimodal function F09 (b) Comparison of convergence curve for F09.
Figure 12.
(a) Search space for multimodal function F10 (b) Comparison of convergence curve for F10.
Figure 13.
(a) Search space for multimodal function F11 (b) Comparison of convergence curve for F11.
Figure 14.
(a) Search space for multimodal function F12 (b) Comparison of convergence curve for F12.
Figure 15.
(a) Search space for multimodal function F13 (b) Comparison of convergence curve for F13.
Figure 16.
(a) Search space for fixed dimensional function F14 (b) Comparison of convergence curve for F14.
Figure 17.
(a) Search space for fixed dimensional function F15 (b) Comparison of convergence curve for F15.
Figure 18.
(a) Search space for fixed dimensional function F16 (b) Comparison of convergence curve for F16.
Figure 19.
(a) Search space for fixed dimensional function F17 (b) Comparison of convergence curve for F17.
Figure 20.
(a) Search space for fixed dimensional function F18 (b) Comparison of convergence curve for F18.
Figure 21.
(a) Search space for fixed dimensional function F19 (b) Comparison of convergence curve for F19.
Figure 22.
(a) Search space for fixed dimensional function F20 (b) Comparison of convergence curve for F20.
Figure 23.
(a) Search space for fixed dimensional function F21 (b) Comparison of convergence curve for F21.
Figure 24.
(a) Search space for fixed dimensional function F22 (b) Comparison of convergence curve for F22.
Figure 25.
(a) Search space for fixed dimensional function F23 (b) Comparison of convergence curve for F23.
The results of functions F1 to F7 are depicted in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, in which the newly proposed technique is represented in green. From the convergence curves shown in these figures, one can infer that LF-IGWO has a good convergence rate and gives better results than IGWO and GWO. Therefore, the results indicate that LF-IGWO has good exploitation capabilities.
With regard to the multimodal functions F8–F13, depicted in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15, for function F9, GWO performs better than IGWO and the newly proposed technique LF-IGWO. However, the new technique LF-IGWO has a better convergence rate than IGWO, as shown in Figure 12. For functions F10 and F12, the convergence rates of IGWO and LF-IGWO are nearly the same, but LF-IGWO gives a somewhat better result at the end of the 1000 iterations. From this, it can be concluded that the newly proposed technique LF-IGWO has better exploration capabilities.
For the fixed-dimensional functions F14–F23, depicted in Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23, Figure 24 and Figure 25, the convergence rate is almost the same for the newly proposed technique LF-IGWO and IGWO. Moreover, the final solutions of GWO, IGWO, and LF-IGWO are the same at the end of the 1000 iterations.
Table 7 presents a comparison of popular algorithms with the newly proposed algorithm, Levy flight-based improved grey wolf optimization, on the CEC 2017 benchmark functions. From the table, it can be clearly seen that LF-IGWO performs better than the other popular algorithms when each is run with a population size of 30, 51 independent runs, 1000 iterations, and a dimension of 30.
Table 7.
Comparison of mean values obtained from CEC 2017 benchmark functions.
4.1. 31-Level Cascaded H-Bridge MLI
A multilevel cascaded H-bridge (CHB) inverter is a type of power electronic device that can generate high-voltage, high-quality AC waveforms by synthesizing the output of multiple low-voltage DC sources [44]. It is a popular choice for high-power applications including motor drives and renewable energy sources.
The basic building block of a CHB multilevel inverter is the H-bridge module. An H-bridge is a configuration of four switches (typically MOSFETs or IGBTs) that can control the polarity and magnitude of the output voltage across a load [45]. By cascading multiple H-bridge modules, it is possible to generate a staircase-like voltage waveform that approximates a sinusoidal waveform.
A CHB multilevel inverter can produce several output voltage levels by switching the H-bridge modules on and off in a specific pattern. For instance, a two-level CHB inverter can generate an output voltage that switches between two voltage levels, while a three-level CHB inverter can generate an output voltage that switches between three voltage levels.
A general equation for the output voltage waveform of a CHB multilevel inverter can be written as:
vo(t) = Σn = 1, 3, 5, … [4/(nπ)] Σk = 1…N Vk cos(n αk) sin(n ω t),
where:
- vo(t) is the output voltage waveform;
- ω is the resulting waveform’s fundamental angular frequency;
- N is the inverter’s H-bridge module count;
- Vk is the DC voltage input of the kth H-bridge module, switched at firing angle αk.
In practice, the output waveform becomes closer to a sine wave as the number of voltage levels increases.
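The staircase synthesis can be illustrated numerically. The sketch below builds a quarter-wave-symmetric staircase from a set of firing angles; the angles shown are arbitrary examples for a 3-bridge (7-level) inverter, not the optimized values from Table 8.

```python
import math

def staircase_sample(theta, angles, v_dc=1.0):
    """Output voltage of a CHB inverter at phase angle theta (radians).

    Each H-bridge contributes +v_dc once theta passes its firing angle;
    quarter-wave symmetry produces the falling edge and the negative
    half-cycle.
    """
    theta = theta % (2 * math.pi)
    if theta < math.pi:
        phase, sign = theta, 1.0
    else:
        phase, sign = theta - math.pi, -1.0
    if phase > math.pi / 2:            # falling edge of the half-cycle
        phase = math.pi - phase
    level = sum(1 for a in angles if phase >= a)
    return sign * level * v_dc

# Example: 3 bridges -> 7-level output (-3..+3 times v_dc).
angles = [0.15, 0.55, 1.05]            # illustrative firing angles (rad)
samples = [staircase_sample(2 * math.pi * k / 400, angles)
           for k in range(400)]
```

Adding more bridges (more firing angles) adds more steps, and the sampled staircase approaches a sine wave.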
The output in the inverter is obtained by combining the outputs of the individual H-bridge circuits. This allows for a higher number of voltage levels to be generated, resulting in a more efficient and higher-quality output waveform. Figure 26 depicts the H-bridge circuit [36].
Figure 26.
H-Bridge circuit model.
When switches S1 and S4 are closed (and S2 and S3 are open), a positive voltage is applied across the load. By turning OFF switches S1 and S4 and turning ON switches S2 and S3, the voltage is reversed, permitting a negative voltage [46].
Switches S1 and S2 must never be turned ON at the same time, however, as this would short-circuit the input voltage supply. The same caution applies to switches S3 and S4. The technical term for this situation is “shoot-through.”
As can be seen from the waveform in Figure 27, the CHB inverter can produce a waveform that closely approximates a sinusoidal waveform with low harmonic distortion. This makes it a suitable choice for high-power applications that require high-quality AC power.
Figure 27.
Cascaded H-bridge waveform.
4.2. Harmonics
In electrical engineering, harmonics refer to the sinusoidal components of an alternating current (AC) signal that have frequencies that are integer multiples of the fundamental frequency [47]. In most electrical power systems, the fundamental frequency, which is the lowest frequency included in an AC signal, is commonly 50 or 60 Hz. The presence of harmonics in an AC signal is caused by non-linear electrical loads, such as electronic devices and power electronic equipment, which generate distorted waveforms when they receive an AC signal [48]. Harmonics can have several negative impacts on electrical systems, such as:
- Increased current levels: Harmonics can cause an increase in the RMS current levels, which can result in the overloading of conductors, transformers, and other electrical equipment.
- Decreased power factor: The presence of harmonics can reduce the power factor, a measure of an electrical system’s efficiency. This can result in increased energy costs and decreased system efficiency.
- Increased heating: The additional current levels caused by harmonics can result in increased heating in conductors and transformers, which can reduce their life expectancy and cause safety issues.
- Interference with communication systems: Harmonics can interfere with communication systems and cause problems such as data corruption and interference with radio and television signals.
Several measures can be employed to mitigate these issues, such as passive harmonic filters and active harmonic filters. These filters work by attenuating or filtering out the harmonic components of the AC signal, resulting in a more sinusoidal waveform with fewer harmonics. Other measures, such as the use of balanced loads and power electronic devices with low harmonic distortion, can also help to reduce the level of harmonics in electrical systems.
4.3. Total Harmonics Distortion
A measure of the amount of distortion present in an alternating current (AC) waveform is total harmonic distortion (THD) [49]. It is defined as the ratio of the combined magnitude of all the waveform’s harmonic components to that of the fundamental frequency component. The THD is usually expressed as a percentage and is used to characterize the quality of an AC signal.
Harmonic distortion in an AC waveform can be caused by non-linear loads, such as electronic devices and power electronic equipment, which generate distorted waveforms when they receive an AC signal [50]. These harmonics can have several negative impacts on electrical systems, such as increased current levels, decreased power factor, increased heating in conductors and transformers, and interference with communication systems.
The THD is an important parameter for characterizing the performance of electrical power systems, as well as electronic devices and power electronic equipment. For example, in power systems, a low THD indicates a high-quality waveform and a more efficient system, while a high THD indicates a distorted waveform and a less efficient system. Similarly, in electronic devices, a low THD indicates a high-quality signal and a more reliable device, while a high THD indicates a distorted signal and a less reliable device.
Several measures can be employed to reduce THD in electrical systems, such as active and passive harmonic filters, which attenuate or filter out the harmonic components of the AC signal and yield a more nearly sinusoidal waveform, as well as balanced loads and power electronic devices with low harmonic distortion. To eliminate the passive filter and thereby simplify the circuit, optimization techniques have been used that achieve a low THD value with a less complex circuit. Total harmonic distortion is defined as
$$\mathrm{THD} = \frac{\sqrt{\sum_{n=2}^{\infty} V_n^2}}{V_1} \times 100\%,$$
where $V_1$ is the amplitude of the fundamental frequency component (the first harmonic), $V_n$ is the amplitude of the $n$-th harmonic, and the harmonic frequencies are integer multiples of the fundamental frequency. Table 8 shows a comparison of the firing angles obtained with the different optimization techniques. Additionally, Figure 28, Figure 29, Figure 30 and Figure 31 show the waveforms obtained from the different optimization techniques and their THD spectra. Table 9 shows a comparison of the THD values.
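To make the relationship between firing angles and THD concrete, the following sketch evaluates the Fourier series of an idealized staircase waveform with equal DC sources. The half-height firing angles used here are only an illustrative initialization for a 31-level (15-angle) inverter, not the optimized angles reported in Table 8.

```python
import math

def harmonic_amplitude(n, angles, vdc=1.0):
    # Peak amplitude of the n-th odd harmonic of a quarter-wave-symmetric
    # staircase waveform with equal DC sources.
    return (4 * vdc / (n * math.pi)) * sum(math.cos(n * a) for a in angles)

def thd_percent(angles, max_harmonic=49):
    # THD (%) = 100 * sqrt(sum of squared harmonic amplitudes) / fundamental.
    # Even harmonics vanish by symmetry, so only odd orders are summed.
    v1 = harmonic_amplitude(1, angles)
    harmonics = sum(harmonic_amplitude(n, angles) ** 2
                    for n in range(3, max_harmonic + 1, 2))
    return 100 * math.sqrt(harmonics) / abs(v1)

# Illustrative half-height firing angles (radians) for 15 switching steps.
angles = [math.asin((2 * i - 1) / 30) for i in range(1, 16)]
print(f"THD up to the 49th harmonic: {thd_percent(angles):.2f}%")
```

An optimizer such as LF-IGWO would treat `thd_percent` (plus any fundamental-magnitude constraint) as the fitness function and search over the 15 angles.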
Table 8.
Comparison of firing angles of LF-IGWO with other techniques.
Figure 28.
(a) Output waveform obtained with LF-IGWO; (b) THD spectrum of LF-IGWO.
Figure 29.
(a) Output waveform obtained with IGWO; (b) THD spectrum of IGWO.
Figure 30.
(a) Output waveform obtained with GWO; (b) THD spectrum of GWO.
Figure 31.
(a) Output waveform obtained with PSO; (b) THD spectrum of PSO.
Table 9.
Comparison of THD values of LF-IGWO with other techniques.
5. Engineering Problems
5.1. Tension Compression Spring
The weight of a tension/compression spring must be minimized while satisfying several constraints, such as shear stress, surge frequency, and minimum deflection [35]. The decision variables are the diameter of the wire, the mean diameter of the coil, and the number of active coils [51]. Below is the mathematical formulation:
This problem has been solved by numerous metaheuristic algorithms. The proposed method also resolves it, and Table 10 shows how it performs in comparison to other algorithms.
The ideal weight according to Equation (13) is 0.01267, as determined by the LF-IGWO algorithm. This outcome is very close to the optimal values found by GWO and PSO. According to Table 10, the ideal weight obtained by the LF-IGWO method is less than that of other algorithms, such as GA, PSO, and I-GWO, and equal to that of GWO.
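For reference, the widely used formulation of this benchmark can be sketched as follows: the objective is the spring weight, and feasibility requires each constraint value to be at most zero. The design vector shown is an illustrative near-optimal point from the literature, not necessarily the exact solution reported in Table 10.

```python
def spring_weight(x):
    # x = [d, D, N]: wire diameter, mean coil diameter, number of active coils.
    d, D, N = x
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    # Standard inequality constraints g_i(x) <= 0 (feasible when all <= 0).
    d, D, N = x
    return [
        1 - (D ** 3 * N) / (71785 * d ** 4),                # minimum deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                          # shear stress
        1 - (140.45 * d) / (D ** 2 * N),                    # surge frequency
        (D + d) / 1.5 - 1,                                  # outer diameter limit
    ]

# Illustrative near-optimal design from the literature.
x = [0.05169, 0.35674, 11.29]
print(spring_weight(x), spring_constraints(x))
```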
Table 10.
Comparison of the data values of LF-IGWO with other techniques.
5.2. Pressure Vessel Design
In this problem, the optimization aims to minimize the overall cost, which includes the material cost, the welding cost, and the cost of forming the cylindrical vessel. The decision variables are the length of the cylindrical section excluding the head, the thickness of the shell, the thickness of the head, and the internal radius [52]. There are four inequality constraints in this problem, three of which are linear and one of which is nonlinear. The mathematical representation is given by the following equations:
Metaheuristic algorithms were employed to address this problem. Table 11 compares the performance of the LF-IGWO algorithm with that of other techniques. As Table 11 shows, the LF-IGWO method achieves a lower cost, calculated using Equation (19), than most of the other algorithms, and the obtained cost is reasonably close to that of GA. Although GWO appears to perform best on this problem, LF-IGWO provides a better outcome than GA, PSO, and IGWO.
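The standard form of this benchmark can be sketched as below; the coefficients are the commonly cited ones, and the design vector is an illustrative near-optimal point from the literature, not necessarily the values obtained in this work.

```python
import math

def vessel_cost(x):
    # x = [Ts, Th, R, L]: shell thickness, head thickness, inner radius,
    # and length of the cylindrical section (inches).
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def vessel_constraints(x):
    # Three linear and one nonlinear inequality constraint, g_i(x) <= 0.
    Ts, Th, R, L = x
    return [
        -Ts + 0.0193 * R,                                   # shell thickness
        -Th + 0.00954 * R,                                  # head thickness
        -math.pi * R ** 2 * L - (4 / 3) * math.pi * R ** 3
        + 1296000,                                          # minimum volume
        L - 240,                                            # maximum length
    ]

# Illustrative near-optimal design from the literature.
x = [0.8125, 0.4375, 42.0984, 176.6366]
print(vessel_cost(x))
```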
Table 11.
Comparison of cost values obtained from LF-IGWO with other techniques.
5.3. Welded Beam Design
The optimization approach is put forward to reduce the fabrication cost of the design. The decision variables are the width of the weld (h), the length of the attached part of the bar (l), the height of the bar (t), and the thickness of the bar (b) [53]. There are seven inequality constraints [54]. The given problem has the following mathematical formulation:
This design challenge was addressed with the LF-IGWO method, and the outcomes are displayed in Table 12, which shows how the LF-IGWO algorithm outperforms other competing algorithms in minimizing the cost. It can be concluded that the LF-IGWO method performs better than other algorithms on some design challenges and is capable of tackling constrained engineering design problems [55,56,57].
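The paper does not spell out its constraint-handling scheme; a common way to let a metaheuristic such as LF-IGWO minimize a constrained objective is a static penalty, sketched below with an assumed penalty weight `rho`. The wrapper turns any objective with constraints g_i(x) ≤ 0 into an unconstrained function that GWO-style position updates can minimize directly.

```python
def penalized(objective, constraints, rho=1e6):
    # Static penalty: add rho * sum(max(0, g_i)^2) for violated constraints,
    # so feasible points keep their true objective value.
    def f(x):
        violation = sum(max(0.0, g) ** 2 for g in constraints(x))
        return objective(x) + rho * violation
    return f

# Toy usage: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = penalized(lambda x: x[0] ** 2, lambda x: [1 - x[0]])
print(f([2.0]), f([0.5]))
```

Feasible points are unchanged by the wrapper, while infeasible ones are pushed back toward the feasible region in proportion to the squared violation.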
Table 12.
Comparison of cost values obtained from LF-IGWO with other techniques.
The statistical Wilcoxon rank-sum test was performed on the newly proposed technique. Table 13 presents the statistical data obtained by running the functions given in Table 2, Table 3 and Table 4, as well as the engineering problems, for the different algorithms. The test shows that the differences on the engineering problems are not statistically significant: the newly proposed Levy flight-based improved grey wolf optimization yields only a minor improvement over improved grey wolf optimization, so LF-IGWO is not significantly better than I-GWO there. On the benchmark functions, however, it performs better than GWO, PSO, and GA, while performing moderately compared with IGWO and SSR.
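For readers who wish to reproduce this kind of comparison, a minimal two-sided Wilcoxon rank-sum test (normal approximation with midranks for ties) can be sketched as follows; the paper presumably used standard statistical software, and the approximation is adequate for the sample sizes typical of repeated-run experiments.

```python
import math

def rank_sum_test(a, b):
    # Two-sided Wilcoxon rank-sum test using the normal approximation.
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1          # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[combined[k][1]] = midrank
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                    # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2            # mean of W under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return z, p

z, p = rank_sum_test([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])
print(z, p)
```

Here the two argument lists would hold, e.g., the best fitness values from repeated runs of two algorithms on the same benchmark function.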
6. Conclusions
A unique, metaphor-free metaheuristic algorithm is proposed in this study. Iteratively reducing the search space yields the best result. The method is used to minimize 23 standard benchmark functions, and the outcomes demonstrate its strong exploration and exploitation capabilities. Based on the test-function results, the LF-IGWO algorithm performs better than the other algorithms, particularly on multimodal benchmark functions. This highlights the capacity of LF-IGWO to avoid local optima, which increases its competitiveness and supports the claims made in this research. One drawback is that the algorithm becomes somewhat complex and harder to understand: since the improved grey wolf optimization (IGWO) method already applies the DLH strategy, adding a second strategy, Levy flight, makes the method more difficult for new researchers to follow.
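For readers unfamiliar with the Levy flight component mentioned above, the step generator most commonly used in Levy flight-based metaheuristics, Mantegna's algorithm, can be sketched as follows; β = 1.5 is a typical stability index, and the exact way LF-IGWO scales and applies these steps to the wolf positions may differ.

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm: step = u / |v|^(1/beta), with u ~ N(0, sigma^2)
    # and v ~ N(0, 1), approximates a Levy-stable step length.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

random.seed(1)
steps = [levy_step() for _ in range(2000)]
print(max(abs(s) for s in steps))
```

Occasional long jumps from the heavy tail of this distribution help search agents escape local optima, which is the motivation for combining Levy flight with IGWO.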
Furthermore, three constrained engineering design problems are solved using the LF-IGWO algorithm. The outcomes demonstrate that LF-IGWO is also capable of solving constrained design problems; according to the performance comparison data, it outperforms the other algorithms on two of the three design challenges considered. Overall, it can be said that this method, despite having straightforward logic, provides very competitive outcomes when compared with well-known optimization techniques. Additionally, the suggested technique is applied to a 31-level inverter for THD minimization to demonstrate its practicality. The results show that the proposed algorithm yields firing angles that give a total harmonic distortion (THD) below 5%, in line with the IEEE 519 standard.
Author Contributions
Conceptualization, B.B.; Data curation, H.S. and K.A.; Funding acquisition, G.P.J. and B.S.; Investigation, G.P.J. and B.S.; Methodology, B.B. and H.S.; Project administration, B.S.; Resources, G.P.J. and B.S.; Software, B.B., H.S. and K.A.; Supervision, G.P.J. and B.S.; Validation, H.S., K.A. and B.S.; Visualization, H.S. and K.A.; Writing—original draft, K.A.; Writing—review and editing, B.B. and G.P.J. All authors have read and agreed to the published version of the manuscript.
Funding
The present research was supported by the Research Grant of Kwangwoon University in 2023.
Data Availability Statement
Data sharing is not applicable to this article as no datasets were generated during the current study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
- Kaur, K.; Kumar, Y. Swarm Intelligence and its applications towards Various Computing: A Systematic Review. In Proceedings of the 2020 International Conference on Intelligent Engineering and Management (ICIEM), London, UK, 17–19 June 2020; pp. 57–62. [Google Scholar] [CrossRef]
- Trianni, V.; Tuci, E.; Passino, K.M.; Marshall, J.A.R. Swarm Cognition: An interdisciplinary approach to the study of self-organising biological collectives. Swarm Intell. 2011, 5, 3–18. [Google Scholar] [CrossRef]
- Sneha; Wajahat. Swarm Intelligence. Int. J. Sci. Eng. Res. 2017, 8, 10. [Google Scholar]
- Hazem, A.; Glasgow, J. Swarm Intelligence: Concepts, Models and Applications; School of Computing, Queen’s University: Kingston, ON, Canada, 2012. [Google Scholar] [CrossRef]
- Zahra, B.; Siti Mariyam, S. A review of population-based meta-heuristic algorithm. Int. J. Adv. Soft Comput. Its Appl. 2013, 5, 1–35. [Google Scholar]
- Colin, R. Genetic Algorithms. In Handbook of Metaheuristics; Springer: Boston, MA, USA, 2010; pp. 109–139. [Google Scholar] [CrossRef]
- Seyedali, M. Particle Swarm Optimisation. In Evolutionary Algorithms and Neural Networks; Springer: Cham, Switzerland, 2019; pp. 15–31. [Google Scholar] [CrossRef]
- Yi, G.; Jin, M.; Zhou, Z. Research on a Novel Ant Colony Optimization Algorithm. In Advances in Neural Networks—Lecture Notes in Computer Science; Zhang, L., Lu, B.L., Kwok, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6063. [Google Scholar] [CrossRef]
- Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
- Ribeiro, M.d.R.; Aguiar, M.S.d. Cultural Algorithms: A Study of Concepts and Approaches. In Proceedings of the 2011 Workshop-School on Theoretical Computer Science, Pelotas, Brazil, 24–26 August 2011; pp. 145–148. [Google Scholar] [CrossRef]
- Rutenbar, R.A. Simulated annealing algorithms: An overview. IEEE Circuits Devices Mag. 1989, 5, 19–26. [Google Scholar] [CrossRef]
- Prajapati, V.K.; Jain, M.; Chouhan, L. Tabu Search Algorithm (TSA): A Comprehensive Survey. In Proceedings of the 2020 3rd International Conference on Emerging Technologies in Computer Engineering: Machine Learning and Internet of Things (ICETCE), Jaipur, India, 7–8 February 2020; pp. 1–8. [Google Scholar] [CrossRef]
- Zhang, J.; Zhang, P. A study on harmony search algorithm and applications. In Proceedings of the 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 736–739. [Google Scholar] [CrossRef]
- Kytöjoki, J.; Nuortio, T.; Bräysy, O.; Gendreau, M. An efficient variable neighborhood search heuristic for very large scale vehicle routing problems. Comput. Oper. Res. 2007, 34, 2743–2757. [Google Scholar] [CrossRef]
- Mohammad, H.; Mohammad, H. Simplex method to Optimize Mathematical manipulation. Int. J. Recent Technol. Eng. (IJRTE) 2019, 7, 5. [Google Scholar]
- Alonso, G.; del Valle, E.; Ramirez, J.R. 5—Optimization methods. In Woodhead Publishing Series in Energy, Desalination in Nuclear Power Plants; Woodhead Publishing: Sawston, UK, 2020; pp. 67–76. ISBN 9780128200216. [Google Scholar] [CrossRef]
- Lian, P.; Wang, C.; Xiang, B.; Shi, Y.; Xue, S. Gradient-based optimization method for producing a contoured beam with single-fed reflector antenna. J. Syst. Eng. Electron. 2019, 30, 22–29. [Google Scholar] [CrossRef]
- Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
- Shukla, P.; Singh, S.K.; Khamparia, A.; Goyal, A. Nature-inspired optimization techniques. In Nature-Inspired Optimization Algorithms; Academic Press: Cambridge, MA, USA, 2021. [Google Scholar] [CrossRef]
- Ingber, L. Simulated annealing: Practice versus theory. Math. Comput. Model. 1993, 18, 29–57. [Google Scholar] [CrossRef]
- Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
- Anita; Yadav, A.; Kumar, N. Artificial electric field algorithm for engineering optimization problems. Expert Syst. Appl. 2020, 149, 113308. [Google Scholar] [CrossRef]
- Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
- Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
- Bansal, J.C.; Sharma, H.; Jadon, S.S. Artificial bee colony algorithm: A survey. Int. J. Adv. Intell. Paradig. 2013, 5, 123–159. [Google Scholar] [CrossRef]
- Neshat, M.; Sepidnam, G.; Sargolzaei, M.; Toosi, A.N. Artificial fish swarm algorithm: A survey of the state-of-the-art, hybridization, combinatorial and indicative applications. Artif. Intell. Rev. 2014, 42, 965–997. [Google Scholar] [CrossRef]
- Guo, C.; Tang, H.; Niu, B.; Lee, C.B.P. A survey of bacterial foraging optimization. Neurocomputing 2021, 452, 728–746. [Google Scholar] [CrossRef]
- Zou, F.; Chen, D.; Xu, Q. A survey of teaching–learning-based optimization. Neurocomputing 2019, 335, 366–383. [Google Scholar] [CrossRef]
- Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar] [CrossRef]
- Zhang, T.; Yang, C.; Zhao, X. Using Improved Brainstorm Optimization Algorithm for Hardware/Software Partitioning. Appl. Sci. 2019, 9, 866. [Google Scholar] [CrossRef]
- Abdelhamid, M.; Kamel, S.; Mohamed, M.A.; Aljohani, M.; Rahmann, C.; Mosaad, M.I. Political Optimization Algorithm for Optimal Coordination of Directional Overcurrent Relays. In Proceedings of the 2020 IEEE Electric Power and Energy Conference (EPEC), Edmonton, AB, Canada, 9–10 November 2020; pp. 1–7. [Google Scholar] [CrossRef]
- Huynh, N.T.; Nguyen, T.V.T.; Nguyen, Q.M. Optimum Design for the Magnification Mechanisms Employing Fuzzy Logic–ANFIS. Comput. Mater. Continua. 2022, 73, 5961–5983. [Google Scholar] [CrossRef]
- Kler, R.; Gangurde, R.; Elmirzaev, S.; Hossain, S.; Vo, N.V.T.; Nguyen, T.V.T.; Kumar, P.N. Optimization of Meat and Poultry Farm Inventory Stock Using Data Analytics for Green Supply Chain Network. Discret. Dyn. Nat. Soc. 2022, 2022, 8970549. [Google Scholar] [CrossRef]
- Huynh, T.T.; Nguyen, T.V.T.; Nguyen, Q.M.; Nguyen, T.K. Minimizing Warpage for Macro-Size Fused Deposition Modeling Parts. Comput. Mater. Contin. 2021, 68, 2913–2923. [Google Scholar] [CrossRef]
- Al-Khazraji, H. Optimal design of a proportional-derivative state feedback controller based on meta-heuristic optimization for a quarter car suspension system. Math. Model. Eng. Probl. 2022, 9, 437–442. [Google Scholar] [CrossRef]
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
- Faris, H.; Aljarah, I.; Al-Betar, M.A.; Mirjalili, S. Grey wolf optimizer: A review of recent variants and applications. Neural Comput. Appl. 2018, 30, 413–435. [Google Scholar] [CrossRef]
- Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
- Jia, H.; Sun, K.; Zhang, W.; Leng, X. An enhanced chimp optimization algorithm for continuous optimization domains. Complex Intell. Syst. 2022, 8, 65–82. [Google Scholar] [CrossRef]
- Gai, W.; Qu, C.; Liu, J.; Zhang, J. An improved grey wolf algorithm for global optimization. In Proceedings of the 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 2494–2498. [Google Scholar] [CrossRef]
- Banaie-Dezfouli, M.; Nadimi-Shahraki, M.H.; Beheshti, Z. R-GWO: Representative-based grey wolf optimizer for solving engineering problems. Appl. Soft Comput. 2021, 106, 107328. [Google Scholar] [CrossRef]
- Kamaruzaman, A.F.; Zain, A.M.; Yusuf, S.M.; Udin, A. Levy Flight Algorithm for Optimization Problems—A Literature Review. Appl. Mech. Mater. 2013, 421, 496–501. [Google Scholar] [CrossRef]
- Li, J.; An, Q.; Lei, H.; Deng, Q.; Wang, G.-G. Survey of Lévy Flight-Based Metaheuristics for Optimization. Mathematics 2022, 10, 2785. [Google Scholar] [CrossRef]
- Mahesh, A.; Sushnigdha, G. A novel search space reduction optimization algorithm. Soft Comput. 2021, 25, 9455–9482. [Google Scholar] [CrossRef]
- Prasad, K.N.V.; Kumar, G.R.; Kiran, T.V.; Narayana, G.S. Comparison of different topologies of cascaded H-Bridge multilevel inverter. In Proceedings of the 2013 International Conference on Computer Communication and Informatics, Coimbatore, India, 4–6 January 2013; pp. 1–6. [Google Scholar] [CrossRef]
- Gaikwad, A.; Arbune, P.A. Study of cascaded H-Bridge multilevel inverter. In Proceedings of the 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT), Pune, India, 9–10 September 2016; pp. 179–182. [Google Scholar] [CrossRef]
- Krishna, R.A.; Suresh, L.P. A brief review on multi level inverter topologies. In Proceedings of the 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India, 18–19 March 2016; pp. 1–6. [Google Scholar] [CrossRef]
- Hassan, N.; Mohagheghian, I. A Particle Swarm Optimization algorithm for mixed variable nonlinear problems. IJE Trans. A Basics 2011, 24, 65–78. [Google Scholar]
- Firdoush, S.; Kriti, S.; Raj, A.; Singh, S.K. Reduction of Harmonics in Output Voltage of Inverter. Int. J. Eng. Res. Technol. (IJERT) 2016, 4, 1–6. [Google Scholar]
- Jacob, T.; Suresh, L.P. A review paper on the elimination of harmonics in multilevel inverters using bioinspired algorithms. In Proceedings of the 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India, 18–19 March 2016; pp. 1–8. [Google Scholar] [CrossRef]
- Mohd Radzi, M.Z.; Azizan, M.M.; Ismail, B. Observatory case study on total harmonic distortion in current at laboratory and office building. Phys. Conf. Ser. 2020, 1432, 012008. [Google Scholar] [CrossRef]
- BarathKumar, T.; Vijayadevi, A.; Brinda Dev, A.; Sivakami, P.S. Harmonic Reduction in Multilevel Inverter Using Particle Swarm Optimization. IJISET—Int. J. Innov. Sci. Eng. Technol. 2017, 4, 99–104. [Google Scholar]
- Nematollahi, A.F.; Rahiminejad, A.; Vahidi, B. A novel physical based meta-heuristic optimization method known as Lightning Attachment Procedure Optimization. Appl. Soft Comput. 2017, 59, 596–621. [Google Scholar] [CrossRef]
- Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
- Agushaka, J.O.; Ezugwu, A.E. Advanced arithmetic optimization algorithm for solving mechanical engineering design problems. PLoS ONE 2021, 16, e0255703. [Google Scholar] [CrossRef]
- Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-Qaness, M.A.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 4081–4110. [Google Scholar] [CrossRef]
- Boora, K.; Kumar, A.; Malhotra, I.; Kumar, V. Harmonic reduction using Particle Swarm Optimization based SHE Modulation Technique in Asymmetrical DC-AC Converter. Int. J. Electr. Comput. Eng. Syst. 2022, 13, 867–875. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).