Article

A Multimodal Improved Particle Swarm Optimization for High Dimensional Problems in Electromagnetic Devices

1
College of Electrical Engineering, Zhejiang University, Hangzhou 310027, China
2
Department of Electronics, Islamia College University, Peshawar 25000, Pakistan
*
Author to whom correspondence should be addressed.
Energies 2021, 14(24), 8575; https://doi.org/10.3390/en14248575
Submission received: 25 November 2021 / Revised: 8 December 2021 / Accepted: 9 December 2021 / Published: 20 December 2021

Abstract:
Particle Swarm Optimization (PSO) is a swarm-intelligence metaheuristic inspired by the natural behavior of bird flocking and fish schooling. Compared with other traditional methods, the PSO model is widely recognized as simple and easy to implement. However, traditional PSO suffers from two primary issues: premature convergence and loss of diversity. These problems arise in the later stages of the evolution process when dealing with high-dimensional, complex electromagnetic inverse problems. To address these issues, we propose an Improved PSO (IPSO) that employs a dynamic control parameter as well as an adaptive mutation mechanism. The novel adaptive mutation operator prevents diversity loss during the optimization process, while the dynamic factor maintains the balance between exploration and exploitation in the search domain. The experimental outcomes, obtained by solving complicated and extremely high-dimensional optimization problems, were also validated on a superconducting magnetic energy storage (SMES) device. According to the numerical and experimental analysis, the IPSO delivers a better optimal solution than the other methods considered, particularly in the early computational stages of the evolution.

1. Introduction

Recently, inverse problems, or real-world design problems, have become an active research topic in academia and the engineering sciences; the optimal solution to such problems is difficult to find due to the presence of multimodal cost functions. Because traditional optimization methods are incapable of resolving inverse or real-world problems, a wealth of studies has contributed to the development of nature-inspired algorithmic models that improve computational capabilities and increase the diversity of the search space in complex engineering problems.
In modern optimization, when one wishes to solve the engineering optimization problems arising from electromagnetics, close attention must be paid to the choice of optimization technique. From previous work, we know that such problems have many local minima and one optimal solution, and a stochastic algorithm will try to reach the global optimum region. One limitation of these methods is that they have a slow rate of convergence or require additional computational modifications. To relieve unnecessary computational effort and develop a robust method for the case study, such techniques play an imperative role in making algorithms more efficient while striking a decent balance between clarity, reliability, and computational performance.
A series of metaheuristic algorithms exists for finding the global best solution of inverse problems, but there is still no single evolutionary method that solves most multimodal optimization problems. Thus, scientists and researchers have made many efforts to optimize the general structure of such algorithms to resolve real-world engineering optimization problems. In this regard, various algorithms have been developed, as reported in the following paragraph.
In the field of engineering, a variety of optimization algorithms are used, including ant colony optimization, differential evolution, glowworm swarm optimization, artificial bee colony, the genetic algorithm, the cuckoo search algorithm, and particle swarm optimization. Among all these methods, PSO is one of the most recent and simplest algorithms [1]. In the search process of PSO, each candidate shares information with the other candidates to expand the explored area of the search space [2]. The PSO algorithm iteratively optimizes a problem, starting with a set or population of candidate solutions, referred to in this context as a swarm of particles, in which each particle knows both the global best position within the swarm (and its resulting objective value) and its personal best position (and its fitness cost) discovered so far during the search [3]. The particles travel randomly in the search space in an iterative process until the entire swarm converges to the global minimum.
The PSO comprises three parameters: one control parameter and two learning parameters. Each parameter plays a significant role in the search process. The cognitive constant $c_1$ and the social constant $c_2$ weight the experience of the personal best $pbest$ and the global best $gbest$. The inertia weight balances exploration and exploitation in the search domain [4].
The fundamental equations for updating position and velocity in a PSO are:
$$V_i^{k+1} = w V_i^k + c_1 r_1 \,(pbest_i^k - X_i^k) + c_2 r_2 \,(gbest^k - X_i^k)$$
$$X_i^{k+1} = X_i^k + V_i^{k+1}$$
where $i$ denotes the $i$th particle, $k$ is the generation number, $V_i^k$ is the $i$th particle's velocity, and $X_i^k$ is its position. For the learning parameters, the cognitive constant is represented by $c_1$ and the social constant by $c_2$; $c_1$ attempts to pull the particle toward $pbest$ while $c_2$ pushes the particle toward $gbest$, and $r_1$ and $r_2$ are random values ranging from 0 to 1.
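As an illustrative sketch (not the authors' implementation), the standard velocity and position update above can be written for a whole swarm as follows; the parameter values are generic defaults, not the paper's tuned settings:

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO velocity/position update for the whole swarm.

    X, V, pbest: arrays of shape (n_particles, dim); gbest: shape (dim,).
    r1 and r2 are drawn uniformly in [0, 1) per particle and dimension.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    # Velocity update: inertia term + cognitive pull + social pull.
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    # Position update uses the new velocity.
    X = X + V
    return X, V
```

When a particle already sits on both its personal best and the global best, the cognitive and social terms vanish and only the inertia term $wV$ remains, which is why a well-chosen $w$ is critical to the exploration/exploitation balance discussed above.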
Many researchers and scientists have developed various formulations and strategies for the three basic parameters, as explained and described in [5]. When solving a high-dimensional optimization problem, the basic PSO converges prematurely because the parameters are inappropriately chosen and the mutation operators are incapable of optimizing the problem. Researchers have recently modified the traditional PSO by adding mutation operators, hybridizing it with other algorithms, changing the topological structure, and introducing new inertia weight approaches for various problems, producing better results.
To control premature convergence, many researchers have used different mutation operators to make the algorithm more robust and to improve the exploration and exploitation capabilities of the particles. However, most of the strategies are problem-oriented; for example, the Student's t mutation is used in local search, but it may fail if the distance between the current search point and the optimal position is too wide [6]. The literature illustrates that the performance of a PSO is related to three basic parameters, i.e., the inertia weight $w$, the cognitive constant $c_1$, and the social constant $c_2$. However, in the basic PSO, the values of $w$, $c_1$, and $c_2$ are not designed to keep a decent balance between local and global search. Consequently, the values of the parameters must be correctly adjusted. A new concept known as smart particle swarm optimization (SPSO) is applied in [7] to address the aforementioned problems. The smart particle is based on the convergence factor (CF) technique, which combines a memory of particle positions, a second stage for comparison, and finally a leader declaration to find the best optimal solution. Furthermore, some researchers have worked on energy system management and design algorithms for the purpose of developing smart artificial intelligence [8,9,10,11,12,13].
In this paper, a new approach is proposed that focuses on a dynamic inertia weight with novel mathematical equations and a mutation mechanism. The mutation process is applied to the personal best and global best particles through a uniquely designed roulette wheel selection method, overcoming the premature convergence problem by establishing proper stability between the exploration and exploitation searches.
The remainder of this paper is organized as follows: the related work is reviewed in Section 2; the novel IPSO is described in Section 3; the numerical results analysis is given in Section 4; a discussion is presented in Section 5; the application of the work is reported in Section 6; and the conclusion is given in Section 7.

2. The Related Work

The previous literature is mainly grouped into the following four categories.

2.1. Proper Adjustment of Parameters

Researchers have modified the basic three PSO parameters to achieve a decent balance between exploration and exploitation search. Eberhart and Shi first used an inertia weight in the PSO algorithm to control the searching capabilities of the particles [4].
The velocity equation is modified after the incorporation of the inertia weight w :
$$V_i^{k+1} = w V_i^k + c_1 r_1 \,(pbest_i^k - X_i^k) + c_2 r_2 \,(gbest_i^k - X_i^k)$$
To control the diversity of the population and improve the performance of PSO, the authors of [14] presented a tactic in which the inertia weight is determined based on Euclidean distance. In [15], an updated version of PSO that seeks to overcome the drawbacks of traditional PSO for photovoltaic (PV) parameter estimation was reported. In that work, two ways of controlling the inertia weight and the acceleration coefficients are designed to improve the performance of PSO: to ensure an adequate balance between local and global search, a sine chaotic inertia weight mechanism is first used; then, in search of an optimal solution, a tangent chaotic technique steers the acceleration coefficients. In [16], an improved multi-strategy particle swarm optimization (IMPSO) approach is described. It optimizes the structure and parameters for better mapping of the highly nonlinear characteristics of railway traction braking, employing multi-strategy evolution methods with a nonlinearly decreasing inertia weight to enhance the global optimizing performance of particle swarms. In [17], an adaptive inertia weight factor (AIWF) is added to the PSO velocity update equation. The main feature is that, unlike in a traditional PSO, where the inertia weight is held constant during optimization, the weights are adjusted adaptively based on each particle's success rate in reaching the optimum solution.

2.2. Mutation Methods

Many scholars have been working to update the traditional PSO by introducing mutation operators that preserve the diversity of the population and solve the problem of premature convergence. Some of the updated mutation mechanisms are reviewed in the following paragraph. An adaptive mutation strategy is described using the extended non-uniform mutation operator, in which adaptive mutation is used to help trapped particles and extract them from local optima [18]. A hybridized inertia weight modification tactic, based on new particle diversity and an adaptive mutation strategy, has been used to escape local convergence in complex networks [19]. In [20], different mutation operators are applied to the particles in order to increase their search capability and prevent stagnation. In [21], the author proposes a novel adaptive mutation-selection strategy to conduct a local search around the global optimal particle in the current population, which could help to improve the exploratory potential of the search domain and speed up the convergence of the candidates. In [22], the aim is to find the best solution by combining stochastic methods and PSO with an adaptive Cauchy mutation method to design a new algorithm. In [23], the author presents a multiple-scale self-adaptive cooperative mutation strategy-based particle swarm optimization algorithm (MSCPSO) to address the two fundamental drawbacks of PSO. To improve the capability of sufficiently searching the whole solution space, multi-scale Gaussian mutations with varying standard deviations are used in the suggested approach. Equations (4) and (5) give the mathematical representation:
$$G_d(t) = G_d(t-1) + \sum_{i=1}^{N} c_i^d(t)$$
in which
$$c_i^d(t) = \begin{cases} 0, & v_i^d(t) > T_d \\ 1, & v_i^d(t) \le T_d \end{cases}$$
If $G_d(t) > k_1$, then
$$G_d(t) = 0; \quad T_d = T_d \, k_2$$
In [24], the authors proposed a novel approach to the learning parameters. According to this idea, the two learning variables are dynamically modified to help the particles escape from a local optimum and converge to the global optimal solution. In [25], the application of Cauchy mutation and Gaussian mutation in a modified PSO is investigated. The major aim is to obtain greater convergence and the best results in the solutions of various real-world problems. In the domain of swarm intelligence, PSO serves as a basis; the proposed PSO uses an improved weight factor compared to the traditional PSO to achieve better convergence.

2.3. Topological Structure

When dealing with complex and high-dimensional optimization problems, researchers are currently working on changing the topological structure of particle swarm optimization to escape the issue of premature convergence. In [26], an example-based learning PSO (ELPSO) was reported to improve swarm diversity and convergence speed. According to the ELPSO idea, many global best particles are set as examples to participate in the velocity update equation, selected from the current best candidates instead of the single $gbest$ particle. The proposed work is shown mathematically as:
$$V_i^{k+1} = w V_i^k + c_1 \, rand_{1,i}^k \,(pbest_{r,i}^k - X_i^k) + c_2 \, rand_{2,i}^k \,(gbest_{r,i}^k - X_i^k)$$
In [27], instead of the $pbest$ and $gbest$ particles, only the "historical best info" is used in the conventional PSO velocity update equation to maintain population diversity. In [28], the exact particle locations and positions were described for the purpose of adjusting the balance between exploration and exploitation in the search process, expressed mathematically as:
$$X_i^{k+1} = (1 - \beta(t)) \, p_i^k + \beta(t) \, p_g^r + \alpha(t) \, R_i^k$$
In [29], an advanced particle swarm optimization (APSO) approach is presented. The algorithm uses an improved velocity update equation to ensure that the particles reach the best solution more rapidly than in traditional PSO. In [30], PSO with a combined local and global expanding neighborhood topology (PSOLGENT) is proposed, employing a novel expanding neighborhood topology. In [31], a local search strategy was developed in which every candidate tries to reach a better position during the search process and then tries to become the best in the whole swarm.

2.4. Hybridization

Researchers have also modified the PSO algorithm by combining it with other optimizers to enhance performance and expand the search ability of the particles during the evolution process. According to recent research, when PSO integrates evolutionary operators such as crossover, selection, and mutation, its efficiency improves and the algorithm is strengthened in terms of robustness, stability, and convergence rate. In [32], the genetic algorithm (GA) is used to amend the decision vectors using genetic operators, while the PSO is used to boost the vector positions. In [33], the PSO algorithm is paired with the sine cosine algorithm (SCA) and the Lévy flight distribution. In the SCA algorithm, the updating solution is based on the sine and cosine functions, while the Lévy flight is a random walk that uses the Lévy distribution to produce search steps and then uses large spikes to search the exploration space more effectively. A new hybrid algorithm has been proposed that combines the exploitation capabilities of the PSO with the exploration capabilities of the grey wolf optimizer (GWO); the idea is to combine the two methods by substituting, with low probability, a particle from the PSO for a partially better particle from the GWO [34]. A hybridization of PSO and differential evolution (DE) has been reported in [35]. The main idea of the proposal is to control diversity and keep a good balance between the local and global searches of the candidates.
Indeed, PSO has been widely used in large areas of research, such as face recognition systems [36], artificial neural networks [37], the Internet of Things [38], reliability engineering [39], power systems [40], indoor navigation [41], control systems [42], EEG signals [43], deep learning [44], wireless sensor networks [45], cloud computing [46], energy grids [47], image segmentation [48], and electromagnetics [49,50].

3. The Proposed Work

As explained previously, the traditional PSO algorithm faces challenges. The main ones are premature convergence and lack of diversity, due to an imbalance between the exploration and exploitation searches of the particles. The PSO technique demands significant testing to establish the right parameters required to address these difficulties. Therefore, we developed a novel strategy for the control parameter and present a modified mutation mechanism for the personal best and global best particles.
In traditional PSOs, the inertia weight value is constant during the search process, so the particles are unable to find the best solution. On the other hand, many researchers use maximum and minimum inertia weight values for the exploration and exploitation searches, respectively. As the value of the inertia weight plays an imperative role in a dynamic environment, to solve real-world problems we developed a novel strategy for the inertia weight that tries to maintain the best balance between the exploration and exploitation searches of the candidates in the PSO process. Based on the objective value of the global best particle, the inertia weight is frequently changed during the evolution process. In the search procedure, the proposed inertia weight strategy is important and works together with the proposed mutation mechanism; this process is stated mathematically as:
$$w_i = \frac{Gbest_{value}}{M_g}$$
where $w_i$ is the inertia weight for the $i$th particle, $Gbest_{value}$ is the best objective function value of the global best particle, and $M_g$ represents the maximum number of generations.
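A minimal sketch of this dynamic inertia weight, under our reading of the equation above (no clamping is applied, since none is described):

```python
def dynamic_inertia(gbest_value, max_generations):
    """Proposed dynamic inertia weight: the best objective value of the
    global best particle divided by the maximum number of generations.
    For minimization problems approaching 0, w shrinks as the swarm
    improves, shifting the search from exploration to exploitation."""
    return gbest_value / max_generations
```

For example, with $M_g = 2000$, a swarm whose global best value falls from 500 to 5 sees its inertia weight drop from 0.25 to 0.0025, so late-stage steps are dominated by the cognitive and social terms rather than momentum.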
Furthermore, because of static fitness, the traditional PSO technique experiences a lack of diversity in the early phases of the evolution process for the global best particle $gbest$ and the personal best particles $pbest$. During the search process, all the particles follow the $gbest$ particle; if $gbest$ does not point to the best solution, all the particles may become trapped in a locally optimal region. During the optimization process, the difference between the global best particle and the current particle becomes so small as the number of generations increases that the particles become static or stagnant; as a consequence, the particle velocity approaches zero, which causes the algorithm to converge prematurely.
To tackle the aforementioned issues in the conventional PSO algorithm, we introduce a new mechanism that chooses a different mutation operator based on a selection ratio. The mutation operators are applied to the personal best particles and the global best particle for the purpose of enhancing the performance of the PSO process as well as preserving the diversity of the swarm. The proposed adaptive mutation operators are expressed mathematically as:
$$Q_1: \; pbest_{ij}^{1} = pbest_{ij} + Rly_{ij}$$
$$Q_2: \; gbest_{j}^{1} = gbest_{j} + Rly_{j}$$
$$Q_3: \; pbest_{ij}^{1} = pbest_{ij} + std_{ij}$$
$$Q_4: \; gbest_{j}^{1} = gbest_{j} + std_{j}$$
$$Q_5: \; pbest_{ij}^{1} = pbest_{ij} + gama_{ij}$$
$$Q_6: \; gbest_{j}^{1} = gbest_{j} + gama_{j}$$
The inspiration for the mutation operators is described in the following paragraph.
The basic PSO is inspired by the flocking of birds or the schooling of fish: the birds fly through the air randomly, and the learning rate of each particle in the PSO process is randomized as well. During flight, the wings of the birds play an imperative role in sustaining the motion, and the wings need randomized energy to spend more time in the air. Consequently, the wings of the birds tire from reduced energy over a long journey, and as a consequence the birds are unable to explore more of the search space. Viewing the same procedure in the PSO process, where the two best particles play the primary role during the search, if the values of the personal best and global best particles (the energy of the given particles) diminish as computational time passes, the velocity of the particles approaches zero, and as a result the algorithm converges prematurely. To avoid this kind of issue, we apply the mutation operators to the particles with the purpose of improving the PSO search and enabling the personal and global best particles to explore more of the optimization space. Thus, the novel mutation operators generate random numbers that provide more energy to the particles so that they explore more regions of the space during the evolution process.
In the PSO optimization process, each mutation operator plays a key role in the proposed strategy and has an independent selection ratio. The proposed ratios of $Q_1$ and $Q_2$ are denoted by $X$, those of $Q_3$ and $Q_4$ by $Y$, and those of $Q_5$ and $Q_6$ by $Z$. $X$, $Y$, and $Z$ are all set to 0.3 during the initial phase of the optimization process, which ensures that each mutation is chosen an equal number of times. The mutation ratios are updated during the search process depending on the previous success rate of each mutation operator, summarizing the information gained from the history of the objective function. Explicitly, the updated equations for the novel mutation mechanism are:
$$X = l + (1 - 3l) \, \frac{out_{Rly}}{out_n}$$
$$Y = l + (1 - 3l) \, \frac{out_{std}}{out_n}$$
$$Z = l + (1 - 3l) \, \frac{out_{gama}}{out_n}$$
In the above equations, $out$ represents the number of successful mutations of each mutation-operator group among the recent mutation operations, and $out_n$ is the total. The minimum ratio of each mutation operator is predefined by a constant $l$, whose value is set to 0.04. Furthermore, during the evolution process, the values of $X$, $Y$, and $Z$ are updated after every generation. The best mutation is selected by the roulette wheel method on the basis of the selection ratios of the mutation operators; under the roulette wheel mechanism, a mutation-operator group with a high selection ratio is chosen with high probability.
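The ratio update and roulette-wheel selection described above can be sketched as follows. This is our reading of the update rule, not the authors' code: each ratio is floored at the constant $l$, the remaining mass is shared in proportion to each group's success count, and the group names `'X'`, `'Y'`, `'Z'` follow the text:

```python
import numpy as np

def update_ratios(successes, l=0.04):
    """Update selection ratios from per-group success counts.

    Implements our reading of X = l + (1 - 3l) * out_group / out_n:
    each of the three ratios is floored at l and the ratios sum to 1."""
    total = max(sum(successes.values()), 1)  # avoid division by zero
    return {k: l + (1 - 3 * l) * v / total for k, v in successes.items()}

def select_mutation(ratios, rng=None):
    """Roulette-wheel selection of a mutation-operator group:
    a group with a larger ratio occupies a larger slice of the wheel."""
    rng = np.random.default_rng() if rng is None else rng
    names = list(ratios)
    p = np.array([ratios[n] for n in names], dtype=float)
    p = p / p.sum()  # renormalize in case the ratios drift slightly
    return rng.choice(names, p=p)
```

For example, starting from equal ratios of 0.3, a generation in which the Lévy group succeeds twice as often as the others raises its slice of the wheel while the floor $l = 0.04$ keeps the other groups selectable.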

4. Numerical Results Analysis

The proposed Improved PSO has been compared with five other well-known optimization algorithms on ten mathematical test functions of dimension 100. The compared algorithms are:
  • A Particle swarm optimization with adaptive mutation for multimodal optimization (AMPSO) [20].
  • A modified PSO algorithm with dynamic parameters for solving complex engineering design problem (MPSOED) [24].
  • Analysis of gaussian & cauchy mutations in modified particle swarm optimization algorithm (GCMPSO) [25].
  • Global Particle Swarm Optimization for High Dimension Numerical Functions Analysis (GPSO) [27].
  • Modified particle swarm optimization algorithm for scheduling renewable generation (MPSO) [51].
  • Modified particle swarm optimization with effective guides (MPSOEG) [52].
For the current research work, we use mathematical test functions to evaluate the novel method as well as the other algorithms, since these benchmark problems are popular in the field of engineering and are normally considered standard benchmarks. In this paper, we employ ten mathematical functions to examine the effectiveness of particle swarm optimization with parameter adjustment. These include both unimodal and multimodal functions to validate the proposed IPSO algorithm's performance, and the results are compared with various PSO variants, namely GPSO, AMPSO, MPSO, MPSOED, GCMPSO, and MPSOEG, in tabulated data and plots for functions 1–10. Table 1 shows these test functions along with the search spaces over which they are commonly optimized.
To ensure a proper comparison among the various methods while analyzing these optimization functions, we employed the same parameter values for all algorithms in the computational testing. The maximum generation was set to 2000 and the dimension to 100. Over 60 trial runs, Table 2 records the best values, while the worst, mean, and variance of the solution values are available in Appendix A.

5. Discussion

On the basis of these comparative data metrics, we claim that our proposed approach (IPSO) performs better than the other well-known algorithms and strategies. The most complicated benchmark problems were chosen for validation to recheck the performance of the various algorithms. The best objective function values for the various techniques and our proposed algorithm are indicated in Table 2, while the worst, mean, and variance results are tabulated in Appendix A.
Consider the "Rastrigin function", a complex multimodal function with a single global optimal solution and multiple local minima. According to the tabulated results, our new approach surpasses the other methods, i.e., GPSO, AMPSO, MPSO, MPSOED, GCMPSO, and MPSOEG. The test results for the Rastrigin function show that our proposed method performed well compared with the others, so it comes in the first category.
To recheck the stability and power of our proposed PSO, we validated it on the "Alpine 1" test function. Alpine 1 is also a complicated multimodal function, having many local minima and one global optimal solution, with a range of [−10, 10]. The tabulated value for the Alpine function indicates that our algorithm gives the minimum result compared with the others. We conclude that our novel approach outclasses the other algorithms on the Alpine function.
Similarly, checking the results of our modified PSO (IPSO) on the sphere function, which is unimodal: the global optimal solution of the sphere function is zero, and the range of the search space is [−10, 10]. The tabulated results show that our modified PSO optimized this function.
In addition, our modified method produced the top results on the HappyCat benchmark function. The HappyCat function is frequently used to validate algorithms due to its many local minima and complicated structure. Observing the results for the Quartic function, our modified approach also gave the top results compared with the others. In summary, Schwefel's Problem 1.2, De Jong's, Bent Cigar, Step, Quartic, Alpine 1, and Griewank are all complex optimization problems that are commonly used to validate algorithms. In short, our novel IPSO shows good results for most of the optimization problems compared with the other well-known modified algorithms.
The convergence curves for test functions $f_1$, $f_5$, and $f_7$ are presented in Figure 1, Figure 2, and Figure 3, respectively, while the curves for $f_2$, $f_3$, $f_4$, $f_6$, $f_8$, $f_9$, and $f_{10}$, available in the appendix, show the convergence characteristics of the various algorithms. A critical study of test function $f_1$ shows that our approach finds the required solution space after 500 generations, while other methods such as AMPSO, GPSO, MPSOED, and GCMPSO perform badly, which indicates their lower performance and robustness.
From the plots of the second test function, we see the low performance of the other comparable methods and the efficacy of our proposed approach: over the whole search process, the other well-known methods could not converge to the global region, while our novel modified approach finds the main region after 2000 generations. Similarly, our observation on the plot of the third function $f_3$ is that the proposed approach converged before 600 generations, while the other algorithms never found the optimal solution of the given function.
Observing the plot of the sixth test function, we conclude that MPSO performs a little better than AMPSO, while the IPSO (proposed approach) outclasses all the other algorithms, which shows its stability and maturity. So, from the plots, it is obvious that the novel algorithm shows the best performance.
In this article, we employ the logarithm of the objective function values for comparison. From the graphical results for the test functions, the proposed IPSO converges to the global optimal region faster than GPSO, AMPSO, MPSO, MPSOED, GCMPSO, and MPSOEG. The reasons are that (1) the proposed novel adaptive mutation operator prevents diversity loss in the optimization process, and (2) the proposed dynamic factor maintains the balance between exploration and exploitation in the search domain. Thus, we conclude from the plots that the convergence curves of the suggested approach for the various test functions prove its superiority compared with the others. From the convergence trajectories, it is clear that the novel technique is more efficient, stable, and robust. Viewing the numerical results, the proposed IPSO's final solution has significantly greater quality than those of GPSO, AMPSO, MPSO, MPSOED, GCMPSO, and MPSOEG.

6. Application

For better performance analysis of our proposed approach, we chose an engineering electromagnetic device, "TEAM workshop problem 22 (SMES)", as another case study. The optimal design of an SMES device is a popular problem in computational electromagnetics, and it is the 22nd benchmark problem for testing electromagnetic analysis methods (TEAM 22) [53]. The SMES device stores energy in the form of a magnetic field generated by superconducting coils. TEAM workshop problem 22, also known as an optimization case of the SMES, has been adopted as a magnetostatics benchmark problem. The design goal of TEAM 22, as illustrated in Figure 4, is to keep the stored energy as close as possible to 180 MJ while minimizing the magnetic stray field observed on lines a and b. The first coil is charged to store energy, and the second should be built to reduce the first coil's high magnetic stray field. In addition, to maintain the superconductivity of the inner and outer coils, the quench condition should not be violated, since manufacturing tolerances in the geometric variables (e.g., $R_2$, $d_2$, and $h_2$ in Figure 4), as well as perturbations in the current controller, can lead to a faulty device.
The design procedure of the problem incorporates three parameters related to the creation of the SMES [54,55]:
$$\min f = \frac{B_{stray}^2}{B_{norm}^2} + \frac{|Energy - E_{ref}|}{E_{ref}} \quad \text{s.t.} \quad J_i < \left(-6.4\,|(B_{max})_i| + 54\right) \; (\mathrm{A/mm^2}), \quad i = 1, 2$$
Obviously, this SMES device design is formulated as a single-objective problem, but the objective actually combines two terms involving the magnetic energy W_m stored in the coil pair, with W_m,ref = 180 MJ, N = 22 field sampling points, and B_norm = 3 mT.
The stray magnetic field enters the objective function as follows:

OF = B_stray² / B_norm² + |W_m − W_m,ref| / W_m,ref

B_stray² = ( Σ_{i=1}^{N} |B_stray,i|² ) / N
In the present work, the finite element method is applied to calculate the performance parameters in the above two equations. When the magnetic field is established, the physical condition of the coils must be maintained in order to guarantee superconductivity within the solenoids.
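As an illustration, once an FEM solver has produced the stray-field samples along lines a and b and the stored energy, the objective above reduces to a few lines of arithmetic. The sketch below is a hedged stand-in for that post-processing step, assuming stray-field samples in Tesla, energy in Joules, and the reference values E_ref = 180 MJ and B_norm = 3 mT quoted in the text.

```python
import numpy as np

def team22_objective(b_stray_samples, energy, e_ref=180e6, b_norm=3e-3):
    """Illustrative evaluation of the TEAM 22 objective: the mean
    squared stray field over the N sampling points, normalized by
    B_norm^2, plus the relative deviation of the stored energy from
    E_ref.  The field samples and the energy would come from an FEM
    analysis of a candidate coil geometry."""
    b2 = np.mean(np.asarray(b_stray_samples, dtype=float) ** 2)  # B_stray^2
    return b2 / b_norm ** 2 + abs(energy - e_ref) / e_ref
```

A design that stores exactly 180 MJ with zero stray field would score 0; each term contributes 1 when its quantity equals the normalization reference.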
For example, because the current density is 22.5 A/mm², B_max must be less than 4.92 T, according to the quench condition

J_i ≤ (−6.4 (B_max)_i + 54) A/mm²

where J_i indicates the current density of the ith coil, (B_max)_i represents the maximum magnetic flux density of the ith coil, and i denotes the coil number.
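The quench condition amounts to a one-line feasibility test per coil. The helper below is a sketch assuming J in A/mm² and B_max in Tesla, consistent with the linearized critical curve above; the function name is our own.

```python
def quench_satisfied(j, b_max):
    """Check the superconductivity (quench) condition for one coil:
    the current density j (A/mm^2) must stay on or below the
    linearized critical curve -6.4*B_max + 54, where b_max is the
    maximum flux density (T) observed in that coil."""
    return j <= -6.4 * b_max + 54.0
```

With j = 22.5 A/mm² the curve gives −6.4 × 4.92 + 54 ≈ 22.51, so B_max just under 4.92 T is still feasible, matching the numerical example in the text.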
In this electromagnetic optimization problem the inner solenoid is fixed at R1 = 2 m, d1 = 0.27 m and h1/2 = 0.8 m, whereas the geometric dimensions of the outer solenoid, 0.6 m ≤ R2 ≤ 3.4 m and 0.1 m ≤ d2 ≤ 0.4 m, are optimized.
The superconducting magnetic energy storage device carries currents in opposing directions; the radius, height, thickness, and the search space of the stray field are given in Table 3. For the sake of a fair comparison, all parameters were set to the same values for IPSO, GPSO, AMPSO, MPSO, MPSOED, GCMPSO, and MPSOEG, and the average objective function values are reported in Table 3. The results demonstrate that the output of the novel IPSO is superior to those of the other algorithms.
To synthesize a magnetic field with a desired distribution, appropriately designed current-carrying coils can be used. Such coils have several applications in biomedical engineering: a uniform magnetic field is the background of nuclear magnetic resonance spectroscopy, and a linear field profile is required for magnetic resonance imaging. Furthermore, in magneto-fluid hyperthermia (MFH), field uniformity aids the uniform dispersion of the heat generated in a nanoparticle fluid previously injected into the target region, such as a tumor mass under treatment. Major practical applications therefore motivated this benchmark problem.

7. Conclusions

PSO is a relatively new metaheuristic for the global optimization of multimodal objective functions with continuous variables and has been recognized as a standard global optimizer. Although a wealth of effort has been devoted to improving its convergence speed, solution quality, and algorithmic stability, the performance of existing PSOs is still unsatisfactory; in particular, premature convergence and loss of diversity remain two challenging issues. In this respect, a novel adaptive mutation operator is designed to ensure the diversity of particles during the optimization process, and a dynamic factor is proposed to ensure a good balance between exploration and exploitation searches. The numerical results on mathematical test problems and an engineering application prototype have validated the effectiveness of the proposed PSO algorithm. Consequently, the present work provides a feasible global optimizer for multimodal functions with continuous variables.
In future work, we would like to analyze the convergence problem using a hybrid optimization algorithm (PSO and ABC), introduce novel formulations for the cognitive and social components, design new selection methods for the leader particle, and derive new update equations for the personal best particle based on the idea of neighborhoods. In parallel, we may consider other case studies, such as solenoid design problems, as well as additional shifted or rotated mathematical test functions.

Author Contributions

Conceptualization, R.A.K. and S.Y.; methodology, software and validation, R.A.K.; formal analysis, investigation and resources, S.F.; writing—original draft preparation, R.A.K.; writing—review and editing, S.K.; visualization, K.; supervision, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

All authors agree.

Data Availability Statement

The data that support the study’s findings, such as numerical simulation, model, or code generated or used during the study, are available upon request from the journal and the corresponding author.

Acknowledgments

Thanks to the China Scholarship Council, Zhejiang University, Hangzhou, China (www.zju.edu.cn, accessed on 5 December 2021) and the University of Science & Technology Bannu, Pakistan (www.ustb.edu.pk, accessed on 5 December 2021) for providing a great research environment during this work.

Conflicts of Interest

The corresponding author declares, on behalf of all authors, that there is no conflict of interest.

Abbreviations

PSO: Particle Swarm Optimization
IPSO: Improved Particle Swarm Optimization
C1: Cognitive Constant
C2: Social Constant
pbest: Personal Best
gbest: Global Best
W: Inertia Weight
SPSO: Smart Particle Swarm Optimization
CF: Convergence Factor
AIWF: Adaptive Inertia Weight Factor
GA: Genetic Algorithm
SCA: Sine Cosine Algorithm
GWO: Grey Wolf Optimizer
DE: Differential Evolution
PV: Photovoltaics
Mg: Maximum Generation
Rly: Rayleigh's method
Std: Students
Out: Outcome
Q: Mutation Operator
SMES: Superconducting Magnetic Energy Storage
TEAM: Testing Electromagnetic Analysis Methods
Wm: Magnetic Energy
OF: Objective Function
Ji: Coil Current Density
Bmax: Maximum Magnetic Flux Density

Appendix A

Performance Comparison based on Worst, Mean and Variance.
f1 Rastrigin

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | 0.00 | 0.00 | 0.20 | 0.10 | 0.20 | 6.70 | 1.10 |
| Mean | −12.18 | −2.14 | −2.10 | −1.49 | −0.62 | −0.21 | −3.80 |
| Variance | 4.14 | 0.77 | 1.13 | 0.84 | 0.68 | 3.45 | 2.35 |

f2 De Jong's

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | 1.00 | 1.00 | 0.00 | 0.00 | 0.00 | 16.90 | 1.20 |
| Mean | −50.64 | −1.25 | −3.22 | −14.41 | −7.09 | −10.70 | −9.77 |
| Variance | 30.13 | 1.08 | 1.87 | 7.44 | 3.59 | 12.08 | 6.63 |

f3 Bent Cigar

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | 0.00 | 1.00 | 1.80 | 1.70 | 1.10 | 0.00 | 2.60 |
| Mean | −27.01 | −7.32 | −7.37 | −5.69 | −3.24 | −8.02 | −1.01 |
| Variance | 10.12 | 2.92 | 6.22 | 3.02 | 2.84 | 5.93 | 2.20 |

f4 Step

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | −1.00 | 0.90 | 1.10 | 1.80 | 1.70 | 1.01 | 0.10 |
| Mean | −45.65 | −5.65 | −3.79 | −7.13 | −15.65 | −9.95 | −11.49 |
| Variance | 26.14 | 5.12 | 3.75 | 6.10 | 12.91 | 8.78 | 7.70 |

f5 Quartic

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | 0.00 | 1.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.75 |
| Mean | −54.37 | −11.58 | −7.79 | −22.93 | −5.77 | −11.18 | −8.16 |
| Variance | 16.18 | 8.70 | 5.52 | 14.99 | 3.76 | 10.00 | 6.71 |

f6 Sphere

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | 0.80 | 0.90 | 1.60 | 0.80 | 1.00 | 1.41 | 0.93 |
| Mean | −5.55 | −1.09 | −1.18 | −2.85 | −2.90 | −0.82 | −3.73 |
| Variance | 3.90 | 1.05 | 1.43 | 2.67 | 2.62 | 0.90 | 2.09 |

f7 Schwefel's Problem 1.2

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | 5.90 | 8.30 | 10.00 | 10.10 | 8.90 | 6.00 | 6.11 |
| Mean | −68.62 | −62.69 | −10.44 | −12.20 | −10.38 | −12.75 | −6.26 |
| Variance | 33.84 | 21.23 | 17.25 | 15.15 | 13.06 | 18.28 | 9.42 |

f8 HappyCat

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | 1.90 | 2.40 | 2.30 | 2.50 | 2.60 | 8.00 | 1.19 |
| Mean | −0.75 | 0.60 | 1.46 | 1.88 | 1.17 | 1.77 | −0.44 |
| Variance | 1.17 | 0.43 | 0.21 | 0.29 | 0.40 | 1.34 | 0.71 |

f9 Alpine1

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | 0.00 | 3.20 | 2.90 | 0.90 | 3.00 | 2.90 | 3.30 |
| Mean | −14.57 | −2.28 | 1.74 | −4.37 | −1.34 | −0.28 | −1.69 |
| Variance | 6.33 | 1.70 | 1.25 | 2.62 | 4.15 | 2.43 | 4.20 |

f10 Griewank

| | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| Worst | 0.00 | 0.90 | 1.10 | 1.00 | 2.00 | 1.60 | 1.10 |
| Mean | −17.02 | −8.47 | −5.60 | −7.01 | −2.50 | −13.59 | −2.35 |
| Variance | 10.86 | 6.17 | 4.14 | 4.68 | 3.27 | 7.94 | 2.17 |
Figure A1. Algorithms’ convergence plots on f 2 .
Figure A2. Algorithms’ convergence plots on f 3 .
Figure A3. Algorithms’ convergence plots on f 4 .
Figure A4. Algorithms’ convergence plots on f 6 .
Figure A5. Algorithms’ convergence plots on f 8 .
Figure A6. Algorithms’ convergence plots on f 9 .
Figure A7. Algorithms’ convergence plots on f 10 .

Description of Mathematical Test Functions

Rastrigin is a multimodal function that is challenging to solve because it contains several local minima in which an optimization algorithm with limited exploratory power is likely to become trapped. The function's single global optimum value, 0, is attained within the domain [−5.12, 5.12] at x* = [0, 0, …, 0].
The Step test function is one of the more complicated problems due to its lack of a suitable descent direction. Its minimum value is zero, its search region is [−100, 100], and its landscape consists of flat plateaus.
Researchers use the Quartic function as a benchmark problem due to its unimodal character. Its global minimum is zero, and its search space is [−1.28, 1.28].
The Sphere test function is unimodal and continuous, and such problems are easy to solve. Its search domain is [−5.12, 5.12], and its minimum value is zero.
Griewank is a mathematical test function used in engineering design for the validation of computational techniques. The problem is complex and multimodal, its feasible range is [−100, 100], and its global optimal value is zero.
The Alpine 1 function is a mathematical test function used to validate computing strategies in the field of engineering optimization. It is multimodal and continuous, with the constraint −10 ≤ xi ≤ 10. The global minimum lies at the origin, where x = (0, …, 0) and f(x) = 0.
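For reference, a few of the functions described above can be implemented compactly. These are the standard textbook (unshifted, unbiased) forms, which may differ from the shifted or biased variants listed in Table 1.

```python
import numpy as np

def sphere(x):
    """Unimodal, continuous; global minimum f(0) = 0 on [-5.12, 5.12]^D."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal with many local minima; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):
    """Multimodal; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)

def alpine1(x):
    """Multimodal, continuous on [-10, 10]^D; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.abs(x * np.sin(x) + 0.1 * x)))
```

Each function returns 0 at the origin, matching the global minima quoted in the descriptions above.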

References

  1. Wahab, M.N.A.; Nefti-Meziani, S.; Atyabi, A. A Comprehensive Review of Swarm Optimization Algorithms. PLoS ONE 2015, 10, e0122827. [Google Scholar] [CrossRef] [Green Version]
  2. Kennedy, J.; Eberhart, R.C. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  3. Freitas, D.; Lopes, L.G.; Morgado-Dias, F. Particle Swarm Optimization: A Historical Review up to the Current Developments. Entropy 2020, 22, 362. [Google Scholar] [CrossRef] [Green Version]
  4. Eberhart, R.C.; Shi, Y. Tracking and Optimizing Dynamic Systems with Particle Swarms. In Proceedings of the Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001; pp. 94–100. [Google Scholar]
  5. Banks, A.; Vincent, J.; Anyakoha, C. A Review of Particle Swarm Optimization. Part II: Hybridisation, Combinatorial, Multicriteria and Constrained Optimization, and Indicative Applications. Nat. Comput. 2008, 7, 109–124. [Google Scholar] [CrossRef]
  6. Imran, M.; Hashim, R.; Khalid, N.E.A. An Overview of Particle Swarm Optimization Variants. Procedia Eng. 2013, 53, 491–496. [Google Scholar] [CrossRef] [Green Version]
  7. Khan, R.A.; Yang, S.; Fahad, S.; Khan, S.U.; Kalimullah. A Modified Particle Swarm Optimization with a Smart Particle for Inverse Problems in Electromagnetic Devices. IEEE Access 2021, 9, 99932–99943. [Google Scholar] [CrossRef]
  8. Eysenck, G. Sensor-Based Big Data Applications and Computationally Networked Urbanism in Smart Energy Management Systems. Geopolit. Hist. Int. Relat. 2020, 12, 52–58. [Google Scholar]
  9. Konecny, V.; Barnett, C.; Poliak, M. Sensing and Computing Technologies, Intelligent Vehicular Networks, and Big Data-Driven Algorithmic Decision-Making in Smart Sustainable Urbanism. In Contemporary Readings in Law and Social Justice; Addleton Academic Publishers: New York, NY, USA, 2021; Volume 13, pp. 30–39. [Google Scholar]
  10. Harrower, K. Networked and Integrated Urban Technologies in Sustainable Smart Energy Systems. Geopolit. Hist. Int. Relat. 2020, 12, 45–51. [Google Scholar]
  11. Nica, E.; Stehel, V. Internet of Things Sensing Networks, Artificial Intelligence-Based Decision-Making Algorithms, and Real-Time Process Monitoring in Sustainable Industry 4.0. J. Self Gov. Manag. Econ. 2021, 9, 35–47. [Google Scholar]
  12. Valderrama Bento da Silva, P.H.; Camponogara, E.; Seman, L.O.; Villarrubia González, G.; Reis Quietinho Leithardt, V. Decompositions for MPC of Linear Dynamic Systems with Activation Constraints. Energies 2020, 13, 5744. [Google Scholar] [CrossRef]
  13. Sun, L.; You, F. Machine Learning and Data-driven Techniques for the Control of Smart Power Generation Systems: An Uncertainty Handling Perspective. Engineering 2021, 7, 1239–1247. [Google Scholar] [CrossRef]
  14. Du, C.; Yin, Z.G.; Zhang, Y.P.; Liu, J.; Sun, X.D.; Zhong, Y.R. Research on Active Disturbance Rejection Control with Parameter Autotune Mechanism for Induction Motors Based on Adaptive Particle Swarm Optimization Algorithm with Dynamic Inertia Weight. IEEE Trans. Power Electr. 2019, 34, 2841–2855. [Google Scholar] [CrossRef]
  15. Kiani, A.T.; Nadeem, M.F.; Ahmed, A.; Khan, I.A.; Alkhammash, H.I.; Sajjad, I.A.; Hussain, B. An Improved Particle Swarm Optimization with Chaotic Inertia Weight and Acceleration Coefficients for Optimal Extraction of PV Models Parameters. Energies 2021, 14, 2980. [Google Scholar] [CrossRef]
  16. Kong, X.; Zhang, T. Non-Singular Fast Terminal Sliding Mode Control of High-Speed Train Network System Based on Improved Particle Swarm Optimization Algorithm. Symmetry 2020, 12, 205. [Google Scholar] [CrossRef] [Green Version]
  17. Kumar, E.V.; Raaja, G.S.; Jerome, J. Adaptive PSO for Optimal LQR Tracking Control of 2 DOF Laboratory Helicopter. Appl. Soft Comput. 2016, 41, 77–90. [Google Scholar] [CrossRef]
  18. Cui, Q.; Li, Q.; Li, G.; Li, Z.; Han, X.; Lee, H.P.; Liang, Y.; Wang, B.; Jiang, J.; Wu, C. Globally-Optimal Prediction-Based Adaptive Mutation Particle Swarm Optimization. Inf. Sci. 2017, 418–419, 186–217. [Google Scholar] [CrossRef]
  19. Li, X.; Wu, X.; Xu, S.; Qing, S.; Chang, P.C. A Novel Complex Network Community Detection Approach using Discrete Particle Swarm Optimization with Particle Diversity and Mutation. Appl. Soft Comput. 2019, 81, 105476. [Google Scholar] [CrossRef]
  20. Wang, H.; Wang, W.; Wu, Z. Particle Swarm Optimization with Adaptive Mutation for Multimodal Optimization. Appl. Math. Comput. 2013, 221, 296–305. [Google Scholar] [CrossRef]
  21. Dong, W.Y.; Kang, L.L.; Zhang, W.S. Opposition-Based Particle Swarm Optimization with Adaptive Mutation Strategy. Soft Comput. 2017, 21, 5081–5090. [Google Scholar] [CrossRef]
  22. Zou, Y.; Liu, P.X.; Li, C.; Cheng, Q. Collision Detection for Virtual Environment using Particle Swarm Optimization with Adaptive Cauchy Mutation. Cluster Comput. 2017, 20, 1765–1774. [Google Scholar] [CrossRef]
  23. Tao, X.; Guo, W.; Li, Q.; Ren, C.; Liu, R. Multiple Scale Self-Adaptive Cooperation Mutation Strategy-Based Particle Swarm Optimization. Appl. Soft Comput. 2020, 89, 1568–4946. [Google Scholar] [CrossRef]
  24. Khan, S.; Kamran, M.; Rehman, O.U.; Liu, L.; Yang, S. A Modified PSO Algorithm with Dynamic Parameters for Solving Complex Engineering Design Problem. Int. J. Comput. Math. 2017, 95, 2308–2329. [Google Scholar] [CrossRef]
  25. Sarangi, A.; Samal, S.; Sarangi, S.K. Analysis of Gaussian & Cauchy Mutations in Modified Particle Swarm Optimization Algorithm. In Proceedings of the 5th International Conference on Advanced Computing & Communication Systems (ICACCS), Coimbatore, India, 15–16 March 2019; pp. 463–467. [Google Scholar]
  26. Huang, H.; Qin, H.; Hao, Z.; Lim, A. Example-Based Learning Particle Swarm Optimization for Continuous Optimization. Inf. Sci. 2012, 182, 125–138. [Google Scholar] [CrossRef]
  27. Jamian, J.J.; Abdullah, M.N.; Mokhlis, H.; Mustafa, M.W.; Bakar, A.H.A. Global Particle Swarm Optimization for High Dimension Numerical Functions Analysis. J. Appl. Math. 2014, 2014, 329193. [Google Scholar] [CrossRef]
  28. Guedria, N.B. Improved Accelerated PSO Algorithm for Mechanical Engineering Optimization Problems. Appl. Soft Comput. 2016, 40, 455–467. [Google Scholar] [CrossRef]
  29. Ali, K.T.; Ling, H.S.; Mohan, S.A. Advanced Particle Swarm Optimization Algorithm with Improved Velocity Update Strategy. In Proceedings of the 2018 IEEE International Conference on Systems, Man and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 3944–3949. [Google Scholar]
  30. Marinakis, Y.; Migdalas, A.; Sifaleras, A. A Hybrid Particle Swarm Optimization-Variable Neighborhood Search algorithm for Constrained Shortest Path problems. Eur. J. Oper. Res. 2017, 261, 819–834. [Google Scholar] [CrossRef]
  31. Ding, J.; Liu, J.; Chowdhury, K.R.; Zhang, W.; Hu, Q.; Lei, J. A Particle Swarm Optimization using Local Stochastic Search and Enhancing Diversity for Continuous Optimization. Neurocomputing 2014, 137, 261–267. [Google Scholar] [CrossRef]
  32. Garg, H. A Hybrid GSA-GA Algorithm for Constrained Optimization Problems. Inf. Sci. 2019, 478, 499–523. [Google Scholar] [CrossRef]
  33. Chegini, S.N.; Bagheri, A.; Najafi, F. PSOSCALF: A new Hybrid PSO based on Sine Cosine algorithm and Levy Flight for Solving Optimization Problems. Appl. Soft Comput. 2018, 73, 697–726. [Google Scholar] [CrossRef]
  34. Senel, F.A.; Gökçe, F.; Yüksel, A.S.; Yigit, T. A Novel Hybrid PSO–GWO Algorithm for Optimization Problems. Eng. Comput. 2019, 35, 1359–1373. [Google Scholar] [CrossRef]
  35. Tang, B.; Xiang, K.; Pang, M. An Integrated Particle Swarm Optimization Approach Hybridizing a New Self-Adaptive Particle Swarm Optimization with a Modified Differential Evolution. Neural Comput. Appl. 2018, 32, 4849–4883. [Google Scholar] [CrossRef]
  36. Hermosilla, G.; Rojas, M.; Mendoza, J.; Farias, G.; Pizarro, F.T.; Martin, C.S.; Vera, E. Particle Swarm Optimization for the Fusion of Thermal and Visible Descriptors in Face Recognition Systems. IEEE Access 2018, 6, 42800–42811. [Google Scholar] [CrossRef]
  37. Shariati, M.; Mafipour, M.S.; Mehrabi, P.; Bahadori, A.; Zandi, Y.; Salih, M.N.A.; Nguyen, H.; Dou, J.; Song, X.; Poi-Ngian, S. Application of a Hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) Model in Behavior Prediction of Channel Shear Connectors Embedded in Normal and High-Strength Concrete. Appl. Sci. 2019, 9, 5534. [Google Scholar] [CrossRef] [Green Version]
  38. Liu, J.; Yang, D.; Lian, M.; Li, M. Research on Intrusion Detection Based on Particle Swarm Optimization in IoT. IEEE Access 2021, 9, 38254–38268. [Google Scholar] [CrossRef]
  39. Bai, B.; Guo, Z.; Zhou, C.; Zhang, W.; Zhang, J. Application of adaptive reliability importance sampling-based extended domain PSO on single mode failure in reliability engineering. Inf. Sci. 2021, 546, 42–59. [Google Scholar] [CrossRef]
  40. Hantash, N.; Khatib, T.; Khammash, M. An Improved Particle Swarm Optimization Algorithm for Optimal Allocation of Distributed Generation Units in Radial Power Systems. Appl. Comput. Intell. Soft Comput. 2020, 2020, 8824988. [Google Scholar]
  41. Sun, D.; Wei, E.; Ma, Z.; Wu, C.; Xu, S. Optimized CNNs to Indoor Localization through BLE Sensors Using Improved PSO. Sensors 2021, 21, 1995. [Google Scholar] [CrossRef]
  42. Wu, T.Z.; Shi, X.; Liao, L.; Zhou, C.J.; Zhou, H.; Su, Y.H. A Capacity Configuration Control Strategy to Alleviate Power Fluctuation of Hybrid Energy Storage System Based on Improved Particle Swarm Optimization. Energies 2019, 12, 642. [Google Scholar] [CrossRef] [Green Version]
  43. Arican, M.; Polat, K. Binary Particle Swarm Optimization (BPSO) Based Channel Selection in the EEG Signals and its Application to Speller Systems. J. Artif. Intell. Syst. 2020, 2, 27–37. [Google Scholar] [CrossRef]
  44. Rajagopal, A.; Joshi, G.P.; Ramachandran, A.; Subhalakshmi, R.T.; Khari, M.; Jha, S.; Shankar, K.; You, J. A Deep Learning Model Based on Multi-Objective Particle Swarm Optimization for Scene Classification in Unmanned Aerial Vehicles. IEEE Access 2020, 8, 135383–135393. [Google Scholar] [CrossRef]
  45. Edla, D.R.; Kongara, M.C.; Cheruku, R. A PSO Based Routing with Novel Fitness Function for Improving Lifetime of WSNs. Wirel. Pers. Commun. 2019, 104, 73–89. [Google Scholar] [CrossRef]
  46. Farid, M.; Latip, R.; Hussin, M.; Abdul Hamid, N.A.W. A Survey on QoS Requirements Based on Particle Swarm Optimization Scheduling Techniques for Workflow Scheduling in Cloud Computing. Symmetry 2020, 12, 551. [Google Scholar] [CrossRef] [Green Version]
  47. Azab, M. Multi-Objective Design Approach of Passive Filters for Single-Phase Distributed Energy Grid Integration Systems using Particle Swarm Optimization. Energy Rep. 2020, 6, 157–172. [Google Scholar] [CrossRef]
  48. Farshi, T.R.; Drake, J.H.; Ozcan, E. A Multimodal Particle Swarm Optimization-Based Approach for Image Segmentation. Expert Syst. Appl. 2020, 149, 13. [Google Scholar] [CrossRef]
  49. Khan, S.U.; Yang, S.; Wang, L.; Liu, L. A Modified Particle Swarm Optimization Algorithm for Global Optimizations of Inverse Problems. IEEE Trans. Magn. 2015, 52, 1–4. [Google Scholar] [CrossRef]
  50. Fahad, S.; Yang, S.; Khan, R.A.; Khan, S.; Khan, S.A. A Multimodal Smart Quantum Particle Swarm Optimization for Electromagnetic Design Optimization Problems. Energies 2021, 14, 4613. [Google Scholar] [CrossRef]
  51. Gholami, K.; Dehnavi, E. A Modified Particle Swarm Optimization Algorithm for Scheduling Renewable Generation in a Micro-Grid under Load Uncertainty. Appl. Soft Comput. 2019, 78, 496–514. [Google Scholar] [CrossRef]
  52. Karim, A.A.; Isa, N.A.M.; Lim, W.H. Modified Particle Swarm Optimization with Effective Guides. IEEE Access 2020, 8, 188699–188725. [Google Scholar] [CrossRef]
  53. Alotto, P.; Baumgartner, U.; Freschi, F. SMES Optimization Benchmark: TEAM Workshop Problem 22. COMPUMAG TEAM Workshop 2008; pp. 1–4. Available online: http://www.compumag.org/jsite/images/stories/TEAM/problem22.pdf (accessed on 8 September 2021).
  54. Di Barba, P.; Mognaschi, M.E.; Lowther, D.A.; Sykulski, J.K. A Benchmark TEAM Problem for Multi-Objective Pareto Optimization of Electromagnetic Devices. IEEE Trans. Magn. 2018, 54, 1–4. [Google Scholar] [CrossRef] [Green Version]
  55. Li, Y.; Lei, G.; Bramerdorfer, G.; Peng, S.; Sun, X.; Zhu, J. Machine Learning for Design Optimization of Electromagnetic Devices: Recent Developments and Future Directions. Appl. Sci. 2021, 11, 1627. [Google Scholar] [CrossRef]
Figure 1. Algorithms convergence plots on f 1 .
Figure 2. Algorithms convergence plots on f 5 .
Figure 3. Algorithms convergence plots on f 7 .
Figure 4. Schematic diagram of SMES device.
Table 1. High Dimensional Classical Benchmark Functions.
| Function's Name | Mathematical Definition | Range |
|---|---|---|
| Rastrigin | f1(x) = (1/4000) Σ_{i=1}^{n} z_i² − Π_{i=1}^{n} cos(z_i/√i) + 1 | [−600, 600]^D |
| De Jong's | f2(x) = Σ_{i=1}^{n} i·x_i² | [−5.12, 5.12]^D |
| Bent Cigar | f3(x) = x_1² + 10⁶ Σ_{i=2}^{n} x_i² | [−100, 100]^D |
| Step | f4(x) = Σ_{i=1}^{D} (⌊x_i + 0.5⌋)² | [−100, 100]^D |
| Quartic | f5(x) = Σ_{i=1}^{n} x_i⁴ + random[0, 1) | [−1.28, 1.28]^D |
| Sphere | f6(x) = Σ_{i=1}^{n} x_i² | [−100, 100]^D |
| Schwefel's Problem 1.2 | f7(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} z_j)² + f_bias1, z = x − o, f_bias1 = −450 | [−100, 100]^D |
| HappyCat | f8(x) = abs(Σ_{i=1}^{n} x_i² − n)^{1/4} + (0.5 Σ_{i=1}^{n} x_i² + Σ_{i=1}^{n} x_i)/n + 0.5 | [−100, 100]^D |
| Alpine1 | f9(x) = Σ_{i=1}^{n} abs(x_i·sin(x_i) + 0.1·x_i) | [−10, 10]^D |
| Griewank | f10(x) = (1/4000) Σ_{i=1}^{n} z_i² − Π_{i=1}^{n} cos(z_i/√i) + 1 + f_bias2, z = x − o, f_bias2 = −180 | [−100, 100]^D |
“D” means search space Dimension.
Table 2. Statistical Analysis of the Best Objective Function Values for 100 Dimensions Benchmark Problems.
| Function | IPSO | GPSO | AMPSO | MPSOED | MPSO | GCMPSO | MPSOEG |
|---|---|---|---|---|---|---|---|
| f1 | −14.30 | −3.10 | −4.00 | −2.60 | −1.70 | −5.40 | −7.35 |
| f2 | −99.00 | −2.80 | −5.80 | −22.00 | −10.80 | −26.10 | −16.94 |
| f3 | −32.30 | −10.30 | −14.60 | −7.68 | −7.96 | −15.20 | −4.65 |
| f4 | −75.46 | −14.50 | −9.70 | −17.28 | −39.22 | −26.30 | −24.38 |
| f5 | −60.70 | −26.71 | −16.30 | −47.93 | −12.04 | −26.70 | −19.15 |
| f6 | −11.30 | −2.80 | −3.30 | −7.48 | −5.20 | −2.20 | −5.30 |
| f7 | −95.00 | −72.00 | −29.00 | −32.40 | −28.40 | −46.20 | −22.49 |
| f8 | −1.50 | 0.20 | 1.20 | 1.50 | 0.40 | −0.40 | −0.90 |
| f9 | −19.12 | −3.80 | −1.50 | −7.00 | −7.50 | −4.80 | −7.73 |
| f10 | −31.80 | −18.10 | −10.90 | −10.80 | −7.80 | −21.30 | −4.90 |
Table 3. Results Comparison of IPSO with other variants on TEAM Workshop Problem 22.
| Algorithm | R2 | h2/2 | d2 | Objective Function Fitness |
|---|---|---|---|---|
| IPSO | 2.9918 | 0.2028 | 0.2939 | 0.0717 |
| GPSO | 2.9713 | 0.2037 | 0.3192 | 0.1287 |
| AMPSO | 3.0017 | 0.6000 | 0.3201 | 0.1136 |
| MPSO | 3.0084 | 0.8265 | 0.2786 | 0.1356 |
| MPSOED | 2.8464 | 0.5729 | 0.3382 | 0.1123 |
| GCMPSO | 2.6050 | 0.2040 | 0.1000 | 0.1210 |
| MPSOEG | 3.1103 | 0.7325 | 0.2731 | 0.0821 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
