Article

Adaptive Multi-Level Search for Global Optimization: An Integrated Swarm Intelligence-Metamodelling Technique

1 Faculty of Printing, Packaging Engineering and Digital Media Technology, Xi’an University of Technology, Xi’an 710048, China
2 School of Engineering, Faculty of Science, University of East Anglia, Norwich NR4 7TJ, UK
3 School of Mechanical Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
4 Faculty of Engineering, University of Leeds, Leeds LS2 9JT, UK
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2021, 11(5), 2277; https://doi.org/10.3390/app11052277
Submission received: 9 January 2021 / Revised: 21 February 2021 / Accepted: 25 February 2021 / Published: 4 March 2021
(This article belongs to the Collection Heuristic Algorithms in Engineering and Applied Sciences)

Abstract

Over the last decade, metaheuristic algorithms have emerged as a powerful paradigm for the global optimization of the multimodal functions that arise from nonlinear problems in many engineering subjects. However, numerical analyses of many complex engineering design problems rely on the finite element method (FEM) or computational fluid dynamics (CFD), and population-based algorithms must evaluate such simulations repeatedly in their search for a global optimum. These simulations become computationally prohibitive for the design optimization of complex structures. To address this class of problems efficiently and effectively, an adaptively integrated swarm intelligence-metamodelling (ASIM) technique enabling multi-level search and model management for the optimal solution is proposed in this paper. The developed technique comprises two steps: in the first step, a global-level exploration for a near-optimal solution is performed by an adaptive swarm-intelligence algorithm; in the second step, a local-level exploitation for the refined optimal solution is carried out on adaptive metamodels, which are constructed by the multipoint approximation method (MAM). To demonstrate the superiority of the proposed technique over other methods, such as the conventional MAM, particle swarm optimization, hybrid cuckoo search, and the water cycle algorithm, in terms of the computational expense associated with solving complex optimization problems, one benchmark mathematical example and two real-world complex design problems are examined. In particular, the key factors responsible for the balance between exploration and exploitation are discussed as well.

1. Introduction

With tremendous advances in computational sciences, information technology, and artificial intelligence, design optimization has become increasingly popular in many engineering subjects, such as mechanical, civil, structural, aerospace, automotive, and energy engineering. It helps to shorten the design-cycle time and to identify creative designs that are not only feasible but also progressively optimal, given predetermined design criteria.
At the outset of design optimization, running a gradient-based algorithm with a multi-start process proved very successful in finding the global optimum of simple problems when gradient information is available [1]. Faced with increasingly complex optimization problems in which derivative information is unreliable or unavailable, however, researchers have gradually shifted their focus to derivative-free optimization methods [2] and metaheuristic methods. Following Glover’s convention [3], modern metaheuristic algorithms such as simulated annealing (SA) [4], genetic algorithms (GA) [5,6], particle swarm optimization (PSO) [7], and ant colony optimization (ACO) [8] have been applied with good success to complex nonlinear optimization problems [9,10]. The popularity of these nature-inspired algorithms lies in their ease of implementation and their capability to obtain solutions close to the global optimum. However, for many real-life design problems, thousands of calls to high-fidelity simulations (for example, computational fluid dynamics) may be required to reach a near-optimal solution, and these calls account for the overwhelming part of the total run time in the design cycle. It is therefore desirable to retain the appeal of metaheuristic algorithms for global search while replacing as many solver calls as possible with evaluations on metamodels to reduce the computational cost [11].
Typical techniques for metamodel building include Kriging [12], polynomial response surface (PRS) [13], radial basis function (RBF) [14], artificial neural network (ANN) [15], etc. Among them, PRS and ANN are regression methods that have advantages in dealing with convex problems, whereas Kriging and RBF are interpolation methods that are more appropriate for nonconvex or multimodal problems [16]. Metamodels have therefore been successfully employed to assist evolutionary optimization [17,18,19] and the PSO method. For example, Tang et al. [20] proposed a hybrid surrogate model formed from a quadratic polynomial and an RBF model to develop a surrogate-based PSO method and applied it to mostly low-dimensional test problems and engineering design problems. Regis [21] used RBF surrogates in PSO to identify the most promising trial position surrounding the current overall best position when solving a 36-dimensional bioremediation problem. However, the inherent nature of the PSO method leads to an extremely large number of function evaluations, which may be prohibitive in simulation-based optimization.
In this paper, an adaptively integrated swarm intelligence-metamodelling technique (ASIM) is proposed, which combines multi-level search and model management throughout the optimization process. It steers the solution of the approximate model towards the global optimum within a smaller number of analysis iterations and thus achieves a higher level of efficiency than conventional approximation methods. Meanwhile, a model management scheme has been established that integrates an adaptive trust-region strategy with a space reduction scheme implemented in the multipoint approximation method (MAM) framework. This model management facilitates the optimization process and improves robustness during iterations; in particular, it allows a small perturbation to be assigned to the current position when the optimal position is not updated. The developed ASIM makes full use of the global-exploration potential of PSO and the local-exploitation advantage of MAM to seek the global optimal solution efficiently and accurately at low computational cost. In comparison with the results of other algorithms, such as the conventional MAM, particle swarm optimization [22], hybrid cuckoo search [23], and the water cycle algorithm [24], the superiority of ASIM in terms of computational expense and accuracy is demonstrated through three case studies.

2. Brief Review of the Multipoint Approximation Method (MAM)

The MAM [25,26] was proposed to tackle black-box optimization problems and has undergone continuous development in recent years: Polynkin [27] enhanced MAM to solve large-scale optimization problems, one of which is the optimization of transonic axial compressor rotor blades; Liu [28] implemented discrete capability in MAM; and, recently, Caloni [29] applied MAM to a multi-objective problem. Based on response surface methodology, the multipoint approximation method (MAM) aims to construct mid-range approximations and is suitable for complex optimization problems owing to (1) producing better-quality approximations that are sufficiently accurate in the current trust region and (2) its affordability in terms of the computational cost required to build them. These approximation functions have a relatively small number ($N + 1$, where $N$ is the number of design variables) of regression coefficients to be determined, and the corresponding least squares problem can be solved easily [25].
In general, a black-box optimization problem can be formulated as follows:
$$\begin{aligned} \min\ & f(\mathbf{x}) \\ \text{s.t.}\ & g_j(\mathbf{x}) \le 1 \quad (j = 1, \ldots, M) \\ & A_i \le x_i \le B_i \quad (i = 1, \ldots, N) \end{aligned} \tag{1}$$
where $\mathbf{x}$ refers to the vector of design variables; $A_i$ and $B_i$ are the given lower and upper bounds of the design variable $x_i$; $N$ is the total number of design variables; $f(\mathbf{x})$ is the objective function; $g_j(\mathbf{x})$ is the $j$th constraint function; and $M$ is the total number of constraint functions.
In order to represent the detailed physical model using the response functions and to reduce the number of calls for the response function evaluations, the MAM replaces the optimization problem with a sequence of approximate optimization problems as follows:
$$\begin{aligned} \min\ & \tilde{f}^k(\mathbf{x}) \\ \text{s.t.}\ & \tilde{g}_j^k(\mathbf{x}) \le 1 \quad (j = 1, \ldots, M) \\ & A_i \le A_i^k \le x_i \le B_i^k \le B_i \quad (i = 1, \ldots, N) \end{aligned} \tag{2}$$
where $\tilde{f}^k(\mathbf{x})$ and $\tilde{g}_j^k(\mathbf{x})$ are the functions that approximate $f(\mathbf{x})$ and $g_j(\mathbf{x})$ defined in Equation (1); $A_i^k$ and $B_i^k$ are the side constraints of the trust subregion; and $k$ is the iteration number.
Compared with the time spent evaluating the actual response functions $g_j(\mathbf{x})$, the selected form of the approximate functions $\tilde{g}_j^k(\mathbf{x})\ (j = 0, \ldots, M)$ remarkably reduces the computational expense while remaining adequately accurate in the current trust region. This is achieved by appropriate planning of numerical experiments and use of the trust region defined by the side constraints $A_i^k$ and $B_i^k$. Once the current suboptimization problem is solved, the suboptimal solution becomes the starting point for the next step. Meanwhile, the move limits are modified and the trust region is resized [25,26]. Based on this information, the metamodel is updated in the next iteration until eventually the optimum is reached.
The process of metamodel building in MAM can be described as an assembly of multiple surrogates into one single metamodel using linear regression. Therefore, there are two stages of metamodel building.
In the first stage, the parameters $\mathbf{a}_l$ of an individual surrogate $\varphi_l$ are determined by solving a weighted least squares problem involving $n$ fitting points:
$$\min \sum_{i=1}^{n} \omega_i \left[ F(\mathbf{x}_i) - \varphi_l(\mathbf{x}_i, \mathbf{a}_l) \right]^2 \tag{3}$$
where $\omega_i$ denotes the weighting parameters and $F$ is the original function to be approximated. The selection of the weighting factors $\omega_i$ should reflect the quality of the objective function and the location of a design point with respect to the border between the feasible and infeasible design subspaces [30]; they are defined as
$$w_i = w_i^o \times w_i^c \tag{4}$$

$$w_i^o = \left[ \frac{f(\mathbf{x}_k)}{f(\mathbf{x}_i)} \right]^{\beta} \tag{5}$$

$$w_i^c = \begin{cases} 1 & \text{for the objective } f(\mathbf{x}) \\ \left[ g(\mathbf{x}) + 1 \right]^{\alpha} & \text{if } g(\mathbf{x}) \le 0 \\ \left[ g(\mathbf{x}) + 1 \right]^{-\alpha} & \text{if } g(\mathbf{x}) > 0 \end{cases} \tag{6}$$
where $\alpha, \beta > 0$ are user-defined constants (here, $\alpha = 4$ and $\beta = 1.5$ are used); $\mathbf{x}_k$ is the starting point in the $k$th iteration; and $\mathbf{x}_i$ is the $i$th design point among the fitting points. With this definition, a point with a larger objective function value has a smaller weighting component $w_i^o$. For a constraint function $g(\mathbf{x})$, a point that is closer to the boundary of the feasible region of $g(\mathbf{x})$ is given a larger weighting component $w_i^c$. For building a surrogate of the objective function $f(\mathbf{x})$, the weighting coefficient $w_i$ only considers the component $w_i^o$; for a surrogate of the constraint function $g(\mathbf{x})$, the weighting coefficient $w_i$ also takes the constraint component $w_i^c$ into consideration.
It should be noted here that, in MAM, both the objective and constraint functions are approximated via Equation (3). The simplest case of $\varphi_l$ is the first-order polynomial metamodel; more complex ones are intrinsically linear functions (ILFs), which have been successfully applied to various design optimization problems [25,28,29]. ILFs are nonlinear but can be reduced to linear form by simple transformations. Currently, five functions are considered in the regressor pool $\{\varphi_l(\mathbf{x})\}$:
$$\begin{aligned} \varphi_1(\mathbf{x}) &= a_0 + \sum_{i=1}^{N} a_i x_i \\ \varphi_2(\mathbf{x}) &= a_0 + \sum_{i=1}^{N} a_i x_i^2 \\ \varphi_3(\mathbf{x}) &= a_0 + \sum_{i=1}^{N} a_i / x_i \\ \varphi_4(\mathbf{x}) &= a_0 + \sum_{i=1}^{N} a_i / x_i^2 \\ \varphi_5(\mathbf{x}) &= a_0 \prod_{i=1}^{N} x_i^{a_i} \end{aligned} \tag{7}$$
In the second stage, for each function ($f(\mathbf{x})$ or $g(\mathbf{x})$), the different surrogates are assembled into one metamodel:
$$\tilde{F}(\mathbf{x}) = \sum_{l=1}^{n_l} b_l \varphi_l(\mathbf{x}) \tag{8}$$
where $n_l$ is the number of surrogates in the model bank $\{\varphi_l(\mathbf{x})\}$ and $b_l$ is the regression coefficient corresponding to each surrogate $\varphi_l(\mathbf{x})$, which reflects the quality of the individual $\varphi_l(\mathbf{x})$ on the set of validation points. Similar to Equation (3), $b_l$ can be determined in the same manner:
$$\min \sum_{i=1}^{n} \omega_i \left[ F(\mathbf{x}_i) - \tilde{F}(\mathbf{x}_i, \mathbf{b}) \right]^2 \tag{9}$$
It should be noted that, in the process of metamodel building, the design of experiments (DOE) is fixed, i.e., $\omega_i$ remains unchanged across the two stages.
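To make the two-stage construction concrete, the following Python sketch fits the four intrinsically linear regressors $\varphi_1$–$\varphi_4$ of Equation (7) by weighted least squares and then blends them into one metamodel as in Equations (8) and (9). This is an illustrative reading of the procedure, not the authors' implementation; all names are hypothetical, and $\varphi_5$ would require a logarithmic transformation first.

```python
import numpy as np

# Illustrative two-stage metamodel assembly (Eqs. (3)-(9)).
# X: (n, N) fitting points; F: (n,) responses; w: (n,) weights from Eqs. (4)-(6).
BASES = [lambda X: X,            # phi_1: linear in x_i
         lambda X: X**2,         # phi_2: quadratic in x_i
         lambda X: 1.0 / X,      # phi_3: reciprocal
         lambda X: 1.0 / X**2]   # phi_4: reciprocal squared

def fit_stage1(X, F, w):
    """Stage 1: weighted least squares for each regressor's coefficients a_l."""
    sw = np.sqrt(w)
    coeffs = []
    for basis in BASES:
        A = np.column_stack([np.ones(len(X)), basis(X)])  # columns [1, h(x_i)]
        a, *_ = np.linalg.lstsq(sw[:, None] * A, sw * F, rcond=None)
        coeffs.append(a)
    return coeffs

def fit_stage2(X, F, w, coeffs):
    """Stage 2: blend the fitted surrogates with coefficients b_l (Eq. (8))."""
    sw = np.sqrt(w)
    preds = np.column_stack(
        [np.column_stack([np.ones(len(X)), basis(X)]) @ a
         for basis, a in zip(BASES, coeffs)])
    b, *_ = np.linalg.lstsq(sw[:, None] * preds, sw * F, rcond=None)
    return lambda x: sum(bl * (a[0] + basis(np.atleast_2d(x)) @ a[1:]).item()
                         for bl, basis, a in zip(b, BASES, coeffs))
```

Because the DOE is fixed, the same weights $w$ are passed to both stages, mirroring the remark above.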
Figure 1 illustrates the main steps of MAM. Note that, once the metamodels for the objective and constraint functions have been built, the constrained optimization subproblem formulated in the trust region (Equation (2)) can be solved by any existing optimizer. In this paper, the sequential quadratic programming (SQP) method [31] is applied to solve the constrained optimization subproblem. Since numerical optimization solvers such as SQP are deterministic, the quality of the obtained solution is highly sensitive to the initial point; in other words, MAM alone cannot perform a global search very well. To address this issue, the ASIM framework in Section 4 is proposed, which integrates the stochastic nature and exploratory search ability of PSO for the global optimal solution.

3. Brief Review of Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO), inspired by swarm behaviors in nature such as fish and bird schooling, was developed by Kennedy and Eberhart [32]. Since then, PSO has attracted a great deal of attention and has developed into a main representative of swarm intelligence. PSO has been applied in many areas, such as image and video analysis, engineering design and scheduling, and classification and data mining [33]. There are at least twenty PSO variants, and hybrid algorithms obtained by combining PSO with other existing algorithms are also becoming increasingly popular [34,35,36].
To integrate PSO with MAM for finding the global optimum, an adaptive multi-level search is proposed in this paper. PSO is employed for the global-level exploration in the first step: a number of particles are first placed in the search space of the optimization problem with initial positions and velocities. The particles then fly over the entire design space, guided not only by the individual and collective knowledge of positions from the global-level search but also by the “local” information of each particle. Here, the “local” information refers to the local-level exploitation in the second step. In the neighborhood of each particle, an adaptive metamodel is constructed using the MAM of Section 2, which replaces the original optimization problem with a sequence of mathematical approximations built from much simpler objective and constraint functions. Hence, the critical information about individual constraint functions is kept, which improves the accuracy of the metamodels. During metamodel building, each particle is endowed with a horizon in its surrounding region and its current position is refined to boost the possibility of finding an optimal position. Eventually, the swarm as a whole, like a flock of birds collectively foraging for food while each bird directly finds the tastiest food within its limited horizon, has the ability to move toward a global optimum.
Each particle in PSO represents a point in the design space of an optimization problem with an associated velocity vector. In each iteration of PSO, the velocity vector is updated using a linear combination of the three terms shown in Equation (10). The first term, called inertia or momentum, reflects a memory of the previous flight direction and prevents the particle from drastically changing direction. The second term, called the cognitive component, describes the tendency of particles to return to previously found best positions. The last term, called the social component, quantifies the group norm or standard that should be attained. In other words, each particle tends to move toward the position of the current global best gbest and the location of its individual best pbest while moving randomly [33]. The aim is to find the global best among all the current best solutions until the objective no longer improves or a certain number of iterations is reached. The standard iteration procedure of PSO is formulated as follows:
$$\begin{aligned} V_i^{t+1} &= \omega V_i^t + \alpha \epsilon_1 \left( \mathbf{pbest}_i^t - \mathbf{x}_i^t \right) + \beta \epsilon_2 \left( \mathbf{gbest}^t - \mathbf{x}_i^t \right) \\ \mathbf{x}_i^{t+1} &= \mathbf{x}_i^t + V_i^{t+1} \end{aligned} \tag{10}$$
where $\omega$ is a parameter called the inertia weight, $t$ is the current iteration number, $\alpha$ and $\beta$ are parameters called acceleration coefficients, and $\epsilon_1$ and $\epsilon_2$ are two uniformly distributed random vectors generated within the interval $[0, 1)$. It has been proven that PSO converges to an optimum if the values of $\omega$, $\alpha$, and $\beta$ are properly chosen, e.g., $\epsilon = \alpha + \beta > 4$ and $\omega = \frac{2}{\epsilon - 2 + \sqrt{\epsilon^2 - 4\epsilon}}$ [37].
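As a concrete illustration, a minimal Python sketch of the update in Equation (10) follows, using the parameter values later listed in Table 2 (the constriction-type setting of [37]); the function name and array layout are illustrative assumptions.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, omega=0.7298, alpha=1.49618, beta=1.49618):
    """One velocity/position update per Eq. (10) for all particles at once.

    x, v, pbest: (n_particles, n_dims) arrays; gbest: (n_dims,) array.
    """
    eps1 = np.random.rand(*x.shape)   # epsilon_1 ~ U[0, 1)
    eps2 = np.random.rand(*x.shape)   # epsilon_2 ~ U[0, 1)
    v_new = omega * v + alpha * eps1 * (pbest - x) + beta * eps2 * (gbest - x)
    return x + v_new, v_new
```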
Although PSO has been used in a variety of industrial applications, the standard PSO suffers from information loss in the penalty function and from high computational cost, especially when solving constrained optimization problems. Therefore, the ASIM framework proposed in the following section takes advantage of PSO’s global search capability and reduces the computational burden by introducing metamodel building, model management, and a trust region strategy.

4. Adaptively Integrated Swarm Intelligence-Metamodelling Framework

4.1. Methodology of the ASIM Framework

In this paper, an adaptively integrated swarm intelligence-metamodelling (ASIM) framework is proposed to perform the search for the optimal solution at two levels.
In the first level of optimization, also known as exploration, a number of particles are initially placed in the search space of the particular optimization problem with respective positions $\mathbf{x}_i^t$ and velocities $V_i^t$. The movement of each particle $i$ is controlled by Equation (10). The final global best solution is obtained only when the objective no longer improves or after a certain number of iterations. However, unlike in conventional PSO, each particle also gains insight within its neighborhood, which forces it to refine its personal best position by exploiting that neighborhood; this is the second level of optimization. In this local-level search, an adaptive metamodel is built by MAM within a trust region surrounding the particle, and the personal best solution $\mathbf{x}_{i,\mathrm{MAM}}$ obtained by MAM is regarded as a local refinement of the position. Following that, the personal and global best positions $\mathbf{pbest}^t$ and $\mathbf{gbest}^t$ are determined and updated until the termination criterion is satisfied. To sum up, the surrogate helps guide the search direction of each particle and assists in refining the current overall best position until the final global best solution is found. Eventually, the swarm as a whole moves close to a global optimum of the objective function. The flowchart of the ASIM framework is depicted in Figure 2, and a conceptual sketch of the outer loop is given below.
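The following Python sketch captures this two-level loop conceptually. It is a simplified reading of Figure 2, not the authors' code: mam_refine stands for the local MAM exploitation of Section 2, fitness for the comparison rules listed next, and pso_step for the Equation (10) update sketched in Section 3; all are passed in as callables.

```python
import numpy as np

def asim(x0, v0, fitness, mam_refine, pso_step, max_iter=30):
    """Conceptual ASIM outer loop: PSO exploration + MAM exploitation."""
    x, v = np.array(x0, float), np.array(v0, float)  # (n_particles, n_dims)
    pbest = x.copy()
    gbest = pbest[np.argmin([fitness(p) for p in pbest])]
    for t in range(max_iter):
        for i in range(len(x)):
            # Local-level exploitation: refine particle i on its metamodel,
            # with the trust region shrunk by (1/2)^t (Section 4.2.3).
            x_mam = mam_refine(x[i], trust_size=0.5 ** t)
            if fitness(x_mam) < fitness(pbest[i]):
                pbest[i] = x_mam
        gbest = pbest[np.argmin([fitness(p) for p in pbest])]
        x, v = pso_step(x, v, pbest, gbest)  # global-level exploration
    return gbest
```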
It is worth noting that there are three rules applied to compare solutions during the optimization process:
1.
Any feasible solution is preferred to any infeasible solution;
2.
Among feasible solutions, the one with a better objective function value is preferred.
3.
Among infeasible solutions, the one having a fitness value with smaller constraint violations is preferred. In the current implementation, the fitness function is defined by
$$\mathrm{Fitness}(\mathbf{x}) = \begin{cases} f(\mathbf{x}) & \text{if } \mathbf{x} \text{ is feasible} \\ f(\mathbf{x}) \times \left[ g_j(\mathbf{x}) \right]^2 & \text{else if } f(\mathbf{x}) \ge 0 \\ f(\mathbf{x}) + \left| f(\mathbf{x}) \right| \times \left[ g_j(\mathbf{x}) - 1 \right]^2 & \text{else if } f(\mathbf{x}) < 0 \end{cases} \tag{11}$$
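A sketch of these rules in Python follows. It assumes the constraints are normalized as $g_j(\mathbf{x}) \le 1$ (Equation (1)) and that the squared per-constraint terms of Equation (11) are accumulated over the violated constraints, which is one plausible reading of the published form; partial application (e.g., functools.partial) yields the single-argument fitness used in the loop sketch above.

```python
import numpy as np

def fitness(x, f, g):
    """Fitness per the three comparison rules and Eq. (11)."""
    gx = np.asarray(g(x))
    if np.all(gx <= 1.0):                    # rules 1 and 2: feasible point
        return f(x)
    fx = f(x)
    if fx >= 0:                              # rule 3: penalize violations
        return fx * float(np.sum(gx[gx > 1.0] ** 2))
    return fx + abs(fx) * float(np.sum((gx[gx > 1.0] - 1.0) ** 2))
```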

4.2. Model Management

4.2.1. Strategy for Particles “Flying out” in PSO

Particles located outside the boundary adjust their positions according to the following formulation determined by the current bounds:

$$x_{i,k} = \begin{cases} a[k] + \gamma \times \left( b[k] - a[k] \right) & \text{if } x_{i,k} \le a[k] \\ b[k] - \gamma \times \left( b[k] - a[k] \right) & \text{if } x_{i,k} \ge b[k] \end{cases} \tag{12}$$
where $x_{i,k}$ denotes the $k$th dimension of $\mathbf{x}_i^t$, $a[k]$ and $b[k]$ are the $k$th dimensional side constraints, and $\gamma$ is a relatively small value randomly generated from the range $(0, 0.1)$. This perturbation forces particles that violate the boundary constraints back into the design space throughout the search process and helps ensure efficiency and accuracy in the local exploitation.
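A direct transcription of this repair rule (Equation (12)) in Python, with illustrative names:

```python
import numpy as np

def repair_position(x, a, b):
    """Re-place out-of-bounds coordinates just inside the active bound."""
    x = np.array(x, dtype=float)
    for k in range(len(x)):
        gamma = 0.1 * np.random.rand()          # gamma ~ U(0, 0.1)
        if x[k] <= a[k]:
            x[k] = a[k] + gamma * (b[k] - a[k])
        elif x[k] >= b[k]:
            x[k] = b[k] - gamma * (b[k] - a[k])
    return x
```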

4.2.2. Modified Trust Region Strategy in MAM

The aim of the trust region strategy in MAM is to control the quality of the constructed metamodel. As the approximation gets better, the trust region is further reduced around the optimal solution. The track of the trust regions also traces the path from the initial starting point to the optimum over the entire search domain. At each iteration, the trust region must be updated, i.e., its new size and location have to be specified. Several indicators are formulated to support the control of the trust region and to facilitate the search process; the basic knowledge about these indicators was introduced in [38].
The first indicator evaluates the quality of the metamodel, focusing on the accuracy of the constraint approximations at the obtained suboptimal point $\mathbf{x}^{k+1}$. It is based on the following equation:

$$E^k = \max_j \left| \frac{\tilde{g}_j\left( \mathbf{x}^{k+1} \right) - g_j\left( \mathbf{x}^{k+1} \right)}{g_j\left( \mathbf{x}^{k+1} \right)} \right| \tag{13}$$
where $\tilde{g}_j(\mathbf{x}^{k+1})$ and $g_j(\mathbf{x}^{k+1})$ are the normalized approximate and true constraints at the suboptimal point $\mathbf{x}^{k+1}$, respectively. In this way, a single maximal error quantity between the explicit approximation and the implicit simulation is defined. The quality of the metamodel is then labeled as “bad”, “reasonable”, or “good” as follows:

$$\text{quality} = \begin{cases} \text{bad} & \text{if } E^k \ge 0.25 \times S^k \\ \text{good} & \text{if } E^k \le 0.01 \times S^k \\ \text{reasonable} & \text{otherwise} \end{cases} \tag{14}$$
where $S^k$ represents the maximum ratio of dimension length between the present trust region and the entire design space, defined by

$$S^k = \max_i \frac{B_i^k - A_i^k}{B_i - A_i} \quad (i = 1, \ldots, d) \tag{15}$$
The second indicator reports the location of the current iterate $\mathbf{x}^{k+1}$ in the present search subregion. For each dimension, if none of the current move limits $(A^k, B^k)$ is active, the solution is regarded as “internal”; otherwise, it is viewed as “external”.
The third and fourth indicators reflect the movement history for the entire optimization process. For this purpose, the angle between the last two move vectors is calculated. The formulation of this measure θ k is given below:
$$\theta^k = \frac{\mathbf{x}^{k+1} - \mathbf{x}^k}{\left\| \mathbf{x}^{k+1} - \mathbf{x}^k \right\|} \cdot \frac{\mathbf{x}^k - \mathbf{x}^{k-1}}{\left\| \mathbf{x}^k - \mathbf{x}^{k-1} \right\|} \tag{16}$$
If $\theta^k > 0$ holds, the movement is denoted as “forward”, while $\theta^k \le 0$ is denoted as moving “backward”. Moreover, if $\theta^k \le 0.3$, the convergence history is labelled as “curved”; otherwise, it is “straight”.
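Reading Equation (16) as the cosine of the angle between the two normalized move vectors, the third and fourth indicators can be sketched as follows (an illustrative interpretation, not the authors' code):

```python
import numpy as np

def move_history(x_new, x_cur, x_old):
    """Classify the move history from the last two move vectors (Eq. (16))."""
    d1, d0 = x_new - x_cur, x_cur - x_old
    theta = d1 @ d0 / (np.linalg.norm(d1) * np.linalg.norm(d0))
    direction = "forward" if theta > 0 else "backward"
    curvature = "curved" if theta <= 0.3 else "straight"
    return theta, direction, curvature
```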
The fifth indicator in MAM, as a termination criterion, is the size of the current search subregion. It can be marked as “small” or “large” according to the quality of the metamodel determined by the first indicator. When the approximations are “bad” and S k 0.0005 , the present search subregion is considered “small”. When the approximations are “reasonable” or “good”, the trust region is denoted as “small” if S k 0.001 .
The sixth indicator is based on the most active constraint. It is considered “close” to the boundary between the feasible and infeasible design space if $g_{max}(\mathbf{x}^{k+1}) \in [-0.1, 0.1]$; otherwise, it is denoted as “far”.
Both reduction and enlargement of the trust region are executed using

$$B_i^{k+1} - A_i^{k+1} = \frac{1}{\tau} \left( B_i^k - A_i^k \right) \quad (i = 1, \ldots, d) \tag{17}$$

where $\tau$ is the resizing parameter.
When the approximations are “bad” and the trust region is “small”, the current trust region is considered too small for any further reduction to achieve reasonable approximations and the process is aborted. When the approximations are “bad” and the trust region is “large”, a reduction in the search region should be applied in order to achieve better approximations. When the approximations are not “bad”, the trust region is “large” and the suboptimal point is not “internal”; a “backward” convergence history means that the iteration point progresses in a direction opposite to the previous move vector. In this situation, the trust region has to be reduced. If the iteration point moves “forward” and the approximations are “good”, the same metamodels are reutilized in the next iteration for the purpose of reducing the computational cost. If the optimization convergence history is labelled as “curved” and the approximations are “reasonable”, the trust region is enlarged as the optimization process moves in the same direction.
A summary of the six indicators and the move limit strategy is presented in Table 1 and Figure 3, respectively. Note that, in Figure 3, some processes are only executed when the indicators carry the same superscript. For example, the process can only output the final optimum when the approximation is “good” (superscript 1) and the current location (2nd indicator) of the solution is within a “small” (5th indicator) trust region. If the quality of the metamodel is “bad” with superscript “3” and the 5th indicator has the value “large”, the 4th indicator is triggered and a move limit is then determined.

4.2.3. Space Reduction Scheme in the ASIM Framework

As optimization proceeds, the particles narrow down their horizon to improve the local search ability. In other words, for each particle involved, the size of the individual trust region reduces from 1.0 by a factor of 2 in each iteration, i.e., ( 1 2 ) t times the size of the initial design space. Although the particles still fly through the whole design space, each individual seems to behave more cleverly and finds the local optimal position more precisely because the metamodel becomes more accurate.
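A sketch of this scheme, assuming (as one plausible choice, since the text does not spell it out) that the shrunken box is centred on the particle's current position and clipped to the global side constraints:

```python
import numpy as np

def trust_bounds(a, b, x, t):
    """Trust region of size (1/2)^t of the design space around particle x."""
    half = 0.5 ** t * (b - a) / 2.0   # half-width of the shrunken region
    lo = np.maximum(a, x - half)      # clip to the global lower bounds
    hi = np.minimum(b, x + half)      # clip to the global upper bounds
    return lo, hi
```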

5. Benchmark Problems

In this section, the parameters used in MAM and the proposed ASIM framework are given in Table 2 for solving complex optimization problems: one benchmark mathematical example and two real-world complex design problems. The MAM parameters (the maximum number of iterations, the number of required sampling points, the size of the initial trust region, and the minimum size of the trust region) are well configured for general optimization tasks, as proposed in our previous work [28]. The PSO parameters (the inertia weight and the acceleration coefficients) are chosen as the values proposed in [37], which ensure convergent behavior of the search process.

5.1. Welded Beam

The design optimization of a welded beam, shown in Figure 4, is an inherently complex and challenging problem with many variables and constraints, for which conventional optimization methods usually fail to find the global optimal solution. Hence, the welded beam design problem is often used to evaluate the performance of optimization methods. To determine the best set of design variables minimizing the total fabrication cost of the structure, the minimum-cost optimization is performed subject to constraints on the shear stress ($\tau$), bending stress ($\sigma$), buckling load ($P_c$), and end deflection ($\delta$). The design variables comprise the thickness of the weld ($x_1$), the length of the welded joint ($x_2$), the width of the beam ($x_3$), and the thickness of the beam ($x_4$). The mathematical formulation of this problem is as follows:
$$\begin{aligned} \min\ & f(\mathbf{x}) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2) \\ \text{s.t.}\ & g_1(\mathbf{x}) = \tau(\mathbf{x}) - \tau_{max} \le 0 \\ & g_2(\mathbf{x}) = \sigma(\mathbf{x}) - \sigma_{max} \le 0 \\ & g_3(\mathbf{x}) = x_1 - x_4 \le 0 \\ & g_4(\mathbf{x}) = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \le 0 \\ & g_5(\mathbf{x}) = 0.125 - x_1 \le 0 \\ & g_6(\mathbf{x}) = \delta(\mathbf{x}) - \delta_{max} \le 0 \\ & g_7(\mathbf{x}) = P - P_c(\mathbf{x}) \le 0 \end{aligned}$$

where

$$\begin{gathered} P = 6000\ \mathrm{lb}, \quad L = 14\ \mathrm{in}, \quad E = 30 \times 10^6\ \mathrm{psi}, \quad G = 12 \times 10^6\ \mathrm{psi}, \\ \tau_{max} = 13{,}600\ \mathrm{psi}, \quad \sigma_{max} = 30{,}000\ \mathrm{psi}, \quad \delta_{max} = 0.25\ \mathrm{in}, \\ \tau' = \frac{P}{\sqrt{2}\, x_1 x_2}, \quad \tau'' = \frac{MR}{J}, \quad M = P\left(L + \frac{x_2}{2}\right), \quad R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}, \\ J = 2\left\{\sqrt{2}\, x_1 x_2 \left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}, \quad \tau(\mathbf{x}) = \sqrt{\tau'^2 + 2\tau'\tau''\frac{x_2}{2R} + \tau''^2}, \\ \sigma(\mathbf{x}) = \frac{6PL}{x_4 x_3^2}, \quad \delta(\mathbf{x}) = \frac{4PL^3}{E x_3^3 x_4}, \quad P_c(\mathbf{x}) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right), \\ 0.1 \le x_1 \le 2, \quad 0.1 \le x_2 \le 10, \quad 0.1 \le x_3 \le 10, \quad 0.1 \le x_4 \le 2 \end{gathered}$$
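For reference, a direct Python transcription of this objective and its constraints, suitable as the black-box $f(\mathbf{x})$, $g_j(\mathbf{x})$ fed to an optimizer (a sketch; the function name is illustrative):

```python
import numpy as np

P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIG_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam(x):
    """Return (cost, [g1..g7]) for the welded beam problem; g_j <= 0 feasible."""
    x1, x2, x3, x4 = x
    f = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14 + x2)
    tau_p = P / (np.sqrt(2) * x1 * x2)                       # tau'
    M = P * (L + x2 / 2)
    R = np.sqrt(x2**2 / 4 + ((x1 + x3) / 2)**2)
    J = 2 * np.sqrt(2) * x1 * x2 * (x2**2 / 12 + ((x1 + x3) / 2)**2)
    tau_pp = M * R / J                                       # tau''
    tau = np.sqrt(tau_p**2 + 2 * tau_p * tau_pp * x2 / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (x4 * x3**2)
    delta = 4 * P * L**3 / (E * x3**3 * x4)
    p_c = (4.013 * E * np.sqrt(x3**2 * x4**6 / 36) / L**2
           * (1 - x3 / (2 * L) * np.sqrt(E / (4 * G))))
    g = [tau - TAU_MAX, sigma - SIG_MAX, x1 - x4,
         0.10471 * x1**2 + 0.04811 * x3 * x4 * (14 + x2) - 5,
         0.125 - x1, delta - DELTA_MAX, P - p_c]
    return f, g
```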
To solve the aforementioned problem, the GA-based method [39], co-evolutionary PSO method (CPSO) [22], ES-based method [40], charged system search (CSS) [41], and colliding bodies optimization (CBO) [42] were used to find the optimal solution.
In Table 3, the optimized design variables and cost obtained by MAM and ASIM are compared with those reported in the literature. The best solutions (1.724852) by MAM and ASIM are more competitive than those obtained by the other methods. Although Kaveh [42] claimed that 1.724663 was a better cost, that solution actually violates the $g_1$ constraint and is therefore infeasible. Based on the statistical results in Table 4, it is concluded that the ASIM technique is very robust and efficient: the standard deviation over different runs is nearly zero ($1.1 \times 10^{-7}$) and the number of function evaluations (NFEs) is remarkably smaller (565) than that required by the other methods except MAM. Both ASIM and MAM demonstrate efficiency in finding the optimal design owing to their accurate approximations and the adaptive trust region strategy in the local-level exploitation; on average, only hundreds of evaluations are required to determine an optimum. It is noted that the enhancement of global exploration by the PSO process in the ASIM framework is reflected in the near-zero standard deviation ($1.1 \times 10^{-7}$), which is approximately four orders of magnitude smaller than the value for MAM (0.0031358). Furthermore, compared with the NFEs (200,000) required by co-evolutionary PSO [22], the accurate surrogates built within the ASIM framework indeed assist each particle in finding a locally refined position and accelerate convergence to the global optimum. In conclusion, ASIM requires less computational cost to find a global optimum, with improved accuracy and great robustness.

5.2. Design of a Tension/Compression Spring

This problem, first described by Belegundu [43], arises from the wide application of vibration-resistant structures in civil engineering. The design objective is to minimize the weight of a tension/compression spring subject to constraints on the minimum deflection ($g_1$), shear stress ($g_2$), and surge frequency ($g_3$), as well as a limit on the outside diameter ($g_4$). As shown in Figure 5, the design variables include the wire diameter $d$, the mean coil diameter $D$, and the number of active coils $N$. The mathematical description of this problem is as follows:
$$\begin{aligned} \min\ & f(N, D, d) = (N + 2) D d^2 \\ \text{s.t.}\ & g_1(\mathbf{x}) = 1 - \frac{D^3 N}{71{,}785\, d^4} \le 0 \\ & g_2(\mathbf{x}) = \frac{4D^2 - Dd}{12{,}566\, (D d^3 - d^4)} + \frac{1}{5108\, d^2} - 1 \le 0 \\ & g_3(\mathbf{x}) = 1 - \frac{140.45\, d}{D^2 N} \le 0 \\ & g_4(\mathbf{x}) = \frac{D + d}{1.5} - 1 \le 0 \\ & \text{where } 0.05 \le d \le 1, \quad 0.25 \le D \le 1.3, \quad 2 \le N \le 15 \end{aligned}$$
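As with the welded beam, the problem can be transcribed directly as a callable black box (a sketch with illustrative names):

```python
def spring(x):
    """Return (weight, [g1..g4]) for the spring problem; g_j <= 0 feasible."""
    d, D, N = x
    f = (N + 2) * D * d**2
    g = [1 - D**3 * N / (71785 * d**4),
         (4 * D**2 - D * d) / (12566 * (D * d**3 - d**4))
             + 1 / (5108 * d**2) - 1,
         1 - 140.45 * d / (D**2 * N),
         (D + d) / 1.5 - 1]
    return f, g
```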
The statistical results obtained by MAM are given in Table 5. Each of the first six rows summarizes 40 independent runs of MAM with randomly generated starting points, and the last row reports the average results of the 6 parallel experiments. The best optimal design, represented by $[d, D, N]$, is [0.051656122, 0.355902943, 11.33791803] with the objective value 0.012666692. Moreover, the “Best” column in Table 5 indicates that MAM cannot achieve a converged, robust solution and falls into local optima when faced with multimodal function optimization: the optimal result ranges from 0.01266 (the best design, in the fourth row) to 0.070 (the worst design, in the third row). Owing to a general deficiency of trajectory-based algorithms, MAM fails to balance the effort between exploration and exploitation and could not find the known optimum 0.0126652.
A more intuitive perspective on the global search mechanism of the ASIM framework is provided in Table 6, which lists the optimal results obtained by 8 independent experiments, each initialized with 5 particles. Figure 6 shows the objectives of the initial designs and the global optima for the 40 tested particles. Even though the initial designs differ remarkably at the start of the optimization process due to the random nature of the statistical tests, the developed ASIM has the capability to eventually find the converged global optimum. It is concluded that the ASIM algorithm achieves a robust solution from random starting points and is not trapped in local optima, thanks to its multi-level search and model management strategies; all 8 independent experiments obtain almost the same global optimum. The best optimal design found by the ASIM framework is [0.051724501, 0.357570887, 11.23912608], with the objective value 0.012665259, in good agreement with the known optimum. Additionally, the global solutions from the 8 independent experiments have been verified as feasible by function evaluations.
Other algorithms recently used to optimize this problem include co-evolutionary particle swarm optimization (CPSO) [22], differential evolution with dynamic stochastic selection (DEDS) [44], hybrid evolutionary algorithm with adaptive constraint-handling techniques (HEAA) [45], league championship algorithm (LCA) [46], water cycle algorithm (WCA) [24], and hybrid cuckoo search (HCS) [23]. A comparison of the optimal solutions by the aforementioned methods is given in Table 7, and the statistical results by ASIM, MAM, and other algorithms are shown in Table 8.
As shown in Table 7, the ASIM framework is able to find the optimal solution (0.0126652), the best available design compared with what the other algorithms achieved. Although LCA [46] found a slightly better solution (0.01266523), the corresponding constraint $g_1(\mathbf{x})$ was violated; therefore, it was not a feasible solution. The same conclusion can be drawn for the results of DEDS [44] and HEAA [45]. Together with the statistical results shown in Table 8, it can be observed that the ASIM method is superior to the other methods in terms of the number of function evaluations and the accuracy throughout the optimization process. The referenced methods used more than 10,000 calls to find the global optimum, while ASIM finds the optimum with about half that number; meanwhile, ASIM reduces the number of simulations by over 28% compared with MAM.
As a general remark on the above comparisons, ASIM shows very competitive performance against eight state-of-the-art optimization methods in finding the global optimal solution in terms of efficiency, quality, and robustness.

5.3. Mathematical Problem G10

This problem was first described in [47] and then was considered one of the benchmark problems at the 2006 IEEE Congress on Evolutionary Computation [48]. In this optimization example, there are eight variables and six inequality constraints (three linear and three nonlinear). The mathematical formulations are shown below.
$$\begin{aligned} \min\ & f(\mathbf{x}) = x_1 + x_2 + x_3 \\ \text{s.t.}\ & g_1(\mathbf{x}) = -1 + 0.0025 (x_4 + x_6) \le 0 \\ & g_2(\mathbf{x}) = -1 + 0.0025 (x_5 + x_7 - x_4) \le 0 \\ & g_3(\mathbf{x}) = -1 + 0.01 (x_8 - x_5) \le 0 \\ & g_4(\mathbf{x}) = -x_1 x_6 + 833.33252\, x_4 + 100\, x_1 - 83{,}333.333 \le 0 \\ & g_5(\mathbf{x}) = -x_2 x_7 + 1250\, x_5 + x_2 x_4 - 1250\, x_4 \le 0 \\ & g_6(\mathbf{x}) = -x_3 x_8 + 1{,}250{,}000 + x_3 x_5 - 2500\, x_5 \le 0 \\ & \text{where } 100 \le x_1 \le 10{,}000, \quad 1000 \le x_i \le 10{,}000\ (i = 2, 3), \quad 10 \le x_i \le 1000\ (i = 4, \ldots, 8) \end{aligned}$$
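Again, a direct transcription as a callable black box (a sketch):

```python
def g10(x):
    """Return (f, [g1..g6]) for problem G10; g_j <= 0 feasible."""
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    f = x1 + x2 + x3
    g = [-1 + 0.0025 * (x4 + x6),
         -1 + 0.0025 * (x5 + x7 - x4),
         -1 + 0.01 * (x8 - x5),
         -x1 * x6 + 833.33252 * x4 + 100 * x1 - 83333.333,
         -x2 * x7 + 1250 * x5 + x2 * x4 - 1250 * x4,
         -x3 * x8 + 1250000 + x3 * x5 - 2500 * x5]
    return f, g
```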
The optimal solutions found by ASIM and MAM are given in Table 9 together with the known optimum. In Table 10, nine independent experiments are reported, each comprising 40 parallel runs of MAM. Although each MAM run is initialized with a random starting point, there is no guarantee that the converged global optimum will be reached. As this challenging example has a very small feasible region (0.0010%), limited runs of MAM may not find a feasible solution at all, and a bad design with a very large fitness value (up to 100,000) is then normally obtained; however, a feasible solution could be achieved within 20,000 function evaluations. Applying the developed ASIM, the capability of the adaptive multi-level search for the global optimum is significantly improved, and the statistical results are shown in Table 11. Using the same parameter settings as in the previous example, the worst solution found by the particles is about 7361, which is only 4.42% higher than the global optimum 7049.248. Meanwhile, all nine independent experiments of ASIM found a decent global optimum, higher than the known optimum by only about $10^{-5}$ in relative terms even in the worst case (number 5 in Table 11). Figure 7 shows how 10 independent runs, initialized with a total of 50 particles, converge to the global optimum with ASIM. Although the initial designs vary dramatically between particles, all particles finally succeed in finding the global optimum. It is concluded that the PSO process applied in ASIM remarkably boosts the exploration capability. Owing to the guidance of personal memory of the best position and social cognition in addition to the stochastic search behavior, ASIM is a robust and efficient algorithm for solving such a challenging problem.
Recently, other algorithms, including evolutionary optimization by approximate ranking and surrogate models (EOAS) [49], constrained optimization via particle swarm optimization (COPSO) [50], the league championship algorithm (LCA) [46], hybrid cuckoo search (HCS) [23], and surrogate-assisted differential evolution (SADE) [51], have also been used to solve this optimization problem. A comparison of the results of ASIM, MAM, and the other algorithms is given in Table 12. Although all of the listed methods are very competitive and have the ability to find the global or a near-global optimum, ASIM demonstrates superiority in terms of computational efficiency. The evolutionary algorithms usually need over 150,000 simulations to find the global optimum, while ASIM reduces the number of function evaluations by more than 80%, to 19,522. Furthermore, the optimum (7049.2481) achieved by ASIM is in good agreement with the global optimum (7049.2480). Although HCS [23] reported a better best value (7049.237), the fourth constraint is slightly violated there, so it is not a feasible design. In summary, ASIM outperforms the other methods in seeking the global optimal solutions of complex black-box optimization problems in terms of efficiency and accuracy.

6. Conclusions

In this paper, an adaptively integrated swarm intelligence-metamodelling (ASIM) technique enabling adaptive multi-level search for the global optimal solution was proposed for solving expensive and complex black-box constrained optimization problems. In the first step, the adaptive swarm-intelligence algorithm carries out global exploration for a near-optimal solution. In the second step, the metamodel-based multipoint approximation method (MAM) performs local exploitation. Essentially, each particle’s current position in ASIM gains local refinement through optimization on metamodels built around its neighborhood and tends to move towards the global best position according to swarm intelligence. Eventually, the swarm as a whole, like a flock of birds collectively foraging for food while each bird directly finds the tastiest food within its limited horizon, moves close to a globally optimal position. One mathematical problem and two engineering optimization problems were studied in detail using the ASIM framework. Comparison of the results obtained by ASIM, MAM, and other state-of-the-art algorithms demonstrated that ASIM can tackle expensive constrained black-box optimization problems with remarkably less computational effort, higher accuracy, and stronger robustness. The adaptive multi-level search ability of ASIM indeed compensates for the local search deficiency and the sensitivity to the starting point observed in MAM. Consequently, the ASIM technique achieves a good balance between exploration and exploitation. Moreover, ASIM provides valuable insight into the development of nature-inspired metaheuristic algorithms for solving nonlinear optimization problems at lower computational cost throughout the simulation-based optimization process.

Author Contributions

G.D. contributed to drafting the paper and example validation; C.L. contributed to algorithm development and editing; D.L. contributed to designing and planning the study, approved the final version, and agreed to be accountable for the accuracy and integrity; X.M. contributed to editing, analysing, and commenting on the first version of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Short Term Recruitment Program of Professional Scholars of NLAA in Beihang University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hickernell, F.J.; Yuan, Y.X. A Simple Multistart Algorithm for Global Optimization. OR Trans. 1997, 1, 1–12.
2. Rios, L.M.; Sahinidis, N.V. Derivative-free optimization: A review of algorithms and comparison of software implementations. J. Glob. Optim. 2013, 56, 1247–1293.
3. Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 1986, 13, 533–549.
4. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680.
5. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1975; p. 183.
6. Borowska, B. Genetic Learning Particle Swarm Optimization with Interlaced Ring Topology. In Computational Science—ICCS 2020; Krzhizhanovskaya, V.V., Závodszky, G., Lees, M.H., Dongarra, J.J., Sloot, P.M.A., Brissos, S., Teixeira, J., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 136–184.
7. Wang, S.; Liu, G.; Gao, M.; Cao, S.; Guo, A.; Wang, J. Heterogeneous Comprehensive Learning and Dynamic Multi-Swarm Particle Swarm Optimizer with Two Mutation Operators; Elsevier Inc.: Amsterdam, The Netherlands, 2020; Volume 540, pp. 175–201.
8. Ahmad, R.; Choubey, N.S. Review on Image Enhancement Techniques Using Biologically Inspired Artificial Bee Colony Algorithms and Its Variants. In Biologically Rationalized Computing Techniques for Image Processing Applications; Hemanth, J., Emilia, B.V., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 249–271.
9. Acevedo, H.G.S.; Escobar, C.M.; Gonzalez-Estrada, O.A. Damage detection in a unidimensional truss using the firefly optimization algorithm and finite elements. arXiv 2018, arXiv:1706.04449.
10. Mortazavi, A.; Togan, V. Sizing and layout design of truss structures under dynamic and static constraints with an integrated particle swarm optimization algorithm. Appl. Soft Comput. 2016, 51, 239–252.
11. Ong, Y.S.; Nair, P.B.; Keane, A.J.; Wong, K.W. Surrogate-Assisted Evolutionary Optimization Frameworks for High-Fidelity Engineering Design Problems. In Knowledge Incorporation in Evolutionary Computation; Yaochu, J., Ed.; Springer: Berlin, Germany, 2005; pp. 307–331.
12. Kleijnen, J.P. Kriging metamodeling in simulation: A review. Eur. J. Oper. Res. 2009, 192, 707–716.
13. Myers, R.H.; Montgomery, D.T.; Vining, G.G.; Borror, C.M.; Kowalski, S.M. Response Surface Methodology: A Retrospective and Literature Survey. J. Qual. Technol. 2004, 36, 53–78.
14. Dyn, N.; Levin, D.; Rippa, S. Numerical procedures for surface fitting of scattered data by radial functions. SIAM J. Sci. Stat. Comput. 1986, 7, 639–659.
15. Gerrard, C.E.; McCall, J.; Coghill, G.M.; Macleod, C. Exploring aspects of cell intelligence with artificial reaction networks. Soft Comput. 2014, 18, 1899–1912.
16. Dong, H.; Song, B.; Dong, Z.; Wang, P. SCGOSR: Surrogate-based constrained global optimization using space reduction. Appl. Soft Comput. 2018, 65, 462–477.
17. Zhou, Z.; Ong, Y.S.; Nair, P.B.; Keane, A.J.; Lum, K.Y. Combining Global and Local Surrogate Models to Accelerate Evolutionary Optimization. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2007, 37, 66–76.
18. Regis, R.G. Evolutionary Programming for High-Dimensional Constrained Expensive Black-Box Optimization Using Radial Basis Functions. IEEE Trans. Evol. Comput. 2014, 18, 326–347.
19. Bouhlel, M.A.; Bartoli, N.; Regis, R.G.; Morlier, J.; Otsmane, A. Efficient global optimization for high-dimensional constrained problems by using the Kriging models combined with the partial least squares method. Eng. Optim. 2018, 50, 2038–2053.
20. Tang, Y.; Chen, J.; Wei, J. A surrogate-based particle swarm optimization algorithm for solving optimization problems with expensive black box functions. Eng. Optim. 2013, 45, 557–576.
21. Regis, R.G. Particle swarm with radial basis function surrogates for expensive black-box optimization. J. Comput. Sci. 2014, 5, 12–23.
22. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99.
23. Long, W.; Liang, X.; Huang, Y.; Chen, Y. An effective hybrid cuckoo search algorithm for constrained global optimization. Neural Comput. Appl. 2014, 25, 911–926.
24. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110–111, 151–166.
25. Toropov, V.V.; Filatov, A.A.; Polynkin, A.A. Multiparameter structural optimization using FEM and multipoint explicit approximations. Struct. Optim. 1993, 6, 7–14.
26. Keulen, F.V.; Toropov, V.V. New Developments in Structural Optimization Using Adaptive Mesh Refinement and Multipoint Approximations. Eng. Optim. 1997, 29, 217–234.
27. Polynkin, A.; Toropov, V.V. Mid-range metamodel assembly building based on linear regression for large scale optimization problems. Struct. Multidiscip. Optim. 2012, 45, 515–527.
28. Liu, D.; Toropov, V.V. Implementation of Discrete Capability into the Enhanced Multipoint Approximation Method for Solving Mixed Integer-Continuous Optimization Problems. Int. J. Comput. Methods Eng. Sci. Mech. 2016, 17.
29. Caloni, S.; Shahpar, S.; Toropov, V.V. Multi-Disciplinary Design Optimisation of the Cooled Squealer Tip for High Pressure Turbines. Aerospace 2018, 5, 116.
30. Toropov, V.V. Simulation approach to structural optimization. Struct. Optim. 1989, 1, 37–46.
31. Kraft, D. A Software Package for Sequential Quadratic Programming; Technical Report; Institut fuer Dynamik der Flugsysteme: Oberpfaffenhofen, Germany, 1988.
32. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
33. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57.
34. Zhan, Z.; Zhang, J.; Li, Y.; Chung, H.H. Adaptive Particle Swarm Optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2009, 39, 1362–1381.
35. Garg, H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 2016, 274, 292–305.
36. Guo, D.; Jin, Y.; Ding, J.; Chai, T. Heterogeneous Ensemble-Based Infill Criterion for Evolutionary Multiobjective Optimization of Expensive Problems. IEEE Trans. Cybern. 2019, 49, 1012–1025.
37. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73.
38. Toropov, V.V.; van Keulen, F.; Markine, V.; Alvarez, L. Multipoint approximations based on response surface fitting: A summary of recent developments. In Proceedings of the 1st ASMO UK/ISSMO Conference on Engineering Design Optimization, Ilkley, UK, 8–9 July 1999; pp. 371–381.
39. Coello Coello, C.A.; Mezura Montes, E. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv. Eng. Inform. 2002, 16, 193–203.
40. Mezura-Montes, E.; Coello Coello, C.A. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int. J. Gen. Syst. 2008, 37, 443–473.
41. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289.
42. Kaveh, A.; Mahdavi, V.R. Colliding bodies optimization: A novel meta-heuristic method. Comput. Struct. 2014, 139, 18–27.
43. Belegundu, A.D. A Study of Mathematical Programming Methods for Structural Optimization. Ph.D. Thesis, University of Iowa, Iowa City, IA, USA, 1982.
44. Zhang, M.; Luo, W.; Wang, X. Differential evolution with dynamic stochastic selection for constrained optimization. Inf. Sci. 2008, 178, 3043–3074.
45. Wang, Y.; Cai, Z.; Zhou, Y.; Fan, Z. Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique. Struct. Multidiscip. Optim. 2009, 37, 395–413.
46. Kashan, A.H. An efficient algorithm for constrained global optimization and application to mechanical engineering design: League championship algorithm (LCA). Comput. Aided Des. 2011, 43, 1769–1792.
47. Hock, W.; Schittkowski, K. Test Examples for Nonlinear Programming Codes; Springer: Berlin/Heidelberg, Germany, 1981.
48. Liang, J.; Runarsson, T.; Mezura-Montes, E.; Clerc, M.; Suganthan, P.; Coello, C.A.C.; Deb, K. Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-Parameter Optimization; Nanyang Technological University: Singapore, 2006; Volume 41.
49. Runarsson, T.P. Constrained Evolutionary Optimization by Approximate Ranking and Surrogate Models. In Parallel Problem Solving from Nature—PPSN VIII; Yao, X., Burke, E.K., Lozano, J.A., Smith, J., Merelo-Guervós, J.J., Bullinaria, J.A., Rowe, J.E., Tiňo, P., Kabán, A., Schwefel, H.P., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 401–410.
50. Aguirre, A.H.; Zavala, A.E.M.; Diharce, E.V.; Rionda, S.B. COPSO: Constrained Optimization via PSO Algorithm; Technical Report; Center for Research in Mathematics, CIMAT: Guanajuato, Mexico, 2007.
51. Garcia, R.d.P.; de Lima, B.S.L.P.; Lemonge, A.C.D.C. A Surrogate Assisted Differential Evolution to Solve Constrained Optimization Problems. In Proceedings of the 2017 IEEE Latin American Conference on Computational Intelligence (LA-CCI), Arequipa, Peru, 8–10 November 2017; pp. 1–6.
Figure 1. Flow chart of the multipoint approximation method (MAM).
Figure 2. Flow chart of ASIM Framework.
Figure 3. Overview of trust region strategy in MAM.
Figure 4. Schematic of the welded beam structure with indication of design variables.
Figure 5. Schematic of the tension/compression spring.
Figure 6. First and final fitness values in ASIM for solving the tension/compression spring.
Figure 7. First and final output fitness values for G10 in a hybrid optimization framework.
Table 1. Six indicators in MAM.

| Indicator | Meaning | Possible Values |
|---|---|---|
| 1st | Quality of the metamodel approximation | Good / Reasonable / Bad |
| 2nd | Location of the suboptimal point $\mathbf{x}^{k+1}$ with respect to the trust region | Boundary / Internal / External |
| 3rd and 4th | Angle between the last two move vectors | Backward ($\pi/2 \le \theta \le 3\pi/2$) / Parallel / Forward ($-\pi/2 \le \theta \le \pi/2$) / Curved |
| 5th | Termination criterion: size of the current region | Small / Large |
| 6th | Value of the most active constraint | Close to the boundary / Far from the boundary |
Table 2. Default parameters for MAM and ASIM.

| Method | MI a | NOP b | SIR c | SMR d | ω e | α f | β f |
|---|---|---|---|---|---|---|---|
| MAM | 30 | n + 5 | 0.25 | 0.1 | N.A. | N.A. | N.A. |
| ASIM | 30 | n + 5 | 0.25 | 0.1 | 0.7298 | 1.49618 | 1.49618 |

a The maximum number of iterations. b The number of required sampling points. c The size of the initial trust region. d The minimum size of the trust region. e The inertia weight in PSO. f The acceleration coefficients in PSO.
Table 3. Comparison of present optimized designs with the literature for welded beams.

| Methods | $x_1$ (h) | $x_2$ (l) | $x_3$ (t) | $x_4$ (b) | Cost |
|---|---|---|---|---|---|
| GA-based [39] | 0.205986 | 3.471328 | 9.020224 | 0.20648 | 1.728226 |
| CPSO [22] | 0.202369 | 3.544214 | 9.04821 | 0.205723 | 1.728024 |
| ES-based [40] | 0.199742 | 3.612060 | 9.037500 | 0.206082 | 1.737300 |
| CSS [41] | 0.20582 | 3.468109 | 9.038024 | 0.205723 | 1.724866 |
| CBO [42] | 0.205722 | 3.47041 | 9.037276 | 0.205735 | 1.724663 |
| MAM | 0.2057296 | 3.4704893 | 9.0366242 | 0.2057297 | 1.724852 |
| ASIM | 0.2057296 | 3.4704887 | 9.0366239 | 0.2057296 | 1.724852 |
Table 4. Statistical results from different optimization methods for the welded beam design problem.

| Methods | Best | Average | Worst | S.D. | NFEs |
|---|---|---|---|---|---|
| GA-based [39] | 1.728226 | 1.792654 | 1.993408 | 0.074713 | 80,000 |
| CPSO [22] | 1.728024 | 1.748831 | 1.782143 | 0.012926 | 200,000 |
| ES-based [40] | 1.737300 | 1.813290 | 1.994651 | 0.070500 | 25,000 |
| CSS [41] | 1.724866 | 1.739654 | 1.759479 | 0.008064 | 4000 |
| CBO [42] | 1.724662 | 1.725707 | 1.725059 | 0.0002437 | 4000 |
| MAM | 1.724852 | 1.725563 | 1.739605 | 0.0031358 | 122 |
| ASIM | 1.724852 | 1.724852 | 1.724852 | $1.1 \times 10^{-7}$ | 565 |
Table 5. Statistical results for the tension/compression spring problem by MAM.

| Number | Worst | Mean | Best | S.D. | NFEs |
|---|---|---|---|---|---|
| 1 | 0.032839737 | 0.015057587 | 0.0126692 | 0.004246608 | 8041 |
| 2 | 0.046478999 | 0.01537479 | 0.012677425 | 0.005275199 | 8536 |
| 3 | 0.070551755 | 0.015521846 | 0.012680762 | 0.009064574 | 7483 |
| 4 | 0.053871312 | 0.016530777 | 0.012666692 | 0.00857695 | 7483 |
| 5 | 0.030829567 | 0.014687079 | 0.012733211 | 0.003455907 | 7536 |
| 6 | 0.017557055 | 0.014067046 | 0.012667273 | 0.001247161 | 8149 |
| Average | 0.012733211 | 0.012682427 | 0.012666692 | $2.55305 \times 10^{-5}$ | 7871 |
Table 6. Statistical results for the tension/compression spring problem by adaptively integrated swarm intelligence-metamodelling (ASIM).

| Number | Worst | Mean | Best | S.D. | NFEs |
|---|---|---|---|---|---|
| 1 | 0.012707419 | 0.01268076 | 0.012669372 | $1.53792 \times 10^{-5}$ | 4891 |
| 2 | 0.015076822 | 0.013158868 | 0.012665512 | 0.001072278 | 5719 |
| 3 | 0.012734131 | 0.012681909 | 0.012665469 | $2.94215 \times 10^{-5}$ | 5161 |
| 4 | 0.013151181 | 0.012797596 | 0.012666127 | 0.000201587 | 5233 |
| 5 | 0.012674725 | 0.012671127 | 0.012665294 | $3.80784 \times 10^{-6}$ | 4882 |
| 6 | 0.012962267 | 0.012734387 | 0.012665259 | 0.000128041 | 5337 |
| 7 | 0.012787169 | 0.012679022 | 0.012669651 | $1.17008 \times 10^{-5}$ | 4702 |
| 8 | 0.012780362 | 0.01269988 | 0.012665634 | $4.68624 \times 10^{-5}$ | 5170 |
| Average | 0.012669651 | 0.01266654 | 0.012665259 | $1.85492 \times 10^{-6}$ | 5141 |
Table 7. Comparison of the best solutions obtained by various studies for the tension/compression spring design optimization problem.

| Name | CPSO [22] | DEDS [44] | HEAA [45] | LCA [46] | WCA [24] | HCS [23] | MAM | ASIM |
|---|---|---|---|---|---|---|---|---|
| $x_1$ | 0.051728 | 0.051689 | 0.051689 | 0.051689 | 0.051680 | 0.051689 | 0.051656 | 0.051724 |
| $x_2$ | 0.357644 | 0.356717 | 0.356729 | 0.356718 | 0.356522 | 0.356718 | 0.355902 | 0.357570 |
| $x_3$ | 11.244543 | 11.288965 | 11.288293 | 11.28896 | 11.300410 | 11.28896 | 11.33791 | 11.239126 |
| $g_1(\mathbf{x})$ | $-8.25 \times 10^{-4}$ | $1.45 \times 10^{-9}$ | $3.96 \times 10^{-10}$ | N.A. / $2.00 \times 10^{-15}$ | $-1.65 \times 10^{-13}$ | $-6.41 \times 10^{-6}$ | $-1.64 \times 10^{-5}$ | $-1.13 \times 10^{-7}$ |
| $g_2(\mathbf{x})$ | $-2.52 \times 10^{-5}$ | $1.19 \times 10^{-9}$ | $3.59 \times 10^{-10}$ | N.A. / $2.22 \times 10^{-15}$ | $-7.9 \times 10^{-14}$ | $-3.90 \times 10^{-6}$ | $-5.16 \times 10^{-5}$ | $-1.05 \times 10^{-7}$ |
| $g_3(\mathbf{x})$ | −4.051306 | −4.053785 | −4.053808 | N.A. / −4.053786 | −4.053399 | −4.053775 | −4.051810 | −4.055466 |
| $g_4(\mathbf{x})$ | −0.727085 | −0.727728 | −0.727720 | N.A. / −0.727728 | −0.727864 | −0.727729 | −0.728293 | −0.727136 |
| $f(\mathbf{x})$ | 0.012674 | 0.012665 | 0.012665 | 0.01266523 | 0.012665 | 0.0126652 | 0.0126667 | 0.0126652 |
Table 8. Comparison of statistical results given by different algorithms for the tension/compression spring design optimization problem.

| Methods | Worst | Mean | Best | S.D. | NFEs |
|---|---|---|---|---|---|
| CPSO [22] | 0.012924 | 0.012730 | 0.012674 | $5.20 \times 10^{-4}$ | 240,000 |
| DEDS [44] | 0.012738 | 0.012669 | 0.012665 | $1.3 \times 10^{-5}$ | 24,000 |
| HEAA [45] | 0.012665 | 0.012665 | 0.012665 | $1.4 \times 10^{-9}$ | 24,000 |
| LCA [46] | 0.01266667 | 0.01266541 | 0.01266523 | $3.88 \times 10^{-7}$ | 15,000 |
| WCA [24] | 0.012952 | 0.012746 | 0.012665 | $8.06 \times 10^{-5}$ | 11,750 |
| HCS [23] | 0.0126764 | 0.0126683 | 0.0126652 | $5.37 \times 10^{-7}$ | 150,000 |
| MAM | 0.012733211 | 0.012682427 | 0.012666692 | $2.55305 \times 10^{-5}$ | 7871 |
| ASIM | 0.012669651 | 0.01266654 | 0.012665259 | $1.85492 \times 10^{-6}$ | 5141 |
Table 9. Optimal solutions of G10 found by ASIM and MAM.

| Description | Solution $[x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8]$ | Objective Value |
|---|---|---|
| Known optimum [48] | [579.3066850, 1359.9706780, 5109.970657, 182.0176996, 295.6011737, 217.98230036, 286.4165259, 395.6011737] | 7049.2480 |
| ASIM | [579.0697378, 1360.029849, 5110.148583, 181.9979046, 295.5940579, 218.0020914, 286.4038427, 395.5940579] | 7049.2481 |
| MAM | [579.2439615, 1360.814966, 5109.189094, 182.0124631, 295.6324343, 217.9875329, 286.380036, 395.6324333] | 7049.2499 |
Table 10. Statistical results for G10 by MAM.

| Number | Worst | Mean | Best | S.D. | NFEs |
|---|---|---|---|---|---|
| 1 | 142,392.7156 | 17,882.82055 | 7069.390888 | 30,251.49815 | 18,436 |
| 2 | 68,065.07371 | 11,619.11583 | 7049.296446 | 11,320.46339 | 19,388 |
| 3 | 43,458.18348 | 10,426.99053 | 7049.249948 | 6247.162418 | 19,584 |
| 4 | 76,953.38912 | 12,669.27407 | 7052.442664 | 12,331.32358 | 18,478 |
| 5 | 53,761.55465 | 11,938.01326 | 7060.468503 | 8513.607171 | 19,122 |
| 6 | 38,601.51929 | 11,216.42827 | 7049.304236 | 5669.873504 | 17,274 |
| 7 | 133,020.3445 | 12,395.33809 | 7062.698763 | 19,714.98684 | 19,640 |
| 8 | 50,195.68872 | 12,527.61721 | 7061.831868 | 10,079.5634 | 19,668 |
| 9 | 86,553.78422 | 12,382.57366 | 7053.509331 | 13,257.22511 | 20,270 |
| Average | 7069.390888 | 7056.46585 | 7049.249948 | 7.334903947 | 19,095 |
Table 11. Statistical results for G10 by ASIM.

| Number | Worst | Mean | Best | S.D. | NFEs |
|---|---|---|---|---|---|
| 1 | 7071.746167 | 7054.064714 | 7049.24996 | 9.8992502 | 19,374 |
| 2 | 7058.554639 | 7051.206052 | 7049.248851 | 4.112416516 | 20,452 |
| 3 | 7151.048877 | 7070.606275 | 7049.248909 | 45.01472868 | 19,612 |
| 4 | 7361.05089 | 7112.256788 | 7049.275307 | 139.0818573 | 18,450 |
| 5 | 7063.107951 | 7053.22295 | 7049.318392 | 5.982378418 | 19,094 |
| 6 | 7049.802007 | 7049.418117 | 7049.248849 | 0.23813293 | 19,318 |
| 7 | 7361.648467 | 7111.850254 | 7049.248177 | 139.6416717 | 20,102 |
| 8 | 7206.603344 | 7081.369993 | 7049.26296 | 70.01358744 | 19,780 |
| 9 | 7052.697369 | 7049.999679 | 7049.251224 | 1.512797685 | 19,962 |
| 10 | 7105.192042 | 7060.643287 | 7049.261412 | 24.90483007 | 19,794 |
| Average | 7049.318392 | 7049.262676 | 7049.248177 | 0.024517331 | 19,522 |
Table 12. Statistical features of the results obtained by various algorithms on G10.

| Methods | Worst | Mean | Best | S.D. | NFEs |
|---|---|---|---|---|---|
| EOAS [49] | 7258.540 | 7082.227 | 7049.404 | 42.0 | 304,066 |
| COPSO [50] | 7049.668593 | 7049.278821 | 7049.248871 | N.A. | 240,000 |
| LCA [46] | 7049.2482816 | 7049.2480542 | 7049.2480206 | $5.80 \times 10^{-5}$ | 225,000 |
| HCS [23] | 7250.957 | 7049.668 | 7049.237 | 86.5 | 150,000 |
| SADE [51] | N.A. | 7278.785 | 7049.249 | N.A. | 500,000 |
| MAM | 7069.390888 | 7056.46585 | 7049.249948 | 7.334903947 | 19,095 |
| ASIM | 7049.318392 | 7049.262676 | 7049.248177 | 0.024517331 | 19,522 |