Article

Surrogate-Assisted Hybrid Meta-Heuristic Algorithm with an Add-Point Strategy for a Wireless Sensor Network

1 College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
2 Department of Information Management, Chaoyang University of Technology, Taichung 41349, Taiwan
3 Department of Electronic Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
4 Graduate School of Information, Production and Systems, Waseda University, Kitakyushu 808-0135, Japan
* Author to whom correspondence should be addressed.
Entropy 2023, 25(2), 317; https://doi.org/10.3390/e25020317
Submission received: 13 December 2022 / Revised: 30 January 2023 / Accepted: 4 February 2023 / Published: 9 February 2023
(This article belongs to the Section Multidisciplinary Applications)

Abstract

Meta-heuristic algorithms are widely used for complex problems that traditional computing methods cannot solve, owing to their powerful optimization capabilities. However, for high-complexity problems, evaluating the fitness function may take hours or even days. Surrogate-assisted meta-heuristic algorithms effectively address such long fitness-evaluation times. Therefore, this paper proposes an efficient surrogate-assisted hybrid meta-heuristic algorithm, abbreviated as SAGD, which combines a surrogate-assisted model with the gannet optimization algorithm (GOA) and the differential evolution (DE) algorithm. We propose a new add-point strategy that uses information retained from historical surrogate models to select better candidates for true fitness evaluation, and we use a local radial basis function (RBF) surrogate to model the landscape of the objective function. A control strategy selects between two efficient meta-heuristic algorithms that update the training-model samples, and a generation-based optimal restart strategy is incorporated into SAGD to select suitable samples with which to restart the meta-heuristic algorithms. We tested the SAGD algorithm on seven commonly used benchmark functions and on the wireless sensor network (WSN) coverage problem. The results show that the SAGD algorithm performs well in solving expensive optimization problems.

1. Introduction

WSNs are a product of the new era, combining elements of computing, communication, and sensors to provide monitoring functions. A WSN consists of many sensor nodes forming a multi-hop self-organizing network using wireless communication. The WSN collaboratively senses, collects, and processes information in the signal coverage area and sends it to an observer to achieve the monitoring of the target area. Compared to traditional networks, WSNs have the advantage of being easy to deploy and fault-tolerant. Users can quickly deploy a practical WSN with limited time and conditions. Once deployed successfully, WSNs do not require much human effort, and the network automatically integrates and transmits information. Therefore, WSNs are widely used in various environments to perform monitoring tasks such as searching, battlefield surveillance, and disaster relief through the transmission of signals.
In a WSN, coverage is the ratio of the signal coverage of the sensor nodes to the detection area. Since the number of sensors and the signal radius of a single sensor are limited, researchers are very interested in maximizing the detection of the target area with a limited number of sensors [1]. Optimizing the locations of sensor nodes can improve network coverage and save costs. Various meta-heuristic algorithms are now used to solve the sensor node deployment problem. For example, Zhao et al. [2] used a particle swarm optimization algorithm and a variable-domain chaos optimization algorithm to find the optimal locations of sensors to improve coverage in a two-dimensional environment. As research progresses, the deployment of 3D WSN nodes, which better reflects real environments, is receiving increasing attention. However, moving from two-dimensional to three-dimensional problems increases the dimensionality of the solution and, with it, the computational difficulty and time, so more suitable methods are needed for such problems.
The meta-heuristic algorithm does not depend on the structure and characteristics of the problem and has a powerful search capability for non-convex and non-differentiable problems. It can give relatively good solutions to practical problems within a certain time and is therefore widely used in many engineering problems, such as speed reducer design [3,4], cantilever beam design [5,6], and wireless sensor network design [7,8,9]. However, meta-heuristic algorithms require the computation of fitness values, whose time consumption usually increases significantly with the complexity of the problem, as in finite element analysis [10] and 3D WSN node deployment [11]. Solving such problems often takes hours or even days to obtain a satisfactory result, which wastes resources and tests the researcher's patience. Therefore, to address this long solution time of complex fitness functions, researchers have proposed surrogate-assisted meta-heuristic algorithms to improve the utility of meta-heuristic algorithms on expensive problems [12].
Surrogate-assisted meta-heuristic algorithms have been developed rapidly in recent years, and they have been studied by many researchers and are widely used in real-world problems. The surrogate-assisted meta-heuristic algorithm fits the fitness function using a surrogate model (also called an approximation model or meta-model). Nowadays, the widely used surrogate models are the Gaussian process (GP) or Kriging process [13,14], the polynomial approximation model [15], the support vector regression model [16], the radial basis function (RBF) model [17,18], etc., along with hybrid surrogate models that combine some of the models mentioned above [19,20]. Many papers have evaluated various surrogate models [21,22]. As the dimensionality of the optimization problem increases, the RBF surrogate model can obtain a relatively good performance, and the subsequent contents of this paper involve the RBF surrogate model.
The surrogate-assisted meta-heuristic algorithm can be classified according to the use of all samples or local samples when constructing the surrogate model. That is, it can be one of three types: a global, local, or hybrid surrogate-assisted meta-heuristic algorithm. Yu et al. [23] proposed a global surrogate model combined with a social learning particle swarm optimization algorithm, restart strategies, and generation-based and individual-based model management methods into a coherently operating whole. Pan et al. [24] proposed an effective, local, surrogate-assisted hybrid meta-heuristic algorithm that finds the nearest neighbors of an individual from a database based on Euclidean distance and uses this local information to train the surrogate model, introducing a selection criterion based on the best and top set of information to choose suitable individuals for true fitness evaluation. Wang et al. [25] proposed a hybrid surrogate-assisted particle swarm optimization algorithm whose global model management strategy is inspired by active learning based on committees. The local surrogate model is built from the best samples. The local surrogate model is run when the global model does not improve the optimal solution, and the two models are alternated.
The process of generating the samples used to construct a surrogate model is called the infill criterion (sampling). The performance of the surrogate model depends heavily on the quality and quantity of the samples in the database, which affect how well the surrogate model fits the true fitness function. As sampled data points need to be evaluated with the true fitness function before they can be added to the database, each act of adding points brings a high computational cost. Therefore, we must investigate infill criteria that yield better samples for building the surrogate model. Sampling strategies are broadly classified into static and adaptive sampling (add-point). Static sampling means that the training points required to build the surrogate model are drawn all at once. Widely used static sampling methods include Latin hypercube sampling (LHS) [26,27], full factorial design [28], orthogonal arrays [29], central composite design [30], etc. Since the sample points and the constructed surrogate model are independent in static sampling, static sampling cannot obtain helpful information about unknown problems; it only aims to cover the whole solution space regularly. LHS is a stratified sampling method that avoids the aggregation of sample points in a small area, reaches a given accuracy threshold with fewer samples, and keeps the computation simple. We use LHS to initialize the surrogate model, as sketched below.
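As a concrete illustration, the following Python sketch draws an LHS initial population with SciPy's qmc module; the paper itself uses MATLAB's lhsdesign, and the dimension, sample count, and bounds below are placeholder values rather than the paper's settings.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative LHS initialization (placeholder dimension, size, and bounds).
dim, ps = 10, 50
lower, upper = np.full(dim, -5.0), np.full(dim, 5.0)

sampler = qmc.LatinHypercube(d=dim, seed=0)
unit_samples = sampler.random(n=ps)                  # points in the unit hypercube
initial_pop = qmc.scale(unit_samples, lower, upper)  # rescaled to the search bounds
```

Each row of initial_pop would then be evaluated with the true fitness function and stored, together with the mean individual, as the initial sample database.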
Infill criteria are the product of combining static sampling with an add-point strategy. Static sampling techniques first establish the initial surrogate model, and the samples are updated using a meta-heuristic algorithm guided by the current surrogate model. Then, the samples are sampled according to the add-point strategy, and the surrogate model is re-updated to improve the overall accuracy of the current surrogate model. Finally, the above process is repeated until the maximum number of true fitness evaluations is satisfied. As the add-point strategy uses the previous experimental results as a reference for subsequent sampling, adaptive sampling can construct a more accurate surrogate model using a small number of samples compared to the inexperience of static sampling. Modern scientists are interested in the study of add-point strategies. They have proposed the statistical lower bound strategy [31], the maximizing probability of improvement strategy [32,33], and the maximizing mean squared error strategy [34,35]. Based on the above discussion, this paper proposes a new adaptive sampling strategy based on the historical surrogate model information.
Several aspects, such as the choice of the surrogate model, the choice of the add-point strategy, and the choice of the meta-heuristic algorithm, determine the performance of the surrogate-assisted meta-heuristic algorithm. Among these, the meta-heuristic algorithm’s choice largely determines the surrogate-assisted algorithm’s performance. The surrogate-assisted meta-heuristic algorithm also performs differently when we choose different meta-heuristic algorithms. In recent years, researchers have proposed more and more meta-heuristic algorithms inspired by certain phenomena or natural laws in nature, from the classical GA [36,37], DE [38], and PSO [39,40,41] to the recently proposed GOA [42] and PPE [43]. PPE is a population evolution algorithm that imitates the evolution rules of phasmatodea populations, that is, the characteristics of convergent evolution, path dependence, population growth, and competition. These algorithms have shown powerful optimization capabilities. Using a single meta-heuristic algorithm to solve complex problems may follow a similar update strategy to update the samples and thus fall into a local optimum. Therefore, many researchers consider hybrid algorithms a research hotspot, such as combining the structures of two algorithms or connecting them in series. Zhang et al. [44] proposed a strategy to combine the WOA and SFLA algorithms, i.e., combining the powerful optimization capability of WOA with the interpopulation communication capability of SFLA, and the hybrid algorithm obtained a more powerful optimization capability than when the two algorithms were run separately. However, few hybrid algorithms have been applied to surrogate-assisted meta-heuristic algorithms to deal with costly problems. We combined the recently proposed GOA with the DE algorithm to design a surrogate-assisted meta-heuristic algorithm.
We propose a surrogate-assisted hybrid meta-heuristic algorithm based on an add-point strategy that uses historical surrogate models. The main contributions of this work are as follows.
  • An add-point strategy is proposed, in which the useful information in the historical surrogate model is retained and will be compared with the information in the latest surrogate model to select the appropriate sample points for true fitness evaluation.
  • Combining GOA and DE algorithms, we make full use of the powerful exploration ability of GOA and the exploitation ability of DE, and propose an escape-from-the-local-optimum control strategy to control the selection of the two algorithms.
  • Generation-based optimal restart strategies are incorporated, and some of the best sample information is used to construct local surrogate models.
  • The SAGD algorithm was tested in seven benchmark functions and compared with other surrogate-assisted and meta-heuristic algorithms, including statistical analysis and iterative curves. The SAGD algorithm was applied to the 3D WSN node deployment problem to improve the network signal coverage.
The rest of this paper is organized as follows. Section 2 introduces the RBF surrogate model, GOA, DE algorithm, and 3D WSN node deployment problem. Section 3 describes the proposed surrogate-assisted hybrid meta-heuristic algorithm based on the historical surrogate model add-point strategy. Section 4 compares the SAGD algorithm with other algorithms on seven benchmark functions and shows how it has been applied to the 3D WSN node deployment problem. Finally, conclusions are given in Section 5.

2. Related Techniques

2.1. RBF Surrogate Model

The radial basis function is a real-valued function whose value depends only on the distance from the origin, which can be expressed as $\Phi(x) = \Phi(\|x\|)$. If the origin is replaced by some other special point $c$, the formula can be defined as $\Phi(x, c) = \Phi(\|x - c\|)$, where $\|\cdot\|$ denotes a distance, commonly the Euclidean distance. Common radial basis functions include Gaussian basis functions, linear basis functions, cubic basis functions, etc.
The RBF model performs well for nonlinear problems with small-scale data and scales well for large-scale problems [21]. Therefore, we use the RBF surrogate model to approximate the expensive objective function. The RBF surrogate model used in this paper is in the form of interpolation, defined as follows. Given the data points $x_1, x_2, \ldots, x_n \in \mathbb{R}^D$ and their fitness values $f(x_1), f(x_2), \ldots, f(x_n)$, the radial basis function can be fitted to these data using Equation (1):
$$\hat{f}(x) = \sum_{i=1}^{n} \omega_i \, \varphi\left(\left\| x - x_i \right\|\right) + p(x) \qquad (1)$$
where $\varphi(\cdot)$ and $\|\cdot\|$ denote the basis function and the distance, respectively—the cubic basis function and the Euclidean distance are used in this paper; $\omega_i$ denotes the weight coefficient of the cubic basis function interpolation for the point $x_i$; and $p(x)$ is a linear polynomial that satisfies $\sum_{i=1}^{n} \omega_i \, p(x_i) = 0$. The unknown terms of the radial basis function interpolation formula can be found from Equation (2):
$$\begin{pmatrix} \Phi & P \\ P^{T} & 0 \end{pmatrix} \begin{pmatrix} w \\ b \end{pmatrix} = \begin{pmatrix} F \\ 0 \end{pmatrix} \qquad (2)$$
where $\Phi$ is an $n \times n$ matrix with elements $\Phi_{i,j} = \varphi(\| x_i - x_j \|)$, $i, j = 1, 2, \ldots, n$; $b = (b_1, b_2, \ldots, b_{D+1})^{T}$ is the parameter vector of $p(x)$ and $F = (f(x_1), f(x_2), \ldots, f(x_n))^{T}$; and $P$ is the matrix of the basis functions of $p(x)$ evaluated at the interpolation points. An RBF model is obtained when the rank of $P$ is $D + 1$.
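As an illustration of Equations (1) and (2), the following Python sketch fits a cubic RBF interpolant with a linear polynomial tail by assembling and solving the block system directly. It is a minimal sketch under the stated definitions (no regularization or rank checks), not the implementation used in the paper.

```python
import numpy as np

def fit_cubic_rbf(X, f):
    """Fit a cubic RBF interpolant with a linear polynomial tail (Eqs. (1)-(2)).

    X : (n, D) sample points, f : (n,) true fitness values.
    Returns a callable surrogate f_hat(x).
    """
    n, D = X.shape
    # Pairwise Euclidean distances and cubic basis Phi_ij = ||x_i - x_j||^3
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = dists ** 3
    # Linear polynomial tail p(x) = b_0 + b_1 x_1 + ... + b_D x_D
    P = np.hstack([np.ones((n, 1)), X])                       # (n, D+1)
    # Block system [[Phi, P], [P^T, 0]] [w; b] = [f; 0]
    A = np.block([[Phi, P], [P.T, np.zeros((D + 1, D + 1))]])
    rhs = np.concatenate([f, np.zeros(D + 1)])
    sol = np.linalg.solve(A, rhs)
    w, b = sol[:n], sol[n:]

    def f_hat(x):
        x = np.asarray(x, dtype=float)
        d = np.linalg.norm(X - x, axis=1)
        return float(d ** 3 @ w + b[0] + b[1:] @ x)

    return f_hat
```

The returned callable can then stand in for the expensive objective whenever the meta-heuristic algorithms evaluate candidate individuals.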

2.2. Gannet Optimization Algorithm

The GOA is a new meta-heuristic algorithm inspired by the predatory behavior of gannets in nature. It mainly consists of two phases, exploration and exploitation. The exploration phase mathematizes the gannet's U- and V-shaped diving strategies, the exploitation phase focuses on the gannet's sudden rotation and random swimming characteristics, and the two phases are run alternately to search for the best area in the search space.

2.2.1. Initialization Phase

The GOA starts from a set of randomly initialized solutions X and chooses whether to perform the exploration or exploitation phase according to a probability. The GOA contains a memory matrix MX. When an update strategy is applied, the new position found does not immediately replace the current position; instead, the fitness of the new position is compared with that of the current position, and the individual with the better fitness value is retained in the X matrix. The initialization phase assigns X to MX.

2.2.2. Exploration Phase

When hunting, gannets often observe their prey in the air and choose where prey is dense to glide and rush into the water. After rushing into the water, gannets have two types of diving, namely, U-shaped dives and V-shaped dives. A U-shaped dive is a long and deep one, and a V-shaped dive has a short and shallow shape. Equation (3) is used here to represent the shape of the U-shaped dive of the gannet, and Equation (4) represents the shape of the V-shaped dive.
$$a = 2 \cos\left(2 \pi r_1\right) t \qquad (3)$$
$$b = 2 V\left(2 \pi r_2\right) t \qquad (4)$$
$$t = 1 - \frac{It}{T_{max\_iter}} \qquad (5)$$
$$V(x) = \begin{cases} -\dfrac{1}{\pi} x + 1, & x \in (0, \pi) \\[4pt] \dfrac{1}{\pi} x - 1, & x \in (\pi, 2\pi) \end{cases} \qquad (6)$$
where cos is MATLAB’s cosine function, r 1 and r 2 are random numbers between 0 and 1, It is the current number of iterations, and T m a x _ i t e r is the maximum number of iterations specified. As can be seen, t slowly decreases throughout the iterations until it reaches 0.
Once the two dive shapes are defined, the gannet updates its positions according to these two dive shapes. Since the gannet chooses both dive shapes with the same probability when feeding, a random number q is defined to indicate which dive shape is chosen.
$$MX_i(t+1) = \begin{cases} X_i(t) + u_1 + u_2, & q \ge 0.5 \\ X_i(t) + v_1 + v_2, & q < 0.5 \end{cases} \qquad (7)$$
$$u_2 = A \left( X_i(t) - X_r(t) \right) \qquad (8)$$
$$v_2 = B \left( X_i(t) - X_m(t) \right) \qquad (9)$$
$$A = (2 r_3 - 1)\, a \qquad (10)$$
$$B = (2 r_4 - 1)\, b \qquad (11)$$
where $u_1$ and $v_1$ are random numbers in $[-a, a]$ and $[-b, b]$, respectively; $X_i(t)$ is the $i$th individual in the current population; $X_r(t)$ is a randomly selected individual in the current population; and $X_m(t)$ is the average position of the individuals in the current population. $r_3$ and $r_4$ are random numbers between 0 and 1.
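The following Python sketch performs one exploration-phase update following Equations (3)–(11); boundary handling and the memory-matrix comparison are omitted, and the function and variable names are our own.

```python
import numpy as np

def v_shape(x):
    """Piecewise V-shape function of Equation (6)."""
    return -x / np.pi + 1.0 if x <= np.pi else x / np.pi - 1.0

def goa_exploration_step(X, t, rng=None):
    """One exploration-phase update of the GOA (Equations (3)-(11)).

    X : (N, D) current population; t = 1 - It / Tmax_iter as in Equation (5).
    Returns the candidate memory matrix MX.
    """
    if rng is None:
        rng = np.random.default_rng()
    N, D = X.shape
    MX = X.copy()
    Xm = X.mean(axis=0)                          # mean position X_m(t)
    for i in range(N):
        q = rng.random()
        r = rng.integers(N)                      # index of a random individual X_r(t)
        if q >= 0.5:                             # U-shaped dive, Eqs. (3), (8), (10)
            a = 2.0 * np.cos(2.0 * np.pi * rng.random()) * t
            u1 = a * rng.uniform(-1.0, 1.0, D)   # u1 in [-a, a]
            u2 = (2.0 * rng.random() - 1.0) * a * (X[i] - X[r])
            MX[i] = X[i] + u1 + u2
        else:                                    # V-shaped dive, Eqs. (4), (9), (11)
            b = 2.0 * v_shape(2.0 * np.pi * rng.random()) * t
            v1 = b * rng.uniform(-1.0, 1.0, D)   # v1 in [-b, b]
            v2 = (2.0 * rng.random() - 1.0) * b * (X[i] - Xm)
            MX[i] = X[i] + v1 + v2
    return MX
```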

2.2.3. Exploitation Phase

When a gannet hunts in the water, a flexible fish often performs a sudden turning action to escape from the pursuit. A parameter C is defined for the capture ability, which decreases with the number of iterations (see Equation (12)); cf indicates the centripetal force of the fish in the water.
$$C = \frac{1}{cf \times t_2} \qquad (12)$$
$$t_2 = 1 + \frac{It}{T_{max\_iter}} \qquad (13)$$
$$cf = \frac{M \times vel^{2}}{L} \qquad (14)$$
$$L = 0.2 + (2 - 0.2)\, r_5 \qquad (15)$$
where M = 2.5 kg is the common weight of the gannet in reality, vel = 1.5 m / s is the speed of the gannet diving in the water, and r 5 is a random number between 0 and 1.
Define a constant c. If the gannet’s capture capacity C is greater than or equal to c, then the algorithm executes the strategy of a sudden turn to update the current position of the gannet. Otherwise, it means that the gannet does not have enough energy to complete the pursuit of the cunning fish at this time, so it carries out the strategy of random wandering to update the position of the gannet—that is, Equation (16).
$$MX_i(t+1) = \begin{cases} t \cdot delta \cdot \left( X_i(t) - X_{Best}(t) \right) + X_i(t), & C \ge c \\ X_{Best}(t) - \left( X_i(t) - X_{Best}(t) \right) \cdot P \cdot t, & C < c \end{cases} \qquad (16)$$
$$delta = C \times \left| X_i(t) - X_{Best}(t) \right| \qquad (17)$$
$$P = \mathrm{Levy}(Dim) \qquad (18)$$
where X Best ( t ) is the best individual in the current population, and Levy ( ) is the Levy flight function used to model the behavior of individuals wandering randomly, which can be found in Equation (19).
$$\mathrm{Levy}(Dim) = 0.01 \times \frac{\mu \times \sigma}{|v|^{\frac{1}{\beta}}} \qquad (19)$$
$$\sigma = \left( \frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi \beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{\frac{1}{\beta}} \qquad (20)$$
where $\mu \sim N(0, \sigma^{2})$, $v \sim N(0, 1)$, $\beta = 1.5$ is a pre-defined constant, $\Gamma$ is the gamma function that comes with MATLAB, and $Dim$ is the dimensional size of the problem.
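A minimal Python sketch of the Levy flight of Equations (19) and (20), written in the common Mantegna form in which μ is drawn from N(0, 1) and scaled by σ (equivalent to μ ∼ N(0, σ²) as stated above); the names and defaults are ours.

```python
import math
import numpy as np

def levy(dim, beta=1.5, rng=None):
    """Levy flight of Equations (19)-(20), Mantegna-style sampling."""
    if rng is None:
        rng = np.random.default_rng()
    # sigma from Equation (20)
    sigma = (math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
             / (math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0))) ** (1.0 / beta)
    mu = rng.normal(0.0, 1.0, dim)   # scaled by sigma below, i.e., mu ~ N(0, sigma^2)
    v = rng.normal(0.0, 1.0, dim)
    return 0.01 * mu * sigma / np.abs(v) ** (1.0 / beta)
```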
The GOA consists of two stages, exploration and exploitation, and each stage contains two update strategies. Algorithm 1 gives the pseudo-code of the GOA.
Algorithm 1 Pseudo-code of the GOA.
Input: N: population size; Dim: problem dimension; Tmax_iter: maximum number of iterations;
Output: The position of the best individual and its fitness value;
 1: Initialize the population X randomly; r and q are random numbers from 0 to 1;
 2: Generate the memory matrix MX;
 3: Calculate the fitness value of X;
 4: while stopping condition is not met do
 5:     if r > 0.5 then
 6:         for MX_i do
 7:             if q ≥ 0.5 then
 8:                 Update the gannet's location using Equation (7), where q ≥ 0.5;
 9:             else
10:                 Update the gannet's location using Equation (7), where q < 0.5;
11:             end if
12:         end for
13:     else
14:         for MX_i do
15:             if C ≥ 0.2 then
16:                 Update the gannet's location using Equation (16), where C ≥ 0.2;
17:             else
18:                 Update the gannet's location using Equation (16), where C < 0.2;
19:             end if
20:         end for
21:     end if
22:     for MX_i do
23:         Calculate the fitness value of MX_i;
24:         If the value of MX_i is better than the value of X_i, replace X_i with MX_i;
25:     end for
26: end while

2.3. Differential Evolutionary Algorithm

The DE algorithm was proposed by Storn and Price to solve Chebyshev polynomial fitting problems, and it is widely used in shop scheduling, transportation, and engineering design problems. It has fast convergence, good robustness, and few control parameters. The DE algorithm has four basic steps: initialization, mutation, crossover, and selection; the last three operations are performed cyclically. Mutation and crossover change the individuals, and selection keeps the good individuals and eliminates the bad ones. These operations lead the whole population toward the optimal solution. The four steps are described in detail below.
Initialization of the population is the first step performed by a meta-heuristic algorithm. Like most algorithms, DE generates a population X by random initialization within predefined bounds. The second step is the generation of the mutation vector by the mutation operation, which selects several individuals and perturbs a base vector with the differences between them. There are many kinds of mutation operations; common ones are shown in Equations (21)–(24).
$$v_i^t = x_{rand1}^t + F \left( x_{rand2}^t - x_{rand3}^t \right) \qquad (21)$$
$$v_i^t = x_{best}^t + F \left( x_{rand1}^t - x_{rand2}^t \right) \qquad (22)$$
$$v_i^t = x_{rand1}^t + F \left[ \left( x_{rand2}^t - x_{rand3}^t \right) + \left( x_{rand4}^t - x_{rand5}^t \right) \right] \qquad (23)$$
$$v_i^t = x_{best}^t + F \left[ \left( x_{rand1}^t - x_{rand2}^t \right) + \left( x_{rand3}^t - x_{rand4}^t \right) \right] \qquad (24)$$
where $F$ is a scaling factor that controls the scale of the mutation; $x_{rand1}^t, \ldots, x_{rand5}^t$ denote random individuals in generation $t$, with rand1, rand2, rand3, rand4, and rand5 distinct from each other; and $x_{best}^t$ is the best individual in generation $t$. Equation (22) therefore takes the best individual of the current population as the base vector, randomly selects two individuals whose difference forms the difference vector, and perturbs the base vector with it. This mutation operation is used later in this article.
The third step is the crossover operation, which uses the mutation vector generated in the second step to crossover with the target vector to generate the test vector. Two common crossover methods are binomial crossover and exponential crossover. The binomial crossover can be described as Equation (25).
$$tr_{i,j}^t = \begin{cases} v_{i,j}^t, & \text{if } rand_{i,j}(0,1) \le CR \text{ or } j = j_{rand} \\ x_{i,j}^t, & \text{otherwise} \end{cases} \qquad (25)$$
where $CR = 0.9$ is a predefined crossover probability and $rand_{i,j}(0,1)$ is a random number between 0 and 1; these two variables jointly determine how many dimensions of the trial vector carry information from the mutation vector. $j_{rand}$ is a random integer uniformly distributed in the interval $[1, D]$, which ensures that at least one dimension is updated in each crossover operation, where $D$ is the total number of dimensions.
The final selection operation uses the idea of greed by comparing the fitness values of the target vector and the test vector and selecting the one with the better fitness value for the next iteration. The pseudo-code of the DE algorithm is given in Algorithm 2.
Algorithm 2 Pseudo-code of the DE.
Input: N: population size; Dim: problem dimension; Tmax_iter: maximum number of iterations; F: scaling factor; CR: crossover probability;
Output: The position of the best individual and its fitness value;
 1: Initialize the population X randomly;
 2: Calculate the fitness value of X;
 3: while stopping condition is not met do
 4:     for x_i^t do
 5:         Generate the mutation vector v_i^t for individual x_i^t using the mutation operation of Equation (22);
 6:     end for
 7:     for x_i^t do
 8:         Generate a trial vector tr_{i,j}^t from individual x_i^t and mutation vector v_i^t using the crossover operation of Equation (25);
 9:     end for
10:     Calculate the fitness value of each trial vector tr_{i,j}^t;
11:     for x_i^t do
12:         Generate the next-generation individual x_i^{t+1} by applying the selection operation to individual x_i^t and trial vector tr_{i,j}^t;
13:     end for
14: end while
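To make the mutation, crossover, and selection steps concrete, the following Python sketch performs one DE/best/1/bin generation corresponding to Equations (22) and (25); bound handling is omitted, and F = 0.5 is an assumed value (the text only fixes CR = 0.9).

```python
import numpy as np

def de_generation(X, fitness, objective, F=0.5, CR=0.9, rng=None):
    """One DE/best/1/bin generation: Equation (22), Equation (25), greedy selection.

    X : (N, D) population; fitness : (N,) fitness values; objective : callable
    evaluating a single vector (e.g., the RBF surrogate).
    """
    if rng is None:
        rng = np.random.default_rng()
    N, D = X.shape
    best = X[np.argmin(fitness)]                    # x_best of the current generation
    new_X, new_fit = X.copy(), fitness.copy()
    for i in range(N):
        # Mutation (Eq. (22)): perturb the best individual with one difference vector
        r1, r2 = rng.choice([k for k in range(N) if k != i], size=2, replace=False)
        v = best + F * (X[r1] - X[r2])
        # Binomial crossover (Eq. (25)): keep at least one dimension from the mutant
        j_rand = rng.integers(D)
        mask = rng.random(D) <= CR
        mask[j_rand] = True
        trial = np.where(mask, v, X[i])
        # Greedy selection between the target and trial vectors
        f_trial = objective(trial)
        if f_trial < fitness[i]:
            new_X[i], new_fit[i] = trial, f_trial
    return new_X, new_fit
```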

2.4. 3D Wireless Sensor Network Node Deployment

Wireless sensor networks are considered the second largest network after the Internet and one of the most influential technologies of the 21st century [45]. The optimization of WSNs includes routing and deployment optimization [46,47,48]. With a limited number of sensors, improving node coverage in WSNs has always been an important issue. Optimizing the coverage of WSNs is important for the rational allocation of network resources and for better accomplishing tasks such as environmental sensing and information acquisition. Research on 2D planar node deployment has achieved many desirable results, and 3D spatial coverage strategies have also gradually attracted the attention of scholars. Using meta-heuristic algorithms to optimize sensor deployment in 3D environments is highly time-consuming, so surrogate-assisted meta-heuristic algorithms are a good direction for such problems.
When a sensor detects the area around it, the emitted signal may be blocked by intermediate obstacles and thus unable to detect points within its coverage, so the signal coverage of the whole sensor network is affected. Here we use Bresenham's line of sight (LOS) [11] algorithm to detect whether there is a spatial obstruction between two points. As shown in Figure 1, if there is a protruding obstacle between sensor node S and a point L within its communication radius, it blocks the detection of point L by sensor node S. If no other sensor node detects this location either, then this location is a blind area for the whole sensor network. We use the 0-1 model to optimize the WSN coverage. In the 0-1 model, the sensing radius is set to $R_s$. If the Euclidean distance between sensor node S and detection point L is less than the radius $R_s$ and there is no obstruction in between, the model probability is 1; otherwise, it is 0. This can be expressed as Equation (26).
$$F(S, L) = \begin{cases} 1, & D(S, L) < R_s \text{ and no obstacle between } S \text{ and } L \\ 0, & \text{otherwise} \end{cases} \qquad (26)$$
where function D is used to calculate the Euclidean distance between S and L.
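A minimal Python sketch of the 0-1 coverage model of Equation (26); has_obstacle stands in for the Bresenham LOS test and is a hypothetical placeholder, not a function defined in the paper.

```python
import numpy as np

def covered(sensor, point, r_s, has_obstacle):
    """0-1 coverage model of Equation (26).

    sensor, point : 3-D coordinates; r_s : sensing radius; has_obstacle is a
    hypothetical LOS test (e.g., Bresenham-based) returning True when the
    terrain blocks the line between the two points.
    """
    in_range = np.linalg.norm(np.asarray(sensor) - np.asarray(point)) < r_s
    return 1 if in_range and not has_obstacle(sensor, point) else 0
```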

3. Proposed SAGD Method

This section introduces our proposed algorithm, which consists of four main parts: population initialization, surrogate model building or updating, execution of the hybrid meta-heuristic algorithm, and execution of the add-point strategy. The general framework of the algorithm SAGD is given in Figure 2, where the sample database (SDB) holds the data samples and their true fitness values, and the surrogate model database (SADB) keeps the information of all the surrogate models. The black line represents the algorithm stream, the green dashed line represents the surrogate model data stream, and the yellow dashed line represents the sample data stream.
First, in the population initialization phase, we initialize the solution space using the LHS method, where the LHS method uses the lhsdesign function that comes with MATLAB. The fitness values are then calculated for all individuals and the mean individual using the true fitness function. All individuals, the mean individual, and their fitness values are saved to the SDB, as shown in flow ➀ of Figure 2, for subsequent surrogate modeling and meta-heuristic algorithm initialization. The number of initial individuals is consistent with the population size of the meta-heuristic algorithm. The population size, ps, was set to 5 × D for the expensive optimization problems in 10, 20, and 30 dimensions. For the expensive optimization problems in 50 and 100 dimensions, the population size ps was set to 100 + D / 10 .
The second step is the creation or updating of the surrogate model, where the data in the SDB are used for modeling, as shown in flow ➁ in Figure 2. The predictive power of the surrogate model is very important for the whole algorithm, so we want to build a good surrogate model with good performance. In general, the global surrogate model can fit the contour of the problem well, but it is difficult to apply the global surrogate model for high-dimensional problems, so we used local data to build the surrogate model in this study. The local surrogate model can speed up the search for promising regions and improve the accuracy of the surrogate model. When selecting the data for building the local surrogate model, the method of sample selection and the size of the data volume are both important. Here, we used the sample of neighbors of the current population in the SDB to build the local radial basis function surrogate model; the number of neighbors per individual was set to 5 × D for 10, 20, and 30 dimensions; and the number of neighbors was set to D for 50 and 100 dimensions, as shown in Algorithm 3. When updating the local surrogate model, we saved the information of the surrogate model at this time to the surrogate model database SADB and then updated the surrogate model. At the same time, the information was not saved when the surrogate model was built for the first time, as shown in flow ➂ in Figure 2.
Algorithm 3 Pseudo-code for constructing a local radial basis function model.
1: if updating the local surrogate model then
2:     Keep the information of the current local surrogate model for the add-point strategy;
3: end if
4: for each x_i^t do
5:     Use the Euclidean distance to find the n nearest neighbors of the individual in the database and form the set NP_i;
6: end for
7: Merge the sets generated by each individual to form the training set NP;
8: Construct a new local surrogate model with the training set NP;
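A minimal Python sketch of the neighbor-selection step of Algorithm 3; the array names and the de-duplication of the merged set are our own assumptions.

```python
import numpy as np

def local_training_set(population, sdb_X, sdb_f, n_neighbors):
    """Neighbor selection of Algorithm 3.

    population : (ps, D) current individuals; sdb_X, sdb_f : evaluated samples
    in the SDB and their true fitness values. Returns the merged training set.
    """
    selected = set()
    for x in population:
        d = np.linalg.norm(sdb_X - x, axis=1)           # Euclidean distances to SDB samples
        selected.update(np.argsort(d)[:n_neighbors])    # n nearest neighbors of this individual
    idx = sorted(selected)                              # merged, de-duplicated index set NP
    return sdb_X[idx], sdb_f[idx]
```

The returned pair can be passed directly to the RBF fitting routine to build the local surrogate model.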
The third step is to evolve the population using the GOA and DE algorithms. The fitness function used here is the one fitted by the RBF surrogate model in the second step, as shown in flow ➃ in Figure 2. The GOA has a strong global search ability, performs well in high dimensions, and its position-update model helps it escape from local optima, but it is relatively complicated. DE has good local search ability and is fast, but it mostly perturbs the optimal individual, so it is less effective on high-dimensional multimodal problems and can fall into local optima. Therefore, we combine the GOA and DE; i.e., the two algorithms evolve the population alternately. We propose a control strategy for this alternation: the DE algorithm is started if all individual locations updated by the GOA already exist in the SDB, and vice versa; likewise, the DE algorithm is started if the GOA does not find a better solution, and vice versa.
After performing K generations, we apply the generation-based optimal restart strategy. The individuals in the population at this point are selected by the add-point strategy, and each selected individual and its true fitness value are added to the SDB. Since the data in the database have now changed, we select the best ps individuals from the database as the initial population of the meta-heuristic algorithm and restart the evolution, as shown in flow ➄ in Figure 2. This generation-based optimal restart strategy prevents the GOA and DE from being misled by the errors of the fitted surrogate models, and using the best individuals in the SDB helps the search reach promising regions quickly. Moreover, because the local RBF model is updated every K generations, both meta-heuristic algorithms can fully explore the approximate fitness landscape.
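A minimal sketch of the generation-based optimal restart, assuming a minimization problem and that the SDB is stored as NumPy arrays; the names are ours.

```python
import numpy as np

def restart_population(sdb_X, sdb_f, ps):
    """Generation-based optimal restart: take the ps best evaluated samples
    from the SDB as the new initial population (minimization assumed)."""
    order = np.argsort(sdb_f)
    return sdb_X[order[:ps]], sdb_f[order[:ps]]
```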
As the evaluation of fitness is time-consuming in many complex problems, it is unacceptable for us to evaluate the true fitness value for unimportant individuals or individuals with no future, so it is important to select the appropriate individuals for evaluation. After K generations of meta-heuristic algorithm updates, our new add-point strategy for selecting appropriate individuals for true fitness evaluation kicks in. As the surrogate model is a fit to the real function, the historical surrogate model also contains useful information, and the predictive ability of the new surrogate model is usually better than that of the historical surrogate model. Thus, we use the information from the surrogate model database SADB for the sample selection, as shown in flow ➅ in Figure 2. If the fitness value obtained on the new surrogate model is better than that obtained on the historical surrogate model, the individual is proved to be more promising.
First, the optimal individual of the population optimized by the GOA and DE algorithms is selected; then an integer tpc is randomly generated in the range [1, ps], and the mean of the first tpc individuals is calculated. The fitness values of the optimal individual and the mean individual are computed with both the historical surrogate model and the new surrogate model. If an individual's fitness value on the new surrogate model is better than on the historical surrogate model, its fitness on the true fitness function is calculated, and the individual and its true fitness value are saved to the SDB. Finally, if neither the optimal individual nor the mean individual is added to the SDB (i.e., both have new-model fitness values inferior to their historical-model values), one individual is randomly selected from the first third of the population, its true fitness value is calculated, and the individual and its fitness value are saved to the SDB, as shown in flow ➆ in Figure 2. The pseudo-code of the add-point strategy based on the historical surrogate model is given in Algorithm 4.
Algorithm 4 Add-point strategy based on historical surrogate model information.
 1: Historical surrogate model information has been saved in the SADB;
 2: Obtain the best individual of the current population (best), the mean of the top tpc individuals (mean), and a random individual from the top third (rand);
 3: Compute the fitness values of the best, mean, and random individuals with the new RBF surrogate model RBF_NEW and the historical RBF surrogate model RBF_OLD;
 4: if RBF_NEW(best) < RBF_OLD(best) then
 5:     Save the best individual and its true fitness value to the SDB;
 6: end if
 7: if RBF_NEW(mean) < RBF_OLD(mean) then
 8:     Save the mean individual and its true fitness value to the SDB;
 9: end if
10: if neither the best individual nor the mean individual was added to the SDB then
11:     Save the rand individual and its true fitness value to the SDB;
12: end if
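The following Python sketch mirrors one add-point step of Algorithm 4; the callables rbf_new, rbf_old, and true_f are assumed interfaces rather than the paper's code.

```python
import numpy as np

def add_points(best, mean, rand, rbf_new, rbf_old, true_f, sdb_X, sdb_f):
    """Add-point step of Algorithm 4.

    best, mean, rand : candidate individuals; rbf_new / rbf_old : callables for
    the current and historical local RBF models; true_f : expensive objective.
    Returns the updated SDB arrays.
    """
    added = False
    for cand in (best, mean):
        # A candidate is evaluated exactly only if the new model rates it
        # better than the historical model does.
        if rbf_new(cand) < rbf_old(cand):
            sdb_X = np.vstack([sdb_X, cand])
            sdb_f = np.append(sdb_f, true_f(cand))
            added = True
    if not added:
        # Fall back to a random individual from the top third of the population.
        sdb_X = np.vstack([sdb_X, rand])
        sdb_f = np.append(sdb_f, true_f(rand))
    return sdb_X, sdb_f
```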
It is worth noting that one or two sample points are added at a time with this add-point strategy. In the above steps, population initialization and initial surrogate model building are run only once; then, surrogate model updating, hybrid meta-heuristic optimization, and the add-point strategy are executed iteratively. The process stops when the actual number of true function evaluations reaches the specified maximum. We define FESmax = 11 × D when D ≤ 30, and FESmax = 1000 when 50 ≤ D ≤ 100. Algorithm 5 gives the pseudo-code of the proposed SAGD algorithm.
Algorithm 5 Pseudo-code for the SAGD algorithm.
Input: RunGOA = 1; FES = ps + 1; gen = 0; K = 30; r: random value between 0 and 1;
Output: Optimal solution and its fitness value;
 1: Initialize the solution space by Latin hypercube sampling;
 2: Use the real fitness function to calculate the fitness values of all individuals and the mean individual;
 3: while FES < FESmax do
 4:     Initialize the population with the first ps samples in the SDB;
 5:     if FES > ps + 1 then
 6:         Preserve the information of the local surrogate model and construct the local surrogate model using Algorithm 3;
 7:     else
 8:         Put the population, the mean individual, and their true fitness values into the database;
 9:     end if
10:     while gen < K do
11:         if RunGOA = 1 then
12:             for X_i do
13:                 if r > 0.5 then
14:                     Execute Equation (7);
15:                 else
16:                     Execute Equation (16);
17:                 end if
18:                 Use the local RBF surrogate model to estimate the fitness values of the evolved and original individuals, and retain the individuals with better fitness values;
19:             end for
20:             gen = gen + 1;
21:         else
22:             for X_i do
23:                 Generate a mutated individual by Equation (22);
24:                 Generate a trial individual by Equation (25);
25:                 Use the local RBF surrogate model to estimate the fitness values of the evolved and original individuals, and retain the individuals with better fitness values;
26:             end for
27:             gen = gen + 1;
28:         end if
29:     end while
30:     if the locations of the population individuals updated by the meta-heuristic algorithm are all present in the SDB then
31:         Invert the value of RunGOA;
32:     else
33:         Calculate the true fitness values of individuals selected by the add-point strategy of Algorithm 4 and update the SDB;
34:         if the meta-heuristic algorithm did not find a better solution then
35:             Invert the value of RunGOA;
36:         else
37:             Retain the value of RunGOA;
38:         end if
39:     end if
40: end while

4. Experimental Results and Analysis

4.1. Benchmark Test Functions and Comparison Algorithms

In this section, we measure our algorithm using seven commonly used benchmark functions with different properties, the details of which are given in Table 1. These have been widely used to test the effectiveness of surrogate-assisted meta-heuristic algorithms. We followed the comparison method of the literature [23] in 10, 20, and 30 dimensions using the first five benchmark functions and in 50 and 100 dimensions for all benchmark functions except for Rastrigin. These functions are all minimization problems, and comparing the minima can help us evaluate the optimization algorithm’s effectiveness.
We compare the SAGD algorithm with the surrogate-assisted meta-heuristic algorithms that have been proposed and the ordinary meta-heuristic algorithms, where the surrogate-assisted meta-heuristic algorithms include GORS-SSLPSO [23] and FHSAPPSO [49], and the ordinary meta-heuristic algorithms include the GOA [42] algorithm and the DE [38] algorithm, which make up the SAGD algorithm. Among them, the GORS-SSLPSO algorithm uses generation-based evolutionary control and optimal restart strategy and combines social learning particle swarm for optimization; the FHSAPPSO algorithm uses a three-layer surrogate model, which is fuzzy surrogate-assisted, local surrogate-assisted, and global surrogate-assisted, and combines probabilistic particle swarm for optimization. Note that the GORS-SSLPSO and DE algorithms can be downloaded online, and the original authors provide the codes of FHSAPPSO and GOA.
All experiments were conducted on a computer with a 3.60 GHz Intel(R) Core(TM) i3-8100 CPU and 24.0 GB of RAM, running the WIN10 operating system and MATLAB 2021a. For a fair comparison, we ran each algorithm 20 times and averaged the results. We limited the number of iterations and the initial population size of all algorithms. The maximum number of iterations was 11 × D for dimensions 10, 20, and 30, and 1000 for dimensions 50 and 100. The population size ps was set to 5 × D for expensive optimization problems in 10, 20, and 30 dimensions, and to 100 + D / 10 for expensive optimization problems in 50 and 100 dimensions. We set the number of generations, K, to the same value as in the literature [23], i.e., K = 30. The other setup parameters of GORS-SSLPSO, FHSAPPSO, GOA, and DE were as recommended in the original papers.
Table 1. Characteristics of seven benchmark functions.
Description | Dimension | Global Optimum | Characteristics
Ellipsoid: $f(x)=\sum_{i=1}^{n} i x_i^2$ | 10, 20, 30, 50, 100 | 0 | Uni-modal
Rosenbrock: $f(x)=\sum_{i=1}^{n-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ | 10, 20, 30, 50, 100 | 0 | Multimodal with narrow valley
Ackley: $f(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos 2\pi x_i\right)+20+e$ | 10, 20, 30, 50, 100 | 0 | Multimodal
Griewank: $f(x)=\frac{1}{4000}\sum_{i=1}^{n} x_i^2-\prod_{i=1}^{n}\cos\frac{x_i}{\sqrt{i}}+1$ | 10, 20, 30, 50, 100 | 0 | Multimodal
Rastrigin: $f(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos 2\pi x_i+10\right]$ | 10, 20, 30, 50, 100 | 0 | Multimodal
Shifted Rotated Rastrigin (SRR, cost function: F10 in [50]) | 10, 20, 30, 50, 100 | −330 | Very complicated multimodal
Rotated Hybrid Composition function with a Narrow Basin for the Global Optimum (RHC, cost function: F19 in [50]) | 10, 20, 30, 50, 100 | 10 | Very complicated multimodal

4.2. Experimental Results on 10D, 20D, and 30D Benchmark Functions

The experimental statistical results of the SAGD algorithm and the comparison algorithms GOA, DE, GORS-SSLPSO, and FHSAPPSO in 10, 20, and 30 dimensions, including the mean, best, worst, and standard deviation values obtained from multiple runs of each algorithm, are given in Table 2, Table 3 and Table 4. To statistically validate the results obtained by SAGD, we used the Wilcoxon rank sum test to discern whether the SAGD algorithm is significantly different from the other algorithms. The Wilcoxon rank sum test was performed at a significance level of alpha equal to 0.05. Using the symbol “+” indicates that SAGD is statistically significantly different from and better than the comparison algorithm. “=” indicates that SAGD is not statistically significantly different from the comparison algorithm. “−” indicates that SAGD is statistically significantly different from and inferior to the comparison algorithm. The Wilcoxon rank sum test results are also given in Table 2, Table 3 and Table 4.
Table 2, Table 3 and Table 4 show that the SAGD algorithm achieved promising results on all five benchmark functions relative to the other comparison algorithms. Specifically, the SAGD algorithm outperformed the other comparison algorithms on all 15 test problems. The SAGD algorithm not only achieved better average fitness values but also showed good stability by comparing standard deviations. This is because the SAGD algorithm combines two meta-heuristic algorithms to balance exploration and exploitation. The add-point strategy based on the historical surrogate model selected better sample points for the true fitness value calculation. For the common meta-heuristic algorithm, the surrogate-model-based meta-heuristic algorithm achieved better results on all test problems due to the inclusion of the surrogate model. The surrogate model can perform model estimation based on the evaluation of the true function and can better approximate the solution of the real function.
To compare SAGD with the other algorithms more intuitively, we used the iteration curves of the average fitness values of the algorithms for a visual comparison. We can compare each algorithm’s convergence speed and ability on different benchmark functions through the convergence curve plots, as shown in Figure 3, Figure 4 and Figure 5. The horizontal axis is the number of iterations, and the vertical axis is the natural logarithm of the average fitness value.
In Figure 3, Figure 4 and Figure 5, we can see that our proposed algorithm has better convergence speed and convergence ability than other algorithms. The surrogate-assisted meta-heuristic algorithms all achieved relatively better solutions than ordinary meta-heuristic algorithms. Our algorithm had promising results on most of the tested problems, showing its superiority. The GORS-SSLPSO algorithm achieved better performance than the FHSAPPSO algorithm in most cases. With the information from the algorithm iteration comparison graphs, we can say that SAGD effectively solves low and medium-dimensional expensive optimization problems.

4.3. Experimental Results on 50D and 100D Benchmark Functions

To further test the scalability of the SAGD algorithm in higher dimensions, we compared the SAGD algorithm with GOA, DE, GORS-SSLPSO, and FHSAPPSO in 50 and 100 dimensions. The mean, best, worst, and standard deviation values are given in Table 5 and Table 6, together with the results of Wilcoxon's rank sum test at a significance level of alpha = 0.05. In Table 5 and Table 6, SRR is an abbreviation of the Shifted Rotated Rastrigin function, and RHC is an abbreviation of the Rotated Hybrid Composition function.
From Table 5 and Table 6, we can conclude that although the mean value of the GORS-SSLPSO algorithm on the 50-dimensional SRR benchmark function is not as good as that of the SAGD algorithm, it obtained a better standard deviation. The GORS-SSLPSO algorithm achieved better results than the SAGD algorithm on the 100-dimensional Rosenbrock and SRR benchmark functions. The SAGD algorithm achieved good average fitness values on the other test problems and has strong stability. The surrogate-assisted meta-heuristic algorithms also achieved more competitive results than GOA and DE. Our algorithm showed no statistically significant difference from GORS-SSLPSO on the 50-dimensional Rosenbrock function, and no statistically significant difference from FHSAPPSO on the 100-dimensional SRR function.
The comparison graphs of the average-fitness-value iteration curves are given in Figure 6 and Figure 7. The horizontal axis is the number of iterations, and the vertical axis represents the natural logarithm of the average fitness value (for the SRR function, the vertical axis represents the average fitness value itself). From Figure 6 and Figure 7, it can be seen that the SAGD algorithm achieved a large advantage on the Ellipsoid, Ackley, Griewank, and RHC benchmark functions, which shows a strong ability to explore and escape local optima. For the Rosenbrock and SRR benchmark functions, there was no significant gap between the proposed algorithm and the GORS-SSLPSO and FHSAPPSO algorithms in the later part of the iteration curves; the SAGD algorithm won on the 50-dimensional Rosenbrock and SRR benchmark functions, and the GORS-SSLPSO algorithm won in 100 dimensions.
The above analysis shows that the SAGD algorithm is equally effective in high-dimensional expensive problems. In conclusion, the SAGD algorithm provides good solutions to low, medium, and high-dimensional expensive optimization problems. There are several reasons why our algorithm can provide better solutions. The first is that the combination of GOA and DE algorithms enhances the local exploitation ability and global search ability of the surrogate model; the control strategy of escaping the local optima subtly controls the alternation of GOA and DE. Secondly, the add-point strategy based on the database of the surrogate model can again select the appropriate samples for true evaluation. Finally, the generation-based restart strategy makes the surrogate model better use the information in the sample database.

4.4. Effects of the Hybrid of GOA and DE

To verify the effectiveness of the GOA and DE components of SAGD, we performed a validity analysis on the Griewank and SRR functions in 50 and 100 dimensions. The three algorithms compared were SADE, SAGOA, and SAGD, where SADE is SAGD without the GOA and SAGOA is SAGD without the DE algorithm. The average convergence curves obtained for these three algorithms over 20 independent runs are shown in Figure 8 and Figure 9. In the 50-dimensional Griewank function plot, we can see that the three algorithms converged with similar performance in the early iterations. Still, our proposed algorithm showed strong competitiveness in the late iterations because the SAGD algorithm combines the unique advantages of the two algorithms and can balance exploration and exploitation well.
In the Griewank function plot in 100 dimensions, we can see that SAGOA outperformed the SAGD algorithm, which is due to the low exploration ability of the DE algorithm in higher dimensions, and the operation of the DE algorithm in the SAGD algorithm limits the number of GOA runs. The SRR plots for 50 and 100 dimensions show smoother declines of all three curves, but the SAGD algorithm outperformed the other two algorithms. The SADE was better than the SAGOA in lower dimensions, and the SAGOA was better than the SADE algorithm in higher dimensions. The above analysis shows that the combination of GOA and DE algorithms can compensate for the disadvantages of individual algorithms and improve the algorithm’s performance in most cases.

4.5. Application on Node Deployment in a Wireless Sensor Network

We used the SAGD algorithm to solve the node deployment problem in 3D sensor networks, with each individual in the SAGD algorithm representing one combination of sensor node positions. The 3D terrain is projected downward onto a 2D plane; the first and second dimensions of each individual represent the horizontal and vertical positions of the first sensor, the third and fourth dimensions represent the horizontal and vertical positions of the second sensor, and so on. Once the horizontal and vertical coordinates of a sensor are determined, its height on the 3D terrain shown in Figure 10 is also determined. The whole terrain is divided into a grid of 1 × 1 pixels, and the signal coverage of the WSN can then be expressed using Equation (27).
$$\mathrm{rate} = \frac{1}{A} \sum_{a=1}^{A} \bigcup_{b=1}^{B} F\left(S_b, L_a\right) \qquad (27)$$
where A denotes the number of pixels over the entire terrain area and B denotes the number of sensor nodes; F ( S b , L a ) can be calculated by Equation (26).
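A minimal Python sketch of Equation (27), counting a pixel as covered when at least one sensor sees it without obstruction; has_obstacle is again a hypothetical LOS test, as in Equation (26).

```python
import numpy as np

def coverage_rate(sensors, grid_points, r_s, has_obstacle):
    """Coverage rate of Equation (27): fraction of grid points covered by
    at least one sensor without obstruction.

    sensors : (B, 3) node positions; grid_points : (A, 3) terrain pixels.
    """
    n_covered = 0
    for p in grid_points:
        for s in sensors:
            if np.linalg.norm(s - p) < r_s and not has_obstacle(s, p):
                n_covered += 1
                break                      # pixel is covered; move to the next one
    return n_covered / len(grid_points)
```

In the SAGD application, this rate is the fitness value of an individual, i.e., the quantity the algorithm tries to maximize with the expensive LOS checks.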
To verify the effectiveness of the SAGD algorithm in solving this problem, we compare SAGD with C-PGDRC [51], EBH [52], and MSAALO [53]. All three comparison algorithms were used to solve the 3D sensor node deployment problem. MSAALO is a surrogate-assisted meta-heuristic; the remaining two are improved meta-heuristic algorithms. Experiments for all four algorithms were conducted on the same simulated terrain. To better compare our algorithm with these three algorithms, we set up 30, 40, 50, and 60 sensors for each experiment and ran the experiment 20 times to take the average. The experimental results are shown in Table 7, where N represents the unknown data. The number of real evaluations was 1000 for the SAGD algorithm and MSAALO algorithm and more than 1000 for both the EBH algorithm and the C-PGDRC algorithm.
From Table 7, we can see that as the number of nodes increases, the coverage rate of the WSN also increases. Surrogate-assisted meta-heuristic algorithms have significant advantages over ordinary meta-heuristic algorithms. Our algorithm showed promising results on all node numbers compared to the other three algorithms. The standard deviation was not given in the original paper of the comparison algorithms, so only the standard deviation of the SAGD algorithm is provided here. Through the standard deviation of the SAGD algorithm, we can see that the SAGD algorithm has strong stability when solving such problems, showing its superiority.

5. Conclusions

Our method uses some of the best sample information to construct local surrogate models, and we proposed an add-point strategy based on the information from historical surrogate models. The information of each constructed surrogate model is stored in a database, from which we can select more competitive individuals for real fitness evaluation. Two efficient meta-heuristic algorithms, GOA and DE, update the training samples. The combination of the GOA and DE algorithms enhances the local exploitation ability and global search ability on the surrogate model, and a control strategy allows the algorithm to escape from local optima. A generation-based restart strategy is also incorporated to select better samples and to avoid the GOA and DE being misled by overfitted surrogate models. The RBF models are updated every K generations, allowing both meta-heuristic algorithms to fully explore the approximate fitness landscape.
We tested the SAGD algorithm using seven commonly used benchmark functions, and the experimental results show that the SAGD algorithm obtains good results in several dimensions. Finally, the SAGD algorithm was used to solve the deployment problem of 3D sensor nodes. It would be good to apply the SAGD algorithm to more practical problems in future work; the degree of fit of various meta-heuristic algorithms and different surrogate models is also worth considering.

Author Contributions

Conceptualization, L.-G.Z.; methodology, L.-G.Z.; software, S.-C.C.; validation, J.-S.P. and S.-C.C.; formal analysis, C.-S.S.; investigation, J.W.; resources, J.-S.P.; data curation, L.-G.Z.; writing—original draft preparation, L.-G.Z.; writing—review and editing, L.-G.Z.; visualization, L.-G.Z.; supervision, J.-S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GOA: Gannet optimization algorithm
DE: Differential evolution
SAGD: Surrogate-assisted model combined with the GOA and DE algorithms
WSN: Wireless sensor network
RBF: Radial basis function
GP: Gaussian process
LHS: Latin hypercube sampling
GA: Genetic algorithm
PSO: Particle swarm optimization
PPE: Phasmatodea population evolution
WOA: Whale optimization algorithm
SFLA: Shuffled frog leaping algorithm
CR: Crossover probability
LOS: Line of sight
SDB: Sample database
SADB: Surrogate model database
ps: Population size
tpc: An integer in the range [1, ps]
FESmax: Maximum number of iterations
GORS-SSLPSO: Generation-based optimal restart strategy for surrogate-assisted social learning particle swarm optimization
FHSAPPSO: Fuzzy hierarchical surrogate-assisted probabilistic particle swarm optimization
SRR: Shifted rotated Rastrigin
RHC: Rotated hybrid composition
SADE: SAGD without the GOA
SAGOA: SAGD without the DE algorithm
C-PGDRC: Parallel gases Brownian motion optimization
EBH: Enhanced black hole
MSAALO: Mahalanobis surrogate-assisted ant lion optimization

References

1. Jamshed, M.A.; Ali, K.; Abbasi, Q.H.; Imran, M.A.; Ur-Rehman, M. Challenges, applications and future of wireless sensors in Internet of Things: A review. IEEE Sens. J. 2022, 22, 5482–5494.
2. Zhao, Q.; Li, C.; Zhu, D.; Xie, C. Coverage Optimization of Wireless Sensor Networks Using Combinations of PSO and Chaos Optimization. Electronics 2022, 11, 853.
3. Akay, B.; Karaboga, D. Artificial bee colony algorithm for large-scale problems and engineering design optimization. J. Intell. Manuf. 2012, 23, 1001–1014.
4. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social network search for solving engineering optimization problems. Comput. Intell. Neurosci. 2021, 2021, 8548639.
5. Cheng, M.Y.; Prayogo, D. Symbiotic organisms search: A new metaheuristic optimization algorithm. Comput. Struct. 2014, 139, 98–112.
6. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47.
7. Moh’d Alia, O.; Al-Ajouri, A. Maximizing wireless sensor network coverage with minimum cost using harmony search algorithm. IEEE Sens. J. 2016, 17, 882–896.
8. Wu, C.; Fu, S.; Li, T. Research of The WSN Routing based on Artificial Bee Colony Algorithm. J. Inf. Hiding Multim. Signal Process. 2017, 8, 120–126.
9. Wu, C.M.; Yang, T. An Improved DV-HOP Algorithm was Applied for The Farmland Wireless Sensor Network. J. Inf. Hiding Multim. Signal Process. 2017, 8, 148–155.
10. Kaveh, A.; Seddighian, M. Domain decomposition of finite element models utilizing eight meta-heuristic algorithms: A comparative study. Mech. Based Des. Struct. Mach. 2022, 50, 2616–2634.
11. Temel, S.; Unaldi, N.; Kaynak, O. On deployment of wireless sensors on 3-D terrains to maximize sensing coverage by utilizing cat swarm optimization with wavelet transform. IEEE Trans. Syst. Man Cybern. Syst. 2013, 44, 111–120.
12. Jin, Y. Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm Evol. Comput. 2011, 1, 61–70.
13. Luo, J.; Gupta, A.; Ong, Y.S.; Wang, Z. Evolutionary optimization of expensive multiobjective problems with co-sub-Pareto front Gaussian process surrogates. IEEE Trans. Cybern. 2018, 49, 1708–1721.
14. Kleijnen, J.P. Kriging metamodeling in simulation: A review. Eur. J. Oper. Res. 2009, 192, 707–716.
15. Zhou, Z.; Ong, Y.S.; Nguyen, M.H.; Lim, D. A study on polynomial regression and Gaussian process global surrogate model in hierarchical surrogate-assisted evolutionary algorithm. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 3, pp. 2832–2839.
16. Gunn, S.R. Support vector machines for classification and regression. ISIS Tech. Rep. 1998, 14, 5–16.
17. Billings, S.A.; Zheng, G.L. Radial basis function network configuration using genetic algorithms. Neural Netw. 1995, 8, 877–890.
18. Yu, H.; Tan, Y.; Sun, C.; Zeng, J.; Jin, Y. An adaptive model selection strategy for surrogate-assisted particle swarm optimization algorithm. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–8.
19. Elsayed, K.; Lacor, C. Robust parameter design optimization using Kriging, RBF and RBFNN with gradient-based and evolutionary optimization techniques. Appl. Math. Comput. 2014, 236, 325–344.
20. Dong, H.; Li, C.; Song, B.; Wang, P. Multi-surrogate-based Differential Evolution with multi-start exploration (MDEME) for computationally expensive optimization. Adv. Eng. Softw. 2018, 123, 62–76.
21. Díaz-Manríquez, A.; Toscano, G.; Coello Coello, C.A. Comparison of metamodeling techniques in evolutionary algorithms. Soft Comput. 2017, 21, 5647–5663.
22. Cheng, K.; Lu, Z.; Ling, C.; Zhou, S. Surrogate-assisted global sensitivity analysis: An overview. Struct. Multidiscip. Optim. 2020, 61, 1187–1213.
23. Yu, H.; Tan, Y.; Sun, C.; Zeng, J. A generation-based optimal restart strategy for surrogate-assisted social learning particle swarm optimization. Knowl. Based Syst. 2019, 163, 14–25.
24. Pan, J.S.; Liu, N.; Chu, S.C.; Lai, T. An efficient surrogate-assisted hybrid optimization algorithm for expensive optimization problems. Inf. Sci. 2021, 561, 304–325.
25. Wang, H.; Jin, Y.; Doherty, J. Committee-based active learning for surrogate-assisted particle swarm optimization of expensive problems. IEEE Trans. Cybern. 2017, 47, 2664–2677.
26. Stein, M. Large sample properties of simulations using Latin hypercube sampling. Technometrics 1987, 29, 143–151.
27. Ma, Y.; Xiao, Y.; Wang, J.; Zhou, L. Multicriteria optimal Latin hypercube design-based surrogate-assisted design optimization for a permanent-magnet vernier machine. IEEE Trans. Magn. 2021, 58, 1–5.
28. Gunst, R.F.; Mason, R.L. Fractional factorial design. Wiley Interdiscip. Rev. Comput. Stat. 2009, 1, 234–244.
29. Won, K.S.; Ray, T. Performance of kriging and cokriging based surrogate models within the unified framework for surrogate assisted optimization. In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), Portland, OR, USA, 19–23 June 2004; Volume 2, pp. 1577–1585.
30. Jin, R.; Chen, W.; Simpson, T.W. Comparative studies of metamodelling techniques under multiple modelling criteria. Struct. Multidiscip. Optim. 2001, 23, 1–13.
31. Cox, D.D.; John, S. A statistical method for global optimization. In Proceedings of the 1992 IEEE International Conference on Systems, Man, and Cybernetics, Chicago, IL, USA, 18–21 October 1992; pp. 1241–1246.
32. Kushner, H.J. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. J. Basic Eng. 1964, 86, 97–106.
33. Carpio, R.R.; Giordano, R.C.; Secchi, A.R. Enhanced surrogate assisted global optimization algorithm based on maximizing probability of improvement. In Computer Aided Chemical Engineering; Elsevier: Amsterdam, The Netherlands, 2017; Volume 40, pp. 2065–2070.
34. Forrester, A.I.; Keane, A.J. Recent advances in surrogate-based optimization. Prog. Aerosp. Sci. 2009, 45, 50–79.
35. Yu, M.; Li, X.; Liang, J. A dynamic surrogate-assisted evolutionary algorithm framework for expensive structural optimization. Struct. Multidiscip. Optim. 2020, 61, 711–729.
36. Whitley, D. A genetic algorithm tutorial. Stat. Comput. 1994, 4, 65–85.
37. Loukhaoukha, K. On the security of digital watermarking scheme based on SVD and tiny-GA. J. Inf. Hiding Multimed. Signal Process. 2012, 3, 135–141.
38. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30.
39. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57.
40. Wang, H.; Wu, Z.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing particle swarm optimization using generalized opposition-based learning. Inf. Sci. 2011, 181, 4699–4714.
41. Sun, C.; Jin, Y.; Zeng, J.; Yu, Y. A two-layer surrogate-assisted particle swarm optimization algorithm. Soft Comput. 2015, 19, 1461–1475.
42. Pan, J.S.; Zhang, L.G.; Wang, R.B.; Snášel, V.; Chu, S.C. Gannet optimization algorithm: A new metaheuristic algorithm for solving engineering optimization problems. Math. Comput. Simul. 2022, 202, 343–373.
43. Song, P.C.; Chu, S.C.; Pan, J.S.; Yang, H. Simplified Phasmatodea population evolution algorithm for optimization. Complex Intell. Syst. 2022, 8, 2749–2767.
44. Zhang, L.G.; Fan, F.; Chu, S.C.; Garg, A.; Pan, J.S. Hybrid Strategy of Multiple Optimization Algorithms Applied to 3-D Terrain Node Coverage of Wireless Sensor Network. Wirel. Commun. Mob. Comput. 2021, 2021, 6690824.
45. Snasel, V.; Kong, L.; Tsai, P.W.; Pan, J.S. Sink Node Placement Strategies based on Cat Swarm Optimization Algorithm. J. Netw. Intell. 2016, 1, 52–60.
46. Pan, J.S.; Kong, L.; Sung, T.W.; Tsai, P.W.; Snášel, V. α-Fraction first strategy for hierarchical model in wireless sensor networks. J. Internet Technol. 2018, 19, 1717–1726.
47. Nguyen, T.T.; Lin, W.W.; Vo, Q.S.; Shieh, C.S. Delay aware routing based on queuing theory for wireless sensor networks. Data Sci. Pattern Recognit. 2021, 5, 1–10.
48. Xue, X.; Pan, J.S. A compact co-evolutionary algorithm for sensor ontology meta-matching. Knowl. Inf. Syst. 2018, 56, 335–353.
49. Chu, S.C.; Du, Z.G.; Peng, Y.J.; Pan, J.S. Fuzzy hierarchical surrogate assists probabilistic particle swarm optimization for expensive high dimensional problem. Knowl. Based Syst. 2021, 220, 106939.
50. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005, 2005005, 2005.
51. Gao, M.; Pan, J.S.; Li, J.P.; Zhang, Z.P.; Chai, Q.W. 3-D terrains deployment of wireless sensors network by utilizing parallel gases Brownian motion optimization. J. Internet Technol. 2021, 22, 13–29.
52. Pan, J.S.; Chai, Q.W.; Chu, S.C.; Wu, N. 3-D terrain node coverage of wireless sensor network using enhanced black hole algorithm. Sensors 2020, 20, 2411.
53. Li, Z.; Chu, S.C.; Pan, J.S.; Hu, P.; Xue, X. A Mahalanobis Surrogate-Assisted Ant Lion Optimization and Its Application in 3D Coverage of Wireless Sensor Networks. Entropy 2022, 24, 586.
Figure 1. Three-dimensional terrain node signal obstruction example.
Figure 2. Framework of the algorithm SAGD.
Figure 3. Convergence curves of SAGD, GOA, DE, GORS-SSLPSO, and FHSAPPSO for 10D benchmark functions. (a) Ellipsoid. (b) Rosenbrock. (c) Ackley. (d) Griewank. (e) Rastrigin.
Figure 4. Convergence curves of SAGD, GOA, DE, GORS-SSLPSO, and FHSAPPSO for 20D benchmark functions. (a) Ellipsoid. (b) Rosenbrock. (c) Ackley. (d) Griewank. (e) Rastrigin.
Figure 5. Convergence curves of SAGD, GOA, DE, GORS-SSLPSO, and FHSAPPSO for 30D benchmark functions. (a) Ellipsoid. (b) Rosenbrock. (c) Ackley. (d) Griewank. (e) Rastrigin.
Figure 6. Convergence curves of SAGD, GOA, DE, GORS-SSLPSO, and FHSAPPSO for 50D benchmark functions. (a) Ellipsoid. (b) Rosenbrock. (c) Ackley. (d) Griewank. (e) SRR. (f) RHC.
Figure 7. Convergence curves of SAGD, GOA, DE, GORS-SSLPSO, and FHSAPPSO for 100D benchmark functions. (a) Ellipsoid. (b) Rosenbrock. (c) Ackley. (d) Griewank. (e) SRR. (f) RHC.
Figure 8. Convergence curves of SADE, SAGOA, and SAGD over 50D Griewank and SRR functions. (a) Griewank. (b) SRR.
Figure 9. Convergence curves of SADE, SAGOA, and SAGD over 100D Griewank and SRR functions. (a) Griewank. (b) SRR.
Figure 10. Deployment of 3D topographic maps of sensor nodes.
Table 2. Comparison of statistical results on 10D.
Function	Method	Mean	Best	Worst	Std.	WS Rank-Test
EllipsoidGOA32.05986231.7146139.488753.92827+
DE106.5104225.3724152.099839.62943+
GORS-SSLPSO0.0025950.1133980.0452550.034015+
FHSAPPSO0.77288.689443.149322.43648+
SAGD 4.07 × 10 7 0.000191 6.05 × 10 5 7.55 × 10 5 /
RosenbrockGOA93.299241163.463579.0592378.2185+
DE408.05151489.294847.2024350.04+
GORS-SSLPSO11.572731.609920.20867.74637+
FHSAPPSO41.068782.057759.645812.6012+
SAGD8.781818.884568.835570.034997/
AckleyGOA16.861620.372918.87381.19643+
DE19.245920.618719.87910.414064+
GORS-SSLPSO3.278727.943115.130481.56844+
FHSAPPSO7.5428211.31659.588371.36459+
SAGD0.0036490.3614660.1906190.131978/
GriewankGOA23.3198130.888788.1402234.36457+
DE36.35064154.4768108.882333.02842+
GORS-SSLPSO0.2925051.039130.6761050.281672+
FHSAPPSO1.08361.693941.327150.184562+
SAGD0.0021490.1103660.0248660.033259/
RastriginGOA73.54777134.3728100.573217.1564+
DE85.33661129.4805104.517114.1654+
GORS-SSLPSO10.073176.039632.59419.3827+
FHSAPPSO46.119886.893874.925712.5114+
SAGD 8.45 × 10 8 0.0035160.0004780.001099/
Table 3. Comparison of statistical results on 20D.
Function	Method	Mean	Best	Worst	Std.	WS Rank-Test
EllipsoidGOA260.40761073.858768.2923212.7306+
DE696.42241084.811865.7986112.3264+
GORS-SSLPSO0.1021030.430270.2470980.119181+
FHSAPPSO6.2462623.067514.93045.13703+
SAGD 9.50 × 10 9 3.21 × 10 5 1.17 × 10 5 1.16 × 10 5 /
RosenbrockGOA538.33643697.2242064.123989.6387+
DE2525.7564328.3623342.573556.2642+
GORS-SSLPSO30.1832874.8948648.2801816.5743+
FHSAPPSO66.58076116.484996.6563816.72317+
SAGD18.6277918.7648618.723710.036755/
AckleyGOA13.26620.39418.80162.5611+
DE19.899520.623120.3620.208332+
GORS-SSLPSO1.12796710.496694.3456612.535612+
FHSAPPSO6.6359.66447.791671.01214+
SAGD0.0073370.1060950.0393360.034759/
GriewankGOA170.0912381.3964269.243360.03781+
DE217.6609367.7392315.780946.80688+
GORS-SSLPSO0.1536320.4715660.3544870.092308+
FHSAPPSO1.357942.173521.77290.269687+
SAGD 6.05 × 10 5 0.0234780.0044280.007241/
RastriginGOA128.1745277.8843221.542245.55824+
DE238.623276.5483257.25210.93466+
GORS-SSLPSO14.9283188.965647.4356750.62116+
FHSAPPSO117.9776183.3136151.913821.55732+
SAGD 1.04 × 10 9 5.03 × 10 6 6.67 × 10 7 1.58 × 10 6 /
Table 4. Comparison of statistical results on 30D.
Function	Method	Mean	Best	Worst	Std.	WS Rank-Test
EllipsoidGOA579.23672232.4861627.833489.0814+
DE1817.552406.8322064.727173.2234+
GORS-SSLPSO0.3856221.5716430.7792070.364867+
FHSAPPSO13.7947.329731.898312.7156+
SAGD 3.55 × 10 9 1.99 × 10 5 4.67 × 10 6 6.60 × 10 6 /
RosenbrockGOA837.27436979.0943869.0611987.883+
DE5140.7226290.0925771.844376.0112+
GORS-SSLPSO63.59973127.654798.7516220.6104+
FHSAPPSO57.4259110.727384.9420719.11291+
SAGD28.5627128.6504728.599870.028426/
AckleyGOA19.174920.409720.02640.36001+
DE20.148520.705320.50060.181577+
GORS-SSLPSO3.4250437.8082695.3371171.479306+
FHSAPPSO6.418988.414737.446670.604243+
SAGD0.000840.0358940.0140720.014881/
GriewankGOA173.2629587.0626365.7254135.9223+
DE435.1658613.6942539.353555.08831+
GORS-SSLPSO0.2542310.5354320.3599670.102317+
FHSAPPSO1.986473.326292.446640.470472+
SAGD 5.57 × 10 6 0.0052370.0010740.001853/
RastriginGOA281.1905407.6522365.269637.05603+
DE360.4444426.2734401.81121.96221+
GORS-SSLPSO43.8226106.87564.0372621.26781+
FHSAPPSO187.2826291.9423253.720226.74184+
SAGD 2.35 × 10 11 1.57 × 10 6 2.06 × 10 7 4.87 × 10 7 /
Table 5. Comparison of statistical results on 50D.
Function	Method	Mean	Best	Worst	Std.	WS Rank-Test
EllipsoidGOA343.77964456.9662216.8471485.085+
DE6217.8677707.0676902.88528.7414+
GORS-SSLPSO0.002329471.903584.07133178.5927+
FHSAPPSO5.96834710.161868.1537711.424026+
SAGD 4.08 × 10 12 0.000424 4.99 × 10 5 0.000133/
RosenbrockGOA1236.3917156.2673824.7571677.912+
DE9785.58714020.2612205.941375.941+
GORS-SSLPSO42.2773495.4817556.4372620.29723 =
FHSAPPSO49.4624474.5651255.933618.948982+
SAGD47.1044748.3374947.910330.453592/
AckleyGOA7.9616220.593314.96954.83463+
DE20.438920.857520.72080.123546+
GORS-SSLPSO2.814056.3518794.095041.037834+
FHSAPPSO2.9342694.3084833.7086590.445261+
SAGD 7.79 × 10 11 3.01 × 10 7 3.06 × 10 8 9.50 × 10 8 /
GriewankGOA13.56748815.6762353.2857333.8933+
DE776.58211066.333939.697687.98558+
GORS-SSLPSO 2.81 × 10 5 0.0173210.0033380.005807+
FHSAPPSO1.0654661.2534011.1538930.062102+
SAGD 1.06 × 10 12 8.05 × 10 8 8.09 × 10 9 2.54 × 10 8 /
SRRGOA699.66441088.573866.0672121.4055+
DE881.39731155.8421087.91378.80308+
GORS-SSLPSO−135.396−24.3956−73.146935.7645+
FHSAPPSO-80.0944292.118102.1862108.9186+
SAGD−212.506249.6789−110.814133.0928/
RHCGOA1315.8211438.5541379.78840.68161+
DE1365.3681450.4781421.53133.20405+
GORS-SSLPSO948.37381128.2761051.30165.45301+
FHSAPPSO1031.2191177.0531080.80845.1294+
SAGD910910.0001910 3.22 × 10 5 /
Table 6. Comparison of statistical results on 100D.
Function	Method	Mean	Best	Worst	Std.	WS Rank-Test
EllipsoidGOA1435.99518014.618009.5375426.974+
DE30937.0434202.6732093.35925.4301+
GORS-SSLPSO1.3405282450.227514.2376813.412+
FHSAPPSO64.20482121.985.3216615.5251+
SAGD 1.39 × 10 8 0.0036320.0005730.001138/
RosenbrockGOA5404.47829669.0315022.017685.385+
DE29923.2133170.5831613.671138.18+
GORS-SSLPSO96.21071147.2883102.52915.74872
FHSAPPSO130.4009186.9138153.2217.92665 =
SAGD97.8853798.0056497.969170.045568/
AckleyGOA8.3222220.683317.62194.57513+
DE20.904221.05120.9650.049507+
GORS-SSLPSO6.22773413.7832710.199492.843565+
FHSAPPSO5.123298.9389986.3822341.180745+
SAGD0.0022070.0193050.0092070.005136/
GriewankGOA91.098431946.847875.4752677.6542+
DE2156.5782457.7352313.977105.8374+
GORS-SSLPSO0.09598990.3419818.2247837.98808+
FHSAPPSO3.3230627.2306524.4464241.069094+
SAGD 2.51 × 10 5 0.0046960.0015630.001654/
SRRGOA1974.1172836.6322346.489323.2095+
DE2536.2893086.0532879.253180.671+
GORS-SSLPSO1005.0551266.6761133.15794.69411
FHSAPPSO1106.1031564.5871382.903139.4766+
SAGD1054.4481697.6921406.459209.4309/
RHCGOA1338.6081581.6251457.95181.98479+
DE1470.6381578.8471528.80329.61393+
GORS-SSLPSO1371.8711482.2531436.11232.37645+
FHSAPPSO1282.9871360.5041309.6223.53878+
SAGD910910.0125910.00230.0043/
Table 7. Comparing results with different numbers of nodes.
Num	EBH	C-PGDRC	MSAALO	SAGD (STD)
30	48.01%	47.55%	47.56%	51.36% (0.0166)
40	57.85%	57.40%	59.45%	62.72% (0.0191)
50	65.06%	64.99%	66.87%	71.36% (0.0179)
60	71.26%	71.27%	N	77.08% (0.0180)
