Article

Surrogate-Assisted Evolutionary Multi-Objective Antenna Design

1 School of Electronic Engineering, Xidian University, Xi’an 710071, China
2 Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, Anhui University, Hefei 230601, China
3 College of Mathematics Science, Inner Mongolia Normal University, Hohhot 010028, China
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(19), 3862; https://doi.org/10.3390/electronics14193862
Submission received: 15 August 2025 / Revised: 22 September 2025 / Accepted: 25 September 2025 / Published: 29 September 2025

Abstract

This paper presents a multi-problem surrogate-assisted evolutionary multi-objective optimization approach for antenna design. By transforming the traditional antenna design optimization problem into expensive multi-objective optimization problems, this method employs a multi-problem surrogate (MPS) model to stack multiple antenna design problems. The MPS model is a knowledge-transfer framework that stacks multiple surrogate models (e.g., Gaussian Processes) trained on related antenna design problems (e.g., Yagi–Uda antennas with varying director configurations) to accelerate optimization. The parameters of the Yagi–Uda antenna that determine its radiation patterns and beamwidth, across various director configurations, are treated as decision variables. Several surrogates are constructed based on the number of directors of the Yagi–Uda antenna. The MPS algorithm identifies promising candidate solutions using an expected improvement strategy and refines them through true function evaluations, effectively balancing exploration with computational cost. Compared to benchmark algorithms assessed by hypervolume, our approach demonstrated superior average performance while requiring fewer function evaluations.

1. Introduction

In recent years, evolutionary algorithms (EAs) have found broad applications in antenna synthesis and design optimization [1]. Antenna design typically involves complex and challenging constrained optimization problems to meet specific performance metrics, such as the voltage standing wave ratio (VSWR) [2], gain [3], directional diagram [4], axial ratio [5], bandwidth [6], radiation pattern control [7], and size constraints [8]. For antenna arrays, it is necessary to generate beam patterns with desired beamwidths while maintaining a constrained peak sidelobe level. Moreover, antenna performance must be validated through electromagnetic (EM) simulations, where the computational cost grows exponentially with structural complexity. Given the high computational cost, complex multi-objective trade-offs, and resource-intensive iterative processes, antenna design is widely regarded as an expensive multi-objective optimization problem. Traditional design methods rely heavily on expert knowledge and are often time-consuming and labor-intensive. EAs have proven to be effective optimization tools in the field of electromagnetics [9,10,11], especially in antenna design, where they offer clear advantages: not only do they overcome the limitations of conventional approaches, but they also enable efficient parameter tuning in response to evolving design requirements.
As a key research direction in computational intelligence and complex systems, evolutionary algorithms offer significant advantages over traditional mathematical programming and gradient-based methods, particularly in solving black-box, nonlinear, and multimodal problems. In the antenna design domain, evolutionary strategies such as Genetic Algorithms (GAs) [12], Particle Swarm Optimization (PSO) [13], and Differential Evolution (DE) [14] have been widely applied to tasks such as geometry optimization, pattern shaping, and bandwidth enhancement, often outperforming traditional heuristic methods. Despite their strong global search capabilities, EAs face a fundamental challenge when applied to expensive multi-objective problems (eMOPs): the high cost of fitness evaluations [15]. In EM simulation-driven optimization, a single structural evaluation can take from several minutes to hours, which limits both population size and iteration count during evolution [16,17].
To address this issue, researchers have proposed Surrogate-Assisted Evolutionary Algorithms (SA-EAs) in recent years. These approaches construct approximate models (surrogates) to replace the true objective functions, significantly reducing the number of actual simulations while maintaining convergence accuracy. Common surrogate models include Gaussian Processes (GPs), Support Vector Regression (SVR), Random Forests (RFs), and Neural Networks (NNs) [18,19,20], which aim to approximate the true objective function using existing data, enabling fast prediction in unexplored regions. In [21], Knowles introduced the ParEGO method, which scalarizes multi-objective optimization problems using an augmented Tchebycheff function and employs a GP model with an expected improvement (EI) criterion to guide the selection of evaluation points, thereby improving efficiency. Unlike this weighted aggregation strategy, Wang et al. selected candidate solutions based on hypervolume contribution, which performs well but incurs high computational costs in high-dimensional objective spaces [22]. Zhang et al. proposed MOEA/D-EGO [23], which decomposes the problem into sub-tasks, uses fuzzy clustering and GP modeling, and introduces an EI matrix that considers both diversity and density. However, most of these methods rely solely on the data from the current problem, ignoring transferable knowledge from related problems.
Most surrogate-assisted MOEAs focus only on the target problem and neglect useful information from other related source problems. Studies show that integrating EAs with cross-domain knowledge transfer—known as Transfer Evolutionary Multi-objective Optimization (TEMO) [24,25,26]—can improve optimization efficiency and effectiveness. In [27], Jiang et al. proposed a transfer learning-based dynamic multi-objective EA (Tr-DMOEA), which uses the Pareto-optimal set from the previous stage as source knowledge to generate the initial population for the next stage, thereby accelerating convergence. To further enhance efficiency, the HeMPS framework [28] constructs a large-scale surrogate model library, aligns feature spaces across tasks using kernel methods, and introduces Adaptive k-fold Cross-Validation (AKCV) to balance modeling cost and prediction accuracy. Similarly, the “Sparse Transfer Stacking” approach [29] incorporates L1 regularization to filter the most relevant source models, effectively mitigating the risk of negative transfer. This method also integrates the expected improvement (EI) criterion to enable cost-sensitive optimization in scenarios with non-uniform evaluation costs. In [30], the concurrent optimization of multiple multi-objective problems facilitated genetic information sharing, improving convergence speed and solution diversity. These studies suggest that knowledge transfer mechanisms under multi-task frameworks can accelerate optimization and open new pathways for solving problems in dynamic environments.
Current TEMO approaches demonstrate limited capability in effectively utilizing knowledge from source models, primarily handling only a small subset of available reference information. In practical applications, however, there exists substantial accumulated expertise derived from diverse design tasks—including both finalized projects and ongoing tasks—all of which could potentially enhance performance in specific objective optimization scenarios. Implementations such as pre-trained models in the fields of deep learning and parallel task execution frameworks in cloud computing have demonstrated practical effectiveness in knowledge transfer applications.
To address this, we integrate the Multi-Problem Surrogate-Assisted (MPS) algorithm into the expensive antenna design problem and propose an evolutionary algorithm for multi-problem surrogate-assisted antenna design. For high-cost simulation tasks, this approach uses historical data from source problems along with limited real evaluations on the target problem to collaboratively construct cross-problem surrogate models. The goal is to achieve high-accuracy, low-cost, and generalizable antenna optimization. Building on this concept, our study proposes a novel methodology that integrates multi-problem surrogate modeling, evolutionary strategies, and electromagnetic simulation feedback. The method is tailored to address the computational bottlenecks and engineering challenges present in modern antenna design. The contributions of this paper are two-fold: (1) Multi-Problem Surrogate Framework: Develops a novel meta-regression model that intelligently combines multiple surrogate models, enabling effective knowledge transfer across related optimization problems while preventing negative transfer through adaptive weighting. (2) Computationally Efficient Optimization: Achieves superior performance with fewer evaluations than benchmarks by integrating expected improvement with evolutionary search, significantly reducing simulation costs.
The rest of the study is structured as follows. Section 2 first introduces the modeling process of converting antenna design into a multi-objective problem, and then elaborates on the multi-problem surrogate model. Section 3 details the proposed MPS, including its overall framework, specific components, and parameter settings. Section 4 reports and analyzes the comprehensive experimental results. Section 5 summarises the full paper and provides an outlook on future research directions.

2. Background and Motivation

This section first describes the Yagi antenna design problem, specified by an objective function (implemented as an anonymous function) together with the variable bounds and the solver options structure. Then the multi-problem surrogates are introduced in detail.

2.1. Yagi Antenna Design

The Yagi antenna investigated in this work consists of a driven element, a reflector, and multiple directors [31]. Its performance primarily depends on the lengths and spacings of these metallic elements. To facilitate parametric modeling and optimization, the antenna geometry is abstracted into a decision variable vector denoted by x = [r_l, r_s, d_l, d_s], where r_l represents the length of the reflector, r_s denotes the spacing between the reflector and the driven element, d_l is the length of the director elements, and d_s is the spacing between the director elements. The decision vector x, thus, fully defines the antenna geometry, making it possible to reconstruct the structure automatically within an electromagnetic simulation environment during the optimization process.
Before carrying out the optimization process, the initial guess of the radiation pattern was plotted in 3D. Simulation results revealed that this initial Yagi–Uda antenna did not exhibit enhanced directivity in the preferred direction, namely toward the zenith (elevation = 90°), indicating that it is a poorly designed radiator. Therefore, further optimization of the antenna structure is necessary to improve its performance.
To comprehensively assess the antenna’s performance within the desired frequency range, several key metrics are considered, including the maximum directivity D max ( f ) , the reflection coefficient Γ ( f ) , the front-to-back ratio F B ( f ) , and the mismatch loss T ( f ) . Among these metrics, D max ( f ) serves as a direct measure of the antenna’s ability to radiate or receive energy in a desired direction, while T ( f ) quantifies impedance matching performance, which strongly influences signal transmission efficiency. According to established research in antenna performance optimization [32], these two factors are typically the most critical for evaluating and improving overall antenna behavior.
Considering the limited effectiveness and convergence of multi-objective optimization algorithms in high-dimensional objective spaces, we selected D max ( f ) and T ( f ) as the primary objective functions, as they capture the essential trade-off between radiation performance and power efficiency. Other performance-related factors, including F B ( f ) and Γ ( f ) , though also important, are incorporated as constraint functions to ensure engineering feasibility while maintaining optimization tractability.
In the following, we present the mathematical formulations of the performance metrics, constraint penalties, and the overall objective function used in our optimization framework. The mismatch loss is calculated from the reflection coefficient as
T(f) = 10 log10(1 − |Γ(f)|²),
where T ( f ) denotes the mismatch loss (in decibels, dB), and Γ ( f ) represents the reflection coefficient at frequency f.
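As a quick sanity check, the mismatch-loss formula can be computed directly from a complex reflection coefficient. This helper is an illustrative sketch, not code from the paper:

```python
import math

def mismatch_loss_db(gamma: complex) -> float:
    """Mismatch loss T(f) = 10*log10(1 - |Gamma(f)|^2), in dB.

    gamma is the complex reflection coefficient at frequency f;
    |gamma| must be < 1 for a passive, non-total reflection.
    """
    mag2 = abs(gamma) ** 2
    if mag2 >= 1.0:
        raise ValueError("reflection coefficient magnitude must be < 1")
    return 10.0 * math.log10(1.0 - mag2)
```

A perfect match (Γ = 0) gives T = 0 dB, and any mismatch makes T negative, so adding T(f) to the directivity in the composite gain below penalizes poor matching.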
Since the Yagi antenna needs to maintain satisfactory performance across the bandwidth, these metrics are evaluated at the center frequency f c as well as at the bandwidth edge frequencies  
f_min = f_c − B/2,  f_max = f_c + B/2,
where B denotes the bandwidth over which the antenna performance is specified. We define the frequency set as F = { f min , f c , f max } . At each frequency point, the composite gain is defined as  
G ( f ) = D max ( f ) + T ( f ) ,
which integrates both the directivity and the mismatch effects.
To ensure the designed antenna meets practical engineering requirements, several performance constraints are introduced. Firstly, a gain constraint enforces that G(f) ≥ G_min. If this requirement is not met, a penalty term is added as
P_gain = Σ_{f∈F} max(0, G_min − G(f)).
Secondly, to guarantee gain flatness across the band, a deviation constraint is imposed such that |G(f) − G_min| ≤ ΔG, with the corresponding penalty expressed as
P_dev = (1/3) Σ_{f∈F} max(0, |G(f) − G_min| − ΔG).
Additionally, to ensure adequate directional properties, the front-to-back ratio is required to satisfy FB(f) ≥ FB_min, yielding the penalty
P_{F/B} = (1/3) Σ_{f∈F} max(0, FB_min − FB(f)).
Finally, for impedance matching, the reflection coefficient is constrained at the center frequency as Γ(f_c) ≤ Γ_max, resulting in the penalty
P_Γ = max(0, Γ(f_c) − Γ_max).
By integrating the above performance metrics and constraint penalties, the antenna optimization problem is formulated as a single-objective minimization problem. The objective function is defined as
min f(x) = −G(f_c) + K · P(x),
where K is a large penalty factor employed to heavily penalize designs that violate the constraints. The overall penalty function P ( x ) is given by
P(x) = max(0, P_gain + P_dev + P_{F/B} + P_Γ).
This modeling approach enables the optimization process to simultaneously pursue high gain at the center frequency while effectively controlling in-band performance flatness, front-to-back ratio, and impedance matching, thereby achieving a balanced and practically viable antenna design.
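The full penalty formulation of this subsection can be sketched as follows. The threshold values (G_min, ΔG, FB_min, Γ_max, and K) are illustrative placeholders, since the paper does not state its exact settings here:

```python
def penalized_objective(G, FB, gamma_c, freqs, f_c,
                        G_min=9.0, dG=1.0, FB_min=15.0,
                        gamma_max=0.33, K=50.0):
    """Single-objective penalty formulation of Section 2.1.

    G and FB map each frequency in `freqs` = {f_min, f_c, f_max} to the
    composite gain G(f) = D_max(f) + T(f) and the front-to-back ratio
    FB(f); gamma_c is |Gamma(f_c)|. All thresholds are hypothetical.
    """
    P_gain = sum(max(0.0, G_min - G[f]) for f in freqs)
    P_dev = sum(max(0.0, abs(G[f] - G_min) - dG) for f in freqs) / 3.0
    P_fb = sum(max(0.0, FB_min - FB[f]) for f in freqs) / 3.0
    P_gamma = max(0.0, gamma_c - gamma_max)
    P = max(0.0, P_gain + P_dev + P_fb + P_gamma)
    # Maximize gain at the center frequency; K heavily penalizes violations.
    return -G[f_c] + K * P
```

A feasible design incurs no penalty and the objective reduces to minimizing −G(f_c); any constraint violation adds K times the total violation.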

2.2. Multi-Problem Surrogates

Traditionally, the development of surrogate model techniques has mainly focused on approximating a single expensive objective function or constraint function. However, many engineering and scientific applications actually involve multiple interrelated problems. To effectively handle such situations, the concept of multi-problem surrogates (MPS) has emerged, aiming to simultaneously approximate several related problems by exploiting their shared information.
Recently, combining multi-problem surrogate strategies with evolutionary algorithms has attracted widespread attention. Studies have shown that there exists useful knowledge from other problems, which can be transferred to assist in optimizing the current problem [24]. This involves extracting valuable information to optimize the target multi-objective optimization (MOO) problem of interest. This approach is known as transfer evolutionary multi-objective optimization (TEMO), which can be categorized into problem-based transfer, model-based transfer, and feature-based transfer [33].
Since single-objective problems may have significant similarity with the given MOO problems, in [34], useful features are extracted from single-objective problems to enhance the solving capability for the considered MOO problem. In [35], positive prior knowledge is transferred into existing MOEAs via probabilistic models, thereby accelerating search convergence. Zou et al. extracted stable features from past community structures [36], preserving valuable characteristic information. These features were then transferred into the current optimization process to improve EAs. Meanwhile, in [37], Jiang et al. proposed a novel individual-based transfer learning approach, which combines a pre-search strategy to identify high-quality individuals with improved diversity, effectively mitigating negative transfer caused by individual clustering.
In recent years, Alan et al. proposed a TEMO multi-problem surrogate (MPS) algorithm that extracts useful information from source surrogate models and transfers it to optimize the target problem [38]. Overall, the relevant literature highlights that multi-problem surrogate strategies can significantly reduce computational burdens and improve predictive robustness by leveraging multiple data sources and the correlations among optimization objectives.

3. Methodology

In this section, the multiobjective antenna design optimization problem is defined first. Then, the general framework of the MPS algorithm is introduced. Finally, the components of the algorithm tailored for multi-objective antenna design optimization and their computational details are discussed.

3.1. Multiobjective Antenna Design

The Yagi–Uda antenna consists of a driven half-wavelength dipole, a reflector behind it, and directors in front. The reflector length r l shapes current distribution, enhancing back lobe suppression and the front-to-back ratio. The spacing r s between the driven element and reflector governs interference, reducing backward radiation. The director length d l sets resonance and coupling, impacting gain and frequency response, while spacing d s between directors and to the driven element controls main lobe formation and impedance. Optimizing [ r l , r s , d l , d s ] enables precise tailoring of the radiation pattern, gain, bandwidth, and matching, thus, meeting specific antenna performance requirements. As described in Section 2, the Yagi antenna design can be modeled as a constrained multiobjective optimization problem, which is defined as:
min F(x) = { −D_max(f), −T(f) }ᵀ
s.t. P_gain ≤ 0, P_dev ≤ 0, P_{F/B} ≤ 0, P_Γ ≤ 0,
where T(f) = 10 log10(1 − |Γ(f)|²) and Γ(f) is the reflection coefficient. The design parameter vector x = [r_l, r_s, d_l, d_s] includes the reflector length r_l, the distance between the driven element and the reflector r_s (reflector spacing), the director length d_l, and the spacing d_s between director elements as well as between the driven element and the nearest director. These parameters are subject to practical constraints, which collectively define the feasible design space X = [r_l^min, r_l^max] × [r_s^min, r_s^max] × [d_l^min, d_l^max] × [d_s^min, d_s^max].
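For illustration, the feasible design space X can be represented with simple box bounds. The numeric limits below (in wavelengths) are hypothetical placeholders, not the paper's actual constraints:

```python
import random

# Hypothetical bounds for x = [r_l, r_s, d_l, d_s]; the paper does not
# state its exact limits, so these values are placeholders.
BOUNDS = {
    "r_l": (0.40, 0.60),  # reflector length
    "r_s": (0.10, 0.35),  # reflector-to-driven-element spacing
    "d_l": (0.35, 0.50),  # director length
    "d_s": (0.10, 0.40),  # director spacing
}

def sample_design(rng: random.Random) -> list:
    """Draw one decision vector uniformly from the box X."""
    return [rng.uniform(lo, hi) for lo, hi in BOUNDS.values()]

def is_feasible(x) -> bool:
    """Check that x lies inside the feasible design space X."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(x, BOUNDS.values()))
```

Uniform sampling over such a box is also what the Latin hypercube initialization in Algorithm 2 refines by stratifying each dimension.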

3.2. General Framework of MPS

The construction method of the MPS model is outlined in Algorithm 1, and its detailed process is described below. To alleviate the high computational cost associated with evaluating the objective functions through electromagnetic simulations or measurements, the multi-problem surrogate-assisted algorithm is employed in this work. By leveraging surrogate models trained on related tasks, cross-domain knowledge is effectively transferred and rapid performance estimates are provided.
Algorithm 1 MPS model construction
Require: Source model set S = {S_1, S_2, …, S_nS}, training set D = {X, y}, training size n_D
Ensure: MPS model M_MPS
1: Train target surrogate model M_T using D, and record hyperparameters θ
2: for i = 1 to n_D do
3:     Construct a new dataset D_i = D ∖ {x_i, y_i}
4:     Train a temporary Gaussian Process model M_i on D_i
5:     Input x_i into M_i to obtain the predicted target value ŷ_T(i)
6: end for
7: Input each x_i into every source model to obtain the corresponding source features y_S1, y_S2, …, y_SnS
8: Construct a regression model using Equation (14)
9: Combine source and target models through linear regression coefficients to form the final MPS model M_MPS
We first record all antenna optimization solutions and their function values in a database D = {X, y}, which consists of decision vectors and their corresponding function values. Once a new solution has been evaluated, its exact function value is stored in the database. From the pre-trained large-scale source model pool S = {S_1, S_2, …, S_nS}, the objective values of the evaluated population P_eval are aggregated using a randomly selected weight vector w(i), i = 1, 2, …, m, with m being the number of selected models.
The transfer evolutionary multi-objective optimization framework with MPS in a large-scale source surrogate setting is illustrated in Figure 1. The MPS algorithm starts by initializing the population; it then iteratively aggregates objective values using random weight vectors, selects a source surrogate set and builds an MPS model, performs evolutionary optimization to determine the next candidate point, evaluates and validates this candidate with its true function value, and repeats until the termination conditions are met.
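The aggregation step (scalarizing the objective vector with a random weight vector) can be made concrete with a scalarizing function. The Tchebycheff form below is one common choice (it is the one used by ParEGO); the paper only says "an appropriate aggregation method", so treating it as Tchebycheff is an assumption:

```python
import random

def tchebycheff_aggregate(objs, weights, ideal):
    """Scalarize a multi-objective vector f(x) with the Tchebycheff
    function max_k w_k * |f_k - z_k|, given a weight vector w and an
    ideal point z. This produces the scalar y used to build D = {X, y}.
    """
    return max(w * abs(f - z) for f, w, z in zip(objs, weights, ideal))

def random_weight(m, rng):
    """Sample a random weight vector on the m-dimensional simplex
    (nonnegative entries summing to 1), one per generation."""
    raw = [rng.random() for _ in range(m)]
    s = sum(raw)
    return [r / s for r in raw]
```

Drawing a fresh weight vector each generation steers the scalarized search toward different regions of the Pareto front, which is how a single-target surrogate loop can still approximate the whole front.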

3.3. Multi-Problem Surrogate Algorithm for Multi-Objective Antenna Design

Multi-problem surrogate refers to a model formed by stacking multiple source surrogate models together with a target surrogate model, collectively called the MPS model. By replacing EM simulations with these surrogate models, the computational cost of antenna design evaluation is significantly reduced. This structure enables effective knowledge transfer from the source surrogate models to the target surrogate model. Given a training dataset consisting of n input–output pairs {(x_i, y_i) | i = 1, 2, …, n}, where x_i denotes the i-th row of the input matrix X ∈ R^{n×d} representing the antenna parameters, and y_i is the corresponding entry of the output vector y representing the evaluation result, let x* be an unknown input; the predictive distribution is then defined as:
ŷ(x*) = A_* (A + σ_n² I)⁻¹ y,
σ̂²(x*) = A_** − A_* (A + σ_n² I)⁻¹ A_*ᵀ,
In this study, we use the Automatic Relevance Determination Squared Exponential (ARD-SE) covariance function, defined as:
k(x_i, x_j) = θ_f exp( − Σ_{D=1}^{d} (x_D^i − x_D^j)² / (2 θ_D²) ),
where x D i and x D j are the values of the D-th dimension of the respective inputs; d is the total number of input dimensions; θ f is the signal variance controlling the overall scale of the function; and θ D is the characteristic length-scale associated with the D-th input dimension, controlling how sensitive the function is to variation along that dimension.
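A minimal NumPy sketch of the ARD-SE kernel and the GP predictive mean and variance (in the spirit of Equations (11) and (12)) might look as follows; the hyperparameters θ_f, θ_D, and the noise level are assumed to be already fitted:

```python
import numpy as np

def ard_se(Xa, Xb, theta_f, lengthscales):
    """ARD squared-exponential kernel:
    k(x_i, x_j) = theta_f * exp(-sum_D (x_Di - x_Dj)^2 / (2 theta_D^2))."""
    diff = Xa[:, None, :] - Xb[None, :, :]       # shape (na, nb, d)
    sq = (diff / lengthscales) ** 2
    return theta_f * np.exp(-0.5 * sq.sum(axis=-1))

def gp_predict(X, y, Xs, theta_f, lengthscales, noise):
    """GP predictive mean and variance at query points Xs,
    given training data (X, y) and fixed hyperparameters."""
    K = ard_se(X, X, theta_f, lengthscales) + noise * np.eye(len(X))
    Ks = ard_se(Xs, X, theta_f, lengthscales)    # cross-covariances
    Kss = ard_se(Xs, Xs, theta_f, lengthscales)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, var
```

At the training inputs themselves, the predictive mean reproduces the observations (up to the noise level) and the variance collapses toward zero, which is the behavior the LOO feature-generation step below relies on.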
First, a set of source models is provided as input, denoted by S = {S_1, S_2, …, S_nS}. Before training begins, n_D individuals have already been evaluated by the true objective functions, forming the training dataset D = {X, y}, where X contains the values of the decision variables and y is the aggregated vector of all objective values. Prior to the main training phase, an initial Gaussian Process surrogate model M_T for the target task is trained using the dataset D, and the hyperparameters θ used during training are recorded. The training process begins with a variant of the leave-one-out (LOO) cross-validation method to generate target feature vectors from M_T. Specifically, each data point in the training set is iteratively removed: for the i-th data pair, a temporary training set D_i = D ∖ {X_i, y_i} is constructed by excluding the i-th sample. Then, using D_i and the previously recorded hyperparameters θ, a temporary Gaussian Process model M_i is trained. Once M_i is obtained, the excluded data point X_i is input into M_i to obtain the i-th predicted value ŷ_T(i) of the target vector, as shown in Algorithm 1.
Algorithm 2 Surrogate-assisted evolutionary optimization
Require: N_S source models
Ensure: Nondominated solutions
1: Initialize the population P using the Latin hypercube sampling method
2: Evaluate P
3: while termination criterion is not satisfied do
4:     Aggregation: Obtain y by an appropriate aggregation method with a randomly sampled weight vector
5:     Establish the target training set D = (X, y)
6:     MPS Model Construction (see Algorithm 1): Obtain the mixture coefficients α by solving the optimization problem (14) according to Algorithm 1
7:     Evolutionary Search: Employ genetic operators to generate the offspring O based on the parents P
8:     while the number of cheap evaluations is not exceeded do
9:         % N_O is the size of O
10:         for i = 1 to N_O do
11:             Calculate the prediction ŷ_T(x_i*) by the target surrogate model
12:             for j = 1 to N_S do
13:                 Calculate the prediction ŷ_{S,j}(x_i*) by the j-th source surrogate model
14:             end for
15:             Calculate the prediction ŷ(x_i*) and variance σ̂²(x_i*) by Equations (11) and (12) based on the mixture coefficients α
16:         end for
17:         Evaluate the expected improvement of solutions based on Equation (16)
18:         Generate the next population based on the population O
19:     end while
20: end while
21: Acquire the best evolved solution x_new
22: Obtain y_new of solution x_new by the expensive function
23: Update the dataset: D = D ∪ {(x_new, y_new)}
After traversing all the training data, the final target feature predictions ŷ_T are obtained. Next, the source models and the target model are stacked together using meta-regression coefficients to construct the final MPS model M_MPS. Specifically, each input X is fed into every source model to obtain the corresponding source features y_S1, y_S2, …, y_SnS. Then, the meta-regression coefficients are obtained by minimizing the following optimization problem:
min_α E(α) = Σ_{i=1}^{n_D} ( Σ_{j=1}^{n_S} α_{S,j} y_{S,j}(i) + α_T ŷ_T(i) − y(i) )²,
s.t. Σ_{j=1}^{n_S} α_{S,j} + α_T = 1,
α_{S,j} ≥ 0 for j = 1, …, n_S, α_T ≥ 0,
Let α = { α S , 1 , α S , 2 , , α S , n s , α T } denote the set of meta-regression coefficients. Here, y T ( i ) represents the predicted value corresponding to the input X ( i ) , and α T is the weight assigned to the target surrogate model. After obtaining all the meta-regression coefficients, the final MPS model is constructed using the stacking formulation in Equation (5):
M_MPS: ŷ(X) = Σ_{j=1}^{n_S} α_{S,j} y_{S,j}(X) + α_T ŷ_T(X)
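The simplex-constrained least-squares problem for the meta-regression coefficients can be solved with an off-the-shelf constrained optimizer. The SLSQP-based sketch below is one possible implementation under that assumption, not the paper's actual solver:

```python
import numpy as np
from scipy.optimize import minimize

def fit_stacking_coefficients(F, y):
    """Solve the simplex-constrained least squares of Equation (14).

    F is an (n_D, n_S + 1) matrix whose columns are the source-model
    predictions y_{S,j}(i) followed by the LOO target predictions
    y_T(i); y holds the true aggregated values y(i). Returns the
    coefficient vector alpha >= 0 with sum(alpha) = 1.
    """
    m = F.shape[1]
    x0 = np.full(m, 1.0 / m)  # start from uniform weights
    res = minimize(
        lambda a: np.sum((F @ a - y) ** 2),
        x0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * m,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
    )
    return res.x

def mps_predict(alpha, preds):
    """Stacked MPS prediction: sum_j alpha_j * model_j(X),
    where preds stacks all model outputs column-wise."""
    return preds @ alpha
```

The nonnegativity and sum-to-one constraints are what keep irrelevant source models from dominating: an unhelpful source simply receives a coefficient near zero, which mitigates negative transfer.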
To optimize the coefficients in surrogate model stacking, we incorporate evolutionary algorithms for the search process, as outlined in Algorithm 2. The process starts with the initialization of a population P using the Latin hypercube sampling method to ensure well-distributed sampling across the search space. This population is then evaluated by the true expensive objectives. In each generation, an aggregation method with a randomly sampled weight vector transforms the multi-objective optimization problem into a scalar optimization task, establishing a target training set D = (X, y). Subsequently, the MPS model construction procedure (Algorithm 1) is employed to derive optimal mixture coefficients α by solving the optimization problem formulated in Equation (14), effectively combining information from the N_S source surrogate models and the target surrogate.
The evolutionary search is conducted using standard genetic operators to generate offspring O from the current population P. To reduce expensive evaluations, a surrogate-assisted selection mechanism is adopted wherein offspring are assessed using the constructed ensemble surrogate. Specifically, for each individual in O, predictions are generated by the target surrogate model and all source surrogate models. These predictions are fused according to the mixture coefficients α to compute the final predicted values and associated variances, as defined in Equations (11) and (12). The expected improvement is then evaluated for these solutions to guide the generation of the next population.
This process iterates until a pre-specified budget of inexpensive surrogate evaluations is reached. During each iteration, the surrogate-assisted search generates a set of offspring solutions, whose objective values and prediction uncertainties are estimated using the multi-problem surrogate model. For each candidate solution x i * , the predictive mean y ^ ( x i * ) and variance σ ^ 2 ( x i * ) are computed according to Equation (11) and Equation (12), respectively. These are then used to evaluate the expected improvement (EI) criterion defined in Equation (16), which quantifies the potential benefit of sampling a candidate in terms of improvement over the current best-known solutions. The candidate with the highest EI score is selected as the most promising solution, denoted by x new . This solution is subsequently evaluated using the true expensive objective functions to obtain its actual performance y new . The dataset D is updated with this new observation, thereby enabling iterative refinement of the surrogate model. Through this mechanism, the proposed framework maintains an effective balance between exploration and exploitation under a limited evaluation budget, ultimately producing high-quality nondominated solutions.
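Equation (16) is not reproduced in this excerpt; the sketch below assumes the standard closed-form expected improvement for minimization, computed from the predictive mean and variance of the MPS model:

```python
import math

def expected_improvement(mean, var, y_best):
    """Standard EI for minimization: E[max(y_best - Y, 0)] with
    Y ~ N(mean, var). Closed form:
        EI = (y_best - mu) * Phi(z) + sigma * phi(z),
    where z = (y_best - mu) / sigma, Phi is the standard normal CDF
    and phi its PDF. Assumed form; the paper's Equation (16) may differ.
    """
    sigma = math.sqrt(var)
    if sigma < 1e-12:
        # Degenerate (zero-variance) prediction: improvement is known.
        return max(y_best - mean, 0.0)
    z = (y_best - mean) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (y_best - mean) * Phi + sigma * phi
```

Because EI grows with both a low predicted mean (exploitation) and a high predictive variance (exploration), ranking candidates by EI is what gives the framework its stated balance under a limited evaluation budget.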

4. Experimental Study

In this section, the antenna element model and partial parameter configurations are first introduced. Subsequently, the optimization objectives, optimization tools, and platform are presented. Finally, the performance of the proposed MPS method and other optimization algorithms is provided under varying numbers of directors, along with a comparative analysis.

4.1. Experimental Settings

The Yagi–Uda antenna is an end-fire radiating structure comprising a driven dipole element, a passive reflector, and multiple passive directors arranged collinearly. The passive elements function as reflectors and directors, with their nomenclature determined by their spatial relationship to the driven element. The reflector dipoles are placed behind the driven element to suppress the rearward radiation lobes, whereas the director dipoles are arranged ahead of the driven element to reinforce forward beam formation. Figure 2a displays a Yagi antenna model simulated in MATLAB R2023a, where elements with positive Z-axis coordinates operate as directors and those with negative Z-axis coordinates serve as reflectors. A prototype of the Yagi–Uda antenna fabricated according to the simulation results is shown in Figure 2b.
Table 1 lists the main design parameters for the Yagi–Uda antenna. The left column displays the fixed parameters, while the right column shows the optimized parameters. Since a folded dipole is employed as the feed, the input impedance of the antenna is 300 Ω . For Yagi–Uda antenna optimization, adjusting the physical dimensions and configuration represents the most practical and cost-effective approach. Thus, five parameters intrinsically linked to the physical structure are selected for optimization: the number of directors, the length and spacing of the reflector, and the length and spacing of the directors. The defined objective function aims to maximize the radiation pattern value at 90 ° , minimize it at 270 ° , and achieve a high peak power within the angular bounds of the elevation beamwidth.
In this study, we used the Antenna Toolbox in MATLAB to carry out simulation experiments on the Yagi–Uda antenna [39,40]. The toolbox provides a comprehensive suite of functions and applications for designing, analyzing, and visualizing standalone antenna elements and antenna arrays. It employs electromagnetic solvers, including the Method of Moments (MoM), to calculate critical electromagnetic parameters such as impedance, current distribution, radiation efficiency, and near- and far-field radiation patterns. In addition, the toolbox integrates optimization algorithms that systematically refine antenna design parameters, enhancing performance through iterative computational adjustment.

4.2. Experimental Analysis

The proposed multi-problem surrogate-assisted multi-objective optimization algorithm was applied to optimize the performance of the Yagi–Uda antenna. For comparison, nine commonly used optimization algorithms were evaluated: ABSAEA [22], ESBCEO [41], HeEMOEA [42], MCEAD [43], ParEGO [21], PBNSGAIII [44], PCSAEA [45], REMO [46], and SFADE [47].
Table 2, Table 3 and Table 4 present a comparative analysis between the MPS method and the nine alternative approaches for three director configurations (four, five, and six directors) under varying evaluation budgets. The reported statistics are the mean and variance computed over repeated optimization trials. Figure 3, Figure 4 and Figure 5 visualize the hypervolume (HV) distributions corresponding to the same configurations. HV is a performance indicator that measures the volume of objective space covered by the approximate front and is widely used in multi-objective optimization; the larger the HV value, the better the diversity and comprehensiveness of the approximate front. It is calculated as follows:
HV = V(T) = \sum_{i=1}^{m} \int_{T_i} f_i(x) \, dx,
where V(T) represents the hypervolume, T represents the approximate front, m represents the number of objectives, T_i represents the region formed by the segment from the minimum value of the i-th objective to the i-th point on the approximate front, and f_i(x) represents the value of the i-th objective at point x.
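For a finite approximate front, HV is commonly computed as the measure of the objective-space region dominated by the front and bounded by a reference point. A minimal Python sketch for the two-objective minimization case follows; the front and reference point are hypothetical values chosen only for illustration.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective (minimization) front w.r.t. a
    reference point: the area dominated by the front and bounded by ref."""
    # Keep points that strictly dominate ref, sorted by the first objective
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip points dominated by an earlier one
            hv += (ref[0] - f1) * (prev_f2 - f2)  # add the new strip
            prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # → 12.0
```

Since a larger dominated region means a better and more diverse front, comparing algorithms by mean HV at a fixed evaluation budget (as in Tables 2–4) directly compares the quality of their approximate fronts.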
The data in Table 2, Table 3 and Table 4 show that the mean HV of MPS under different evaluation budgets is significantly higher than that of the other algorithms, while its variance is lower than or close to theirs in most cases, indicating that MPS achieves stronger optimization performance while maintaining good optimization stability.
As can be seen from Figure 3, Figure 4 and Figure 5, the MPS method achieves satisfactory results in all cases. For example, in the four-director case, it outperforms all comparison algorithms after more than 400 evaluations, indicating faster convergence. After 1000 evaluations, it achieves a 6% improvement over the second-best MCEAD algorithm and a 23% improvement over the average of the other algorithms.
In the five-director configuration, the MPS method matches ABSAEA’s performance at 100–300 evaluations. ABSAEA temporarily outperforms MPS between 300 and 600 evaluations owing to faster local convergence. However, MPS achieves superior hypervolume values at 1000 evaluations, demonstrating stronger global optimization capability. When the number of directors increases to six, the HV of the MPS method exceeds that of the other algorithms at every evaluation budget, showing an even clearer performance advantage in this case.
Comparison results between the MPS model and the baseline model over different training rounds are provided in Figure 6, Figure 7 and Figure 8. Here, the baseline is a conventional surrogate model that does not leverage knowledge acquired from previous rounds to accelerate training. Within the 200-to-1000-round interval, the MPS model achieves superior performance metrics (such as the median HV value) compared with the conventional surrogate model, showing that knowledge transfer significantly improves convergence efficiency.
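The transfer mechanism behind this gain can be illustrated schematically. In the actual MPS framework, Gaussian-process source surrogates are stacked via leave-one-out meta-regression coefficients (Figure 1); the sketch below replaces both with deliberately simplified stand-ins — hypothetical toy source models and a crude error-based weighting — to show the shape of the idea, not the authors’ implementation.

```python
import math

def stack_surrogates(models, xs, ys):
    """Combine source surrogates into one predictor for the target problem.

    `models` are callables already trained on related (source) problems.
    Each is weighted by exp(-MSE) of its predictions on the scarce target
    samples (xs, ys); this replaces the LOO meta-regression of MPS with a
    much simpler scheme for illustration.
    """
    errs = [sum((m(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
            for m in models]
    raw = [math.exp(-e) for e in errs]
    total = sum(raw)
    weights = [w / total for w in raw]          # normalize to sum to 1
    def predict(x):
        return sum(w * m(x) for w, m in zip(weights, models))
    return predict, weights

# Two hypothetical source surrogates; the target data follow the first
def src_a(x): return 2.0 * x
def src_b(x): return x + 5.0

xs, ys = [0.0, 1.0, 2.0], [0.0, 2.0, 4.0]
predict, weights = stack_surrogates([src_a, src_b], xs, ys)
print(weights)  # src_a receives nearly all of the weight
```

Surrogates built for neighboring director counts thus contribute in proportion to how well they already explain the new configuration, which is why the stacked model converges faster than a surrogate trained from scratch.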
Figure 9 shows the result with the highest HV. Overall, the MPS method proposed in this paper is superior to existing methods in accelerating convergence (fewer evaluations to reach a better solution), avoiding local optima (higher HV values), and improving stability (lower variance), which verifies its efficiency and practicability for expensive multi-objective antenna design problems.

5. Conclusions

This paper proposes an evolutionary multi-objective antenna design method based on multi-problem surrogate-assisted optimization. By constructing simulation models of the antenna specification parameters, the antenna design problem is transformed into a computationally expensive multi-objective optimization problem. To improve evaluation efficiency, surrogate models are built for the antenna simulation process. The proposed MPS model is applied to the simulation-based optimization of a Yagi–Uda antenna. Within 1000 evaluations, it achieves significant performance advantages over existing optimization algorithms and reaches high-quality solutions with fewer evaluations. Future work will extend this framework to other antenna types, building upon the validated Yagi–Uda implementation presented in this study.

Author Contributions

Conceptualization, Z.L. and H.L.; methodology, Z.L., B.W. and R.W.; software, Z.L., B.W. and R.W.; validation, Z.L. and R.W.; formal analysis, B.W.; investigation, M.G.; resources, B.W. and R.W.; data curation, B.W. and R.W.; writing—original draft preparation, Z.L. and B.W.; writing—review and editing, Z.L. and H.L.; visualization, B.W.; supervision, M.G.; project administration, H.L. and M.G.; funding acquisition, H.L. and M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China under Grant 62576259, in part by The Open Project of Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, Anhui University, in part by Open Fund of Shaanxi Key Laboratory of Antenna and Control Technology, and in part by the Fundamental Research Funds for the Central Universities.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xu, Q.; Zeng, S.; Zhao, F.; Jiao, R.; Li, C. On Formulating and Designing Antenna Arrays by Evolutionary Algorithms. IEEE Trans. Antennas Propag. 2021, 69, 1118–1129. [Google Scholar] [CrossRef]
  2. Ji, C.; Ning, X.; Dai, W. Design of Shared-Aperture Base Station Antenna with a Conformal Radiation Pattern. Electronics 2025, 14, 225. [Google Scholar] [CrossRef]
  3. Wu, Q.; Li, H.; Wong, S.W.; Zhang, Z.; He, Y. A Simple Cylindrical Dielectric Resonator Antenna Based on High-Order Mode with Stable High Gain. IEEE Antennas Wirel. Propag. Lett. 2024, 23, 3476–3480. [Google Scholar] [CrossRef]
  4. Raghuvanshi, A.; Sharma, A.; Awasthi, A.K.; Singhal, R.; Sharma, A.; Tiang, S.S.; Wong, C.H.; Lim, W.H. Linear antenna array pattern synthesis using Multi-Verse optimization algorithm. Electronics 2024, 13, 3356. [Google Scholar] [CrossRef]
  5. Ta, L.P.; Nakayama, D.; Hirose, M. Design of a High-Gain X-Band Electromagnetic Band Gap Microstrip Patch Antenna for CubeSat Applications. Electronics 2025, 14, 2216. [Google Scholar] [CrossRef]
  6. Guo, H.; Zhao, Y.; Li, J.; Gao, R.; He, Z.; Yang, Z. Design of Ultra-Wideband Low RCS Antenna Based on Polarization Conversion Metasurface. Electronics 2025, 14, 2204. [Google Scholar] [CrossRef]
  7. Fujimoto, T.; Guan, C.E. A Printed Hybrid-Mode Antenna for Dual-Band Circular Polarization with Flexible Frequency Ratio. Electronics 2025, 14, 2504. [Google Scholar] [CrossRef]
  8. Wu, Q.Y.; Wu, L.H.; Ben, C.Q.; Lian, J.W. Millimeter-Wave Miniaturized Substrate-Integrated Waveguide Multibeam Antenna Based on Multi-Layer E-Plane Butler Matrix. Electronics 2025, 14, 2553. [Google Scholar] [CrossRef]
  9. Yu, Y.; Jolani, F.; Chen, Z. A wideband omnidirectional horizontally polarized antenna for 4G LTE applications. IEEE Antennas Wirel. Propag. Lett. 2013, 12, 686–689. [Google Scholar] [CrossRef]
  10. Zhu, Q.; Yang, S.; Chen, Z. A wideband horizontally polarized omnidirectional antenna for LTE indoor base stations. Microw. Opt. Technol. Lett. 2015, 57, 2112–2116. [Google Scholar] [CrossRef]
  11. Yonas Gebre Woldesenbet, G.G.Y.; Tessema, B.G. Constraint Handling in Multiobjective Evolutionary Optimization. IEEE Trans. Evol. Comput. 2009, 13, 514–525. [Google Scholar] [CrossRef]
  12. Altshuler, E.E.; Linden, D.S. Wire-antenna designs using genetic algorithms. IEEE Antennas Propag. Mag. 2002, 39, 33–43. [Google Scholar] [CrossRef]
  13. Jin, N.; Rahmat-Samii, Y. Particle swarm optimization for antenna designs in engineering electromagnetics. J. Artif. Evol. Appl. 2008, 2008, 728929. [Google Scholar] [CrossRef]
  14. Goudos, S.K.; Siakavara, K.; Samaras, T.; Vafiadis, E.E.; Sahalos, J.N. Self-adaptive differential evolution applied to real-valued antenna and microwave design problems. IEEE Trans. Antennas Propag. 2011, 59, 1286–1298. [Google Scholar] [CrossRef]
  15. Tan, Z.; Wang, H.; Liu, S. Multi-stage dimension reduction for expensive sparse multi-objective optimization problems. Neurocomputing 2021, 440, 159–174. [Google Scholar] [CrossRef]
  16. Ma, L.; Jin, J.; Li, X.; Liu, W.; Ma, K.; Zhang, Q.J. Advanced Surrogate-Based EM Optimization Using Complex Frequency Domain EM Simulation-Based Neuro-TF Model for Microwave Components. IEEE Trans. Microw. Theory Tech. 2025, 73, 2309–2319. [Google Scholar] [CrossRef]
  17. Na, W.; Liu, K.; Cai, H.; Zhang, W.; Xie, H.; Jin, D. Efficient EM Optimization Exploiting Parallel Local Sampling Strategy and Bayesian Optimization for Microwave Applications. IEEE Microw. Wirel. Compon. Lett. 2021, 31, 1103–1106. [Google Scholar] [CrossRef]
  18. Wei, F.F.; Chen, W.N.; Zhang, J. A hybrid regressor and classifier-assisted evolutionary algorithm for expensive optimization with incomplete constraint information. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 5071–5083. [Google Scholar] [CrossRef]
  19. Shi, M.; Lv, L.; Sun, W.; Song, X. A multi-fidelity surrogate model based on support vector regression. Struct. Multidiscip. Optim. 2020, 61, 2363–2375. [Google Scholar] [CrossRef]
  20. Liu, Q.; Cheng, R.; Jin, Y.; Heiderich, M.; Rodemann, T. Reference vector-assisted adaptive model management for surrogate-assisted many-objective optimization. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 7760–7773. [Google Scholar] [CrossRef]
  21. Knowles, J. ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Trans. Evol. Comput. 2006, 10, 50–66. [Google Scholar] [CrossRef]
  22. Wang, X.; Jin, Y.; Schmitt, S.; Olhofer, M. An adaptive Bayesian approach to surrogate-assisted evolutionary multi-objective optimization. Inf. Sci. 2020, 519, 317–331. [Google Scholar] [CrossRef]
  23. Zhang, Q.; Liu, W.; Tsang, E.; Virginas, B. Expensive multiobjective optimization by MOEA/D with Gaussian process model. IEEE Trans. Evol. Comput. 2009, 14, 456–474. [Google Scholar] [CrossRef]
  24. Tan, K.C.; Feng, L.; Jiang, M. Evolutionary transfer optimization-a new frontier in evolutionary computation research. IEEE Comput. Intell. Mag. 2021, 16, 22–33. [Google Scholar] [CrossRef]
  25. Gupta, A.; Ong, Y.S.; Feng, L. Insights on transfer optimization: Because experience is the best teacher. IEEE Trans. Emerg. Top. Comput. Intell. 2017, 2, 51–64. [Google Scholar] [CrossRef]
  26. Da, B.; Gupta, A.; Ong, Y.S. Curbing negative influences online for seamless transfer evolutionary optimization. IEEE Trans. Cybern. 2018, 49, 4365–4378. [Google Scholar] [CrossRef]
  27. Jiang, M.; Huang, Z.; Qiu, L.; Huang, W.; Yen, G.G. Transfer learning-based dynamic multiobjective optimization algorithms. IEEE Trans. Evol. Comput. 2017, 22, 501–514. [Google Scholar] [CrossRef]
  28. Li, H.; Wan, F.; Gong, M.; Qin, A.K.; Wu, Y.; Xing, L. Fast heterogeneous multi-problem surrogates for transfer evolutionary multiobjective optimization. IEEE Trans. Evol. Comput. 2024. [Google Scholar] [CrossRef]
  29. Li, H.; Wan, F.; Gong, M.; Qin, A.K.; Wu, Y.; Xing, L. Many-Problem Surrogates for Transfer Evolutionary Multiobjective Optimization With Sparse Transfer Stacking. IEEE Trans. Evol. Comput. 2025. [Google Scholar] [CrossRef]
  30. Gupta, A.; Ong, Y.S.; Feng, L.; Tan, K.C. Multiobjective multifactorial optimization in evolutionary multitasking. IEEE Trans. Cybern. 2016, 47, 1652–1665. [Google Scholar] [CrossRef]
  31. Briqech, Z.; Sebak, A.R.; Denidni, T.A. High-efficiency 60-GHz printed Yagi antenna array. IEEE Antennas Wirel. Propag. Lett. 2013, 12, 1224–1227. [Google Scholar] [CrossRef]
  32. Balanis, C.A. Antenna Theory: Analysis and Design; John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
  33. Tan, Z.; Luo, L.; Zhong, J. Knowledge transfer in evolutionary multi-task optimization: A survey. Appl. Soft Comput. 2023, 138, 110182. [Google Scholar] [CrossRef]
  34. Huang, L.; Feng, L.; Wang, H.; Hou, Y.; Liu, K.; Chen, C. A preliminary study of improving evolutionary multi-objective optimization via knowledge transfer from single-objective problems. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; IEEE: New York, NY, USA, 2020; pp. 1552–1559. [Google Scholar]
  35. Lim, R.; Zhou, L.; Gupta, A.; Ong, Y.S.; Zhang, A.N. Solution representation learning in multi-objective transfer evolutionary optimization. IEEE Access 2021, 9, 41844–41860. [Google Scholar] [CrossRef]
  36. Zou, J.; Lin, F.; Gao, S.; Deng, G.; Zeng, W.; Alterovitz, G. Transfer learning based multi-objective genetic algorithm for dynamic community detection. arXiv 2021, arXiv:2109.15136. [Google Scholar] [CrossRef]
  37. Jiang, M.; Wang, Z.; Guo, S.; Gao, X.; Tan, K.C. Individual-based transfer learning for dynamic multiobjective optimization. IEEE Trans. Cybern. 2020, 51, 4968–4981. [Google Scholar] [CrossRef]
  38. Min, A.T.W.; Ong, Y.S.; Gupta, A.; Goh, C.K. Multiproblem surrogates: Transfer evolutionary multiobjective optimization of computationally expensive problems. IEEE Trans. Evol. Comput. 2017, 23, 15–28. [Google Scholar] [CrossRef]
  39. Liu, B.; Liu, H.; Aliakbarian, H.; Ma, Z.; Vandenbosch, G.; Gielen, G.; Excell, P. An efficient method for antenna design optimization based on evolutionary computation and machine learning techniques. IEEE Trans. Antennas Propag. 2013, 62, 7–18. [Google Scholar] [CrossRef]
  40. Kuwahara, Y. Multiobjective optimization design of Yagi-Uda antenna. IEEE Trans. Antennas Propag. 2005, 53, 1984–1992. [Google Scholar] [CrossRef]
  41. Bian, H.; Tian, J.; Yu, J.; Yu, H. Bayesian co-evolutionary optimization based entropy search for high-dimensional many-objective optimization. Knowl. Based Syst. 2023, 274, 110630. [Google Scholar] [CrossRef]
  42. Guo, D.; Jin, Y.; Ding, J.; Chai, T. Heterogeneous ensemble-based infill criterion for evolutionary multiobjective optimization of expensive problems. IEEE Trans. Cybern. 2019, 49, 1012–1025. [Google Scholar] [CrossRef] [PubMed]
  43. Sonoda, T.; Nakata, M. Multiple classifiers-assisted evolutionary algorithm based on decomposition for high-dimensional multi-objective problems. IEEE Trans. Evol. Comput. 2022, 26, 1581–1595. [Google Scholar] [CrossRef]
  44. Song, Z.; Wang, H.; Xu, H. A framework for expensive many-objective optimization with Pareto-based bi-indicator infill sampling criterion. Memetic Comput. 2022, 14, 179–191. [Google Scholar] [CrossRef]
  45. Tian, Y.; Hu, J.; He, C.; Ma, H.; Zhang, L.; Zhang, X. A pairwise comparison based surrogate-assisted evolutionary algorithm for expensive multi-objective optimization. Swarm Evol. Comput. 2023, 80, 101323. [Google Scholar] [CrossRef]
  46. Hao, H.; Zhou, A.; Qian, H.; Zhang, H. Expensive multiobjective optimization by relation learning and prediction. IEEE Trans. Evol. Comput. 2022, 26, 1157–1170. [Google Scholar] [CrossRef]
  47. Horaguchi, Y.; Nishihara, K.; Nakata, M. Evolutionary multiobjective optimization assisted by scalarization function approximation for high-dimensional expensive problems. Swarm Evol. Comput. 2024, 86, 101516. [Google Scholar] [CrossRef]
Figure 1. Diagram of multi-problem surrogate model construction. The top part illustrates training source models from different source problems and then integrating their knowledge to assist in building the target model for the target problem. The bottom part shows using a real-world dataset to let both source and target models predict features, applying the leave-one-out (LOO) method for evaluation, and calculating meta-regression coefficients.
Figure 2. Yagi–Uda Antenna. (a) Matlab simulation example of Yagi–Uda antenna. (b) Product of Yagi–Uda antenna based on the simulation results (https://amphenolprocom.com/products/base-station-antennas/2450-s-6y-165) (accessed on 28 September 2025).
Figure 3. Comparison of MPS and other algorithms with different evaluation counts under the four directors circumstance.
Figure 4. Comparison of MPS and other algorithms with different evaluation counts under the five directors circumstance.
Figure 5. Comparison of MPS and other algorithms with different evaluation counts under the six directors circumstance.
Figure 6. Comparison of MPS and Baseline with different evaluation counts under the four directors circumstance.
Figure 7. Comparison of MPS and Baseline with different evaluation counts under the five directors circumstance.
Figure 8. Comparison of MPS and Baseline with different evaluation counts under the six directors circumstance.
Figure 9. The results with the highest HV. (a) Radiation pattern in 3D. (b) Antenna.
Table 1. Main design parameters of antenna.
Parameter | Value | Parameter | Range
Frequency | 165 MHz | RefLengthBounds | [0.4, 0.6] × λ
Z_0 | 300 Ω | DirLengthBounds | [0.35, 0.5] × λ
BandWidth | 8.25 MHz | RefSpacingBounds | [0.05, 0.3] × λ
λ | 1.82 m | DirSpacingBounds | [0.05, 0.23] × λ
Table 2. Comparison between MPS and other optimization algorithms across different evaluation budgets under the four directors circumstance.
Number of Evaluations | Metric | A1 [22] | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | MPS (Ours)
150 | Avg. | 0.2103 | 0.1931 | 0.2126 | 0.1976 | 0.2204 | 0.2131 | 0.1716 | 0.1912 | 0.1917 | 0.2268
150 | Var. | 0.0214 | 0.0110 | 0.0150 | 0.0191 | 0.0269 | 0.0326 | 0.0071 | 0.0120 | 0.0235 | 0.2114
300 | Avg. | 0.3041 | 0.2210 | 0.2262 | 0.2351 | 0.3078 | 0.3003 | 0.2191 | 0.2547 | 0.2282 | 0.2997
300 | Var. | 0.0449 | 0.0224 | 0.0153 | 0.0267 | 0.0194 | 0.0204 | 0.0249 | 0.0389 | 0.0365 | 0.0585
500 | Avg. | 0.3407 | 0.2281 | 0.2416 | 0.2930 | 0.3116 | 0.3292 | 0.2600 | 0.3063 | 0.2810 | 0.3626
500 | Var. | 0.0421 | 0.0213 | 0.0175 | 0.0288 | 0.0193 | 0.0207 | 0.0266 | 0.0377 | 0.0350 | 0.0387
1000 | Avg. | 0.3699 | 0.2393 | 0.2699 | 0.3872 | 0.3176 | 0.3637 | 0.3394 | 0.3533 | 0.3709 | 0.4125
1000 | Var. | 0.0377 | 0.0214 | 0.0237 | 0.0187 | 0.0192 | 0.0129 | 0.0382 | 0.0344 | 0.0461 | 0.0226
A1–A9 represent the baseline optimization algorithms: ABSAEA, ESBCEO, HeEMOEA, MCEAD, ParEGO, PBNSGAIII, PCSAEA, REMO, and SFADE, respectively. MPS (ours) denotes our proposed method.
Table 3. Comparison between MPS and other optimization algorithms across different evaluation budgets under the five directors circumstance.
Number of Evaluations | Metric | A1 [22] | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | MPS (Ours)
150 | Avg. | 0.2108 | 0.1795 | 0.1838 | 0.1730 | 0.2091 | 0.2037 | 0.1739 | 0.1931 | 0.1831 | 0.2038
150 | Var. | 0.0291 | 0.0110 | 0.0131 | 0.0070 | 0.0261 | 0.0345 | 0.0110 | 0.0159 | 0.0177 | 0.0135
300 | Avg. | 0.3356 | 0.2471 | 0.2153 | 0.2224 | 0.3288 | 0.2863 | 0.2396 | 0.2685 | 0.2296 | 0.3402
300 | Var. | 0.0409 | 0.0265 | 0.0257 | 0.0329 | 0.0376 | 0.0316 | 0.0248 | 0.0270 | 0.0429 | 0.0605
500 | Avg. | 0.3853 | 0.2606 | 0.2342 | 0.2869 | 0.3589 | 0.3297 | 0.2913 | 0.2966 | 0.2771 | 0.3755
500 | Var. | 0.0451 | 0.0271 | 0.0307 | 0.0517 | 0.0360 | 0.0203 | 0.0241 | 0.0162 | 0.0423 | 0.0552
1000 | Avg. | 0.4115 | 0.2705 | 0.2652 | 0.3804 | 0.3634 | 0.3676 | 0.3746 | 0.3370 | 0.3656 | 0.4197
1000 | Var. | 0.0518 | 0.0228 | 0.0312 | 0.0398 | 0.0321 | 0.0273 | 0.0374 | 0.0336 | 0.0418 | 0.0426
A1–A9 represent the baseline optimization algorithms: ABSAEA, ESBCEO, HeEMOEA, MCEAD, ParEGO, PBNSGAIII, PCSAEA, REMO, and SFADE, respectively. MPS (ours) denotes our proposed method.
Table 4. Comparison between MPS and other optimization algorithms across different evaluation budgets under the six directors circumstance.
Number of Evaluations | Metric | A1 [22] | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | MPS (Ours)
150 | Avg. | 0.2336 | 0.2028 | 0.2134 | 0.1895 | 0.2264 | 0.2022 | 0.1943 | 0.2107 | 0.2013 | 0.2538
150 | Var. | 0.0189 | 0.0184 | 0.0115 | 0.0137 | 0.0157 | 0.0133 | 0.0097 | 0.0099 | 0.0147 | 0.0282
300 | Avg. | 0.3138 | 0.2523 | 0.2227 | 0.2254 | 0.3057 | 0.2491 | 0.2255 | 0.2655 | 0.2107 | 0.3300
300 | Var. | 0.0242 | 0.0584 | 0.0132 | 0.0222 | 0.0311 | 0.0260 | 0.0155 | 0.0196 | 0.0142 | 0.0188
500 | Avg. | 0.3552 | 0.2807 | 0.2382 | 0.2763 | 0.3221 | 0.3213 | 0.2855 | 0.2959 | 0.2520 | 0.3801
500 | Var. | 0.0146 | 0.0480 | 0.0165 | 0.0259 | 0.0347 | 0.0312 | 0.0171 | 0.0465 | 0.0346 | 0.0250
1000 | Avg. | 0.3923 | 0.2962 | 0.2592 | 0.3695 | 0.3309 | 0.3650 | 0.3508 | 0.3388 | 0.3355 | 0.4151
1000 | Var. | 0.0210 | 0.0447 | 0.0224 | 0.0387 | 0.0369 | 0.0257 | 0.0199 | 0.0413 | 0.0528 | 0.0224
A1–A9 represent the baseline optimization algorithms: ABSAEA, ESBCEO, HeEMOEA, MCEAD, ParEGO, PBNSGAIII, PCSAEA, REMO, and SFADE, respectively. MPS (ours) denotes our proposed method.

Share and Cite

MDPI and ACS Style

Li, Z.; Wu, B.; Wang, R.; Li, H.; Gong, M. Surrogate-Assisted Evolutionary Multi-Objective Antenna Design. Electronics 2025, 14, 3862. https://doi.org/10.3390/electronics14193862


