## 1. Introduction

An array of sensors or antennas often displays better characteristics and can perform more functions than a single one. One example is the structural health monitoring (SHM) system based on finite element theory [1], which utilizes a sensor array. This paper focuses on antenna arrays, which offer high gain, flexible scanning, and easy beamforming, and have therefore been widely used in radio systems such as radar and electronic communication. Antenna array pattern synthesis, a key problem for antenna arrays, has attracted much attention. Its main task is to adjust the excitation amplitudes, phases, and positions of the array elements to obtain a pattern with the required characteristics.

In early studies, owing to limited computing resources, classic analytical techniques such as the Dolph–Chebyshev and Taylor methods [2] were used to optimize the excitation amplitudes of the array so as to suppress the sidelobes. Since the end of the 20th century, with the rapid development of computer technology, more and more random optimization algorithms have been applied to antenna design. These highly flexible algorithms impose few restrictions on the optimization objectives, which makes them suitable for complicated and nonlinear optimization problems; hence they offer better optimization performance and a higher degree of freedom in antenna array pattern synthesis. They have been used in various pattern synthesis problems to achieve objectives such as sidelobe suppression, null depth, and beamforming. For example, the genetic algorithm (GA) [3] has been applied to optimize the excitations of uniform array elements [4]. Additionally, the particle swarm optimization (PSO) algorithm [5] and the enhanced flower pollination algorithm (EFPA) have been applied to optimize the inter-element distances of non-uniform arrays [6,7], and the differential evolution (DE) algorithm [8] has been used for the pattern synthesis of time-modulated arrays [9]. However, random optimization algorithms cannot always meet the design requirements of antenna arrays because of their low convergence rate, long running time, and/or tendency to become trapped in local optima. There is therefore still an incentive to find a more efficient and practical random optimization algorithm.

The DE algorithm is prominent in antenna array pattern synthesis that uses the element excitations as optimization variables. However, it presents difficulties in setting the control parameters and selecting the mutation strategy. Various improved algorithms have therefore been proposed, such as dynamic differential evolution (DDE) [10], composite differential evolution (CoDE) [11], and hybrid differential evolution algorithms [12]. Most of them, as well as some specifically modified differential evolution algorithms [13,14], have been applied to the pattern synthesis of antenna arrays [15,16,17] and have proved to enhance the optimization performance effectively. Among them, the JADE algorithm [18], a state-of-the-art improved DE algorithm, has been widely applied and has shown strong performance in optimization problems thanks to its adaptive parameter adjustment. In addition, to address the stagnation problem of the DE algorithm, Shu-Mei Guo and Chin-Chang Yang proposed the successful-parent-selecting (SPS) framework [19] in 2015, which uses the success history of the iterations to select the parent vectors, making it easier for the DE algorithm to find the global optimum.

Under a beamwidth constraint, this paper considers the optimization of the excitation amplitudes of linear antenna arrays for sidelobe suppression and null depth. Several similar works have been completed with different algorithms [20,21,22,23], and they all show satisfactory results. However, the number of array elements used in their simulations is small, so the power of the algorithms has not been examined on large-scale antenna arrays. Another less-noticed requirement is to shorten the search time for the optimal solution so that antenna arrays can respond to changing situations more easily and quickly. In this paper, an SPS-JADE algorithm for antenna array pattern synthesis is designed by combining the aforementioned JADE algorithm with the SPS framework. This algorithm can find the optimal solution in fewer iterations, which is verified in pattern synthesis simulations for antenna arrays with more elements. In comparison with other random optimization algorithms, the SPS-JADE algorithm gives better optimization results in pattern synthesis and, with its excellent global optimization ability and rapid convergence, has the potential to meet the requirements stated above.

The rest of this paper is organized as follows. In Section 2, the problem of linear antenna array pattern synthesis and its optimization model are introduced. In Section 3, the basic ideas and implementation steps of the classic DE algorithm, the JADE algorithm, and the SPS-JADE algorithm with the SPS framework are described. In Section 4, the SPS-JADE algorithm is applied in pattern synthesis simulations, and the numerical results are compared with those of other algorithms and analyzed. Conclusions are given in Section 5.

## 2. Pattern Synthesis of Linear Antenna Array

The far-field pattern is mainly determined by the array factor when a linear antenna array consists of elements placed along the x-axis, as shown in Figure 1. The array factor of a broadside linear array on the x–z plane is expressed as follows [24]:

$AF\left(\theta \right)={\sum }_{n=1}^{N}{I}_{n}{e}^{j\left(\frac{2\pi }{\lambda }{x}_{n}\mathrm{sin}\theta +{\phi }_{n}\right)}$ (1)

where the elements of the vectors $x={[{x}_{1},{x}_{2},\dots ,{x}_{N}]}^{T}$, $I={[{I}_{1},{I}_{2},\dots ,{I}_{N}]}^{T}$, and $\phi ={[{\phi }_{1},{\phi }_{2},\dots ,{\phi }_{N}]}^{T}$ are the position coordinates, excitation amplitudes, and phases of the array elements, respectively; $\lambda $ is the wavelength; $\theta $ is the steering angle measured from the positive z-axis; and $N$ is the number of array elements.

In this paper, we consider an N-element equally spaced linear symmetric antenna array with an adjacent element separation of $\lambda /2$. Considering the case where N is even, with the phases of all array elements set to zero in advance, Equation (1) becomes

$AF\left(\theta \right)=2{\sum }_{n=1}^{N/2}{I}_{n}\mathrm{cos}\left[\frac{\left(2n-1\right)\pi }{2}\mathrm{sin}\theta \right]$ (2)
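As a concrete illustration, the symmetric array factor above can be evaluated numerically. The sketch below assumes the standard textbook form for a λ/2-spaced, zero-phase symmetric array; the function name and interface are illustrative, not taken from the paper:

```python
import numpy as np

def array_factor(I_half, theta):
    """Array factor of an N-element (N even) symmetric linear array with
    lambda/2 spacing and all element phases set to zero.

    I_half : excitation amplitudes of one half of the array, shape (N/2,)
    theta  : steering angle(s) from the positive z-axis, in radians
    """
    I_half = np.asarray(I_half, dtype=float)
    n = np.arange(1, I_half.size + 1)        # half-array element index n = 1..N/2
    # the symmetric element pair n sits at x = +/-(2n-1)*lambda/4
    u = (np.pi / 2.0) * np.sin(np.atleast_1d(theta))
    return 2.0 * np.cos(np.outer(u, 2 * n - 1)) @ I_half
```

For a uniform 40-element array (all amplitudes equal to 1, so `I_half = np.ones(20)`), the broadside value at θ = 0 is 40 and the pattern is symmetric in θ, as expected for a broadside array.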

The excitation amplitudes are taken as the optimization variables and constrained within [0,1]. Sidelobe suppression, together with a beamwidth constraint, serves as the optimization objective. The objective function is given by

where S is the region outside the main beam in the pattern. The first term on the right side of the equation is the normalized maximum sidelobe level (MSL). $FNBW$ denotes the first null beamwidth (FNBW), calculated as the angular difference between the minimum-amplitude points nearest to the main-beam peak on its left and right in the pattern. ${FNBW}_{D}$ is the desired FNBW, and $\epsilon $ is the penalty factor, set to $10^{4}$. To add an anti-interference function, a null-depth term with a penalty factor of 1 is included, and the objective function becomes

where ${\theta }_{m}^{null}$ is the given angular direction of the mth null. The optimization model can then be expressed as

and the result of the pattern synthesis is obtained by optimizing this model with the given random optimization algorithm.
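To make the optimization model concrete, the sketch below evaluates a fitness of the kind described above: the normalized MSL plus a beamwidth penalty and optional null-depth terms. The exact penalty expressions of the paper's equations are not reproduced here; the "excess FNBW times ε" form, the sampling grid, and all names are illustrative assumptions.

```python
import numpy as np

def fitness(I_half, fnbw_desired_deg, null_angles_deg=(), eps=1e4):
    """Normalized MSL + beamwidth penalty (+ null-depth terms); minimized."""
    I_half = np.asarray(I_half, dtype=float)
    theta = np.deg2rad(np.linspace(-90.0, 90.0, 3601))          # 0.05-degree grid
    n = np.arange(1, I_half.size + 1)
    af = np.abs(2.0 * np.cos(np.outer(np.pi / 2 * np.sin(theta), 2 * n - 1)) @ I_half)
    p = af / af.max()                                           # normalized pattern

    # FNBW: angle between the local minima nearest the main-beam peak
    peak = int(np.argmax(p))
    mins = np.where((p[1:-1] <= p[:-2]) & (p[1:-1] <= p[2:]))[0] + 1
    left, right = mins[mins < peak].max(), mins[mins > peak].min()
    fnbw = np.rad2deg(theta[right] - theta[left])

    # normalized maximum sidelobe level (dB) over S, the region outside the main beam
    msl_db = 20.0 * np.log10(np.r_[p[:left + 1], p[right:]].max())

    penalty = eps * max(0.0, fnbw - fnbw_desired_deg)           # beamwidth constraint
    null_db = sum(20.0 * np.log10(p[np.argmin(np.abs(np.rad2deg(theta) - a))] + 1e-300)
                  for a in null_angles_deg)                     # null-depth term, penalty factor 1
    return msl_db + penalty + null_db
```

For a 20-element uniform array (`I_half = np.ones(10)`), the normalized MSL is roughly −13 dB, so the fitness equals that value whenever the desired FNBW is not violated; tightening the beamwidth constraint below the achievable FNBW makes the ε-weighted penalty dominate.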

## 3. SPS-JADE Algorithm

#### 3.1. Classic DE Algorithm

The simplicity, high efficiency, and robustness of the classic DE algorithm proposed by Rainer Storn and Kenneth Price make it suitable for solving many nonlinear problems. Firstly, an initial population is randomly generated in the constrained optimization space, where D is the dimension of the variables and NP is the population size. After initialization, the steps to update the population at each iteration consist of three operations: mutation, crossover, and selection.

The mutation operation generates the mutation vectors through a linear combination of the parent vectors and the differential vectors. The expressions of the two most common strategies are given as follows:

where the subscripts ${r}_{1}$, ${r}_{2}$, and ${r}_{3}$ are three distinct integers chosen randomly from $\left\{1,2,\dots ,NP\right\}$ and not equal to i, ${x}_{best,G}$ is the best vector in the Gth generation population, and F is the scaling factor, a constant within [0,1].

The crossover operation generates the trial vectors ${u}_{i,G}$ by binomial crossover between the mutation vectors and the parent vectors, expressed as follows:

where $CR\in [0,1]$ is the crossover rate, and rand is a uniformly distributed number within [0,1] drawn independently for each i and j. ${j}_{rand}$ is an integer randomly chosen from [1, D] for each i, which guarantees the diversity of the search. Trial vectors that violate the boundary constraints are adjusted by

where ${x}_{j}^{low}$ and ${x}_{j}^{up}$ are the lower and upper boundaries of the optimization space, set to 0 and 1, respectively, as in Section 2.

Finally, the selection operation, which is greedy, chooses the better individuals from the trial vectors and the parent vectors and places them into the parent population of the next generation. As stated in Section 2, the purpose of the pattern synthesis is to find the minimum of the objective function. Hence, the operation can be expressed as

where $f(\cdot )$ is the objective function value. A selection operation is a successful update when it satisfies $f({u}_{i,G})<f({x}_{i,G})$; this condition is of great significance in the JADE algorithm and the SPS framework introduced next.
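The three operations above can be assembled into a compact classic DE loop. This is a generic sketch (DE/rand/1/bin on the [0,1] box used in Section 2), not the paper's exact implementation; the function name and defaults are illustrative:

```python
import numpy as np

def de_minimize(f, dim, n_pop=30, F=0.5, CR=0.9, iters=200, seed=0):
    """Minimal classic DE (DE/rand/1/bin) minimizing f on [0,1]^dim."""
    rng = np.random.default_rng(seed)
    pop = rng.random((n_pop, dim))                      # random initial population
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(n_pop):
            # mutation (DE/rand/1): three distinct indices, all different from i
            r1, r2, r3 = rng.choice([k for k in range(n_pop) if k != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            # binomial crossover with a guaranteed j_rand component
            j_rand = rng.integers(dim)
            mask = rng.random(dim) < CR
            mask[j_rand] = True
            u = np.clip(np.where(mask, v, pop[i]), 0.0, 1.0)   # boundary repair
            # greedy selection: keep the trial vector only if it is better
            fu = f(u)
            if fu < fit[i]:
                pop[i], fit[i] = u, fu
    return pop[np.argmin(fit)], fit.min()
```

Running it on a simple convex test function, e.g. `de_minimize(lambda x: np.sum((x - 0.5) ** 2), dim=5)`, drives the objective close to its minimum at 0.5 in every dimension.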

#### 3.2. JADE Algorithm

The JADE algorithm, proposed by Jingqiao Zhang and Arthur C. Sanderson, is an adaptive DE algorithm with an optional external archive that improves on the classic DE algorithm. The improvements mainly concern the selection of the mutation strategy and the adaptive adjustment of the control parameters.

The selection of the mutation strategy strongly affects the balance between search ability and convergence rate, so it is important to choose an appropriate one. The JADE algorithm provides a compromise strategy that gives consideration to both sides, named DE/current-to-pbest/1, which can be expressed as

where the subscripts ${r}_{1}$ and ${r}_{2}$ are randomly chosen with ${r}_{1}\ne {r}_{2}\ne i$. ${x}_{best,G}^{p}$ is a vector randomly selected from the top $p\times 100\%$ of the current parent population sorted from best to worst, where p is a given parameter within [0,1]. ${x}_{{r}_{2},G}^{A}$ is a vector randomly selected from the union of the current parent population and the archive, defined as the set of archived inferior solutions. The archive is one of the improvements of the JADE algorithm. It is initialized as empty, with a maximum size of NP. Whenever a successful update is completed, the replaced vector enters the archive; when the number of vectors in the archive reaches NP, a newly entered one randomly replaces an existing one. This procedure expands the selection space of the difference vector and further enhances the search diversity.
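A sketch of the DE/current-to-pbest/1 step with the archive might look as follows. The function name is illustrative, and the distinctness check between archive and population indices is simplified relative to a careful implementation:

```python
import numpy as np

def current_to_pbest_mutation(pop, fit, archive, i, F, p, rng):
    """v_i = x_i + F*(x_pbest - x_i) + F*(x_r1 - x~_r2), where x_pbest is drawn
    from the best p*100% of the population and x~_r2 from the union of the
    population and the archive of replaced (inferior) vectors."""
    n_pop = len(pop)
    # x_pbest: random choice among the ceil(p*NP) best individuals
    top = np.argsort(fit)[:max(1, int(np.ceil(p * n_pop)))]
    x_pbest = pop[rng.choice(top)]
    # r1 from the population, r1 != i
    r1 = rng.choice([k for k in range(n_pop) if k != i])
    # r2 from population U archive (distinctness enforced only on the index)
    union = list(pop) + list(archive)
    while True:
        r2 = rng.integers(len(union))
        if r2 != i and r2 != r1:
            break
    return pop[i] + F * (x_pbest - pop[i]) + F * (pop[r1] - union[r2])
```

Note that with F = 0 the mutant reduces to the parent vector itself, which is a quick sanity check on the formula.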

It can be seen in Equation (12) that the scaling factor F is no longer a constant; instead, it is generated independently for each individual of the population at each generation. Similarly, the crossover rate CR is also a variable. These two parameters are generated by

and truncated to [0,1] (in particular, ${F}_{i,G}$ is regenerated if ${F}_{i,G}\le 0$), where for each i, $rand{c}_{i}(a,b)$ and $rand{n}_{i}(a,b)$ are random numbers generated by a Cauchy distribution, which serves to diversify the scaling factors, and a Gaussian distribution, respectively, with location parameter a and scale parameter b. ${\mu }_{F,G}$ and ${\mu }_{CR,G}$ are initialized to the given parameters ${\mu }_{F,0}$ and ${\mu }_{CR,0}$, respectively, and updated after the selection operation at each generation:

where $\mathrm{mean}(\cdot )$ denotes the numerical average and c is a given constant within [0,1]. ${S}_{F}$ and ${S}_{CR}$ are the sets of scaling factors and crossover rates of all individuals that have completed successful updates in the current population. They guide the selection of the control parameters at the next generation and realize the adaptive adjustment that overcomes the lack of adaptability to the optimization problem. This improvement therefore helps to adjust the mutation and crossover operations in a timely manner and to solve pattern synthesis problems with a high convergence rate, shortening the search time.
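The parameter self-adaptation can be sketched as below. One detail worth noting: the original JADE paper updates μ_F with the Lehmer mean of S_F (sum of squares over sum) rather than the plain average; the arithmetic mean described in the text is used here, and all names are illustrative:

```python
import numpy as np

def adapt_parameters(mu_F, mu_CR, S_F, S_CR, c=0.1, rng=None):
    """One generation of JADE-style parameter adaptation: move the locations
    mu_F, mu_CR toward the means of the successful values S_F, S_CR, and
    return samplers that draw F (Cauchy) and CR (Gaussian) for the next
    generation, truncated to (0,1] and [0,1] respectively."""
    if rng is None:
        rng = np.random.default_rng()

    if S_F:
        mu_F = (1 - c) * mu_F + c * float(np.mean(S_F))    # original JADE: Lehmer mean
    if S_CR:
        mu_CR = (1 - c) * mu_CR + c * float(np.mean(S_CR))

    def sample_F():
        F = rng.standard_cauchy() * 0.1 + mu_F             # Cauchy(mu_F, 0.1)
        while F <= 0.0:                                    # regenerate nonpositive draws
            F = rng.standard_cauchy() * 0.1 + mu_F
        return min(F, 1.0)                                 # truncate to (0, 1]

    def sample_CR():
        return float(np.clip(rng.normal(mu_CR, 0.1), 0.0, 1.0))  # Gaussian(mu_CR, 0.1)

    return sample_F, sample_CR, mu_F, mu_CR
```

The heavy-tailed Cauchy draws occasionally produce large scaling factors, which is exactly the diversification effect the text describes.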

#### 3.3. SPS-JADE Algorithm with SPS Framework

Although the JADE algorithm improves the convergence performance, problems may arise in high-dimensional cases such as pattern synthesis. As the optimization dimension increases, the solution space grows exponentially, resulting in a sharp increase in the number of local optima, and the high convergence rate can cause the global search to become trapped in them. Consequently, it becomes harder and takes longer to find new optimal solutions in the population when the DE algorithm is applied to the pattern synthesis of large-scale antenna arrays with many elements. This phenomenon is called stagnation. To overcome it, the SPS framework, which provides a timely response when stagnation occurs, is utilized here to further improve the optimization ability of the DE algorithm and the efficiency of the pattern synthesis.

The core of the SPS framework is to use different parent vectors in the mutation and crossover operations. In the DE algorithm, when stagnation occurs, a population individual cannot be successfully updated for a long time. In this case, the last NP vectors that completed successful updates in the history, dubbed the successful parents, are chosen as the parent vectors instead of the vectors selected at the previous generation. The algorithm can then be guided out of stagnation by successful parents with a higher search potential. The criterion for detecting stagnation is the total number of selection operations performed for each population individual over the period during which that individual continuously fails to be updated: when this number exceeds the given stagnation tolerance Q, stagnation has occurred, and the algorithm uses the successful parents to update the population. This novel parent-selection method keeps the algorithm exploring better solutions efficiently without reducing the convergence rate, which shows its potential for high-dimensional optimization problems.
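The SPS bookkeeping described above might be sketched as follows. The class name and the choice to seed the success queue with the initial population are illustrative assumptions, not details taken from this text:

```python
from collections import deque

class SPSParentSelector:
    """Keep the last NP successfully updated vectors in a queue; once an
    individual has failed to update for more than Q consecutive selection
    operations, draw its parent from that queue instead of the population."""

    def __init__(self, initial_pop, Q):
        self.Q = Q
        # success queue of maximum size NP, seeded with the initial population
        self.successful = deque(initial_pop, maxlen=len(initial_pop))
        self.fail_count = [0] * len(initial_pop)

    def parent(self, pop, i, rng):
        if self.fail_count[i] > self.Q:                  # stagnation detected
            return self.successful[rng.integers(len(self.successful))]
        return pop[i]                                    # normal DE parent

    def report(self, i, success, new_vec=None):
        """Call after each selection operation for individual i."""
        if success:
            self.fail_count[i] = 0
            if new_vec is not None:
                self.successful.append(new_vec)          # record the winning trial vector
        else:
            self.fail_count[i] += 1
```

In a DE loop, `parent(...)` replaces the direct read of `pop[i]` in the mutation and crossover steps, and `report(...)` is called right after each greedy selection.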

Theoretically, the SPS framework can be applied to general DE algorithms. In this paper, the JADE algorithm introduced above is combined with the SPS framework to form the SPS-JADE algorithm, which is then applied to the pattern synthesis of the linear antenna arrays described in Section 2. The flowchart of the SPS-JADE algorithm is shown in Figure 2.

## 5. Conclusions

In this paper, an SPS-JADE algorithm for antenna array pattern synthesis is designed. It addresses the shortcomings of the classic DE algorithm by making the control parameters adaptive and by providing a better mutation strategy. In particular, the SPS framework used in the algorithm solves the stagnation problem that occurs in various DE algorithms. The SPS-JADE algorithm is applied to the pattern synthesis of a 40-element linear symmetric antenna array with sidelobe suppression and null depth, optimizing the excitation amplitudes of the array elements under a beamwidth constraint. Using the SPS-JADE algorithm, the normalized MSL of the antenna array can be reduced to around −38.45 dB, and to around −38.25 dB with the null depth below −130 dB, both the lowest among the algorithms discussed in this paper. The small averages and standard deviations of the MSLs mean that the SPS-JADE algorithm can stably obtain satisfactory results. Furthermore, the pattern synthesis efficiency of the SPS-JADE algorithm is compared with that of the JADE algorithm to validate the effect of the SPS framework in speeding up the search for the global optimum. In conclusion, these simulation results show that, for antenna array pattern synthesis, the SPS-JADE algorithm performs better in terms of global search ability, convergence rate, and robustness, demonstrating its great potential for the design of large-scale antenna arrays.