2.2.1. Iterative Reweighted l1 Minimization
The synthesis of a sparse cross array can be regarded as the following l0-norm minimization problem:

minimize ‖w‖0 subject to BP.C,(9)

where w = [wm wn]; wm and wn are the weight matrices of the transmitting and receiving arrays; ‖w‖0 is the l0-norm of the w matrix, i.e., the number of non-zero elements of w; and BP.C represents the BP constraints, including the SLP, the MLW (at −3 dB), and the beam pattern shape shown in Figure 5 [18]. The solution w is a sparse matrix: non-zero elements correspond to active sensors, and zero elements to inactive sensors.
This optimization problem is difficult to solve directly because the l0-norm is non-convex. According to CS theory, the optimization problem of Equation (9) can be approximated by the following iterative reweighted l1 minimization problem, which can be handled by a convex optimization algorithm [32]:

minimize ‖wi ◦ ρi‖1 subject to BP.C,(10)

ρi = 1/(|wi−1| + ε) (elementwise),(11)

where ‖w‖1 is the l1-norm of the w matrix, i.e., the sum of the absolute values of all elements of w; wi ◦ ρi is the Hadamard product of the two matrices wi and ρi; i is the iteration index; and ρ is the weighting coefficient determined by the optimization result of the previous iteration, which makes the l1 minimization problem of Equation (10) gradually approximate the l0 minimization problem of Equation (9). Moreover, in the l1 minimization, the elements of w cannot be exactly zero; they only approach zero, and elements with magnitude less than 1 × 10−6 can be considered zero elements [18]. The parameter ε is set slightly less than the minimum value of w, which ensures that the zero elements are likely to become non-zero in the next iteration. In the first iteration, ρ1 is set to a matrix of all ones, and CVX [33], a MATLAB package for disciplined convex programming, is used to solve the l1 minimization problem of Equation (10) to obtain w1 and to determine ε, which is set slightly less than the minimum value of w1. In each subsequent iteration, ρi is obtained using Equation (11), and the l1 minimization problem of Equation (10) is solved again to obtain wi; the process repeats until the sparse array results converge.
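The paper solves Equation (10) with CVX in MATLAB. As a language-agnostic illustration of the reweighting idea only, the following Python sketch applies the same update ρ = 1/(|w| + ε) to a generic sparse recovery problem. A simple proximal-gradient (ISTA) solver stands in for CVX, and the measurement matrix, signal sizes, regularization weight lam, and the fixed eps are illustrative assumptions, not values from the paper (which sets ε slightly below the minimum of w1 and enforces beam pattern constraints instead of a data-fit term).

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: proximal operator of the weighted l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def weighted_l1_solve(A, b, rho, lam=0.01, n_steps=400):
    # ISTA for: minimize 0.5*||A w - b||^2 + lam * || rho ∘ w ||_1
    # (a stand-in for the constrained CVX solve of Equation (10)).
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_steps):
        grad = A.T @ (A @ w - b)
        w = soft_threshold(w - grad / L, lam * rho / L)
    return w

def reweighted_l1(A, b, n_outer=6, eps=0.01):
    # Iterative reweighting: rho_1 is all ones, then rho_i = 1/(|w_{i-1}| + eps),
    # so near-zero elements receive large weights and are driven to exact zero.
    rho = np.ones(A.shape[1])
    for _ in range(n_outer):
        w = weighted_l1_solve(A, b, rho)
        rho = 1.0 / (np.abs(w) + eps)
    return w

# Toy sparse recovery instance (illustrative sizes).
rng = np.random.default_rng(0)
m, n, k = 30, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
w_true = np.zeros(n)
w_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 3.0, k)
b = A @ w_true

w_hat = reweighted_l1(A, b)
active = np.abs(w_hat) > 1e-6   # the paper's threshold for "active" elements
print("active elements:", int(active.sum()))
```

Each outer pass sharpens the sparsity: elements that were small in the previous iterate are penalized roughly in proportion to 1/|w|, which is what pushes the l1 surrogate toward the l0 objective.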
CVX is a modeling framework for disciplined convex problems, including linear and quadratic programs, semidefinite programs, l1-norm problems, etc. CVX is implemented in MATLAB and conveniently solves constrained norm minimization, entropy maximization, and many other convex optimization problems. A general convex optimization problem can be expressed in the following form:

minimize f0(x) subject to fi(x) ≤ 0, i = 1, …, M,

where x is the optimization variable, f0 is the objective function, and f1, …, fM are the constraint functions.
In particular, semidefinite programs can be written in the standard form

minimize ⟨C, x⟩ subject to ⟨Ai, x⟩ = bi, i = 1, …, m, x ⪰ 0,(12)

where C, Ai, and bi are given matrices and vectors. The dual problem associated with Equation (12) is

maximize Σi bi yi subject to Σi yi Ai + z = C, z ⪰ 0,

where yi and z are the dual variables.
SDPT3 [34] is the default CVX solver for convex optimization problems. SDPT3 implements a primal-dual interior-point algorithm based on the path-following paradigm. In each iteration, a predictor search direction is calculated to decrease the duality gap as much as possible. The solver uses two search directions: the Helmberg–Kojima–Monteiro (HKM) direction [35,36,37] and the Nesterov–Todd (NT) direction [38]. The algorithm then generates a Mehrotra-type corrector step [39] to approach the central path. It does not impose any neighborhood restrictions and attempts to achieve feasibility and optimality simultaneously.
x0, y0, and z0 are initialized in the first iteration. Suppose the variables in the current and next iterations are (x, y, z) and (x+, y+, z+), respectively, and the step-length parameters in the current and next iterations are (α, β, γ) and (α+, β+, γ+). Set γ0 = 0.9. The iteration stops when the relative duality gap (relgap) is less than 1 × 10−8.
(x+, y+, z+) are set as follows:

x+ = x + α Δx, y+ = y + β Δy, z+ = z + β Δz,

where (Δx, Δy, Δz) are the search directions. The primal step length is α = min(1, −γ/Emin(x−1Δx)) when Emin(x−1Δx) < 0 and α = 1 otherwise, where Emin(x−1Δx) is the minimum eigenvalue of x−1Δx; the dual step length β is obtained analogously from z−1Δz. Set γ+ = 0.9 + 0.09 min(α, β).
The search directions (Δx, Δy, Δz) are obtained from the symmetrized Newton equation with respect to an invertible matrix P. If semidefinite blocks are present, the HKM direction is selected; otherwise, the NT direction is selected. The HKM direction corresponds to P = z1/2; the NT direction corresponds to P = N−1, where N satisfies NTzN = N−1xN−T.
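SDPT3 itself handles semidefinite blocks, but the mechanics described above — Newton search directions, fraction-to-the-boundary step lengths with the parameter γ = 0.9, and a relative-duality-gap stopping rule — can be sketched on a small linear program. Everything below (the example LP, the fixed centering parameter sigma, the all-ones starting point) is an illustrative assumption, not SDPT3's actual implementation, which adds Mehrotra-type correctors and adaptive parameters.

```python
import numpy as np

def _step(v, dv, gamma):
    # Fraction-to-the-boundary step length: keep v + step*dv strictly positive.
    neg = dv < 0
    if not neg.any():
        return 1.0
    return min(1.0, gamma * float(np.min(-v[neg] / dv[neg])))

def lp_ipm(A, b, c, tol=1e-8, gamma=0.9, sigma=0.2, max_iter=100):
    # Infeasible-start primal-dual path-following method for
    #   minimize c^T x  subject to  A x = b, x >= 0.
    m, n = A.shape
    x, y, z = np.ones(n), np.zeros(m), np.ones(n)
    relgap = x @ z / (1.0 + abs(c @ x))
    for _ in range(max_iter):
        mu = sigma * (x @ z) / n          # centering target on the central path
        rp = b - A @ x                    # primal residual
        rd = c - A.T @ y - z              # dual residual
        d = x / z
        # Newton direction via the normal equations A diag(d) A^T dy = rhs.
        M = (A * d) @ A.T
        rhs = rp + A @ (d * (rd + z - mu / x))
        dy = np.linalg.solve(M, rhs)
        dx = d * (A.T @ dy + mu / x - z - rd)
        dz = mu / x - z - (z / x) * dx
        # Separate primal and dual step lengths, as in the text (gamma = 0.9).
        alpha = _step(x, dx, gamma)
        beta = _step(z, dz, gamma)
        x, y, z = x + alpha * dx, y + beta * dy, z + beta * dz
        relgap = x @ z / (1.0 + abs(c @ x))
        if relgap < tol and np.linalg.norm(rp) < tol * (1 + np.linalg.norm(b)):
            break
    return x, relgap

# Tiny example: minimize -x1 - 2*x2 subject to x1 + x2 + x3 = 4 (x3 is a slack).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
c = np.array([-1.0, -2.0, 0.0])
x_opt, relgap = lp_ipm(A, b, c)
print("objective:", c @ x_opt, "relgap:", relgap)
```

The duality gap xᵀz shrinks by roughly the centering factor per iteration, so the relgap < 1e−8 criterion is met in a few dozen iterations on a problem of this size.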
Problems solved by CVX must be disciplined convex problems, and CVX is not efficient for very large problems (for example, a very large sparse planar array synthesis). For the problem considered in this paper, CVX is an effective solution.
2.2.2. Perturbed Convex Optimization
To enhance the degrees of freedom of the candidate sensor positions, a PCO method is proposed to optimize the sparse array synthesis. The beamforming can be approximated as in Equations (19) and (20) using a first-order Taylor expansion [23], where BP′OT denotes the derivative of BPOT with respect to x; the position perturbations are each constrained to be smaller than dmin/2 in magnitude; and dmin is the minimum distance between sensors. On the basis of the first-order Taylor expansion, the PCO method optimizes the position perturbation and the weight simultaneously. For the transmitting array, the following PCO problem is solved to find the optimal position perturbation and weight.
where vi = [wi si]; si = wi ∙ △xi; and i is the iteration index. The matrix vi contains the position perturbation and weight information. The PCO method obtains the minimum number of active array sensors while the BP satisfies the constraints and the position perturbations are confined within dmin/2. Moreover, in the sonar system, the sensor positions are fixed, but the weights can vary under different conditions. To obtain better BP performance under different conditions (transmitting frequency and δ), Equation (22) can be solved independently at each transmitting frequency and each δ.
Through this method, sensors can be placed at continuous positions instead of being restricted to discrete grid points. The proposed method thus provides more degrees of freedom for the sensors.
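Equations (19) and (20) are not reproduced here, but the approximation they rest on — a beam pattern is nearly linear in small sensor-position perturbations — is easy to check numerically. The sketch below uses an illustrative 10-sensor, half-wavelength line array (not the paper's array) and compares the exact far-field pattern of a perturbed array against its first-order Taylor expansion around the nominal positions.

```python
import numpy as np

def beam_pattern(x, w, u, k):
    # Far-field pattern of a line array: BP(u) = sum_m w_m exp(j k x_m u),
    # with u = sin(theta) and wavenumber k.
    return np.exp(1j * k * np.outer(u, x)) @ w

def beam_pattern_taylor(x, w, u, k, dx):
    # First-order Taylor expansion in the position perturbation dx:
    # BP(x + dx) ≈ BP(x) + sum_m (dBP/dx_m) dx_m.
    phase = np.exp(1j * k * np.outer(u, x))
    dBP_dx = 1j * k * u[:, None] * phase * w   # derivative w.r.t. each x_m
    return phase @ w + dBP_dx @ dx

wavelength = 1.0
k = 2 * np.pi / wavelength
x = np.arange(10) * wavelength / 2             # nominal half-wavelength grid
w = np.ones(10) / 10                           # uniform weights
u = np.linspace(-1, 1, 201)                    # sin(theta) axis

rng = np.random.default_rng(1)
dmin = wavelength / 2
dx = rng.uniform(-1, 1, 10) * 0.02 * dmin      # perturbation well below dmin/2

exact = beam_pattern(x + dx, w, u, k)
approx = beam_pattern_taylor(x, w, u, k, dx)
err = np.max(np.abs(exact - approx))
print("max first-order error:", err)
```

The residual of the first-order model is second order in the perturbation, which is why constraining the perturbations to a fraction of dmin keeps the linearized PCO problem an accurate surrogate for the true beam pattern.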
As shown in Equation (7), the beamforming of the cross array can be regarded as the product of the transmitting beamforming and the receiving beamforming. In addition, the transmitting beamforming and receiving beamforming can be regarded as the beamforming of two linear arrays. Therefore, sparse cross array synthesis can be divided into two sparse linear array syntheses. The flow diagram of a sparse cross array synthesis via the PCO method is shown in
Figure 6. The procedure is described as follows:
The transmitting array of M sensors is considered. The transmitting frequency is set from f1 to fJ, and δ is set from δ1 to δA (−δmin to +δmax). In the first iteration, the weighting matrix ρ1 is set to a matrix of all ones, and CVX [33] is employed to solve the PCO problem of Equation (21) to obtain △x1 and w1, which make the BPOT satisfy the constraints over the entire frequency range, as well as the near-field and far-field conditions. ϵ is then determined to be slightly less than the minimum value of w1. In the following iterations, the PCO problem is solved to obtain △xi and wi, iterating until the number of active sensors remains unchanged for five iterations. At that point, the iterations are concluded, and the positions and weight values of the sparse transmitting array are considered optimal. Next, Equation (15) is applied to optimize the BP performance under different conditions.
The receiving array of N sensors is synthesized in the same way as the transmitting array. Since the PCO method is a deterministic optimization, the results of the sparse receiving array are identical to those of the transmitting array when M = N under the same conditions.
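The stopping rule above — terminate once the active-sensor count is unchanged for five consecutive iterations — can be isolated from the solver itself. In the sketch below, solve_pco is a hypothetical stand-in for the CVX solve of the PCO problem (stubbed with a scripted sequence of weight vectors); only the bookkeeping around it follows the procedure in the text, including the 1 × 10−6 magnitude threshold for counting an element as zero.

```python
import numpy as np

ZERO_TOL = 1e-6   # magnitude below which an element counts as zero (Section 2.2.1)

def count_active(w):
    # Number of active sensors = non-zero weight elements.
    return int(np.sum(np.abs(w) > ZERO_TOL))

def synthesize(solve_pco, w0, max_iter=50, patience=5):
    # Iterate the PCO solve until the active-sensor count has stayed the
    # same for `patience` consecutive iterations.
    w = w0
    prev, stable = None, 0
    for i in range(1, max_iter + 1):
        w = solve_pco(w, i)
        n_active = count_active(w)
        if n_active == prev:
            stable += 1
        else:
            prev, stable = n_active, 1
        if stable >= patience:
            break
    return w, i

# Stub solver: a scripted sparsification trajectory standing in for CVX.
counts = [10, 8, 6, 5, 5, 5, 5, 5, 5, 5]
def stub_solver(w, i):
    n = counts[min(i - 1, len(counts) - 1)]
    out = np.zeros(10)
    out[:n] = 1.0
    return out

w_final, n_iter = synthesize(stub_solver, np.ones(10))
print("stopped after iteration", n_iter, "with", count_active(w_final), "active sensors")
```

With this scripted trajectory, the count first reaches 5 at iteration 4 and the loop exits at iteration 8, once the count has been observed unchanged five times in a row.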