A Hybrid Non-Convex Compressed Sensing Approach for Array Diagnosis Using Sparse Promoting Norm with Perturbation Technique

Abstract: The use of ℓp (0 < p < 1) norm minimization can improve array diagnosis performance, provided that the issue of local minima associated with its non-convex nature is properly handled. In order to overcome this deficiency, a hybrid method combining random perturbation and non-convex optimization is investigated in this paper. Although it incurs a higher computational time, the trade-off between an accurate diagnosis and the computational burden appears to be acceptable. Theoretical analysis and simulation results demonstrate that the proposed method overcomes this disadvantage effectively and achieves better performance than standard ℓ1 norm minimization with a smaller number of far-field measurements, suggesting that the proposed method can be used to improve the performance of array diagnosis.


Introduction
Array antennas are widely used in radar, remote sensing, and mobile and satellite communications. However, deteriorating working conditions, coupled with ever-growing system complexity, inevitably increase the probability of defective array elements. Failed sensors have a negative impact on the radiation performance, in particular deteriorating the beam-forming characteristics and lowering the accuracy of direction-of-arrival (DOA) estimation [1]. The cost associated with life-cycle maintenance can exceed the original capital investment as the functionality of antennas and other equipment degrades with age [2]. Therefore, before repairing a damaged array antenna, it is necessary to determine the number and positions of the broken elements via array diagnosis methods, a task of significant theoretical and practical importance.
Approaches for finding the impaired elements have attracted considerable attention from scholars and engineers worldwide, from the academic community to the industrial field. Specifically, the back propagation method [3], matrix method [4], neural networks [5], case-based reasoning [6], genetic algorithms [7], the source reconstruction method [8] and the mutual coupling technique [9] can be found in the open literature. Recently, Compressed Sensing/Sparse Recovery (CS/SR) based approaches have also been proposed in the framework of array diagnosis [10-13]. It is worth noting that, in general, accuracy and efficiency are the two major considerations in developing diagnosis approaches. Therefore, improving the success rate of diagnosis with an acceptable decrease in efficiency, using either far-field or near-field measurements, has always been a central topic in the array diagnosis community. For instance, re-weighted ℓ1 norm minimization has been applied in both the near-field case [14] and the far-field case [15]. In [16] a fast diagnosis method for antenna arrays using a small number of far-field measurements is addressed with simulated and experimental data. Departing from the classical CS-based paradigm, reliable diagnosis methods for both linear and planar arrays using Bayesian Compressive Sensing (BCS) are also proposed in [17,18].
The aim of this paper is to increase the success rate of diagnosis with a smaller number of far-field measurements using a hybrid method of random perturbation and non-convex optimization, overcoming the issue of local minima associated with the intrinsic non-convex nature of ℓp (0 < p < 1) norm minimization. The better performance of the ℓp (0 < p < 1) norm compared to the ℓ1 norm is not new in the open literature [19-22]. However, the application of non-convex norms in array diagnosis degrades the diagnosis performance unless the problem of solutions being trapped in local minima is properly solved. In this article, extensive numerical experiments are carried out, and the results confirm that the proposed method overcomes this problem effectively and promotes sparser solutions than standard ℓ1 norm minimization, at an acceptable increase in computational cost.

Description of the Method
Let us consider a Uniform Linear Array (ULA) of N isotropic elements radiating in free space, with inter-element distance d = λ/2, where λ is the free-space wavelength; see Figure 1. The goal is to identify the presence of faulty elements using a small number of far-field measurements. Following the approach outlined in [10], we suppose that the excitations of the failure-free array are available, as well as the corresponding far-field data at M measurement points. The failure-free array (i.e., the gold array) will be denoted as the reference array. Let y^r ∈ C^M be the vector collecting the far field of the failure-free array (where the superscript r stands for reference), and x^r ∈ C^N the vector of its excitations. The field radiated by the Antenna Under Test (AUT) with faulty elements is collected at the same spatial positions in the vector y^aut ∈ C^M, while the vector of its excitations is denoted as x^aut ∈ C^N. Now, let us consider the linear system y = Ax, wherein x = x^r − x^aut and y = y^r − y^aut are the innovation vectors and A ∈ C^{M×N} is the radiation matrix relating the excitations to the far-field data.
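The innovation system above can be sketched numerically. The snippet below builds a plausible radiation matrix for a half-wavelength-spaced ULA observed at M random far-field angles (the measurement angles and the failed-element indices are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def radiation_matrix(N, M, d=0.5, seed=0):
    """Far-field radiation matrix of a ULA with spacing d (in wavelengths).

    A[m, n] = exp(j * 2*pi * d * n * sin(theta_m)); isotropic elements and
    M randomly drawn observation angles theta_m are assumed here.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi / 2, np.pi / 2, M)   # measurement angles
    n = np.arange(N)
    return np.exp(2j * np.pi * d * np.outer(np.sin(theta), n))

# Innovation system y = A x with x = x^r - x^aut sparse
A = radiation_matrix(N=32, M=10)
x_ref = np.ones(32, dtype=complex)      # failure-free (reference) excitations
x_aut = x_ref.copy()
x_aut[[3, 11, 27]] *= 0.4               # three partially failed elements (assumed indices)
x = x_ref - x_aut                       # non-zero only at the faulty positions
y = A @ x                               # far-field innovation data
```

Because the healthy elements cancel in the difference, x is S-sparse and only the faulty elements contribute to y.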
Since the number of faulty elements S is much smaller than the total number of array elements N (as usually happens), we obtain an equivalent problem involving a highly sparse array, in which only the faulty elements of the original array radiate. In other words, locating the failed array elements is transformed into accurately reconstructing a sparse vector with high probability. Once the vector is recovered, the positions and number of its non-zero entries give the locations and number of the failed elements, respectively.
A more general scenario is that the far-field measurement data are contaminated by complex white Gaussian noise e with zero mean and variance σ², i.e., y = Ax + e. This under-determined system is usually solved by the following optimization strategy: minimize ||x||_p subject to ||Ax − y||_2 ≤ ε, wherein ||x||_p is an objective function promoting sparsity, the subscript p stands for the ℓp norm, and ε is related to the uncertainty level affecting the data.
In the open literature the most widely adopted sparsity-promoting function is the ℓ1 norm of the vector x [23,24]. Under certain conditions it is equivalent to the so-called ℓ0 norm minimization [25]. These conditions involve the number of measurements, which must be large enough to ensure an accurate reconstruction of the unknown vector.
An interesting question regards the possibility of a further reduction of the number of measurements using a different norm as the sparsity-promoting function. In this paper, instead of the ℓ1 norm we focus our attention on a non-convex norm of the vector x = (x_1, x_2, ..., x_N), defined as ||x||_p = (Σ_{n=1}^{N} |x_n|^p)^{1/p} with 0 < p < 1. In this case, the minimization problem (3) is not convex. However, non-convex minimization arises in many different branches of applied physics, and a large number of local minimizers have been proposed. All these algorithms explore the search space, picking up elements of the space according to elaborately designed rules. In our study we focus on the MCCA (Minimizing a Concave Function via a Convex Function Approximation) algorithm proposed in [26] as the local minimizer. The algorithm locally replaces the original concave objective function by a convex one. Given a point t_0 on the concave function, a quadratic convex function g(t) is built from the second-order Taylor series expansion around t_0, with the sign of the second-order term reversed. The position of the minimum of this quadratic (convex) function, call it t_1, is used to build a new quadratic function. In [26] it is shown that the sequence of minima of these quadratic functions converges to a fixed point that reduces the original concave objective. A pseudo-code of the algorithm is also reported in that paper.
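In symbols, the surrogate step described above can be written as follows. This is a sketch of the update implied by the description, not the authors' exact formulation: around the current iterate t_j, the concave objective f is replaced by its second-order Taylor expansion with the sign of the second-order term reversed, and the minimizer of that convex quadratic becomes the next iterate.

```latex
g_j(t) = f(t_j) + f'(t_j)\,(t - t_j) + \tfrac{1}{2}\,\lvert f''(t_j)\rvert\,(t - t_j)^2,
\qquad
t_{j+1} = \arg\min_t g_j(t) = t_j - \frac{f'(t_j)}{\lvert f''(t_j)\rvert}.
```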
It is also worth noting that there is no magic algorithm able to efficiently solve every minimization problem, as stated by the famous no-free-lunch theorem [27]. For ℓp (0 < p < 1) norm minimization, the major drawback is that the solution can be trapped in local minima. The relevant consequence is that a local minimizer alone does not guarantee a correct solution of the minimization problem, which adversely affects the diagnosis performance. Therefore, the minimization algorithm must be tailored to the problem.
In order to overcome this defect of MCCA, a hybrid random search-convex local minimizer method is proposed. The use of hybrid algorithms is quite common in minimization problems and consists of using a local minimizer whose starting point is selected at random. In our specific minimization problem, it is possible to restrict the search space to a low-dimensional space. In fact, given any solution of the problem Ax = y, the sparsest solution belongs to the null space of matrix A shifted by that solution. Consequently, a possible strategy is to find one solution of the minimization problem and then perform a random search in the (shifted) null space of A. This suggests the following strategy:
(1) Find the least squares solution of Ax = y, given by the pseudo-inverse of matrix A. Let s_0 be this initial starting point, which is associated with ℓ2 norm minimization [28].
(2) Use MCCA to generate an initial solution vector x_0.
(3) Introduce a random vector n_0, belonging to the null space of matrix A, to perturb x_0, obtaining a new starting point s_1 = x_0 + n_0.
(4) Use MCCA to get an updated solution vector x_1.
(5) If the ℓp (0 < p < 1) norm of x_1 is not larger than that of x_0, i.e., ||x_1||_p ≤ ||x_0||_p, accept x_1; otherwise keep x_0 as the new starting point s_2.
(6) Repeat steps (1)-(5) until the stopping criterion Q is reached, where Q denotes the number of random starting points. The value of Q should be a trade-off between diagnosis accuracy and computational burden.
In our approach the critical factor is the design of the perturbation n, formulated as n = Fu, where the columns of F span the null space of matrix A and u ∈ R^{N−r} is a random noise vector. One feasible way of generating F and u is the following. Let (U, Σ, V) be the singular system of matrix A, where U and V are matrices whose columns are, respectively, the left and right singular vectors of A, and Σ is a diagonal matrix whose diagonal elements σ_k (k = 1, ..., r) are the singular values of the matrix, r being its rank. According to matrix theory [28], the last N − r columns of V form a basis for the null space of A and can therefore be used to constitute F. In addition, the entries of u are random variables uniformly distributed in [−γ x_max, γ x_max] (hence with zero mean), where x_max is the maximum absolute entry of the current solution vector and γ is an empirically determined non-negative number.
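The SVD-based construction of F and of the perturbation n = Fu can be sketched as follows (a minimal NumPy sketch of the step just described; the rank tolerance is an assumption):

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Last N - r right singular vectors of A span null(A), cf. [28]."""
    _, s, Vh = np.linalg.svd(A)               # full SVD: Vh is N x N
    r = int(np.sum(s > tol * s[0]))           # numerical rank
    return Vh.conj().T[:, r:]                 # F has shape (N, N - r)

def null_space_perturbation(F, x_current, gamma=2.0, rng=None):
    """n = F @ u with u uniform on [-gamma*x_max, gamma*x_max] (zero mean)."""
    rng = np.random.default_rng() if rng is None else rng
    x_max = np.max(np.abs(x_current))
    u = rng.uniform(-gamma * x_max, gamma * x_max, F.shape[1])
    return F @ u

# The perturbation never changes the data fit: A @ (x + n) equals A @ x
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 32)) + 1j * rng.standard_normal((10, 32))
F = null_space_basis(A)
x0 = rng.standard_normal(32)
n0 = null_space_perturbation(F, x0, rng=rng)
```

Since n lies in null(A), the perturbed starting point remains consistent with the measured data while potentially escaping the basin of a local minimum.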
The procedure of the proposed array diagnosis method of random search-convex local minimizer is summarized in Algorithm 1.

Algorithm 1
The Hybrid Array Diagnosis Method of Random Search-Convex Local Minimizer
input: measurement matrix A, far-field data y
output: recovered sparse vector x
initialization: number of random starting points Q, counter j = 0, non-negative number γ = 2
for j = 0 to Q − 1 do
  1: set an initial starting point s_j
  2: generate an initial solution vector x_j via MCCA using s_j
  3: perturb x_j to get a new starting point s_{j+1} = x_j + n_j, where n_j = F u_j
  4: get an updated solution vector x_{j+1} via MCCA using the new starting point s_{j+1}
  5: if ||x_{j+1}||_p ≤ ||x_j||_p, accept x_{j+1}; otherwise accept x_j to replace s_j
  6: j = j + 1
end for
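The loop of Algorithm 1 can be sketched in code. Here `local_minimizer(A, y, s)` is a hypothetical stand-in for MCCA (its signature is an assumption, not the paper's interface); it should return a solution of Ax = y started from s. Comparing Σ|x_i|^p is equivalent to comparing ||x||_p, since the p-th root is monotone.

```python
import numpy as np

def hybrid_diagnosis(A, y, local_minimizer, p=0.5, Q=5, gamma=2.0, seed=0):
    """Sketch of Algorithm 1 (random search + convex local minimizer)."""
    rng = np.random.default_rng(seed)
    s = np.linalg.pinv(A) @ y                  # step 1: least-squares start (l2)
    _, sv, Vh = np.linalg.svd(A)
    r = int(np.sum(sv > 1e-10 * sv[0]))
    F = Vh.conj().T[:, r:]                     # null-space basis of A
    x = local_minimizer(A, y, s)               # step 2: initial solution
    lp = lambda v: np.sum(np.abs(v) ** p)      # l_p objective (p-th power)
    for _ in range(Q):                         # Q random starting points
        x_max = np.max(np.abs(x))
        u = rng.uniform(-gamma * x_max, gamma * x_max, F.shape[1])
        s_new = x + F @ u                      # step 3: perturb within null(A)
        x_new = local_minimizer(A, y, s_new)   # step 4: local re-minimization
        if lp(x_new) <= lp(x):                 # step 5: keep the sparser vector
            x = x_new
    return x
```

Any candidate returned by the loop satisfies Ax = y by construction, because both the pseudo-inverse start and every perturbation preserve the data fit.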

Numerical Simulations
As a first example, let us consider the same ULA with N = 32 radiating elements reported in [29], to which the reader can refer for more details on the simulation procedure. All the elements have unit nominal excitations. We suppose that the AUT is affected by S = 3 faulty elements having both amplitude and phase failures, randomly distributed among the N elements, and that a number of far-field measurements M = 10 is considered. Each faulty excitation can be modeled as x = (1 − δ_A)e^{jδ_φ}, where 0 < δ_A < 1 and 0° < δ_φ < 10° represent the amplitude and phase failures, respectively. The far-field data are corrupted by additive white Gaussian noise with a 35 dB signal-to-noise ratio.
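The fault and noise model of this example can be sketched as follows (the random far-field sampling angles and the radiation-matrix form are assumptions carried over from the ULA geometry, not details given by the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, S, M, snr_db = 32, 3, 10, 35

# Partial failures: x = (1 - dA) * exp(j*dphi), 0 < dA < 1, 0 < dphi < 10 deg
faulty = rng.choice(N, S, replace=False)
dA = rng.uniform(0, 1, S)
dphi = np.deg2rad(rng.uniform(0, 10, S))
x_aut = np.ones(N, dtype=complex)
x_aut[faulty] = (1 - dA) * np.exp(1j * dphi)

# Innovation far field at M random angles; d = lambda/2 gives phase 2*pi*d = pi
theta = rng.uniform(-np.pi / 2, np.pi / 2, M)
A = np.exp(1j * np.pi * np.outer(np.sin(theta), np.arange(N)))
y_clean = A @ (np.ones(N) - x_aut)

# Additive complex white Gaussian noise at 35 dB SNR
noise_power = np.mean(np.abs(y_clean) ** 2) / 10 ** (snr_db / 10)
e = np.sqrt(noise_power / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = y_clean + e
```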
As discussed in the previous section, in this paper the non-convex nature of ℓp (0 < p < 1) norm minimization is alleviated by a random search combined with a convex local minimizer. The random starting points explored in the procedure are selected randomly in the shifted null space N(A) of matrix A. In order to investigate the effectiveness of the proposed method, in Figures 2-4 the Cumulative Distribution Function (CDF) of the Mean Square Error (MSE) of the solutions retrieved by the proposed random search-convex local minimizer is plotted for three representative non-convex norms, i.e., a strongly non-convex norm (p = 0.1), a moderately non-convex norm (p = 0.5) and a weakly non-convex norm (p = 0.9), with the number of random starting points Q = 5 and Q = 10, respectively. In the same figures the CDF curves of ℓp (0 < p < 1) norm minimization using MCCA and of the standard ℓ1 norm minimization using the CVX convex optimization software package [30] are also plotted for the sake of comparison. For each case, 500 trials have been considered, randomly changing the positions of the broken elements. The plots show that a value of Q equal to or higher than 10 is able to overcome the strong non-convexity of the ℓ0.1 norm and provides better performance than standard ℓ1 norm minimization, while for the moderately non-convex norm, i.e., p = 0.5, a smaller number of random starting points, Q = 5, is enough to obtain a satisfactory result. An analysis of the computational time, obtained using a Lenovo desktop with dual quad-core 3.6 GHz Intel i7-4790 processors and 8 GB of memory, is shown in Table 1. It should be pointed out that plain ℓp (0 < p < 1) norm minimization is equivalent to using just one random starting point, i.e., the case Q = 1 of our method, as displayed in the first three rows of the second column of Table 1. As mentioned above, for the moderately non-convex norm (p = 0.5), a value of Q = 5 is enough to overcome the non-convexity and show better performance than the standard ℓ1 norm minimization. The cost associated with this improvement is displayed in Table 1: in the case Q = 5, the average computational time of the proposed method is only around four times longer than that of classical ℓ1 norm minimization. This ratio grows to around 10 for the strongly non-convex norm (p = 0.1), which requires Q = 10 to achieve a satisfactory result. Therefore, from the perspective of accuracy and efficiency, the increase in computational burden appears to be acceptable for array diagnosis.

Table 1. Comparison of computational time for linear array using different methods (unit: second).

The goal of the next experiment is to investigate the effectiveness of the proposed method as a function of M and 2S ranging from 2 to 18.
In Figures 5-7, the equi-probability curves of the rate of success of the recovered excitations are plotted for the strongly non-convex norm (p = 0.1) and the moderately non-convex norm (p = 0.5), with the number of random starting points Q = 10 and Q = 5, respectively. In the same figures the corresponding curves of the standard ℓ1 norm minimization are also plotted for comparison. The estimation of the excitations is considered exact if the MSE is lower than −30 dB, and the rate of success (i.e., the number of trials terminated with an MSE lower than −30 dB over the total number of trials) is evaluated for each (M, 2S) pair over 100 trials. These figures demonstrate that, for a fixed number of failures S, the rate of success increases from 0.25 (purple curve) to 0.95 (yellow curve) as the number of measurements M varies over a small range, indicating the presence of a narrow phase-transition behavior in the proposed method as well. In addition, these figures clearly show that the proposed algorithm requires fewer measurements than the ℓ1 norm to reach a desired probability of success. For example, requiring a success rate of 0.95 or higher and considering the presence of no more than five failures, namely 2S = 10, by properly using the random perturbation technique a number of measurements M = 12 is sufficient for both ℓ0.1 and ℓ0.5 norm minimization with a tolerable computational burden, while ℓ1 norm minimization requires at least M = 14 measurements, confirming again the better performance of the proposed method. A large number of simulations on different kinds of arrays confirm the above observations.

As another example, let us consider a uniform planar array (UPA) composed of N_x × N_y = 9 × 9 point-like isotropic radiating elements, placed on a d_x × d_y = 0.5λ mesh. The far-field data are measured in M = 25 points using a random sampling strategy, and the AUT is affected by S = 8 faulty elements. The type of faulty sensors and the measurement noise have the same configurations as in the first example, i.e., partial failures and 35 dB additive white Gaussian noise are considered. The CDF of the MSE of the retrieved excitations obtained with the proposed random search-convex local minimizer is plotted in Figures 8-10 for the strongly non-convex norm ℓ0.1, the moderately non-convex norm ℓ0.5 and the weakly non-convex norm ℓ0.9, with the number of random starting points Q = 5 and Q = 20, respectively. In the same figures, ℓp (0 < p < 1) norm minimization using MCCA and ℓ1 norm minimization using CVX are also taken into account. For each case, 500 trials have been considered, randomly changing the positions of the broken elements. In addition, the computational time is shown in Table 2.

Figure 8. CDF of MSE of the retrieved excitations using ℓ0.1, ℓ0.5, ℓ0.9 norm minimization and the standard ℓ1 norm minimization; solid line: ℓ1 norm minimization; dashed line: ℓp (0 < p < 1) norm minimization.
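The success criterion used in these experiments can be sketched as two small helper functions (the normalization of the MSE by the true excitation energy is an assumption; the paper only states the −30 dB threshold):

```python
import numpy as np

def mse_db(x_hat, x_true):
    """MSE of the retrieved excitations in dB, normalized by the true energy."""
    return 10 * np.log10(np.sum(np.abs(x_hat - x_true) ** 2)
                         / np.sum(np.abs(x_true) ** 2))

def success_rate(mse_trials_db, threshold_db=-30.0):
    """Fraction of trials whose MSE falls below the success threshold."""
    return float(np.mean([m < threshold_db for m in mse_trials_db]))
```

A trial counts as an exact reconstruction when `mse_db` falls below −30 dB; `success_rate` then aggregates the Monte Carlo trials for each (M, 2S) pair.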

Table 2.
Comparison of computational time for planar array using different methods (unit: second).

Methods compared: ℓ1 (CVX); proposed method (p = 0.1, Q = 20); proposed method (p = 0.5, Q = 20); proposed method (p = 0.9, Q = 20).

Similar conclusions can be drawn from the simulation results for the planar array. The plots show that a value of Q equal to or higher than 5 is able to overcome the moderate non-convexity of the ℓ0.5 norm and provide better performance than the standard ℓ1 norm minimization in this scenario as well, while for the strongly non-convex norm ℓ0.1 a larger number of random starting points, Q = 20, is needed to obtain a satisfactory result. Correspondingly, according to the computational burden displayed in Table 2, if the moderately non-convex ℓ0.5 norm is used, the average computational time of the proposed method is around 1.4 s. Although this time grows to around 9.1 s for the ℓ0.1 norm as the non-convexity strengthens, it still appears acceptable in view of the significant reduction in the acquisition time of far-field sampling data compared to the matrix method.
The goal of the last experiment is to validate the effectiveness of the proposed method as a function of M, ranging from 9 to 29. In Figures 11 and 12, the probability of exact reconstruction of the recovered excitations is plotted for the strongly non-convex norm (p = 0.1) and the moderately non-convex norm (p = 0.5), with the number of random starting points Q = 20 and Q = 5, respectively. In the same figures the probability curves of ℓp (0 < p < 1) norm minimization using MCCA and of the standard ℓ1 norm minimization using CVX are also plotted for comparison. The estimation of the excitations is considered exact if the MSE is lower than −30 dB, and the rate of success (i.e., the number of trials terminated with an MSE lower than −30 dB over the total number of trials) is evaluated for each M over 300 trials. The results demonstrate that the proposed method effectively overcomes the non-convexity of ℓ0.1 and ℓ0.5 norm minimization and performs better than the standard ℓ1 norm minimization using an acceptable number of random starting points. For example, requiring a success rate of 0.95 or higher, M = 25 measurements are sufficient for both ℓ0.1 and ℓ0.5 norm minimization when the proposed random perturbation technique is applied, while ℓ1 norm minimization requires at least M = 27 measurements, and plain ℓp (0 < p < 1) norm minimization using MCCA performs even worse, requiring more than 29 measurements. These results confirm again the better performance of the proposed method. It is also worth noting that increasing the number of random starting points further does not bring any significant improvement, only a higher computational burden.

Figure 1 .
Figure 1. Geometry of the Array Under Test.

Figure 4 .
Figure 4. CDF of MSE of the retrieved excitations using the proposed method in the case Q = 10 and the standard ℓ1 norm minimization; solid line: ℓ1 norm minimization; dashed line: proposed method.

Figure 7.
Figure 7. Rate of success of the retrieved excitations versus M and 2S using the proposed method in the case p = 0.5, Q = 5; 32-element ULA; SNR = 35 dB.

Figure 9 .
Figure 9. CDF of MSE of the retrieved excitations using the proposed method in the case Q = 5 and the standard ℓ1 norm minimization; solid line: ℓ1 norm minimization; dashed line: proposed method.

Figure 10 .
Figure 10. CDF of MSE of the retrieved excitations using the proposed method in the case Q = 20 and the standard ℓ1 norm minimization; solid line: ℓ1 norm minimization; dashed line: proposed method.

Figure 11 .
Figure 11. Probability of exact reconstruction of the recovered excitations as a function of M using ℓ0.1 and ℓ0.5 norm minimization and the standard ℓ1 norm minimization; red line: ℓ1 norm minimization; green and black lines: ℓ0.1 and ℓ0.5 norm minimization.