Article

The Performance Analysis Based on SAR Sample Covariance Matrix

Institute of Environmental Engineering, ETH Zurich, Schafmattstr. 6, CH-8093 Zurich, Switzerland
Sensors 2012, 12(3), 2766-2786; https://doi.org/10.3390/s120302766
Submission received: 30 January 2012 / Revised: 15 February 2012 / Accepted: 24 February 2012 / Published: 1 March 2012
(This article belongs to the Section Remote Sensors)

Abstract

Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for their utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. In practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in a simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.

1. Introduction

Multi-channel systems of a random nature appear in a wide range of fields in the literature. For several of them, the central limit theorem applies and their random behavior can be modeled by Gaussian statistics, which are fully described by the first- and second-order moments. In the case of multi-channel SAR systems, the assumption of a zero-mean multivariate complex Gaussian distribution is frequently valid for geophysical media, which are thus fully described by the complex Hermitian covariance matrix. This is the statistical case treated in this work.

Since a SAR signal corresponds to the superposition of the fields scattered from all scatterers inside a resolution cell, physical parameter estimation may be performed by exploiting the multi-channel covariance matrix. For that, eigendecomposition theorems have been widely used in the literature, decomposing the whole covariance matrix into elementary quantities in order to provide a better physical interpretation of the data.

The eigendecomposition of the multi-channel SAR covariance matrix leads to useful information in a large range of SAR applications, including Ground Moving Target Indication (GMTI) [1], polarimetric SAR (PolSAR) [2], interferometric SAR (InSAR) [3], polarimetric interferometric SAR (PolInSAR) [4], change detection [5], target detection [6] and filtering [7,8]. In particular, the maximum eigenvalue of the covariance matrix has proved to be a key parameter in various areas. In [5] and [9], the change detection problem was elaborated using the maximum eigenvalue of the interferometric covariance matrix. The eigendecomposition of the SAR covariance matrix has also been used in SAR GMTI to calculate the probability of moving target detection in homogeneous and heterogeneous terrain. Regarding polarimetry, the eigendecomposition of the coherency matrix (which is also a covariance matrix) is the most common tool among incoherent target decomposition theorems for the interpretation of the Earth’s surface. The maximum eigenvalue of the coherency matrix is also often used in polarimetry, due to its close relationship to the dominant scattering mechanism of the scattering process [2,10,11].

Although there is a wide range of applications involving the maximum eigenvalue of the multi-channel covariance matrix, a certain lack of information may be noted in the literature concerning its statistical characterization in terms of multi-channel SAR. The statistical description can be useful to understand bias effects and to analyze estimation as well as detection problems. The estimated (sample) covariance matrix follows a Wishart distribution, which has been of interest over the years since its first derivation. In areas such as communication systems, several works using such results have been published [12–16]. The application of the Wishart distribution in remote sensing was, however, introduced considerably later [17].

The cornerstone study of the statistical description of the covariance matrix eigendecomposition in polarimetry was carried out in [18]. The majority of the analysis in [18] was performed on the basis of numerical methods. In this paper, the results of [18] are supported by addressing analytical solutions. Additionally, exact closed-form expressions are derived for the Moment Generating Function (MGF) of the maximum eigenvalue of the sample covariance matrix.

The validation of the derived expressions is performed using simulated data, demonstrating their agreement with the theory. The effect of the number of samples and of the underlying correlation scenario of the sample covariance matrix on the bias in the estimation of the maximum eigenvalue is also investigated. Finally, examples of applications in the fields of estimation and detection theory are given as well.

The next section reviews the basics of the statistical description of multi-channel SAR systems and of the covariance matrix eigendecomposition. It also includes the derived theorems presenting the statistical findings. Section 3 includes numerical examples validating the theorems as well as an analysis of their behavior. Sections 4 and 5 show their usage for the analysis of the estimation bias and for the elaboration of detection problems, respectively. Section 6 concludes the paper with discussions and directions for future work. The detailed statistical derivations can be found in the Appendix.

2. Statistical Characteristics of the Maximum Eigenvalue of the Multi-Channel Sample Covariance Matrix

2.1. Preliminaries

The individual elements of an m-dimensional multi-channel system may be organized in an m-dimensional vector k. As the elements are assumed to follow zero-mean complex Gaussian distributions, the vector k is said to follow an m-dimensional multivariate normal distribution with zero mean and true covariance matrix Σ among the vector elements, represented by k ~ 𝒩_m^C(0, Σ) [19,20]. For zero-mean Gaussian statistics the covariance matrix fully describes the data, playing a key role in several application fields. In practical situations, however, the true covariance matrix Σ is unknown and has to be estimated by its maximum likelihood estimator (MLE), the sample covariance matrix Z = (1/n) ∑_{j=1}^{n} k_j k_j†, where n is the number of estimation samples and † is the transpose-conjugate operator. In the SAR context, the number of independent samples is also called the number of looks. The elements of Z follow an m-dimensional complex Wishart distribution with n degrees of freedom and true covariance matrix Σ, represented by Z ~ 𝒲_m^C(n, Σ) and defined as [20]:

p_Z(Z) = \frac{n^{mn}\, |Z|^{\,n-m}\; \mathrm{etr}\!\left(-n\,\Sigma^{-1} Z\right)}{|\Sigma|^{\,n}\; \tilde{\Gamma}_m(n)}, \qquad \text{with} \qquad \tilde{\Gamma}_m(n) = \pi^{m(m-1)/2} \prod_{i=1}^{m} \Gamma(n-i+1)
where Γ(·) is the gamma function and etr(·) is the exponential trace of a matrix.
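As an illustration of these definitions, a realization of the sample covariance matrix can be simulated by coloring independent circular complex Gaussian draws; a minimal NumPy sketch, with an arbitrary example Σ (not one of the paper's configurations):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_covariance(Sigma, n, rng):
    """Draw n zero-mean circular complex Gaussian vectors k_j ~ CN(0, Sigma)
    and return the sample covariance Z = (1/n) sum_j k_j k_j^H."""
    m = Sigma.shape[0]
    # Coloring transform: k = L g with L L^H = Sigma and g ~ CN(0, I)
    L = np.linalg.cholesky(Sigma)
    g = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    k = L @ g
    return (k @ k.conj().T) / n

Sigma = np.array([[1.0, 0.5 + 0.2j], [0.5 - 0.2j, 0.8]])
Z = sample_covariance(Sigma, 10000, rng)
# Z is Hermitian and, for large n, close to the true covariance Sigma
```

With many looks the estimator concentrates around Σ; with few looks its eigenvalues spread out, which is the effect studied in the rest of the paper.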

The spectral theorem from linear algebra allows the decomposition of the full-rank m-dimensional covariance matrix into a set of m rank-one covariance matrices using its eigenvalues and eigenvectors. Accordingly, the decompositions of the true covariance matrix Σ = ∑_{i=1}^{m} l_i (e_i e_i†) and of its estimator Z = ∑_{i=1}^{m} λ_i (ê_i ê_i†) are given by

\Sigma = Q \begin{bmatrix} l_1 & & 0 \\ & \ddots & \\ 0 & & l_m \end{bmatrix} Q^{\dagger} \quad \text{and} \quad Z = \hat{Q} \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_m \end{bmatrix} \hat{Q}^{\dagger}, \qquad Q = [e_1,\, e_2,\, \ldots,\, e_m], \quad \hat{Q} = [\hat{e}_1,\, \hat{e}_2,\, \ldots,\, \hat{e}_m]
with their real non-negative eigenvalues l_i and λ_i, and respective complex eigenvectors e_i and ê_i, for i = {1, …, m}.
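The spectral decomposition above can be reproduced numerically with a Hermitian eigensolver; a small sketch using an illustrative 2 × 2 Σ:

```python
import numpy as np

# Spectral decomposition of a Hermitian covariance matrix:
# Sigma = sum_i l_i (e_i e_i^H). numpy's eigh returns eigenvalues ascending.
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]], dtype=complex)
l, Q = np.linalg.eigh(Sigma)      # columns of Q are the eigenvectors
l, Q = l[::-1], Q[:, ::-1]        # reorder so l[0] = l_max (descending)

# Rebuild Sigma from its rank-one terms l_i (e_i e_i^H)
rebuilt = sum(l[i] * np.outer(Q[:, i], Q[:, i].conj()) for i in range(2))
```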

2.2. Sample Maximum Eigenvalue Statistical Description

The following Theorem III concerns the derived Moment Generating Function (MGF) of the maximum eigenvalue of the sample covariance matrix Z of a multi-channel statistical system, a critical step in removing the bias of the largest eigenvalue. Theorem III, which builds on the preceding Theorems I and II, is the main result of the paper. It is detailed in the Appendix and will be used for the elaboration of illustrative estimation and detection problems in Sections 4 and 5.

Throughout the paper, |·| denotes the matrix determinant, and ⟨X⟩_n = (1/n) ∑_{i=1}^{n} X_i denotes the estimator of the random matrix X formed from a sample of size n.

Theorem I: Let k ~ 𝒩_m^C(0, Σ) be an m-dimensional complex vector whose elements follow zero-mean Gaussian distributions with associated m × m covariance matrix Σ. Let Σ have eigenvalues l_m ≤ … ≤ l_1. Then the Cumulative Distribution Function (CDF) of the maximum eigenvalue λ_max of the sample covariance matrix ⟨kk†⟩_n, under the assumption m ≤ n, is given by

F_{\lambda_{\max}}(x) = \mathcal{S}\,|\Psi(x)|, \qquad \text{with constant term} \qquad \mathcal{S} = \frac{\pi^{m(m-1)}\, n^{m(2n-m+1)/2}}{\tilde{\Gamma}_m(m)\,\tilde{\Gamma}_m(n)\, \prod_{k=1}^{m-1} k^{\,m-k}\, \prod_{i=1}^{m} l_i^{\,n}\, \prod_{i<j}^{m} \left(\frac{1}{l_j} - \frac{1}{l_i}\right)}
where Ψ(x) is an m × m matrix with (i, j)th element Ψ(x)_{i,j} = γ(n + 1 − j, xn/l_i) · (n/l_i)^{−(n+1−j)}, and γ(·,·) is the incomplete gamma function (Equation (2.42) in [21]).

Proof: The closed-form expression of the CDF of the largest eigenvalue of complex Wishart matrices was given in [14]. Here it is rewritten in the form of the SAR covariance matrix after a minor adaptation; see also Appendix A.
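Short of implementing Equation (3) directly, the CDF of λ_max can be cross-checked by Monte Carlo simulation; a hedged sketch with an arbitrary diagonal Σ, so the true eigenvalues are l_1 = 1.8 and l_2 = 0.2:

```python
import numpy as np

rng = np.random.default_rng(1)

def lambda_max_samples(Sigma, n, trials, rng):
    """Monte Carlo draws of the maximum eigenvalue of the n-look
    sample covariance matrix, for checking the analytical CDF."""
    m = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    out = np.empty(trials)
    for t in range(trials):
        g = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
        k = L @ g
        out[t] = np.linalg.eigvalsh((k @ k.conj().T) / n)[-1]
    return out

Sigma = np.diag([1.8, 0.2]).astype(complex)
lam = lambda_max_samples(Sigma, n=16, trials=2000, rng=rng)
ecdf_at = lambda x: np.mean(lam <= x)   # empirical CDF, to compare with F_lambda_max
```

The empirical curve `ecdf_at` can be plotted against the analytical F_{λmax}(x) once the determinant expression is implemented.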

Theorem II: Let k ~ 𝒩_m^C(0, Σ) be an m-dimensional complex vector whose elements follow zero-mean Gaussian distributions with associated m × m covariance matrix Σ. Let Σ have eigenvalues l_m ≤ … ≤ l_1. Then the Probability Density Function (PDF) of the maximum eigenvalue λ_max of the sample covariance matrix ⟨kk†⟩_n, under the assumption m ≤ n, is given by

p_{\lambda_{\max}}(x) = \mathcal{S}\,|\Psi(x)|\; \mathrm{tr}\!\left[\Psi(x)^{-1}\, \Omega(x)\right]
where Ω(x) is an m × m matrix with (i, j)th element Ω(x)_{i,j} = exp(−nx/l_i) x^{n−j}, and Ψ(x) and 𝒮 are defined in Equation (3).

Proof: Equation (4) is obtained by differentiating Equation (3) with respect to x, using (Equation (9) in [22])

\frac{d}{dt}\,|X(t)| = |X(t)|\; \mathrm{tr}\!\left(X(t)^{-1}\, \frac{d}{dt}X(t)\right).
Note that when the true covariance matrix Σ is diagonal, the PDF of the largest eigenvalue reduces to the result of Ermolaev and Rodyushkin [23].
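The determinant-derivative identity used in the proof can be verified numerically against a central finite difference; a small sketch with an arbitrary parametrized matrix X(t):

```python
import numpy as np

# Numerical check of d/dt |X(t)| = |X(t)| tr(X(t)^{-1} dX/dt)
def X(t):
    return np.array([[2.0 + t, 0.5], [0.3, 1.0 + t * t]])

def dX(t):
    return np.array([[1.0, 0.0], [0.0, 2.0 * t]])

t0, h = 0.7, 1e-6
# Left side: central finite difference of the determinant
lhs = (np.linalg.det(X(t0 + h)) - np.linalg.det(X(t0 - h))) / (2 * h)
# Right side: the trace identity
rhs = np.linalg.det(X(t0)) * np.trace(np.linalg.inv(X(t0)) @ dX(t0))
```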

Theorem III: Let k ~ 𝒩_m^C(0, Σ) be an m-dimensional complex vector whose elements follow zero-mean Gaussian distributions with associated m × m covariance matrix Σ. Let Σ have eigenvalues l_m ≤ … ≤ l_1. Then, for any positive integer s, the sth moment of the maximum eigenvalue λ_max of the sample covariance matrix ⟨kk†⟩_n, under the assumption m ≤ n, is given by

E\left(\lambda_{\max}^{s}\right) = \mathcal{S} \sum_{i,j=1}^{m} (-1)^{i+j} \sum_{\pi_s \in \pi_{\mathrm{sub}} \subset S_m} \mathrm{sgn}(\pi_s) \prod_{k=1}^{m-1} \frac{1}{n+1-\pi_s(k)} \left(\frac{n}{l_k}\right)^{n-\pi_s(k)-2} a_k^{\,u_k}\, (b+A)^{-v-M}\, \Gamma(v+M) \times F_A\!\left(v+M;\; u_1-\lambda_1, \ldots, u_{m-1}-\lambda_{m-1};\; 2u_1, \ldots, 2u_{m-1};\; \frac{a_1}{b+A}, \ldots, \frac{a_{m-1}}{b+A}\right), \qquad k \neq j,\; \pi_s(j) = i,\; \pi_s \in \pi_{\mathrm{sub}}
where 𝒮 indicates the constant term as in Theorem I and F_A(···) is the hypergeometric function of several variables, with a_k = n/l_k, u_k = (2 + n − π_sub(k))/2, v = 1 + s + n − i + ∑_{k=1}^{m−1} (n − π_sub(k))/2, b = n/l_j + ∑_{k=1}^{m−1} n/(2l_k), λ_k = (n − π_sub(k))/2, M = ∑_{k=1}^{m−1} u_k and A = (1/2) ∑_{k=1}^{m−1} a_k. Here, the inner sum is computed over the (m − 1)! permutations π_sub. S_m denotes the set of all m! permutations of the set S = {1, 2, …, m}, and sgn(π_s) denotes the sign of the permutation π_s: +1 if π_s is an even permutation and −1 if it is odd.

Proof: See Appendix B. In the Appendix, the moment of the sample maximum eigenvalue is also given in a form friendlier for programming, obtained by splitting the hypergeometric function of several variables into a sum over its variables.

2.3. Dependence of the Covariance Matrix Eigenvalues

Before starting the following analysis, it is useful to illustrate the dependence of the eigenvalues on the covariance matrix parameters, which is necessary for the understanding of the topics addressed later. For that, a two-dimensional covariance matrix is used, originating from the expected value of the outer product of the vector k = [k_1 k_2]^T, i.e.,

\Sigma = E(\mathbf{k}\mathbf{k}^{\dagger}) = \begin{bmatrix} \sigma_1^2 & \sigma_1 \sigma_2\, \rho\, e^{j\phi} \\ \sigma_1 \sigma_2\, \rho\, e^{-j\phi} & \sigma_2^2 \end{bmatrix}
where σ_1², σ_2² are the variances (powers) of k_1 and k_2, respectively, ρ is the absolute value and ϕ the phase of their complex correlation coefficient, and T denotes the transpose.

The eigenvalues of the true covariance matrix can be shown to be given by [5]:

l_{1,2} = \frac{1}{2}\left[\, b + \frac{1}{b} \pm \sqrt{b^2 + \frac{1}{b^2} - 2 + 4\rho^2}\,\right], \qquad \sigma_1^2 = b^2 \sigma_2^2.
The behavior of l_1 and l_2 is presented in Figure 1 as a function of the normalized power ratio b and the correlation ρ. Note their mutual dependence, which makes clear that b and ρ directly determine the behavior of both eigenvalues.
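The closed form above can be checked against a numerical eigendecomposition; the sketch below assumes b = σ1/σ2 (i.e., σ1² = b²σ2²) and eigenvalues normalized by σ1σ2, which is one reading of the normalization:

```python
import numpy as np

def eigs_closed_form(b, rho):
    """Closed-form normalized eigenvalues l_{1,2} of the 2x2 covariance."""
    disc = np.sqrt(b**2 + 1.0 / b**2 - 2.0 + 4.0 * rho**2)
    return 0.5 * (b + 1.0 / b + disc), 0.5 * (b + 1.0 / b - disc)

def eigs_numeric(b, rho, phi=0.3):
    """Eigenvalues of the explicit 2x2 matrix, normalized by sigma1*sigma2."""
    s2 = 1.0
    s1 = b * s2
    Sigma = np.array([[s1**2, s1 * s2 * rho * np.exp(1j * phi)],
                      [s1 * s2 * rho * np.exp(-1j * phi), s2**2]])
    l = np.linalg.eigvalsh(Sigma) / (s1 * s2)   # ascending order
    return l[1], l[0]                           # (l1, l2), descending

b, rho = 1.5, 0.4
cf = eigs_closed_form(b, rho)
num = eigs_numeric(b, rho)
```

At ρ = 0 the formula correctly reduces to l_1 = b and l_2 = 1/b.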

3. Validation and Analysis of the Theoretical Expressions

This section aims to validate the theorems of the previous section using simulated data. The simulated data have been generated using different multi-dimensional configurations, where the correlation between channels has been generated using the well-known Mahalanobis transformation [24].

Figure 2(a) shows the comparison of Equation (4) with simulations. The theoretical PDF curves clearly agree with the histograms obtained from the simulated data. As expected, the PDFs become narrower with increasing n, indicating less variance around the true value of the maximum eigenvalue l_1 = 1.8. This behavior can be better seen in Figure 2(b), where the distribution of λ_max as a function of the number of estimation samples n is presented. Note also that the expected value of the distributions appears to change for different n.

Figure 3(a) shows the variation of the histogram mean as a function of n for a two-dimensional case with fixed correlation ρ = 0.2. In Figure 3(a), the theoretical expected value of λ_max has also been overplotted for comparison. The curves match well, which validates Theorem III. The same has been carried out for the second-order moment, i.e., the variance of λ_max, and is presented in Figure 3(b). The agreement between the theoretical variance of λ_max and the variance of the simulated data can again be confirmed. Also, as already observed in Figure 2(a), the lower the number of samples n, the higher the variance of λ_max.
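The convergence behavior described above can be reproduced with a small Monte Carlo experiment; a sketch with an arbitrary low-correlation 2 × 2 Σ (true l_max = 1.2):

```python
import numpy as np

rng = np.random.default_rng(2)

def lam_max_stats(Sigma, n, trials, rng):
    """Empirical mean and variance of the sample maximum eigenvalue."""
    m = Sigma.shape[0]
    L = np.linalg.cholesky(Sigma)
    vals = np.empty(trials)
    for t in range(trials):
        g = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
        k = L @ g
        vals[t] = np.linalg.eigvalsh((k @ k.conj().T) / n)[-1]
    return vals.mean(), vals.var()

Sigma = np.array([[1.0, 0.2], [0.2, 1.0]], dtype=complex)  # rho = 0.2, l_max = 1.2
mean4, var4 = lam_max_stats(Sigma, n=4, trials=3000, rng=rng)
mean64, var64 = lam_max_stats(Sigma, n=64, trials=3000, rng=rng)
# With more looks, the variance shrinks and the mean approaches l_max = 1.2
```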

The impact of the correlation and of the number of samples on the behavior of the third- (skewness) and fourth- (kurtosis) order moments of the maximum sample eigenvalue λ_max is presented in Figure 4. Skewness is a measure of how symmetrical the distribution is with respect to its mean. Figure 4(a,b) indicates that the skewness of λ_max converges to zero for increasing n and increasing ρ, expressing the tendency toward a symmetrical distribution in those cases.

Kurtosis is a measure of the peakedness of the distribution. Figure 4(c,d) shows that for increasing n and ρ, the kurtosis of the λ_max distribution tends to three, which corresponds to the kurtosis of the Gaussian distribution.

4. Estimation Bias

Figure 5 shows the expected value (first-order moment) of the maximum sample eigenvalue λ_max as a function of the true eigenvalue l_max for a fixed number of samples n = 3 and k = 1. The variation of the underlying correlation ρ of the true covariance matrix, which changes the value of l_max, has been indicated as well in a color code. Observe that the higher the correlation is, the higher l_max becomes.

The bias of the estimator of a certain parameter is defined as the difference between the expected value of the estimator and the true value of the parameter. In the present case, the bias is hence given by E[λ_max] − l_max. Notice thus from Figure 5 that the bias of λ_max becomes very strong for low values of l_max (or, equivalently, low values of the correlation ρ).

The effect is emphasized in Figure 6(a,b), where the bias of λ_max is presented as a function of l_max and l_2, for n = 3 and n = 16, respectively. The values of the underlying correlation ρ have also been indicated. The bias is small either when the correlation is high or when the number of samples is sufficiently large. When the correlation between channels is low, the bias becomes very significant and a large number of samples is necessary to decrease the estimation bias.
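The dependence of the bias on the correlation can likewise be illustrated by simulation; a sketch with unit channel powers, where the true maximum eigenvalue is l_max = 1 + ρ:

```python
import numpy as np

rng = np.random.default_rng(3)

def bias_lam_max(rho, n, trials, rng):
    """Empirical bias E[lambda_max] - l_max for a 2x2 matrix with
    unit powers and correlation rho (true l_max = 1 + rho)."""
    Sigma = np.array([[1.0, rho], [rho, 1.0]], dtype=complex)
    L = np.linalg.cholesky(Sigma)
    acc = 0.0
    for _ in range(trials):
        g = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
        k = L @ g
        acc += np.linalg.eigvalsh((k @ k.conj().T) / n)[-1]
    return acc / trials - (1.0 + rho)

low = bias_lam_max(rho=0.1, n=3, trials=4000, rng=rng)   # weakly correlated
high = bias_lam_max(rho=0.9, n=3, trials=4000, rng=rng)  # strongly correlated
# The bias is much larger for weakly correlated channels
```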

5. Analysis of Detection Problems

Detection theory provides means to quantify the ability of a procedure to detect a parameter (or signal) immersed in a noise environment. A decision has to be taken: yes, there is a signal, or no, there is none. Such a decision is usually taken by applying a decision threshold. Noise, of course, contributes negatively, inducing wrong decisions. For the quantification of detection accuracy, quantities such as the probability of detection (PD) and the probability of false alarm (PFA) are usually defined, allowing a detection problem analysis.

The maximum eigenvalue of covariance matrices has frequently been used in the literature for the elaboration of certain detection problems. In the Ground Moving Target Indication (GMTI) area, for instance, the signal-to-clutter-plus-noise ratio has been studied through the distribution of the sample eigenvalues, which allowed the implementation of a constant false alarm rate detector [9]. The eigenvalues of the covariance matrix of a SAR image pair have also been used to formulate a problem in the change detection area [5]. Another example is the determination of the existence of just a single dominant scattering mechanism in a polarimetric SAR acquisition, which has also been evaluated using a threshold on the maximum sample eigenvalue of the polarimetric covariance matrix [11]. Target detection and polarimetric filtering represent other fields of application in which the maximum eigenvalue can be used.

Having the closed-form expressions of the PDF (Equation (4)) and/or CDF (Equation (3)) of the maximum sample eigenvalue of the covariance matrix, the PD and PFA obtained when applying a threshold to λ_max can be analytically computed, allowing a complete detection problem analysis.

5.1. Detection of a Dominant Maximum Eigenvalue

Since several detection problems rely in fact on deciding between a dominant and a non-dominant l_max by applying a threshold to λ_max, the following two-hypothesis problem is elaborated

H_0\ (l_{\max}\ \text{is dominant}):\quad l_{\max} = l_1 \ \text{and}\ l_i = 0
H_1\ (l_{\max}\ \text{is not dominant}):\quad l_{\max} = l_1 = l_i
for i = {2, 3, . . ., m}.

In one application area, H_0 may mean that just a single scattering mechanism is present inside a resolution cell of polarimetric SAR data; in another, H_0 can mean that a change has happened or that a moving target is present in the scene.

Since l_max is not directly accessible, a threshold 𝒯 has to be applied to λ_max, originating the following PD and PFA

\mathrm{PD} = p[\text{accepting } H_0 \mid H_0 \text{ is true}] = p[\lambda_{\max} > \mathcal{T} \mid H_0 \text{ is true}] = \int_{\mathcal{T}}^{\infty} p_{\lambda_{\max}}(x;\, H_0)\, dx
\mathrm{PFA} = p[\text{accepting } H_0 \mid H_1 \text{ is true}] = p[\lambda_{\max} > \mathcal{T} \mid H_1 \text{ is true}] = \int_{\mathcal{T}}^{\infty} p_{\lambda_{\max}}(x;\, H_1)\, dx
as a function of the decision threshold 𝒯.

Hence, for a two-dimensional system configuration, the distribution p_{λmax}(x; H_0) is the distribution of λ_max when l_1 = 2 and l_2 = 0. For a three-dimensional system, p_{λmax}(x; H_0) is determined by evaluating the distribution of λ_max when l_1 = 3 and l_2 = l_3 = 0. Higher-dimensional system configurations follow the same rule.

On the other hand, the distribution pλmax (x; H1) is given by l1 = l2 = 1 and l1 = l2 = l3 = 1 for a two- and three-dimensional system, respectively.

For each value of 𝒯, there exists a pair (PFA, PD). The curves of PD versus PFA are called Receiver Operating Characteristic (ROC) curves and express the detection performance. The more a ROC curve bends toward the upper left, the better the detection performance, since a higher PD is achieved at a lower PFA.
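An empirical ROC curve for the two-hypothesis problem above can be traced by thresholding Monte Carlo draws of λ_max; a sketch for m = 2 and n = 6 (H_0: eigenvalues {2, 0}; H_1: {1, 1}):

```python
import numpy as np

rng = np.random.default_rng(4)

def lam_max_draws(eigs, n, trials, rng):
    """lambda_max samples when the true covariance is diag(eigs)."""
    m = len(eigs)
    s = np.sqrt(np.asarray(eigs, dtype=float))
    vals = np.empty(trials)
    for t in range(trials):
        g = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
        k = s[:, None] * g                 # scale each channel by sqrt(l_i)
        vals[t] = np.linalg.eigvalsh((k @ k.conj().T) / n)[-1]
    return vals

n, trials = 6, 3000
h0 = lam_max_draws([2.0, 0.0], n, trials, rng)   # H0: l_max dominant
h1 = lam_max_draws([1.0, 1.0], n, trials, rng)   # H1: equal eigenvalues

thresholds = np.linspace(0.0, 4.0, 81)
pd = np.array([(h0 > T).mean() for T in thresholds])
pfa = np.array([(h1 > T).mean() for T in thresholds])
# The (pfa, pd) pairs trace an empirical ROC curve
```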

Figure 7(a) shows the detection performance as a function of the dimension of the multichannel system, i.e., m = 2 (e.g., interferometric), m = 3 (e.g., polarimetric) and m = 6 (e.g., polarimetric-interferometric system). For all cases, with a fixed number of samples n = 6, one can see that the detection performance improves significantly as the number of SAR images increases. Figure 7(b) shows the ROC curves for m = {2, 3} and for different numbers of samples n. It can be seen that for both multidimensional system configurations, increasing the number of samples improves the detection performance.

In order to assess how correlation affects the detection performance, the detection problem is reformulated as follows

H_0\ (l_{\max}\ \text{is dominant}):\quad l_{\max} = l_1 > l_2 \geq l_3 \geq \cdots \geq l_m \geq 0
H_1\ (l_{\max}\ \text{is not dominant}):\quad l_{\max} = l_1 = l_i, \qquad i = \{2, 3, \ldots, m\}.
In this way, whenever the correlation is greater than zero the eigenvalues have different values, and one is larger than the others. For the two-dimensional case, for instance, p_{λmax}(x; H_0) is the distribution of λ_max when l_1 > l_2, which changes for different values of the correlation ρ. Figure 7(c) presents the ROC curves for this case. For small correlation between channels, the detector suffers from a significant false-alarm rate. As expected, when the correlation increases the ROC curve bends toward the upper left, indicating better detection performance. The limit is reached when ρ = 1, meaning that l_1 = 2 and l_2 = 0, which corresponds to the case of the previous detection problem.

5.2. Target Detection Using Polarimetric Matched Filter

In this section, the application of the expressions to the detection of specific targets using the Polarimetric Matched Filter (PMF) concept is illustrated. For that, a short review of the PMF is required [6,7].

In a polarimetric acquisition, the dimension of the system corresponds to the number of channels, which is in general four, but three in the backscattering reciprocal case. The measured vector k is hence three- or four-dimensional. The elements of the vector k may be weighted in a given way in order to satisfy a certain condition. The weighting can be accounted for by making y = h†k, where h is a complex vector of the same dimension as k. A condition usually sought in the literature is the maximization of the quadratic detector |y|² (Equation (59) in [6]). The optimal weighting vector h for the detection of a distributed target with target vector k depends on the target characteristics. Accordingly, the expected value of |y|² is given by

E\{|y|^2\} = E\{h^{\dagger}\mathbf{k}\,(h^{\dagger}\mathbf{k})^{\dagger}\} = h^{\dagger}\, \Sigma_t\, h
where Σ_t = E{kk†} is the target covariance matrix. By the Rayleigh quotient (Theorem 15.91 in [25]), for any m-dimensional complex vector x and a given m × m Hermitian matrix A, x†Ax ≤ ||x||² a_max, where a_max is the maximum eigenvalue of A. Equality holds if x is along the direction of the corresponding eigenvector U_max (||U_max|| = 1). Hence, for a given distributed target with covariance matrix Σ_t, the optimum weighting vector h is given by the eigenvector of the maximum eigenvalue of Σ_t.
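The Rayleigh-quotient argument can be illustrated numerically: the quadratic form h†Σ_t h over unit vectors is maximized by the eigenvector of the largest eigenvalue. A sketch with an arbitrary Hermitian Σ_t:

```python
import numpy as np

rng = np.random.default_rng(5)

# Arbitrary Hermitian "target" covariance matrix for illustration
Sigma_t = np.array([[1.0, 0.0, 0.4],
                    [0.0, 0.5, 0.0],
                    [0.4, 0.0, 0.8]], dtype=complex)
l, U = np.linalg.eigh(Sigma_t)
h_opt = U[:, -1]                              # eigenvector of the max eigenvalue
q_opt = (h_opt.conj() @ Sigma_t @ h_opt).real  # equals l[-1]

# Any other unit vector yields a smaller (or equal) quadratic form
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
x /= np.linalg.norm(x)
q_rand = (x.conj() @ Sigma_t @ x).real
```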

The target vector k is a random sample and alone has no physical meaning. Therefore, multi-look processing is usually performed, making, for n samples (or looks),

\bar{y}^2 = \sum_{i=1}^{n} |h^{\dagger} \mathbf{k}_i|^2.
The target is assumed to be immersed in polarization-independent clutter. The detection procedure is thus evaluated by choosing h as the eigenvector corresponding to the maximum eigenvalue of the covariance matrix Σ_t of the target to be detected, and applying a detection threshold ȳ² > 𝒯.

In the presence of the target, ȳ² is maximum, given by ȳ² = ||ê||² λ_max, where ê is the eigenvector of λ_max. Hence, assuming without loss of generality that ê has unit length (||ê||² = 1), detection performance analysis can be carried out using the derivations given in this work, as done in the previous section, where a threshold was used to detect a dominant maximum sample eigenvalue (λ_max = ȳ² > 𝒯).

Note that the probabilities of detection and false alarm defined in the previous section can be more simply evaluated as

\mathrm{PD} = \int_{\mathcal{T}}^{\infty} p_{\lambda_{\max}}(x;\, H_0)\, dx = 1 - \int_{0}^{\mathcal{T}} p_{\lambda_{\max}}(x;\, H_0)\, dx = 1 - F_{\lambda_{\max}}(\mathcal{T};\, H_0)
\mathrm{PFA} = 1 - F_{\lambda_{\max}}(\mathcal{T};\, H_1).

A three-dimensional polarimetric target detection problem was formulated by making k = [S_HH S_HV S_VV]^T, where S_i is the polarimetric scattering matrix element of channel i, and using the target covariance matrix structure

\Sigma_t = S_{HH} \begin{bmatrix} 1 & 0 & \rho\sqrt{\gamma} \\ 0 & 2\epsilon & 0 \\ \rho^{*}\sqrt{\gamma} & 0 & \gamma \end{bmatrix}
where ɛ = E{|S_HV|²}/E{|S_HH|²}, γ = E{|S_VV|²}/E{|S_HH|²} and ρ is the complex correlation coefficient between S_HH and S_VV [6,26]. Figure 8 shows the ROC curves for different numbers of samples and for three different kinds of targets. Figure 8(a) corresponds to an azimuthally symmetric target whose covariance matrix has parameters ɛ = 1, γ = 0.5 and ρ = 0. Figure 8(c) corresponds to a reflection-symmetric target with parameters ɛ = 1, γ = 0.8 and ρ = 0.8, while Figure 8(b) corresponds to a reflection-symmetric target with parameters ɛ = 0, γ = 0.8 and ρ = 0.8. For all three cases, S_HH = 1 and the PFA was evaluated using polarization-independent clutter with ɛ = 1, γ = 1 and ρ = 0. Note that the curves vary not just with the number of samples n used but also differ between targets with different Σ_t. This means that some types of targets are easier to detect than others when using the quadratic detector described here and when the clutter is polarization independent. When the target covariance matrix is similar to that of the clutter, the detection performance weakens (Figure 8(a)). On the other hand, when the covariance matrix of the target is significantly different from that of the clutter, the detection performance improves (Figure 8(c)).
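The covariance structure above can be assembled directly from the (ɛ, γ, ρ) parameters; the sketch below assumes the off-diagonal term is ρ√γ, which is one reading of the printed matrix:

```python
import numpy as np

def target_cov(eps, gamma, rho, s_hh=1.0):
    """Target covariance matrix in the (eps, gamma, rho) parametrization,
    assuming the HH-VV off-diagonal term is rho*sqrt(gamma)."""
    return s_hh * np.array([
        [1.0,                            0.0,       rho * np.sqrt(gamma)],
        [0.0,                            2.0 * eps, 0.0],
        [np.conj(rho) * np.sqrt(gamma),  0.0,       gamma],
    ], dtype=complex)

clutter = target_cov(1.0, 1.0, 0.0)   # polarization-independent clutter
target = target_cov(1.0, 0.8, 0.8)    # reflection-symmetric target example
l_t = np.linalg.eigvalsh(target)      # eigenvalues, ascending
```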

6. Conclusions and Discussion

In this paper, an in-depth statistical analysis of the maximum eigenvalue of the eigendecomposition of the sample covariance matrix in terms of SAR applications is presented. The proposed analysis is supported by simulation results via several examples. The results are based on exact closed-form expressions of the PDF, CDF and MGF. In this study, existing density functions of the sample maximum eigenvalue were extended and/or applied to multi-channel SAR systems in order to obtain simple expressions of the sample eigenvalues, opening the way to fruitful applications. From these closed-form expressions, it has been possible to develop new algorithms for the unbiased calculation of parameters extracted from the multi-channel SAR covariance matrix. In addition, closed-form expressions were developed for the MGF of the sample maximum eigenvalue, which can be critical for bias removal and detection performance analysis. These new closed-form expressions of the MGF can also be of interest for other application areas such as MIMO (Multiple-Input Multiple-Output) systems. Apart from the estimation theory analysis including the MGF, the detection problem of the sample maximum eigenvalue has also been discussed.

References

  1. Sikaneta, I.; Gierull, C.; Chouinard, J.Y. Metrics for SAR-GMTI Based on Eigen-Decomposition of the Sample Covariance Matrix. Proceedings of the 2003 International Radar Conference, Adelaide, Australia, 3–5 September 2003; pp. 442–447.
  2. Hajnsek, I.; Pottier, E.; Cloude, S.R. Inversion of surface parameters from polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 2003, 41, 727–744.
  3. Li, Z.; Bao, Z.; Li, H.; Liao, G. Image autocoregistration and InSAR interferogram estimation using joint subspace projection. IEEE Trans. Geosci. Remote Sens. 2006, 44, 288–297.
  4. Erten, E.; Reigber, A.; Ferro-Famil, L.; Hellwich, O. A new coherent similarity measure for temporal multichannel scene characterization. IEEE Trans. Geosci. Remote Sens. 2011, 50, 1–13.
  5. Zandoná-Schneider, R.; Fernandes, D. Entropy Concept for Change Detection in Multitemporal SAR Images. Proceedings of EUSAR, Köln, Germany, 4–6 June 2002.
  6. Novak, L.M.; Sechtin, M.B.; Cardullo, M.J. Studies of target detection algorithms that use polarimetric radar data. IEEE Trans. Aerosp. Electron. Syst. 1989, 25, 150–165.
  7. Liu, G.; Huang, S.; Torre, A.; Rubertone, F. The multilook polarimetric whitening filter (MPWF) for intensity speckle reduction in polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1016–1020.
  8. Li, Z.; Bao, Z.; Suo, Z. A joint image coregistration, phase noise suppression, and phase unwrapping method based on subspace projection for multibaseline InSAR systems. IEEE Trans. Geosci. Remote Sens. 2007, 45, 584–591.
  9. Nadarajah, S. Comments on "Eigendecomposition of the multi-channel covariance matrix with applications to SAR-GMTI". Signal Process. 2007, 87, 1534–1536.
  10. Lee, J.S.; Grunes, M.R.; Schuler, D.L.; Pottier, E.; Ferro-Famil, L. Scattering-model-based speckle filtering of polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 2006, 44, 176–187.
  11. Touzi, R. Target scattering decomposition in terms of roll-invariant target parameters. IEEE Trans. Geosci. Remote Sens. 2007, 45, 73–84.
  12. Khatri, C.G. Non-central distributions of ith largest characteristic roots of three matrices concerning complex multivariate normal populations. Ann. Inst. Stat. Math. 1969, 21, 23–32.
  13. Hirakawa, F. Some distributions of the latent roots of a complex Wishart matrix variate. Ann. Inst. Stat. Math. 1975, 27, 357–363.
  14. Kang, M.; Alouini, M.S. Largest eigenvalue of complex Wishart matrices and performance analysis of MIMO MRC systems. IEEE J. Sel. Areas Commun. 2003, 21, 418–426.
  15. McKay, M.R.; Grant, A.J.; Collings, I.B. Performance analysis of MIMO-MRC in double-correlated Rayleigh environments. IEEE Trans. Commun. 2007, 55, 497–507.
  16. McKay, M.R. Random Matrix Theory Analysis of Multiple Antenna Communication Systems. Ph.D. Thesis, The University of Sydney, Sydney, Australia, 2006.
  17. Lee, J.S.; Hoppel, K.W.; Mango, S.A.; Miller, A.R. Intensity and phase statistics of multilook polarimetric and interferometric SAR imagery. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1017–1028.
  18. Lopez-Martinez, C.; Pottier, E.; Cloude, S.R. Statistical assessment of eigenvector-based target decomposition theorems in radar polarimetry. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2058–2074.
  19. Muirhead, R.J. Aspects of Multivariate Statistical Theory; John Wiley & Sons: New York, NY, USA, 1982.
  20. Ferro-Famil, L. Télédetection multi-fréquentielle et multi-temporelle d’environnements naturels à partir de données SAR polarimétriques. Ph.D. Thesis, Université de Nantes, Nantes, France, 2000.
  21. Andrews, L.C. Special Functions of Mathematics for Engineers; McGraw-Hill Publishing Co.: New York, NY, USA, 1992.
  22. Golberg, M.A. The derivative of a determinant. Am. Math. Monthly 1972, 79, 1124–1126.
  23. Ermolaev, V.T.; Rodyushkin, K.V. The distribution function of the maximum eigenvalue of a sample correlation matrix of internal noise of antenna-array elements. Radiophys. Quantum Electron. 1999, 42, 439–444.
  24. Peebles, P.Z. Probability, Random Variables, and Random Signal Principles; McGraw-Hill College, 1993.
  25. Gradshteyn, I.S.; Ryzhik, I.M. Table of Integrals, Series, and Products; Academic Press: Burlington, MA, USA, 2007.
  26. Lee, J.-S.; Pottier, E. Polarimetric Radar Imaging: From Basics to Applications; CRC Press: Boca Raton, FL, USA, 2009.
  27. Chiani, M.; Win, M.Z.; Zanella, A. On the capacity of spatially correlated MIMO Rayleigh-fading channels. IEEE Trans. Inform. Theory 2003, 49, 2363–2371.
  28. Mestre, X.; Lagunas, M.A. Finite sample size effect on minimum variance beamformers: Optimum diagonal loading factor for large arrays. IEEE Trans. Signal Process. 2006, 54, 69–82.

Appendix

This Appendix starts by recalling some basic results from linear algebra and statistical theory, which are then used to prove Theorem III proposed in this manuscript.

Definition 1 (Theorem A3 in [19]): The determinant of a square m × m matrix A is

$$|A| = \sum_{\pi} \operatorname{sgn}(\pi)\, a_{1k_1}\, a_{2k_2} \cdots a_{mk_m} = \sum_{\pi} \operatorname{sgn}(\pi) \prod_{i=1}^{m} a_{ik_i}$$
where ∑π denotes the summation over all m! permutations, and sgn(π) denotes the sign of the permutation, determined by (−1)^π, where π is the permutation symbol. Notice that the determinant can be expanded as a combination of the determinants of its sub-matrices, and that permuting the rows or columns of a matrix changes the sign of the determinant according to the permutation.
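The permutation expansion in Definition 1 is easy to check numerically. The following sketch (the helper names `sign` and `det_leibniz` are ours, not from the paper) sums over all m! permutations and also illustrates the sign change under a row permutation:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of 0-based indices: a cycle
    of length L contributes (-1)**(L-1)."""
    sgn, seen = 1, set()
    for start in range(len(perm)):
        if start in seen:
            continue
        length, k = 0, start
        while k not in seen:
            seen.add(k)
            k = perm[k]
            length += 1
        sgn *= (-1) ** (length - 1)
    return sgn

def det_leibniz(a):
    """Determinant of a square matrix via the permutation expansion."""
    m = len(a)
    total = 0.0
    for perm in permutations(range(m)):
        prod = sign(perm)
        for i in range(m):
            prod *= a[i][perm[i]]
        total += prod
    return total

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(det_leibniz(A))                    # expansion over 3! = 6 permutations
print(det_leibniz([A[1], A[0], A[2]]))   # one row swap flips the sign
```

This brute-force expansion is O(m!·m), so it is only practical for the small m (2 or 3) relevant here.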

Definition 2 (Corollary 2 in [27]): Given two arbitrary m × m matrices Φ(x) and Ψ(x) with ijth elements Φi(xj) and Ψi(xj), and an arbitrary function ξ(·), the following identity holds:

$$\int_{\mathcal{D}} |\Phi(\mathbf{x})|\,|\Psi(\mathbf{x})| \prod_{k=1}^{m} \xi(x_k)\, dx_1\, dx_2 \cdots dx_m = \left| \left\{ \int_a^b \Phi_i(x)\, \Psi_j(x)\, \xi(x)\, dx \right\}_{i,j=1}^{m} \right|$$
where the multiple integral is over the domain 𝒟 = {axm ≤ · · · ≤ x2x1b}.
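Definition 2 can be verified numerically for m = 2. The choice Φ_i(x) = Ψ_i(x) = x^(i−1) and ξ(x) = 1 on [a, b] = [0, 1] below is ours, purely for illustration; with Ψ = Φ both determinants equal (x2 − x1):

```python
# LHS: midpoint-rule integral over the ordered domain {0 <= x2 <= x1 <= 1}
N = 800
h = 1.0 / N
lhs = 0.0
for p in range(N):
    x1 = (p + 0.5) * h
    M = max(1, p)          # subdivide the inner interval [0, x1]
    hi = x1 / M
    inner = 0.0
    for q in range(M):
        x2 = (q + 0.5) * hi
        d = x2 - x1        # |Phi(x)| = Phi_1(x1) Phi_2(x2) - Phi_1(x2) Phi_2(x1)
        inner += d * d * hi
    lhs += inner * h

# RHS: determinant of the matrix of single integrals
# int_0^1 x**(i-1) x**(j-1) dx = 1/(i+j-1)
g = [[1.0, 1.0 / 2.0], [1.0 / 2.0, 1.0 / 3.0]]
rhs = g[0][0] * g[1][1] - g[0][1] * g[1][0]
print(lhs, rhs)            # both close to 1/12
```

The agreement (both sides equal 1/12 for this choice) reflects that the identity holds over the *ordered* domain 𝒟 without the m! factor that appears in the unordered version.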

Proof of Theorem I

In the case m ≤ n, the sample covariance matrix has full rank and follows a central Wishart distribution; the reader is referred to [28] for an interesting treatment of the case n ≤ m. In this case, the joint PDF of the real ordered sample eigenvalues ∞ ≥ λ1 ≥ . . . ≥ λm ≥ 0 of Z is given by Equation (36) in [18]

$$p_{\Xi}(\Xi) = \frac{\pi^{m(m-1)}\, n^{m(2n-m+1)/2}}{\tilde{\Gamma}_m(m)\, \tilde{\Gamma}_m(n)} \left| \exp\left(-\frac{n\lambda_j}{l_i}\right) \right|_{i,j=1}^{m}\, \prod_{k=1}^{m-1} k^{m-k}\, \frac{\prod_{i=1}^{m} \lambda_i^{n-m}\, \prod_{i<j}^{m} (\lambda_i - \lambda_j)}{\prod_{i=1}^{m} l_i^{n}\, \prod_{i<j}^{m} \left( \frac{1}{l_j} - \frac{1}{l_i} \right)}, \qquad \text{with } \Xi = Q^H \frac{Z}{n} Q$$
where Ξ = diag{λ1, . . ., λm}, pΞ(Ξ) is the joint PDF of the ordered eigenvalues ∞ ≥ λ1 ≥ . . . ≥ λm−1 ≥ λm ≥ 0 of the sample covariance matrix, and ∞ ≥ l1 ≥ . . . ≥ lm−1 ≥ lm ≥ 0 are the eigenvalues of the true covariance matrix.

The CDF of the maximum eigenvalue λmax = λ1 is obtained using

$$F_{\lambda_{\max}}(x) = p(\lambda_{\max} \le x) = p(\lambda_m \le x, \ldots, \lambda_1 \le x) = \int_{\mathcal{D}} p(\lambda_1, \lambda_2, \ldots, \lambda_m)\, d\lambda_1 \cdots d\lambda_m$$
where p(λ1, λ2, ..., λm) is the joint PDF of the eigenvalues as defined in Equation (A3), and 𝒟 = {0 ≤ λm ≤ ... ≤ λ1x}. Before substituting Equation (A3) in Equation (A4) it is desirable to write a friendlier expression for the joint PDF Equation (A3). If the constant is denoted by 𝒮 as in Equation (3), then Equation (A3) can be written as
$$p_{\Xi}(\Xi) = \mathcal{S} \prod_{i=1}^{m} \lambda_i^{n-m} \prod_{i<j}^{m} (\lambda_i - \lambda_j)\, \left| \exp\left(-\frac{n\lambda_j}{l_i}\right) \right|_{i,j=1}^{m}.$$
Due to the definition of the Vandermonde determinant [25], the product of $\prod_{i=1}^{m} \lambda_i^{n-m}$ and $\prod_{i<j}^{m} (\lambda_i - \lambda_j)$ is:
$$\begin{bmatrix} \lambda_1^{n-m} & 0 & \cdots & 0 \\ 0 & \ddots & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_m^{n-m} \end{bmatrix} \begin{bmatrix} \lambda_1^{m-1} & \cdots & \lambda_1^{m-m} \\ \vdots & & \vdots \\ \lambda_m^{m-1} & \cdots & \lambda_m^{m-m} \end{bmatrix} = \begin{bmatrix} \lambda_1^{n-1} & \cdots & \lambda_1^{n-m} \\ \vdots & & \vdots \\ \lambda_m^{n-1} & \cdots & \lambda_m^{n-m} \end{bmatrix} \;\Rightarrow\; \left| \lambda_i^{n-j} \right|_{i,j=1}^{m}$$
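This Vandermonde-type identity can be checked numerically for arbitrary distinct eigenvalues; the sketch below (our code, with illustrative m = 3, n = 7) compares the determinant form with the explicit product:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 7
lam = np.sort(rng.uniform(0.5, 3.0, m))[::-1]   # lambda_1 > ... > lambda_m > 0

# Determinant |lambda_i ** (n - j)|, columns j = 1, ..., m
V = np.array([[lam[i] ** (n - j) for j in range(1, m + 1)] for i in range(m)])
det_form = np.linalg.det(V)

# Product form: prod(lambda_i**(n-m)) * prod_{i<j}(lambda_i - lambda_j)
prod_form = np.prod(lam ** (n - m))
for i in range(m):
    for j in range(i + 1, m):
        prod_form *= lam[i] - lam[j]

print(det_form, prod_form)
```

With the columns ordered by decreasing powers n−1, ..., n−m, the two forms agree including the sign, which is why no extra (−1) factor appears in Equation (A8).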
Then the joint PDF of the sample eigenvalues can be written in the form of
$$p_{\Xi}(\Xi) = \mathcal{S}\, \left| \lambda_i^{n-j} \right|_{i,j=1}^{m}\, \left| \exp\left(-\frac{n\lambda_j}{l_i}\right) \right|_{i,j=1}^{m}$$
By applying the results of Definition 1 and Definition 2 to Equation (A4), with $\Psi_i(\lambda_j) = \lambda_j^{n-i}$ and $\Phi_i(\lambda_j) = \exp\left(-\frac{n\lambda_j}{l_i}\right)$, one obtains
$$F_{\lambda_{\max}}(x) = \mathcal{S}\, \left| \int_0^x \lambda^{n-j} \exp\left(-\frac{n\lambda}{l_i}\right) d\lambda \right|_{i,j=1}^{m}.$$
Finally, solving the remaining integral using Equation (3.351) in [25] yields the desired closed-form expression for the CDF of the sample maximum eigenvalue of the multi-channel covariance matrix
$$F_{\lambda_{\max}}(x) = \mathcal{S}\, \left| \Psi(x)_{i,j} \right| \qquad \text{with } \Psi(x)_{i,j} = \left(\frac{n}{l_i}\right)^{j-n-1} \gamma\left(n+1-j,\, \frac{xn}{l_i}\right)$$
where $\gamma(a, x) = \int_0^x t^{a-1} e^{-t}\, dt$ denotes the lower incomplete gamma function.
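The CDF above can be checked against simulation. The sketch below is our code, not from the paper: since F(∞) = 1, the constant 𝒮 is recovered as 1/|Ψ(∞)| (using γ(a, ∞) = Γ(a)), which sidesteps re-deriving Equation (3); the empirical CDF is then estimated from Monte Carlo draws of the n-look sample covariance for m = 2, n = 5.

```python
import numpy as np
from scipy.special import gammainc, gamma as Gamma

def F_lmax(x, l, n):
    """CDF of the sample maximum eigenvalue via the determinant formula,
    normalized so that F(inf) = 1 (requires n >= m and distinct l)."""
    m = len(l)
    Psi = np.empty((m, m))
    Psi_inf = np.empty((m, m))
    for i in range(m):
        for j in range(1, m + 1):
            a = n + 1 - j
            scale = (n / l[i]) ** (j - n - 1)
            # gamma(a, y) = gammainc(a, y) * Gamma(a)  (unregularized lower)
            Psi[i, j - 1] = scale * gammainc(a, x * n / l[i]) * Gamma(a)
            Psi_inf[i, j - 1] = scale * Gamma(a)
    return np.linalg.det(Psi) / np.linalg.det(Psi_inf)

# Monte Carlo: n-look sample covariance Z/n of a zero-mean circular Gaussian
# 2-vector with true covariance diag(l) (no loss of generality, since the
# eigenvalue statistics depend on the true eigenvalues only).
rng = np.random.default_rng(1)
l, n, trials = np.array([2.0, 0.5]), 5, 20000
x0 = 2.5
count = 0
for _ in range(trials):
    z = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
    z *= np.sqrt(l / 2.0)[:, None]          # per-channel power l_i
    count += np.linalg.eigvalsh(z @ z.conj().T / n)[-1] <= x0
emp = count / trials
theo = F_lmax(x0, l, n)
print(emp, theo)
```

The evaluation point x0 = 2.5 and the channel powers are illustrative; any x0 away from the distribution tails gives a tight Monte Carlo comparison.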

Proof of Moments of the Sample Maximum Eigenvalue

For any positive integer s, the sth moment of the maximum eigenvalue is obtained using

$$E(\lambda_{\max}^s) = \int_0^\infty x^s\, p_{\lambda_{\max}}(x)\, dx$$
where pλmax (x) is the PDF of λmax. It has been shown in Equation (4) that
$$p_{\lambda_{\max}}(x) = \mathcal{S}\, |\Psi(x)|\, \operatorname{tr}\left[ \Psi(x)^{-1}\, \Omega(x) \right]$$
where Ω(x) and Ψ(x) are m × m matrices with ijth elements $\Omega(x)_{i,j} = \exp\left(-\frac{n}{l_i}x\right) x^{n-j}$ and $\Psi(x)_{i,j} = \left(\frac{n}{l_i}\right)^{j-n-1} \gamma\left(n+1-j,\, \frac{xn}{l_i}\right)$, and 𝒮 denotes the constant terms given in Equation (3).
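The step from the CDF to this PDF is the derivative-of-a-determinant (Jacobi) formula [22], d|Ψ(x)|/dx = |Ψ(x)| tr[Ψ(x)⁻¹ dΨ/dx]. A quick numerical check with an illustrative parametric matrix (our choice, unrelated to the SAR Ψ):

```python
import numpy as np

# Illustrative 2x2 matrix Psi(x) with entries exp(c_ij * x) (elementwise),
# so dPsi/dx = C * exp(C * x) elementwise.
C = np.array([[0.3, -0.2], [0.5, 0.1]])

def Psi(x):
    return np.exp(C * x)

def dPsi(x):
    return C * np.exp(C * x)

x = 0.8
# Jacobi's formula: d|Psi|/dx = |Psi| tr(Psi^{-1} dPsi/dx)
jacobi = np.linalg.det(Psi(x)) * np.trace(np.linalg.inv(Psi(x)) @ dPsi(x))
# Central finite difference of the determinant
h = 1e-6
fd = (np.linalg.det(Psi(x + h)) - np.linalg.det(Psi(x - h))) / (2 * h)
print(jacobi, fd)
```

In Equation (A11), Ω(x) plays exactly the role of dΨ/dx, since differentiating each γ(n+1−j, xn/l_i) term reproduces the Ω entries given above.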

It is desirable to write Equation (A11) in a form better suited to deriving the moments of the sample maximum eigenvalue. Using the cofactor expression of the inverse of an m × m matrix, $A^{-1} = C^T / |A|$ with C the cofactor matrix, Equation (A11) can be written in the following form

$$p_{\lambda_{\max}}(x) = \mathcal{S}\, \operatorname{tr}\left[ C^T(x)\, \Omega(x) \right]$$
where C^T(x) is the transpose of the cofactor matrix of Ψ(x). Since the trace of a square matrix is the sum of the diagonal elements of the matrix, Equation (A11) can be written as the sum of the diagonal elements of D(x) = C^T(x)Ω(x).
$$E(\lambda_{\max}^s) = \mathcal{S} \int_0^\infty x^s\, \operatorname{tr}\left[ C^T(x)\, \Omega(x) \right] dx = \mathcal{S} \int_0^\infty x^s\, \operatorname{tr}\left[ D(x) \right] dx$$
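The cofactor rewriting can be sanity-checked numerically; `cofactor_matrix` below is our helper name, and the matrices are random illustrative stand-ins for Ψ(x) and Ω(x):

```python
import numpy as np

def cofactor_matrix(A):
    """C[i, j] = (-1)**(i+j) * |M_ij|, where M_ij is A with row i and
    column j removed."""
    m = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(m):
        for j in range(m):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

rng = np.random.default_rng(2)
A = rng.uniform(1.0, 2.0, (3, 3)) + 2.0 * np.eye(3)   # well conditioned
Omega = rng.uniform(0.0, 1.0, (3, 3))

C = cofactor_matrix(A)
# From A^{-1} = C^T / |A|:  |A| * tr(A^{-1} Omega) = tr(C^T Omega)
lhs = np.linalg.det(A) * np.trace(np.linalg.inv(A) @ Omega)
rhs = np.trace(C.T @ Omega)
print(lhs, rhs)
```

This confirms that replacing |Ψ| tr[Ψ⁻¹Ω] by tr[CᵀΩ] removes the explicit determinant and inverse, which is what makes the term-by-term integration below possible.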

Now the problem in Equation (A11) is the evaluation of the diagonal elements of D(x). The matrix D(x) is the product of two square matrices, and the trace of the product of two matrices is
$$\operatorname{tr}(C^T \Omega) = \sum_{j,i=1}^{m} C_{ji}\, \Omega_{ji} = C_{11}\Omega_{11} + \cdots + C_{m1}\Omega_{m1} + \cdots + C_{1m}\Omega_{1m} + \cdots + C_{mm}\Omega_{mm},$$
shown as

$$\operatorname{tr}\left( \begin{bmatrix} C_{11} & C_{21} & \cdots & C_{m1} \\ C_{12} & C_{22} & \cdots & C_{m2} \\ \vdots & \vdots & & \vdots \\ C_{1m} & C_{2m} & \cdots & C_{mm} \end{bmatrix} \begin{bmatrix} e^{-\frac{xn}{l_1}} x^{n-1} & e^{-\frac{xn}{l_1}} x^{n-2} & \cdots & e^{-\frac{xn}{l_1}} x^{n-m} \\ e^{-\frac{xn}{l_2}} x^{n-1} & e^{-\frac{xn}{l_2}} x^{n-2} & \cdots & e^{-\frac{xn}{l_2}} x^{n-m} \\ \vdots & \vdots & & \vdots \\ e^{-\frac{xn}{l_m}} x^{n-1} & e^{-\frac{xn}{l_m}} x^{n-2} & \cdots & e^{-\frac{xn}{l_m}} x^{n-m} \end{bmatrix} \right)$$
where |M_{ji}| is the determinant of the sub-matrix M_{ji} obtained from Ψ(x) by removing its jth row and ith column. The (j, i)th cofactor of Ψ(x), denoted by C_{ji}, is then C_{ji} = (−1)^{i+j} |M_{ji}|. Now, Equation (A14) is:
$$E(\lambda_{\max}^s) = \mathcal{S} \sum_{j,i=1}^{m} \int_0^\infty x^s\, C_{ji}\, \Omega_{ji}\, dx$$
where C_{ji} can be extracted from Ψ(x) by Definition 1 as follows,
$$C_{ji} = (-1)^{i+j} \sum_{\pi_{\mathrm{sub}} \in S_m} \operatorname{sgn}(\pi_{\mathrm{sub}}) \prod_{k=1}^{m} \frac{\Psi_{k, \pi_{\mathrm{sub}}(k)}}{\Psi_{j,i}}, \qquad \text{if } k = j \Rightarrow \pi_s(j) = i,\ \pi_s \in \pi_{\mathrm{sub}}.$$
Here, the sum is computed over the (m − 1)! permutations π_s. S_m denotes the set of all m! permutations of the set S = {1, 2, ..., m}, and sgn(π_sub) denotes the sign of the permutation π_s: +1 if π_s is an even permutation and −1 if it is odd. After interchanging the order of the summations and the integral in Equation (A16), the integration reduces to the following form
$$E(\lambda_{\max}^s) = \mathcal{S} \sum_{i,j=1}^{m} (-1)^{i+j} \int_0^\infty x^s \exp\left(-\frac{n}{l_j}x\right) x^{n-i} \left( \sum_{\pi_{\mathrm{sub}} \in S_m} \operatorname{sgn}(\pi_{\mathrm{sub}}) \prod_{k=1}^{m} \frac{\gamma\left(n+1-\pi_{\mathrm{sub}}(k),\, \frac{xn}{l_k}\right) \left(\frac{n}{l_k}\right)^{\pi_{\mathrm{sub}}(k)-n-1}}{\gamma\left(n+1-i,\, \frac{xn}{l_j}\right) \left(\frac{n}{l_j}\right)^{i-1-n}} \right) dx$$
$$= \mathcal{S} \sum_{i,j=1}^{m} (-1)^{i+j} \sum_{\pi_{\mathrm{sub}} \in S_m} \operatorname{sgn}(\pi_{\mathrm{sub}}) \int_0^\infty x^s \exp\left(-\frac{n}{l_j}x\right) x^{n-i} \left( \prod_{k=1}^{m} \frac{\gamma\left(n+1-\pi_{\mathrm{sub}}(k),\, \frac{xn}{l_k}\right) \left(\frac{n}{l_k}\right)^{\pi_{\mathrm{sub}}(k)-n-1}}{\gamma\left(n+1-i,\, \frac{xn}{l_j}\right) \left(\frac{n}{l_j}\right)^{i-1-n}} \right) dx.$$
It can be seen from Equation (A18) that the integrand involves the product of (m − 1) incomplete gamma functions with exponentials and powers of x. After applying the identities (Example 10.3, Equations (10.15) and (10.65) in [21]),
$$\gamma(n, ax) = \frac{1}{n}\, (ax)^{\frac{n-1}{2}}\, e^{-\frac{ax}{2}}\, \mathcal{M}_{\frac{n-1}{2},\, \frac{n}{2}}(ax), \qquad n \ne -1, -2, -3, \ldots,$$
$$\gamma\left(n+1-\pi_{\mathrm{sub}}(k),\, \frac{n}{l_k}x\right) = \frac{1}{n+1-\pi_{\mathrm{sub}}(k)}\, \left(\frac{n}{l_k}x\right)^{\frac{n-\pi_{\mathrm{sub}}(k)}{2}}\, e^{-\frac{nx}{2l_k}}\, \mathcal{M}_{\frac{n-\pi_{\mathrm{sub}}(k)}{2},\, \frac{n+1-\pi_{\mathrm{sub}}(k)}{2}}\left(\frac{n}{l_k}x\right),$$
a solution of the integration in Equation (A18) may be obtained by using the following formula
$$\int_0^\infty x^{v-1}\, e^{-bx}\, \mathcal{M}_{\lambda_1,\, u_1-\frac{1}{2}}(a_1 x) \cdots \mathcal{M}_{\lambda_m,\, u_m-\frac{1}{2}}(a_m x)\, dx = \frac{a_1^{u_1} \cdots a_m^{u_m}\, \Gamma(v+M)}{(b+A)^{v+M}} \times F_A\left(v+M;\, u_1-\lambda_1, \ldots, u_m-\lambda_m;\, 2u_1, \ldots, 2u_m;\, \frac{a_1}{b+A}, \ldots, \frac{a_m}{b+A}\right),$$
where M = u_1 + · · · + u_m and A = (1/2)(a_1 + · · · + a_m); the formula is valid for ℜ(M + v) > 0 and $\Re\left(b \pm \frac{1}{2}a_1 \pm \cdots \pm \frac{1}{2}a_m\right) > 0$. Here 𝒨_{k,m}(x) is a Whittaker function of the first kind (Equation (7.622) in [25]) and F_A(α; β_1, . . ., β_m; γ_1, . . ., γ_m; z_1, . . ., z_m) is a hypergeometric function of several variables (Equation (9.19) in [25]). After replacing the integration part of Equation (A18) by Equation (A21), it can easily be shown that for an m-dimensional SAR system with n ≥ m, the moments of the sample maximum eigenvalue are
$$E(\lambda_{\max}^s) = \mathcal{S} \sum_{i,j=1}^{m} (-1)^{i+j} \sum_{\substack{\pi_{\mathrm{sub}} \in S_m \\ k \ne j,\ \pi_{\mathrm{sub}}(k) \ne i}} \operatorname{sgn}(\pi_{\mathrm{sub}}) \int_0^\infty x^{s+n-i}\, \exp\left(-\frac{n}{l_j}x\right) \prod_{k=1}^{m-1} \frac{1}{n+1-\pi_{\mathrm{sub}}(k)} \left(\frac{n}{l_k}\right)^{\frac{n-\pi_{\mathrm{sub}}(k)}{2}} x^{\frac{n-\pi_{\mathrm{sub}}(k)}{2}}\, \exp\left(-\frac{nx}{2l_k}\right)\, \mathcal{M}_{\frac{n-\pi_{\mathrm{sub}}(k)}{2},\, \frac{n+1-\pi_{\mathrm{sub}}(k)}{2}}\left(\frac{xn}{l_k}\right) dx$$
$$= \mathcal{S} \sum_{i,j=1}^{m} (-1)^{i+j} \sum_{\pi_{\mathrm{sub}} \in S_m} \operatorname{sgn}(\pi_{\mathrm{sub}}) \prod_{k=1}^{m-1} \frac{1}{n+1-\pi_{\mathrm{sub}}(k)} \left(\frac{n}{l_k}\right)^{\frac{n-\pi_{\mathrm{sub}}(k)}{2}} a_k^{u_k} \times \frac{\Gamma(v+M)}{(b+A)^{v+M}}\, F_A\left(v+M;\, u_1-\lambda_1, \ldots, u_{m-1}-\lambda_{m-1};\, 2u_1, \ldots, 2u_{m-1};\, \frac{a_1}{b+A}, \ldots, \frac{a_{m-1}}{b+A}\right)$$
where $a_k = \frac{n}{l_k}$, $u_k = \frac{2+n-\pi_{\mathrm{sub}}(k)}{2}$, $v = 1 + s + n - i + \sum_{k=1}^{m-1} \frac{n-\pi_{\mathrm{sub}}(k)}{2}$, $b = \frac{n}{l_j} + \sum_{k=1}^{m-1} \frac{n}{2l_k}$, $\lambda_k = \frac{n-\pi_{\mathrm{sub}}(k)}{2}$, $M = \sum_{k=1}^{m-1} u_k$ and $A = \frac{1}{2}\sum_{k=1}^{m-1} a_k$. Notice that k and π_sub(k) take values from {k, π_sub(k) ∈ {1, 2, . . ., m}, k ≠ j ∧ π_sub(k) ≠ i} related to the given i, j.

Although Equation (A22) is an exact closed-form expression for the MGF of the sample maximum eigenvalue, the hypergeometric function of several variables is in general very difficult to evaluate, and it is better to express Equation (A22) in a more tractable form for further analysis. From Equation (2.53) in [21], the incomplete gamma function can be written in the closed form

$$\gamma\left(n+1-\pi_{\mathrm{sub}}(k),\, \frac{xn}{l_k}\right) = (n-\pi_{\mathrm{sub}}(k))! \left( 1 - \exp\left(-\frac{xn}{l_k}\right) \sum_{t=0}^{n-\pi_{\mathrm{sub}}(k)} \frac{\left(\frac{xn}{l_k}\right)^t}{t!} \right)$$
for n ∈ ℤ+, which is a constraint fulfilled by our problem. For the sake of simplicity, let $f(k, \pi_{\mathrm{sub}}(k)) = \exp\left(-\frac{xn}{l_k}\right) \sum_{t=0}^{n-\pi_{\mathrm{sub}}(k)} \frac{\left(\frac{xn}{l_k}\right)^t}{t!}$, so that $\prod_{k=1}^{m} (n-\pi_{\mathrm{sub}}(k))!\, \left(1 - f(k, \pi_{\mathrm{sub}}(k))\right) = \prod_{k=1}^{m} a_k! (1 - f_k)$ denotes the product of m incomplete gamma functions. Then this product can be expanded in the form
$$\prod_{k=1}^{m} a_k! \left( 1 + \overbrace{\sum_{k_1=1}}^{\binom{m}{1}} (-1)^1 f_{k_1} + \overbrace{\sum_{k_2 > k_1}}^{\binom{m}{2}} (-1)^2 f_{k_1} f_{k_2} + \cdots + \overbrace{\sum_{k_m > \cdots > k_2 > k_1}}^{\binom{m}{m}} (-1)^m f_{k_1} f_{k_2} \cdots f_{k_m} \right)$$
and the integration can be easily implemented into Equation (A18) as follows
$$E(\lambda_{\max}^s) = \mathcal{S} \sum_{i,j=1}^{m} (-1)^{i+j} \sum_{\pi_{\mathrm{sub}} \in S_m} \operatorname{sgn}(\pi_{\mathrm{sub}}) \int_0^\infty x^{s+n-i}\, \exp\left(-\frac{n}{l_j}x\right)\, \frac{\prod_{k=1}^{m} (n-\pi_{\mathrm{sub}}(k))!\, \left(1 - f(k, \pi_{\mathrm{sub}}(k))\right)}{\Psi_{ji}}\, dx$$
Noting that the term Ψ_{ji} cancels in Equation (A25), the integrand reduces to a product of (m − 1) incomplete gamma functions:
$$E(\lambda_{\max}^s) = \mathcal{S} \sum_{i,j=1}^{m} (-1)^{i+j} \sum_{\pi_{\mathrm{sub}} \in S_m} \operatorname{sgn}(\pi_{\mathrm{sub}}) \prod_{k=1}^{m} a_k! \left( \int_0^\infty x^s \exp\left(-\frac{n}{l_j}x\right) x^{n-i}\, dx + \overbrace{\sum_{k_1=1}}^{\binom{m-1}{1}} \int_0^\infty f_{k_1}\, x^s \exp\left(-\frac{n}{l_j}x\right) x^{n-i}\, dx + \cdots + \overbrace{\sum_{k_{m-1} > \cdots > k_2 > k_1}}^{\binom{m-1}{m-1}} \int_0^\infty f_{k_1} f_{k_2} \cdots f_{k_{m-1}}\, x^s \exp\left(-\frac{n}{l_j}x\right) x^{n-i}\, dx \right).$$

Applying the rule $\int_0^\infty x^n\, e^{-ux}\, dx = \frac{n!}{u^{n+1}}$ to Equation (A26) yields:

$$E(\lambda_{\max}^s) = \mathcal{S} \sum_{i,j=1}^{m} (-1)^{i+j} \sum_{\pi_{\mathrm{sub}} \in S_m} \operatorname{sgn}(\pi_{\mathrm{sub}}) \prod_{k=1}^{m-1} a_k! \left( \frac{(n-i+s)!}{\left(\frac{n}{l_j}\right)^{n-i+s+1}} + \overbrace{\sum_{k_1=1}}^{\binom{m-1}{1}} \sum_{t_{k_1}=0}^{n-\pi_{\mathrm{sub}}(k_1)} \frac{\left(\frac{n}{l_{k_1}}\right)^{t_{k_1}}}{t_{k_1}!}\, \frac{(s+n-i+t_{k_1})!}{\left(\frac{n}{l_j}+\frac{n}{l_{k_1}}\right)^{s+n-i+t_{k_1}+1}} + \cdots + \sum_{t_{k_{m-1}}=0}^{n-\pi_{\mathrm{sub}}(k_{m-1})} \frac{\left(\frac{n}{l_{k_{m-1}}}\right)^{t_{k_{m-1}}}}{t_{k_{m-1}}!} \cdots \sum_{t_{k_1}=0}^{n-\pi_{\mathrm{sub}}(k_1)} \frac{\left(\frac{n}{l_{k_1}}\right)^{t_{k_1}}}{t_{k_1}!}\, \frac{(s+n-i+t_{k_1}+\cdots+t_{k_{m-1}})!}{\left(\frac{n}{l_j}+\frac{n}{l_{k_1}}+\cdots+\frac{n}{l_{k_{m-1}}}\right)^{s+n-i+t_{k_1}+\cdots+t_{k_{m-1}}+1}} \right).$$
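The two elementary ingredients of this step, the finite-series form of the incomplete gamma function for integer order and the rule ∫₀^∞ xⁿ e^(−ux) dx = n!/u^(n+1), can both be checked numerically (the orders and arguments below are illustrative values of our choosing):

```python
import math
from scipy.special import gammainc, gamma as Gamma
from scipy.integrate import quad

# Finite-series form: gamma(a, y) = (a-1)! (1 - exp(-y) sum_{t<a} y**t / t!)
a, y = 4, 2.7
series = math.factorial(a - 1) * (
    1 - math.exp(-y) * sum(y**t / math.factorial(t) for t in range(a))
)
exact = gammainc(a, y) * Gamma(a)   # unregularized lower incomplete gamma
print(series, exact)

# Elementary rule: int_0^inf x**n exp(-u x) dx = n! / u**(n+1)
nn, u = 5, 1.7
val, _ = quad(lambda x: x**nn * math.exp(-u * x), 0, math.inf)
rule = math.factorial(nn) / u**(nn + 1)
print(val, rule)
```

Because every γ term in Equation (A26) has integer order n + 1 − π_sub(k), the series form turns each integrand into a finite sum of terms x^p e^(−qx), which is exactly what makes the closed form in Equation (A27) possible.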
The sth moment of the sample maximum eigenvalue can finally be written in a closed-form expression as
$$E(\lambda_{\max}^s) = \mathcal{S} \sum_{i,j=1}^{m} \sum_{\pi_{\mathrm{sub}} \in S_m} \operatorname{sgn}(\pi_{\mathrm{sub}})\, \mathcal{G}(k, \pi_{\mathrm{sub}}(k)), \qquad \text{if } k = j \Rightarrow \pi_k(j) = i,\ \pi_k \in \pi_{\mathrm{sub}}$$
$$\mathcal{G}(k, \pi_{\mathrm{sub}}(k)) = \prod_{k=1}^{m-1} (n-\pi_{\mathrm{sub}}(k))! \times \left( \frac{(n-i+s)!}{\left(\frac{n}{l_j}\right)^{n-i+s+1}} + \overbrace{\sum_{k_1=1}}^{\binom{m-1}{1}} \sum_{t_{k_1}=0}^{n-\pi_{\mathrm{sub}}(k_1)} \frac{\left(\frac{n}{l_{k_1}}\right)^{t_{k_1}}}{t_{k_1}!}\, \frac{(s+n-i+t_{k_1})!}{\left(\frac{n}{l_j}+\frac{n}{l_{k_1}}\right)^{s+n-i+t_{k_1}+1}} + \overbrace{\sum_{k_2 > k_1}}^{\binom{m-1}{2}} \sum_{t_{k_1}=0}^{n-\pi_{\mathrm{sub}}(k_1)} \frac{\left(\frac{n}{l_{k_1}}\right)^{t_{k_1}}}{t_{k_1}!} \sum_{t_{k_2}=0}^{n-\pi_{\mathrm{sub}}(k_2)} \frac{\left(\frac{n}{l_{k_2}}\right)^{t_{k_2}}}{t_{k_2}!}\, \frac{(s+n-i+t_{k_1}+t_{k_2})!}{\left(\frac{n}{l_j}+\frac{n}{l_{k_1}}+\frac{n}{l_{k_2}}\right)^{s+n-i+t_{k_1}+t_{k_2}+1}} + \cdots + \sum_{t_{k_{m-1}}=0}^{n-\pi_{\mathrm{sub}}(k_{m-1})} \frac{\left(\frac{n}{l_{k_{m-1}}}\right)^{t_{k_{m-1}}}}{t_{k_{m-1}}!} \cdots \sum_{t_{k_1}=0}^{n-\pi_{\mathrm{sub}}(k_1)} \frac{\left(\frac{n}{l_{k_1}}\right)^{t_{k_1}}}{t_{k_1}!}\, \frac{(s+n-i+t_{k_1}+\cdots+t_{k_{m-1}})!}{\left(\frac{n}{l_j}+\frac{n}{l_{k_1}}+\cdots+\frac{n}{l_{k_{m-1}}}\right)^{s+n-i+t_{k_1}+\cdots+t_{k_{m-1}}+1}} \right)$$

With this, the moments of the sample maximum eigenvalue are obtained from its PDF. For larger m-dimensional systems, such an approach may become complicated due to the large number of cofactor calculations required. However, for applications where only a few channels are of interest, as in polarimetry (m = 3) and interferometry (m = 2), the numerical calculations are quite rapid and stable. For example, in the simplest case of m = 2, n ≥ 2, the MGF is given by

$$E(\lambda_{\max}^s) = \mathcal{S} \left( \sum_{j,i=1}^{2} \int_0^\infty x^s\, x^{n-i}\, \exp\left(-\frac{xn}{l_j}\right) \sum_{\pi_{\mathrm{sub}} \in S_m} \operatorname{sgn}(\pi_{\mathrm{sub}})\, \gamma\left(n+1-k,\, \frac{xn}{l_{\pi_{\mathrm{sub}}(k)}}\right) \left(\frac{n}{l_{\pi_{\mathrm{sub}}(k)}}\right)^{k-n-1} dx \right)$$
$$\text{if } \pi_k(j) = i,\ \pi_k \in \pi_{\mathrm{sub}}, \text{ and if } k \ne j \text{ and } \pi_{\mathrm{sub}}(k) \ne i,\ k = 1, 2$$
$$= \mathcal{S}\, \frac{\Gamma(2n)}{\Gamma(n-1)} \left( \frac{l_1 l_2}{n l_1 + n l_2} \right)^{2n} \left( \sum_{j,i=1}^{2} \sum_{\pi_{\mathrm{sub}} \in S_m} \frac{\operatorname{sgn}(\pi_{\mathrm{sub}})}{n+1-k}\, {}_2F_1\left(1, 2n;\, n+2-k;\, \frac{l_i}{l_i + l_{\pi_{\mathrm{sub}}(k)}}\right) \right).$$
where the constant is $\mathcal{S} = \frac{(l_1 l_2)^{1-n}\, n^{2n}}{\Gamma(n-1)\, \Gamma(n+1)\, (l_1 - l_2)}$ for m = 2 and ₂F₁(a, b; c; x) is a hypergeometric function (Equation (9.8) in [21]).

Figure 1. Two-dimensional system configuration. (a) True eigenvalue l1 and (b) true eigenvalue l2 as a function of correlation and normalized power ratio b. (c) Contour plots of l1.
Figure 2. (a) Comparison between the theoretical distributions and the histograms obtained from simulated data of the maximum sample eigenvalue. (b) The distribution of the maximum sample eigenvalue as a function of the number of samples, for a three-dimensional system. In both cases, the powers are given by σk1 = σk2 = σk3 = 1 and correlations by ρk1k2 = 0, ρk1k3 = 0.8 and ρk2k3 = 0. When n → ∞, λmax = l1 = 1.8.
Figure 3. Theoretical versus simulation results (a) for the first order statistics (mean), and (b) for the second order statistics (variance), of the sample maximum eigenvalue of a two-dimensional system with powers σk1 = σk2 = 1 and correlation ρk1k2 = 0.2.
Figure 4. Theoretical results for the third (skewness) and the fourth (kurtosis) order statistics of the sample maximum eigenvalue of a 2D system having powers σk1 = σk2 = 1 and correlations ρ = {0.2, 0.3, . . ., 0.9} versus the number of samples n = {2, 3, . . ., 62}.
Figure 5. Effects of the correlation between channels on the expected value of the maximum sample eigenvalue keeping n = 3 and k = 1.
Figure 6. The bias, E[λmax] − lmax, of the sample maximum eigenvalue with the number of samples 3 and 16 in various correlated channels having a standard deviation of σk1 = σk2 = 1.
Figure 7. Maximum sample eigenvalue detection performance as a function of the number of channels m, samples n and correlation ρ. (a) ROC curves for different multidimensional systems, m = {2, 3, 6} and n = 6. (b) ROC curves for the two- and three-dimensional cases, and for different numbers of samples n. (c) ROC curves for the two-dimensional case, n = 3, and for different correlations.
Figure 8. Detection performance of three different distributed scatterers using the PMF concept for different number of samples. (a) Azimuthal symmetric scatterer with ɛ = 1, γ = 0.5 and ρ = 0. (b) Reflection symmetric scatterer with ɛ = 1, γ = 0.8 and ρ = 0.8. (c) Reflection symmetric scatterer with ɛ = 0, γ = 0.8 and ρ = 0.8. The PFA is evaluated using the polarization independent clutter with ɛ = 1, γ = 1 and ρ = 0.

Share and Cite

MDPI and ACS Style

Erten, E. The Performance Analysis Based on SAR Sample Covariance Matrix. Sensors 2012, 12, 2766-2786. https://doi.org/10.3390/s120302766

