# Isometric Signal Processing under Information Geometric Framework

College of Electronic Science, National University of Defense Technology, Changsha 410073, China
* Authors to whom correspondence should be addressed.
Entropy 2019, 21(4), 332; https://doi.org/10.3390/e21040332
Received: 28 February 2019 / Revised: 20 March 2019 / Accepted: 25 March 2019 / Published: 27 March 2019

## Abstract

Information geometry is the study of the intrinsic geometric properties of manifolds of probability distributions and provides a deeper understanding of statistical inference. Based on this discipline, this letter investigates the influence of signal processing on the geometric structure of the statistical manifold in estimation problems. The letter defines the intrinsic parameter submanifold, which reflects the essential geometric characteristics of an estimation problem, and proves that the intrinsic parameter submanifold becomes a tighter one after signal processing. In addition, the necessary and sufficient condition for signal processing to leave the geometric structure invariant, i.e., isometric signal processing, is given. Specifically, for processing of linear form, a construction method for linear isometric signal processing is proposed, and its properties are presented.

## 1. Introduction

Information geometry was pioneered by Rao [1] in 1945, and a more concise framework was built up by Chentsov [2], Efron [3,4], and Amari [5]. In information geometry, the research object is the statistical manifold, a parameterized family of probability distributions with a topological structure, $M = \{ p(x; \xi) \}$. Taking the Fisher information matrix as the Riemannian metric, the distance between any two points (probability distributions) can be calculated [6]. In such a manifold, the distance between two points is the intrinsic measure of the dissimilarity between the corresponding probability distributions [7]. As information geometry provides a new perspective on signal processing, it has found many applications. In estimation problems, the natural gradient, based on the Riemannian distance, has been employed [8,9,10]. The intrinsic Cramér–Rao bound, derived on the Grassmann manifold, is a tighter bound for both biased and unbiased estimators [11]. In addition, the geometric structure (considering the distances between all pairs of points) can be used to evaluate the quality of an observation model, which has been applied to waveform optimization [12]. The geometric structure has also been utilized in optimization problems under matrix constraints [13,14,15]. Moreover, there are many significant works on detection based on the Riemannian distance [16,17,18,19,20]. Furthermore, in image processing, a target-recognition method for SAR (Synthetic Aperture Radar) images based on the Grassmann manifold has been proposed [21].
As this new general theory has revealed its capability to solve statistical problems, the further development of information geometry demands an unambiguous relationship between the geometric structure and the intrinsic characteristics of common problems. This letter focuses on the influence of signal processing on the statistical manifold in estimation problems. In estimation problems, signal processing is the common means of extracting information about a desired parameter. As signal processing is applied, the geometric structure of the considered statistical manifold, to which the distribution of the observed data belongs, changes. The purpose of this letter is to study the change of the geometric structure that accompanies signal processing and to propose an appropriate processing based on this change.
This research is presented in the following way. First, according to the essence of estimation problems, the intrinsic parameter submanifold, which reflects the geometric characteristics of the problem, is defined. Then, we show that the statistical manifold becomes a tighter one after processing and give the necessary and sufficient condition for signal processing to leave the geometric structure invariant (named isometric signal processing). For the more specific case in which the processing is linear, a construction method for linear isometric processing is proposed. Moreover, the properties of the constructed processing are presented.
The following notations are adopted in this paper: math italic x, lowercase bold italic $\boldsymbol{x}$, and uppercase bold $\mathbf{A}$ denote scalars, vectors, and matrices, respectively. The constant matrix $I$ is the identity matrix. The symbols $(\cdot)^H$, $(\cdot)^T$, and $(\cdot)^*$ denote the conjugate transpose, the transpose, and the complex conjugate, respectively. In addition, $[A]_{ij}$ is the ith-row, jth-column element of the matrix $A$, and $\mathrm{rk}(A)$ is the rank of $A$. Moreover, $A \geq 0$ means that $A$ is a positive semidefinite matrix. Finally, $E(\cdot)$ denotes the statistical expectation of a random variable.

## 2. Intrinsic Parameter Submanifold

Let $M = \{ p(x; \xi) \}$ be a statistical manifold with coordinate system $\xi$, consisting of a family of probability distributions. Consider an estimation problem on the statistical manifold $M$: the observed data $x = (x_1, x_2, \cdots, x_N)$ belong to one of the probability distributions $p(x; \xi)$ in $M$. Suppose the desired parameter $\theta$ is implied in the parameter $\xi$, and the relation between $\theta$ and $\xi$ can be expressed as a mapping $h: \theta \mapsto \xi$. For instance, in the distance measurement of a pulse-Doppler radar, the desired distance r is embedded in the statistical mean $\mu$ of the observed data, i.e., $\mu = h(r) = P(t - 2r/c)$, where $P(t)$ is the pulse signal and c is the velocity of light.
Actually, not all $p(x; \xi)$ in $M$ are relevant to the estimation problem; the considered probability distributions $\{ p(x; h(\theta)) \}$ do not cover the whole manifold but form a submanifold, which is the essential manifold of the problem. In the above example, the considered distributions are selected by the pulse signal $P(t)$ (the statistical mean $\mu$ can be expressed as $P(t - 2r/c)$).
Definition 1 (Intrinsic parameter submanifold).
The manifold $S = \{ p(x; h(\theta)) \}$ is the intrinsic parameter submanifold of $M = \{ p(x; \xi) \}$, with coordinate system $\theta$.
The Riemannian metric of the submanifold $S$ is defined as $I_x(\theta)$, the Fisher information matrix associated with the parameter $\theta$, as in Figure 1. The distance between two points on the submanifold is defined by this Riemannian metric [6].
Remark 1.
When the Fisher information matrices $G_1, G_2$ of two observation models satisfy $G_1 \geq G_2$, the observation model with $G_1$ is better than the other in terms of the estimation problem. The reason is that, by the definition of the distance on the manifold, the distance $D_1(\theta_1, \theta_2)$ (defined by $G_1$) is larger than $D_2(\theta_1, \theta_2)$ (defined by $G_2$). That means the two parameters $\theta_1, \theta_2$ are easier to discriminate in the manifold with $G_1$ than in the one with $G_2$.
Furthermore, the above remark can also be explained in traditional statistical signal processing. In estimation theory, the Fisher information also plays an important role, through the CRLB (Cramér–Rao Lower Bound) inequality. Therefore, the same conclusion can be deduced in traditional estimation theory.
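As a toy illustration of Remark 1 (our own sketch, not from the letter): for a scalar parameter with a constant Fisher metric $G$, the Riemannian distance reduces to $\sqrt{G}\,|\theta_1 - \theta_2|$, so the larger metric separates the same pair of parameters by a larger distance.

```python
import numpy as np

# For a scalar parameter with constant Fisher metric G, the Riemannian
# distance is sqrt(G) * |theta1 - theta2| (geodesic length on a 1-D manifold).
def riemann_dist(G, theta1, theta2):
    return np.sqrt(G) * abs(theta1 - theta2)

G1, G2 = 4.0, 1.0                      # Fisher information of two observation models
theta1, theta2 = 0.0, 0.5
d1 = riemann_dist(G1, theta1, theta2)  # distance under the larger metric
d2 = riemann_dist(G2, theta1, theta2)  # distance under the smaller metric
assert d1 > d2  # G1 >= G2: the same parameter pair is easier to discriminate
```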

## 3. Signal Processing on the Intrinsic Parameter Submanifold

#### 3.1. Geometric Structure Change by Signal Processing

In estimation problems, the signal is often processed into another form to obtain accurate estimates. Consider the signal processing $y = g(x)$, where $x$ is the original signal and $y$ is the processed signal. Signal processing is often accompanied by a change of the statistical manifold, especially a change of the Riemannian metric.
One of the most vital features of the submanifold in estimation problems is its Riemannian metric, because the distance between two parameters, representing their dissimilarity, is defined by it. Suppose the intrinsic parameter submanifolds of x and y are $S$ and $S'$, with Riemannian metrics $G_S$ and $G_{S'}$, respectively. If the PDFs (Probability Density Functions) $p_x(x; \theta)$, $p_y(y; \theta)$, and $p_{x,y}(x, y; \theta)$ obey the boundary condition [22], then the Fisher information satisfies the following relations [22,23]:
$- E\left[ \frac{\partial^2 \ln p_{x,y}(x,y;\theta)}{\partial \theta_i \partial \theta_j} \right] = - E\left[ \frac{\partial^2 \ln p_{x|y}(x|y;\theta)}{\partial \theta_i \partial \theta_j} \right] - E\left[ \frac{\partial^2 \ln p_{y}(y;\theta)}{\partial \theta_i \partial \theta_j} \right], \quad (1)$
$- E\left[ \frac{\partial^2 \ln p_{x,y}(x,y;\theta)}{\partial \theta_i \partial \theta_j} \right] \geq - E\left[ \frac{\partial^2 \ln p_{y}(y;\theta)}{\partial \theta_i \partial \theta_j} \right]. \quad (2)$
Because $y$ is produced from $x$ via $y = g(x)$, the following equation holds:
$- E\left[ \frac{\partial^2 \ln p_{x,y}(x,y;\theta)}{\partial \theta_i \partial \theta_j} \right] = - E\left[ \frac{\partial^2 \ln p_{x}(x;\theta)}{\partial \theta_i \partial \theta_j} \right] \quad (y = g(x)). \quad (3)$
Proof.
Because $p_{x,y}(x,y;\theta) = 0$ for $y \neq g(x)$, the joint PDF can be expressed as $p_{x,y}(x,y;\theta) = p_x(x;\theta)\,\delta(y - g(x))$, and the Fisher information simplifies to:
$- E\left[ \frac{\partial^2 \ln p_{x,y}(x,y;\theta)}{\partial \theta_i \partial \theta_j} \right] = - E\left[ \frac{\partial^2 \ln p_{x}(x;\theta)}{\partial \theta_i \partial \theta_j} \right] - E\left[ \frac{\partial^2 \ln \delta(y - g(x))}{\partial \theta_i \partial \theta_j} \right] = - E\left[ \frac{\partial^2 \ln p_{x}(x;\theta)}{\partial \theta_i \partial \theta_j} \right].$
□
Then, the following lemma holds.
Lemma 1.
The Riemannian metrics $G_S$ and $G_{S'}$ satisfy, $\forall \theta$:
$[G_{S'}(\theta)]_{ij} - E\left[ \frac{\partial^2 \ln p_{x|y}(x|y;\theta)}{\partial \theta_i \partial \theta_j} \right] = [G_S(\theta)]_{ij}.$
Proof.
By Equations (1) and (3) and the definitions of $G_S$ and $G_{S'}$, the lemma is established. □
Corollary 1.
For each $\theta$, $G_S(\theta) - G_{S'}(\theta)$ is a positive semidefinite matrix, i.e., $G_S(\theta) \geq G_{S'}(\theta)$.
Proof.
By Equations (2) and (3) and the definitions of $G_S$ and $G_{S'}$, the corollary is established. □
Therefore, according to Lemma 1 and its corollary, signal processing results in a loss of Fisher information. As Figure 2 shows, signal processing turns the intrinsic parameter submanifold into a tighter one, i.e., discriminating two parameters becomes more difficult.
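A minimal numerical check of this information loss (a hypothetical example of ours, not from the letter): discarding part of the data strictly shrinks the Fisher information, and the empirical score variance recovers both metrics.

```python
import numpy as np

# Toy model: x = (x1, x2) i.i.d. N(theta, 1).  The Fisher information of x
# about theta is 2; the processing y = g(x) = x1 discards x2, so the Fisher
# information of y is 1, and G_S - G_S' = 1 > 0 (a "tighter" submanifold).
rng = np.random.default_rng(0)
theta, eps, n = 0.3, 1e-4, 200_000

def empirical_fisher(loglik, samples):
    # E[(d/dtheta log p)^2], with the score estimated by central differences
    score = (loglik(samples, theta + eps) - loglik(samples, theta - eps)) / (2 * eps)
    return np.mean(score ** 2)

x = rng.normal(theta, 1.0, size=(n, 2))
loglik_x = lambda s, t: -0.5 * np.sum((s - t) ** 2, axis=-1)  # log p_x up to a constant
loglik_y = lambda s, t: -0.5 * (s[:, 0] - t) ** 2             # log p_y up to a constant
G_S  = empirical_fisher(loglik_x, x)   # close to 2
G_Sp = empirical_fisher(loglik_y, x)   # close to 1
assert G_S > G_Sp                      # Fisher information is lost by the processing
```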

#### 3.2. Isometric Signal Processing

As the above discussion shows, an appropriate signal processing should keep the intrinsic parameter submanifold of the processed signal isometric to the original submanifold, i.e., the dissimilarity between any two parameters is not reduced.
Definition 2 (Isometry).
When $G_S(\theta) = G_{S'}(\theta)$, the two intrinsic parameter submanifolds $S$ and $S'$ are isometric.
Actually, the sufficient and necessary condition of the isometry of $S$ and $S ′$ is as follows.
Theorem 1.
$G_S(\theta) = G_{S'}(\theta)$ if and only if $y$ is a sufficient statistic of $x$.
Proof.
By Lemma 1, the following relations are equivalent:
$G_S(\theta) = G_{S'}(\theta) \iff E\left[ \left( \frac{\partial \ln p_{x|y}(x|y;\theta)}{\partial \theta_i} \right)^2 \right] = 0 \;\; (\forall i) \iff \frac{\partial p_{x|y}(x|y;\theta)}{\partial \theta_i} \overset{a.e.}{=} 0 \;\; (\forall i).$
That means $p_{x|y}(x|y;\theta)$ does not depend on the parameter $\theta$, i.e., $y$ is a sufficient statistic of $x$. □
From the information geometry viewpoint, the theorem suggests estimating the desired parameter from the sufficient statistic. This conclusion can also be confirmed in traditional estimation theory. By the Rao–Blackwell theorem [24], when $y = g(x)$ is a sufficient statistic, for any estimator $\hat{\theta}(x)$, the estimator $\check{\theta}(y) = E(\hat{\theta}(x) | y)$ is a better one, i.e., $E(\check{\theta}(y) - \theta)^2 \leq E(\hat{\theta}(x) - \theta)^2$. This theorem indicates that designing the estimator from the sufficient statistic $y$ is more appropriate, because for each estimator $\hat{\theta}(x)$ using the original signal $x$ as input, the estimator $\check{\theta}(y) = E(\hat{\theta}(x) | y)$ using the sufficient statistic $y$ as input is at least as good. Furthermore, by the Lehmann–Scheffé theorem [25,26], when the sufficient statistic $y$ is complete and the estimator $\check{\theta}(y)$ is unbiased, i.e., $E(\check{\theta}(y)) = \theta$, the estimator $\check{\theta}(y)$ is the minimum-variance unbiased estimator.
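The Rao–Blackwell improvement can be verified with a quick Monte-Carlo sketch (our own toy example, not from the letter):

```python
import numpy as np

# Toy model: x_1, ..., x_N i.i.d. N(theta, 1); y = sum(x) is sufficient.
# Start from the crude unbiased estimator theta_hat = x_1 (variance 1) and
# Rao-Blackwellize: E(x_1 | y) = y / N, the sample mean (variance 1/N).
rng = np.random.default_rng(1)
theta, N, trials = 2.0, 10, 100_000
x = rng.normal(theta, 1.0, size=(trials, N))
theta_hat = x[:, 0]            # crude estimator, Var ~ 1
theta_chk = x.mean(axis=1)     # conditioned on the sufficient statistic, Var ~ 1/N
assert theta_chk.var() < theta_hat.var()   # Rao-Blackwell improvement
```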
Corollary 2.
If $g(x)$ is a reversible function, then $G_S(\theta) = G_{S'}(\theta)$.
Proof.
If $g(x)$ is a reversible function, the PDFs of $x$ and $y$ satisfy:
$p_y(y;\theta) \left| \frac{d g(x)}{d x} \right| = p_x(x;\theta).$
According to the Fisher–Neyman factorization theorem [27], $y$ is a sufficient statistic of $x$, so $G_S(\theta) = G_{S'}(\theta)$. □
When the processed signal $y = g(x)$ is a sufficient statistic of $x$, the signal processing $g(x)$ is an isometric signal processing. In particular, any reversible processing is isometric, such as the DFT (Discrete Fourier Transform): the inverse DFT recovers the original signal, i.e., the DFT is a reversible process. Moreover, this conclusion is also encountered in traditional estimation theory through the Rao–Blackwell and Lehmann–Scheffé theorems.
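For instance, the reversibility of the DFT, and hence its isometry, can be checked directly (our own illustration using NumPy's FFT):

```python
import numpy as np

# The DFT is reversible: the inverse DFT recovers the original signal exactly
# (up to floating-point error), so no information about theta can be lost.
rng = np.random.default_rng(2)
x = rng.normal(size=64) + 1j * rng.normal(size=64)   # a complex signal
y = np.fft.fft(x)          # processed signal (DFT)
x_rec = np.fft.ifft(y)     # reversible: recovers x
assert np.allclose(x_rec, x)
```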

## 4. Linear Form of Signal Processing

In practice, the noise is often Gaussian or asymptotically Gaussian, and common signal processing is linear, such as the DFT, matched filtering, coherent integration, etc. This section discusses the linear form of signal processing on the Gaussian statistical manifold.

#### 4.1. Model Formulation

The information, as the desired parameter, is usually embedded in the signal, and the signal is often contaminated by noise. This can be described as $x = s(\theta) + w$, where $s(\theta)$ is the uncontaminated signal waveform, $w$ is Gaussian noise, and $x$ is the observed signal. Linear signal processing can be expressed in matrix form as $y = Hx$.

#### 4.2. Fisher Information Loss of Linear Signal Processing

Suppose the linear signal processing is $y = Hx$, where $x$ is m-dimensional and $y$ is n-dimensional; then the matrix $H$ is $n \times m$. If $\mathrm{rk}(H) < n$, there are $n - \mathrm{rk}(H)$ rows that are linear combinations of the remaining $\mathrm{rk}(H)$ rows. The PDF of $y$ then depends only on the corresponding $\mathrm{rk}(H)$ elements, and the Fisher information loss is equivalent to that of the submatrix consisting of those $\mathrm{rk}(H)$ rows. Therefore, for convenience of the statement, $\mathrm{rk}(H)$ is assumed to be n, i.e., the matrix $H$ has full row rank.
The Fisher information loss is first discussed under WGN (White Gaussian Noise). Then, the Fisher information under CGN (Colored Gaussian Noise) is presented based on the WGN results.

#### 4.2.1. White Gaussian Noise

Suppose the noise is WGN with power $\sigma^2$; then the signal obeys the normal distribution $x \sim N(s(\theta), \sigma^2 I)$. By the properties of the normal distribution, $y$ is also normally distributed, but with different parameters: $y \sim N(H s(\theta), \sigma^2 H H^H)$. Calculating the Fisher information of $x$ and $Hx$, the loss of information is:
$G_\Delta(\theta) = G_S(\theta) - G_{S'}(\theta) = \frac{1}{\sigma^2} \frac{\partial s(\theta)}{\partial \theta}^H \left( I - H^H (H H^H)^{-1} H \right) \frac{\partial s(\theta)}{\partial \theta}. \quad (8)$
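Equation (8) can be evaluated numerically for a hypothetical derivative $\partial s(\theta)/\partial\theta$ (our own sketch, scalar $\theta$): the loss is the energy of the derivative lying outside the row space of $H$, so it vanishes exactly when the derivative lies in $\mathrm{span}(H^H)$.

```python
import numpy as np

# Evaluate G_delta = (1/sigma^2) ds^H (I - H^H (H H^H)^{-1} H) ds for a
# hypothetical ds = d s(theta)/d theta (scalar theta).  The middle factor is
# the projector onto the orthogonal complement of the row space of H.
rng = np.random.default_rng(3)
m, sigma2 = 6, 0.5
ds = rng.normal(size=(m, 1))

def fisher_loss(H):
    P = H.conj().T @ np.linalg.inv(H @ H.conj().T) @ H   # projector onto span(H^H)
    return ((ds.conj().T @ (np.eye(m) - P) @ ds) / sigma2).item()

H_bad  = rng.normal(size=(2, m))                      # generic rows: information is lost
H_good = np.vstack([ds.T, rng.normal(size=(1, m))])   # rows contain ds: zero loss
assert fisher_loss(H_bad) > 1e-6
assert abs(fisher_loss(H_good)) < 1e-8
```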

#### 4.2.2. Colored Gaussian Noise

Suppose the noise is CGN with covariance matrix $C$. By the properties of Hermitian positive definite matrices, the covariance matrix can be factored as $C = D D^H$, where $D$ is a reversible matrix.
According to Corollary 2, the reversible transformation $x^* = D^{-1} x$ leaves the Fisher information invariant, i.e., $G_S(\theta) = G_{S'}(\theta)$, and the noise in $x^*$ is WGN. Applying the linear processing $HD$ to $x^*$ gives:
$H D x^* = H D D^{-1} x = H x = y,$
and the information loss can be calculated by Equation (8). Therefore, the loss of information is:
$G_\Delta(\theta) = \frac{\partial s(\theta)}{\partial \theta}^H \left( C^{-1} - H^H (H C H^H)^{-1} H \right) \frac{\partial s(\theta)}{\partial \theta}.$
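The whitening step can be sketched as follows (our own illustration; the Cholesky factor serves as $D$, and the covariance is a hypothetical example):

```python
import numpy as np

# Whitening sketch for the CGN case: factor C = D D^H with a Cholesky
# decomposition and apply the reversible transform x* = D^{-1} x; the noise
# covariance of x* becomes the identity, so the WGN results apply.
rng = np.random.default_rng(4)
m = 4
A = rng.normal(size=(m, m))
C = A @ A.T + m * np.eye(m)     # a hypothetical positive definite covariance
D = np.linalg.cholesky(C)       # C = D D^H
Dinv = np.linalg.inv(D)
# Covariance of the whitened noise D^{-1} w is D^{-1} C D^{-H} = I:
assert np.allclose(Dinv @ C @ Dinv.T, np.eye(m))
```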

#### 4.3. The Construction of the Isometric Linear Form of Signal Processing

In the previous section, the sufficient and necessary condition for isometric signal processing was that $y = g(x)$ is a sufficient statistic of $x$. However, a sufficient statistic of $x$ is often difficult to obtain, and the isometric processing must be constructed in another way. This part introduces a construction method for linear isometric signal processing.
As discussed above, a signal under CGN can be transformed into a signal under WGN without information loss. Therefore, this part considers the signal under WGN. Under CGN, the signal can be whitened first, after which the steps are the same as in the WGN case.
The linear isometric processing can be obtained in the following way. First, solve the equation:
$\frac{\partial s(\theta)}{\partial \theta}^H v = 0 \quad (\forall \theta, \; v \in \mathbb{R}^m). \quad (12)$
Suppose the solution space is $V = \mathrm{span}\{ v_1, v_2, \cdots, v_l \}$, with dimension l, and let the orthogonal complement of $V$ be $V^\perp$, with dimension $n = m - l$. Then, the desired signal processing is formed as:
$H = [v'_1, v'_2, \cdots, v'_n]^H,$
where $v'_1, v'_2, \cdots, v'_n$ is a basis of $V^\perp$.
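The construction can be sketched in a few lines (our own implementation; for simplicity, the null-space condition is imposed at a single fixed $\theta$, whereas the letter requires it for all $\theta$):

```python
import numpy as np

# V is the null space of ds^H (here at one fixed theta), so V_perp is the
# column space of ds.  Taking H as a conjugate-transposed orthonormal basis
# of V_perp yields an isometric processing with rk(ds) rows.
rng = np.random.default_rng(5)
m, p = 8, 2
ds = rng.normal(size=(m, p))                 # hypothetical Jacobian d s / d theta
U, S, Vt = np.linalg.svd(ds, full_matrices=False)
basis = U[:, S > 1e-10]                      # orthonormal basis of V_perp
H = basis.conj().T                           # the constructed processing matrix
P = H.conj().T @ np.linalg.inv(H @ H.conj().T) @ H
loss = ds.conj().T @ (np.eye(m) - P) @ ds    # Equation-(8) loss, up to 1/sigma^2
assert H.shape == (2, m)                     # minimal number of rows = rank of ds
assert np.allclose(loss, 0)                  # isometric: zero Fisher information loss
```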
Proposition 1.
$H = [v'_1, v'_2, \cdots, v'_n]^H$ is an isometric processing.
Proof.
Let $Q = I - H^H (H H^H)^{-1} H$. Because the non-zero eigenvalues of $H^H (H H^H)^{-1} H$ coincide with those of $H H^H (H H^H)^{-1} = I$, the eigenvalues of $H^H (H H^H)^{-1} H$ are one (with multiplicity n) and zero (with multiplicity $m - n$). Therefore, the eigenvalues of $Q$ are one (with multiplicity $m - n$) and zero (with multiplicity n). Then, as the matrix $Q$ is Hermitian, it can be expressed as:
$Q = L \, \mathrm{diag}(1, \cdots, 1, 0, \cdots, 0) \, L^H.$
Consider the fact that $Q H^H = 0$; the first $m - n$ columns of $H L$ must equal zero. That means the first $m - n$ columns of $L$ form a basis of $V$, and the remaining columns form a basis of $V^\perp$, i.e.:
$L = [v_1, \cdots, v_{m-n}, v''_1, \cdots, v''_n].$
Because $v_1, \cdots, v_{m-n}$ are solutions of Equation (12):
$L^H \frac{\partial s(\theta)}{\partial \theta} = [v_1, \cdots, v_{m-n}, v''_1, \cdots, v''_n]^H \frac{\partial s(\theta)}{\partial \theta} = \left[ 0, \cdots, 0, \; {v''_1}^H \frac{\partial s(\theta)}{\partial \theta}, \cdots, {v''_n}^H \frac{\partial s(\theta)}{\partial \theta} \right]^T,$
then:
$\frac{\partial s(\theta)}{\partial \theta}^H L \, \mathrm{diag}(1, \cdots, 1, 0, \cdots, 0) \, L^H \frac{\partial s(\theta)}{\partial \theta} = 0,$
i.e., the Fisher information loss is zero. □
According to the proposed construction method, the following proposition can be obtained.
Proposition 2.
The matrix $H$ is the isometric matrix with the minimal number of rows, i.e., the processed signal has the minimal length.
Proof.
Let $H'$ be an isometric matrix with $n'$ rows and $Q' = I - H'^H (H' H'^H)^{-1} H'$. Similarly, the matrix can be expressed as:
$Q' = L' \, \mathrm{diag}(1, \cdots, 1, 0, \cdots, 0) \, L'^H,$
where the multiplicity of the eigenvalue one is $m - n'$.
As:
$\frac{\partial s(\theta)}{\partial \theta}^H L' \, \mathrm{diag}(1, \cdots, 1, 0, \cdots, 0) \, L'^H \frac{\partial s(\theta)}{\partial \theta} = 0,$
the first $m - n'$ rows of $L'^H \frac{\partial s(\theta)}{\partial \theta}$ must be zero, which means the first $m - n'$ columns of $L'$ are linearly independent solutions of Equation (12). However, the solution space $V = \mathrm{span}\{ v_1, v_2, \cdots, v_l \}$ has dimension $m - n$, so $m - n' \leq m - n$, i.e., $n' \geq n$.
Therefore, the matrix $H$ is the isometric matrix with the minimal rows. □
Remark 2.
Because the first $m - n$ columns of $L'$ are linearly independent solutions of Equation (12), any element $v'$ of $V^\perp$ satisfies that the first $m - n$ elements of $L'^H v'$ equal zero. Therefore, the solution space of $Q' x = 0$ is $V^\perp$. Moreover, $H'$ satisfies $Q' H'^H = 0$, so $H'$ consists of a basis of $V^\perp$.
In other words, any isometric matrix with n rows is equivalent to the constructed $H$, which indicates that the proposed construction method can generate every isometric matrix with the minimal number of rows.

#### An Example of the Construction

Consider a radar target detection scenario: the radar emits a single-frequency signal and receives the echo to obtain the distance and RCS (Radar Cross-Section) information of the target. The observation model can be formulated as:
$x_k = A \exp\left( j 2 \pi f \left( k t_\Delta - \frac{2r}{c} \right) \right) + w_k, \quad k = 1, \cdots, N,$
where j is the imaginary unit, $t_\Delta$ is the sampling interval, f is the frequency of the emitted signal, c is the velocity of light, $w_k$ denotes WGN, r is the distance of the target, and A is the unknown amplitude, which contains the RCS information. The desired parameter is $\theta = (A, r)$.
First, the derivative is:
$\frac{\partial s(\theta)}{\partial \theta}^H = \begin{bmatrix} \exp\left( -j 2 \pi f \left( t_\Delta - \frac{2r}{c} \right) \right) & \cdots & \exp\left( -j 2 \pi f \left( N t_\Delta - \frac{2r}{c} \right) \right) \\ \frac{4 j A \pi f}{c} \exp\left( -j 2 \pi f \left( t_\Delta - \frac{2r}{c} \right) \right) & \cdots & \frac{4 j A \pi f}{c} \exp\left( -j 2 \pi f \left( N t_\Delta - \frac{2r}{c} \right) \right) \end{bmatrix}.$
Solving Equation (12), the orthogonal complement of the solution space is:
$V^\perp = \mathrm{span}\left\{ \left( \exp(j 2 \pi f t_\Delta), \cdots, \exp(j 2 \pi f N t_\Delta) \right) \right\}.$
Therefore, the isometric processing is:
$y = \sum_{k=1}^{N} x_k \exp(-j 2 \pi f k t_\Delta).$
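The example can be verified numerically (our own check, with hypothetical parameter values): both rows of the derivative are proportional to the single-frequency vector $(\exp(j 2\pi f k t_\Delta))_k$, so $V^\perp$ is one-dimensional and the N samples compress into a single correlation with that vector, with zero loss by Equation (8).

```python
import numpy as np

# Verify the radar example: V_perp has dimension 1, and correlating with the
# single-frequency vector loses no Fisher information (Equation (8)).
N, f, td = 16, 1.0e6, 1.0e-7       # hypothetical samples, frequency, interval
A, r, c = 2.0, 300.0, 3.0e8        # hypothetical amplitude, range; speed of light
k = np.arange(1, N + 1)
dA = np.exp(1j * 2 * np.pi * f * (k * td - 2 * r / c))   # d s_k / d A
dr = -4j * A * np.pi * f / c * np.exp(1j * 2 * np.pi * f * (k * td - 2 * r / c))  # d s_k / d r
ds = np.stack([dA, dr], axis=1)                          # N x 2 Jacobian
assert np.linalg.matrix_rank(ds, tol=1e-8) == 1          # V_perp is one-dimensional
H = np.exp(-1j * 2 * np.pi * f * k * td)[None, :]        # conjugate-transposed basis of V_perp
P = H.conj().T @ np.linalg.inv(H @ H.conj().T) @ H       # projector onto span(H^H)
loss = ds.conj().T @ (np.eye(N) - P) @ ds                # Equation-(8) loss
assert np.allclose(loss, 0)                              # the processing is isometric
```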

## 5. Conclusions

This letter focuses on the influence of signal processing on the geometric structure of the statistical manifold in estimation problems. Based on the intrinsic characteristics of estimation problems, the intrinsic parameter submanifold is defined. Then, the intrinsic parameter submanifold is proven to become a tighter one after signal processing. Moreover, we show that the geometric structure of the intrinsic parameter submanifold is invariant if and only if the processed signal is a sufficient statistic. In addition, a construction method for linear isometric signal processing is proposed. The linear processing produced by the proposed method is shown to have the minimal number of rows (when represented as a matrix), i.e., the processed signal has the minimal length, and the proposed method can generate every linear isometry with minimal rows.

## Author Contributions

H.W. put forward the original ideas and performed the research. Y.C. raised the research question, reviewed this paper, and provided improvement suggestions. H.W. reviewed the paper and provided useful comments. All authors have read and approved the final manuscript.

## Funding

This research was funded by the National Natural Science Foundation of China under grant No. 61871472.

## Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant No. 61871472. The authors are grateful for the valuable comments made by the reviewers, which have assisted us in having a better understanding of the underlying issues and therefore resulting in a significant improvement in the quality of the paper.

## Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

## References

1. Rao, C.R. Information and the Accuracy Attainable in the Estimation of Statistical Parameters. In Breakthroughs in Statistics: Foundations and Basic Theory; Kotz, S., Johnson, N.L., Eds.; Springer: New York, NY, USA, 1992; pp. 235–247. [Google Scholar]
2. Chentsov, N.N. Statistical Decision Rules and Optimal Inference; Number v. 53 in Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 1982. [Google Scholar]
3. Efron, B. Defining the Curvature of a Statistical Problem (with Applications to Second Order Efficiency). Ann. Stat. 1975, 3, 1189–1242. [Google Scholar] [CrossRef]
4. Efron, B. The Geometry of Exponential Families. Ann. Stat. 1978, 6, 362–376. [Google Scholar] [CrossRef]
5. Amari, S.I. Information Geometry and Its Applications, 1st ed.; Springer Publishing Company, Incorporated: Berlin, Germany, 2016. [Google Scholar]
6. Chern, S.S.; Chen, W.H.; Lam, K.S. Lectures on Differential Geometry. Ann. Inst. Henri Poincare-Phys. Theor. 2014, 40, 329–342. [Google Scholar]
7. Rong, Y.; Tang, M.; Zhou, J. Intrinsic Losses Based on Information Geometry and Their Applications. Entropy 2017, 19, 405. [Google Scholar] [CrossRef]
8. Cheng, Y.; Wang, X.; Caelli, T.; Moran, B. Tracking and Localizing Moving Targets in the Presence of Phase Measurement Ambiguities. IEEE Trans. Signal Process. 2011, 59, 3514–3525. [Google Scholar] [CrossRef]
9. Cheng, Y.; Wang, X.; Caelli, T.; Li, X.; Moran, B. Optimal Nonlinear Estimation for Localization of Wireless Sensor Networks. IEEE Trans. Signal Process. 2011, 59, 5674–5685. [Google Scholar] [CrossRef]
10. Cheng, Y.; Wang, X.; Moran, B. Optimal Nonlinear Estimation in Statistical Manifolds with Application to Sensor Network Localization. Entropy 2017, 19, 308. [Google Scholar] [CrossRef]
11. Smith, S.T. Covariance, subspace, and intrinsic Cramér-Rao bounds. IEEE Trans. Signal Process. 2005, 53, 1610–1630. [Google Scholar] [CrossRef]
12. Wang, L.; Wong, K.K.; Wang, H.; Qin, Y. MIMO radar adaptive waveform design for extended target recognition. Int. J. Distrib. Sens. Netw. 2016, 2015, 84. [Google Scholar] [CrossRef]
13. Manton, J.H. Optimization algorithms exploiting unitary constraints. IEEE Trans. Signal Process. 2002, 50, 635–650. [Google Scholar] [CrossRef]
14. Abrudan, T.E.; Eriksson, J.; Koivunen, V. Steepest Descent Algorithms for Optimization Under Unitary Matrix Constraint. IEEE Trans. Signal Process. 2008, 56, 1134–1147. [Google Scholar] [CrossRef]
15. Abrudan, T.; Eriksson, J.; Koivunen, V. Conjugate gradient algorithm for optimization under unitary matrix constraint. Signal Process. 2009, 89, 1704–1714. [Google Scholar] [CrossRef]
16. Barbaresco, F. Innovative Tools for Radar Signal Processing Based on Cartan’s Geometry of SPD Matrices and Information Geometry. In Proceedings of the Radar Conference, Rome, Italy, 26–30 May 2008; pp. 1–6. [Google Scholar]
17. Barbaresco, F. Robust statistical Radar Processing in Fréchet metric space: OS-HDR-CFAR and OS-STAP Processing in Siegel homogeneous bounded domains. In Proceedings of the International Radar Symposium, Leipzig, Germany, 7–9 September 2011; pp. 639–644. [Google Scholar]
18. Wu, H.; Cheng, Y.; Hua, X.; Wang, H. Vector Bundle Model of Complex Electromagnetic Space and Change Detection. Entropy 2018, 21, 10. [Google Scholar] [CrossRef]
19. Hua, X.; Cheng, Y.; Wang, H.; Qin, Y.; Li, Y.; Zhang, W. Matrix CFAR detectors based on symmetrized Kullback-Leibler and total Kullback-Leibler divergences. Digit. Signal Process. 2017, 69, 106–116. [Google Scholar] [CrossRef]
20. Hua, X.; Fan, H.; Cheng, Y.; Wang, H.; Qin, Y. Information Geometry for Radar Target Detection with Total Jensen-Bregman Divergence. Entropy 2018, 20, 256. [Google Scholar] [CrossRef]
21. Dong, G.; Kuang, G.; Wang, N.; Wang, W. Classification via Sparse Representation of Steerable Wavelet Frames on Grassmann Manifold: Application to Target Recognition in SAR Image. IEEE Trans. Image Process. 2017, 26, 2892–2904. [Google Scholar] [CrossRef] [PubMed]
22. Zegers, P. Fisher Information Properties. Entropy 2015, 17, 4918–4939. [Google Scholar] [CrossRef]
23. Zamir, R. A proof of the Fisher information inequality via a data processing argument. IEEE Trans. Inf. Theory 1998, 44, 1246–1250. [Google Scholar] [CrossRef]
24. Blackwell, D. Conditional Expectation and Unbiased Sequential Estimation. Ann. Math. Stat. 1947, 18, 105–110. [Google Scholar] [CrossRef]
25. Lehmann, E.L.; Scheffé, H. Completeness, Similar Regions, and Unbiased Estimation—Part I. In Selected Works of E. L. Lehmann; Rojo, J., Ed.; Springer US: Boston, MA, USA, 2012; pp. 233–268. [Google Scholar]
26. Lehmann, E.L.; Scheffé, H. Completeness, Similar Regions, and Unbiased Estimation—Part II. In Selected Works of E. L. Lehmann; Rojo, J., Ed.; Springer US: Boston, MA, USA, 2012; pp. 269–286. [Google Scholar]
27. Kay, S.M. Fundamentals of statistical signal processing: Estimation theory. Control Eng. Pract. 1994, 37, 465–466. [Google Scholar]
Figure 1. The intrinsic parameter submanifold.
Figure 2. The signal processing on the intrinsic parameter submanifold.
