Kählerian information geometry for signal processing

We prove the correspondence between the information geometry of a signal filter and a Kähler manifold. The information geometry of a minimum-phase linear system with a finite complex cepstrum norm is a Kähler manifold. The square of the complex cepstrum norm of the signal filter corresponds to the Kähler potential. The Hermitian structure of the Kähler manifold is explicitly emergent if and only if the impulse response function of the highest degree in $z$ is constant in the model parameters. Kählerian information geometry allows more efficient calculation of the metric tensor and the Ricci tensor. Moreover, the $\alpha$-generalization of the geometric tensors is linear in $\alpha$. Finding Bayesian predictive priors, such as superharmonic priors, is also more straightforward, because the Laplace–Beltrami operators on Kähler manifolds take much simpler forms than those on non-Kähler manifolds. Several time series models are studied in the framework of Kählerian information geometry.


Introduction
Since the introduction of Riemannian geometry to statistics [1,2], information geometry has been developed along various directions. The statistical curvature as the differential-geometric analogue of information loss and sufficiency was proposed by Efron [3]. The α-duality of information geometry was found by Amari [4]. Not being limited to statistical inference, information geometry has become popular in many different fields, such as information-theoretic generalization of the expectation-maximization algorithm [5], hidden Markov models [6], interest rate modeling [7], phase transition [8,9] and string theory [10]. More applications can be found in the literature [11] and the references therein.
In particular, time series analysis and signal processing are well-known applications of information geometry. Ravishanker et al. [12] found the information geometry of autoregressive moving average (ARMA) models in the coordinate system of poles and zeros. It was also extended to fractionally-integrated ARMA (ARFIMA) models [13]. The information geometry of autoregressive (AR) models in the reflection coefficient coordinates was also reported by Barbaresco [14]. In the information-theoretic framework, Bayesian predictive priors outperforming the Jeffreys prior were derived for the AR models by Komaki [15].
Kähler manifolds are an intriguing topic in differential geometry. On a Kähler manifold, the metric tensor and the Levi-Civita connection are straightforwardly calculated from the Kähler potential, and the Ricci tensor is obtained from the determinant of the metric tensor. Moreover, its holonomy group is related to the unitary group. Because of these properties, Kähler manifolds have many implications in mathematics and theoretical physics. Information geometry is another field where Kähler manifolds arise naturally. After the symplectic structure in information geometry and its connection to statistics were discovered [16], Barbaresco [14] notably introduced Kähler manifolds to information geometry for time series models and also generalized the differential-geometric approach with mathematical structures such as Koszul geometry [17,18]. Additionally, Zhang and Li [19] found symplectic and Kähler structures in divergence functions.
In this paper, we prove that the information geometry of a signal filter with a finite complex cepstrum norm is a Kähler manifold. The Kähler potential of the geometry is the square of the Hardy norm of the logarithmic transfer function of a linear system. The Hermitian structure of the manifold is explicitly seen in the metric tensor under certain conditions on the transfer functions of linear models and filters. The calculation of geometric objects and the search for Bayesian predictive priors are simplified by exploiting the properties of Kähler geometry. Additionally, the α-correction terms on the geometric objects exhibit α-linearity. This paper is structured as follows. In the next section, we briefly review information geometry for signal processing and derive basic lemmas in terms of the spectral density function and the transfer function. In Section 3, the main theorems for Kählerian information manifolds are proven and the consequences of the theorems are provided. The implications of Kähler geometry for time series models are reported in Section 4. We conclude the paper in Section 5.

Spectral Density Representation in the Frequency Domain
We model an output signal y(w) as a linear system with a transfer function h(w; ξ) of model parameters ξ = (ξ^1, ξ^2, · · · , ξ^n):

y(w) = h(w; ξ) x(w),

where x(w) is an input signal in the frequency domain w. Complex inputs, outputs and model parameters are considered in this paper. The properties of a given signal filter are characterized by the transfer function h(w; ξ) and the model parameters ξ.
In signal processing, one of the most important quantities is the spectral density function. The spectral density function S(w; ξ) is defined as the absolute square of the transfer function:

S(w; ξ) = |h(w; ξ)|^2. (1)

The spectral density function describes how energy in the frequency domain is distributed by a given signal filter. In terms of signal amplitude, the spectral density function encodes the amplitude response to a monochromatic input e^{iw}. For example, the spectral density function of the all-pass filter is constant in the frequency domain, because the filter passes all inputs to the output, up to a phase difference, regardless of frequency. A high-pass filter only admits signals in the high-frequency domain, while a low-pass filter only permits low-frequency inputs. The properties of other well-known filters are likewise described by their specific spectral density functions.

The spectral density function is also important in information geometry, because the information-geometric objects of the signal processing geometry are derived from the spectral density function [20,21]. Among the geometric objects, the length and distance concepts are the most fundamental. One of the most important distance measures in information geometry is the α-divergence, also known as Chernoff's α-divergence, which is the only divergence that is both an f-divergence and a Bregman divergence [22]. The α-divergence between two spectral density functions S_1 and S_2 is defined, for non-zero α, as

D^{(α)}(S_1 || S_2) = (1/2π) ∫_{-π}^{π} (1/α^2) ( (S_2/S_1)^α − 1 + α log(S_1/S_2) ) dw,

with the α → 0 limit D^{(0)}(S_1 || S_2) = (1/4π) ∫_{-π}^{π} ( log(S_1/S_2) )^2 dw, and the divergence conventionally measures the distance from S_1 to S_2. The α-divergence, except for α = 0, is a pseudo-distance, because it is not symmetric under the exchange of S_1 and S_2. In spite of the asymmetry, the α-divergence is frequently used for measuring differences between two linear models or two filters. Some α-divergences are more popular than others, because those divergences are already well known in information theory and statistics.
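As a quick numerical sanity check (a sketch; the α-divergence is written here in the convention reconstructed above, so constants may differ from other references, and the AR(1) filters are hypothetical examples), the following snippet evaluates the α-divergence between two spectral densities and confirms that the small-α values approach the 0-divergence and that the divergence of a density from itself vanishes:

```python
import numpy as np

# Uniform grid on [-pi, pi]: the mean of an integrand over this grid
# approximates (1/2pi) times the integral over the frequency domain.
w = np.linspace(-np.pi, np.pi, 20001)

def spectral_density(h, w):
    # S(w) = |h(e^{iw})|^2
    return np.abs(h(np.exp(1j * w)))**2

def alpha_divergence(Sa, Sb, alpha):
    ratio = Sa / Sb
    if alpha == 0:
        return np.mean(0.5 * np.log(ratio)**2)      # 0-divergence
    return np.mean(((Sb / Sa)**alpha - 1 + alpha * np.log(ratio)) / alpha**2)

h1 = lambda z: 1.0 / (1 - 0.5 / z)   # AR(1) filter, pole inside the unit circle
h2 = lambda z: 1.0 / (1 - 0.2 / z)
S1, S2 = spectral_density(h1, w), spectral_density(h2, w)

assert alpha_divergence(S1, S1, 0.5) < 1e-12         # D(S||S) = 0
d0 = alpha_divergence(S1, S2, 0.0)
d_eps = alpha_divergence(S1, S2, 1e-3)
assert abs(d0 - d_eps) < 1e-3                        # alpha -> 0 limit
```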
For example, the (−1)-divergence is the Kullback–Leibler divergence. The 0-divergence is well known as the square of the Hellinger distance in statistics. The Hellinger distance is locally asymptotically equivalent to the information distance and globally tightly bounded by the information distance [23]. The metric tensor of a statistical manifold, also known as the Fisher information matrix, is derived from the α-divergence. In order to define the information geometry of a linear system, the conditions on a signal filter are found in Amari and Nagaoka [21]: stability, minimum phase and

(1/2π) ∫_{-π}^{π} | log S(w; ξ) |^2 dw < ∞,

which imposes that the unweighted power cepstrum norm [24,25] is finite. According to the literature [20,21], the metric tensor of the linear system geometry is given by

g_{μν}(ξ) = (1/2π) ∫_{-π}^{π} ∂_μ log S(w; ξ) ∂_ν log S(w; ξ) dw, (2)

where the partial derivatives are taken with respect to the model parameters ξ, i.e., ∂_μ = ∂/∂ξ^μ. Since the dimension of the manifold is n, the metric tensor is an n × n matrix.
Other information-geometric objects are also determined by the spectral density function. The α-connection, which encodes the change of a vector being parallel-transported along a curve, is expressed with

Γ^{(α)}_{μν,ρ}(ξ) = (1/2π) ∫_{-π}^{π} ( ∂_μ ∂_ν log S − (α/2) ∂_μ log S ∂_ν log S ) ∂_ρ log S dw, (3)

where α is a real number. Notice that the α-connection is not a tensor. The α-connection is related to the Levi-Civita connection, Γ_{μν,ρ}(ξ), also known as the metric connection. The relation is given by the following equations:

Γ^{(α)}_{μν,ρ} = Γ_{μν,ρ} − (α/2) T_{μν,ρ},
T_{μν,ρ} = (1/2π) ∫_{-π}^{π} ∂_μ log S ∂_ν log S ∂_ρ log S dw,

where the tensor T is symmetric under the exchange of the indices. The Levi-Civita connection corresponds to the α = 0 case. These information-geometric objects have interesting properties under the reciprocality of spectral density functions. The spectral density function of an inverse system is the reciprocal of the spectral density function of the original system. The geometric properties of the inverse system are described by the α-dual description. The following lemma shows the correspondence between the reciprocality of the spectral density function and the α-duality.

Lemma 1. The information geometry of an inverse system is the α-dual geometry to the information geometry of the original system.
Proof. The metric tensor is invariant under the reciprocality of spectral density functions, i.e., plugging S^{−1} into Equation (2) provides the identical metric tensor, because ∂_μ log S^{−1} ∂_ν log S^{−1} = ∂_μ log S ∂_ν log S.
Meanwhile, the α-connection is not invariant under the reciprocality and exhibits a more interesting property. The α-connection from the reciprocal spectral density function is given by

Γ^{(α)}_{μν,ρ}(ξ)|_{S^{−1}} = (1/2π) ∫_{-π}^{π} ( ∂_μ ∂_ν log S^{−1} − (α/2) ∂_μ log S^{−1} ∂_ν log S^{−1} ) ∂_ρ log S^{−1} dw = Γ^{(−α)}_{μν,ρ}(ξ)|_{S},

and the above equation shows that the α-connection induced by the reciprocal spectral density function corresponds to the (−α)-connection of the original geometry. Similar to the α-connection, the α-divergence is equipped with the same property. The α-divergence between two reciprocal spectral density functions is straightforwardly found from the definition of the α-divergence, and it is represented by the (−α)-divergence between the two spectral density functions:

D^{(α)}(S_1^{−1} || S_2^{−1}) = D^{(−α)}(S_1 || S_2).

Using the inverse systems, we can construct the α-dual description of signal processing models in information geometry. The multiplicative inverse of a spectral density function corresponds to the α-duality of the geometry.
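The reciprocality–duality relation of Lemma 1 can be checked numerically; the sketch below assumes the α-divergence convention used above (the identity itself holds in either convention) and hypothetical AR(1)/MA(1) spectral densities:

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 4001)
S1 = 1.0 / np.abs(1 - 0.5 * np.exp(-1j * w))**2   # AR(1) spectral density
S2 = np.abs(1 - 0.3 * np.exp(-1j * w))**2         # MA(1) spectral density

def alpha_div(Sa, Sb, alpha):
    # alpha-divergence for non-zero alpha; the grid mean
    # approximates the (1/2pi) integral over frequency
    r = Sa / Sb
    return np.mean((r**(-alpha) - 1 + alpha * np.log(r)) / alpha**2)

# D^{(alpha)}(S1^{-1} || S2^{-1}) = D^{(-alpha)}(S1 || S2)
for a in (0.3, -0.7, 1.0):
    assert abs(alpha_div(1/S1, 1/S2, a) - alpha_div(S1, S2, -a)) < 1e-9
```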
Lemma 1 indicates that given a linear system geometry, there is no way to discern whether the metric tensor is derived from the filters with S or S^{−1}. Additionally, the model S^{−1} is (−α)-flat if and only if S is α-flat. The 0-connection is self-dual under the reciprocality. A consequence of Lemma 1 is the following multiplication rule:

D^{(α)}(S_1 || S_2) = D^{(α)}(S_0 || S_2 S_1^{−1}),

where S_0 is the unit spectral density function of the all-pass filter. Plugging S_1 = S_0 and S_2 = S, we have

D^{(α)}(S_0 || S) = D^{(−α)}(S_0 || S^{−1}).

Lemma 2. The bilateral transfer functions with log |h(e^{iw}; ξ)|^2 ∈ L^2(T) are isomorphically embedded in the space R ⊕ zH^2(D).
This has the interpretation that the information manifold of log |h(e^{iw}; ξ)|^2 ∈ L^2 is isometric to the Hardy–Hilbert space.
Proof. log h(e^{iw}; ξ) is represented by the Fourier series

log |h(e^{iw}; ξ)|^2 = Σ_{r=−∞}^{∞} a_r e^{irw},

and since log |h(e^{iw}; ξ)|^2 is real, we have a_{−r} = ā_r; in particular, a_0 is real. We define the conjugate series by the coefficients ã_r = −i sgn(r) a_r, so that a_r + i ã_r = 0 for r < 0 and a_r + i ã_r = 2a_r for r > 0; so that the conjugate function ã(e^{iθ}) is real-valued, we choose ã_0 = 0. This implies that if {a_r} ∈ l^p for 1 ≤ p ≤ ∞, then {ã_r} ∈ l^p. The analytic function f(z) = exp( a_0 + a(z) + i ã(z) ) then satisfies log |h(e^{iw}; ξ)|^2 = log |f(e^{iw}; ξ)|^2, so that u = h/f has log u(e^{iw}; ξ) ∈ L^2 pure imaginary, that is, |u(e^{iw}; ξ)| = 1.
This has the interpretation that h has a well-defined outer factor, and the information geometry of h depends only on this outer factor. In the case that the power series coefficients a_k(ξ) are continuous, smooth, analytic, etc. in the model parameters, the embedding is correspondingly regular.

Transfer Function Representation in the z Domain
By using transfer functions, it is also possible to reproduce all of the previous results obtained with the spectral density function. With Fourier transformation and Z-transformation, z = e^{iw}, a transfer function h(z; ξ) is expressed with a series expansion in z,

h(z; ξ) = Σ_{r=−∞}^{∞} h_r(ξ) z^{−r},

where h_r(ξ) is an impulse response function. This is a bilateral (or two-sided) transfer function expression, which has both positive and negative degrees in z, including the zero-th degree. In the causal response case that h_r(ξ) = 0 for all negative r, the transfer function is unilateral. In many applications, the main concern is the causality of linear filters, which is represented by unilateral transfer functions. In this paper, we start with bilateral transfer functions as a generalization and then focus on causal filters.
In the complex z-domain, all formulae for the information-geometric objects are identical to the expressions in the frequency domain, except for the change of the integral measure:

(1/2π) ∫_{-π}^{π} G(w; ξ) dw = (1/2πi) ∮_{|z|=1} G(z; ξ) dz/z

for an arbitrary integrand G. Since the integration is evaluated as a line integral along the unit circle on the complex plane, it is easy to calculate with the aid of the residue theorem. According to the residue theorem, only the poles inside the unit circle contribute to the value of the integration. If G(z; ξ) is analytic on the unit disk, the constant term in z of G(z; ξ) is the value of the integration. For more details, see Cima et al. [26] and the references therein.

One advantage of using Z-transformation is that a transfer function can be understood in the framework of functional analysis. A transfer function defined on the complex plane is expanded by the orthonormal basis z^{−r} for integers r, with impulse response functions as the coefficients. In functional analysis, it is possible to define the inner product between two complex functions F and G in the Hilbert space:

⟨F, G⟩ = (1/2πi) ∮_{|z|=1} F(z) conj(G(z)) dz/z.

By using this inner product, the condition for stationarity, Σ_{r=0}^{∞} |h_r|^2 < ∞, is written as the Hardy norm (H^2-norm) in complex functional analysis,

||h||^2_{H^2} = ⟨h, h⟩ = Σ_{r=0}^{∞} |h_r|^2 < ∞.

Since the functional space with a finite Hardy norm is called the Hardy–Hilbert space H^2, the unilateral transfer functions satisfying the stationarity condition live in the H^2-space. A transfer function of a stationary system is a function in the L^2-space if the transfer function is in the bilateral form.
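The contour-average rule and the Hardy norm identity can be illustrated numerically (a sketch with a hypothetical AR(1) filter; the discrete mean over a uniform grid on the unit circle stands in for the contour integral):

```python
import numpy as np

w = np.linspace(0, 2*np.pi, 4096, endpoint=False)
z = np.exp(1j * w)

# Residue theorem: for G analytic on the unit disk, the contour average
# (1/2pi i) \oint G(z) dz/z equals the constant term of G.
G = 3 + 2*z + 5*z**2
assert abs(np.mean(G) - 3) < 1e-10

# Hardy norm: ||h||^2_{H^2} = sum_r |h_r|^2 for h(z) = sum_r h_r z^{-r}
lam = 0.5
h = 1.0 / (1 - lam / z)               # AR(1): impulse response h_r = lam**r
norm_sq_integral = np.mean(np.abs(h)**2)
norm_sq_series = 1.0 / (1 - lam**2)   # geometric series sum of lam**(2r)
assert abs(norm_sq_integral - norm_sq_series) < 1e-8
```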
The conditions on the transfer function of a signal filter are also necessary for defining the information geometry of a linear system in terms of the transfer function. Similar to the spectral density representation, the conditions on the linear filters are stability and minimum phase. In addition to these conditions, we also need the following condition on the H^2-norm of the logarithmic transfer function,

|| log h(z; ξ) ||^2_{H^2} = (1/2π) ∫_{-π}^{π} | log h(e^{iw}; ξ) |^2 dw < ∞,

i.e., the unweighted complex cepstrum norm [25,27] is finite. From now on, the signal filters in this paper are the linear systems satisfying the above norm conditions. This is a necessary condition for a finite power cepstrum norm.

It is natural to complexify the coordinate system, as is done in complex differential geometry. In holomorphic and anti-holomorphic coordinates, the metric tensor of a linear system geometry is represented by

g_{μν}(ξ) = (1/2πi) ∮_{|z|=1} ∂_μ log S(z; ξ) ∂_ν log S(z; ξ) dz/z,

where both μ and ν run over all holomorphic and anti-holomorphic coordinates, i.e., μ, ν = 1, 2, · · · , n, 1̄, 2̄, · · · , n̄. The components of the metric tensor are categorized into two classes: one with pure indices, from holomorphic coordinates only or anti-holomorphic coordinates only, and another with mixed indices. Since S = h h̄ and the holomorphic (anti-holomorphic) derivatives act only on h (h̄), the metric tensor components in these categories are given by

g_{ij} = (1/2πi) ∮_{|z|=1} ∂_i log h ∂_j log h dz/z, (8)
g_{ij̄} = (1/2πi) ∮_{|z|=1} ∂_i log h ∂_j̄ log h̄ dz/z, (9)

where g_{īj̄} = (g_{ij})^* and g_{īj} = (g_{ij̄})^*, and the indices i and j run from one to n. It is also possible to express the α-connection and the α-divergence in terms of the transfer function by using Equation (1), the relation between the transfer function and the spectral density function.
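For a concrete (hypothetical) example, an AR(1) filter h(z) = 1/(1 − λz^{-1}) has complex cepstrum coefficients λ^r/r, so its squared complex cepstrum norm is the dilogarithm-type series Σ_r λ^{2r}/r^2; the sketch below checks this against the frequency-domain integral:

```python
import numpy as np

lam = 0.6
w = np.linspace(0, 2*np.pi, 8192, endpoint=False)
z = np.exp(1j * w)

# log h(z) = -log(1 - lam z^{-1}) = sum_{r>=1} (lam**r / r) z^{-r}
log_h = -np.log(1 - lam / z)

# squared complex cepstrum norm via the frequency-domain mean
norm_sq = np.mean(np.abs(log_h)**2)

# the same quantity from the cepstrum series sum_r lam**(2r) / r**2
r = np.arange(1, 2000)
series = np.sum(lam**(2*r) / r**2)
assert abs(norm_sq - series) < 1e-10
```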
It is noteworthy that the information geometry of a linear system is invariant under a multiplicative factor of z in the transfer function, because the metric tensor is not changed by the factorization. The invariance also holds for the geometry induced by the spectral density function.

Lemma 3. The information geometry of a signal filter is invariant under the factorization of a power of z from the transfer function.

Proof. Any transfer function can be factored in the form

h(z; ξ) = z^R h̃(z; ξ),

where R is an integer and h̃ is the factored-out transfer function. In the spectral density function representation, the contribution of the factorization is |z|^{2R}, and it is unity in the line integration. It follows that the metric tensor, the α-connection and the α-divergence are independent of the factorization.
When the transfer function representation is considered, the same conclusion is obtained. Since the contribution from the factorization part, R log z, is annihilated by the partial derivatives in the metric tensor and the α-connection expressions, the geometry is invariant under the factorization. It is also easy to show that the α-divergence is not changed by the factorization. Another explanation is that the terms ∂_i h/h in the metric tensor and the α-connection are invariant under z^R-scaling.
Based on Lemma 3, it is possible to obtain a unilateral transfer function from a transfer function with a finite upper bound on the degree of z. In particular, this factorization invariance of the geometry is useful in the case that the transfer function has a finite number of terms in the non-causal direction of the bilateral transfer function. If the highest degree in z of the transfer function is finite, the transfer function is factored as

h(z; ξ) = z^R h̃(z; ξ),

where R is the maximum degree in z of the transfer function and h̃ is a unilateral transfer function.
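Since |z^R| = 1 on the unit circle, the spectral density, and hence the metric, is untouched by the factorization. A finite-difference sketch makes this concrete (a hypothetical real-parameter AR(1) filter, whose metric in the normalization of Equation (2) works out to 2/(1 − λ^2)):

```python
import numpy as np

w = np.linspace(0, 2*np.pi, 8192, endpoint=False)
z = np.exp(1j * w)

def metric(h_of, lam, eps=1e-6):
    # g = (1/2pi) int (d_lam log S)^2 dw, via central differences
    dlogS = (np.log(np.abs(h_of(z, lam + eps))**2)
             - np.log(np.abs(h_of(z, lam - eps))**2)) / (2 * eps)
    return np.mean(dlogS**2)

h  = lambda z, lam: 1.0 / (1 - lam / z)       # causal AR(1)
hz = lambda z, lam: z**3 / (1 - lam / z)      # the same filter times z^3

g1, g2 = metric(h, 0.4), metric(hz, 0.4)
assert abs(g1 - g2) < 1e-6                    # factorization invariance
assert abs(g1 - 2 / (1 - 0.4**2)) < 1e-4      # closed form 2/(1 - lam^2)
```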
A bilateral transfer function can be expressed as the multiplication of a unilateral transfer function f(z; ξ) and an analytic function a(z; ξ) on the disk:

h(z; ξ) = f(z; ξ) a(z; ξ) = ( Σ_{r=0}^{∞} f_r z^{−r} ) ( Σ_{r=0}^{∞} a_r z^{r} ),

where f_r and a_r are functions of ξ. For a causal filter, all of the a_i's are zero, except for a_0. This decomposition also includes the case of Lemma 3 by setting a_i = 0 for i ≠ R and a_R = 1. However, it is natural to take f_0 and a_0 as non-zero functions of ξ. This is because powers of z can be factored out of the non-zero coefficient terms with the maximum degree in f(z; ξ) and the minimum degree in a(z; ξ), and the transfer function is then reducible to h(z; ξ) = z^R h̃(z; ξ), where h̃(z; ξ) has non-zero f̃_0 and ã_0, and R is an integer given by the sum of the degrees in z of the first non-zero coefficient terms from f(z; ξ) and a(z; ξ), respectively. By Lemma 3, the information geometry of the linear system with the transfer function h(z; ξ) is the same as the geometry induced by the factored-out transfer function h̃(z; ξ).
The relation between f(z; ξ), a(z; ξ) and h(z; ξ) is described by the following Toeplitz system:

h_r = Σ_{s=0}^{∞} f_{r+s} a_s.

For a given h(z; ξ), f_r is determined by the coefficients of a(z; ξ), i.e., if we choose a(z; ξ), then f(z; ξ) is conformable to the choice under the above Toeplitz system. The following lemma, a generalization of Lemma 3, is noteworthy for further discussions.

Lemma 4. The information geometry of a signal filter is invariant under the choice of the analytic part a(z; ξ) in the decomposition.

Proof. It is obvious that the information geometry of a linear system is decided only by the transfer function h(z; ξ). Whichever a(z; ξ) is chosen, the transfer function is the same, because f(z; ξ) is conformable to the Toeplitz system.
For further generalization, the transfer function is extended to include the Blaschke product b(z), which corresponds to the all-pass filter in signal processing. The transfer function can be decomposed into the following form:

h(z; ξ) = f(z; ξ) a(z; ξ) Π_s b(z, z_s), with b(z, z_s) = (z − z_s)/(1 − z̄_s z),

and every z_s is in the unit disk. Although the Blaschke product can be written in z^{−1} instead of z, our conclusion is not changed, and we choose z as our convention. When z_s = 0, the Blaschke product is given by b(z, z_s) = z. Regardless of z_s, the Blaschke product is analytic on the unit disk. Since the Taylor expansion of the Blaschke product provides positive-order terms in z, it is also possible to incorporate the Blaschke product into a(z; ξ). However, the Blaschke product is considered separately in this paper.
The logarithmic transfer function of a linear system is represented in terms of f, a and b:

log h(z; ξ) = φ_0 + Σ_{r=1}^{∞} φ_r z^{−r} + Σ_{r=1}^{∞} (α_r + β_r) z^{r},

where φ_0 = log (f_0 a_0), and φ_r, α_r are the r-th coefficients of the logarithmic expansions of f and a, respectively. φ_r and α_r are functions of ξ unless all of the f_r/f_0 and a_r/a_0 are constant. Meanwhile, β_r = (1/r) Σ_s ( |z_s|^{2r} − 1 ) z_s^{−r} is constant in ξ. It is also straightforward to show that the information geometry is independent of the Blaschke product.
Lemma 5. The information geometry of a signal filter is independent of the Blaschke product.
Proof. It is obvious that the Blaschke product is independent of the coordinate system ξ. Plugging the above series into the expressions for the metric tensor in complex coordinates, Equations (8) and (9), the metric tensor components are expressed in terms of φ_r and α_r:

g_{ij} = ∂_i φ_0 ∂_j φ_0 + Σ_{r=1}^{∞} ( ∂_i φ_r ∂_j α_r + ∂_i α_r ∂_j φ_r ), (10)
g_{ij̄} = ∂_i φ_0 ∂_j̄ φ̄_0 + Σ_{r=1}^{∞} ( ∂_i φ_r ∂_j̄ φ̄_r + ∂_i α_r ∂_j̄ ᾱ_r ), (11)

and it is noteworthy that the metric tensor components are independent of the β_r terms, which are related to the Blaschke product, because those are not functions of ξ. This is why the z-convention for the Blaschke product is not important. It is straightforward to repeat the same calculation for the α-connection. Based on these, the information geometry of a linear system is independent of the Blaschke product.
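A small numerical sketch, using the Blaschke factor convention b(z, z_s) = (z − z_s)/(1 − z̄_s z) (an assumption consistent with b(z, 0) = z), confirms both the all-pass property |b| = 1 on the unit circle and the invariance of the metric under multiplication by b:

```python
import numpy as np

w = np.linspace(0, 2*np.pi, 4096, endpoint=False)
z = np.exp(1j * w)

zs = 0.3 - 0.4j                          # a zero inside the unit disk
b = (z - zs) / (1 - np.conj(zs) * z)     # Blaschke factor (all-pass)
assert np.allclose(np.abs(b), 1.0)       # unit modulus on |z| = 1

def metric(h_of, lam, eps=1e-6):
    # g = (1/2pi) int (d_lam log S)^2 dw, via central differences
    dlogS = (np.log(np.abs(h_of(lam + eps))**2)
             - np.log(np.abs(h_of(lam - eps))**2)) / (2 * eps)
    return np.mean(dlogS**2)

g_plain    = metric(lambda lam: 1 / (1 - lam / z), 0.4)
g_blaschke = metric(lambda lam: b / (1 - lam / z), 0.4)
assert abs(g_plain - g_blaschke) < 1e-6  # geometry independent of b
```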
According to Lemma 4, the geometry is invariant under the degree of freedom in choosing a(z; ξ). By using this invariance, it is possible to fix the degree of freedom such that every a_r/a_0 is constant. With this choice, the metric tensor components of the information manifold are given by

g_{ij} = ∂_i φ_0 ∂_j φ_0,
g_{ij̄} = ∂_i φ_0 ∂_j̄ φ̄_0 + Σ_{r=1}^{∞} ∂_i φ_r ∂_j̄ φ̄_r,

and it is easy to verify that the metric tensor components depend only on φ_r and φ̄_r. In other words, the metric tensor depends only on the unilateral part of the transfer function and on the constant term in z of the analytic part. By Lemma 3, any transfer function with an upper-bounded degree in z is reducible to a unilateral transfer function with a constant term. For this class of transfer functions, a similar expression for the metric tensor can be obtained. First of all, the logarithmic transfer function is given by the series expansion

log h(z; ξ) = R log z + Σ_{r=0}^{∞} η_r z^{−r},

where R is the highest degree in z. The coefficients η_r are also known as the complex cepstrum [27], and η_0 = log h_{−R}. After this series expansion of the logarithmic transfer function is plugged into the formulae for the metric tensor components, Equations (8) and (9), the metric tensor components are obtained as

g_{ij} = ∂_i η_0 ∂_j η_0,
g_{ij̄} = ∂_i η_0 ∂_j̄ η̄_0 + Σ_{r=1}^{∞} ∂_i η_r ∂_j̄ η̄_r,

and these expressions for the metric tensor components are similar to Equations (10) and (11) with the exchange of φ_r ↔ η_r.
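As an illustration of these component formulas (a hypothetical complexified AR(1) filter h(z) = 1/(1 − ξz^{-1}), for which φ_r = ξ^r/r), the series Σ_r ∂_i φ_r ∂_j̄ φ̄_r sums to 1/(1 − |ξ|^2), which the sketch below checks against the integral definition of g_{11̄}:

```python
import numpy as np

w = np.linspace(0, 2*np.pi, 4096, endpoint=False)
z = np.exp(1j * w)
xi = 0.3 + 0.5j                      # |xi| < 1: minimum-phase AR(1)

# g_{1 1bar} = (1/2pi) int d_xi log h * conj(d_xi log h) dw,
# with h(z) = 1/(1 - xi/z), so d_xi log h = 1/(z - xi)
d_log_h = 1.0 / (z - xi)
g_integral = np.mean(d_log_h * np.conj(d_log_h)).real

# series form: sum_r |d_xi phi_r|^2 = sum_r |xi|^(2(r-1)), phi_r = xi**r / r
g_series = 1.0 / (1 - abs(xi)**2)
assert abs(g_integral - g_series) < 1e-8
```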
As an extension of Lemma 5, it is possible to generalize the result to the inner-outer factorization of H^2-functions. A function in the H^2-space can be expressed as the product of outer and inner functions by the Beurling factorization [28]:

h(z; ξ) = O(z; ξ) I(z; ξ),

where O(z; ξ) is an outer function and I(z; ξ) is an inner function. The generalization with the Beurling factorization is given by the following lemma.

Lemma 6. The information geometry of a signal filter is independent of the inner function of the transfer function.

Proof. The α-divergence is expressed with S(z; ξ) = |h(z; ξ)|^2 = |O(z; ξ)I(z; ξ)|^2 = |O(z; ξ)|^2 on the unit circle, because the inner function has unit modulus on the unit circle. Since the α-divergence is represented only with the outer function, other geometric objects, such as the metric tensor and the α-connection, are also independent of the inner function.

Kähler Manifold for Signal Processing
An advantage of the transfer function representation in the complex z-domain is that it is easy to test whether or not the information geometry of a given signal processing filter is a Kähler manifold. As mentioned before, choosing the coefficients in a(z; ξ) is considered as fixing degrees of freedom in the calculation without changing any geometry. By setting a(z; ξ)/a_0(ξ) to a constant function in ξ, the description of a statistical model becomes much simpler, and the emergence of Kähler manifolds can be easily verified. Since causal filters are our main concern in practice, we concentrate on unilateral transfer functions. Although we work with causal filters, the results in this section are also valid for bilateral transfer functions.
Theorem 1. For a signal filter with a finite complex cepstrum norm, the information geometry of the signal filter is a Kähler manifold.
Proof. The information manifold of a signal filter is described by the metric tensor g with the components given in Equations (10) and (11). Any complex manifold admits a Hermitian metric tensor ĝ [29]:

ĝ_p(X, Y) = (1/2) ( g_p(X, Y) + g_p(J_p X, J_p Y) ),

where X, Y are tangent vectors at a point p on the manifold and J is the almost complex structure, such that

J_p (∂/∂ξ^i) = i ∂/∂ξ^i, J_p (∂/∂ξ̄^i) = −i ∂/∂ξ̄^i,

so that J_p^2 = −1. With the new metric tensor ĝ, it is straightforward to verify that the information manifold is equipped with the Hermitian structure:

ĝ_p(J_p X, J_p Y) = ĝ_p(X, Y).

Based on the above metric tensor expressions, it is obvious that the information geometry of a linear system is a Hermitian manifold.
The Kähler two-form Ω of the manifold is given by

Ω = i ĝ_{ij̄} dξ^i ∧ dξ̄^j,

where ∧ is the wedge product. By plugging Equation (11) into Ω, it is easy to check that the Kähler two-form is closed, since ∂_k ĝ_{ij̄} = ∂_i ĝ_{kj̄} and ∂_k̄ ĝ_{ij̄} = ∂_j̄ ĝ_{ik̄}. Since Kähler manifolds are defined as Hermitian manifolds with closed Kähler two-forms, the information geometry of a signal filter is a Kähler manifold.
An information manifold for a linear system with purely real parameters is a submanifold of a Kählerian information manifold on which the metric tensor has the isometry of exchanging holomorphic and anti-holomorphic coordinates. In addition, a given linear system can be described by two manifolds: one Kähler and another non-Kähler. Although the dimension is doubled, working with Kähler manifolds has many advantages, which will be reiterated later.
In Theorem 1, the Hermitian condition is clearly seen after introducing the new metric tensor ĝ. It is also possible to find a condition under which the metric tensor g itself exhibits the explicit Hermitian structure. For the explicit Hermitian condition, the following theorem is worth mentioning.

Theorem 2. In the Kählerian information geometry of a signal filter, the Hermitian structure is explicit in the metric tensor if and only if φ_0 (or f_0 a_0) is a constant in ξ. Similarly, for a transfer function whose highest degree in z is finite, the Hermitian condition is directly found if and only if the coefficient of the highest degree in z of the logarithmic transfer function is a constant in ξ.
Proof. Let us prove the first statement.
(⇒) If the Hermitian structure is explicit in the metric tensor, the pure-index components must vanish:

g_{ij} = ∂_i φ_0 ∂_j φ_0 = 0

for all i and j. This equation exhibits that f_0 a_0 is a constant in ξ, because φ_0 = log (f_0 a_0). (⇐) If φ_0 (or f_0 a_0) is a constant in ξ, the metric tensor is found from Equations (10) and (11) to be

g_{ij} = 0, g_{ij̄} = Σ_{r=1}^{∞} ∂_i φ_r ∂_j̄ φ̄_r, (14)

and these metric tensor conditions impose that the geometry is a Hermitian manifold. It is noteworthy that the non-vanishing metric tensor components are expressed only with φ_r and φ̄_r, which are functions of the impulse response functions f_r in f(z; ξ), the unilateral part of the transfer function. For the manifold to be a Kähler manifold, the Kähler two-form Ω needs to be closed. The condition for the closed Kähler two-form Ω is that ∂_k g_{ij̄} = ∂_i g_{kj̄} and ∂_k̄ g_{ij̄} = ∂_j̄ g_{ik̄}. It is easy to verify that the metric tensor components, Equation (14), satisfy the conditions for the closed Kähler two-form. A Hermitian manifold with a closed Kähler two-form is a Kähler manifold.

The proof of the second statement is straightforward, because it is similar to the proof of the first one with the exchange of φ_r ↔ η_r. Let us assume that the highest degree in z is R. According to Lemma 3, it is possible to reduce a bilateral transfer function with finitely many terms in the non-causal direction to a unilateral transfer function by factoring out z^R. After that, we need to replace η_0 with φ_0 in the proof. The two statements are then equivalent.

Theorem 2 can also be applied to submanifolds of the information manifolds. For example, a submanifold of a linear system is a Kähler manifold if and only if φ_0 (or f_0 a_0) is constant on the submanifold, i.e., φ_0 is a function only of the coordinates orthogonal to the submanifold.
On a Kähler manifold, the metric tensor is derived from the following equation:

g_{ij̄} = ∂_i ∂_j̄ K, (15)

where K is the Kähler potential. There exists a degree of freedom in the Kähler potential up to a holomorphic and an anti-holomorphic function: K(ξ, ξ̄) = K′(ξ, ξ̄) + φ(ξ) + ψ(ξ̄). However, the same geometry is derived from the relation g_{ij̄} = ∂_i ∂_j̄ K. By using Equation (15), the information on the geometry can be extracted from the Kähler potential. It is thus necessary to find the Kähler potential for the signal processing geometry. The following corollary shows how to obtain the Kähler potential for the Kählerian information manifold.
Corollary 1. For a given Kählerian information geometry, the Kähler potential of the geometry is the square of the Hardy norm of the logarithmic transfer function. In other words, the Kähler potential is the square of the complex cepstrum norm of a signal filter.
Proof. Given a transfer function h(z; ξ), the non-trivial components of the metric tensor for a signal processing model are given by Equation (9). Since the holomorphic derivative acts only on log h and the anti-holomorphic derivative acts only on log h̄, the derivatives can be pulled out of the contour integral:

g_{ij̄} = (1/2πi) ∮_{|z|=1} ∂_i log h(z; ξ) ∂_j̄ log h̄(z; ξ̄) dz/z = ∂_i ∂_j̄ ( (1/2πi) ∮_{|z|=1} log h(z; ξ) log h̄(z; ξ̄) dz/z ),

and, by the definition of the Kähler potential, Equation (15), the Kähler potential of the linear system geometry is given by

K = (1/2πi) ∮_{|z|=1} log h(z; ξ) log h̄(z; ξ̄) dz/z = (1/2π) ∫_{-π}^{π} | log h(e^{iw}; ξ) |^2 dw = || log h ||^2_{H^2},

up to a holomorphic function and an anti-holomorphic function. The right-hand side of the above equation is the square of the Hardy norm of the logarithmic transfer function. Additionally, the Hardy norm of the logarithmic transfer function is also known as the complex cepstrum norm of a linear system [25,27].
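Corollary 1 can be sanity-checked numerically: for a hypothetical AR(1) filter, the squared Hardy norm of log h is K = Σ_r |ξ|^{2r}/r^2, and a quarter of its real Laplacian (i.e., the Wirtinger derivative ∂_ξ ∂_ξ̄ K) reproduces the metric 1/(1 − |ξ|^2):

```python
import numpy as np

w = np.linspace(0, 2*np.pi, 8192, endpoint=False)
z = np.exp(1j * w)

def K(xi):
    # Kahler potential: squared Hardy norm of log h for h(z) = 1/(1 - xi/z)
    return np.mean(np.abs(-np.log(1 - xi / z))**2)

xi, eps = 0.3 + 0.2j, 1e-4
# g_{1 1bar} = d_xi d_xibar K = (1/4) (d^2/dx^2 + d^2/dy^2) K
lap = (K(xi + eps) + K(xi - eps) + K(xi + 1j*eps) + K(xi - 1j*eps)
       - 4 * K(xi)) / eps**2
g = lap / 4
assert abs(g - 1 / (1 - abs(xi)**2)) < 1e-4
```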
For a given linear system, the Kähler potential of the geometry is expressed with φ_r, α_r, β_r and their complex conjugates:

K = Σ_{r=0}^{∞} φ_r φ̄_r + Σ_{r=1}^{∞} (α_r + β_r)(ᾱ_r + β̄_r).

However, the geometry does not depend on α_r and ᾱ_r, because those are not functions of the model parameters ξ after fixing the degree of freedom. The Kähler potential is then expressed, up to terms that do not contribute to the metric, as

K = Σ_{r=0}^{∞} φ_r φ̄_r,

and it is noticeable that the Kähler potential depends only on φ_r and φ̄_r, which come from the unilateral part of the transfer function decomposition. A similar expression is obtained for the case of a finite highest degree in z by changing φ_r to η_r.
Since we assume that the complex cepstrum norm is finite, a transfer function h(z; ξ) in the H^2-space also satisfies K = || log h(z; ξ) ||^2_{H^2} < ∞. This implies that the transfer function lives not only in H^2, but also in exp(H^2); equivalently, log h is in the H^2-space.
From Equation (15), the metric tensor is derived from the Kähler potential. Additionally, the metric tensor is also calculated from the α-divergence. These facts indicate that there exists a connection between the Kähler potential and the α-divergence.
Corollary 2. The Kähler potential is a constant term in α, up to purely holomorphic or purely anti-holomorphic functions, of the α-divergence between a signal processing filter and the all-pass filter of a unit transfer function.
Proof. After replacing the spectral density function with the transfer function, the 0-divergence between a signal filter and the all-pass filter with a unit transfer function is given by

D^{(0)}(S_0 || S) = K + F(ξ) + F̄(ξ̄),

where, for a bilateral transfer function, F(ξ) = (1/2)( φ_0 + Σ_s log |z_s| )^2 + Σ_{r=1}^{∞} φ_r (α_r + β_r). For non-zero α, the α-divergence between a signal filter and the white noise is also obtained as

D^{(α)}(S_0 || S) = K + F(ξ) + F̄(ξ̄) + O(α).

When f_0 a_0 is unity, the constant term in α of the α-divergence is the Kähler potential, up to the purely holomorphic term F and the purely anti-holomorphic term F̄. This shows the relation between the α-divergence and the Kähler potential.
The α-connection on a Kähler manifold is expressed with the transfer function by using Equation (1) and Equation (3). It can also be cross-checked from the α-divergence in the transfer function representation.
Corollary 3. The α-connection components of the Kählerian information geometry and the non-trivial components of the symmetric tensor T are expressed with the transfer function. In particular, the non-vanishing 0-connection components are expressed with the Kähler potential, and the 0-connection is directly derived from it. Additionally, the α-connection and the (−α)-connection are dual to each other.

Proof. After plugging Equation (1) into Equation (3), the derivation of the α-connection is straightforward by considering holomorphic and anti-holomorphic derivatives in the expression. The same procedure is applied to the derivation of the symmetric tensor T. The 0-connection is also directly derived from the Kähler potential.

To prove the α-duality, we need to test the relation $\partial_\mu g_{\nu\rho} = \Gamma^{(\alpha)}_{\mu\nu,\rho} + \Gamma^{(-\alpha)}_{\mu\rho,\nu}$, where the Greek indices run over $1,\cdots,n,\bar{1},\cdots,\bar{n}$. After tedious calculation, the relation is satisfied regardless of the combination of indices. Therefore, the α-duality also holds on the Kählerian information manifolds.
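In Amari's convention for the α-connection (the overall normalization here is an assumption, not taken from the source), the structure referred to above can be sketched as:

```latex
% 0-connection from the Kähler potential, and its alpha-deformation by the
% symmetric tensor T (normalization assumed as in Amari's convention).
\Gamma^{(0)}_{ij,\bar{k}} = \partial_i \partial_j \partial_{\bar{k}} K ,
\qquad
\Gamma^{(\alpha)}_{ij,\bar{k}} = \Gamma^{(0)}_{ij,\bar{k}} - \frac{\alpha}{2}\, T_{ij,\bar{k}} .
```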
The 0-connection and the symmetric tensor T are expressed in terms of $\phi_r$ and $\bar\phi_r$. With the degree of freedom fixed such that $\phi_0$ is a constant in the model parameters $\xi$, the non-trivial components of the 0-connection and the symmetric tensor T are $\Gamma_{ij,\bar k}$ and $T_{ij,\bar k}$ (and their complex conjugates), respectively. In this degree of freedom, the Hermitian condition on the metric tensor is obviously emergent, and it is also straightforward to check the α-duality condition for the non-vanishing components.

We can cross-check these formulae for the geometric objects of the linear system geometry against the well-known results on a Kähler manifold. First of all, the fact that the 0-connection is the Levi-Civita connection follows from the expression for the Levi-Civita connection on a Kähler manifold, to which it is well-matched. In Riemannian geometry, the Riemann curvature tensor, corresponding to the 0-curvature tensor, is given in terms of the connection, where the Greek indices can be any holomorphic and anti-holomorphic indices. Similar to a Hermitian manifold, the non-vanishing components of the 0-curvature tensor on a Kähler manifold are $R^{\rho}{}_{\sigma\mu\bar\nu}$ and its complex conjugate, i.e., the components with three holomorphic indices and one anti-holomorphic index (and the complex conjugate components). The non-trivial components of the Riemann curvature tensor take this form because the 0-connection components with mixed indices vanish.
Taking the index contraction on the holomorphic upper and lower indices of the Riemann curvature tensor, the 0-Ricci tensor is found as $R_{i\bar j} = -\partial_i \partial_{\bar j} \log G$, where $G$ is the determinant of the metric tensor. This result is consistent with the expression for the Ricci tensor on a Kähler manifold. It is also straightforward to obtain the 0-scalar curvature by contracting the indices of the 0-Ricci tensor, $R = -\Delta \log G$, where $\Delta$ is the Laplace-Beltrami operator on the Kähler manifold.
The α-generalization of the curvature tensor, the Ricci tensor and the scalar curvature is based on the α-connection, Equation (4), from which the α-curvature tensor, the α-Ricci tensor and the α-scalar curvature are obtained. It is noteworthy that the α-curvature tensor, the α-Ricci tensor and the α-scalar curvature on a Kähler manifold carry corrections linear in α, compared with the quadratic corrections in α on non-Kähler manifolds.

A submanifold of a Kähler manifold is also a Kähler manifold. When a submanifold of dimension m exists, the transfer function of a linear system can be decomposed into two parts, $h = h_{\parallel} h_{\perp}$, where $h_{\parallel}$ is the transfer function on the submanifold and $h_{\perp}$ is the transfer function orthogonal to the submanifold. When this is plugged into Equation (16), the Kähler potential of the geometry is decomposed into three terms, $K = K_{\parallel} + K_{\times} + K_{\perp}$, where $K_{\parallel}$ contains the coordinates from the submanifold, $K_{\times}$ collects the cross-terms and $K_{\perp}$ is orthogonal to the submanifold. Each part of this decomposition provides the corresponding block of the metric tensor, where an uppercase index labels the coordinates on the submanifold and a lowercase index labels the coordinates orthogonal to the submanifold. As we already know, the induced metric tensor for the submanifold is derived from $K_{\parallel}$, the Kähler potential of the submanifold. Based on this decomposition, it is also possible to use $K_{\parallel}$ as the Kähler potential of the submanifold, because it endows the same metric as $K$. However, the Riemann curvature tensor and the Ricci tensor include mixing terms from the embedding in the ambient manifold, because the inverse metric tensor contains the orthogonal coordinates through the Schur complement. In statistical inference, connections, tensors and the scalar curvature play important roles.
If those mixing terms are negligible, dimensional reduction to the submanifold is meaningful from the viewpoint not only of Kähler geometry, but also of statistical inference.
The benefits of introducing a Kähler manifold as an information manifold are as follows. First of all, on a Kähler manifold, the calculation of geometric objects, such as the metric tensor, the α-connection and the Ricci tensor, is simplified by using the Kähler potential. For example, the 0-connection on a non-Kähler manifold is given by the Levi-Civita formula $\Gamma_{\mu\nu,\rho} = \frac{1}{2}(\partial_\mu g_{\nu\rho} + \partial_\nu g_{\mu\rho} - \partial_\rho g_{\mu\nu})$, demanding three times more calculation steps than the Kähler case, Equation (18). Additionally, the Ricci tensor on a Kähler manifold is directly derived from the determinant of the metric tensor. Meanwhile, the Ricci tensor on a non-Kähler manifold requires more steps: the connection is first calculated from the metric tensor; the Riemann curvature tensor is then obtained by taking derivatives of the connection and collecting terms quadratic in the connection; finally, the Ricci tensor is found by index contraction on the curvature tensor. Secondly, the α-corrections to the Riemann curvature tensor, the Ricci tensor and the scalar curvature on a Kähler manifold are linear in α, whereas the corrections are quadratic in α in the non-Kähler case. The α-linearity makes it much easier to understand the properties of the α-family.
Moreover, submanifolds in Kähler geometry are also Kähler manifolds. When a statistical model is reducible to its lower-dimensional models, the information geometry of the reduced statistical model is a submanifold of the geometry. If the ambient manifold is Kähler, the dimensional reduction also provides a Kähler manifold as the information geometry of the reduced model, and the submanifold is equipped with all of the properties of the Kähler manifold.
Lastly, finding the superharmonic priors suggested by Komaki [15] is more straightforward in the Kähler setup, because the Laplace-Beltrami operator on a Kähler manifold takes a simpler form than in the non-Kähler case. For a differentiable function ψ, the Laplace-Beltrami operator on a Kähler manifold is given by $\Delta\psi = 2 g^{i\bar j} \partial_i \partial_{\bar j} \psi$ (20), compared with the Laplace-Beltrami operator on a non-Kähler manifold, $\Delta\psi = \frac{1}{\sqrt{G}} \partial_\mu \left( \sqrt{G}\, g^{\mu\nu} \partial_\nu \psi \right)$, where $G$ is the determinant of the metric tensor. On a Kähler manifold, the partial derivatives act only on the prior function. Meanwhile, the contributions from the derivatives acting on $G$ and $g^{\mu\nu}$ must be considered in the non-Kähler case. This computational overhead is absent on a Kähler manifold.
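A small symbolic sketch of this operator on the AR(1) submanifold (the metric $g_{1\bar 1} = 1/(1-|\xi|^2)$ and the candidate prior $\psi = 1 - |\xi|^2$ are illustrative assumptions, not results quoted from the source):

```python
import sympy as sp

# Treat xi and its conjugate as independent holomorphic coordinates a = xi, b = xi-bar.
a, b = sp.symbols('a b')

# Inverse metric of the AR(1) constant-gain submanifold, assuming g = 1/(1 - xi xibar).
g_inv = 1 - a * b

# A hypothetical candidate prior function psi = 1 - xi xibar.
psi = 1 - a * b

# Laplace-Beltrami operator on a Kähler manifold: Delta psi = 2 g^{i jbar} d_i d_jbar psi.
laplacian = sp.expand(2 * g_inv * sp.diff(psi, a, b))
print(laplacian)  # equals 2*a*b - 2, which is <= 0 on |xi| < 1, so psi is superharmonic
```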

Example: AR, MA and ARMA Models
In the previous section, we showed that the information geometry of a signal filter is a Kähler manifold. From the viewpoint of signal processing, a time series model can be interpreted as a signal filter that transforms a randomized input x(z) into an output y(z). The geometry of a time series model can also be found by using the results in the previous section. In particular, we cover the AR, the MA and the ARMA models as examples.
First of all, the transfer functions of these time series models need to be identified. The transfer functions of the AR, the MA and the ARMA models with model parameters $\xi = (\sigma, \xi_1, \cdots, \xi_n)$ are represented in pole–zero form. The ARMA model can be considered as the ratio of two AR models or of two MA models. By Lemma 1, the correspondence between the α-duality and the reciprocality of transfer functions is also valid for the ARMA(p, q) models. For example, the ARMA(p, q) model with the α-connection is α-dual to the ARMA(q, p) model with the (−α)-connection under the reciprocality of the transfer function. Simply speaking, the AR model and the MA model are exchangeable by Lemma 1 through the unit transfer function $h^{(0)}$ of an all-pass filter.
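For concreteness, a common pole–zero parametrization of these transfer functions can be sketched as follows (the normalization is an assumption; the $\xi_i$ denote poles for the AR part and zeros for the MA part):

```latex
h_{\mathrm{AR}(p)}(z;\xi) = \frac{\sigma}{\prod_{i=1}^{p} \left(1 - \xi_i z^{-1}\right)} ,
\qquad
h_{\mathrm{MA}(q)}(z;\xi) = \sigma \prod_{i=1}^{q} \left(1 - \xi_i z^{-1}\right) ,
\qquad
h_{\mathrm{ARMA}(p,q)}(z;\xi) = \sigma\,
\frac{\prod_{i=1}^{q} \left(1 - \xi_{p+i} z^{-1}\right)}
     {\prod_{i=1}^{p} \left(1 - \xi_i z^{-1}\right)} .
```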

Kählerian Information Geometry of ARMA(p, q) Models
The ARMA(p, q) model is the (p+q+1)-dimensional model with $\xi = (\sigma, \xi_1, \cdots, \xi_{p+q})$, and the time series model is characterized by its transfer function, where σ is the gain and the $\xi_i$ are the poles and zeros, each with $|\xi_i| < 1$. From the logarithmic transfer function of the ARMA(p, q) model, it is easy to verify that $f_0 a_0 = \sigma^2/2\pi$. According to Theorem 1, the information geometry of the ARMA model is a Kähler manifold because of the stability, the minimum phase and the finite complex cepstrum norm of the ARMA filter. By using Theorem 2, the Hermitian condition on the metric tensor is explicitly checked on the submanifold of the ARMA model where σ is a constant. In addition, this submanifold is also a Kähler manifold, because a submanifold of a Kähler manifold is Kähler. Since it is possible to gauge σ by normalizing the amplitude of an input signal, the σ-coordinate can be considered as the denormalization coordinate [21]. Similar to the non-complexified ARMA models [12], $g_{0i}$ vanishes for all non-zero i by direct calculation using Equation (2). Considering these facts, we work only with the submanifolds of constant gain.
As mentioned, the Kähler potential is crucial for a Kähler manifold and is defined as the square of the Hardy norm of the logarithmic transfer function, equivalently the square of the complex cepstrum norm, Equation (16). The metric tensor of the ARMA(p, q) model follows by taking partial derivatives of the Kähler potential, Equation (15); the fully holomorphic- and fully anti-holomorphic-indexed components all vanish. It is easily verified that if the indices i and j are both from the AR part or both from the MA part, $c_i$ and $c_j$ have the same signature, so the AR(p)- and MA(q)-submanifolds of the ARMA(p, q) model have the same metric tensors as the AR(p) and MA(q) models, respectively. If the two indices are from different parts, there is only a sign difference in the metric tensor. The metric tensor of the geometry is similar in form to the metric tensor in Ravishanker's work on the ARMA geometry [12].
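One closed form consistent with the sign structure just described can be sketched as follows (the normalization is an assumption):

```latex
% Metric on the constant-gain submanifold of the ARMA(p,q) model;
% c_i carries the sign of the AR/MA decomposition.
g_{i\bar{j}} = \frac{c_i c_j}{1 - \xi_i \bar{\xi}_j} ,
\qquad
c_i =
\begin{cases}
+1 & \text{for an AR (pole) coordinate,} \\
-1 & \text{for an MA (zero) coordinate.}
\end{cases}
```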
By considering the Schur complement, the inverse metric tensor can be deduced from that of the AR(p+q) model; the only difference from the AR case is the signature $c_i c_j$ in the AR-MA mixed components. With the sign difference in the metric tensor components with AR-MA mixed indices, the determinant of the metric tensor is also calculated with the aid of the Schur complement.
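As a numerical sanity check at a hypothetical ARMA(2,1) point (the closed form of the metric used below, with the AR-MA sign pattern, is an assumption for illustration), the constant-gain metric is Hermitian and positive-definite:

```python
import numpy as np

# Hypothetical ARMA(2,1) coordinates: poles xi_1, xi_2 and zero xi_3, all in the unit disk.
xi = np.array([0.5, -0.3 + 0.2j, 0.4j])
c = np.array([1.0, 1.0, -1.0])  # +1 for AR (pole) coordinates, -1 for MA (zero)

# Assumed metric on the constant-gain submanifold: g_{i jbar} = c_i c_j / (1 - xi_i xibar_j).
g = np.outer(c, c) / (1 - np.outer(xi, xi.conj()))

print(np.linalg.eigvalsh(g))  # all eigenvalues positive: Hermitian positive-definite
```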
The 0-connection and the symmetric tensor T for the Kähler-ARMA model can be found from the results in the previous section: the non-trivial 0-connection components are calculated from Equation (18), and the non-zero components of the symmetric tensor T are given by Equation (17). Based on these expressions, the α-connection is easily obtained from Equation (4). The 0-Ricci tensor of the ARMA geometry is represented by Equation (19), and it is noteworthy that the Ricci tensor does not depend on the $c_i$. The 0-scalar curvature is calculated from the 0-Ricci tensor by index contraction, where the $c_i$, $c_j$ come from the inverse metric tensor of the ARMA model.
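For the AR(1) special case, the chain from the Kähler potential to the metric and the Ricci tensor can be verified symbolically (a sketch; the AR(1) potential $K = \mathrm{Li}_2(\xi\bar\xi)$ on the constant-gain submanifold is an assumption for illustration):

```python
import sympy as sp

# xi and its conjugate treated as independent symbols (AR(1), gain fixed).
a, b = sp.symbols('a b')

# Assumed AR(1) Kähler potential: K = sum_{r>=1} (xi xibar)^r / r^2 = Li_2(xi xibar).
K = sp.polylog(2, a * b)

# Metric from the potential by mixed derivatives: g = d_a d_b K.
g = sp.simplify(sp.diff(K, a, b))

# Ricci tensor from the determinant of the metric: R = -d_a d_b log(det g).
ricci = sp.simplify(-sp.diff(sp.log(g), a, b))
print(g, ricci)
```

The computation reproduces the Poincaré-type metric $1/(1-\xi\bar\xi)$ and a Ricci tensor $-1/(1-\xi\bar\xi)^2$, obtained purely by differentiating the potential and the determinant.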
It is straightforward to derive the α-generalization of the Riemann curvature tensor, the Ricci tensor and the scalar curvature by using the results in Section 3.

Conclusion
In this paper, we prove that the information geometry of a signal filter with a finite complex cepstrum norm is a Kähler manifold. Conditions on the transfer function of the filter make the Hermitian structure explicit. The first condition for the Kählerian information manifold is whether the product of the zeroth-degree terms in z of the unilateral part and the analytic part of the transfer function decomposition is a constant. The second condition is whether the coefficient of the highest degree in z is a constant in the model parameters. These two conditions are equivalent for some transfer functions.
It is also found that the square of the Hardy norm of the logarithmic transfer function, also known as the unweighted complex cepstrum norm of a linear system, is the Kähler potential of the information geometry. Using the Kähler potential, it is easy to derive the geometric objects, such as the metric tensor, the α-connection and the Ricci tensor. Additionally, the Kähler potential is the constant term in α of the α-divergence, i.e., it is related to the 0-divergence.
The Kählerian information geometry for signal processing is not only mathematically interesting, but also computationally practical. In contrast to non-Kähler manifolds, where tedious and lengthy calculations are needed to obtain the tensors, it is relatively easy to calculate the metric tensor, the connection and the Ricci tensor on a Kähler manifold. Taking derivatives of the Kähler potential provides the metric tensor and the connection, and the Ricci tensor is obtained from the determinant of the metric tensor. Moreover, the α-generalization of the curvature tensor, the Ricci tensor and the scalar curvature is linear in α, whereas non-linear corrections arise in the non-Kähler cases. Additionally, since the Laplace-Beltrami operator in Kähler geometry takes a simpler form, it is more straightforward to find superharmonic priors.
The information geometries of the AR, the MA and the ARMA models, the most well-known time series models, are Kähler manifolds. The metric tensors, the connections and the divergences of these linear system geometries are derived from the Kähler potentials with simplified calculation. In addition, the superharmonic priors for those models are found with much less computational effort.