Article

Complex-Valued FastICA Estimator with a Weighted Unitary Constraint: A Robust and Equivariant Estimator

by Jianwei E and Mingshu Yang *
College of Mathematics and Physics, Center for Applied Mathematics of Guangxi, Guangxi Minzu University, Nanning 530006, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(12), 1840; https://doi.org/10.3390/math12121840
Submission received: 28 May 2024 / Revised: 11 June 2024 / Accepted: 11 June 2024 / Published: 13 June 2024

Abstract

Independent component analysis (ICA), as a statistical and computational approach, has been successfully applied to digital signal processing. Performance analysis of ICA methods, however, remains a challenging task. This contribution concerns the complex-valued FastICA algorithm in the setting of ICA over the complex number domain, focusing on the robustness and equivariance of the complex-valued FastICA estimator. Although the complex-valued FastICA algorithm and its derivatives are widely used for the complex blind signal separation problem, rigorous mathematical treatments of the robustness measurement and equivariance of the complex-valued FastICA estimator are still missing. This paper rigorously analyzes the robustness against outliers and the separation performance in terms of the global system. We begin by defining the influence function (IF) of the complex-valued FastICA functional and then derive its closed-form expression. Finally, we prove that the complex-valued FastICA algorithm based on optimizing the cost function is linear-equivariant, depending only on the source signals.

1. Introduction

Independent component analysis (ICA) is a statistical and computational model in which the observed signals are considered as linear mixtures of underlying source signals. The purpose of ICA is to estimate mutually independent source signals from their linear mixtures without prior knowledge of the sources and mixing coefficients [1,2]. To date, ICA has been widely applied in diverse fields, such as signal processing [3,4], financial analysis [5,6], and so on. Hence, a growing body of literature recognizes the importance of providing theoretical support for the properties of the ICA approaches employed. The main purpose of this paper is to analyze the robustness and equivariance of the complex-valued fast fixed-point algorithm for ICA (complex-valued FastICA for short) [7,8], which is one of the most prominent algorithms in the complex number domain due to its faster convergence and easier implementation.
Among many publications, algorithmic property is one of the most fundamental properties for complex-valued ICA. Novey and Adali proposed complex ICA based on negentropy maximization (NM) [9] and provided the local stability condition of the algorithm. Furthermore, Qian and Wei gave a stability analysis from a unique perspective [10]. The authors pointed out that the NM-based ICA algorithm might iterate a poor separation vector even if the source signals meet the stability conditions. Reference [11] presented the Riemann and Lie structures of the complex unitary group, which generalized the topological properties of the complex-valued ICA. Koldovský and Tichavský [12] addressed the problem regarding the region of convergence of gradient-related ICA algorithms. The results showed that the size of the region of convergence was related to the employed algorithm and relied on the ratio of scales of source signals. E et al. [13] provided a performance analysis of the complex-valued FastICA algorithm with the M-estimator cost function, including stability and local convergence. They proved the existence of local optimal solutions and stability conditions.
Moreover, the statistical property is another fundamental property for complex-valued ICA. The Cramér–Rao bound (CRB) is essential for describing the performance limit of ICA algorithms and has been researched, e.g., in [14,15], where the authors derived a closed-form expression for the CRB of the separation parameter for complex-valued ICA. Fu et al. [16] established the theory for complex-valued ICA, providing the Cramér–Rao lower bound and identification conditions, which exploited the diversities of non-Gaussianity, non-whiteness, and non-circularity. Furthermore, Koldovský et al. [17] analyzed the accuracy of fast dynamic independent vector analysis, showing asymptotic efficiency under given mixing and statistical models, which coincided with the Cramér–Rao lower bound derived in [15]. Reference [13] also analyzed the statistical properties of the complex-valued FastICA method, including consistency and robustness (the sequence of complex-valued FastICA estimators converges in probability to the true demixing vector). However, the robustness property was addressed there without a rigorous mathematical treatment. In this paper, we focus on deriving a closed-form expression of the robustness measure for the complex-valued FastICA functional.
The circular complex-valued FastICA (c-FastICA) algorithm was introduced by Bingham and Hyvärinen [7], as an extension of FastICA in the real domain [18,19]. It has received wide attention for solving complex-valued source signal separation due to its faster convergence and easier implementation. In order to improve the robustness against outliers, Chao and Douglas [20] selected the Huber M-estimator as nonlinearity in the cost function within the complex-circular FastICA algorithm. The local stability analysis showed that the improved approach was locally stable for Huber’s single-parameter M-estimator cost function with circular source signals. The aforementioned complex-valued FastICA algorithm, however, has poor performance when dealing with noncircular source signal separation. To address this obstacle, Novey and Adali extended it to the noncircular source signals separation scenario and analyzed the local convergence of the estimator [8]. Although there have been several attempts to study the statistical properties of the complex-noncircular FastICA algorithm (nc-FastICA for short), a rigorous analysis of the robust and equivariant behavior of nc-FastICA is still missing in the community. To fill this gap, this paper will analyze the robustness against outliers and the separation performance of the complex-valued FastICA estimator with a weighted unitary constraint.
The innovations of this paper are three-fold. First, we define the complex-valued FastICA functional and its influence function (IF) for the deflationary procedure. Second, a closed-form expression of the IF for the complex-valued FastICA functional is derived, which is utilized to measure robustness against outliers from a rigorous mathematical perspective. Third, we prove that the complex-valued FastICA algorithm is equivariant.
This paper is organized as follows: Section 2 provides the preliminaries of the complex-valued ICA model and the deflationary complex FastICA algorithm considered in this work. Section 3 establishes the theory concerning the complex-valued FastICA functional and its influence function in the setting of ICA over the complex number domain. Section 4 analyzes the equivariance of the complex FastICA estimator. Section 5 concludes the paper.

2. Preliminaries

2.1. Complex-Valued ICA Model

Assume that a complex-valued random vector $\mathbf{x} \in \mathbb{C}^{n}$ consists of two real-valued random vectors $\mathbf{x}_{R}, \mathbf{x}_{I} \in \mathbb{R}^{n}$ with imaginary unit $j$, i.e., $\mathbf{x} = \mathbf{x}_{R} + j\mathbf{x}_{I}$, and the conjugate of $\mathbf{x}$ is defined by $\mathbf{x}^{*} = \mathbf{x}_{R} - j\mathbf{x}_{I}$. We denote by
$$\mathbf{m}_{\mathbf{x}} = E\{\mathbf{x}\},$$
$$\mathbf{C}_{\mathbf{x}} = E\{(\mathbf{x} - \mathbf{m}_{\mathbf{x}})(\mathbf{x} - \mathbf{m}_{\mathbf{x}})^{H}\},$$
$$\tilde{\mathbf{C}}_{\mathbf{x}} = E\{(\mathbf{x} - \mathbf{m}_{\mathbf{x}})(\mathbf{x} - \mathbf{m}_{\mathbf{x}})^{T}\},$$
the mean vector, covariance matrix, and complementary covariance matrix of $\mathbf{x}$, respectively, where $E\{\mathbf{x}\} = E\{\mathbf{x}_{R}\} + jE\{\mathbf{x}_{I}\}$, $(\cdot)^{H}$ denotes the conjugate transpose, and $(\cdot)^{T}$ the ordinary transpose. The complex-valued random vector $\mathbf{x}$ is circular if $\tilde{\mathbf{C}}_{\mathbf{x}} = \mathbf{0}$, and noncircular otherwise.
Let $f(\mathbf{x}, \mathbf{x}^{*})$ be an analytic function of a complex random vector $\mathbf{x}$ and its conjugate $\mathbf{x}^{*}$. In this paper, we utilize the Wirtinger calculus to obtain the partial derivatives with respect to $\mathbf{x}$ and $\mathbf{x}^{*}$, treating $\mathbf{x}$ and $\mathbf{x}^{*}$ as two independent variables [21].
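For illustration (a standard Wirtinger-calculus identity, not stated explicitly in the original text), the derivatives of the squared modulus $|\mathbf{b}^{H}\mathbf{x}|^{2}$, which appears in the FastICA cost function below, are obtained by treating $\mathbf{b}$ and $\mathbf{b}^{*}$ as independent variables:
$$\frac{\partial}{\partial \mathbf{b}^{*}}\,|\mathbf{b}^{H}\mathbf{x}|^{2} = \frac{\partial}{\partial \mathbf{b}^{*}}\,(\mathbf{b}^{H}\mathbf{x})(\mathbf{x}^{H}\mathbf{b}) = (\mathbf{b}^{H}\mathbf{x})^{*}\,\mathbf{x}, \qquad \frac{\partial}{\partial \mathbf{b}}\,|\mathbf{b}^{H}\mathbf{x}|^{2} = (\mathbf{b}^{H}\mathbf{x})\,\mathbf{x}^{*}.$$
Terms of exactly this form, weighted by the nonlinearity $g$, appear in the fixed-point updates of Section 2.2.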
The ICA model established in the complex number field can be considered an extension of the real-valued ICA model [2], which assumes that the observed signals are linear combinations of the unknown source signals $\mathbf{s} = (s_{1}, s_{2}, \ldots, s_{n})^{T}$, generating the observed signal vector $\mathbf{x} = (x_{1}, x_{2}, \ldots, x_{n})^{T}$ according to
$$\mathbf{x} = \mathbf{A}\mathbf{s} = \sum_{i=1}^{n}\mathbf{a}_{i}s_{i}, \qquad (1)$$
where $\mathbf{A} \in \mathbb{C}^{n\times n}$ is the unknown mixing (coefficient) matrix, which determines the cumulative distribution function (cdf) of the observed signal $\mathbf{x}$, denoted by $F(\mathbf{x})$, and $\mathbf{a}_{i}$ is the $i$th column of $\mathbf{A}$. Generally, the following assumptions are needed in the ICA model:
Assumption 1. 
The source signals are of zero mean and unit variance, i.e., $\mathbf{C}_{\mathbf{s}} = \mathbf{I}$; the sources $s_{1}, s_{2}, \ldots, s_{n}$ are statistically mutually independent; and at most one source is Gaussian.
Assumption 2. 
The mixing matrix $\mathbf{A}$ is of full rank.
The primary aim of solving the ICA problem is to obtain a matrix $\mathbf{B}$, called the demixing matrix, which is used to estimate the source signals $\mathbf{s}$ by $\mathbf{y} = (y_{1}, y_{2}, \ldots, y_{n})^{T}$ defined as follows:
$$\mathbf{y} = \mathbf{B}\mathbf{x} = \mathbf{B}\mathbf{A}\mathbf{s} = \mathbf{P}\boldsymbol{\Lambda}\mathbf{s}, \qquad (2)$$
where $\mathbf{P}$ and $\boldsymbol{\Lambda}$ are permutation and diagonal matrices, respectively, representing the ambiguities of the ICA model (1). The demixing matrix can be obtained by the non-Gaussianity-maximization criterion [2], and the sources are recovered up to their scales, phases, and order.
The data-centering procedure is needed when solving the ICA problem, as it simplifies the theory and algorithm. For the zero-mean source signals $\mathbf{s}$, obtained by subtracting the mean vector $\mathbf{m}_{\mathbf{x}}$ from $\mathbf{x}$, we can similarly define the covariance matrix $\mathbf{C}_{\mathbf{s}}$ and the complementary covariance matrix $\tilde{\mathbf{C}}_{\mathbf{s}}$. In particular, a source signal is circular if $\tilde{\mathbf{C}}_{\mathbf{s}} = \mathbf{0}$, and noncircular otherwise. In this study, we analyze the robustness and equivariance characteristics of the complex-valued FastICA estimator under the more general case of noncircular sources.
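A minimal numerical sketch of the centering step and the circularity check of this subsection is given below (illustrative NumPy code, not taken from the paper; the function name and tolerance are arbitrary choices):

```python
import numpy as np

def center_and_characterize(X):
    """Center complex-valued observations and estimate the covariance and
    complementary (pseudo-)covariance matrices of Section 2.1.

    X : complex ndarray of shape (n, T) -- n mixtures, T samples.
    """
    m_x = X.mean(axis=1, keepdims=True)            # sample mean vector m_x
    Xc = X - m_x                                   # centered data
    T = X.shape[1]
    C_x = (Xc @ Xc.conj().T) / T                   # covariance  E{(x-m)(x-m)^H}
    C_tilde = (Xc @ Xc.T) / T                      # complementary covariance E{(x-m)(x-m)^T}
    circular = np.allclose(C_tilde, 0, atol=1e-2)  # circular if the pseudo-covariance vanishes
    return Xc, C_x, C_tilde, circular

# Example: a noncircular (real-valued, BPSK-like) source mixed by a random complex matrix.
rng = np.random.default_rng(0)
S = rng.choice([-1.0, 1.0], size=(2, 5000)) + 0j   # noncircular: E{s s^T} != 0
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
X = A @ S
_, C_x, C_tilde, circular = center_and_characterize(X)
print("circular mixture?", circular)               # expected: False
```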

2.2. Deflationary Complex FastICA

To date, various approaches have been developed for solving the complex ICA problem, with the complex-valued FastICA algorithm being one of the most prominent for finding the demixing matrix $\mathbf{B}$.
Let $\mathbf{b}$ be one of the columns of the demixing matrix $\mathbf{B}$, regarded as a demixing vector. On the basis of Assumption 1, the estimated source signal $y = \mathbf{b}^{H}\mathbf{x}$ should be of unit variance, which is enforced by searching for the solution on the weighted unitary constraint group $\mathcal{O}(n) = \{\mathbf{b} \in \mathbb{C}^{n} : \|\mathbf{b}\|_{\mathbf{C}_{\mathbf{x}}} = 1\}$, with the weighted vector norm given by $\|\mathbf{b}\|_{\mathbf{C}_{\mathbf{x}}}^{2} = \mathbf{b}^{H}\mathbf{C}_{\mathbf{x}}\mathbf{b}$. Hence, the cost function of the complex-valued FastICA algorithm with the weighted unitary constraint for the one-unit version has the following form:
$$J(\mathbf{b}) = E\{G(|\mathbf{b}^{H}\mathbf{x}|^{2})\}, \quad \mathbf{b} \in \mathcal{O}(n), \qquad (3)$$
where $G: \mathbb{R}^{+}\cup\{0\} \to \mathbb{R}$ is a smooth even function, called the nonlinearity (it determines the robustness of the complex-valued FastICA algorithm and is discussed further in Section 3), and $\mathbf{x}$ denotes the observed signal vector defined in (1).
We note that (i) the cost function operates on the squared modulus $|\mathbf{b}^{H}\mathbf{x}|^{2}$ rather than on a complex value, since we seek to maximize the expectation of a real-valued nonlinear cost function; and (ii) optimization over the weighted unitary constraint group $\mathcal{O}(n)$ guarantees that the estimated source signal $y = \mathbf{b}^{H}\mathbf{x}$ has unit variance.
In order to solve the optimization problem (3), Bingham and Hyvärinen [7] presented a fast fixed-point algorithm (called c-FastICA), which is capable of estimating circular source signals. However, it performs poorly when dealing with noncircular source separation. To overcome this obstacle, Novey and Adali extended the c-FastICA algorithm to the noncircular source signal separation scenario (called the nc-FastICA algorithm), consisting of the following learning rule [8]:
  • Step 1. Choose an arbitrary initial value of unit weighted norm for $\mathbf{b} \in \mathcal{O}(n)$;
  • Step 2. Run the iteration
$$\begin{aligned} \mathbf{b} &\leftarrow E\{g(|\mathbf{b}^{H}\mathbf{x}|^{2})(\mathbf{b}^{H}\mathbf{x})^{*}\mathbf{x}\} + E\{g'(|\mathbf{b}^{H}\mathbf{x}|^{2})|\mathbf{b}^{H}\mathbf{x}|^{2} + g(|\mathbf{b}^{H}\mathbf{x}|^{2})\}\,\mathbf{b} \\ &\qquad + E\{\mathbf{x}\mathbf{x}^{T}\}\,E\{g'(|\mathbf{b}^{H}\mathbf{x}|^{2})\big((\mathbf{b}^{H}\mathbf{x})^{*}\big)^{2}\}\,\mathbf{b}^{*}, \\ \mathbf{b} &\leftarrow \frac{\mathbf{b}}{\|\mathbf{b}\|_{\mathbf{C}_{\mathbf{x}}}}, \end{aligned} \qquad (4)$$
    until convergence, where $g(z)$ and $g'(z)$ denote $\frac{\mathrm{d}G(z)}{\mathrm{d}z}$ and $\frac{\mathrm{d}g(z)}{\mathrm{d}z}$, respectively.
When one needs to estimate several source signals, the one-unit complex FastICA algorithm (4) should be run several times with vectors $\mathbf{b}_{1}, \mathbf{b}_{2}, \ldots, \mathbf{b}_{n}$. To prevent the vectors from converging to the same maxima, they need to be orthogonalized after each iteration using the Gram–Schmidt method [2], resulting in the following deflationary complex FastICA algorithm (a numerical sketch is given after the algorithm below):
  • Step 1. Choose an arbitrary initial value of unit weighted norm for $\mathbf{b}_{j} \in \mathcal{O}(n)$; set counter $j = 1$;
  • Step 2. Run the iteration
$$\begin{aligned} \mathbf{b}_{j} &\leftarrow E\{g(|\mathbf{b}_{j}^{H}\mathbf{x}|^{2})(\mathbf{b}_{j}^{H}\mathbf{x})^{*}\mathbf{x}\} + E\{g'(|\mathbf{b}_{j}^{H}\mathbf{x}|^{2})|\mathbf{b}_{j}^{H}\mathbf{x}|^{2} + g(|\mathbf{b}_{j}^{H}\mathbf{x}|^{2})\}\,\mathbf{b}_{j} \\ &\qquad + E\{\mathbf{x}\mathbf{x}^{T}\}\,E\{g'(|\mathbf{b}_{j}^{H}\mathbf{x}|^{2})\big((\mathbf{b}_{j}^{H}\mathbf{x})^{*}\big)^{2}\}\,\mathbf{b}_{j}^{*}, \\ \mathbf{b}_{j} &\leftarrow \mathbf{b}_{j} - \sum_{i=1}^{j-1}(\mathbf{b}_{j}^{T}\mathbf{b}_{i}^{*})\,\mathbf{b}_{i}, \\ \mathbf{b}_{j} &\leftarrow \frac{\mathbf{b}_{j}}{\|\mathbf{b}_{j}\|_{\mathbf{C}_{\mathbf{x}}}}, \end{aligned} \qquad (5)$$
    until convergence;
  • Step 3. Set $j \leftarrow j + 1$ and return to Step 2 until $j = n$.
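The following is a minimal numerical sketch of the deflationary procedure above (illustrative NumPy code, not the authors' implementation). It assumes the plus signs of updates (4) and (5) as printed, the logarithmic nonlinearity of Table 1 applied to $u = |\mathbf{b}^{H}\mathbf{x}|^{2}$, and a plain Gram–Schmidt deflation; the function name, initialization, and convergence test are arbitrary choices.

```python
import numpy as np

def nc_fastica_deflation(X, n_iter=200, tol=1e-6, seed=0):
    """Deflationary complex FastICA sketch with a weighted unitary constraint.

    X : complex array (n, T) of centered observations.
    Returns B whose columns b_j estimate the demixing vectors, y_j = b_j^H x.
    Uses the 'log' nonlinearity: g(u) = 1/(1+u), g'(u) = -1/(1+u)**2.
    """
    rng = np.random.default_rng(seed)
    n, T = X.shape
    C = X @ X.conj().T / T                 # covariance C_x
    P = X @ X.T / T                        # pseudo-covariance E{x x^T}
    g = lambda u: 1.0 / (1.0 + u)
    dg = lambda u: -1.0 / (1.0 + u) ** 2
    B = np.zeros((n, n), dtype=complex)
    for j in range(n):
        b = rng.normal(size=n) + 1j * rng.normal(size=n)
        b /= np.sqrt(np.real(b.conj() @ C @ b))        # enforce ||b||_{C_x} = 1
        for _ in range(n_iter):
            y = b.conj() @ X                           # y = b^H x, shape (T,)
            u = np.abs(y) ** 2
            b_new = (X * (g(u) * y.conj())).mean(axis=1) \
                  + np.mean(dg(u) * u + g(u)) * b \
                  + P @ b.conj() * np.mean(dg(u) * y.conj() ** 2)
            for i in range(j):                         # Gram-Schmidt deflation
                b_new = b_new - (B[:, i].conj() @ b_new) * B[:, i]
            b_new /= np.sqrt(np.real(b_new.conj() @ C @ b_new))
            converged = np.abs(np.abs(b_new.conj() @ C @ b) - 1.0) < tol
            b = b_new
            if converged:
                break
        B[:, j] = b
    return B
```

Centered data such as the Xc array from the earlier centering sketch can be passed directly; the columns of the returned matrix play the role of the vectors $\mathbf{b}_{1}, \ldots, \mathbf{b}_{n}$.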

3. Robustness of the Complex-Valued FastICA Estimator

3.1. Nonlinearity

From a statistical viewpoint, the nonlinear function $G$ provides information on the higher-order statistics through the expectation $E\{G(|\mathbf{b}^{H}\mathbf{x}|^{2})\}$, and it determines the choice of $g$ (the derivative of the nonlinear function $G$) in the algorithm (4). Hence, the statistical properties of the complex-valued FastICA estimator $\mathbf{b}_{k}$ rest essentially on the selection of the nonlinear function $G$. In this paper, we analyze the robustness of $\mathbf{b}_{k}$ via the IF.
Generally, robustness against outliers is a desirable property for an estimator, meaning that the estimator is insensitive to individual, highly erroneous observations. In this section, we mainly address the question of how to measure the robustness of the complex-valued FastICA estimator $\mathbf{b}_{k}$. Heuristically, the value of the function $G(x)$ must not grow fast with increasing $|x|$ if one needs a robust estimator. Specifically, we list the classical nonlinearities $G(x)$ in Table 1. The curves of the function $G(x)$ and of its derivative with respect to $x$ are plotted in Figure 1 and Figure 2.
From these figures, one can see that the Tukey M-estimator function yields a more robust estimator that is insensitive to outliers, whereas kurtosis gives a non-robust estimator that may be influenced by individual, highly erroneous observations. In fact, the values of $G_{kurt}(x)$ (red line in Figure 1) grow quickly beyond $1$ or $-1$ without leveling off. As $|x|$ increases, outliers do not have much influence on the values of $G_{Tukey}(x)$, in the sense that the complex-valued FastICA estimator based on the Tukey M-estimator cost function is recommended for its better robustness. This behavior can also be seen in the curves of the derivatives of the nonlinearities $G(x)$ in Figure 2. In this paper, we shall, hereafter, analyze the robustness of the complex-valued FastICA estimator via the IF.
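As a numerical illustration of the growth behavior discussed above (a sketch, not taken from the paper; the tuning constant $\theta = 2$ is an arbitrary choice), the kurtosis and Tukey nonlinearities of Table 1 can be compared directly:

```python
import numpy as np

theta = 2.0   # Tukey tuning constant (illustrative value)

def G_kurt(x):
    return 0.5 * x ** 4

def G_tukey(x):
    x = np.asarray(x, dtype=float)
    inside = (theta ** 2 / 6.0) * (1.0 - (1.0 - x ** 2 / theta ** 2) ** 3)
    return np.where(np.abs(x) <= theta, inside, theta ** 2 / 6.0)

for x in (1.0, 3.0, 10.0):
    # G_kurt grows without bound, while G_tukey saturates at theta^2/6:
    print(f"x={x:5.1f}   G_kurt={G_kurt(x):10.1f}   G_tukey={G_tukey(x):6.3f}")
```

Running the loop shows that the kurtosis value explodes with the outlier magnitude while the Tukey value stays capped, which is exactly the bounded-versus-unbounded contrast visible in Figure 1.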

3.2. Influence Function of Complex-Valued FastICA Functional

We analyze the robustness of the complex-valued FastICA estimator by considering the deflationary version of the complex-valued FastICA algorithm. For convenience, we shall hereafter suppose that $\mathbf{b}_{k}$ is the $k$th column of the demixing matrix $\mathbf{B}$, which corresponds to the estimator for finding the $k$th source signal $s_{k}$, in the sense that $y_{k} = \mathbf{b}_{k}^{H}\mathbf{x}$ gives an estimate of the source signal $s_{k}$.
In the complex-valued ICA model (1), the observed signal vector $\mathbf{x}$ comes from an unknown distribution with cdf $F(\mathbf{x})$. Hence, in order to measure the robustness of the complex-valued FastICA estimator, we first define the complex-valued FastICA functional $\mathbf{b}_{k}(F)$ for the deflationary procedure and its influence function.
Definition 1. 
Assume that the observed vector $\mathbf{x}$ follows the complex-valued ICA model (1). The complex-valued FastICA functional $\mathbf{b}_{k}(F)$ for the deflationary procedure is defined as follows:
$$\mathbf{b}_{k}(F) = \arg\max_{\|\mathbf{b}_{k}\|_{\mathbf{C}_{\mathbf{x}}}} E\{G(|\mathbf{b}_{k}^{H}\mathbf{x}|^{2})\}, \qquad (6)$$
subject to the following weighted unitary constraint:
$$\mathbf{b}_{k}(F)^{H}\,\mathbf{C}_{\mathbf{x}}(F)\,\mathbf{b}_{j}(F) = \begin{cases} 0, & j < k, \\ 1, & j = k, \end{cases} \qquad (7)$$
where $G: \mathbb{R}^{+}\cup\{0\} \to \mathbb{R}$ represents a smooth even function and $\mathbf{C}_{\mathbf{x}}(F)$ denotes the covariance matrix functional of the observed vector $\mathbf{x}$ at the distribution $F(\mathbf{x})$.
We note that the constraint condition (7) can be enforced by the deflationary orthogonalization process, which separates the source signals one by one.
Definition 2. 
The influence function (IF) of the complex-valued FastICA functional $\mathbf{b}_{k}(\cdot)$ at $F(\mathbf{x})$ is given by the following:
$$\mathrm{IF}(\mathbf{x}, \mathbf{b}, F(\mathbf{x})) = \lim_{t \to 0}\frac{\mathbf{b}_{k}\big((1-t)F(\mathbf{x}) + t\Delta_{\mathbf{x}}\big) - \mathbf{b}_{k}(F(\mathbf{x}))}{t}, \qquad (8)$$
where $\Delta_{\mathbf{x}}$ denotes the probability measure that puts mass 1 at the point $\mathbf{x} \in \mathbb{C}^{n}$.
We note that the IF of the complex-valued FastICA functional $\mathbf{b}_{k}(\cdot)$ is defined based on the fact that it is Gâteaux differentiable [22] at the distribution $F(\mathbf{x})$ in $\mathbb{C}^{n}$. Thus, it can also be written as follows:
$$\mathrm{IF}(\mathbf{x}, \mathbf{b}, F(\mathbf{x})) = \frac{\partial\,\mathbf{b}_{k}(F_{t}(\mathbf{x}))}{\partial t}\bigg|_{t=0}, \qquad (9)$$
where $F_{t}(\mathbf{x}) = (1-t)F(\mathbf{x}) + t\Delta_{\mathbf{x}}$.
To simplify the notation, we shall hereafter write $\mathrm{IF}(\mathbf{b}_{k})$ for $\mathrm{IF}(\mathbf{x}, \mathbf{b}, F(\mathbf{x}))$ to denote the IF of the complex-valued FastICA functional $\mathbf{b}_{k}(\cdot)$, write $\mathbf{C}_{\mathbf{x}}$ (resp. $\mathbf{m}_{\mathbf{x}}$) for $\mathbf{C}_{\mathbf{x}}(F)$ (resp. $\mathbf{m}_{\mathbf{x}}(F)$) to denote the covariance matrix functional (resp. mean vector functional) of the observed vector $\mathbf{x}$ at the distribution $F(\mathbf{x})$, and write $F_{t}$ for $F_{t}(\mathbf{x})$.
The significance of the IF lies in its heuristic explanation: it reports the influence of the contamination at point $\mathbf{x}$ on the estimate. From the robust statistics point of view, the IF quantifies the asymptotic bias resulting from contamination in the observation [22]. Hence, the forthcoming subsection provides a closed-form expression of the IF of the complex-valued FastICA functional $\mathbf{b}_{k}(\cdot)$.
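Definition 2 also suggests a simple numerical check (a sketch under stated assumptions, not part of the paper): the IF can be approximated by re-running the estimator on a sample contaminated with a small mass $t$ at an outlier $\mathbf{z}$ and forming the finite difference. The function nc_fastica_deflation from the sketch in Section 2.2 is assumed to be available; the replacement-based contamination scheme below is only one of several reasonable choices.

```python
import numpy as np

def empirical_influence(estimator, X, z, t=0.01):
    """Finite-difference approximation of the influence function:
    (b_k(F_t) - b_k(F)) / t  with  F_t = (1 - t) F + t * delta_z.

    estimator : callable mapping an (n, T) sample to a demixing matrix B.
    X         : complex (n, T) sample drawn from F.
    z         : complex (n,) contamination point.
    """
    n, T = X.shape
    B0 = estimator(X)
    m = max(1, int(round(t * T)))      # emulate the t-fraction of mass at z
    Xc = X.copy()
    Xc[:, :m] = z[:, None]
    B1 = estimator(Xc)
    return (B1 - B0) / t               # column k approximates IF(z, b_k, F)
```

In practice, the columns of B1 must first be aligned with those of B0 (resolving the ICA permutation and phase ambiguity) before differencing; that bookkeeping is omitted in this sketch.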

3.3. Robustness

In order to analyze the robustness, we use the Lagrange multiplier method to derive the specific expression of the influence function of the complex-valued FastICA functional. Plugging the weighted unitary constraint (7) into the cost function (3), the Lagrange function is obtained as follows:
$$L(\mathbf{b}_{k}, \boldsymbol{\lambda}) = J(\mathbf{b}_{k}) - \lambda_{k}\big[\mathbf{b}_{k}^{H}\mathbf{C}_{\mathbf{x}}\mathbf{b}_{k} - 1\big] - \sum_{j=1}^{k-1}\lambda_{j}\,\mathbf{b}_{j}^{H}\mathbf{C}_{\mathbf{x}}\mathbf{b}_{k}, \qquad (10)$$
where $\lambda_{j} \in \mathbb{R}$, $j = 1, \ldots, k$, are the penalty factors and $J(\mathbf{b}_{k})$ is the cost function of the complex-valued FastICA. Note that the observed data $\mathbf{x}$ are considered without a preprocessing procedure, in the sense that $J(\mathbf{b}_{k}) = E\{G(|y_{\mathbf{x},k}|^{2})\} = E\{G(y_{\mathbf{x},k}\,y_{\mathbf{x},k}^{*})\}$, wherein $y_{\mathbf{x},k} = \mathbf{b}_{k}^{H}(\mathbf{x} - \mathbf{m}_{\mathbf{x}}) = \sum_{i=1}^{n}b_{i}^{*}(x_{i} - m_{i})$, and $b_{i}$, $x_{i}$, $m_{i}$ denote the $i$th elements of $\mathbf{b}_{k}$, $\mathbf{x}$, $\mathbf{m}_{\mathbf{x}}$, respectively. After differentiating $L(\mathbf{b}_{k}, \boldsymbol{\lambda})$ with respect to $\mathbf{b}_{k} = (b_{1}, \ldots, b_{n})^{T}$ and equating it to zero, one can obtain
$$\nabla_{\mathbf{b}_{k}}J(\mathbf{b}_{k}) = \nabla_{\mathbf{b}_{k}}\,\lambda_{k}\big[\mathbf{b}_{k}^{H}\mathbf{C}_{\mathbf{x}}\mathbf{b}_{k} - 1\big] + \nabla_{\mathbf{b}_{k}}\sum_{j=1}^{k-1}\lambda_{j}\,\mathbf{b}_{j}^{H}\mathbf{C}_{\mathbf{x}}\mathbf{b}_{k},$$
where $\nabla_{\mathbf{b}}$ denotes the gradient operator with respect to $\mathbf{b}$, yielding the following:
$$\frac{\partial\,E\{G(y_{\mathbf{x},k}\,y_{\mathbf{x},k}^{*})\}}{\partial b_{i}} = E\{g(y_{\mathbf{x},k}\,y_{\mathbf{x},k}^{*})\,y_{\mathbf{x},k}^{*}\,[x_{i} - m_{i}]\},$$
$$\frac{\partial\,\lambda_{k}\,\mathbf{b}_{k}^{H}\mathbf{C}_{\mathbf{x}}\mathbf{b}_{k}}{\partial b_{i}} = \lambda_{k}\,\mathbf{b}_{k}^{H}\mathbf{c}_{i},$$
$$\frac{\partial\,\lambda_{j}\,\mathbf{b}_{j}^{H}\mathbf{C}_{\mathbf{x}}\mathbf{b}_{k}}{\partial b_{i}} = \lambda_{j}\,\mathbf{b}_{j}^{H}\mathbf{c}_{i},$$
where $\mathbf{c}_{i}$ represents the $i$th column of the covariance matrix functional $\mathbf{C}_{\mathbf{x}}(F)$. Furthermore,
$$E\{g(|y_{\mathbf{x},k}|^{2})\,y_{\mathbf{x},k}^{*}\,[\mathbf{x} - \mathbf{m}_{\mathbf{x}}]\} = \lambda_{k}\,\mathbf{C}_{\mathbf{x}}^{T}\mathbf{b}_{k} + \sum_{j=1}^{k-1}\lambda_{j}\,\mathbf{C}_{\mathbf{x}}^{T}\mathbf{b}_{j}.$$
For the sake of convenience in writing, we define
$$L_{k}(F) \triangleq E\{g(|y_{\mathbf{x},k}|^{2})\,y_{\mathbf{x},k}^{*}\,[\mathbf{x} - \mathbf{m}_{\mathbf{x}}]\} \qquad (13)$$
and
$$R_{k}(F) \triangleq \lambda_{k}\,\mathbf{C}_{\mathbf{x}}^{T}\mathbf{b}_{k} + \sum_{j=1}^{k-1}\lambda_{j}\,\mathbf{C}_{\mathbf{x}}^{T}\mathbf{b}_{j}, \qquad (14)$$
respectively, yielding the following:
$$L_{k}(F) = R_{k}(F). \qquad (15)$$
Theorem 1. 
The influence function of the complex-valued FastICA functional $\mathbf{b}_{k}(\cdot)$, $k \in \{1, 2, \ldots, n\}$, at the centered mixture distribution $F(\mathbf{x})$ is as follows:
$$\mathrm{IF}(\mathbf{b}_{k}) = -\bar{s}_{k}\sum_{i=1}^{k-1}\big[u_{i}(\bar{s}_{i}) + \bar{s}_{i}\big]\mathbf{b}_{i} + \frac{1 - |\bar{s}_{k}|^{2}}{2}\,\mathbf{b}_{k} + u_{k}(\bar{s}_{k})\sum_{i=k+1}^{n}\bar{s}_{i}\,\mathbf{b}_{i}, \qquad (16)$$
where $u_{k}(x) = \dfrac{\big(g(|x|^{2}) - c_{k}\big)x - \rho_{k}}{c_{k} - \mu_{k}}$, $c_{k} = E\{g(|s_{k}|^{2})\,|s_{k}|^{2}\}$, $\mu_{k} = E\{g(|s_{k}|^{2})\}$, $\rho_{k} = E\{g(|s_{k}|^{2})\,s_{k}^{2}\}$, $g(\cdot)$ is the derivative of the nonlinearity $G(\cdot)$ in the cost function (3), and $\bar{s}_{i} = \mathbf{b}_{i}^{H}(\mathbf{z} - \mathbf{m}_{\mathbf{x}})$ denotes the projection of the centered contamination point $\mathbf{z}$ into the direction $\mathbf{b}_{i}$.
To prove Theorem 1, the following two Lemmas (comprising five and four statements, respectively) are needed; they are proved in Appendix A and Appendix B.
Lemma 1. 
Assume that the observed data $\mathbf{x}$ obey the distribution $F(\mathbf{x})$ with mean vector $\mathbf{m}_{\mathbf{x}}$, and the sources $\mathbf{s}$ follow Assumption 1. Let $y_{\mathbf{x},i}(F_{t}) = \mathbf{b}_{i}^{H}(F_{t})(\mathbf{x} - \mathbf{m}_{\mathbf{x}})$, $L_{k}(F) = E\{g(|y_{\mathbf{x},k}|^{2})\,y_{\mathbf{x},k}^{*}\,[\mathbf{x} - \mathbf{m}_{\mathbf{x}}]\}$, and $B_{k}(F) = E\{g(|y_{\mathbf{x},k}|^{2})\,[\mathbf{x} - \mathbf{m}_{\mathbf{x}}][\mathbf{x} - \mathbf{m}_{\mathbf{x}}]^{H}\}$. One obtains the following:
(a) $\mathrm{IF}(y_{\mathbf{x},i}) = \mathrm{IF}(\mathbf{b}_{i})^{H}(\mathbf{x} - \mathbf{m}_{\mathbf{x}}) - \bar{s}_{i}$;
(b) $L_{k}(F) = c_{k}\,\mathbf{a}_{k}$;
(c) $B_{k}(F) = \mu_{k}\,\mathbf{A}\mathbf{A}^{H} + (c_{k} - \mu_{k})\,\mathbf{a}_{k}\mathbf{a}_{k}^{H}$;
(d) $\mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{i}) = -\mathrm{IF}(\mathbf{b}_{k})^{H}\mathbf{a}_{i} - \bar{s}_{k}\bar{s}_{i}^{*}$, $i = 1, \ldots, k-1$;
(e) $2\,\mathrm{Re}\{\mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{k})\} = 1 - |\bar{s}_{k}|^{2}$;
where $\bar{s}_{i} = \mathbf{b}_{i}^{H}(\mathbf{z} - \mathbf{m}_{\mathbf{x}})$, $i = 1, \ldots, n$, denotes the projection of the centered contamination point $\mathbf{z}$ into the direction $\mathbf{b}_{i}$; $c_{k} = E\{g(|s_{k}|^{2})\,|s_{k}|^{2}\}$ and $\mu_{k} = E\{g(|s_{k}|^{2})\}$; and $\mathbf{a}_{k}$ represents the $k$th column of the matrix $\mathbf{A}$.
Proof. 
See Appendix A. □
Lemma 2. 
Assume that the observed data $\mathbf{x}$ obey the distribution $F(\mathbf{x})$ with mean vector $\mathbf{m}_{\mathbf{x}}$, and the sources $\mathbf{s}$ follow Assumption 1. For $\lambda_{i}$, $i = 1, \ldots, k$, in (10) at the distribution $F(\mathbf{x})$, we have the following:
(a) $\lambda_{i}(F) = 0$, $i = 1, \ldots, k-1$;
(b) $\lambda_{k}(F) = c_{k}$;
(c) $\mathrm{IF}(\lambda_{i}) = (c_{k} - \mu_{k})\,\mathbf{a}_{k}^{H}\mathrm{IF}(\mathbf{b}_{i}) + \big[g(|\bar{s}_{k}|^{2}) - \mu_{k}\big]\bar{s}_{k}\bar{s}_{i} - \rho_{k}\bar{s}_{i}$, $i = 1, \ldots, k-1$;
(d) $\mathrm{IF}(\lambda_{k}) = \big[g(|\bar{s}_{k}|^{2}) - c_{k}\big]|\bar{s}_{k}|^{2} + E\{g'(|s_{k}|^{2})\,v_{k}(\bar{s}_{k})\,|s_{k}|^{2}\} - 2\,\mathrm{Re}\,E\{g'(|s_{k}|^{2})\,\bar{s}_{k}\,s_{k}\}$;
where $v_{k}(\bar{s}_{k}) = |s_{k}|^{2} - |s_{k}|^{2}|\bar{s}_{k}|^{2} - 2\,\mathrm{Re}(\bar{s}_{k}^{*}s_{k})$, and $c_{k}$, $\mu_{k}$, $\bar{s}_{i}$, $\rho_{k}$ are defined as in Theorem 1.
Proof. 
See Appendix B. □
Based on Lemmas 1 and 2, we now prove Theorem 1 in the following four steps:
Proof. 
Step 1. Differentiating $L_{k}(F_{t})$ and $R_{k}(F_{t})$ in (15) with respect to $t$ at $t = 0$ yields the following equation:
$$\frac{\partial L_{k}(F_{t})}{\partial t}\bigg|_{t=0} = \frac{\partial R_{k}(F_{t})}{\partial t}\bigg|_{t=0}, \qquad (17)$$
from which a closed-form expression of the influence function of the complex-valued FastICA functional, $\mathrm{IF}(\mathbf{b}_{k})$, will be obtained.
Step 2. Calculating the left-hand side of Equation (17). Combining $F_{t}(\mathbf{x}) = (1-t)F(\mathbf{x}) + t\Delta_{\mathbf{x}}$ with Equation (13), we conclude the following:
$$L_{k}(F_{t}) = (1-t)\,E\{g(|y_{\mathbf{x},k}|^{2})\,y_{\mathbf{x},k}^{*}\,[\mathbf{x} - \mathbf{m}_{\mathbf{x}}]\} + t\,g(|y_{\mathbf{z},k}|^{2})\,y_{\mathbf{z},k}^{*}\,[\mathbf{z} - \mathbf{m}_{\mathbf{x}}].$$
Differentiating $L_{k}(F_{t})$ with respect to $t$ at $t = 0$ and leveraging Lemma 1, one can obtain the following:
$$\begin{aligned} \frac{\partial L_{k}(F_{t})}{\partial t}\bigg|_{t=0} ={}& -L_{k}(F) + (1-t)\Big[E\{g'(|y_{\mathbf{x},k}|^{2})\big(\mathrm{IF}(y_{\mathbf{x},k})\,y_{\mathbf{x},k}^{*} + y_{\mathbf{x},k}\,\mathrm{IF}(y_{\mathbf{x},k})^{*}\big)\,y_{\mathbf{x},k}^{*}\,(\mathbf{x} - \mathbf{m}_{\mathbf{x}})\} \\ &+ E\{g(|y_{\mathbf{x},k}|^{2})\,\mathrm{IF}(y_{\mathbf{x},k})^{*}\,(\mathbf{x} - \mathbf{m}_{\mathbf{x}})\} - E\{g(|y_{\mathbf{x},k}|^{2})\,y_{\mathbf{x},k}^{*}\,\mathrm{IF}(\mathbf{m}_{\mathbf{x}})\}\Big]\Big|_{t=0} + g(|y_{\mathbf{z},k}|^{2})\,y_{\mathbf{z},k}^{*}\,(\mathbf{z} - \mathbf{m}_{\mathbf{x}})\Big|_{t=0} \\ ={}& -c_{k}\mathbf{a}_{k} + E\{g'(|s_{k}|^{2})\,|s_{k}|^{2}\,v_{k}(\bar{s}_{k})\}\,\mathbf{a}_{k} + \big[\mu_{k}\mathbf{A}\mathbf{A}^{H} + (c_{k} - \mu_{k})\,\mathbf{a}_{k}\mathbf{a}_{k}^{H}\big]\mathrm{IF}(\mathbf{b}_{k}) \\ &- 2\,\mathrm{Re}\big(E\{g'(|s_{k}|^{2})\,\bar{s}_{k}\,s_{k}\}\big)\mathbf{a}_{k} + \rho_{k}\bar{s}_{k}\,\mathbf{a}_{k} + \big[g(|\bar{s}_{k}|^{2})\,\bar{s}_{k} - \rho_{k}\big](\mathbf{z} - \mathbf{m}_{\mathbf{x}}). \end{aligned}$$
Step 3. Calculating the right-hand side of (17). From (14), we have the following:
$$R_{k}(F_{t}) = \lambda_{k}(F_{t})\,\mathbf{C}_{\mathbf{x}}^{T}(F_{t})\,\mathbf{b}_{k}(F_{t}) + \sum_{j=1}^{k-1}\lambda_{j}(F_{t})\,\mathbf{C}_{\mathbf{x}}^{T}(F_{t})\,\mathbf{b}_{j}(F_{t}).$$
Differentiating $R_{k}(F_{t})$ with respect to $t$ at $t = 0$ and leveraging Lemma 2, one can obtain the following:
$$\begin{aligned} \frac{\partial R_{k}(F_{t})}{\partial t}\bigg|_{t=0} ={}& \mathrm{IF}(\lambda_{k})(\mathbf{A}\mathbf{A}^{H})^{T}\mathbf{b}_{k} + \lambda_{k}(F)\,\mathrm{IF}(\mathbf{C}_{\mathbf{x}})^{T}\mathbf{b}_{k} + \lambda_{k}(F)(\mathbf{A}\mathbf{A}^{H})^{T}\mathrm{IF}(\mathbf{b}_{k}) \\ &+ \sum_{j=1}^{k-1}\Big[\mathrm{IF}(\lambda_{j})(\mathbf{A}\mathbf{A}^{H})^{T}\mathbf{b}_{j} + \lambda_{j}(F)\,\mathrm{IF}(\mathbf{C}_{\mathbf{x}})^{T}\mathbf{b}_{j} + \lambda_{j}(F)(\mathbf{A}\mathbf{A}^{H})^{T}\mathrm{IF}(\mathbf{b}_{j})\Big] \\ ={}& \mathrm{IF}(\lambda_{k})\,\mathbf{a}_{k} + c_{k}\big[(\mathbf{z} - \mathbf{m}_{\mathbf{x}})\bar{s}_{k} - \mathbf{a}_{k}\big] + c_{k}(\mathbf{A}\mathbf{A}^{H})^{T}\mathrm{IF}(\mathbf{b}_{k}) + \sum_{j=1}^{k-1}\mathrm{IF}(\lambda_{j})\,\mathbf{a}_{j} \\ ={}& \big[g(|\bar{s}_{k}|^{2})\,|\bar{s}_{k}|^{2} - c_{k}|\bar{s}_{k}|^{2}\big]\mathbf{a}_{k} - c_{k}\mathbf{a}_{k} + c_{k}(\mathbf{z} - \mathbf{m}_{\mathbf{x}})\bar{s}_{k} + c_{k}(\mathbf{A}\mathbf{A}^{H})^{T}\mathrm{IF}(\mathbf{b}_{k}) \\ &+ E\{g'(|s_{k}|^{2})\,|s_{k}|^{2}\,v_{k}(\bar{s}_{k})\}\,\mathbf{a}_{k} - 2\,\mathrm{Re}\big(E\{g'(|s_{k}|^{2})\,\bar{s}_{k}\,s_{k}\}\big)\mathbf{a}_{k} - \rho_{k}\sum_{j=1}^{k-1}\bar{s}_{j}\,\mathbf{a}_{j} \\ &+ (c_{k} - \mu_{k})\sum_{j=1}^{k-1}\mathbf{a}_{k}^{H}\mathrm{IF}(\mathbf{b}_{j})\,\mathbf{a}_{j} + \big[g(|\bar{s}_{k}|^{2}) - \mu_{k}\big]\bar{s}_{k}\sum_{j=1}^{k-1}\bar{s}_{j}\,\mathbf{a}_{j}. \end{aligned}$$
Step 4. Deriving the specific expression of the IF of the complex-valued FastICA functional. Plugging the results derived in Steps 2 and 3 into (17), after tedious manipulation, one can obtain the following:
$$\begin{aligned} \mathrm{IF}(\mathbf{b}_{k}) ={}& \frac{g(|\bar{s}_{k}|^{2})\bar{s}_{k} - c_{k}\bar{s}_{k} - \rho_{k}}{c_{k} - \mu_{k}}\,\mathbf{B}\mathbf{B}^{H}(\mathbf{z} - \mathbf{m}_{\mathbf{x}}) - \frac{g(|\bar{s}_{k}|^{2}) - c_{k}}{c_{k} - \mu_{k}}\,|\bar{s}_{k}|^{2}\,\mathbf{B}\mathbf{B}^{H}\mathbf{a}_{k} + \frac{\rho_{k}\bar{s}_{k}}{c_{k} - \mu_{k}}\,\mathbf{B}\mathbf{B}^{H}\mathbf{a}_{k} \\ &+ \frac{\rho_{k}}{c_{k} - \mu_{k}}\sum_{j=1}^{k-1}\bar{s}_{j}\,\mathbf{B}\mathbf{B}^{H}\mathbf{a}_{j} - \sum_{j=1}^{k-1}\big(\mathbf{a}_{k}^{T}\mathrm{IF}(\mathbf{b}_{j})\big)\,\mathbf{B}\mathbf{B}^{H}\mathbf{a}_{j} - \frac{g(|\bar{s}_{k}|^{2}) - \mu_{k}}{c_{k} - \mu_{k}}\,\bar{s}_{k}\sum_{j=1}^{k-1}\bar{s}_{j}\,\mathbf{B}\mathbf{B}^{H}\mathbf{a}_{j} + \frac{1 - |\bar{s}_{k}|^{2}}{2}\,\mathbf{B}\mathbf{B}^{H}\mathbf{a}_{k} \\ ={}& \frac{g(|\bar{s}_{k}|^{2})\bar{s}_{k} - c_{k}\bar{s}_{k} - \rho_{k}}{c_{k} - \mu_{k}}\Big[\sum_{j\neq k}^{n}\mathbf{b}_{j}\bar{s}_{j} + \bar{s}_{k}\mathbf{b}_{k}\Big] - \frac{g(|\bar{s}_{k}|^{2}) - c_{k}}{c_{k} - \mu_{k}}\,|\bar{s}_{k}|^{2}\,\mathbf{b}_{k} + \frac{\rho_{k}\bar{s}_{k}}{c_{k} - \mu_{k}}\,\mathbf{b}_{k} \\ &+ \frac{\rho_{k}}{c_{k} - \mu_{k}}\sum_{j=1}^{k-1}\bar{s}_{j}\,\mathbf{b}_{j} - \sum_{j=1}^{k-1}\big(\mathbf{a}_{k}^{T}\mathrm{IF}(\mathbf{b}_{j})\big)\,\mathbf{b}_{j} - \frac{g(|\bar{s}_{k}|^{2}) - \mu_{k}}{c_{k} - \mu_{k}}\,\bar{s}_{k}\sum_{j=1}^{k-1}\bar{s}_{j}\,\mathbf{b}_{j} + \frac{1 - |\bar{s}_{k}|^{2}}{2}\,\mathbf{b}_{k} \\ ={}& -\bar{s}_{k}\sum_{j=1}^{k-1}\bar{s}_{j}\,\mathbf{b}_{j} + u_{k}(\bar{s}_{k})\sum_{j=k+1}^{n}\bar{s}_{j}\,\mathbf{b}_{j} - \sum_{j=1}^{k-1}\big(\mathbf{a}_{k}^{T}\mathrm{IF}(\mathbf{b}_{j})\big)\,\mathbf{b}_{j} + \frac{1 - |\bar{s}_{k}|^{2}}{2}\,\mathbf{b}_{k}. \end{aligned} \qquad (20)$$
Combining the preceding equality with $\mathbf{a}_{k}^{H}\mathrm{IF}(\mathbf{b}_{j}) = u_{j}(\bar{s}_{j})\,\bar{s}_{k}$, $j = 1, \ldots, k-1$, concludes our proof. □
Remark 1. 
In the proofs of Theorem 1 and Lemmas 1–2, we need to consider the following facts:
$$\mathbf{A}^{H}\mathbf{b}_{k} = \mathbf{e}_{k}, \qquad \mathbf{A}\mathbf{e}_{k} = \mathbf{a}_{k}, \qquad \mathbf{b}_{i}^{H}\mathbf{a}_{j} = \begin{cases} 0, & j < k, \\ 1, & j = k, \end{cases}$$
where $\mathbf{A}$ denotes the mixing matrix in the ICA model (1); $\mathbf{a}_{k}$ is the $k$th column of $\mathbf{A}$; $\mathbf{e}_{k}$ represents the vector with 1 in the $k$th element and 0 elsewhere; and $\mathbf{b}_{i}$ is the $i$th column of the demixing matrix $\mathbf{B}$.
Remark 2. 
Note that pre-multiplying both sides of Equation (20) by $\mathbf{a}_{i}^{H}$, we have the following:
$$\mathbf{a}_{i}^{H}\mathrm{IF}(\mathbf{b}_{k}) = -\bar{s}_{k}\sum_{j=1}^{k-1}\bar{s}_{j}\,\mathbf{a}_{i}^{H}\mathbf{b}_{j} + u_{k}(\bar{s}_{k})\sum_{j=k+1}^{n}\bar{s}_{j}\,\mathbf{a}_{i}^{H}\mathbf{b}_{j} - \sum_{j=1}^{k-1}\big(\mathbf{a}_{k}^{T}\mathrm{IF}(\mathbf{b}_{j})\big)\,\mathbf{a}_{i}^{H}\mathbf{b}_{j} + \frac{1 - |\bar{s}_{k}|^{2}}{2}\,\mathbf{a}_{i}^{H}\mathbf{b}_{k}. \qquad (21)$$
After analyzing (21), one can observe the following: $\mathbf{a}_{i}^{H}\mathrm{IF}(\mathbf{b}_{k}) = u_{k}(\bar{s}_{k})\,\bar{s}_{i}$ for $i > k$, or, equivalently, $\mathbf{a}_{k}^{H}\mathrm{IF}(\mathbf{b}_{j}) = u_{j}(\bar{s}_{j})\,\bar{s}_{k}$ for $k < j$.
Remark 3. 
The expression of the complex-valued FastICA IF can also be written as follows:
$$\mathrm{IF}(\mathbf{b}_{k}) = \begin{cases} \dfrac{1 - |\bar{s}_{1}|^{2}}{2}\,\mathbf{b}_{1} + u_{1}(\bar{s}_{1})\displaystyle\sum_{i=2}^{n}\bar{s}_{i}\,\mathbf{b}_{i}, & k = 1, \\[2ex] -\bar{s}_{k}\displaystyle\sum_{i=1}^{k-1}\big[u_{i}(\bar{s}_{i}) + \bar{s}_{i}\big]\mathbf{b}_{i} + \dfrac{1 - |\bar{s}_{k}|^{2}}{2}\,\mathbf{b}_{k} + u_{k}(\bar{s}_{k})\displaystyle\sum_{i=k+1}^{n}\bar{s}_{i}\,\mathbf{b}_{i}, & 1 < k < n, \\[2ex] -\bar{s}_{n}\displaystyle\sum_{i=1}^{n-1}\big[u_{i}(\bar{s}_{i}) + \bar{s}_{i}\big]\mathbf{b}_{i} + \dfrac{1 - |\bar{s}_{n}|^{2}}{2}\,\mathbf{b}_{n}, & k = n. \end{cases} \qquad (22)$$
By observing the closed-form expression of the IF for the complex-valued FastICA functional $\mathbf{b}_{k}$ in Theorem 1, we find that the IF of $\mathbf{b}_{k}$ is a weighted sum of the separation vectors $\mathbf{b}_{1}, \ldots, \mathbf{b}_{n}$ with unbounded weight coefficient functions of the projections $\bar{s}_{1}, \ldots, \bar{s}_{n}$ of the contamination point. This confirms that the values of $\mathrm{IF}(\mathbf{b}_{k})$ are large when outliers are present in the source signals: the greater the projections $\bar{s}_{1}, \ldots, \bar{s}_{n}$ of the contaminated point, the larger the values of $\mathrm{IF}(\mathbf{b}_{k})$.
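A small numerical sketch (illustrative only; it relies on the expression of Theorem 1 as reconstructed above and on the logarithmic nonlinearity with $g(u) = 1/(1+u)$, and the Monte Carlo source model is an arbitrary choice) shows how the weight $u_{k}(\cdot)$ grows with the magnitude of the contamination projection, which is the unboundedness just discussed:

```python
import numpy as np

g = lambda u: 1.0 / (1.0 + u)      # log nonlinearity from Table 1

def if_constants(s_k, g):
    """Monte Carlo estimates of c_k, mu_k, rho_k for a unit-variance source sample."""
    u = np.abs(s_k) ** 2
    c_k   = np.mean(g(u) * u)              # c_k   = E{g(|s_k|^2) |s_k|^2}
    mu_k  = np.mean(g(u))                  # mu_k  = E{g(|s_k|^2)}
    rho_k = np.mean(g(u) * s_k ** 2)       # rho_k = E{g(|s_k|^2) s_k^2}
    return c_k, mu_k, rho_k

def u_weight(x, c_k, mu_k, rho_k, g):
    """Weight function u_k(x) from Theorem 1 (as reconstructed above)."""
    return ((g(np.abs(x) ** 2) - c_k) * x - rho_k) / (c_k - mu_k)

# A noncircular, non-constant-modulus source (so that c_k - mu_k is nonzero).
rng = np.random.default_rng(1)
s = rng.normal(size=100000) + 0.3j * rng.normal(size=100000)
s /= np.sqrt(np.mean(np.abs(s) ** 2))
c_k, mu_k, rho_k = if_constants(s, g)
for sbar in (0.5, 5.0, 50.0):
    print(f"|s_bar|={sbar:5.1f}   |u_k(s_bar)|={abs(u_weight(sbar, c_k, mu_k, rho_k, g)):10.3f}")
```

The printed weights grow without bound as the outlier projection grows, in line with the discussion above.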

4. Equivariance of the Complex-Valued FastICA Estimator

4.1. Equivariance of Complex-Valued ICA

Equivariance is a fundamental property in statistics [23]: it requires that a transformation of the sample data be matched by the corresponding transformation of the estimated parameter in the model, which results in the following definition.
Definition 3. 
Assume that $\hat{\mathbf{A}} = \mathbf{A}(\mathbf{X})$ is an estimator of the mixing matrix $\mathbf{A}$ in the complex-valued ICA model (1), which relies on the sample matrix $\mathbf{X} = [\mathbf{x}(1), \ldots, \mathbf{x}(T)]$ with $T$ sample points. Then, we call the estimator $\hat{\mathbf{A}}$ equivariant if
$$\mathbf{A}(\mathbf{T}\mathbf{X}) = \mathbf{T}\,\mathbf{A}(\mathbf{X}) \qquad (23)$$
for any invertible transformation $\mathbf{T}$.
In order to analyze the equivariance of the complex-valued FastICA estimator, the following Theorem is needed.
Theorem 2. 
Let $\mathbf{C} = \mathbf{B}\mathbf{A}$ be the global mixing–demixing system of the complex-valued ICA model (1) and let $\hat{\mathbf{A}}$ be equivariant. Then the global mixing–demixing system $\mathbf{C}$ of ICA algorithms based on optimizing the cost function depends only on the source signal matrix $\mathbf{S} = [\mathbf{s}(1), \ldots, \mathbf{s}(T)]$, instead of the mixing system $\mathbf{A}$ and demixing system $\mathbf{B}$.
Proof. 
Denote by $\mathbf{C}$ the mixing–demixing matrix $\mathbf{B}\mathbf{A}$; the estimation of the sources can be rewritten as follows:
$$\mathbf{y} = \mathbf{C}\mathbf{s}. \qquad (24)$$
In fact, the main property of an equivariant estimator for the BSS problem is that it provides uniform performance, in the sense that
$$\mathbf{y} = \mathbf{B}\mathbf{x} = \mathbf{A}(\mathbf{X})^{-1}\mathbf{A}\mathbf{s} = \mathbf{A}(\mathbf{A}\mathbf{S})^{-1}\mathbf{A}\mathbf{s} = \mathbf{A}(\mathbf{S})^{-1}\mathbf{s}, \qquad (25)$$
where $\mathbf{S} = [\mathbf{s}(1), \ldots, \mathbf{s}(T)]$ is the source signal matrix with $T$ sample points.
By comparing Equations (24) and (25), the result follows from the fact that $\mathbf{C} = \mathbf{A}(\mathbf{S})^{-1}$. □
Remark 4. 
From Theorem 2, the current estimate of the source, $\mathbf{y}_{t}$, obtained by the equivariant estimator relies only on the source signal matrix $\mathbf{S}$ and the source $\mathbf{s}_{t}$, rather than on the mixing matrix $\mathbf{A}$.
Remark 5. 
Since an estimator based on optimizing a cost function is equivariant [24], we only need to consider the equivariance of the complex-valued FastICA in terms of the global mixing–demixing system $\mathbf{C} = \mathbf{B}\mathbf{A}$; i.e., the update learning rule of the estimated parameter for the ICA algorithm depends on the composite system.

4.2. Equivariance of Symmetric Complex-Valued FastICA

For convenience, we analyze the equivariance of the complex-valued FastICA algorithm by focusing on the symmetric version, in which the source signals are estimated in parallel. In particular, the demixing vectors $\mathbf{b}_{1}, \mathbf{b}_{2}, \ldots, \mathbf{b}_{n}$ are iterated simultaneously, as follows:
$$\begin{aligned} \mathbf{b}_{1} &\leftarrow E\{g(|\mathbf{b}_{1}^{H}\mathbf{x}|^{2})(\mathbf{b}_{1}^{H}\mathbf{x})^{*}\mathbf{x}\} + E\{g'(|\mathbf{b}_{1}^{H}\mathbf{x}|^{2})|\mathbf{b}_{1}^{H}\mathbf{x}|^{2} + g(|\mathbf{b}_{1}^{H}\mathbf{x}|^{2})\}\,\mathbf{b}_{1} + E\{\mathbf{x}\mathbf{x}^{T}\}\,E\{g'(|\mathbf{b}_{1}^{H}\mathbf{x}|^{2})\big((\mathbf{b}_{1}^{H}\mathbf{x})^{*}\big)^{2}\}\,\mathbf{b}_{1}^{*}, \\ &\;\;\vdots \\ \mathbf{b}_{n} &\leftarrow E\{g(|\mathbf{b}_{n}^{H}\mathbf{x}|^{2})(\mathbf{b}_{n}^{H}\mathbf{x})^{*}\mathbf{x}\} + E\{g'(|\mathbf{b}_{n}^{H}\mathbf{x}|^{2})|\mathbf{b}_{n}^{H}\mathbf{x}|^{2} + g(|\mathbf{b}_{n}^{H}\mathbf{x}|^{2})\}\,\mathbf{b}_{n} + E\{\mathbf{x}\mathbf{x}^{T}\}\,E\{g'(|\mathbf{b}_{n}^{H}\mathbf{x}|^{2})\big((\mathbf{b}_{n}^{H}\mathbf{x})^{*}\big)^{2}\}\,\mathbf{b}_{n}^{*}, \end{aligned}$$
where $\{\mathbf{b}_{1}, \mathbf{b}_{2}, \ldots, \mathbf{b}_{n}\}$ is an orthonormal set. In order to prevent different parameters from converging to the same maxima, the estimated sources $y_{1} = \mathbf{b}_{1}^{H}\mathbf{x},\ y_{2} = \mathbf{b}_{2}^{H}\mathbf{x},\ \ldots,\ y_{n} = \mathbf{b}_{n}^{H}\mathbf{x}$ need to be decorrelated, which can be accomplished by the following classic symmetric orthogonalization approach involving matrix square roots:
$$\mathbf{B} \leftarrow (\mathbf{B}\mathbf{B}^{H})^{-\frac{1}{2}}\,\mathbf{B},$$
where $\mathbf{B} = (\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ is the matrix of these vectors. Note that the inverse square root $(\mathbf{B}\mathbf{B}^{H})^{-\frac{1}{2}}$ can be obtained from the eigenvalue decomposition (EVD) $\mathbf{B}\mathbf{B}^{H} = \mathbf{E}\,\mathrm{diag}(d_{1}, d_{2}, \ldots, d_{n})\,\mathbf{E}^{H}$. For any vector $\mathbf{h} = (h_{1}, \ldots, h_{n})^{T}$, we denote the following:
$$\mathrm{diag}(\mathbf{h}) = \begin{pmatrix} h_{1} & & \\ & \ddots & \\ & & h_{n} \end{pmatrix}.$$
Hence, we have the following:
$$(\mathbf{B}\mathbf{B}^{H})^{-\frac{1}{2}} = \mathbf{E}\,\mathrm{diag}\big(d_{1}^{-\frac{1}{2}}, d_{2}^{-\frac{1}{2}}, \ldots, d_{n}^{-\frac{1}{2}}\big)\,\mathbf{E}^{H}.$$
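A brief sketch of this symmetric decorrelation step (illustrative NumPy code, not the authors' implementation) follows; the quick check at the end verifies that the orthogonalized matrix satisfies $\mathbf{B}\mathbf{B}^{H} = \mathbf{I}$ numerically.

```python
import numpy as np

def symmetric_decorrelation(B):
    """Symmetric orthogonalization  B <- (B B^H)^(-1/2) B  via the EVD of B B^H."""
    d, E = np.linalg.eigh(B @ B.conj().T)            # B B^H = E diag(d) E^H
    inv_sqrt = E @ np.diag(d ** -0.5) @ E.conj().T   # (B B^H)^(-1/2)
    return inv_sqrt @ B

# quick check: after the step, B B^H is (numerically) the identity matrix
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Bo = symmetric_decorrelation(B)
print(np.allclose(Bo @ Bo.conj().T, np.eye(3)))      # expected: True
```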
Using the matrix representation, the symmetric version of the complex-valued FastICA algorithm can be rewritten with the following learning rule:
Step 1. Choose arbitrary initial values for $\mathbf{b}_{1}, \ldots, \mathbf{b}_{n} \in \mathcal{O}(n)$; orthogonalize the matrix $\mathbf{B} = (\mathbf{b}_{1}, \ldots, \mathbf{b}_{n})$ as in Step 2 below.
Step 2. Let $\mathbf{y} = \mathbf{B}\mathbf{x}$ and run the iteration
$$\begin{aligned} \mathbf{B} &\leftarrow E\{\mathrm{diag}(g(|\mathbf{y}|^{2}))\,\mathbf{y}\,\mathbf{x}^{H}\} + E\{\mathrm{diag}\big(g'(|\mathbf{y}|^{2})|\mathbf{y}|^{2} + g(|\mathbf{y}|^{2})\big)\}\,\mathbf{B} + E\{\mathrm{diag}\big(g'(|\mathbf{y}|^{2})\,\mathbf{y}^{2}\big)\}\,E\{\mathbf{y}^{*}\mathbf{x}^{H}\}, \\ \mathbf{B} &\leftarrow (\mathbf{B}\mathbf{B}^{H})^{-\frac{1}{2}}\,\mathbf{B}, \end{aligned} \qquad (28)$$
until convergence.
Definition 4. 
Assume that $\mathbf{B} = \mathbf{B}(\mathbf{X})$ is an estimator of the demixing matrix $\mathbf{B}$ in the complex-valued ICA model (1), which relies on the sample matrix $\mathbf{X} = [\mathbf{x}(1), \ldots, \mathbf{x}(T)]$ with $T$ sample points. Then, we call the estimator $\mathbf{B}(\mathbf{X})$ linear-equivariant if
$$\mathbf{C} = \mathbf{B}(\mathbf{S}),$$
where $\mathbf{x}(i) = \mathbf{A}\mathbf{s}(i)$, $i = 1, \ldots, T$, $\mathbf{S} = [\mathbf{s}(1), \ldots, \mathbf{s}(T)]$, and $\mathbf{C} = \mathbf{B}\mathbf{A}$.
From Definition 4, we can obtain the equivariance of the symmetric complex-valued FastICA algorithm:
Theorem 3. 
The algorithm (28) for complex-valued BSS is linear-equivariant, and $\mathbf{B}(\mathbf{S}) = \mathbf{A}(\mathbf{S})^{-1}$.
Proof. 
We denote by $\mathbf{C}$ the mixing–demixing matrix $\mathbf{B}\mathbf{A}$, so that the estimation of the sources can be rewritten as follows:
$$\mathbf{y} = \mathbf{C}\mathbf{s}.$$
Plugging the symmetric orthogonalization into the algorithm (28) and post-multiplying both sides of the complex-valued FastICA update by $\mathbf{A}$, one obtains
$$\mathbf{C} \leftarrow E\{\mathrm{diag}(g(|\mathbf{C}\mathbf{s}|^{2}))\,\mathbf{C}\mathbf{s}\,\mathbf{s}^{H}\mathbf{C}^{H}\mathbf{C}\} + E\{\mathrm{diag}\big(g'(|\mathbf{C}\mathbf{s}|^{2})|\mathbf{C}\mathbf{s}|^{2} + g(|\mathbf{C}\mathbf{s}|^{2})\big)\}\,\mathbf{C} + E\{\mathrm{diag}\big(g'(|\mathbf{C}\mathbf{s}|^{2})\,(\mathbf{C}\mathbf{s})^{2}\big)\}\,E\{(\mathbf{C}\mathbf{s})^{*}\mathbf{s}^{H}\mathbf{C}^{H}\mathbf{C}\}.$$
From the above equation, we have $\mathbf{C} = \mathbf{B}(\mathbf{S})$. Hence, the complex-valued BSS algorithm is linear-equivariant. The remaining conclusion $\mathbf{B}(\mathbf{S}) = \mathbf{A}(\mathbf{S})^{-1}$ can be obtained by combining Theorem 2, Definition 4, and the linear-equivariance just established. □
The results obtained demonstrate the following: (1) Theorems 2 and 3 suggest that the complex-valued FastICA estimator is equivariant; namely, the performance of the complex-valued FastICA algorithm depends only on the source signals $\mathbf{s}$, rather than on the mixing matrix $\mathbf{A}$ and demixing matrix $\mathbf{B}$; (2) there is an invertible relationship between the mixing matrix $\mathbf{A}$ and the demixing matrix $\mathbf{B}$ under Assumption 2.
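The equivariance property lends itself to a simple numerical check (a sketch under stated assumptions, not from the paper): run the same estimator on two mixtures of the same sources produced by different mixing matrices and compare the resulting global systems $\mathbf{C} = \mathbf{B}\mathbf{A}$. The function nc_fastica_deflation from the sketch in Section 2.2 is assumed to be available, and the permutation/phase ambiguity is only eyeballed here via the magnitudes of the entries.

```python
import numpy as np

def global_system(estimator, S, A):
    """Return C = B A for the demixing matrix B estimated from X = A S."""
    X = A @ S
    X = X - X.mean(axis=1, keepdims=True)
    B = estimator(X)              # columns b_j; demixing acts as y_j = b_j^H x
    return B.conj().T @ A         # global mixing-demixing system

rng = np.random.default_rng(3)
n, T = 2, 20000
S = rng.laplace(size=(n, T)) + 0.4j * rng.laplace(size=(n, T))   # noncircular sources
S /= np.sqrt(np.mean(np.abs(S) ** 2, axis=1, keepdims=True))     # unit variance
A1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A2 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
C1 = global_system(nc_fastica_deflation, S, A1)
C2 = global_system(nc_fastica_deflation, S, A2)
# Up to the ICA permutation and phase ambiguity, |C1| and |C2| should match,
# since the global system depends only on the sources S (Theorem 2).
print(np.round(np.abs(C1), 2))
print(np.round(np.abs(C2), 2))
```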

5. Conclusions

In order to provide a rigorous mathematical treatment of the robustness measurement and equivariance of the complex-valued FastICA estimator, this paper analyzed the statistical properties of the complex-valued FastICA algorithm in the context of ICA over the complex number domain. Firstly, a closed-form expression of the influence function of the complex-valued FastICA functional was derived and used to measure robustness against outliers. We found that the complex-valued FastICA algorithm based on Tukey's single-parameter M-estimator cost function had the best separation performance in both circular and noncircular scenarios. Then, we proved that the complex-valued FastICA algorithm is equivariant, in the sense that the global mixing–demixing system of the algorithm depends only on the source signals.

Author Contributions

Conceptualization, J.E. and M.Y.; methodology, J.E. and M.Y.; software, M.Y.; writing—original draft preparation, J.E. and M.Y.; validation, J.E. and M.Y.; writing—review and editing, J.E. and M.Y.; visualization, J.E. and M.Y.; supervision, M.Y.; funding acquisition, M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Guangxi Natural Science Foundation under grant 2023GXNSFBA026180, Middle-aged and Young Teachers’ Basic Ability Promotion Project of Guangxi Province under grant 2024KY0181, and the Natural Science Basic Research Program of Shaanxi under grant 2024JC-YBMS-043.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Acknowledgments

Sincere thanks are given to anonymous referees for their insightful comments which are valuable for improving this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Proof of Lemma 1. 
(a) By taking the derivative of $y_{\mathbf{x},i}(F_{t})$ with respect to $t$ at the point $t = 0$, we have the following:
$$\begin{aligned} \mathrm{IF}(y_{\mathbf{x},i}) &= \frac{\partial\,y_{\mathbf{x},i}(F_{t})}{\partial t}\bigg|_{t=0} = \frac{\partial\,\mathbf{b}_{i}^{H}(F_{t})}{\partial t}\bigg|_{t=0}(\mathbf{x} - \mathbf{m}_{\mathbf{x}})\Big|_{t=0} + \mathbf{b}_{i}^{H}(F_{t})\Big|_{t=0}\,\frac{\partial(\mathbf{x} - \mathbf{m}_{\mathbf{x}})}{\partial t}\bigg|_{t=0} \\ &= \mathrm{IF}(\mathbf{b}_{i})^{H}(\mathbf{x} - \mathbf{m}_{\mathbf{x}}) - \mathbf{b}_{i}^{H}\,\mathrm{IF}(\mathbf{m}_{\mathbf{x}}) = \mathrm{IF}(\mathbf{b}_{i})^{H}(\mathbf{x} - \mathbf{m}_{\mathbf{x}}) - \mathbf{b}_{i}^{H}(\mathbf{z} - \mathbf{m}_{\mathbf{x}}) \\ &= \mathrm{IF}(\mathbf{b}_{i})^{H}(\mathbf{x} - \mathbf{m}_{\mathbf{x}}) - \bar{s}_{i}. \end{aligned}$$
(b) Since $y_{\mathbf{x},k}(F) = \mathbf{b}_{k}^{H}(F)[\mathbf{x} - \mathbf{m}_{\mathbf{x}}] = \mathbf{b}_{k}^{H}\mathbf{A}\mathbf{s} = \mathbf{e}_{k}^{H}\mathbf{s} = s_{k}$, where $\mathbf{e}_{k}$ represents a real-valued vector with element 1 in the $k$th component and element 0 elsewhere, and since $E\{g(|s_{k}|^{2})\,s_{k}^{*}\,s_{l}\} = E\{g(|s_{k}|^{2})\,s_{k}^{*}\}\,E\{s_{l}\} = 0$ for $l \neq k$, we have the following:
$$L_{k}(F) = E\{g(|s_{k}|^{2})\,s_{k}^{*}\,(\mathbf{A}\mathbf{s})\} = \mathbf{A}\,E\{g(|s_{k}|^{2})\,|s_{k}|^{2}\,\mathbf{e}_{k}\} = E\{g(|s_{k}|^{2})\,|s_{k}|^{2}\}\,\mathbf{a}_{k} = c_{k}\,\mathbf{a}_{k},$$
where $\mathbf{A}\mathbf{e}_{k} = \mathbf{a}_{k}$.
(c) By the assumption $E\{\mathbf{s}\} = \mathbf{0}$ and $E\{\mathbf{s}\mathbf{s}^{H}\} = \mathbf{I}$, that is, $E\{s_{i}\} = 0$, $E\{s_{i}s_{j}^{*}\} = 0$, and $E\{|s_{i}|^{2}\} = 1$ for $i \neq j$, $i, j = 1, \ldots, n$, one can obtain the following:
$$\begin{aligned} B_{k}(F) &= E\{g(|s_{k}|^{2})\,\mathbf{A}\mathbf{s}(\mathbf{A}\mathbf{s})^{H}\} = \mathbf{A}\,E\{g(|s_{k}|^{2})\,\mathbf{s}\mathbf{s}^{H}\}\,\mathbf{A}^{H} = \mathbf{A}\Big(c_{k}\,\mathbf{e}_{k}\mathbf{e}_{k}^{T} + \mu_{k}\sum_{l\neq k}\mathbf{e}_{l}\mathbf{e}_{l}^{T}\Big)\mathbf{A}^{H} \\ &= c_{k}\,\mathbf{A}\mathbf{e}_{k}\mathbf{e}_{k}^{H}\mathbf{A}^{H} + \mu_{k}\,\mathbf{A}\mathbf{A}^{H} - \mu_{k}\,\mathbf{a}_{k}\mathbf{a}_{k}^{H} = \mu_{k}\,\mathbf{A}\mathbf{A}^{H} + (c_{k} - \mu_{k})\,\mathbf{a}_{k}\mathbf{a}_{k}^{H}. \end{aligned}$$
(d) According to the constraint condition (7), if $i < k$, then $\mathbf{b}_{k}(F_{t})^{H}\mathbf{C}_{\mathbf{x}}(F_{t})\mathbf{b}_{i}(F_{t}) = 0$. After differentiating both sides with respect to $t$ at $t = 0$, we have the following:
$$\begin{aligned} 0 &= \mathrm{IF}(\mathbf{b}_{k})^{H}\mathbf{A}\mathbf{A}^{H}\mathbf{b}_{i} + \mathbf{b}_{k}^{H}\,\mathrm{IF}(\mathbf{C}_{\mathbf{x}})\,\mathbf{b}_{i} + \mathbf{b}_{k}^{H}\mathbf{A}\mathbf{A}^{H}\,\mathrm{IF}(\mathbf{b}_{i}) \\ &= \mathrm{IF}(\mathbf{b}_{k})^{H}\mathbf{a}_{i} + \mathbf{b}_{k}^{H}\big[(\mathbf{z} - \mathbf{m}_{\mathbf{x}})(\mathbf{z} - \mathbf{m}_{\mathbf{x}})^{H} - \mathbf{A}\mathbf{A}^{H}\big]\mathbf{b}_{i} + \mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{i}) \\ &= \mathrm{IF}(\mathbf{b}_{k})^{H}\mathbf{a}_{i} + \bar{s}_{k}\bar{s}_{i}^{*} - \mathbf{b}_{k}^{H}\mathbf{A}\mathbf{A}^{H}\mathbf{b}_{i} + \mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{i}), \end{aligned}$$
which, using $\mathbf{b}_{k}^{H}\mathbf{A}\mathbf{A}^{H}\mathbf{b}_{i} = 0$, is equivalent to $\mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{i}) = -\mathrm{IF}(\mathbf{b}_{k})^{H}\mathbf{a}_{i} - \bar{s}_{k}\bar{s}_{i}^{*}$, where $\bar{s}_{i} = \mathbf{b}_{i}^{H}(\mathbf{z} - \mathbf{m}_{\mathbf{x}})$, $i = 1, \ldots, k-1$, denotes the projection of the centered contamination point $\mathbf{z}$ into the direction $\mathbf{b}_{i}$.
(e) According to the constraint condition (7), if $j = k$, then $\mathbf{b}_{k}(F_{t})^{H}\mathbf{C}_{\mathbf{x}}(F_{t})\mathbf{b}_{k}(F_{t}) = 1$. After differentiating both sides with respect to $t$ at $t = 0$, we have the following:
$$\begin{aligned} 0 &= \mathrm{IF}(\mathbf{b}_{k})^{H}\mathbf{A}\mathbf{A}^{H}\mathbf{b}_{k} + \mathbf{b}_{k}^{H}\,\mathrm{IF}(\mathbf{C}_{\mathbf{x}})\,\mathbf{b}_{k} + \mathbf{b}_{k}^{H}\mathbf{A}\mathbf{A}^{H}\,\mathrm{IF}(\mathbf{b}_{k}) \\ &= \mathrm{IF}(\mathbf{b}_{k})^{H}\mathbf{a}_{k} + \mathbf{b}_{k}^{H}\big[(\mathbf{z} - \mathbf{m}_{\mathbf{x}})(\mathbf{z} - \mathbf{m}_{\mathbf{x}})^{H} - \mathbf{A}\mathbf{A}^{H}\big]\mathbf{b}_{k} + \mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{k}) \\ &= \mathrm{IF}(\mathbf{b}_{k})^{H}\mathbf{a}_{k} + \bar{s}_{k}\bar{s}_{k}^{*} - \mathbf{b}_{k}^{H}\mathbf{A}\mathbf{A}^{H}\mathbf{b}_{k} + \mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{k}) = \mathrm{IF}(\mathbf{b}_{k})^{H}\mathbf{a}_{k} + \mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{k}) + |\bar{s}_{k}|^{2} - 1, \end{aligned}$$
which is equivalent to $2\,\mathrm{Re}\{\mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{k})\} = 1 - |\bar{s}_{k}|^{2}$, where $\bar{s}_{k} = \mathbf{b}_{k}^{H}(\mathbf{z} - \mathbf{m}_{\mathbf{x}})$ denotes the projection of the centered contamination point $\mathbf{z}$ into the direction $\mathbf{b}_{k}$. Note that $\mathbf{A}^{H}\mathbf{b}_{k} = \mathbf{e}_{k}$, $\mathbf{A}\mathbf{e}_{k} = \mathbf{a}_{k}$, $\mathbf{a}_{k}^{H}\mathbf{b}_{k} = 1$, and $\mathbf{a}_{k}^{H}\mathbf{b}_{i} = 0$ $(i < k)$. □

Appendix B

Proof of Lemma 2. 
(a) and (b): Recalling
$$R_{k}(F) = \lambda_{k}\,\mathbf{C}_{\mathbf{x}}^{T}\mathbf{b}_{k} + \sum_{j=1}^{k-1}\lambda_{j}\,\mathbf{C}_{\mathbf{x}}^{T}\mathbf{b}_{j},$$
multiplying the above equation by $\mathbf{b}_{i}^{T}$ and plugging in the constraint condition (7), one can obtain $\lambda_{i}(F) = \mathbf{b}_{i}^{T}R_{k}(F)$, $i = 1, \ldots, k$. Noting that $L_{k}(F) = R_{k}(F)$ and recalling Lemma 1 (b), we have the following:
$$\lambda_{i}(F) = c_{k}\,(\mathbf{b}_{i}^{H}\mathbf{a}_{k}).$$
Hence, $\lambda_{i}(F) = 0$ if $i = 1, \ldots, k-1$, and $\lambda_{k}(F) = c_{k}$ if $i = k$.
(c) and (d): According to the proof of Lemma 1, we have the following:
$$\lambda_{i}(F) = \mathbf{b}_{i}^{T}L_{k}(F) = \mathbf{b}_{i}^{T}E\{g(|y_{\mathbf{x},k}|^{2})\,y_{\mathbf{x},k}^{*}\,[\mathbf{x} - \mathbf{m}_{\mathbf{x}}]\} = E\{g(|y_{\mathbf{x},k}|^{2})\,y_{\mathbf{x},k}^{*}\,y_{\mathbf{x},i}\}, \quad i = 1, \ldots, k.$$
Since
$$\lambda_{i}(F_{t}) = (1-t)\,E\{g(|y_{\mathbf{x},k}|^{2})\,y_{\mathbf{x},k}^{*}\,y_{\mathbf{x},i}\} + t\,g(|y_{\mathbf{z},k}|^{2})\,y_{\mathbf{z},k}^{*}\,y_{\mathbf{z},i},$$
after calculating the derivative with respect to $t$ at $t = 0$, one can obtain the following:
$$\mathrm{IF}(\lambda_{i}) = -\lambda_{i}(F) + 2\,E\{g'(|y_{\mathbf{x},k}|^{2})\,\mathrm{Re}\big[\mathrm{IF}(y_{\mathbf{x},k})\,y_{\mathbf{x},k}^{*}\big]\,y_{\mathbf{x},k}^{*}\,y_{\mathbf{x},i}\} + E\{g(|y_{\mathbf{x},k}|^{2})\,\mathrm{IF}(y_{\mathbf{x},k})^{*}\,y_{\mathbf{x},i}\} + E\{g(|y_{\mathbf{x},k}|^{2})\,y_{\mathbf{x},k}^{*}\,\mathrm{IF}(y_{\mathbf{x},i})\} + g(|\bar{s}_{k}|^{2})\,\bar{s}_{k}^{*}\,\bar{s}_{i}.$$
When $i = 1, \ldots, k-1$, after tedious calculations, we have the following:
$$\begin{aligned} \mathrm{IF}(\lambda_{i}) &= \mu_{k}\big[\mathrm{IF}(\mathbf{b}_{i})^{T}\mathbf{a}_{k} - \bar{s}_{k}\bar{s}_{i}\big] + c_{k}\,\mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{i}) - \rho_{k}\bar{s}_{i} + g(|\bar{s}_{k}|^{2})\,\bar{s}_{k}\bar{s}_{i} \\ &= (c_{k} - \mu_{k})\,\mathbf{a}_{k}^{H}\,\mathrm{IF}(\mathbf{b}_{i}) + \big[g(|\bar{s}_{k}|^{2}) - \mu_{k}\big]\bar{s}_{k}\bar{s}_{i} - \rho_{k}\bar{s}_{i}, \end{aligned}$$
which completes the proof of Lemma 2 (c).
When $i = k$, after tedious calculations, we have the following:
$$\begin{aligned} \mathrm{IF}(\lambda_{k}) &= -c_{k} + E\{g'(|s_{k}|^{2})\big[|s_{k}|^{2} - |s_{k}|^{2}|\bar{s}_{k}|^{2} - 2\,\mathrm{Re}(\bar{s}_{k}^{*}s_{k})\big]|s_{k}|^{2}\} + \mu_{k}\,\mathbf{a}_{k}^{T}\mathrm{IF}(\mathbf{b}_{k}) + (c_{k} - \mu_{k})\,\mathbf{a}_{k}^{T}\mathrm{IF}(\mathbf{b}_{k}) \\ &\quad - E\{g'(|s_{k}|^{2})\,\bar{s}_{k}\,s_{k}\} + c_{k}\,\mathbf{a}_{k}^{H}\mathrm{IF}(\mathbf{b}_{k}) - E\{g'(|s_{k}|^{2})\,s_{k}\,\bar{s}_{k}\} + g(|\bar{s}_{k}|^{2})\,|\bar{s}_{k}|^{2} \\ &= -c_{k} + E\{g'(|s_{k}|^{2})\,v_{k}(\bar{s}_{k})\,|s_{k}|^{2}\} + 2c_{k}\,\mathrm{Re}\{\mathbf{a}_{k}^{H}\mathrm{IF}(\mathbf{b}_{k})\} - 2\,\mathrm{Re}\,E\{g'(|s_{k}|^{2})\,\bar{s}_{k}\,s_{k}\} + g(|\bar{s}_{k}|^{2})\,|\bar{s}_{k}|^{2} \\ &= \big[g(|\bar{s}_{k}|^{2}) - c_{k}\big]|\bar{s}_{k}|^{2} + E\{g'(|s_{k}|^{2})\,v_{k}(\bar{s}_{k})\,|s_{k}|^{2}\} - 2\,\mathrm{Re}\,E\{g'(|s_{k}|^{2})\,\bar{s}_{k}\,s_{k}\}, \end{aligned}$$
which completes the proof of Lemma 2 (d). □

References

  1. Comon, P. Independent component analysis, A new concept? Signal Process. 1994, 36, 287–314. [Google Scholar] [CrossRef]
  2. Hyvärinen, A.; Karhunen, J.; Oja, E. Independent Component Analysis; Wiley and Sons: New York, NY, USA, 2001. [Google Scholar]
  3. Chen, Y.H.; Wang, S.P. Low-cost implementation of independent component analysis for biomedical signal separation using very-large-scale integration. IEEE Trans. Circuit Syst. II 2020, 67, 3437–3441. [Google Scholar] [CrossRef]
  4. Schell, A.; Oberhauser, H. Nonlinear independent component analysis for discrete-time and continuous-time signals. Ann. Stat. 2023, 51, 487–518. [Google Scholar] [CrossRef]
  5. Liu, J.; Ye, J.; E, J. A multi-scale forecasting model for CPI based on independent component analysis and non-linear autoregressive neural network. Phys. A 2023, 609, 128369. [Google Scholar] [CrossRef]
  6. E, J.; He, K.; Liu, H.; Ji, Q. A novel separation-ensemble analyzing and forecasting method for the gold price forecasting based on RLS-type independent component analysis. Expert Syst. Appl. 2023, 232, 120852. [Google Scholar] [CrossRef]
  7. Bingham, E.; Hyvärinen, A. A fast fixed-point algorithm for independent component analysis of complex valued signals. Int. J. Neural Syst. 2000, 10, 1–8. [Google Scholar] [CrossRef] [PubMed]
  8. Novey, M.; Adali, T. On extending the complex FastICA algorithm to noncircular sources. IEEE Trans. Signal Process. 2008, 56, 2148–2154. [Google Scholar] [CrossRef]
  9. Novey, M.; Adali, T. Complex ICA by negentropy maximization. IEEE Trans. Neural Netw. 2008, 19, 596–609. [Google Scholar] [CrossRef] [PubMed]
  10. Qian, G.; Wei, P. Stability analysis of complex ica by negentropy maximization: A unique perspective. Neurocomputing 2016, 214, 80–85. [Google Scholar] [CrossRef]
  11. Mika, D. Fast gradient algorithm with toral decomposition for complex ICA. Mech. Syst. Signal Process. 2022, 178, 109266. [Google Scholar] [CrossRef]
  12. Koldovský, Z.; Tichavský, P. Gradient algorithms for complex non-Gaussian independent component/vector extraction, question of convergence. IEEE Trans. Signal Process. 2019, 67, 1050–1064. [Google Scholar] [CrossRef]
  13. E, J.; Ye, J.; He, L.; Jin, H. Performance analysis for complex-valued FastICA and its improvement based on the Tukey M-estimator. Digit. Signal Process. 2021, 115, 103077. [Google Scholar] [CrossRef]
  14. Loesch, B.; Yang, B. Cramér-Rao bound for circular and noncircular complex independent component analysis. IEEE Trans. Signal Process. 2012, 61, 365–379. [Google Scholar] [CrossRef]
  15. Kautský, V.; Koldovský, Z.; Tichavský, P.; Zarzoso, V. Cramér-Rao bounds for complex-valued independent component extraction: Determined and piecewise determined mixing models. IEEE Trans. Signal Process. 2020, 68, 5230–5243. [Google Scholar] [CrossRef]
  16. Fu, G.S.; Phlypo, R.; Anderson, M.; Adalı, T. Complex independent component analysis using three types of diversity: Non-Gaussianity, nonwhiteness and noncircularity. IEEE Trans. Signal Process. 2015, 63, 794–805. [Google Scholar] [CrossRef]
  17. Koldovský, Z.; Kautský, V.; Tichavský, P.; Čmejla, J.; Málek, J. Dynamic independent component/vector analysis: Time-variant linear mixtures separable by time-invariant beamformers. IEEE Trans. Signal Process. 2021, 69, 2158–2173. [Google Scholar] [CrossRef]
  18. Hyvärinen, A.; Oja, E. A fast fixed-point algorithm for independent component analysis. Neural Comput. 1997, 9, 1483–1492. [Google Scholar] [CrossRef]
  19. Hyvärinen, A. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 1999, 10, 626–634. [Google Scholar] [CrossRef] [PubMed]
  20. Chao, J.C.; Douglas, S.C. A Robust Complex FastICA Algorithm Using the Huber M-Estimator Cost Function; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  21. Adali, T.; Schreier, P.J.; Scharf, L.L. Complex-valued signal processing: The proper way to deal with impropriety. IEEE Trans. Signal Process. 2011, 59, 5101–5125. [Google Scholar] [CrossRef]
  22. Hampel, F.R.; Ronchetti, E.M.; Stahel, W.A. Robust Statistics: The Approach Based on Influence Functions; Wiley and Sons: New York, NY, USA, 1986. [Google Scholar]
  23. Lehmann, E.L. Testing Statistical Hypothesis; Wiley: New York, NY, USA, 1959. [Google Scholar]
  24. Cardoso, J.F. Blind signal separation: Statistical principles. Proc. IEEE 1998, 86, 2009–2025. [Google Scholar] [CrossRef]
Figure 1. Plot of nonlinearities $G(y)$.
Figure 2. Plot of nonlinearities $g(y)$.
Table 1. The nonlinearities in the complex-valued FastICA algorithm.
| Nonlinear function $G(x)$ | Derivative $g(x)$ of $G(x)$ |
| $G_{log} = \log(1 + x^{2})$ | $g_{log} = \dfrac{2x}{1 + x^{2}}$ |
| $G_{tanh} = \log\cosh(x)$ | $g_{tanh} = \tanh(x)$ |
| $G_{kurt} = \tfrac{1}{2}x^{4}$ | $g_{kurt} = 2x^{3}$ |
| $G_{sqrt} = \sqrt{1 + x^{2}}$ | $g_{sqrt} = \dfrac{x}{\sqrt{1 + x^{2}}}$ |
| $G_{Huber} = \begin{cases} \tfrac{1}{2}|x|^{2}, & |x| \le \theta, \\ \theta|x| - \tfrac{\theta^{2}}{2}, & |x| > \theta, \end{cases}$ | $g_{Huber} = \begin{cases} x, & |x| \le \theta, \\ \theta\,\mathrm{sgn}(x), & |x| > \theta, \end{cases}$ |
| $G_{Tukey} = \begin{cases} \tfrac{\theta^{2}}{6}\big[1 - (1 - \tfrac{x^{2}}{\theta^{2}})^{3}\big], & |x| \le \theta, \\ \tfrac{\theta^{2}}{6}, & |x| > \theta, \end{cases}$ | $g_{Tukey} = \begin{cases} x\,(1 - \tfrac{x^{2}}{\theta^{2}})^{2}, & |x| \le \theta, \\ 0, & |x| > \theta. \end{cases}$ |
Note: The subscripts denote the logarithm, hyperbolic tangent, kurtosis, square root, Huber M-estimator, and Tukey M-estimator functions, respectively.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
