Article

Mixing Matrix Estimation of Underdetermined Blind Source Separation Based on Data Field and Improved FCM Clustering

College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
*
Author to whom correspondence should be addressed.
Symmetry 2018, 10(1), 21; https://doi.org/10.3390/sym10010021
Submission received: 3 November 2017 / Revised: 2 January 2018 / Accepted: 3 January 2018 / Published: 9 January 2018

Abstract

In modern electronic warfare, multiple-input multiple-output (MIMO) radar has become an important tool for electronic reconnaissance and intelligence transmission because of its anti-stealth, high-resolution, low-intercept and anti-destruction characteristics. As a common MIMO radar signal, the discrete frequency-coding waveform (DFCW) overlaps severely in both time and frequency, so existing methods cannot be applied directly to the current radar signal separation problem. Existing fuzzy clustering algorithms suffer from poor initial value selection, slow convergence and local extrema, which lead to low accuracy in mixing matrix estimation. Consequently, a novel mixing matrix estimation algorithm based on the data field and improved fuzzy C-means (FCM) clustering is proposed. First, the sparsity and linear clustering characteristics of the MIMO radar signals in the time–frequency domain are enhanced by single-source principal value of complex angular detection. Second, the data field uses potential energy information to analyze the particle distribution, from which a new scheme for selecting the number of clusters is designed. The particle swarm optimization algorithm is then introduced to improve the iterative clustering process of FCM, and finally the estimate of the mixing matrix is obtained. Simulation results show that the proposed algorithm improves both the estimation accuracy and the robustness of the mixing matrix estimation.

1. Introduction

Blind source separation (BSS) aims to recover source signals from observed mixtures. It can extract the source signals from the observed signals with little prior knowledge of the sources or the channel beyond the mutual independence of the signals. In recent years, as a popular signal processing method, it has been successfully applied to many fields, such as voice signal processing [1], biomedical engineering [2], array signal processing [3], image processing [4], and mechanical fault diagnosis [5]. According to the relative numbers of observed signals and sources, blind source separation can be divided into three cases: underdetermined, determined, and overdetermined. Among them, underdetermined blind source separation (UBSS) has attracted the most attention in current research because it best matches practical applications.
Sparse component analysis (SCA) [6] is the most representative method in blind source separation. Its performance usually depends on the sparseness of the signals. However, many signals are not sparse in real applications, so their sparsity must first be increased by means of the short-time Fourier transform (STFT), the wavelet transform (WT) or other methods. In this paper, the sparse time–frequency representation of the signal is obtained by the short-time Fourier transform, and our work focuses mainly on the mixing matrix estimation stage of blind source separation [7,8]. Abrard and Deville [9] proposed the time–frequency ratio of mixtures (TIFROM) algorithm, but it requires the existence of an adjacent time–frequency region in which only one source is active; Puigt and Deville [10] applied the TIFROM algorithm to the delayed mixing model. Arberet [11] proposed the direction estimation of mixing matrix (DEMIX) algorithm, which extracts the time–frequency points at which only one source signal is present, and later [12] extended DEMIX to the anechoic mixing model. Kim and Yoo [13] also proposed a single-source detection method based on the ratio of the mixtures in the time–frequency domain: the ratio is real when only one source is present, but becomes complex when two or more signals coexist at that point. Dong [14] normalized the time–frequency coefficients of the observed signals; Xu [15] extended single-source detection to image processing and realized single-source detection in the Haar wavelet domain, but the method still has high complexity and is sensitive to noise. For MIMO radar signals, the time–frequency overlap together with the low sparsity weakens the effect of the above algorithms. Ai [16] proposed a method using high-order cumulants and tensor decomposition, but its strong sensitivity to noise leads to low robustness. Guo [17] proposed a hybrid method based on single-source detection and data field dynamic clustering, but its search for potential extremum points is slow.
Based on the above analysis, we propose a single-source principal value of complex angular detection method tailored to the characteristics of MIMO radar signals. The orthogonal discrete frequency-coding waveforms (DFCW) are processed by the STFT, and the influence of noisy and isolated points in the observed signal space is filtered out, which improves both the sparseness and the linear clustering property. Because the fuzzy C-means algorithm is easily affected by the initial cluster centers, a data-field-aided fuzzy C-means clustering algorithm is proposed. First, a new scheme for selecting the number of clusters is designed by constructing the data field of the observed data particles. Since FCM can be interpreted as an optimization problem, it can be integrated with particle swarm optimization (PSO). Second, we introduce PSO to improve the iterative clustering process of FCM. The simulation results show that the proposed method improves the accuracy of mixing matrix estimation.
This paper is organized as follows. In Section 2, we briefly introduce the model of MIMO radar signals. In Section 3, the single-source principal value of complex angular detection method is described. In Section 4 and Section 5, a novel estimation method based on two-step preprocessing fuzzy clustering is proposed. The experimental results are given in Section 6. Finally, conclusions are drawn in Section 7.

2. Model of MIMO Radar Signals

In order to avoid mutual interference among the transmission channels, MIMO radar transmits ideal orthogonal waveforms through spatially distributed antennas, which mainly include frequency-division waveforms and code-division waveforms. Thanks to its strong anti-detection ability and simplicity of implementation, DFCW is commonly used in MIMO radar systems. The signal model has the following form:
s_n(t) = \sum_{l=0}^{L-1} p_l^n (t - l T_p), \quad n = 1, \ldots, N, \qquad (1)

p_l^n(t) = \begin{cases} e^{j 2 \pi f_l^n t}, & 0 \le t \le T_p \\ 0, & \text{otherwise} \end{cases} \qquad (2)

where N is the total number of transmitted signals, T_p is the sub-pulse width, and L is the number of sub-pulses in the n-th signal. The l-th sub-pulse frequency of the n-th signal is f_l^n = f_0^n + (l − 1)Δf with Δf = 1/T_p, and the frequency-coding sequence of the n-th signal is expressed as

\{ f_l \} = \{ l_1 \Delta f, l_2 \Delta f, \ldots, l_L \Delta f \}, \qquad (3)

which can be further formulated as the sequence

L_f = \{ l_1, l_2, \ldots, l_L \}. \qquad (4)
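For concreteness, a minimal NumPy sketch of how such a DFCW pulse train could be synthesized. The sampling rate, fundamental frequency and sub-pulse width follow the simulation settings of Section 6, while the frequency-coding sequences here are arbitrary permutations chosen only for illustration:

```python
import numpy as np

def dfcw_signal(code, f0=1e6, Tp=1e-6, fs=64e6):
    """Generate one DFCW pulse train from a frequency-coding sequence.

    code : integers l_1, ..., l_L; the l-th sub-pulse has frequency
           f0 + (code[l] - 1) * df with df = 1 / Tp, as in Equations (1)-(3).
    """
    df = 1.0 / Tp                      # frequency step (1 MHz for Tp = 1 us)
    n_per_sub = int(round(Tp * fs))    # samples per sub-pulse
    t = np.arange(n_per_sub) / fs      # time axis of one sub-pulse
    subs = [np.exp(2j * np.pi * (f0 + (l - 1) * df) * t) for l in code]
    return np.concatenate(subs)        # pulse train of length L * Tp

# Example: 4 hypothetical coding sequences (permutations of 1..16)
rng = np.random.default_rng(0)
codes = [rng.permutation(16) + 1 for _ in range(4)]
sources = np.vstack([dfcw_signal(c) for c in codes])   # shape (4, 1024)
```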

3. Detection Method Based on Single-Source Principal Value of Complex Angular

3.1. Model of Underdetermined Blind Source Separation

For MIMO radar systems, the observed signal obtained by each receiving antenna can be regarded as a weighted sum of the transmitted signals. When the number of receiving antennas is smaller than the number of transmitting antennas, the separation problem is underdetermined. The UBSS linear instantaneous mixing model can be expressed as
x(t) = A s(t) + n(t), \qquad (5)
where x(t) = [x_1(t), x_2(t), …, x_M(t)]^T represents the M observed signals, s(t) = [s_1(t), s_2(t), …, s_N(t)]^T represents the N source signals, A = [a_1, a_2, …, a_N] ∈ R^{M×N} (M < N) is the M × N mixing matrix, a_k is the k-th column vector of the mixing matrix, and n(t) = [n_1(t), n_2(t), …, n_M(t)]^T is additive white noise.
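As a small illustration of this model, the following sketch synthesizes observations from an arbitrary random mixing matrix and placeholder sources (not the signals or the matrix used later in Section 6):

```python
import numpy as np

rng = np.random.default_rng(1)
N_src, M_obs, T = 4, 3, 1024                 # 4 sources, 3 observations (M < N)
s = rng.standard_normal((N_src, T))          # placeholder source signals
A = rng.standard_normal((M_obs, N_src))      # unknown M x N mixing matrix
A /= np.linalg.norm(A, axis=0)               # normalize columns (common convention)
n = 0.01 * rng.standard_normal((M_obs, T))   # additive noise
x = A @ s + n                                # observed signals, Equation (5)
```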

3.2. Single-Source Principal Value of Complex Angular Detection

The STFT is applied to both sides of Equation (5) to obtain the mixing model in the time–frequency domain:
X(t, f) = \begin{bmatrix} X_1(t, f) \\ X_2(t, f) \\ \vdots \\ X_M(t, f) \end{bmatrix} = A S(t, f) = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{M1} & a_{M2} & \cdots & a_{MN} \end{bmatrix} \begin{bmatrix} S_1(t, f) \\ S_2(t, f) \\ \vdots \\ S_N(t, f) \end{bmatrix}, \qquad (6)
where [X_1(t, f), X_2(t, f), …, X_M(t, f)]^T and [S_1(t, f), …, S_N(t, f)]^T are the complex STFT coefficients of the observed and source signals at the time–frequency point (t, f), respectively. If only one source S_i is active at the time–frequency point (t_1, f_1), Equation (6) takes the following form:
\begin{bmatrix} \mathrm{Re}\{X_1(t_1, f_1)\} + j\,\mathrm{Im}\{X_1(t_1, f_1)\} \\ \mathrm{Re}\{X_2(t_1, f_1)\} + j\,\mathrm{Im}\{X_2(t_1, f_1)\} \\ \vdots \\ \mathrm{Re}\{X_M(t_1, f_1)\} + j\,\mathrm{Im}\{X_M(t_1, f_1)\} \end{bmatrix} = \begin{bmatrix} a_{1i}\,\mathrm{Re}\{S_i(t_1, f_1)\} + j\,a_{1i}\,\mathrm{Im}\{S_i(t_1, f_1)\} \\ a_{2i}\,\mathrm{Re}\{S_i(t_1, f_1)\} + j\,a_{2i}\,\mathrm{Im}\{S_i(t_1, f_1)\} \\ \vdots \\ a_{Mi}\,\mathrm{Re}\{S_i(t_1, f_1)\} + j\,a_{Mi}\,\mathrm{Im}\{S_i(t_1, f_1)\} \end{bmatrix}, \qquad (7)
where Re{X_i(t_1, f_1)} and Im{X_i(t_1, f_1)} denote the real and imaginary parts of X_i, respectively. In the complex plane, the angle θ is formed by the positive real axis and the vector Z drawn from the origin to X_i; it is denoted Arg(Z) = θ, so that tan(θ) = Im{X_i(t_1, f_1)} / Re{X_i(t_1, f_1)}. However, any nonzero complex vector has infinitely many arguments, so we take the principal value θ_0 satisfying −π < θ_0 ≤ π to represent the direction of the complex vector. From Equation (7) we obtain:
\mathrm{Arg}\{ X_1(t_1, f_1) \} = \mathrm{Arg}\{ X_2(t_1, f_1) \} = \cdots = \mathrm{Arg}\{ X_M(t_1, f_1) \}, \qquad (8)
that is, when only source S_i is active, every channel has the same principal value of the complex angle and all channel coefficients lie on the same line in the complex plane. According to this inference, the STFT-transformed signal is screened using Equation (8) as the constraint, so that the directionality of the observed signals in the time–frequency domain becomes more pronounced. In actual signal detection, however, the constraint of Equation (8) is too strict and can be relaxed to:
\left| \mathrm{Arg}\{ X_i(t_k, f_k) \} - \mathrm{Arg}\{ X_j(t_k, f_k) \} \right| \le \delta, \qquad (9)
where δ ∈ (0, 1) is the detection threshold. When only one source S_i is active, the points retained by the complex angular principal value detection are linearly distributed in the complex plane, and the direction vector of each line corresponds to one column vector of the mixing matrix.
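A sketch of how this detection rule might be applied to the STFT coefficients of the observed channels; the threshold value is illustrative, and the pairwise comparison is one straightforward way to enforce Equation (9) for every channel pair:

```python
import numpy as np

def single_source_points(X_tf, delta=0.1):
    """Retain time-frequency points whose channels share one principal argument.

    X_tf  : (M, K) array of STFT coefficients, one row per observation channel.
    delta : detection threshold of Equation (9) (value here is illustrative).
    Returns the retained columns of X_tf.
    """
    ang = np.angle(X_tf)                              # principal value in (-pi, pi]
    diff = np.abs(ang[:, None, :] - ang[None, :, :])  # all channel pairs
    diff = np.minimum(diff, 2.0 * np.pi - diff)       # account for the +/- pi wrap
    keep = diff.max(axis=(0, 1)) <= delta             # Equation (9) for every pair
    return X_tf[:, keep]
```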

4. Mixing Matrix Estimation Based on Two-Step Preprocessing Fuzzy Clustering

4.1. Fuzzy C-Means Clustering Algorithm

The FCM algorithm is an improvement on the early hard C-means clustering method. In this algorithm, data classification is achieved by minimizing an objective function. Unlike the K-means algorithm, a membership degree defined on the range [0, 1] is used to describe the extent to which an element belongs to each cluster center in the signal space.
Let X = {x_1, x_2, …, x_n} ⊂ R^S represent n signals to be clustered in the S-dimensional Euclidean space, where the eigenvector of x_k is denoted x_k = (x_k1, x_k2, …, x_kS)^T ∈ R^S. The signals have c (2 ≤ c ≤ n) cluster centers with positions P = {p_1, p_2, …, p_c}. The objective function of FCM is:
J_m(U, P) = \sum_{k=1}^{n} \sum_{i=1}^{c} (\mu_{ik})^m (d_{ik})^2, \quad m \in [1, \infty), \quad \text{s.t.} \; U \in M, \qquad (10)
Here, m ∈ [1, ∞) is the weighting exponent, through which the fuzziness of the clustering can be adjusted; d_ik = ‖p_i − x_k‖ is the Euclidean distance from the k-th signal to the i-th cluster center; and U = [μ_ik]_{c×n} is the partition matrix, where μ_ik is the membership degree of the signal to be clustered, i.e., the membership of the clustering signal x_k in the set X_i (1 ≤ i ≤ c), μ_ik = μ_{X_i}(x_k). For every k, the membership degrees satisfy
\sum_{i=1}^{c} \mu_{ik} = 1. \qquad (11)
Equation (11) is the constraint condition. The purpose of clustering is to minimize the objective function, i.e., min{J}. Introducing Lagrangian multipliers, a new objective function is constructed:
\bar{J}(U, p_1, \ldots, p_c, \lambda_1, \ldots, \lambda_n) = J(U, p_1, \ldots, p_c) + \sum_{k=1}^{n} \lambda_k \Big( \sum_{i=1}^{c} u_{ik} - 1 \Big) = \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^m d_{ik}^2 + \sum_{k=1}^{n} \lambda_k \Big( \sum_{i=1}^{c} u_{ik} - 1 \Big). \qquad (12)
Setting the partial derivatives of Equation (12) to zero yields the membership degrees and cluster centers:
u_{ik} = \frac{1}{\sum_{j=1}^{c} \left( d_{ik} / d_{jk} \right)^{2/(m-1)}}, \qquad (13)
p_i = \frac{\sum_{k=1}^{n} u_{ik}^m x_k}{\sum_{k=1}^{n} u_{ik}^m}. \qquad (14)
The procedures for the FCM algorithm are as follows:
Step 1: Set the number of cluster centers c, the weighting exponent m, the iteration stop threshold eps_m, the maximum iteration number Maxiter, and the initial cluster centers P_0, and set the iteration counter to 0;
Step 2: Calculate the initial distance matrix D;
Step 3: Update the membership matrix by Equation (13);
Step 4: Update the cluster centers by Equation (14) and increase the iteration counter by one;
Step 5: Recalculate the distance matrix and evaluate the objective function of Equation (10). If the objective function value is smaller than the iteration stop threshold or the iteration counter exceeds the maximum iteration number, terminate and output the result; otherwise return to Step 3.
The FCM algorithm requires the number of cluster centers to be set and the membership matrix to be initialized. Equations (13) and (14), obtained by the Lagrangian multiplier method, are then used to update the membership matrix and the cluster centers until the termination condition is reached. FCM is an effective soft clustering method, but it has two principal shortcomings. First, the number of cluster centers must be given in advance, whereas for most signals the optimal number of clusters is unknown. Second, the algorithm is a gradient-descent-based optimization method, essentially a local search that is sensitive to the initial values; cluster centers generated by random initialization can easily lead to local extrema during the clustering process.
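For reference, a minimal sketch of the standard FCM iteration described above, with randomly initialized memberships; the parameter defaults are illustrative rather than the settings used in Section 6:

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, eps=1e-6, seed=0):
    """Standard fuzzy C-means. X: (n, S) samples; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                # memberships sum to 1 per sample
    J_prev = np.inf
    for _ in range(max_iter):
        Um = U ** m
        P = (Um @ X) / Um.sum(axis=1, keepdims=True)  # Equation (14): update centers
        D = np.linalg.norm(X[None, :, :] - P[:, None, :], axis=2) + 1e-12
        U = 1.0 / (D ** (2.0 / (m - 1.0)))            # Equation (13): update memberships
        U /= U.sum(axis=0)
        J = np.sum((U ** m) * D ** 2)                 # Equation (10): objective
        if abs(J_prev - J) < eps:                     # stop when J no longer changes
            break
        J_prev = J
    return P, U
```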

4.2. Introduce Data Field to Select the Number of Cluster Centers

In view of the shortcomings of FCM, this section introduces the theory of the data field to estimate the number of clusters in advance and to guide the subsequent steps.
This method treats the signals to be clustered as particles with mass in a multidimensional data space. Each particle generates a field in the data space, so it exerts a force on the other particles and is in turn acted on by them. The data space is denoted Ω = {z_1, z_2, …, z_N}, and M = {m_1, m_2, …, m_N} represents the set of particle masses. The theory of the data potential field borrows the concept of the nuclear field from physics: the interaction of all particles forms a data field, which can be analytically modeled by a scalar potential function and a vector intensity function [18,19,20]. A Cartesian grid is used to divide the space, and the potential at a point z is given by:
\varphi(z) = \sum_{i=1}^{N} \varphi_i(z) = \sum_{i=1}^{N} m_i \, e^{-\left( \| z_i - z \| / \sigma \right)^2}, \qquad (15)
where ‖z_i − z‖ is the Euclidean distance between the point vectors z_i and z, m_i satisfies the normalization condition \sum_{i=1}^{N} m_i = 1, and σ is the radiation factor that controls the interaction distance between two particles.
The distribution of the potential function can be visualized by the equipotential diagrams of its projections onto each coordinate plane. The potential at a position and the density of the equipotential lines around it are proportional to the local density of the data points. In the data field space, particles with the same clustering characteristics appear as concentric contours. The method can therefore be used to determine the number of cluster centers required by the clustering algorithm.
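A sketch of Equation (15) evaluated on a grid; equal masses m_i = 1/N are assumed here since the text only requires the masses to sum to one, and the grid resolution and radiation factor follow the values quoted in Section 6:

```python
import numpy as np

def potential_field(particles, grid_points, sigma=0.1):
    """Data-field potential of Equation (15).

    particles   : (N, S) data points z_i, here given equal mass 1/N.
    grid_points : (G, S) positions z at which the potential is evaluated.
    sigma       : radiation factor controlling the interaction distance.
    """
    N = particles.shape[0]
    d = np.linalg.norm(grid_points[:, None, :] - particles[None, :, :], axis=2)
    return np.sum((1.0 / N) * np.exp(-(d / sigma) ** 2), axis=1)   # phi(z) per grid point

# Example: a 50 x 50 grid over a two-dimensional projection of the data space
u = np.linspace(0.0, 1.0, 50)
grid = np.stack(np.meshgrid(u, u), axis=-1).reshape(-1, 2)
# phi = potential_field(projected_points, grid)   # projected_points: (N, 2) data
```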

4.3. Using the Particle Swarm Optimization Algorithm to Optimize the Clustering Center

The particle swarm optimization algorithm is a swarm intelligence algorithm inspired by the social behavior of populations. PSO works with a population of candidate solutions rather than a single individual. Assume that the population size is N. The position of the i-th particle in the S-dimensional space can be expressed as x_i(t) = (x_i1, x_i2, …, x_iS), and its velocity as v_i(t) = (v_i1, v_i2, …, v_iS). The particle velocity is dynamically adjusted according to its own experience and that of its companions to refine the search: if one particle discovers a promising solution, the other particles move toward it and explore that region more thoroughly. To reduce the likelihood of a particle leaving the search space, the velocity is clamped to a maximum v_max: when v_i > v_max, set v_i = v_max.
The quality of a particle is evaluated by the fitness function f(x_i). During the iterations, two variables record the locations of the best fitness values found so far. The best position found by the i-th particle itself, p_i(t) = (p_i1, p_i2, …, p_iS), is called the individual extremum, denoted p_best; similarly, the best position found by the whole swarm, p_g = (p_g1, p_g2, …, p_gS), is called the global extremum, denoted g_best. In the iterative process, v_i(t) and x_i(t) are continuously updated according to p_best and g_best:
v_i(t+1) = \omega v_i(t) + c_1 r_1 \big( p_i(t) - x_i(t) \big) + c_2 r_2 \big( p_g(t) - x_i(t) \big), \qquad (16)
x_i(t+1) = x_i(t) + v_i(t+1). \qquad (17)
In Equations (16) and (17), i = 1, 2, …, N and s = 1, 2, …, S; t is the current iteration, ω is the inertia weight, c_1, c_2 ∈ [0, 4] are the cognitive and social factors, respectively, and r_1, r_2 ∈ [0, 1] are two random numbers. The termination condition is that the number of iterations reaches its maximum or the fitness of the global extremum satisfies the preset stop threshold. The output is the position with the highest fitness found during the whole iteration, i.e., the global extremum g_best.
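A sketch of one PSO generation implementing Equations (16) and (17), including the velocity clamping described above; the default parameter values are illustrative:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.3, c1=2.0, c2=2.0, v_max=1.5, rng=None):
    """One update of positions and velocities per Equations (16) and (17).

    x, v   : (N, S) current positions and velocities of the swarm.
    pbest  : (N, S) individual best positions; gbest : (S,) global best position.
    """
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Equation (16)
    v = np.clip(v, -v_max, v_max)                               # velocity clamping
    return x + v, v                                             # Equation (17)
```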
The particle swarm optimization algorithm is introduced to improve the iterative clustering process of FCM. The fuzzy C-means algorithm is a gradient-descent-based optimization algorithm, whereas particle swarm optimization is a population-based one. With appropriate values of the inertia weight, the cognitive factor, the social factor and other parameters, PSO can search more regions of the solution space of the objective function. In this way, both the clustering accuracy and the efficiency of the method can be improved.

5. Implementation Steps of the Improved Method

In order to overcome the shortcomings of the FCM algorithm mentioned above, we introduce the particle swarm optimization algorithm to increase the randomness of the search process and to improve the coverage and uniformity of the initial solution space, thereby obtaining better global search ability. Specifically, by combining the PSO algorithm with the FCM algorithm, the iterative updating of the membership matrix and the cluster centers in FCM is driven by the PSO optimization process. The velocity and position updates are guided by both individual and social experience, which overcomes the lack of diversity in the search directions and ultimately enhances the global optimization ability.
The purpose of the FCM clustering algorithm is to minimize the objective function in Equation (10), whereas in the PSO algorithm the fitness function attains its maximum when the global optimal solution is found. Therefore, the fitness of a particle can be derived from the FCM objective function as follows [21]:
f(x_i) = \frac{K}{J(U, P)} = \frac{K}{\sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^m d_{ik}^2}, \qquad (18)
where K is a constant, U is the membership matrix obtained iteratively by the FCM algorithm, and u_ik is the membership degree of sample k belonging to cluster center i.
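A minimal sketch of this fitness evaluation, assuming K = 1 as an arbitrary choice and taking the membership matrix U and distance matrix D from one FCM pass:

```python
import numpy as np

def fitness(U, D, m=2.0, K=1.0):
    """Equation (18): fitness as K divided by the FCM objective of Equation (10).

    U : (c, n) membership matrix from an FCM pass; D : (c, n) distances d_ik.
    """
    J = np.sum((U ** m) * D ** 2)
    return K / J
```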
In this paper, a mixing matrix estimation method based on a data-field-aided fuzzy C-means clustering algorithm is proposed. By introducing the data field and the particle swarm optimization algorithm, the effectiveness of the FCM algorithm is improved. The implementation steps of the proposed clustering algorithm are as follows (a condensed code sketch is given after the list):
Step 1: Construct the data field of the aliased MIMO radar observed signals obtained after single-source principal value of complex angular detection, and determine the number of cluster centers, denoted c, from the equipotential graph;
Step 2: Initialize the parameters: the population size N, the number of clusters c, the weighting exponent m, the cognitive and social factors c_1 and c_2, the inertia weight ω, the maximum number of iterations Maxiter, the velocity upper limit v_max, the FCM iteration stop threshold eps_m and the fitness threshold e;
Step 3: Initialize the population of particles and generate the initial population randomly;
Step 4: Initialize the membership matrix U_ik and use a single FCM step to calculate the initial set of cluster centers;
Step 5: Initialize the value of fitness function by using Equation (18);
Step 6: Best-value update: the current fitness of each particle is compared with its individual extremum p_best; if it is greater, the current position is assigned to p_best. Then the individual extremum of each particle is compared with the global extremum g_best of the population; if it is greater, g_best is updated to the current position;
Step 7: Population evolution: according to Equations (16) and (17), compute the velocities and positions of the next generation of particles, producing a new generation of candidate cluster centers;
Step 8: Perform FCM clustering using the number of clusters obtained from the data field and the center locations obtained by the PSO algorithm; the iteration counter iter and the optimal cluster centers of the iter-th iteration are obtained;
Step 9: If the number of iterations reaches Maxiter or the fitness reaches the threshold, the global optimal solution g_best and the current membership matrix U are output and the algorithm terminates; otherwise, return to Step 6.
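A condensed sketch of Steps 2–9 under the assumptions that each particle encodes one candidate set of cluster centers and that the input points are real-valued feature vectors (e.g., the detected points projected onto the unit hemisphere, as in Section 6); the stopping rule on the fitness is illustrative, since the text does not fix its exact form:

```python
import numpy as np

def fcm_pso_cluster(X, c, n_particles=20, m=2.0, w=0.3, c1=2.0, c2=2.0,
                    v_max=1.5, max_iter=20, fit_eps=1e-4, K=1.0, seed=0):
    """PSO-driven search for FCM cluster centers (Steps 2-9, condensed).

    X : (n, S) detected single-source points; c : number of clusters (Step 1).
    """
    rng = np.random.default_rng(seed)
    n, S = X.shape

    def memberships(P):
        # Equation (13) for a candidate center set P of shape (c, S)
        D = np.linalg.norm(X[None, :, :] - P[:, None, :], axis=2) + 1e-12
        U = 1.0 / (D ** (2.0 / (m - 1.0)))
        return U / U.sum(axis=0), D

    def fitness(P):
        # Equation (18): K divided by the FCM objective J(U, P)
        U, D = memberships(P)
        return K / np.sum((U ** m) * D ** 2)

    # Steps 3-5: random initial swarm, zero velocities, initial fitness
    pos = X[rng.choice(n, size=(n_particles, c), replace=True)]   # (N, c, S)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    prev_best = -np.inf

    for _ in range(max_iter):
        # Step 6: update individual and global extrema
        for i in range(n_particles):
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest_fit[i], pbest[i] = f, pos[i].copy()
        gbest = pbest[np.argmax(pbest_fit)].copy()
        # Step 9 (illustrative): stop when the best fitness no longer improves
        if abs(pbest_fit.max() - prev_best) < fit_eps:
            break
        prev_best = pbest_fit.max()
        # Step 7: population evolution, Equations (16) and (17)
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = np.clip(w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos),
                      -v_max, v_max)
        pos = pos + vel
        # Step 8: one FCM refinement of each candidate center set, Equation (14)
        for i in range(n_particles):
            U, _ = memberships(pos[i])
            Um = U ** m
            pos[i] = (Um @ X) / Um.sum(axis=1, keepdims=True)

    U, _ = memberships(gbest)
    return gbest, U    # estimated cluster centers and final membership matrix
```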

6. Simulation Results and Analysis

In order to verify the effectiveness of the proposed method, comparative experiments are carried out against hierarchical clustering, K-means clustering, and tensor decomposition methods. The hardware used in the experiments is an Intel(R) Pentium(R) CPU G3260 @ 3.30 GHz with 4 GB of memory, running Windows 7, and MATLAB 8.1.0.604 (R2013a, MathWorks, Natick, MA, USA) is used as the development environment.

6.1. Evaluation Criteria of Estimation Error

The normalized mean square error is chosen as the evaluation criterion of the mixing matrix estimation [14,22,23], which can be expressed as:
\mathrm{NMSE} = 10 \log_{10} \left( \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} ( \hat{a}_{ij} - a_{ij} )^2}{\sum_{i=1}^{M} \sum_{j=1}^{N} a_{ij}^2} \right), \qquad (19)
where a_ij and \hat{a}_{ij} represent the elements of the original and estimated mixing matrices, respectively. A smaller normalized mean square error indicates a more accurate estimate of the mixing matrix.
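A sketch of this criterion, assuming the columns of the estimated matrix have already been matched to those of the true matrix:

```python
import numpy as np

def nmse_db(A, A_hat):
    """Normalized mean square error of Equation (19), in dB (lower is better)."""
    A, A_hat = np.asarray(A, float), np.asarray(A_hat, float)
    return 10.0 * np.log10(np.sum((A_hat - A) ** 2) / np.sum(A ** 2))
```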

6.2. Simulation Procedures

Suppose that the MIMO radar transmits four DFCW signals, i.e., N = 4. The pulse width is set to T = 16 μs and the sub-pulse width is T_p = T/L = 1 μs. The fundamental frequency is 1 MHz, the bandwidth is B = 16 MHz, and the sampling rate is f_s = 64 MHz. The default SNR is set to 30 dB. After a series of experiments, the PSO population size is set to 20 and the maximum number of cluster centers c_max to 4. The cognitive and social factors are both set to 2 and the inertia weight to 0.3. The maximum number of iterations is 20, the velocity upper limit v_max is 1.5, and the fitness threshold is e = 10^{-4}. The radiation factor of the data field is 0.1. The weighting exponent m of the FCM algorithm is 2 and the iteration stop threshold is eps_m = 10^{-6}. The time-domain waveforms of the four transmitted signals are shown in Figure 1.
The mixing matrix A is chosen as the following form.
A = \begin{bmatrix} 0.3864 & 0.7885 & 0.9127 & 0.4854 \\ 0.4869 & 0.2439 & 0.2823 & 0.4998 \\ 0.7833 & 0.5646 & 0.2955 & 0.7174 \end{bmatrix}, \qquad (20)
The mixing matrix A is a 3 × 4 matrix, and three noisy observed signals are received at the radar receivers. Figure 2 shows the distribution of the time–frequency points of the three observed signals after the STFT. It can be seen that the time–frequency distributions overlap and show no obvious directionality.
The observed signals are processed by single-source principal value of complex angular detection. After that, the directionality of the observed signals is markedly improved, but many noisy and isolated points remain. As shown in Figure 3a, the detected data are regarded as particles of a data field and projected onto the unit hemisphere on the positive X1 axis. The potential values of the particles are calculated according to Equation (15). Using a Cartesian grid, each dimension of the data space is divided into 50 equal parts, and the potential values at the grid points are obtained to establish the potential field. The projections of the potential field onto the three coordinate planes are given in Figure 3b–d, respectively, and Figure 4 shows the three-dimensional potentiometric map on the X2–X3 plane. In these diagrams, the concentration of the equipotential lines and the magnitude of the potential at a point are proportional to the density of the data points. The distribution of potential extrema in each projection plane provides the initial values for the FCM algorithm, and it can be determined that the number of cluster centers is equal to four.
The detected observed signals, which now show clear directionality, are clustered iteratively by the improved FCM algorithm, and four globally optimal convergence points are obtained. The spatial position vector of each convergence point corresponds to a column vector of the mixing matrix.
After optimization of the cluster centers by PSO and iterative clustering, the following estimate of the mixing matrix is obtained:
\hat{A} = \begin{bmatrix} 0.3898 & 0.7869 & 0.8931 & 0.4780 \\ 0.4795 & 0.2477 & 0.2977 & 0.5005 \\ 0.7824 & 0.5609 & 0.3175 & 0.7129 \end{bmatrix}, \qquad (21)
then the normalized mean square error of the mixing matrix estimation is calculated.
\mathrm{NMSE} = 10 \log_{10} \frac{\sum_{i,j} ( \hat{a}_{ij} - a_{ij} )^2}{\sum_{i,j} a_{ij}^2} = -40.25 \; \mathrm{dB}. \qquad (22)
The proposed method is compared with hierarchical clustering, K-means clustering, and tensor decomposition. Figure 5 shows the average normalized mean square error over 100 independent Monte Carlo runs for each method as the signal-to-noise ratio (SNR) ranges from 5 dB to 30 dB. The simulation results show that the estimation accuracy of every method improves as the SNR increases. Owing to the improved iterative clustering process, the proposed method yields the smallest error, so an accurate mixing matrix estimate can be achieved.

7. Conclusions

In order to improve mixing matrix estimation for linear instantaneous mixtures of MIMO radar DFCW signals, this paper exploits the time–frequency sparsity of the signals. First, the linear clustering characteristics are enhanced by single-source principal value of complex angular detection. Then, a new scheme for selecting the number of clusters based on the data field is proposed, and the mixing matrix is estimated with the improved FCM clustering algorithm. The experimental results show that the proposed method achieves higher estimation accuracy and robustness than the other methods, which lays a good foundation for the accurate recovery of the MIMO radar source signals.

Acknowledgments

This work is supported by the National Key Research and Development Program of China (2016YFC0101700), the International S&T Cooperation Program of China (ISTCP) (No. 2015DFR10220), the Application Technology Research and Development of Heilongjiang Science and Technology Agency (No. GX16A007), National Natural Science Foundation of China (No. 61371172), the Application Technology Research and Development of Heilongjiang Science and Technology Agency (No. GC13A307), and the Fundamental Research Funds for the Central Universities (No. HEUCF1808).

Author Contributions

Qiang Guo and Chen Li conceived and designed the study. Chen Li and Guoqing Ruan performed the experiments. Chen Li wrote the paper. Qiang Guo, Chen Li and Guoqing Ruan reviewed and edited the manuscript. All authors read and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pedersen, M.S.; Wang, D. Two-microphone separation of speech mixtures. IEEE Trans. Neural Netw. 2008, 19, 475–492.
  2. Abolghasemi, V.; Ferdowsi, S. Fast and incoherent dictionary learning algorithms with application to fMRI. Signal Image Video Process. 2015, 9, 147–158.
  3. Xiang, W.; Huang, Z.; Zhou, Y. Semi-Blind Signal Extraction for Communication Signals by Combining Independent Component Analysis and Spatial Constraints. Sensors 2012, 12, 9024–9045.
  4. Aziz, M.A.E.; Khidr, W. Nonnegative matrix factorization based on projected hybrid conjugate gradient algorithm. Signal Image Video Process. 2015, 9, 1825–1831.
  5. Wang, H.; Li, R.; Tang, G. A Compound Fault Diagnosis for Rolling Bearings Method Based on Blind Source Separation and Ensemble Empirical Mode Decomposition. PLoS ONE 2014, 9, e109166.
  6. Hattay, J.; Belaid, S.; Lebrun, D. Digital in-line particle holography: Twin-image suppression using sparse blind source separation. Signal Image Video Process. 2015, 9, 1767–1774.
  7. Romano, J.M.T.; Attux, R.; Cavalcante, C.C. Unsupervised Signal Processing: Channel Equalization and Source Separation; CRC Press: Boca Raton, FL, USA, 2016.
  8. Naik, G.R.; Wang, W. Blind Source Separation; Springer: Berlin/Heidelberg, Germany, 2014.
  9. Abrard, F.; Deville, Y. A time-frequency blind signal separation method applicable to underdetermined mixtures of dependent sources. Signal Process. 2005, 85, 1389–1403.
  10. Puigt, M.; Deville, Y. Time–frequency ratio-based blind separation methods for attenuated and time-delayed sources. Mech. Syst. Signal Process. 2005, 19, 1348–1379.
  11. Arberet, S.; Gribonval, R.; Bimbot, F. A robust method to count and locate audio sources in a stereophonic linear anechoic mixture. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2007, Honolulu, HI, USA, 15–20 April 2007.
  12. Arberet, S.; Gribonval, R.; Bimbot, F. A robust method to count and locate audio sources in a multichannel underdetermined mixture. IEEE Trans. Signal Process. 2010, 58, 121–133.
  13. Kim, S.G.; Yoo, C.D. Underdetermined blind source separation based on subspace representation. IEEE Trans. Signal Process. 2009, 57, 2604–2614.
  14. Dong, T.; Lei, Y.; Yang, J. An algorithm for underdetermined mixing matrix estimation. Neurocomputing 2013, 104, 26–34.
  15. Xu, J.; Yu, X.; Hu, D. A fast mixing matrix estimation method in the wavelet domain. Signal Process. 2014, 95, 58–66.
  16. Ai, X.F.; Luo, Y.J.; Zhao, G.Q. Underdetermined blind separation of radar signals based on tensor decomposition. Syst. Eng. Electron. 2016, 38, 2505–2509.
  17. Guo, Q.; Ruan, G.; Nan, P. Underdetermined Mixing Matrix Estimation Algorithm Based on Single Source Points. Circuits Syst. Signal Process. 2017, 36, 4453–4467.
  18. Giachetta, G.; Mangiarotti, L.; Sardanashvily, G. Advanced Classical Field Theory; World Scientific: Singapore, 2009.
  19. Kante, B.; Germain, D.; De Lustrac, A. Near field imaging of refraction via the magnetic field. Appl. Phys. Lett. 2014, 104, 021909.
  20. Li, D.; Wang, S.; Gan, W. Data Field for Hierarchical Clustering. Int. J. Data Warehous. Min. 2011, 7, 43–63.
  21. Filho, T.M.S.; Pimentel, B.A.; Souza, R.M.C.R. Hybrid methods for fuzzy clustering based on fuzzy c-means and improved particle swarm optimization. Expert Syst. Appl. 2015, 42, 6315–6328.
  22. Sun, J.; Li, Y.; Wen, J. Novel mixing matrix estimation approach in underdetermined blind source separation. Neurocomputing 2016, 173, 623–632.
  23. Reju, V.G.; Koh, S.N.; Soon, I.Y. An algorithm for mixing matrix estimation in instantaneous blind source separation. Signal Process. 2009, 89, 1762–1773.
Figure 1. Time domain waveform of transmitted signals.
Figure 2. The time–frequency scatter plot of transmitted signals.
Figure 3. (a) Distribution map on the unit positive hemisphere; (b) Potentiometric map on the X1–X2 plane; (c) Potentiometric map on the X1–X3 plane; (d) Potentiometric map on the X2–X3 plane.
Figure 4. Three-dimensional potentiometric map on the X2–X3 plane.
Figure 5. Comparison chart of NMSE.
