Article

Multivariate Decomposition of Acoustic Signals in Dispersive Channels

1 Faculty of Electrical Engineering, University of Montenegro, 81000 Podgorica, Montenegro
2 Faculty of Engineering, University of Rijeka, 51000 Rijeka, Croatia
3 Gipsa-Lab, Université Grenoble Alpes, 38400 Grenoble, France
4 Faculty of Computer Science and Engineering, University Ss. Cyril and Methodius, 1000 Skopje, North Macedonia
* Authors to whom correspondence should be addressed.
Mathematics 2021, 9(21), 2796; https://doi.org/10.3390/math9212796
Submission received: 23 September 2021 / Revised: 26 October 2021 / Accepted: 28 October 2021 / Published: 4 November 2021
(This article belongs to the Section Mathematics and Computer Science)

Abstract:
We present a signal decomposition procedure, which separates modes into individual components while preserving their integrity, in an effort to tackle the challenges related to the characterization of modes in an acoustic dispersive environment. With this approach, each mode can be analyzed and processed individually, which opens opportunities for new insights into their characterization. The proposed methodology is based on the eigenanalysis of the autocorrelation matrix of the analyzed signal. When the eigenvectors of this matrix are properly linearly combined, each signal component can be separately reconstructed. A proper linear combination is determined based on the minimization of concentration measures calculated using time-frequency representations. In this paper, we engage a steepest-descent-like algorithm for the minimization process. Numerical results support the theory and indicate the applicability of the proposed methodology in the decomposition of acoustic signals in dispersive channels.

1. Introduction

Signals with time-varying spectral content, known as non-stationary signals, are analyzed using time-frequency (TF) signal analysis [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Some commonly used TF representations include the short-time Fourier transform (STFT) [1,3], the pseudo-Wigner distribution (PWD) [1,9,12], and the S-method (SM) [3]. Time-scale, multi-resolution analysis using the wavelet transform is an additional approach to characterize non-stationary signal behavior [4]. Various representations are primarily applied in instantaneous frequency (IF) estimation and related applications [8,9,10,11,12,13,14,15], since they concentrate the energy of a signal component at and around the respective instantaneous frequency. Concentration measures provide a quantitative description of the signal concentration in the given representation domain [18], and can be used to assess the area of the time-frequency plane covered by a signal component.
In order to characterize multicomponent signals, it is quite common to perform signal decomposition, which assumes that each individual component is extracted for separate analysis, such as for IF estimation. Decomposition techniques for multicomponent signals are quite efficient if the components do not overlap in the time-frequency plane [19,20,21,22,23,24,25,26]. The method originally presented in [26] can be used to completely extract each component by using an intrinsic relation between the PWD and the SM. In the analysis of multicomponent signals, it is, however, common that various components partially overlap in the time-frequency plane, making the decomposition process particularly challenging [19,20,21,22,23,24,25,26]. In this rather unfavorable scenario, overlapped components partially share the same domains of support, and existing decomposition techniques provide only partial results in the univariate case, limited to very narrow signal classes. For example, linear frequency modulated signals are decomposed using the chirplet transform, Radon transform, or similar techniques [20,25], whereas sinusoidally modulated signals are separated using the inverse Radon transform [27]. However, these techniques cannot perform the decomposition when components have a general, non-stationary form.
In the multivariate (multichannel) framework, it is assumed that the signals are acquired using multiple sensors [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44]. The sensors modify component amplitudes and phases. However, the interdependence of values from various channels can be utilized in the signal decomposition. This concept has also been exploited in the empirical mode decomposition (EMD) [39,40,41,42,43]. It was previously shown that WD-based decomposition is possible if signals are available in the multivariate form [28,29,30]. Moreover, the decomposition can be performed by directly engaging the eigenanalysis of the auto-correlation matrix, calculated for signals in the multivariate form [31,32,33,34]. It should also be noted that the problem of multicomponent signal decomposition has some similarities with blind source separation [45,46,47,48]. However, the basic difference is that the decomposition framework aims to extract each signal component, whereas blind source separation aims to separate signal sources (although one source may generate several components). The mixing scheme from the blind source separation framework is used in a recently proposed mode decomposition approach [49]. Another line of decomposition-related research includes mode decomposition techniques, which could be used for the separation of modal responses and the identification of progressive changes in modal parameters [50].
Overlapped components pose a challenge in various applications, such as biomedical signal processing [44,51,52], radar signal processing [53], and the processing of Lamb waves [54]. Popular approaches, such as the EMD and multivariate EMD (MEMD) [39,40,41,42,43], cannot respond to the challenges posed by components overlapped in the time-frequency plane and do not provide acceptable decomposition results in this particular case [28]. Additionally, the applicability of these methods is highly influenced by amplitude variations of the signal components. In this paper, we present a framework for the decomposition of acoustic dispersive environment signals into individual modes based on the multivariate decomposition of multicomponent non-stationary signals. Even when simple signal forms are transmitted, acoustic signals in dispersive channels appear in the multicomponent form, with either very close or partially overlapped components. Being reflected from the underwater surfaces and objects, each individual component carries information about the underwater environment. That information is inaccessible while the signal is in its multicomponent form. This makes the analysis of acoustic signals (mainly their localization and characterization) a challenging research problem [55,56,57,58,59,60]. The presented decomposition approach enables complete separation of components and their individual characterization (e.g., IF estimation, based on which knowledge regarding the underwater environment can be acquired).
We aim at solving this notoriously difficult practical problem by exploiting the interdependencies of multiply acquired signals: such signals can be considered as multivariate and are subject to slight phase changes across various channels, occurring due to different sensing positions and due to various physical phenomena, such as water ripples, uneven seabed, and changes in the seabed substrate. As each eigenvector of the autocorrelation matrix of the input signal represents a linear combination of the signal components [31,33], slight phase changes across the various channels are actually favorable for forming an underdetermined set of linearly independent equations relating the eigenvectors and the components. Moreover, we have previously shown that each component is a linear combination of several eigenvectors corresponding to the largest eigenvalues, with unknown weights [31] (the number of these eigenvalues is equal to the number of signal components). Among infinitely many possible combinations of eigenvectors, the aim is to find the weights producing the most concentrated combination, as each individual signal component (mode) is more concentrated than any linear combination of components, as discussed in detail in [31]. Therefore, we engage concentration measures [18] to set the optimization criterion and perform the minimization in the space of the weights of linear combinations of eigenvectors.
We revisit our previous research from [28,31,33], and the main contributions are two-fold. First, the decomposition principles based on the eigenanalysis of the auto-correlation matrix [31,33] are reconsidered. Instead of exploiting a direct search [31] or a genetic algorithm [33], we show that the minimization of the concentration measure in the space of complex-valued coefficients, acting as weights of the eigenvectors that are linearly combined to form the components, can be performed using a steepest-descent-based methodology, originally used in the decomposition from [28]. The second contribution is the consideration of a practical application of the decomposition methodology.
The paper is organized as follows. After the Introduction, we present the basic theory behind the considered acoustic dispersive environment in Section 2. Section 3 presents the principles of multivariate signal decomposition of dispersive acoustic signals. The decomposition algorithm is summarized in Section 4. The theory is verified on numerical examples and additionally discussed in Section 5, and the paper ends with concluding remarks.

2. Dispersive Channels and Shallow Water Theory

Our primary goal is the decomposition of signals transmitted through dispersive channels. Decomposition assumes the separation of signal components while preserving the integrity of each component. Signals transmitted through dispersive channels are multicomponent and non-stationary, even in cases when the emitted signals have a simple form. This makes the challenging problem of decomposition, localization, and characterization of such signals an extensively studied topic [55,56,57,58,59,60,61,62,63,64,65,66,67]. The decomposition can be performed using the time-frequency phase-continuity of the signals [55], or using the mode characteristics of the signal [56]. After being transmitted through a dispersive environment, measured signals consist of several components called modes. The non-stationarity of these modes is a consequence of the frequency-dependent properties of the signal propagation media.
The dispersive acoustic environment is commonly studied within the context of shallow waters, defined as sea/ocean environments with a depth of less than D = 200 m [55,57,58,59,60,61,62,63,64,65,66,67]. The speed of signals traveling through water is affected by many factors, such as the salinity, the temperature, or the pressure of the water, but it is usually approximated as 1480–1500 m/s. Note that this speed is larger than the speed of signals traveling through the air, which is estimated at approximately 340–360 m/s. The analysis of such setups is typically very complex. Moreover, bottom properties and water volume add to this complexity, as well as noise caused by activities on the water surface and on the coastlines (commonly related to cavitation). Dispersivity of shallow waters occurs for many reasons, among which are the roughness of the bottom, the strength of the waves, and the cavitation level. Dispersive channels have varying frequency characteristics (phase and spectral content) during the transmission of the signal.

2.1. Normal Mode Solution

The propagation of sound in a shallow water environment is mathematically represented by the wave equations. Among several methods of deriving the solution of the wave equation, the most commonly used is the normal mode solution, based on solving depth-dependent equations using the method of variable separation. Further analysis will be developed based on the isovelocity waveguide model presented in Figure 1, which assumes a rigid boundary at the seabed. This further implies an ideal, constant sound speed c throughout the channel. Furthermore, channel models assume that the structure of a channel is a waveguide, where multiple normal modes are received as delayed and scaled versions of the transmitted signal [56,58,59,65]. Our aim is to decompose the received signal by extracting each mode separately. Such extracted modes can be used in further processing, such as IF estimation, characterization, and classification.
More general models assume a more complicated environment, where the boundary of the bottom depends on the nature of the ocean, such as the roughness, depending on the weather conditions and different environments in the ocean itself. These models take into account the scattering of the transmitted signal as well. Our future work will be oriented towards these models as well.

2.2. Problem Formulation—Signal Processing Approach

The practical setup shown in Figure 1 is considered next. In this setup, it is assumed that the transmitter is located in the water at depth $z_t$, whereas the receiver is located at depth $z_r$. It is assumed that the wave is transmitted through an isovelocity channel as in [55,56,57,61,62,63,67]. The distance between the transmitter and the receiver is r.
Taking into account the spectrum of the received signal, in the normal-mode case, the transfer function reads
$$H(\omega) = \sum_{p=1}^{+\infty} \frac{G_p(z_t)\,G_p(z_r)\exp\left(-j k_r(p,\omega)\, r\right)}{\sqrt{k_r(p,\omega)\, r}} = \sum_{p=1}^{+\infty} A_t(p,\omega) \exp\left(-j k_r(p,\omega)\, r\right), \qquad (1)$$
with $G_p(z_t)$ and $G_p(z_r)$ being the modal functions of the p-th mode corresponding to the transmitter and the receiver [55,56,65], and with the attenuation rate $A_t(p,\omega) = A(p,\omega)/r$. Angular frequency is denoted by $\omega$. The modes depend on the wavenumbers $k_r(p,\omega)$ [55]
$$k_r^2(p,\omega) = \left(\frac{\omega}{c}\right)^2 - \left(\frac{(p-0.5)\pi}{D}\right)^2. \qquad (2)$$
The multicomponent structure of the transfer function is dependent on the number of modes. The speed of sound propagation underwater is c = 1500 m/s.
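The dispersion relation above can be sketched numerically. The helper below is an illustrative implementation (not part of the original derivation); a complex square root is used so that frequencies below a modal cutoff simply yield an evanescent (imaginary) wavenumber.

```python
import numpy as np

# Wavenumber of the p-th mode, k_r^2(p, w) = (w/c)^2 - ((p - 0.5) * pi / D)^2.
# A complex square root is used: below the modal cutoff frequency the result
# is imaginary, i.e., the mode does not propagate.
def wavenumber(p, omega, c=1500.0, D=200.0):
    return np.sqrt(complex((omega / c) ** 2 - ((p - 0.5) * np.pi / D) ** 2))

# The cutoff angular frequency of mode p follows from setting k_r = 0:
def cutoff(p, c=1500.0, D=200.0):
    return (p - 0.5) * np.pi * c / D
```

At the cutoff frequency the wavenumber vanishes, and above it the wavenumber is real and the mode propagates.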
The response to a monochromatic signal
$$s(n) = \exp(j\omega_0 n) \qquad (3)$$
at the p-th mode can be written as
$$s_p(n) \approx A_t(p,\omega_0)\exp\left(j\omega_0 n - j k_r(p,\omega_0)\, r\right). \qquad (4)$$
The phase velocity of this signal is
$$\nu_p(\omega) = \frac{\omega}{k_r(p,\omega)} = \frac{\omega}{\sqrt{\left(\frac{\omega}{c}\right)^2 - \left(\frac{(p-0.5)\pi}{D}\right)^2}}. \qquad (5)$$
This is the horizontal velocity of the corresponding phase for the p-th mode. The energy propagation of the signal component is represented by the group velocity
$$u_p(\omega) = \frac{dr(t)}{dt} = \frac{d\omega}{dk_r(p,\omega)} = \frac{1}{\dfrac{dk_r(p,\omega)}{d\omega}} = \frac{1}{\dfrac{d}{d\omega}\sqrt{\left(\frac{\omega}{c}\right)^2 - \left(\frac{(p-0.5)\pi}{D}\right)^2}}. \qquad (6)$$
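Differentiating the wavenumber relation gives $dk_r/d\omega = \omega/(c^2 k_r)$, hence the closed form $u_p(\omega) = c^2 k_r(p,\omega)/\omega$ and the identity $\nu_p(\omega)\, u_p(\omega) = c^2$. A short numerical check under the isovelocity assumptions above:

```python
import numpy as np

# Phase velocity (5) and group velocity (6) of mode p. Differentiating the
# dispersion relation gives dk_r/dw = w / (c^2 k_r), hence u_p = c^2 k_r / w
# and the identity nu_p * u_p = c^2.
def mode_velocities(p, omega, c=1500.0, D=200.0):
    k_r = np.sqrt((omega / c) ** 2 - ((p - 0.5) * np.pi / D) ** 2)
    nu = omega / k_r           # phase velocity, Eq. (5)
    u = c ** 2 * k_r / omega   # group velocity, Eq. (6), closed form
    return nu, u
```

The phase velocity exceeds c and the group velocity stays below it, as expected for a waveguide mode.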
The received signal can be represented in the Fourier transform domain as a product of the Fourier transform of the transmitted signal, S ( ω ) and the transfer function H ( ω ) of the channel in the normal-mode form; that is
$$X(\omega) = S(\omega) H(\omega). \qquad (7)$$
In time domain, the received signal, x ( n ) , is the convolution of the transmitted signal, s ( n ) and the impulse response, h ( n ) , from (1), i.e.,
$$x(n) = s(n) \ast h(n). \qquad (8)$$
In the following sections, we present an efficient methodology for the decomposition of mode functions, which will make the problem of detecting and estimating the received signal parameters straightforward.
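As a rough illustration of (1), (7), and (8), the snippet below synthesizes a received signal by multiplying the spectrum of the transmitted signal with a truncated normal-mode transfer function. The modal functions $G_p(z_t)$, $G_p(z_r)$ are set to 1 and the sum is truncated to P modes; both are simplifying assumptions made only for this sketch.

```python
import numpy as np

# Sketch of the channel model: H(w) is a truncated sum of P modes (Eq. (1)),
# with the modal functions set to 1 (an assumption for illustration), and the
# received signal is obtained as X(w) = S(w) H(w) (Eq. (7)).
def received_signal(s, P=4, r=5000.0, c=1500.0, D=200.0, fs=1000.0):
    N = len(s)
    omega = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / fs)
    H = np.zeros(N, dtype=complex)
    for p in range(1, P + 1):
        k2 = (omega / c) ** 2 - ((p - 0.5) * np.pi / D) ** 2
        mask = k2 > 0                        # mode contributes above its cutoff
        k_r = np.sqrt(k2[mask])
        H[mask] += np.exp(-1j * k_r * r) / np.sqrt(k_r * r)
    return np.fft.ifft(np.fft.fft(s) * H)    # x(n) from Eq. (8), via Eq. (7)
```

Even a monochromatic input then appears at the receiver as a superposition of P mode contributions.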

3. Multivariate Decomposition

3.1. Multivariate (Multichannel) Signals

Multivariate or multichannel signals are acquired using multiple sensors. It is further assumed that C sensors at the receiving position are used for the acquisition of signal $x_R(n)$. Here, subscript R denotes the fact that the acquired signal is real-valued. All C sensors, placed at depth $z_r$, are part of the receiver. In the range direction, the sensor distances from the transmitter are $r + \delta_c$, $c = 1, 2, \dots, C$. The deviations $\delta_c$, $c = 1, 2, \dots, C$, are small compared to the distance, r, between the transmitter and receiver locations in Figure 1.
Since the measured signal, x R ( n ) , is real-valued, its analytic extension
$$x(n) = x_R(n) + j\mathcal{H}\{x_R(n)\} \qquad (9)$$
is assumed in the further multivariate decomposition setup, where H { x R ( n ) } is the Hilbert transform of this signal. This analytic form assumes only non-negative frequencies. Each sensor modifies the amplitude and the phase of the acquired signal. Therefore, the channel signals take the form a c ( n ) exp ( j ϕ c ( n ) ) = α c exp ( j φ c ) x ( n ) , for each sensor c = 1 , 2 , , C . When a monocomponent signal x ( n ) = A ( n ) exp ( j ψ ( n ) ) , is measured at sensor c, this yields
$$a_c(n)\exp(j\phi_c(n)) = \alpha_c \exp(j\varphi_c)\, x(n),$$
or $a_c(n)\cos(\phi_c(n))$ in the case of a real-valued signal. The corresponding analytic signal, $a_c(n)\exp(j\phi_c(n)) = a_c(n)\cos(\phi_c(n)) + j\mathcal{H}\{a_c(n)\cos(\phi_c(n))\}$, is a valid representation of the real amplitude-phase signal $a_c(n)\cos(\phi_c(n))$ if the spectrum of $a_c(n)$ is nonzero only within the frequency range $|\omega| < B$ and the spectrum of $\cos(\phi_c(n))$ occupies a non-overlapping (much) higher frequency range [5]. If variations of the amplitude, $a_c(n)$, are much slower than the variations of the phase, $\phi_c(n)$, then this signal is monocomponent [31]. A unified representation of a multichannel (multivariate) signal, acquired using C sensors, assumes the following vector form
$$\mathbf{x}(n) = \begin{bmatrix} x^{(1)}(n) \\ x^{(2)}(n) \\ \vdots \\ x^{(C)}(n) \end{bmatrix} = \begin{bmatrix} a_1(n)e^{j\phi_1(n)} \\ a_2(n)e^{j\phi_2(n)} \\ \vdots \\ a_C(n)e^{j\phi_C(n)} \end{bmatrix}, \quad n = 1, 2, \dots, N. \qquad (10)$$

3.2. Multivariate Multicomponent Signals

When the measured signal consists of a linear combination of P linearly independent components $s_p(n) = A_p(n)e^{j\psi_p(n)}$, $p = 1, 2, \dots, P$, it is commonly referred to as a multicomponent signal
$$x(n) = \sum_{p=1}^{P} s_p(n) = \sum_{p=1}^{P} A_p(n) e^{j\psi_p(n)}. \qquad (11)$$
Component amplitudes, $A_p(n)$, are characterized by slow-varying dynamics compared to the variations of the component phases, $\psi_p(n)$. Linear independence of the components assumes that no component can be represented as a linear combination of the other components for any considered time instant n.
Incorporation of the multicomponent signal definition (11) into the multichannel form (10) yields
$$\mathbf{x}(n) = \begin{bmatrix} x^{(1)}(n) \\ x^{(2)}(n) \\ \vdots \\ x^{(C)}(n) \end{bmatrix} = \begin{bmatrix} a_1(n)e^{j\phi_1(n)} \\ a_2(n)e^{j\phi_2(n)} \\ \vdots \\ a_C(n)e^{j\phi_C(n)} \end{bmatrix} = \begin{bmatrix} \sum_{p=1}^{P}\alpha_{1p}\,s_p(n)\,e^{j\varphi_{1p}} \\ \sum_{p=1}^{P}\alpha_{2p}\,s_p(n)\,e^{j\varphi_{2p}} \\ \vdots \\ \sum_{p=1}^{P}\alpha_{Cp}\,s_p(n)\,e^{j\varphi_{Cp}} \end{bmatrix}, \quad n = 1, 2, \dots, N, \qquad (12)$$
or, more briefly,
$$\begin{bmatrix} x^{(1)}(n) \\ x^{(2)}(n) \\ \vdots \\ x^{(C)}(n) \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1P} \\ a_{21} & a_{22} & \cdots & a_{2P} \\ \vdots & \vdots & \ddots & \vdots \\ a_{C1} & a_{C2} & \cdots & a_{CP} \end{bmatrix} \begin{bmatrix} s_1(n) \\ s_2(n) \\ \vdots \\ s_P(n) \end{bmatrix}, \qquad (13)$$
that is
$$\mathbf{x}(n) = \mathbf{A}\mathbf{s}(n), \qquad (14)$$
where the vector of signal components, s ( n ) is, for instant n, given by
$$\mathbf{s}(n) = \begin{bmatrix} s_1(n) \\ s_2(n) \\ \vdots \\ s_P(n) \end{bmatrix}, \quad n = 1, 2, \dots, N. \qquad (15)$$
Matrix A of size C × P , which relates the signal in the c-th channel, x ( c ) ( n ) with signal components, s p ( n ) , in form of a linear combination
$$x^{(c)}(n) = \sum_{p=1}^{P} a_{cp}\, s_p(n) = \sum_{p=1}^{P} \alpha_{cp}\, s_p(n)\, e^{j\varphi_{cp}} \qquad (16)$$
has the following form
$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1P} \\ a_{21} & a_{22} & \cdots & a_{2P} \\ \vdots & \vdots & \ddots & \vdots \\ a_{C1} & a_{C2} & \cdots & a_{CP} \end{bmatrix}, \qquad (17)$$
with elements being the complex constants $a_{cp} = \alpha_{cp}e^{j\varphi_{cp}}$, $c = 1, 2, \dots, C$, $p = 1, 2, \dots, P$. These constants linearly relate the channel signals with the signal components. Clearly, the maximum number of independent channels $x^{(1)}(n), x^{(2)}(n), \dots, x^{(C)}(n)$ in $\mathbf{x}(n)$ is
$$M = \min\{C, P\}, \qquad (18)$$
since $\operatorname{rank}\{\mathbf{A}\} \leq \min\{C, P\}$.
The relation between the C measured channel signals, $x^{(c)}(n)$, and the P components, $s_p(n)$, can be formed, taking into consideration all time instants, by introducing the $C \times N$ matrix $\mathbf{X}_{sen}$, with elements being the sensed signal values, and the $P \times N$ matrix $\mathbf{X}_{com}$, comprising the samples of the signal components $s_p(n)$. In that case, the relation is
$$\begin{bmatrix} x^{(1)}(1) & \cdots & x^{(1)}(N) \\ x^{(2)}(1) & \cdots & x^{(2)}(N) \\ \vdots & \ddots & \vdots \\ x^{(C)}(1) & \cdots & x^{(C)}(N) \end{bmatrix} = \mathbf{A} \begin{bmatrix} s_1(1) & \cdots & s_1(N) \\ s_2(1) & \cdots & s_2(N) \\ \vdots & \ddots & \vdots \\ s_P(1) & \cdots & s_P(N) \end{bmatrix}, \qquad (19)$$
or
$$\mathbf{X}_{sen} = \mathbf{A}\mathbf{X}_{com}.$$
Now we can introduce the autocorrelation matrix R of the sensed signal, whose eigenvectors will be used in the multivariate decomposition framework:
$$\mathbf{R} = \mathbf{X}_{sen}^H \mathbf{X}_{sen}, \qquad (20)$$
where $(\cdot)^H$ denotes the Hermitian transpose. Individually, elements of this matrix are products of $\mathbf{x}(n_1)$ and $\mathbf{x}^H(n_2)$ at the given instants $n_1$ and $n_2$:
$$R(n_1, n_2) = \mathbf{x}^H(n_2)\,\mathbf{x}(n_1) = \sum_{i=1}^{C} x^{(i)*}(n_2)\, x^{(i)}(n_1), \qquad (21)$$
where $\mathbf{x}(n_1) = [x^{(1)}(n_1)\; x^{(2)}(n_1)\; \cdots\; x^{(C)}(n_1)]^T$ is the column vector of sensed values at a given instant $n_1$. As will be shown next, the eigenvectors of the autocorrelation matrix, $\mathbf{R}$, corresponding to the largest eigenvalues, consist of linear combinations of signal components. This fact will be used to develop the algorithm for the extraction of those components.
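A minimal numerical illustration of this property, with an assumed random mixing matrix and two synthetic LFM components: the autocorrelation matrix (20) has exactly P significant eigenvalues.

```python
import numpy as np

# Two-component, C-channel mixture X_sen = A X_com (Eq. (19)), followed by
# R = X_sen^H X_sen (Eq. (20)). Since rank(X_sen) <= P, only P eigenvalues
# of R are (numerically) nonzero in the noise-free case.
rng = np.random.default_rng(0)
N, C, P = 128, 8, 2
n = np.arange(N)
s1 = np.exp(1j * 2 * np.pi * (0.05 * n + 0.0005 * n ** 2))   # LFM component
s2 = np.exp(1j * 2 * np.pi * (0.30 * n - 0.0005 * n ** 2))   # LFM component
X_com = np.vstack([s1, s2])                                  # P x N
A = rng.normal(size=(C, P)) + 1j * rng.normal(size=(C, P))   # mixing matrix
X_sen = A @ X_com                                            # C x N
R = X_sen.conj().T @ X_sen                                   # N x N, Eq. (20)
eigval = np.linalg.eigvalsh(R)[::-1]                         # descending order
```

Only the first P = 2 eigenvalues are significant; the remainder vanish up to numerical precision.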

3.3. Eigendecomposition of the Autocorrelation Matrix

It is well known that any square matrix $\mathbf{R}$, of dimensions $K \times K$, can be subjected to the eigenvalue decomposition
$$\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^H = \sum_{p=1}^{K} \lambda_p\, \mathbf{q}_p \mathbf{q}_p^H, \qquad (22)$$
with $\lambda_p$ being the eigenvalues and $\mathbf{q}_p$ being the corresponding eigenvectors of matrix $\mathbf{R}$. Matrix $\boldsymbol{\Lambda}$ contains the eigenvalues $\lambda_p$, $p = 1, 2, \dots, K$, on the main diagonal and zeros elsewhere. Matrix $\mathbf{Q} = [\mathbf{q}_1, \mathbf{q}_2, \dots, \mathbf{q}_K]$ contains the eigenvectors $\mathbf{q}_p$ as its columns. Since $\mathbf{R}$ is Hermitian, its eigenvectors are orthogonal.
From definition (20) and the relation $\mathbf{X}_{sen} = \mathbf{A}\mathbf{X}_{com}$, the autocorrelation matrix $\mathbf{R}$ can be rewritten as
$$\mathbf{R} = \mathbf{X}_{sen}^H \mathbf{X}_{sen} = \mathbf{X}_{com}^H \mathbf{A}^H \mathbf{A}\, \mathbf{X}_{com} = \sum_{i=1}^{P}\sum_{j=1}^{P} \bar{a}_{ij}\, \mathbf{s}_i \mathbf{s}_j^H, \qquad (23)$$
where $\bar{a}_{ij}$ denotes the elements of matrix $\mathbf{A}^H\mathbf{A}$ and $\mathbf{s}_i = [s_i(1), s_i(2), \dots, s_i(N)]^T$. Elements of matrix $\mathbf{R}$ are
$$R(n_1, n_2) = \sum_{i=1}^{P}\sum_{j=1}^{P} \bar{a}_{ij}\, s_i(n_1)\, s_j^*(n_2) = \begin{bmatrix} s_1^*(n_2), s_2^*(n_2), \dots, s_P^*(n_2) \end{bmatrix} \mathbf{A}^H \mathbf{A} \begin{bmatrix} s_1(n_1) \\ s_2(n_1) \\ \vdots \\ s_P(n_1) \end{bmatrix}. \qquad (24)$$
Based on the decomposition of matrix $\mathbf{R}$ in terms of its eigenvalues and eigenvectors, we further have
$$\mathbf{R} = \sum_{p=1}^{M} \lambda_p\, \mathbf{q}_p \mathbf{q}_p^H = \sum_{i=1}^{P}\sum_{j=1}^{P} \bar{a}_{ij}\, \mathbf{s}_i \mathbf{s}_j^H, \qquad (25)$$
with $M = \min\{C, P\}$. It will be further assumed that the number of sensors, C, is such that $C \geq P$. In that case, there are $M = P$ eigenvectors in (25). Two general cases can be further discussed:
  • Non-overlapped components. When no two components $\mathbf{s}_i$ and $\mathbf{s}_j$ overlap in the time-frequency plane, these components are orthogonal. In that case, the right side of (25) becomes
    $$\mathbf{R} = \sum_{p=1}^{P} \Bigl(\sum_{j=1}^{P} \bar{a}_{pj}\Bigr)\, \mathbf{s}_p \mathbf{s}_p^H = \sum_{p=1}^{P} \kappa_p\, \mathbf{s}_p \mathbf{s}_p^H = \sum_{p=1}^{P} \lambda_p\, \mathbf{q}_p \mathbf{q}_p^H, \qquad (26)$$
    where $\kappa_p = \sum_{j=1}^{P} \bar{a}_{pj}$. The considered case of non-overlapped (orthogonal) components further implies that
    $$\kappa_p\, \mathbf{s}_p = \lambda_p\, \mathbf{q}_p, \quad p = 1, 2, \dots, P. \qquad (27)$$
  • Partially overlapped components. Based on (25), since the partially overlapped components are non-orthogonal, that is, such components are linearly dependent, the eigenvectors can be expressed as linear combinations of the components
    $$\begin{aligned} \mathbf{q}_1 &= \xi_{11}\mathbf{s}_1 + \xi_{21}\mathbf{s}_2 + \dots + \xi_{P1}\mathbf{s}_P \\ \mathbf{q}_2 &= \xi_{12}\mathbf{s}_1 + \xi_{22}\mathbf{s}_2 + \dots + \xi_{P2}\mathbf{s}_P \\ &\;\;\vdots \\ \mathbf{q}_M &= \xi_{1M}\mathbf{s}_1 + \xi_{2M}\mathbf{s}_2 + \dots + \xi_{PM}\mathbf{s}_P, \end{aligned} \qquad (28)$$
    with $M = \min\{C, P\}$; i.e., for the assumed $C \geq P$, $M = P$.

3.4. Components as the Most Concentrated Linear Combinations of Eigenvectors

Based on (28) and for assumed M = P , each signal component, s p can be expressed as a linear combination of eigenvectors q p of matrix R , p = 1 , 2 , , P ; that is
$$\mathbf{s}_p = \gamma_{1p}\mathbf{q}_1 + \gamma_{2p}\mathbf{q}_2 + \dots + \gamma_{Pp}\mathbf{q}_P, \qquad (29)$$
where $\gamma_{ip}$, $i = 1, 2, \dots, P$, $p = 1, 2, \dots, P$, are unknown coefficients. Obviously, there are $M = P$ linear equations for P components, with $P^2$ unknown weights. Among infinitely many solutions of this underdetermined system of equations, we aim at finding those combinations that produce the signal components. Moreover, since components are partially overlapped, once one component is detected, its contribution should be removed from all equations (linear combinations of eigenvectors) in order to avoid detecting it again.
Obviously, for the detection of linear combinations of eigenvectors, which represent signal components, a proper detection criterion shall be established. Since non-stationary signals can be suitably represented using time-frequency representations, and signal components tend to be concentrated along their instantaneous frequencies, our criterion will be based on time-frequency representations.
Time-frequency signal analysis provides a mathematical framework for a joint representation of signals in time and frequency domains. If w ( m ) denotes a real-valued, symmetric window function of length N w , then signal s p ( n ) can be represented using the STFT
$$STFT_p(n, k) = \sum_{m=0}^{N_w - 1} w(m)\, s_p(n + m)\, e^{-j2\pi mk/N_w}, \qquad (30)$$
which renders the frequency content of the portion of the signal around each considered instant n, localized by the window function $w(n)$.
To determine the level of the signal concentration in the time-frequency domain, we can exploit concentration measures. Among various approaches, inspired by the recent compressed sensing paradigm, measures based on the $\ell_\rho$-norm of the STFT have been used lately [18]
$$\mathcal{M}\left[STFT_p(n, k)\right] = \left\| STFT(n, k) \right\|_\rho^\rho = \sum_n \sum_k \left| STFT(n, k) \right|^\rho = \sum_n \sum_k SPEC^{\rho/2}(n, k), \qquad (31)$$
where $SPEC(n, k) = |STFT(n, k)|^2$ represents the commonly used spectrogram, and $0 \leq \rho \leq 1$. For $\rho = 1$, the $\ell_1$-norm is obtained.
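The behavior of the measure can be illustrated with a simple sketch (an illustrative STFT with a Hann window and hop $N_w/2$; not the exact setup from the experiments): after energy normalization, a single component produces a smaller $\ell_1$ measure than a two-component mixture.

```python
import numpy as np

# l1-norm concentration measure (Eq. (31) with rho = 1) computed from a
# simple sliding-window STFT. The combined signal is normalized to unit
# energy before measuring, so concentration alone decides the comparison.
def stft(x, Nw=32):
    w = np.hanning(Nw)
    frames = [w * x[m:m + Nw] for m in range(0, len(x) - Nw + 1, Nw // 2)]
    return np.fft.fft(np.asarray(frames), axis=1)

def l1_measure(x, Nw=32):
    x = x / np.linalg.norm(x)
    return float(np.sum(np.abs(stft(x, Nw))))

n = np.arange(256)
s1 = np.exp(1j * 2 * np.pi * 0.10 * n)   # one concentrated component
s2 = np.exp(1j * 2 * np.pi * 0.35 * n)   # a second, well-separated component
```

Minimizing this measure over the weights of a linear combination of eigenvectors therefore steers the combination toward a single component.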
We consider P components, $s_p(n)$, $p = 1, 2, \dots, P$. Each of these components has finite support in the time-frequency domain, $\mathcal{P}_p$, with areas of support $\Pi_p$, $p = 1, 2, \dots, P$. Supports of partially overlapped components are also partially overlapped. Furthermore, we make the realistic assumption that no components overlap completely. Assume that $\Pi_1 \leq \Pi_2 \leq \dots \leq \Pi_P$.
Consider further the concentration measure $\mathcal{M}[STFT(n, k)]$ of the STFT of
$$y = \eta_1 \mathbf{q}_1 + \eta_2 \mathbf{q}_2 + \dots + \eta_P \mathbf{q}_P, \qquad (32)$$
for $\rho = 0$. If all components are present in this linear combination, then the concentration measure $\|STFT(n, k)\|_0$, obtained for $\rho = 0$ in (31), will be equal to the area of $\mathcal{P}_1 \cup \mathcal{P}_2 \cup \dots \cup \mathcal{P}_P$.
If the coefficients $\eta_p$, $p = 1, 2, \dots, P$, are varied, then the minimum value of the $\ell_0$-norm based concentration measure is achieved for the coefficients $\eta_1 = \gamma_{11}, \eta_2 = \gamma_{21}, \dots, \eta_P = \gamma_{P1}$, corresponding to the most concentrated signal component $s_1(n)$, with the smallest area of support, $\Pi_1$, since we have assumed, without loss of generality, that $\Pi_1 \leq \Pi_2 \leq \dots \leq \Pi_P$ holds. Note that, due to the calculation and sensitivity issues related to the $\ell_0$-norm, the $\ell_1$-norm is widely used within the compressive sensing area as its alternative, since under reasonable and realistic conditions, it produces the same results [31]. Therefore, it can be considered that the areas of the domains of support in this context can be measured using the $\ell_1$-norm.
The problem of extracting the first component, based on eigenvectors of the autocorrelation matrix of the input signal, can be formulated as follows
$$[\beta_{11}, \beta_{21}, \dots, \beta_{P1}] = \arg\min_{\eta_1, \dots, \eta_P} \left\| STFT(n, k) \right\|_1. \qquad (33)$$
The resulting coefficients produce the first component (candidate)
$$\bar{\mathbf{s}}_1 = \beta_{11}\mathbf{q}_1 + \beta_{21}\mathbf{q}_2 + \dots + \beta_{P1}\mathbf{q}_P. \qquad (34)$$
Note that if $\beta_{11} = \gamma_{11}, \beta_{21} = \gamma_{21}, \dots, \beta_{P1} = \gamma_{P1}$ holds, then the component is exact; that is, $\bar{\mathbf{s}}_1 = \mathbf{s}_1$. When the number of signal components is larger than two, the concentration measure in (33) can have several local minima in the space of the unknown coefficients $\eta_1, \eta_2, \dots, \eta_P$, corresponding not only to individual components but also to linear combinations of two, three, or more components. Depending on the minimization procedure, the algorithm may find such a local minimum; that is, a set of coefficients producing a combination of components instead of an individual component. In that case, a component has not been successfully extracted, since $\bar{\mathbf{s}}_1 \neq \mathbf{s}_1$ in (34); however, as will be discussed next, this issue does not affect the final result, as the decomposition procedure continues with this local minimum eliminated.

3.5. Extraction of Detected Component and Further Decomposition

Upon detection of the first local minimum, being a signal component or a linear combination of several components, $\bar{\mathbf{s}}_1$, the first eigenvector, $\mathbf{q}_1$, should be replaced by $\bar{\mathbf{s}}_1$ in the linear combination
$$y = \eta_1 \mathbf{q}_1 + \eta_2 \mathbf{q}_2 + \dots + \eta_P \mathbf{q}_P, \qquad (35)$$
i.e., q 1 = s ¯ 1 is further used as the first eigenvector. However, since (28) holds, the contribution of this detected component (or linear combination of components) is still present in remaining eigenvectors q p , p = 2 , 3 , , P and shall be removed from these eigenvectors as well. To this aim, we utilize the signal deflation theory [31], and remove the projections of this component from remaining eigenvectors using
$$\mathbf{q}_p = \frac{\mathbf{q}_p - (\mathbf{q}_1^H \mathbf{q}_p)\,\mathbf{q}_1}{\sqrt{1 - |\mathbf{q}_1^H \mathbf{q}_p|^2}}. \qquad (36)$$
This ensures that s ¯ 1 is not repeatedly detected afterward. If s ¯ 1 = s 1 , then the first component is found and extracted, whereas its projection on other eigenvectors is removed.
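A sketch of the deflation step (36): after removing the projection of a detected unit-norm component, the updated eigenvector is orthogonal to it and renormalized to unit energy.

```python
import numpy as np

# Deflation (Eq. (36)): remove the projection of the detected (unit-norm)
# component q1 from eigenvector qp, then renormalize. The result is
# orthogonal to q1, so the same component cannot be detected again.
def deflate(qp, q1):
    b = np.vdot(q1, qp)                      # q1^H qp (vdot conjugates q1)
    return (qp - b * q1) / np.sqrt(1.0 - abs(b) ** 2)

rng = np.random.default_rng(1)
q1 = rng.normal(size=64) + 1j * rng.normal(size=64)
q1 /= np.linalg.norm(q1)
qp = rng.normal(size=64) + 1j * rng.normal(size=64)
qp /= np.linalg.norm(qp)
```

The denominator follows from $\|\mathbf{q}_p - b\,\mathbf{q}_1\|^2 = 1 - |b|^2$ for unit-norm vectors, so the output keeps unit energy.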
The described procedure is then repeated iteratively, with the linear combination $y = \eta_1 \mathbf{q}_1 + \eta_2 \mathbf{q}_2 + \dots + \eta_P \mathbf{q}_P$, with the first eigenvector $\mathbf{q}_1 = \bar{\mathbf{s}}_1$ and the eigenvectors $\mathbf{q}_p$, $p = 2, 3, \dots, P$, modified according to (36). Upon detecting the second component (or a linear combination of a small number of components), $\bar{\mathbf{s}}_2$, the second eigenvector is replaced, $\mathbf{q}_2 = \bar{\mathbf{s}}_2$, whereas its projections are removed from the remaining eigenvectors using
$$\mathbf{q}_p = \frac{\mathbf{q}_p - (\mathbf{q}_2^H \mathbf{q}_p)\,\mathbf{q}_2}{\sqrt{1 - |\mathbf{q}_2^H \mathbf{q}_p|^2}}. \qquad (37)$$
The process repeats until all components are detected and extracted. These principles are incorporated into the decomposition algorithm presented in the next section.

4. The Decomposition Algorithm and Concentration Measure Minimization

4.1. Decomposition Algorithm

The decomposition procedure can be summarized with the following steps:
  • For given multivariate signal
    $$\mathbf{x}(n) = \begin{bmatrix} x^{(1)}(n) \\ x^{(2)}(n) \\ \vdots \\ x^{(C)}(n) \end{bmatrix}$$
    calculate the input autocorrelation matrix
    $$\mathbf{R} = \mathbf{X}_{sen}^H \mathbf{X}_{sen},$$
    where
    $$\mathbf{X}_{sen} = \begin{bmatrix} x^{(1)}(1) & \cdots & x^{(1)}(N) \\ x^{(2)}(1) & \cdots & x^{(2)}(N) \\ \vdots & \ddots & \vdots \\ x^{(C)}(1) & \cdots & x^{(C)}(N) \end{bmatrix}.$$
  • Find eigenvectors q p and eigenvalues λ p , p = 1 , 2 , , P of matrix R .
    It should be noted that the number of components, P, can be estimated based on the eigenvalues of matrix $\mathbf{R}$. Namely, as discussed in [31], the P largest eigenvalues of matrix $\mathbf{R}$ correspond to signal components. These eigenvalues are larger than the remaining $N - P$ eigenvalues. This property holds even in the presence of a high level of noise: a threshold for separating the eigenvalues corresponding to signal components can be easily determined based on the input noise variance [28].
  • Initialize variables $N_u = 0$ and $k = 0$. Variable $N_u$ will store the number of updates of the eigenvectors $\mathbf{q}_p$, $p \neq i$, performed when the projection of a detected component (candidate) is removed from them. Variable k represents the index of the detected components.
  • For i = 1 , 2 , , P , repeat the following steps:
    (a)
    Solve minimization problem
    $$\min_{\beta_{1k}, \dots, \beta_{Pk}} \left\| \mathrm{STFT}\left\{ \frac{1}{C}\sum_{p=1}^{P} \beta_{pk}\,\mathbf{q}_p \right\} \right\|_1 \quad \text{subject to } \beta_{ik} = 1,$$
    where $\mathrm{STFT}\{\cdot\}$ is the STFT operator. The signal $y = \frac{1}{C}\sum_{p=1}^{P}\beta_{pk}\,\mathbf{q}_p$ is scaled with
    $$C = \left\| \sum_{p=1}^{P} \beta_{pk}\,\mathbf{q}_p \right\|_2$$
    in order to normalize energy of the combined signal to 1. Coefficients β 1 k , β 2 k , , β P k are obtained as a result of the minimization.
    (b)
    Increment component index k k + 1
    (c)
    If $\beta_{pk} \neq 0$ holds for any $p \neq i$, then
    • Increment variable N u N u + 1
    • Upon replacing the i-th eigenvector by the detected component,
      $$\mathbf{q}_i = \frac{1}{C}\sum_{p=1}^{P} \beta_{pk}\,\mathbf{q}_p,$$
      remove the projections of the detected component (candidate) from the remaining eigenvectors. For $l = i + 1, i + 2, \dots, P$, repeat:
      • $b = \mathbf{q}_i^H \mathbf{q}_l$
      • $\mathbf{q}_l = \dfrac{1}{\sqrt{1 - |b|^2}}\left(\mathbf{q}_l - b\,\mathbf{q}_i\right)$
  • If N u > 0 , return to Step 3.
Finally, as a result, we obtain the number of components, P, and the set of extracted components, $\mathbf{q}_1, \mathbf{q}_2, \dots, \mathbf{q}_P$.
It should be noted that checking whether N_u > 0 holds in Step 5 is crucial for removing possibly detected local minima of the concentration measure that correspond not to individual components but to a linear combination of several components. Namely, if this situation occurs, then upon applying signal deflation by removing the projection of the linear combination of components from the other eigenvectors, a linear dependence among the eigenvectors will still remain, preventing N_u from falling to zero. This returns the algorithm to Step 3, and the component detection procedure repeats, but with the local minimum removed from the concentration measure, since all the eigenvectors were already updated in the previous cycle. Note that the component index k is reset to zero in this case.
Moreover, it should be emphasized that, while the presented procedure produces P eigenvectors, exactly equal to the given number of components, this number is not always known a priori. In practical applications, it can be determined based on the eigenvalues of matrix R. As will be illustrated in the numerical examples, the largest eigenvalues correspond to the signal components [28,31], while the remaining eigenvalues correspond to the noise. Therefore, a simple threshold can be used to calculate the exact number of signal components: we simply count the number of eigenvalues larger than a predefined threshold T, a small positive constant. In the presence of noise, the threshold T should be at least equal to the noise variance. The stronger the noise, the larger the eigenvalues corresponding to the noise (and thus the larger the threshold T should be). Of course, the procedure works without exact information about the number of components: for p > P, the eigenvectors q_p contain only noise after the decomposition is finished.
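The projection-removal (deflation) step above can be sketched with NumPy. This is an illustrative sketch, not the paper's code: two unit-norm random vectors stand in for eigenvectors of R, with q_i playing the role of a detected component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two eigenvectors (unit-norm, generally non-orthogonal after
# earlier updates); q_i holds a detected component (candidate).
q_i = rng.standard_normal(64) + 1j * rng.standard_normal(64)
q_i /= np.linalg.norm(q_i)
q_l = rng.standard_normal(64) + 1j * rng.standard_normal(64)
q_l /= np.linalg.norm(q_l)

# Remove the projection of the detected component q_i from q_l:
#   b   = q_i^H q_l
#   q_l = (q_l - b q_i) / sqrt(1 - |b|^2)
b = np.vdot(q_i, q_l)                  # q_i^H q_l (np.vdot conjugates its first argument)
q_l = (q_l - b * q_i) / np.sqrt(1.0 - np.abs(b) ** 2)

residual = np.abs(np.vdot(q_i, q_l))   # projection onto q_i after deflation (~0)
norm_l = np.linalg.norm(q_l)           # the 1/sqrt(1 - |b|^2) factor restores unit energy
```

The division by √(1 − |b|²) is what keeps the deflated eigenvector at unit energy, so subsequent concentration measures remain comparable.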

4.2. Concentration Measure Minimization

The concentration measure minimization is performed in the steepest descent manner, as presented in Algorithm 1. The coefficient β_p^k = 1 is kept fixed for p = i, whereas the values of the other coefficients are varied by ±Δ. Note that the real and imaginary parts are varied separately.
For the real part, and for each p = 1, 2, …, P, p ≠ i, the ℓ_1-norm-based concentration measure is calculated in both cases: for the auxiliary signal formed when the given coefficient is increased by Δ, and for the auxiliary signal formed when Δ is subtracted from the given coefficient.
For illustration, observe the linear combination y = Σ_{p=1}^{P} β_p^k q_p. When Δ is added to a given β_p^k, p ≠ i, p = p_0, the signal
y_r^+ = Σ_{p=1, p≠p_0}^{P} β_p^k q_p + (β_{p_0}^k + Δ)q_{p_0} = Σ_{p=1}^{P} β_p^k q_p + Δq_{p_0} = y + Δq_{p_0}
is formed. For this signal, with energy normalized using the ℓ_2-norm, that is,
y_r^+/‖y_r^+‖_2 = (y + Δq_{p_0})/‖y + Δq_{p_0}‖_2,
the concentration measure M_r^+ is calculated as the ℓ_1-norm of the corresponding STFT coefficients,
M_r^+ = ‖STFT{y_r^+}‖_1.
Similarly, for the coefficient β_{p_0}^k changed in the opposite direction, that is, by −Δ, the measure
M_r^− = ‖STFT{y_r^−}‖_1
is calculated for the signal
y_r^−/‖y_r^−‖_2 = (y − Δq_{p_0})/‖y − Δq_{p_0}‖_2.
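These perturbed measures can be sketched with NumPy. This is an illustrative sketch, not the paper's code: the STFT is a simple non-overlapping rectangular-window one, and the two chirps are hypothetical stand-ins for eigenvectors. Since y is chosen here as a single, already concentrated component, every ±Δ perturbation spreads energy in the time-frequency plane, so all four measures exceed M_new.

```python
import numpy as np

def stft_l1(x, w=32):
    # l1-norm of a non-overlapping rectangular-window STFT of x
    return np.sum(np.abs(np.fft.fft(x.reshape(-1, w), axis=1)))

n = np.arange(256)
q1 = np.exp(1j * 2 * np.pi * (0.05 * n + 0.0004 * n**2))   # stand-ins for two
q2 = np.exp(1j * 2 * np.pi * (0.48 * n - 0.0004 * n**2))   # eigenvectors (chirps)
q1 /= np.linalg.norm(q1)
q2 /= np.linalg.norm(q2)

y, delta = q1, 0.1
M_new = stft_l1(y / np.linalg.norm(y))

# Measures for +/-delta perturbations of the real part of the coefficient of q2 ...
M_r_plus  = stft_l1((y + delta * q2) / np.linalg.norm(y + delta * q2))
M_r_minus = stft_l1((y - delta * q2) / np.linalg.norm(y - delta * q2))
# ... and of its imaginary part
M_i_plus  = stft_l1((y + 1j * delta * q2) / np.linalg.norm(y + 1j * delta * q2))
M_i_minus = stft_l1((y - 1j * delta * q2) / np.linalg.norm(y - 1j * delta * q2))

# Finite-difference gradient for this coefficient, scaled as in Algorithm 1
grad = 8 * delta * (M_r_plus - M_r_minus) / M_new \
     + 1j * 8 * delta * (M_i_plus - M_i_minus) / M_new
```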
Algorithm 1 Minimization procedure
Input:
    • Vectors q_1, q_2, …, q_P
    • Index i of the vector q_i whose coefficient is kept at unity, β_i^k = 1
    • Required precision ε
 1: β_p^k ← 1 for p = i and β_p^k ← 0 for p ≠ i, for p = 1, 2, …, P
 2: M_old ← ∞
 3: Δ ← 0.1
 4: repeat
 5:     y ← Σ_{p=1}^{P} β_p^k q_p
 6:     M_new ← ‖STFT{y/‖y‖_2}‖_1
 7:     if M_new > M_old then
 8:         Δ ← Δ/2
 9:         β_p^k ← β_p^k + ∇_p, for p = 1, 2, …, P    ▹ Cancel the last coefficients update
10:         y ← Σ_{p=1}^{P} β_p^k q_p
11:     else
12:         M_old ← M_new
13:     end if
14:     for p = 1, 2, …, P do
15:         if p ≠ i then
16:             M_r^+ ← ‖STFT{(y + Δq_p)/‖y + Δq_p‖_2}‖_1
17:             M_r^− ← ‖STFT{(y − Δq_p)/‖y − Δq_p‖_2}‖_1
18:             M_i^+ ← ‖STFT{(y + jΔq_p)/‖y + jΔq_p‖_2}‖_1
19:             M_i^− ← ‖STFT{(y − jΔq_p)/‖y − jΔq_p‖_2}‖_1
20:             ∇_p ← 8Δ(M_r^+ − M_r^−)/M_new + j8Δ(M_i^+ − M_i^−)/M_new
21:         else
22:             ∇_p ← 0
23:         end if
24:     end for
25:     β_p^k ← β_p^k − ∇_p, for p = 1, 2, …, P    ▹ Coefficients update
26: until Σ_{p=1}^{P} |∇_p|² is below the required precision ε
Output:
    • Coefficients β_1^k, β_2^k, …, β_P^k
Since each considered coefficient β_{p_0}^k is, in general, complex-valued, the same procedure is repeated for the imaginary parts of the coefficients. Therefore, the signals
y_i^+/‖y_i^+‖_2 = (y + jΔq_{p_0})/‖y + jΔq_{p_0}‖_2
and
y_i^−/‖y_i^−‖_2 = (y − jΔq_{p_0})/‖y − jΔq_{p_0}‖_2
are formed, serving as the basis to calculate the corresponding concentration measures
M_i^+ = ‖STFT{y_i^+}‖_1
and
M_i^− = ‖STFT{y_i^−}‖_1.
Now, based on the calculated concentration measures for the variations of the real and imaginary parts, the concentration measure gradient ∇_p is calculated and used to determine the direction of the update of β_{p_0}^k:
∇_p = 8Δ(M_r^+ − M_r^−)/M_new + j8Δ(M_i^+ − M_i^−)/M_new,
where M_new, used for scaling the gradient, is calculated as the concentration measure of
y = Σ_{p=1}^{P} β_p^k q_p,
scaled by its energy, before the coefficient updates; that is,
M_new = ‖STFT{y/‖y‖_2}‖_1.
For the coefficient with p = i, the gradient is set to zero, ∇_i = 0, meaning that this coefficient is not updated.
Each coefficient β_p^k is then updated using the calculated gradient, in the steepest descent manner:
β_p^k ← β_p^k − ∇_p, for p = 1, 2, …, P.
The process is repeated until Σ_{p=1}^{P} |∇_p|² becomes smaller than a predefined precision ε.
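The complete minimization loop can be sketched in NumPy. This is a condensed, illustrative sketch of Algorithm 1, not the paper's code: the concentration measure is a non-overlapping rectangular-window STFT, and two synthetic chirps are mixed into stand-ins for the eigenvectors q_1, q_2 (all parameters here are assumptions for the demo). Accepted iterations never increase the measure, and the combination converges toward one pure component.

```python
import numpy as np

def stft_l1(x, w=32):
    # l1-norm of a non-overlapping rectangular-window STFT:
    # the concentration measure being minimized
    return np.sum(np.abs(np.fft.fft(x.reshape(-1, w), axis=1)))

# Two unit-energy chirps with disjoint time-frequency supports ...
n = np.arange(256)
s1 = np.exp(1j * 2 * np.pi * (0.05 * n + 0.0004 * n**2))
s2 = np.exp(1j * 2 * np.pi * (0.48 * n - 0.0004 * n**2))
s1 /= np.linalg.norm(s1)
s2 /= np.linalg.norm(s2)

# ... and stand-ins for two eigenvectors (asymmetric mixtures of the components)
q = [0.8 * s1 + 0.6 * s2, 0.6 * s1 - 0.8 * s2]
P, i = 2, 0                              # q_i is kept with unity coefficient

beta = np.array([1.0 + 0j if p == i else 0j for p in range(P)])
grad = np.zeros(P, dtype=complex)
M_old, delta, eps = np.inf, 0.1, 1e-8
M_init = stft_l1(q[i] / np.linalg.norm(q[i]))

for _ in range(300):
    y = sum(beta[p] * q[p] for p in range(P))
    M_new = stft_l1(y / np.linalg.norm(y))
    if M_new > M_old:                    # measure increased:
        delta /= 2                       # refine the step and
        beta = beta + grad               # cancel the last coefficients update
        y = sum(beta[p] * q[p] for p in range(P))
    else:
        M_old = M_new
    for p in range(P):
        if p != i:
            Mrp = stft_l1((y + delta * q[p]) / np.linalg.norm(y + delta * q[p]))
            Mrm = stft_l1((y - delta * q[p]) / np.linalg.norm(y - delta * q[p]))
            Mip = stft_l1((y + 1j * delta * q[p]) / np.linalg.norm(y + 1j * delta * q[p]))
            Mim = stft_l1((y - 1j * delta * q[p]) / np.linalg.norm(y - 1j * delta * q[p]))
            grad[p] = 8 * delta * (Mrp - Mrm) / M_new \
                    + 1j * 8 * delta * (Mip - Mim) / M_new
        else:
            grad[p] = 0                  # coefficient of q_i is never updated
    beta = beta - grad                   # steepest-descent update
    if np.sum(np.abs(grad) ** 2) < eps:
        break

# Overlap of the final combination with the closest pure component
y = sum(beta[p] * q[p] for p in range(P))
y /= np.linalg.norm(y)
match = max(abs(np.vdot(s1, y)), abs(np.vdot(s2, y)))
```

The backtracking on lines with `delta /= 2` mirrors Algorithm 1: whenever an update overshoots and the measure grows, the update is undone and the exploration step is halved, which makes the accepted measure sequence non-increasing.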

5. Results

For the visual presentation of the results, the discrete Wigner distribution (pseudo-Wigner distribution) will be used in our numerical examples. For a discrete signal x(n), this second-order time-frequency representation is calculated according to
WD(n, k) = Σ_{m=0}^{N_w−1} w(m)w(−m) x(n + m)x*(n − m) e^{−j4πmk/N_w},
where w(n) is a window function of length N_w.
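The pseudo-Wigner distribution can be sketched as follows. This is an illustrative implementation, not the paper's code: a rectangular window and a symmetric lag range are assumed, and the frequency bins are restricted to k = 0, …, N_w/2 − 1, since the factor 2 in the exponent halves the unambiguous frequency range of the discrete WD. For a pure tone at normalized frequency f, the ridge appears at bin f·N_w, a quick sanity check of that scaling.

```python
import numpy as np

def pseudo_wd(x, n, Nw=64):
    # Discrete pseudo-Wigner distribution of x at time index n:
    # lag m runs over -Nw/2 .. Nw/2 - 1 (rectangular window), frequency
    # bins over k = 0 .. Nw/2 - 1; the real part is taken, the WD being
    # real-valued up to discretization effects.
    m = np.arange(-Nw // 2, Nw // 2)
    r = x[n + m] * np.conj(x[n - m])          # instantaneous autocorrelation
    k = np.arange(Nw // 2)
    return np.real(np.exp(-4j * np.pi * np.outer(k, m) / Nw) @ r)

# Sanity check: a pure tone at f = 0.125 peaks at bin f * Nw = 8
N, f = 256, 0.125
x = np.exp(2j * np.pi * f * np.arange(N))
wd = pseudo_wd(x, n=128)
peak_bin = int(np.argmax(wd))
```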
For Examples 1, 2, and 3, the quality of the decomposition will be determined based on two criteria:
  • The WD calculated for the pth original component (the signal is given analytically), denoted by WD_p^o(n, k) = WD{s_p}, is compared with WD_p^e(n, k) = WD{ŝ_p}, the WD calculated for the pth extracted component, for p = 1, 2, …, P. Here, ŝ_p denotes the vector of the pth extracted component, whereas s_p is the actual (original) pth signal component.
  • Estimation results for the discrete IFs obtained from the two previous WDs are compared by means of the mean squared error (MSE) for each pair of components. The IF estimate based on the WD of the original pth component, WD_p^o(n, k), p = 1, 2, …, P, is calculated as [3]
    k_p^o(n) = arg max_k WD_p^o(n, k),      (54)
    whereas the IF estimate based on the WD of the pth component extracted by the proposed approach is calculated as
    k_p^e(n) = arg max_k WD_p^e(n, k).
    Since the extracted components do not appear in any particular order after the decomposition is finished, the corresponding pairs of original and extracted components are automatically determined using the following procedure:
    • For p = 1, 2, …, P, repeat steps (a)–(f):
      (a)
      Calculate k_p^o(n) based on (54) for the analytically defined component s_p.
      (b)
      Run the decomposition algorithm. Use only the eigenvectors corresponding to the largest P eigenvalues. Each of these eigenvectors, q_p, contains an extracted signal component. If P is not given, estimate the number of components, P, as the number of eigenvalues, λ_p, of matrix R larger than the threshold T = σ_ε² + 10⁻⁴.
      (c)
      Initialize a set E to store the errors between the IFs estimated based on the given original component s_p and the extracted (unordered) components q_i, i = 1, 2, …, P, being the outputs of the decomposition procedure.
      (d)
      For each extracted component q_1, q_2, …, q_P, repeat steps i–iii:
      i. Calculate the IF estimate k_i^e(n) as
         k_i^e(n) ← arg max_k WD{q_i}.
      ii. Calculate the mean squared error (MSE) between k_p^o(n) and k_i^e(n) as
         MSE(i) ← (1/N) Σ_{n=0}^{N−1} (k_p^o(n) − k_i^e(n))².
      iii. Update E ← E ∪ {MSE(i)}.
      (e)
      p̂ ← arg min_i MSE(i).
      (f)
      ŝ_p ← q_p̂ is the pth estimated component, corresponding to the original component s_p.
Upon determining the pairs of original and estimated components, (s_p, ŝ_p), the respective IF estimation MSE is calculated for each pair as
MSE_p = (1/N) Σ_{n=0}^{N−1} (k_p^o(n) − k_p^e(n))², p = 1, 2, …, P,      (56)
where k_p^e(n) = arg max_k WD{ŝ_p}.
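The automatic pairing by minimum IF MSE can be sketched with synthetic IF trajectories; the trajectories, permutation, and noise level below are hypothetical stand-ins for the WD-based estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 3
n = np.arange(N)

# Hypothetical IF trajectories (in frequency bins) of the original components
k_o = np.stack([20 + 0.1 * n, 60 - 0.05 * n, 90 + 0.0 * n])

# Extracted components come back in arbitrary order; emulate that with a
# random permutation plus a small estimation error
perm = rng.permutation(P)
k_e = k_o[perm] + rng.normal(0.0, 0.2, size=(P, N))

# For each original component, pick the extracted one with minimum IF MSE
pairing = np.empty(P, dtype=int)
for p in range(P):
    mse = np.mean((k_o[p][None, :] - k_e) ** 2, axis=1)
    pairing[p] = np.argmin(mse)
# pairing[p] now indexes the extracted component matching the original s_p
```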
It should also be noted that, in Examples 1–3, in order to avoid IF estimation errors at the edges of the components (since they are characterized by time-varying amplitudes), the IF estimation is based only on the WD auto-term segments larger than 10% of the maximum absolute value of the WD corresponding to the given component (auto-term); that is,
k̂_p^o(n) = k_p^o(n) for |WD_p^o(n, k)| ≥ T_WD^o, and k̂_p^o(n) = 0 for |WD_p^o(n, k)| < T_WD^o,
where T_WD^o = 0.1 max{|WD_p^o(n, k)|} is a threshold used to determine whether a component is present at the considered instant n: a WD value smaller than 10% of the maximum indicates that the component is not present.
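The gating can be sketched on a synthetic auto-term: a single WD ridge along a hypothetical linear IF with a Gaussian amplitude profile (all values below are illustrative, not from the examples). The 10% threshold zeroes the IF estimate near the component edges, where the amplitude has decayed.

```python
import numpy as np

N, K = 256, 128
n = np.arange(N)

# Hypothetical auto-term: one WD ridge along a linear IF, with a Gaussian
# amplitude profile decaying toward the component edges
k_true = np.round(20 + 0.3 * n).astype(int)
a = np.exp(-((n - N / 2) / 64.0) ** 2)
WD = np.zeros((N, K))
WD[n, k_true] = a

# IF estimate by peak position, gated by the threshold T = 0.1 max|WD|
T = 0.1 * np.abs(WD).max()
k_est = np.argmax(np.abs(WD), axis=1)
khat = np.where(np.abs(WD[n, k_est]) >= T, k_est, 0)
```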

Examples

Example 1.
To evaluate the presented theory, we consider a general form of a multicomponent signal consisting of P non-stationary components:
x^(c)(n) = Σ_{p=1}^{P} A_p exp(−n²/L_p²) exp(j n³/N² + j2π(f_p/N)n² + j2π(ϕ_p/N)n + jϑ_c) + ε^(c)(n),      (58)
for −128 ≤ n ≤ 128 and N = 257. The phases ϑ_c, c = 1, 2, …, C, are random numbers drawn from a uniform distribution on the interval [−π, π]. The signal is available in the multivariate form x(n) = [x^(1)(n), x^(2)(n), …, x^(C)(n)]^T and consists of C channels, since it is embedded in complex-valued, zero-mean noise ε^(c)(n) whose real and imaginary parts follow the normal distribution N(0, σ_ε²). The noise variance is σ_ε², whereas A_p = 1.2. Parameters f_p and ϕ_p are FM parameters, while L_p defines the effective width of the Gaussian amplitude modulation of each component.
We generate a signal of the form (58) with P = 6 components, whereas the noise standard deviation is σ_ε = 1. The respective number of channels is C = 128. The corresponding autocorrelation matrix, R, is calculated according to (20), and the presented decomposition approach is used to extract the components. The eigenvalues of matrix R are given in Figure 2a. The six largest eigenvalues correspond to signal components, and they are clearly separable from the remaining eigenvalues, which correspond to the noise. The WD and the spectrogram of the given signal (from one of the channels) are shown in Figure 2b,c, indicating that the signal is not suitable for classical TF analysis, since the components highly overlap.
Each eigenvector of the matrix R is a linear combination of the components, as shown in Figure 3. The presented decomposition approach is applied to extract the components by linearly combining the eigenvectors from Figure 3. The results are shown in Figure 4a–f. Although a small residual noise is present in the extracted components, they closely match the original components presented in Figure 4g–l. The original components in Figure 4g–l are not corrupted by noise.
As a measure of quality, we use MSE_p given by (56), the error between the IF estimate based on the pth extracted signal component (shown in Figure 4a–f) and the IF estimate calculated from the WD of the original, noise-free component (from Figure 4g–l). The IF estimates and the corresponding MSEs are presented for each pair of components in Figure 5, for a noise standard deviation of σ_ε = 1 and C = 128 channels.
Since MSE_p given by (56) serves as a measure of the component extraction quality, we evaluate the decomposition performance for various standard deviations of the noise, σ_ε ∈ {0.1, 0.4, 0.7, 1.0, 1.3, 1.9, 2.1}. The results are presented in Table 1. The presented MSEs are calculated by averaging the results obtained from 10 realizations of the multichannel signal of the form (58), with random phases ϑ_c, c = 1, 2, …, C, corrupted by random realizations of the noise ε^(c)(n), for each observed standard deviation of the noise. Based on the results in Table 1, it can be concluded that each signal component is successfully extracted for noise with standard deviation up to σ_ε = 1.3. For stronger noise, only some components are successfully extracted. It should be noted that the performance of the algorithm also depends on the number of channels, C. For the results in Table 1, the number of channels was set to C = 256. A larger value of C increases the probability of successful decomposition, as investigated in [31].
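A scaled-down sketch of the eigenvalue-based component counting on a signal of the form (58) is given below. This is illustrative, not the example's code: the parameters (P = 3, C = 32, N = 128, the f_p, ϕ_p, L_p values, and σ_ε = 0.5) are assumptions, each component carries an independent random phase in each channel (the channel diversity that makes the mixtures linearly independent), and the threshold is taken relative to the largest eigenvalue rather than from the noise variance.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, P = 128, 32, 3
n = np.arange(N) - N // 2

# Components of the form (58): A_p = 1.2, illustrative FM parameters
f = [0.15, 0.35, 0.60]
phi = [5.0, 20.0, 40.0]
L = [30.0, 40.0, 50.0]
s = [1.2 * np.exp(-n**2 / L[p]**2)
         * np.exp(1j * (n**3 / N**2 + 2 * np.pi * f[p] / N * n**2
                        + 2 * np.pi * phi[p] / N * n))
     for p in range(P)]

# Multichannel mixture: per-component, per-channel random phases plus
# complex white Gaussian noise with standard deviation 0.5 per part
theta = rng.uniform(-np.pi, np.pi, size=(C, P))
noise = 0.5 * (rng.standard_normal((C, N)) + 1j * rng.standard_normal((C, N)))
X = np.exp(1j * theta) @ np.array(s) + noise

R = X.conj().T @ X                                 # input autocorrelation matrix
eig = np.sort(np.linalg.eigvalsh(R))[::-1]          # descending, real (Hermitian R)
num_components = int(np.sum(eig > 0.1 * eig[0]))    # count above the spectral gap
```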
Example 2.
The decomposition algorithm is tested on a more complex signal of the form (58), with P = 8 components, whereas the standard deviation of the noise is now σ_ε = 0.1. The number of channels is C = 128. After the input autocorrelation matrix, R, is calculated according to (20), the eigendecomposition produces the eigenvalues given in Figure 6a. The signal components overlap in the time-frequency domain, and therefore the corresponding Wigner distribution and spectrogram, shown in Figure 6b,c, cannot be used as adequate tools for their analysis. Figure 7 indicates that the components are not visible in the time-frequency representation of any eigenvector corresponding to the largest eigenvalues either. This is in accordance with the fact that the eigenvectors contain the signal components in the form of linear combinations. Upon applying the presented multivariate decomposition procedure to this set of eigenvectors, we obtain the results presented in Figure 8. Comparing these results with the Wigner distributions of the individual, noise-free components comprising the considered multicomponent signal, shown in Figure 9, it can be concluded that the components are successfully extracted with preserved integrity. This is additionally confirmed by the IF estimation results shown in Figure 10, where the even lower MSE values for each component can be explained by the lower noise level compared with the previous example.
Example 3.
To illustrate the applicability of the presented approach to the decomposition of components with faster or progressive frequency variations over time, we observe a signal consisting of P = 6 components, three of which have polynomial frequency modulations, as the components in model (58), whereas the other three have frequency modulations of a sinusoidal nature. The first three components are defined as:
s_1^(c)(n) = exp(−(n/128)²) exp(j40.5 cos(2.34πn/N) + j10πn/N + jϑ_c),
s_2^(c)(n) = exp(j16 sin(9πn/N) + jϑ_c) for n ≥ 0, and s_2^(c)(n) = exp(j85.33 sin(2π(n + 128)/N) + jϑ_c) for n < 0,
s_3^(c)(n) = exp(−(n/128)²) exp(j30.5 sin(5.47πn/N) + j10πn/N + jϑ_c).
The remaining components have polynomial frequency modulation, as in the previous examples:
s_p^(c)(n) = A_p exp(−n²/L_p²) exp(j n³/N² + j2π(f_p/N)n² + j2π(ϕ_p/N)n + jϑ_c),
for p = 4, 5, 6. Again, the signal is defined for discrete indices −128 ≤ n ≤ 128 with N = 257, and the phases ϑ_c, c = 1, 2, …, C, are random numbers drawn from a uniform distribution on the interval [−π, π]. The resulting multicomponent signal is formed in the cth channel as:
x^(c)(n) = Σ_{p=1}^{6} s_p^(c)(n) + ε^(c)(n),
and is, as in the previous examples, embedded in additive, white, complex-valued Gaussian noise, now with standard deviation σ_ε = 1. The number of channels is C = 256. The eigenvalues of the autocorrelation matrix R, the WD, and the spectrogram are given in Figure 11, again showing that the considered signal, with heavily overlapped components, cannot be analyzed with these tools. The eigenvectors corresponding to the six largest eigenvalues are given in Figure 12. The extracted and original components can be visually compared in Figure 13, again demonstrating the efficiency of the approach, even for components with faster-varying frequency content. This is additionally confirmed by the IF estimation results in Figure 14. The larger estimation errors in the presence of faster sinusoidal frequency modulations are related to the poorer concentration of the WD in these cases [3].
Example 4.
In this example, we consider the dispersive environment setup described in Section 2.2, with the transmitter located in the water at depth z_t. However, to obtain a multivariate signal, instead of one sensor, C = 25 sensors are placed at depth z_r, comprising the receiver, at distances r + δ_c, c = 1, 2, …, C, from the transmitter. Moreover, the mutual sensor distances are negligible compared with their distance from the transmitter, r = 2000 m; that is, δ_c ≪ r. This further implies that the range direction remains unchanged in our model. As the response to a monochromatic signal s(n) = exp(jω_0 n), the linear combination of modes s_p^(c)(n) = A_t(p, ω_0) exp(jω_0 n − jk_c(p, ω_0)r) is received at sensor c:
x^(c)(n) = Σ_{p=1}^{P} s_p^(c)(n) = Σ_{p=1}^{P} A_t(p, ω_0) exp(jω_0 n − jk_c(p, ω_0)r),
where c = 1, 2, …, C, and the wavenumbers are modeled as [55]
k_c²(p, ω) = (ω/c)² − ((p − 0.5)π/D + ϑ_c)²,
with D = 20 m, where ϑ_c is a random variable drawn from a uniform distribution on the interval [−0.25, 0.25], corresponding to depth variations of ±0.25 cm and modeling channel depth changes caused by surface waves or an uneven seabed. The speed of sound propagation underwater is c = 1500 m/s. The same results are obtained in this example for a more precise speed, c = 1480 m/s. The received multichannel signal is of the form
x(n) = [x^(1)(n), x^(2)(n), …, x^(C)(n)]^T.
Upon performing the eigenvalue decomposition of the autocorrelation matrix R, the eigenvalues shown in Figure 15a are obtained. The Wigner distribution of the received signal is shown in Figure 15b, with very close and partially overlapped modes. The Wigner distributions of the individual eigenvectors are shown in Figure 16a–e. The presented procedure for the decomposition of multicomponent signals successfully extracted the individual acoustic modes, as presented in Figure 17a–e. Such separated acoustic modes can be further analyzed; for example, their IFs can be estimated and characterized.
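The wavenumber model and the multichannel mode mixture of this example can be sketched as follows. This is an illustrative sketch under stated assumptions: the carrier frequency f₀ = 300 Hz, the normalized digital frequency w0 used for the time factor, the unit mode amplitudes, and P = 5 retained modes are all hypothetical choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
c_snd, D, r = 1500.0, 20.0, 2000.0        # sound speed (m/s), depth (m), range (m)
omega0 = 2 * np.pi * 300.0                # hypothetical carrier (rad/s)
C, P = 25, 5                              # sensors (channels) and modes kept

# Wavenumber model: k_c^2(p, w) = (w/c)^2 - ((p - 0.5)*pi/D + theta_c)^2
theta = rng.uniform(-0.25, 0.25, size=C)  # per-sensor depth perturbation
p = np.arange(1, P + 1)
k2 = (omega0 / c_snd) ** 2 - ((p[None, :] - 0.5) * np.pi / D + theta[:, None]) ** 2
k = np.sqrt(k2)                           # all P kept modes are propagating here

# Multichannel response to a monochromatic excitation exp(j w0 n):
# a sum of modes with range-dependent phase shifts (unit amplitudes assumed)
n = np.arange(128)
w0 = 0.2 * np.pi                          # normalized digital frequency (assumed)
X = np.exp(1j * w0 * n[None, None, :] - 1j * k[:, :, None] * r).sum(axis=1)

# With theta = 0, a mode propagates only while (w/c)^2 > ((p - 0.5)*pi/D)^2;
# count how many modes this ideal 20 m channel supports at 300 Hz
modes = np.arange(1, 40)
n_prop = int(np.sum((omega0 / c_snd) ** 2 > ((modes - 0.5) * np.pi / D) ** 2))
```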

6. Discussion

Decomposition of non-stationary multicomponent signals has been a long-standing, challenging topic in time-frequency signal analysis [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]. Although the decomposition of non-overlapping components can be performed using the S-method's relations with the WD [26], this approach cannot be applied when the components partially overlap, i.e., share the same domain of support in the time-frequency plane.
Other alternative methodologies are specialized for specific signal classes and are efficient in the case of partially overlapped components [20,25,27]. In this sense, chirplet- and Radon-transform-based decomposition is applicable to linear frequency-modulated signals [20,25]. The inverse Radon transform has produced excellent results in the separation of micro-Doppler signatures appearing in radar signal processing, characterized by sinusoidal frequency modulation and periodicity [27]. However, outside the scope of their predefined signal models, these techniques are inefficient in separating non-stationary signals characterized by different laws of non-stationarity. Another very popular concept, the EMD, has also been applied to multivariate data [39,40,41,42,43]. However, successful EMD-based multicomponent signal decomposition is possible only for signals whose components do not overlap in the TF plane. Amplitude variations of the components pose an additional challenge to EMD-based decomposition. The efficiency of the proposed method does not depend on the considered frequency range, but only on the ability of a time-frequency representation to concentrate signal components in the time-frequency plane. We use the STFT in the concentration measure (31) due to its ability to concentrate signal energy at the instantaneous frequencies of the individual signal components. The decomposition approach studied in this paper successfully extracts components that are highly overlapped in the time-frequency plane, and the method is not sensitive to the extent of this overlap.
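The premise behind the concentration measure can be verified numerically: with equal (normalized) energy, a single component yields a smaller ℓ₁-norm of STFT coefficients than a mixture. The sketch below uses a non-overlapping rectangular-window STFT and two hypothetical LFM components, both assumptions of this illustration.

```python
import numpy as np

def measure(x, w=32):
    # l1-norm of a rectangular-window STFT of the energy-normalized signal
    x = x / np.linalg.norm(x)
    return np.sum(np.abs(np.fft.fft(x.reshape(-1, w), axis=1)))

n = np.arange(256)
s1 = np.exp(1j * 2 * np.pi * (0.05 * n + 0.0004 * n**2))   # LFM component
s2 = np.exp(1j * 2 * np.pi * (0.48 * n - 0.0004 * n**2))   # second LFM component

m_single, m_mix = measure(s1), measure(s1 + s2)
# With disjoint TF supports, the mixture's measure is roughly sqrt(2) times
# larger, which is why minimizing the measure isolates pure components.
```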
Since the modes appearing in the considered acoustic dispersive environment framework are characterized by a non-linear (and non-sinusoidal) law of frequency variations and have a partially overlapped support, neither of the mentioned univariate techniques can produce acceptable decomposition results.

7. Conclusions

Characterization of modes in the acoustic dispersive environment is an ongoing research topic. As the modes are non-stationary and appear in a multicomponent form in the received signals, their separation (extraction) has been a challenging task. In this paper, we have shown that the modes can be successfully extracted using a multivariate decomposition technique that exploits the eigenanalysis of the autocorrelation matrix of the received signal. This method, which utilizes concentration measures calculated from time-frequency representations, separates the modes while completely preserving their integrity, thus opening the possibility for their individual analysis. IF estimates based on the extracted components were highly accurate, even at high noise levels. The results indicate that the efficiency of the method increases with a larger number of sensors (channels). Our future work will be oriented towards the analysis of the separated components. Instantaneous frequency estimation techniques developed within the time-frequency signal analysis field can be applied directly to the separated modes, providing new insights and tools for the analysis of dispersive channels.

Author Contributions

Conceptualization, M.B. and I.S.; methodology, M.B. and I.S.; validation, M.B.; writing—original draft preparation, M.B. and I.S.; writing—review and editing, J.L., C.I., E.Z. and M.D.; visualization, I.S.; supervision, C.I. and M.D.; project administration, J.L.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by COST action CA17137—a network for gravitational waves, geophysics, and machine learning.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boashash, B. Time-Frequency Signal Analysis and Processing—A Comprehensive Reference; Elsevier Science: Oxford, UK, 2003.
  2. Flandrin, P. Time-Frequency/Time-Scale Analysis; Academic Press: San Diego, CA, USA, 1998; Volume 10.
  3. Stanković, L.; Daković, M.; Thayaparan, T. Time-Frequency Signal Analysis with Applications; Artech House: Norwood, MA, USA, 2013.
  4. Ouahabi, A. Signal and Image Multiresolution Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2012.
  5. Boashash, B. Estimating and interpreting the instantaneous frequency of a signal. I. Fundamentals. Proc. IEEE 1992, 80, 520–538.
  6. Stanković, S.; Orović, I.; Sejdić, E. Multimedia Signals and Systems: Basic and Advanced Algorithms for Signal Processing; Springer: New York, NY, USA, 2015.
  7. Akan, A.; Cura, O.K. Time–frequency signal processing: Today and future. Digit. Signal Process. 2021, 103216.
  8. Hussain, Z.M.; Boashash, B. Adaptive instantaneous frequency estimation of multicomponent FM signals using quadratic time-frequency distributions. IEEE Trans. Signal Process. 2002, 50, 1866–1876.
  9. Shui, P.L.; Shang, H.Y.; Zhao, Y.B. Instantaneous frequency estimation based on directionally smoothed pseudo-Wigner-Ville distribution bank. IET Radar Sonar Navig. 2007, 1, 317–325.
  10. Lerga, J.; Sucic, V. An Instantaneous Frequency Estimation Method Based on the Improved Sliding Pair-Wise ICI Rule. In Proceedings of the 10th International Conference on Information Science, Signal Processing and Their Applications ISSPA, Kuala Lumpur, Malaysia, 10–13 May 2010.
  11. Lerga, J.; Sucic, V. Nonlinear IF Estimation Based on the Pseudo WVD Adapted Using the Improved Sliding Pairwise ICI Rule. IEEE Signal Process. Lett. 2009, 16, 953–956.
  12. Barkat, B.; Boashash, B. Instantaneous frequency estimation of polynomial FM signals using the peak of the PWVD: Statistical performance in the presence of additive Gaussian noise. IEEE Trans. Signal Process. 1999, 47, 2480–2490.
  13. Sekhar, S.C.; Sreenivas, T.V. Auditory motivated level-crossing approach to instantaneous frequency estimation. IEEE Trans. Signal Process. 2005, 53, 1450–1462.
  14. Lerga, J.; Sucic, V.; Boashash, B. Multicomponent Noisy Signal Adaptive Instantaneous Frequency Estimation Using Components Time Support Information. IET Signal Process. 2014, 8, 277–284.
  15. Lerga, J.; Sucic, V.; Boashash, B. An Efficient Algorithm for Instantaneous Frequency Estimation of Nonstationary Multicomponent Signals in Low SNR. EURASIP J. Adv. Signal Process. 2011, 2011, 16.
  16. Katkovnik, V.; Stanković, L. Instantaneous frequency estimation using the Wigner distribution with varying and data driven window length. IEEE Trans. Signal Process. 1998, 46, 2315–2325.
  17. Ivanović, V.N.; Daković, M.; Stanković, L. Performance of Quadratic Time-Frequency Distributions as Instantaneous Frequency Estimators. IEEE Trans. Signal Process. 2003, 51, 77–89.
  18. Stanković, L. A measure of some time–frequency distributions concentration. Signal Process. 2001, 81, 621–631.
  19. Stanković, L. A method for time-frequency signal analysis. IEEE Trans. Signal Process. 1994, 42, 225–229.
  20. Lopez-Risueno, G.; Grajal, J.; Yeste-Ojeda, O. Atomic decomposition-based radar complex signal interception. IEE Proc. Radar Sonar Navig. 2003, 150, 323–331.
  21. Wei, Y.; Tan, S. Signal decomposition by the S-method with general window functions. Signal Process. 2012, 92, 288–293.
  22. Yang, Y.; Dong, X.; Peng, Z.; Zhang, W.; Meng, G. Component extraction for non-stationary multicomponent signal using parameterized de-chirping and band-pass filter. IEEE Signal Process. Lett. 2015, 22, 1373–1377.
  23. Wang, Y.; Jiang, Y. ISAR Imaging of Maneuvering Target Based on the L-Class of Fourth-Order Complex-Lag PWVD. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1518–1527.
  24. Orović, I.; Stanković, S.; Draganić, A. Time-Frequency Analysis and Singular Value Decomposition Applied to the Highly Multicomponent Musical Signals. Acta Acust. United Acust. 2014, 100, 93–101.
  25. Wood, J.C.; Barry, D.T. Radon transformation of time-frequency distributions for analysis of multicomponent signals. IEEE Trans. Signal Process. 1994, 42, 3166–3177.
  26. Stanković, L.; Thayaparan, T.; Daković, M. Signal Decomposition by Using the S-Method with Application to the Analysis of HF Radar Signals in Sea-Clutter. IEEE Trans. Signal Process. 2006, 54, 4332–4342.
  27. Stanković, L.; Daković, M.; Thayaparan, T.; Popović-Bugarin, V. Inverse Radon Transform Based Micro-Doppler Analysis from a Reduced Set of Observations. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 1155–1169.
  28. Stanković, L.; Mandic, D.; Daković, M.; Brajović, M. Time-frequency decomposition of multivariate multicomponent signals. Signal Process. 2018, 142, 468–479.
  29. Stanković, L.; Brajović, M.; Daković, M.; Mandic, D. Two-component Bivariate Signal Decomposition Based on Time-Frequency Analysis. In Proceedings of the 22nd International Conference on Digital Signal Processing IEEE DSP, London, UK, 23–25 August 2017.
  30. Brajović, M.; Stanković, L.; Daković, M.; Mandic, D. Additive Noise Influence on the Bivariate Two-Component Signal Decomposition. In Proceedings of the 7th Mediterranean Conference on Embedded Computing, MECO, Budva, Montenegro, 11–14 June 2018.
  31. Stanković, L.; Brajović, M.; Daković, M.; Mandic, D. On the Decomposition of Multichannel Nonstationary Multicomponent Signals. Signal Process. 2020, 167, 107261.
  32. Brajović, M.; Stanković, I.; Daković, M.; Mandic, D.; Stanković, L. On the Number of Channels in Multicomponent Nonstationary Noisy Signal Decomposition. In Proceedings of the 5th International Conference on Information Technology (IT 2021), Zabljak, Montenegro, 16–20 February 2021.
  33. Brajović, M.; Stanković, L.; Daković, M. Decomposition of Multichannel Multicomponent Nonstationary Signals by Combining the Eigenvectors of Autocorrelation Matrix Using Genetic Algorithm. Digit. Signal Process. 2020, 102.
  34. Brajović, M.; Stanković, I.; Stanković, L.; Daković, M. Decomposition of Two-Component Multivariate Signals with Overlapped Domains of Support. In Proceedings of the 11th Int'l Symposium on Image and Signal Processing and Analysis (ISPA 2019), Dubrovnik, Croatia, 23–25 September 2019.
  35. Ahrabian, A.; Looney, D.; Stanković, L.; Mandic, D. Synchrosqueezing-Based Time-Frequency Analysis of Multivariate Data. Signal Process. 2015, 106, 331–341.
  36. Lilly, J.M.; Olhede, S.C. Analysis of Modulated Multivariate Oscillations. IEEE Trans. Signal Process. 2012, 60, 600–612.
  37. Omidvarnia, A.; Boashash, B.; Azemi, G.; Colditz, P.; Vanhatalo, S. Generalised phase synchrony within multivariate signals: An emerging concept in time-frequency analysis. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 3417–3420.
  38. Lilly, J.M.; Olhede, S.C. Bivariate Instantaneous Frequency and Bandwidth. IEEE Trans. Signal Process. 2010, 58, 591–603.
  39. Mandic, D.P.; Rehman, N.U.; Wu, Z.; Huang, N.E. Empirical Mode Decomposition-Based Time-Frequency Analysis of Multivariate Signals: The Power of Adaptive Data Analysis. IEEE Signal Process. Mag. 2013, 30, 74–86.
  40. Abdullah, S.M.U.; Rehman, N.U.; Khan, M.M.; Mandic, D.P. A Multivariate Empirical Mode Decomposition Based Approach to Pansharpening. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3974–3984.
  41. Hemakom, A.; Ahrabian, A.; Looney, D.; Rehman, N.U.; Mandic, D.P. Nonuniformly sampled trivariate empirical mode decomposition. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, Australia, 19–24 April 2015; pp. 3691–3695.
  42. Wang, G.; Teng, C.; Li, K.; Zhang, Z.; Yan, X. The Removal of EOG Artifacts From EEG Signals Using Independent Component Analysis and Multivariate Empirical Mode Decomposition. IEEE J. Biomed. Health Inform. 2016, 20, 1301–1308.
  43. Tavildar, S.; Ashrafi, A. Application of multivariate empirical mode decomposition and canonical correlation analysis for EEG motion artifact removal. In Proceedings of the 2016 Conference on Advances in Signal Processing (CASP), Pune, India, 9–11 June 2016; pp. 150–154.
  44. Omidvarnia, A.; Azemi, G.; Colditz, P.B.; Boashash, B. A time-frequency based approach for generalized phase synchrony assessment in nonstationary multivariate signals. Digit. Signal Process. 2013, 23, 780–790.
  45. Cobos, M.; López, J.J. Stereo audio source separation based on time–frequency masking and multilevel thresholding. Digit. Signal Process. 2008, 18, 960–976.
  46. Belouchrani, A.; Abed-Meraim, K.; Cardoso, J.; Moulines, E. A blind source separation technique using second-order statistics. IEEE Trans. Signal Process. 1997, 45, 434–444.
  47. Belouchrani, A.; Amin, M.G. Blind source separation based on time-frequency signal representations. IEEE Trans. Signal Process. 1998, 46, 2888–2897.
  48. Aissa-El-Bey, A.; Linh-Trung, N.; Abed-Meraim, K.; Belouchrani, A.; Grenier, Y. Underdetermined Blind Separation of Nondisjoint Sources in the Time-Frequency Domain. IEEE Trans. Signal Process. 2007, 55, 897–907.
  49. Liu, S.; Yu, K. Successive multivariate variational mode decomposition based on instantaneous linear mixing model. Signal Process. 2021, 190, 108311.
  50. Sadhu, A.; Sony, S.; Friesen, P. Evaluation of progressive damage in structures using tensor decomposition-based wavelet analysis. J. Vib. Control. 2019, 25, 2595–2610.
  51. Labat, V.V.; Remenieras, J.P.; Matar, O.B.; Ouahabi, A.; Patat, F. Harmonic propagation of finite amplitude sound beams: Experimental determination of the nonlinearity parameter B/A. Ultrasonics 2000, 38, 292–296.
  52. Girault, J.M.; Kouamé, D.; Ouahabi, A.; Patat, F. Estimation of the blood Doppler frequency shift by a time-varying parametric approach. Ultrasonics 2000, 38, 682–697.
  53. Zhang, Y.; Amin, M.G.; Obeidat, B.A. Polarimetric Array Processing for Nonstationary Signals. In Adaptive Antenna Arrays: Trends and Applications; Chandran, S., Ed.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 205–218.
  52. Girault, J.M.; Kouamé, D.; Ouahabi, A.; Patat, F. Estimation of the blood Doppler frequency shift by a time-varying parametric approach. Ultrasonics 2000, 38, 682–697. [Google Scholar] [CrossRef] [Green Version]
  53. Zhang, Y.; Amin, M.G.; Obeidat, B.A. Polarimetric Array Processing for Nonstationary Signals. In Adaptive Antenna Arrays: Trends and Applications; Chandran, S., Ed.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 205–218. [Google Scholar]
  54. Su, Z.; Ye, L. Processing of Lamb Wave Signals. In Identification of Damage Using Lamb Waves; Lecture Notes in Applied and Computational Mechanics; Springer: London, UK, 2009; Volume 48. [Google Scholar]
  55. Ioana, C.; Jarrot, A.; Gervaise, C.; Stèphan, Y.; Quinquis, A. Localization in under water dispersive channels using the time-frequency-phase continuity of signals. IEEE Trans. Signal Process. 2010, 58, 4093–4107. [Google Scholar] [CrossRef] [Green Version]
  56. Zhang, J.J.; Papandreou-Suppappola, A.; Gottin, B.; Ioana, C. Time-frequency characterization and receiver waveform design for shallow water environments. IEEE Trans. Signal Process. 2009, 57, 2973–2985. [Google Scholar] [CrossRef] [Green Version]
  57. Tolstoy, I.; Clay, C.S. Ocean Acoustics; McGraw-Hill: New York, NY, USA, 1966; Volume 293. [Google Scholar]
  58. Westwood, E.K.; Tindle, C.T.; Chapman, N.R. A normal mode model for acousto- elastic ocean environments. J. Acoust. Soc. Am. 1996, 100, 3631–3645. [Google Scholar] [CrossRef]
  59. Jensen, F.B.; Kuperman, W.A.; Porter, M.B.; Schmidt, H. Computational Ocean Acoustics; Springer Science & Business Media: New York, NY, USA, 2011. [Google Scholar]
  60. Kuperman, W.A.; Lynch, J.F. Shallow-water acoustics. Phys. Today 2004, 57, 55–61. [Google Scholar] [CrossRef]
  61. Stojanovic, M. Underwater acoustic communication. In Wiley Encyclopedia of Electrical and Electronics Engineering; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2001. [Google Scholar]
  62. Stojanovic, M.; Preisig, J. Underwater acoustic communication channels: Propaga- tion models and statistical characterization. IEEE Commun. Mag. 2009, 47, 84–89. [Google Scholar] [CrossRef]
  63. Ioana, C.; Josso, N.; Gervaise, C.; Mars, J.; Stèphan, Y. Signal analysis approach for passive tomography: Applications for dispersive channels and moving configuration. In Proceedings of the 3rd International Conference and Exhibition on Underwater Acoustic Measurements: Technologies and Results, Napflion, Greece, 21–26 June 2009. [Google Scholar]
  64. de Sousa Costa, E.; Medeiros, E.B.; Filardi, J.B.C. Underwater Acoustics Modeling in Finite Depth Shallow Waters. In Modeling and Measurement Methods for Acoustic Waves and for Acoustic Microdevices; IntechOpen: London, UK, 2013; Available online: https://www.intechopen.com/chapters/45579 (accessed on 10 May 2021).
  65. Frisk, G.V. Ocean and Seabed Acoustics: A Theory of Wave Propagation; Pearson Education: London, UK, 1994. [Google Scholar]
  66. Zhang, J.; Papandreou-Suppappola, A. Time-frequency based waveform and receiver design for shallow water communications. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07, Honolulu, HI, USA, 15–20 April 2007; Volume 3, p. III-1149. [Google Scholar]
  67. Jiang, Y.; Papandreou-Suppappola, A. Discrete time-frequency characterizations of dispersive linear time-varying systems. IEEE Trans. Signal Process. 2007, 55, 2066–2076. [Google Scholar] [CrossRef]
Figure 1. The considered underwater isovelocity setup. Water depth is D, transmitter depth is z_t, the receiver is positioned at depth z_r, and the transmitter–receiver range is r.
Figure 2. (a) Eigenvalues of the autocorrelation matrix R, (b) Wigner distribution of the signal from Example 1, and (c) spectrogram of the signal from Example 1. The signal consists of P = 6 non-stationary components and is embedded in intensive complex zero-mean Gaussian noise with σ_ε = 1. The number of channels is C = 128. The largest six eigenvalues correspond to the signal components.
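The eigenanalysis step behind Figure 2 can be sketched in a few lines. The snippet below builds a synthetic C-channel mixture as a stand-in for the multivariate signal model (the component waveforms, the channel-dependent random phases, and the noise level are illustrative assumptions, not the paper's exact signals), forms a channel-averaged autocorrelation matrix R, and shows that its P largest eigenvalues separate from the noise floor, as in panel (a).

```python
import numpy as np

rng = np.random.default_rng(0)
C, N, P = 128, 64, 6  # channels, samples per channel, components

# Hypothetical multichannel mixture: every channel receives all P
# components, each with a channel-dependent random phase, plus
# complex zero-mean Gaussian noise.
t = np.arange(N) / N
comps = [np.exp(2j * np.pi * 4 * (p + 1) * t) for p in range(P)]
X = np.zeros((C, N), dtype=complex)
for c in range(C):
    for s in comps:
        X[c] += np.exp(1j * rng.uniform(0, 2 * np.pi)) * s
X += rng.standard_normal((C, N)) + 1j * rng.standard_normal((C, N))

# Channel-averaged autocorrelation matrix R (N x N, Hermitian)
R = X.conj().T @ X / C

# Eigendecomposition of the Hermitian matrix; sort eigenvalues
# in decreasing order so the signal subspace comes first.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The P largest eigenvalues stand out from the noise floor; the
# corresponding eigenvectors span the signal components.
print(np.round(eigvals[:P + 2], 1))
```

Because the random phases decorrelate the components across channels, the cross-terms average out in R, which is what makes the P signal eigenvalues dominate.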
Figure 3. Time-frequency representations of the eigenvectors corresponding to the largest six eigenvalues of the autocorrelation matrix R of the signal from Example 1. Each eigenvector represents a linear combination of non-stationary components with polynomial frequency modulation. Panels (a–f) show the Wigner distribution of each eigenvector.
Figure 4. Extracted and original signal components of the non-stationary multicomponent multichannel signal considered in Example 1. Panels (a–f) show Wigner distributions of the components extracted using the proposed approach, whereas panels (g–l) show Wigner distributions calculated for individual components of the original, noise-free signal.
Figure 5. Instantaneous frequency estimation for individual signal components based on the extracted signal components (dashed black) and the original signal components (solid white). The MSE between the two IF estimates is provided for each component of the signal from Example 1. The noise variance is σ_ε² = 1. Decomposition is based on C = 128 channels.
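Figures 5, 10 and 14 compare IF estimates obtained from the extracted and the original components. One standard way to obtain such an estimate, shown here as an illustrative sketch rather than the authors' exact estimator, is to take the argmax of a short-time Fourier magnitude at each time instant; the window length and the test chirp below are assumptions.

```python
import numpy as np

def if_estimate(x, win_len=32):
    """Estimate the instantaneous frequency of a mono-component
    signal as the argmax of an STFT magnitude at each instant.
    (Sketch; window type/length and the hop of 1 are assumptions.)"""
    n = len(x)
    win = np.hanning(win_len)
    half = win_len // 2
    xp = np.pad(x, (half, half))  # zero-pad so every instant has a window
    ifreq = np.empty(n)
    for m in range(n):
        seg = xp[m:m + win_len] * win
        k = int(np.argmax(np.abs(np.fft.fft(seg))))
        # map FFT bin index to signed normalized frequency in [-0.5, 0.5)
        ifreq[m] = (k - win_len) / win_len if k >= half else k / win_len
    return ifreq

# Usage: a linear chirp whose true IF sweeps 0.1 -> 0.3 (normalized)
n = 256
t = np.arange(n)
x = np.exp(1j * 2 * np.pi * (0.1 * t + 0.1 * t**2 / n))
est = if_estimate(x)
```

The bin-argmax estimate is quantized to 1/win_len in frequency, which is one reason the MSEs in the figures are reported per component rather than per sample.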
Figure 6. (a) Eigenvalues of the autocorrelation matrix R, (b) Wigner distribution of the signal from Example 2, and (c) spectrogram of the signal from Example 2. The signal consists of P = 8 non-stationary components and is embedded in intensive complex zero-mean Gaussian noise with σ_ε = 1. The number of channels is C = 128. The largest eight eigenvalues correspond to the signal components.
Figure 7. (a–h) Time-frequency representations of the eigenvectors corresponding to the largest eight eigenvalues of the autocorrelation matrix R of the signal from Example 2. Each eigenvector represents a linear combination of non-stationary components with polynomial frequency modulation.
Figure 8. (a–h) Extracted signal components of the non-stationary multicomponent multichannel signal considered in Example 2. The decomposition is performed using the presented multivariate approach. The number of components is P = 8.
Figure 9. (a–h) Original signal components of the non-stationary multicomponent multichannel signal considered in Example 2. Wigner distributions are calculated for each individual, noise-free component.
Figure 10. Instantaneous frequency estimation for individual signal components based on the extracted signal components (dashed black) and the original signal components (solid white). The MSE between the two IF estimates is provided for each component of the signal from Example 2. The noise variance is σ_ε² = 0.1. Decomposition is based on C = 128 channels.
Figure 11. (a) Eigenvalues of the autocorrelation matrix R, (b) Wigner distribution of the signal from Example 3, and (c) spectrogram of the signal from Example 3. The signal consists of P = 6 non-stationary components and is embedded in intensive complex zero-mean Gaussian noise with σ_ε = 1. The number of channels is C = 256. The largest six eigenvalues correspond to the signal components.
Figure 12. (a–f) Time-frequency representations of the eigenvectors corresponding to the largest six eigenvalues of the autocorrelation matrix R of the signal from Example 3. Each eigenvector represents a linear combination of non-stationary components with polynomial frequency modulation.
Figure 13. Extracted and original signal components of the non-stationary multicomponent multichannel signal considered in Example 3. Panels (a–f) present the components extracted using the proposed approach, whereas panels (g–l) show Wigner distributions calculated for individual components of the original, noise-free signal.
Figure 14. Instantaneous frequency estimation for individual signal components based on the extracted signal components (dashed black) and the original signal components (solid white). The MSE between the two IF estimates is provided for each component of the signal from Example 3. The noise variance is σ_ε² = 1. Decomposition is based on C = 256 channels.
Figure 15. (a) Eigenvalues of the autocorrelation matrix R, (b) Wigner distribution, and (c) spectrogram of the considered acoustic signal from the dispersive environment. The number of sensors is C = 25, whereas the number of modes is P = 5.
Figure 16. (a–e) Time-frequency representations of the eigenvectors corresponding to the largest eigenvalues of the autocorrelation matrix R. Each eigenvector represents a linear combination of the acoustic modes of the signal received from the dispersive environment.
Figure 17. (a–e) Extracted modes of the signal received from the dispersive acoustic environment. The decomposition is performed using the presented multivariate approach.
Table 1. Mean squared errors (MSEs) between IF estimations based on extracted and original components, for the signal from Example 1 with P = 6 components. MSE_p, p = 1, 2, …, 6, corresponds to the p-th component. The results are presented for various values of the noise standard deviation σ_ε, averaged over 10 random realizations of signals with random phases and noise for each considered value of σ_ε.
σ_ε | MSE_1     | MSE_2     | MSE_3     | MSE_4     | MSE_5     | MSE_6
0.1 | −20.89 dB | −16.12 dB | −18.67 dB | −15.66 dB | −11.86 dB | −22.65 dB
0.4 | −16.63 dB | −14.52 dB | −11.19 dB | −12.44 dB | −10.22 dB | −17.21 dB
0.7 | −20.89 dB | −12.23 dB | −12.04 dB | −12.04 dB | −9.13 dB  | −15.66 dB
1.0 | −13.62 dB | −10.89 dB | −7.27 dB  | −10.22 dB | −6.21 dB  | −12.04 dB
1.3 | −12.65 dB | −9.86 dB  | −7.46 dB  | −9.53 dB  | −3.99 dB  | −13.62 dB
1.6 | −9.63 dB  | 16.67 dB  | 35.04 dB  | −10.74 dB | 27.28 dB  | 36.01 dB
1.9 | −9.75 dB  | 32.50 dB  | 39.85 dB  | −12.87 dB | 30.28 dB  | 34.61 dB
2.1 | −9.86 dB  | −7.33 dB  | −8.42 dB  | −9.64 dB  | 7.03 dB   | −7.46 dB
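The table entries are MSEs expressed in decibels. A minimal sketch of this metric, assuming the standard 10·log10 conversion of the mean squared IF-estimation error (the paper's exact normalization is not restated here):

```python
import numpy as np

def mse_db(if_extracted, if_original):
    """MSE between two IF estimates, expressed in dB as in Table 1.
    (Sketch; assumes 10*log10 of the plain mean squared error.)"""
    a = np.asarray(if_extracted, dtype=float)
    b = np.asarray(if_original, dtype=float)
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(mse)

# Usage with hypothetical IF values (normalized frequencies)
extracted = np.array([0.10, 0.20, 0.30])
original = np.array([0.11, 0.19, 0.30])
print(round(mse_db(extracted, original), 2))
```

On this scale, more negative values indicate better agreement between the extracted and original components; the positive entries at σ_ε ≥ 1.6 mark realizations where decomposition of some components failed.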
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Brajović, M.; Stanković, I.; Lerga, J.; Ioana, C.; Zdravevski, E.; Daković, M. Multivariate Decomposition of Acoustic Signals in Dispersive Channels. Mathematics 2021, 9, 2796. https://doi.org/10.3390/math9212796
