
Information Complexity of Time-Frequency Distributions of Signals in Detection and Classification Problems

Institute of Control Sciences of RAS, 117997 Moscow, Russia
* Author to whom correspondence should be addressed.
Entropy 2025, 27(10), 998; https://doi.org/10.3390/e27100998
Submission received: 15 August 2025 / Revised: 11 September 2025 / Accepted: 24 September 2025 / Published: 24 September 2025
(This article belongs to the Special Issue Complexity, Entropy and the Physics of Information, 2nd Edition)

Abstract

The paper considers the problem of detecting and classifying acoustic signals based on information (entropy) criteria. A number of new information features based on time-frequency distributions are proposed, including the spectrogram and its refined version, the reassigned spectrogram. To verify the proposed characteristics, modeling on synthetic signals and numerical verification of the solution of the multiclass classification problem based on machine learning methods on real hydroacoustic recordings are carried out. The high classification quality obtained (F1 = 0.95) demonstrates the advantages of the proposed characteristics.

1. Introduction

The history of signal detection and classification problems is tied to the development of processing methods whose main mathematical apparatus is signal transformations that map a time series from the time domain into another signal space. The best-known frequency transformation is the Fourier transform, which has become de facto synonymous with frequency decomposition of signals due to its mathematical validity and algorithmic efficiency. However, a strong limitation of the Fourier transform, and of any other purely frequency transformation, is its inability to track changes in the frequency components of a signal over time. Time-frequency transformations solve this problem; the most popular of them is again the windowed (short-time) Fourier transform and its squared magnitude, the spectrogram, which is the simplest and most efficient time-frequency transformation. Cohen [1] introduced and described a general class of such transformations, whose special cases include the Wigner–Ville, Rihaczek, and Choi–Williams transformations. In particular, a method based on the Wigner–Ville transformation has become very popular in radar processing due to its convenience for resolving linear frequency modulation (LFM) signals.
A comprehensive description of time-frequency transformations is contained in the monograph by Boualem Boashash [2]. This monograph, along with a number of the author's other works [3,4,5], investigates the properties of the Rényi entropy for estimating the number of components of multicomponent signals and gives examples of using this estimate as a feature in signal classification problems. An original entropy-based method for estimating the number of signal components was also proposed by the present authors in [6].
Since the basis of a time-frequency distribution (TFD) is the kernel function generated by the reference signal, many authors compare TFDs with various kernel functions on different signals, as illustrated in the review [7].
For the case of an unknown reference signal, refs. [8,9] consider three variants of the distance between TFDs, based on the Kullback–Leibler divergence, the Jensen–Shannon divergence, and the Rényi cross-entropy between two distributions. A cross-entropy approach is also used in [10]. In turn, Baraniuk et al. in [11,12,13] propose a modified formula for the Jensen–Shannon divergence that takes into account the possible inconsistency of the TFD: instead of the arithmetic mean, the geometric mean of the two TFDs is used.
The problems of detecting moving targets are considered in [14,15], where the entropy of the Wigner–Ville time-frequency distribution is calculated and the detection decision is made by thresholding an information metric obtained with the constant false alarm rate (CFAR) algorithm, an adaptive algorithm often used in radar to detect targets under noise and interference.
The work [16] explores the use of entropy in the problem of voice activity detection (VAD) for speech signal processing.
Entropic approaches to the processing of medical electroencephalogram (EEG) signals are investigated in [17], and in [18], the entropy from the Stockwell time-frequency transform (also known as the S-transform) is used to classify and detect heart valve pathology from electrocardiography (ECG) recordings.
In the articles [19,20], the entropy of TFD is used to solve problems of processing technical industrial signals related to monitoring the condition of production equipment.
The current work is devoted to the study of the concept of information complexity of time-frequency distributions in the problem of signal detection and classification. The novelty of the proposed approach is as follows:
  • introduction of the concept of the TFD information complexity;
  • using Rényi entropy to calculate the information complexity of two-dimensional probability distributions;
  • application of the proposed information characteristics to the classification problem of acoustic signals.
The article consists of five main sections plus an Introduction and Conclusions. Section 2 briefly formulates the classification problem for time series of acoustic origin. Section 3, Section 4 and Section 5 introduce the necessary mathematical representations of time-frequency distributions, investigate some of their previously unknown properties, and propose entropy and information criteria based on statistical complexity of various natures, as well as different ways of calculating discrete distributions. Section 6 is devoted to a classification experiment on real natural and technical recordings using machine learning methods. The Conclusions (Section 7) analyze the main results of the work and outline plans for further research.

2. Statement of Signal Classification Problem

The problem of multiclass signal classification is based on a training dataset X_L = {(x_i, y_i)}_{i=1}^L ⊂ X × Y consisting of a set of objects {x_i}_{i=1}^L ⊂ X, where each object x_i ∈ R^{2N} represents a studied signal of length 2N, and labels {y_i}_{i=1}^L ⊂ Y such that y_i ∈ {1, 2, …, K}, where K is the number of classes (K = 2 corresponds to binary classification, i.e., a detection problem in which it is necessary to determine the presence or absence of a useful signal in a signal–noise mixture).
It is required to construct a mapping (a parametric model) a(x, β): X × A → Y with a vector β ∈ A of model parameters that approximates the real unknown dependence f(x): X → Y under some criterion of empirical risk minimization,
Γ(β, x_i) → min_β,
where Γ(β, x_i) is the loss function, which measures the deviation of a(x, β) from the correct answer.
The next step is the formation of a feature space. Each entry x i can be matched with basic signal characteristics: statistical features of a normalized/whitened signal, and a discrete Fourier spectrum
D F T [ k ] = n = 1 2 N x i [ n ] e i 2 π n k / 2 N ,
or the corresponding power spectrum
F [ k ] = | D F T [ k ] | 2 ,
as well as various information characteristics based on two-dimensional time-frequency distributions, presented in the following sections of the article. The purpose of the study is to examine the suitability of these characteristics for solving the signal classification problem.
In various fields of physics and biology, the concepts of a useful signal, noise, and signal–noise mixture are defined independently. In hydroacoustics, which is the main application area of the current study, environmental noise is random in nature, while the useful signal is determined by the operation of ship mechanisms and is deterministic but unknown; most importantly, the signal-to-noise ratio is available only for indirect measurement.

3. Time-Frequency Distributions

3.1. Spectrogram and Wigner–Ville Distribution

Despite the limited time-frequency resolution, the simplicity of determining the spectrogram makes it one of the most popular time-frequency distributions, both for initial comparative theoretical analysis and as a reference in practical applications [2].
The short-term continuous Fourier transform of the signal x ( t ) is written as follows:
S T F T h ( t , ω ) = R x ( τ ) h ( τ t ) e i ω τ d τ ,
where h ( t ) is some window function. The spectral power density, or the spectrogram S, corresponds to the square of the S T F T value
S ( t , ω ) = | S T F T h ( t , ω ) | 2 .
The short-term discrete Fourier transform of a signal x with a window function h of length 2 N is defined by the following expression:
STFT_h[n, k] = ∑_{l = 2Nn}^{2N(n+1) − 1} x[l] h[l − 2Nn] e^{−i 2π k l / 2N},
where 2N is the length of one window processed with the DFT and n is the sequential number of the window. Consequently, the discrete spectrogram is defined as
S [ n , k ] = | S T F T h [ n , k ] | 2 , n = 1 , , M , k = 1 , , N ,
where M is the total number of windows, and N is the total number of frequencies, taking into account the symmetry of the spectrum.
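As an illustration, a minimal NumPy sketch of the discrete spectrogram of (6) and (7) is given below. The Hann window and the hop length follow the settings later used in Section 6.3; overlapping windows are an implementation choice of this sketch, whereas (6) is written for contiguous windows.

import numpy as np

def spectrogram(x, win_len=512, hop=128, window=None):
    # Discrete spectrogram, Eqs. (6)-(7): squared magnitude of the DFT of
    # successive windowed frames; only the first half of the bins is kept,
    # taking into account the symmetry of the spectrum of a real signal.
    if window is None:
        window = np.hanning(win_len)   # Hann window, as in Section 6.3
    n_frames = (len(x) - win_len) // hop + 1
    S = np.empty((n_frames, win_len // 2))
    for n in range(n_frames):
        frame = x[n * hop : n * hop + win_len] * window
        S[n] = np.abs(np.fft.rfft(frame)[: win_len // 2]) ** 2
    return S   # shape (M windows, N frequencies)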
The Wigner–Ville distribution is a prototype of distributions that differ qualitatively from the spectrogram. Exploring its strengths and weaknesses has become one of the main directions in the development of this field. Wigner was aware of the existence of other joint densities but chose the one that now bears his name "because it seems to be the simplest." The Wigner distribution was introduced into signal analysis by Ville [1] about 15 years after Wigner's paper; Ville argued for its plausibility and derived it using a method based on characteristic functions.
The Wigner–Ville distribution W x ( t , ω ) for the signal x ( t ) is written as follows:
W_x(t, ω) = ∫_R e^{−iτω} x*(t − τ/2) x(t + τ/2) dτ.
The Wigner distribution is considered bilinear in the signal, since the signal enters its calculation twice.
Remark 1.
W_x(t, ω) always has regions of negative values for any signal, with one exception: the LFM chirp signal with Gaussian amplitude. The reason is that for this signal the Wigner distribution belongs to the class of positive distributions.
Remark 2.
There is a connection between the spectrogram and the Wigner–Ville distribution through the window function. This relation connects two bilinear distributions, namely
S(t, ω) = (1/2π) ∬_{R²} W_x(t′, ω′) W_h(t − t′, ω − ω′) dt′ dω′,
where W_h(t, ω) is the Wigner–Ville transform of the window function h(t).

3.2. Reassigned Spectrogram

An important consequence of formula (9) is that the value the spectrogram takes at each point (t, ω) is the result of summing the values of W_x over a certain time-frequency region. In other words, S(t, ω) is a number assigned to the geometric center of the region under consideration. This is like assigning the total mass of an object to its geometric center, an arbitrary point that, with rare exceptions, has no reason to correspond to the actual mass distribution.
A much more sensible choice is to assign the total mass to the center of gravity, and it is precisely this approach that corresponds to the Reassignment principle [2]:
  • at each point ( t , ω ) , where the value of the spectrogram is defined, two values are also calculated
    t̂(t, ω) = (1/S(t, ω)) ∬_{R²} t′ W_x(t′, ω′) W_h(t − t′, ω − ω′) dt′ dω′/(2π),
    ω̂(t, ω) = (1/S(t, ω)) ∬_{R²} ω′ W_x(t′, ω′) W_h(t − t′, ω − ω′) dt′ dω′/(2π),
    which define the local centers of mass of W_x seen through the window W_h centered at (t, ω);
  • then the value of the spectrogram is moved from the point ( t , ω ) to this centroid ( t ^ ( t , ω ) , ω ^ ( t , ω ) ) , which allows us to determine the reassigned spectrogram R ( t , ω ) as follows:
    R(t, ω) = ∬_{R²} S(t′, ω′) δ(t − t̂(t′, ω′)) δ(ω − ω̂(t′, ω′)) dt′ dω′,
    where δ is the Dirac delta function. In other words, the value S(t*, ω*) of the spectrogram at each point (t*, ω*) is assigned a new position on the time-frequency plane, namely, it is moved to the point (t̂(t*, ω*), ω̂(t*, ω*)).
In practice, according to [21], to calculate the centroid ( t ^ ( t , ω ) , ω ^ ( t , ω ) ) , one can use a more efficient procedure based on STFT:
t̂(t, ω) = t + Re[STFT_{th}(t, ω)/STFT_h(t, ω)], ω̂(t, ω) = ω − Im[STFT_{dh}(t, ω)/STFT_h(t, ω)],
where dh = dh(t)/dt is the derivative of the window function h(t) used, and th = t · h(t).
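A minimal sketch of this STFT-based reassignment procedure is shown below; the thresholding of near-empty bins and the rounding of centroids to the nearest grid node are implementation assumptions of this illustration.

import numpy as np

def reassigned_spectrogram(x, win_len=512, hop=128):
    # Sketch of the reassignment rule (12): each spectrogram value is moved
    # to the centroid estimated from ratios of STFTs computed with the
    # windows h, t*h and dh/dt.
    t_ax = np.arange(win_len) - win_len // 2
    h = np.hanning(win_len)
    th = t_ax * h                       # window t*h(t)
    dh = np.gradient(h)                 # discrete approximation of dh/dt
    n_frames = (len(x) - win_len) // hop + 1
    n_bins = win_len // 2
    R = np.zeros((n_frames, n_bins))
    for n in range(n_frames):
        frame = x[n * hop : n * hop + win_len]
        F_h = np.fft.rfft(frame * h)[:n_bins]
        F_th = np.fft.rfft(frame * th)[:n_bins]
        F_dh = np.fft.rfft(frame * dh)[:n_bins]
        S = np.abs(F_h) ** 2
        ok = S > 1e-12 * S.max()        # skip near-empty bins
        # t_hat = t + Re(F_th/F_h) (samples), w_hat = w - Im(F_dh/F_h) (rad/sample)
        dt = np.real(F_th[ok] / F_h[ok]) / hop
        dw = np.imag(F_dh[ok] / F_h[ok]) * win_len / (2 * np.pi)
        t_hat = np.clip(np.round(n + dt).astype(int), 0, n_frames - 1)
        w_hat = np.clip(np.round(np.arange(n_bins)[ok] - dw).astype(int), 0, n_bins - 1)
        np.add.at(R, (t_hat, w_hat), S[ok])   # move the energy to the centroids
    return R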

4. Entropy of Time-Frequency Distributions

4.1. Classical Information Criteria and Discrete Distributions

In previous articles by the authors, the normalized Shannon entropy was studied, calculated from the squared amplitude spectral distribution F[k] according to (3):
H(P_F) = −(1/log₂ N) ∑_{i=1}^N P_F[i] log₂ P_F[i],
where P_F[i] = F[i]/∑_{k=1}^N F[k], i = 1, …, N, is the normalized discrete distribution, so that ∑_{i=1}^N P_F[i] = 1.
When calculating the sum (13), it is assumed that 0 log 2 0 = 0 by continuity, and this assumption is valid for all subsequent equations.
This measure of entropy evaluates the uniformity of the distribution of signal energy in the frequency domain. High spectral entropy means a more uniform distribution of signal energy, while low entropy means less uniformity. Spectral entropy can be used to discriminate a narrowband signal from a broadband one, for example, to distinguish a tonal signal from white noise; however, it cannot distinguish two broadband signals, for example, an FM signal and noise.
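A minimal sketch of the normalized spectral entropy (13), with illustrative test signals showing the tone/noise discrimination just described, might look as follows.

import numpy as np

def normalized_shannon_entropy(F):
    # Eq. (13): Shannon entropy of the normalized power spectrum, divided by
    # log2(N) so that H lies in [0, 1]; 0*log2(0) is taken as 0 by continuity.
    P = F / F.sum()
    nz = P > 0
    return -np.sum(P[nz] * np.log2(P[nz])) / np.log2(len(F))

rng = np.random.default_rng(0)
N = 8192
tone = np.abs(np.fft.rfft(np.sin(0.1 * np.arange(2 * N)))[:N]) ** 2
noise = np.abs(np.fft.rfft(rng.standard_normal(2 * N))[:N]) ** 2
print(normalized_shannon_entropy(tone))    # low: energy in one narrow peak
print(normalized_shannon_entropy(noise))   # close to 1: energy spread uniformly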
In fact, time-frequency distributions concentrate energy for frequency-modulated signals in the same way that the Fourier transform concentrates energy for harmonic components. Thus, time-frequency extensions of entropy measures can distinguish between two different classes of broadband signals, while the energy of one class of signals can be evenly distributed in the time-frequency domain, for example, in the case of white noise, and the energy of the second class is concentrated in a certain area of the time-frequency plane, for example, in the case of a FM signal [2].
Time-frequency distributions that are positive over the entire detection range and normalized to the total energy can be used to calculate information characteristics of a signal. In this case, the density of the discrete distribution is determined by the formula
P S [ n , k ] = S [ n , k ] n = 1 M k = 1 N S [ n , k ] ,
where the index S is associated with the spectrogram (7). In the case of the reassigned spectrogram (11), the index R and the notation P_R will be used for the discrete distribution later in the text.
The Shannon entropy of the time-frequency distribution is an extension of the spectral entropy, and can be obtained from the spectral entropy by replacing the Fourier transform of the signal with the time-frequency transform in Equation (13), and then by replacing the one-dimensional summation with a two-dimensional one. In the discrete case, it can be defined as
H(P_S) = −∑_{n=1}^M ∑_{k=1}^N P_S[n, k] log₂ P_S[n, k].
However, most time-frequency transformations do not have the property of positivity, so researchers prefer to use the Rényi entropy as an entropy measure
H^(α)(P_S) = (1/(1 − α)) log₂ ∑_{n=1}^M ∑_{k=1}^N P_S^α[n, k],
where the index (α) defines the order of the Rényi entropy (α > 0, α ≠ 1), usually α > 1. As the parameter α tends to one, the Rényi entropy converges to the Shannon entropy.
According to [5], the value of the short-term Rényi entropy for the spectrogram of a monocomponent harmonic signal is
H^(α)(P_S) = log₂(Δt) + log₂((1/(√(2π) σ_w)) α^(1/(2(1−α)))).
Equation (17) shows that the local entropy of a time slice of a single-component signal spectrogram depends on the duration of the interval Δt, the standard deviation σ_w of the spectrogram window w(t), and the order α of the Rényi entropy. The local Rényi entropy decreases with increasing parameter σ_w. Large values of the entropy order emphasize the peaked character of the spectrogram and therefore reduce entropy. Note also that the magnitude of the second term in Equation (17) decreases with increasing entropy order α, since lim_{α→∞} α^(1/(2(1−α))) = 1. For a two-component signal, one can similarly obtain
H^(α)(P_S) ≈ log₂(Δt) + log₂((1/(√(2π) σ_w)) α^(1/(2(1−α)))) + 1.
Hence follows the counting property of the Rényi entropy: entropy depends directly on the number of harmonic components in the signal, which makes it a powerful feature for estimating their number.
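The counting property is easy to check numerically. The sketch below (with an inline spectrogram helper and illustrative frequencies and durations) compares the Rényi entropy (16) of one- and two-component signals; for well-separated equal-power components the difference should be close to one bit.

import numpy as np

def spec(x, w=512, hop=128):
    # inline helper: discrete spectrogram (7)
    win = np.hanning(w)
    frames = np.array([x[i : i + w] * win for i in range(0, len(x) - w + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)[:, : w // 2]) ** 2

def renyi_entropy(P, alpha=3.0):
    # Eq. (16) after normalizing P to a discrete distribution
    P = P / P.sum()
    return np.log2(np.sum(P ** alpha)) / (1.0 - alpha)

fs = 8192
t = np.arange(4 * fs) / fs
one = np.sin(2 * np.pi * 1000 * t)
two = one + np.sin(2 * np.pi * 2500 * t)
print(renyi_entropy(spec(two)) - renyi_entropy(spec(one)))   # close to 1 bit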

4.2. New Information Criteria and Discrete Distributions

Another way to determine the discrete density is to normalize the columns of the matrix S [ n , k ] to the total energy of one signal processing window, as shown below:
P_S^C[n, k] = (1/M) S[n, k] / ∑_{k′=1}^N S[n, k′].
Let us set the properties of the matrix P S C .
Remark 3.
P S C calculated by the formula (19) is a right stochastic matrix by construction if M = N .
By the ergodic theorem, for a regular stochastic matrix there exists a vector Π = (π₁, …, π_M) with ∑_{n=1}^M π_n = 1 such that
Ω(P_S^C) = lim_{j→∞} (P_S^C)^j = 1^T (π₁, …, π_M),
where 1 is the vector of ones.
Remark 4.
The matrix P S C j is a right stochastic matrix. Therefore, operations for calculating information characteristics, in particular the Shannon and Rényi entropy, are defined for this matrix.
Remark 5.
The Shannon and Rényi entropies can be calculated from the matrix Ω and the vector Π. It turns out that
H(Ω) = log₂ M + H(Π), H^(α)(Ω) = log₂ M + H^(α)(Π).
At the same time, the maximum value of the Rényi entropy is H^(α)(Ω) = 2 log₂ M.
It should be noted that, on average, the matrix elements satisfy P_S^C[n, k] ≈ P_S[n, k] for all n, k = 1, …, M.
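A sketch illustrating the construction (19), the limit (20) obtained here by matrix powers, and the entropy relation of Remark 5 is given below; a random positive matrix is used as an assumed stand-in for a spectrogram.

import numpy as np

def column_normalized_tfd(S):
    # Eq. (19): every window of the spectrogram is normalized by its own
    # energy and by the number of windows M, so the whole matrix sums to one.
    return S / (S.shape[0] * S.sum(axis=1, keepdims=True))

rng = np.random.default_rng(1)
M = 64
S = rng.random((M, M)) + 0.1            # random positive stand-in, M = N
P_sc = column_normalized_tfd(S)
T = M * P_sc                            # right stochastic matrix (Remark 3)
Pi = np.linalg.matrix_power(T, 200)[0]  # rows of T^j converge to Pi (Eq. (20))
Omega = np.outer(np.ones(M), Pi) / M    # limit matrix, normalized to sum to one
H = lambda P: -np.sum(P * np.log2(P))
print(H(Omega), np.log2(M) + H(Pi))     # the two values agree (Remark 5)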

5. Complexity of Time-Frequency Distributions

In our earlier works [22,23], a characteristic called statistical complexity was investigated. The concepts of the disequilibrium function D and the statistical complexity C of a discrete probability distribution P were first introduced in [24]:
C(P) = H(P) · D(P, Q),
where the disequilibrium D ( P , Q ) determines the distance between the signal distribution P and noise distribution Q and H ( P ) is the entropy of the signal distribution P.
The simplest example of a disequilibrium function is the Euclidean distance in the space of discrete probability distributions, which is convenient when evaluating and comparing against noise with the uniform spectral distribution Q = (1/N, …, 1/N):
D_SQ(P) = ∑_{i=1}^N (P[i] − 1/N)² = ∑_{i=1}^N P[i]² − 1/N.
The statistical complexity defined through the disequilibrium expression (21) has the form
C_SQ(P) = −(1/log₂ N) ∑_{i=1}^N P[i] log₂ P[i] · ∑_{i=1}^N (P[i] − 1/N)².
Jensen–Shannon disequilibrium and corresponding statistical complexity are defined as
D J S D ( P ) = J S D ( P , Q ) , C J S D ( P ) = H ( P ) · D J S D ( P ) ,
where J S D ( P , Q ) is Jensen–Shannon divergence
J S D ( P , Q ) = H ( m ) 1 2 ( H ( P ) + H ( Q ) ) , m = P + Q 2 .
The disequilibrium with total variation and corresponding statistical complexity are defined as
D_TV(P) = TV²(P, Q) = (1/4)(∑_{i=1}^N |P[i] − 1/N|)², C_TV(P) = −(1/(4 log₂ N)) ∑_{i=1}^N P[i] log₂ P[i] · (∑_{i=1}^N |P[i] − 1/N|)².
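For reference, the disequilibria and complexities above can be computed as in the following sketch; the uniform reference Q and the function names are conventions of this illustration.

import numpy as np

def shannon_H(P):
    nz = P > 0
    return -np.sum(P[nz] * np.log2(P[nz]))

def complexities(P):
    N = P.size
    Q = np.full(N, 1.0 / N)                     # uniform noise reference
    H_norm = shannon_H(P) / np.log2(N)          # normalized Shannon entropy
    D_sq = np.sum((P - Q) ** 2)                 # Euclidean disequilibrium
    m = (P + Q) / 2
    D_jsd = shannon_H(m) - (shannon_H(P) + shannon_H(Q)) / 2   # Jensen-Shannon
    D_tv = np.sum(np.abs(P - Q)) ** 2 / 4       # squared total variation
    return H_norm * D_sq, H_norm * D_jsd, H_norm * D_tv   # C_SQ, C_JSD, C_TV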
Thus, transitioning from frequency distributions to time-frequency distributions naturally leads us to introduce and define a complexity function for them. Obvious candidates as a disequilibrium function for time-frequency distributions are as follows:
  • Kullback–Leibler divergence D K L ( P , Q ) ;
  • Rényi divergence D ( α ) ( P , Q ) (can be considered a generalization of D K L ( P , Q ) , since when α tends to 1, it becomes D K L ( P , Q ) );
  • Jensen–Shannon divergence for Rényi entropy J ( α ) ( P , Q ) ;
  • Euclidean distance D S Q ( P , Q ) ;
  • Total variation distance D_TV(P, Q).
While the last two functions are trivial in terms of the transition to two-dimensional distributions, the first items of the list require some comments and are discussed in more detail below.

5.1. Rényi Divergence

The Rényi divergence between two time-frequency probability distributions P and Q has the form
D ( α ) ( P , Q ) = 1 α 1 log 2 P α ( t , f ) Q 1 α ( t , f ) d t d f
and it becomes the Kullback–Leibler divergence as α tends to 1.
When Q = (1/(NM), …, 1/(NM)), one can write
D^(α)(P, Q) = (1/(α − 1)) log₂ [(1/(NM))^(1−α) ∬ P^α(t, f) dt df] = log₂(NM) − H^(α)(P).
A symmetrized divergence is often used; it is relatively straightforward to calculate and takes the following form:
D^(α)(P, Q) + D^(α)(Q, P) = (1/(1 − α)) [log₂(NM) − α H^(1−α)(P) − (1 − α) H^(α)(P)].
In the current work, such a distance is not applicable as a disequilibrium, since in practice α > 1 is used.

5.2. Jensen–Shannon Divergence for Time-Frequency Distributions

Richard Baraniuk et al. in their articles [11,12,13] introduced an analog of the Jensen–Shannon divergence for TFDs,
J^(α)(P, Q) := H^(α)(√(P ∘ Q)) − (H^(α)(P) + H^(α)(Q))/2,
where √(P ∘ Q)(t, ω) := √(P(t, ω) · Q(t, ω)) and "∘" denotes element-wise multiplication of the distributions. Let us now explore the properties of the function (29).
Lemma 1.
It is valid for any discrete distributions P and Q for α > 1
J ( α ) ( P , Q ) 0 ,
and equality is possible only if P = Q .
Proof. 
To prove the lemma, we need to evaluate the expression
(∑_i a_i b_i)² / (∑_i a_i² ∑_i b_i²) ≤ 1,
which is the squared Cauchy–Bunyakovsky inequality, where a_i and b_i are the i-th elements of the distributions P and Q raised to the power α/2. Taking the logarithm of this inequality and dividing by 1 − α < 0 yields the statement of the lemma. □
Lemma 2.
Let P = ‖p_ij‖, i = 1, …, N, j = 1, …, M, and Q = P + δρ be discrete densities, where δρ = ‖δρ_ij‖ with |δρ_ij| ≪ 1 and |δρ_ij|/p_ij ≪ 1. Then
J^(α)(P, Q) = −(1/4) (∂H^(α)/∂p_ij)(δρ_ij²/p_ij) + o(δρ²),
with summation over the repeated indices i, j.
Proof. 
Let us expand the function of many variables H^(α)(P) into a Taylor series up to the third term in the neighborhood of the matrix P. Here and below, summation over repeated indices is implied. We get
H^(α)(Q) = H^(α)(P) + (∂H^(α)/∂p_ij) δρ_ij + (1/2)(∂²H^(α)/∂p_ij∂p_kl) δρ_ij δρ_kl + o(δρ²).
In turn, expanding the function H^(α)(√(P ∘ Q)) into a Taylor series in the vicinity of the same point gives
H^(α)(√(P ∘ Q)) = H^(α)(P) + (1/2)(∂H^(α)/∂p_ij) δρ_ij − (1/(4 p_ij))(∂H^(α)/∂p_ij)(δρ_ij)² + (1/4)(∂²H^(α)/∂p_ij∂p_kl) δρ_ij δρ_kl + o(δρ²).
Subtracting the half-sum of the expansions of H^(α)(P) and H^(α)(Q) from the last expression gives the statement of the lemma. □
Let us consider an elementary example to illustrate Lemma 2.
Example 1.
Let us choose
N = 3, M = 1, α = 2, P = (1/3, 1/3, 1/3), Q = (1/4, 1/4, 1/2), δρ = (−1/12, −1/12, 1/6).
Then P ∘ Q = (1/12, 1/12, 1/6). Calculating the entropies and the divergence gives
H^(2)(P) = log₂ 3, H^(2)(Q) = log₂ 8 − log₂ 3, H^(2)(√(P ∘ Q)) = log₂ 3, J^(2)(P, Q) = log₂ 3 − (1/2) log₂ 8 = ln 3/ln 2 − 3/2 ≈ 0.0849.
On the other hand, according to Lemma 2 we have ∂H^(2)/∂p_i = −2/ln 2 for every i, and
J^(2)(P, Q) ≈ (2/(4 ln 2)) (3(1/12)² + 3(1/12)² + 3(1/6)²) = 1/(16 ln 2) ≈ 0.0902.
As one can see, the head-on calculation of J^(2)(P, Q) by formula (29) and the Lemma 2 approximation give close values.
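The head-on calculation is easy to reproduce; a minimal sketch of formula (29) with the geometric mean, assuming the distributions are given as arrays, is:

import numpy as np

def renyi_H(P, alpha):
    return np.log2(np.sum(P ** alpha)) / (1 - alpha)

def jensen_renyi(P, Q, alpha):
    # Eq. (29) with the geometric mean sqrt(P*Q) of the two distributions
    G = np.sqrt(P * Q)
    return renyi_H(G, alpha) - (renyi_H(P, alpha) + renyi_H(Q, alpha)) / 2

P = np.array([1 / 3, 1 / 3, 1 / 3])
Q = np.array([1 / 4, 1 / 4, 1 / 2])
print(jensen_renyi(P, Q, alpha=2))   # log2(3) - 3/2 ≈ 0.0849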

5.3. New Information Characteristics

Summing up the discussion of the divergences of TFDs from the previous sections, we propose the following signal characteristics for investigation in the signal classification problem:
  • Related to Shannon entropy:
    H(P) = −(1/log₂ N) ∑_i P[i] log₂ P[i], C_SQ(P) = H(P) · ∑_i (P[i] − Q[i])², C_JSD(P) = H(P) · JSD(P, Q), C_TV(P) = (1/4) H(P) · (∑_i |P[i] − Q[i]|)².
  • Related to Rényi entropy:
    H^(α)(P) = (1/log₂ N) (1/(1 − α)) log₂ ∑_i P^α[i], C_SQ^(α)(P) = H^(α)(P) · ∑_i (P[i] − Q[i])², C_JSD^(α)(P) = H^(α)(P) · JSD(P, Q), C_{J^(α)}^(α)(P) = H^(α)(P) · J^(α)(P, Q), C_TV^(α)(P) = (1/4) H^(α)(P) · (∑_i |P[i] − Q[i]|)².
Moreover, the discrete distributions P can be calculated using different supports and have different indexes accordingly:
  • P F for spectrum (3) one-dimensional discrete distribution;
  • P S for spectrogram (7) two-dimensional discrete distribution;
  • P R for reassigned spectrogram (11) two-dimensional discrete distribution.
Remark 6.
Summation in (36) and (37) is performed:
  • over i = 1, …, N for one-dimensional discrete distributions;
  • over pairs of indices i = [n, k] for two-dimensional discrete distributions.
Thus, systems (36) and (37), combined with the different discrete distributions, yield a significant number of information characteristics that can be used as classification features; they are explored in the next section.

6. Modeling

This section is devoted to verifying the theoretical results presented in the previous sections and the applicability of the proposed information characteristics to detection and classification problems, in numerical experiments with both model signals and more complex real-world recordings.

6.1. Model Signal Description

To illustrate the obtained theoretical results, the following types of model signals are used:
  • harmonic signals;
  • linearly frequency-modulated chirp signals (LFM chirp signals);
  • model signals of marine vessels.
The harmonic signal has the form
x ( t ) = n = 0 K 1 A n sin ( 2 π ( f 0 + n Δ f ) t ) , t [ 0 , T ]
and consists of the sum of K harmonic components with amplitudes A_n and frequencies f₀ + nΔf, where f₀ and Δf are the constant fundamental frequency and the step between frequencies, respectively. The sampling frequency and window size are selected so that the harmonic components are not smeared in the resulting spectrum.
The LFM chirp signal is described by the following equation:
x ( t ) = sin ϕ 0 + 2 π c t 2 2 + f 0 t , t [ 0 , T ]
with an initial phase of ϕ 0 and represents a signal with frequency varying according to the following linear law:
f ( t ) = c t + f 0 , t [ 0 , T ] ,
where f₀ is the initial frequency at time t = 0. The sweep rate c is determined by the difference between the frequencies f₀ and f₁ at the initial and final time moments, respectively:
c = (f₁ − f₀)/T.
The signal simulating the acoustic radiation of a marine vessel is modeled according to [25] as
x(t) = (1 + ∑_{n=1}^K A_n sin(2π n f₀ t)) w_c(t) + w_e(t),
where f 0 is the shaft frequency, i.e., the rotation frequency of the propeller shaft;
w c ( t ) , w e ( t ) are the cavitation noise of the propellers and the noise of the marine environment, respectively;
K is the number of harmonic components with the fundamental frequency f 0 forming the signal;
A n are the corresponding amplitudes of each component.
Marine environment noise w_e(t) is modeled as white Gaussian noise with parameters chosen to satisfy a predefined signal-to-noise ratio (SNR). The noise of the shaft and propeller rotation is modulated at the shaft rotation frequency f₀ and at the blade frequency m f₀, equal to the product of the shaft rotation frequency and the number m of propeller blades [26]. Due to the nonlinear effects that occur during acoustic radiation, the ship's noise spectrum, as a rule, contains harmonics of the shaft and blade frequencies, forming a single tonal scale with a base equal to the shaft frequency. In some cases, the shaft frequency and its harmonics may not appear, and then the spectrum may contain only the blade scale.
Cavitation is the process of formation of discontinuities in the medium during rotation of the propeller, characterized by the appearance of vapor-gas bubbles of various sizes and concentrations in the liquid. Cavitation noise w_c(t) is modeled as white Gaussian noise in a narrow band of cavitation frequencies (from 1 kHz to 3 kHz).
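A sketch of this signal model (42) follows; the filter order, amplitude law, and default parameters are illustrative assumptions rather than the exact simulation settings.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def ship_signal(T=2.0, fs=16384, f0=10.0, K=5, snr_db=-10.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(int(T * fs)) / fs
    A = 1.0 / np.arange(1, K + 1)             # assumed decaying harmonic amplitudes
    envelope = 1 + sum(a * np.sin(2 * np.pi * n * f0 * t)
                       for n, a in enumerate(A, start=1))
    sos = butter(6, [1000, 3000], btype="bandpass", fs=fs, output="sos")
    w_c = sosfiltfilt(sos, rng.standard_normal(t.size))  # cavitation noise, 1-3 kHz
    s = envelope * w_c                        # modulated cavitation component of (42)
    w_e = rng.standard_normal(t.size)         # ambient marine noise w_e(t)
    w_e *= np.sqrt(np.sum(s ** 2) / (np.sum(w_e ** 2) * 10 ** (snr_db / 10)))
    return s + w_e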

6.2. Description of Real Signals

Three datasets corresponding to different signal types will be used as real signals:
  • Bioacoustic signals;
  • Recordings of hydroacoustic background marine noise;
  • Hydroacoustic ship signals.
Whale recordings from the dataset of the article [27] were used as bioacoustic signals. The recordings are three-second segments containing phonemes of whale sounds.
The recordings of natural background noise are taken from the QiandaoEar22 dataset [28], recorded at night in calm conditions on Qiandao Lake in China. Despite the recording conditions, the acquired acoustic signals are non-trivial in spectral content and differ greatly from, for example, synthetic white noise.
The ship recordings are sourced from the DeepShip dataset [29], the most popular hydroacoustic dataset for classification problems solved with machine learning methods. The dataset was recorded from 2016 to 2018 in Vancouver Bay. The data in this set is divided into four classes: cargo ship, tugboat, tanker, and passenger ship. The advantage of this set is that it was recorded in a marine environment in different seasons and under different conditions. Along with ship signals, the recordings contain natural background noises, sounds of marine mammals, and noises from other human activities. The distance to the objects ranges from several hundred to two thousand meters.

6.3. Statistical Experiments for Detecting Model Signals

To illustrate the analytical results of the article, we use a statistical modeling technique based on the analysis of generated numerical data, described in detail in our previous works [22,23]. All numerical results were obtained using Python 3.12 with the NumPy 1.26 and SciPy 1.16 libraries. The spectrograms are computed using the AudioFlux [30] digital signal processing library.
Let us consider pairs of data sequences corresponding to two hypotheses of signal reception:
Γ 0 : x n = w n , Γ 1 : x n = s n + w n , n = 1 , , N .
The hypothesis Γ₀ corresponds to the decision that only noise is received, and the hypothesis Γ₁ to receiving a mixture of a useful signal and noise; here {x_n}, n = 1, …, N, is the time series of received data, {s_n} is the useful signal, {w_n} is additive white Gaussian noise, and N is the length of the time series (i.e., of the frame).
To verify the quality of separation of the useful deterministic signal from noise, statistics were collected over Q = 20,000 numerically generated frames of the signal–noise mixture {x_n} of length 2N = 16,384, with a spectrum size of N = 8192, for each type of signal described in Section 6.1. Spectrograms were computed with windows of size N_W = 512, hop length N_W/4, and the Hann window function. The order of the Rényi entropy was chosen as α = 3.
In all experiments, the signal {s_n} in each pair was generated randomly. For harmonic signals, the number of components K ∈ [20, 50] and the phases were varied randomly. For LFM chirp signals, the initial and final frequencies in the window were selected randomly. The additive white Gaussian noise {w_n} was produced by a Gaussian sequence generator with mean μ = 0 and variance Σ (fixed within a single set of Q frames). The signal amplitude was selected to satisfy a predefined signal-to-noise ratio (SNR), described by the formula
SNR = 10 log₁₀(E_signal/E_noise),
where E_signal and E_noise are the total energies of the signal and noise, respectively, calculated as the sum of the spectral decomposition powers of the sequences {s_n} and {w_n}.
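In code, the amplitude scaling for a target SNR can be sketched as follows; time-domain energies are used, which by Parseval's theorem equals the sum of the spectral decomposition powers.

import numpy as np

def mix_at_snr(s, w, snr_db):
    # Eq. (44): scale the useful signal s so that the mixture s + w has the
    # prescribed SNR relative to the noise w.
    a = np.sqrt(10 ** (snr_db / 10) * np.sum(w ** 2) / np.sum(s ** 2))
    return a * s + w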
For each resulting sequence { x n } , discrete normalized frequency distributions P k and time-frequency distributions P n , k are calculated. Further, based on these distributions, the values of the information characteristics (36) and (37) are calculated for noise and a mixture of noise with a signal corresponding to the two hypotheses from the expressions (43).
The final result of the modeling and comparison of the calculated information metrics is the dependence of the binary classification quality (AUC ROC) and the detection probability Pr_d on the signal-to-noise ratio (SNR).
To obtain this dependence, for a number of noise variance values Σ corresponding to SNRs from −20 dB to 0 dB, the Q-frame statistics described above are collected and histograms of the information feature distributions are constructed; these are then used to calculate the AUC ROC values and the detection probability Pr_d. An example histogram and AUC ROC graph are shown in Figure 1.
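The AUC ROC itself can be computed directly from the two collected feature samples via the rank (Mann-Whitney) statistic, as in this sketch; the orientation-free final step is a convention of this illustration.

import numpy as np

def auc_roc(f0, f1):
    # AUC ROC equals the probability that a feature value under hypothesis
    # Gamma_1 exceeds one under Gamma_0 (the Mann-Whitney statistic),
    # computed from average ranks of the pooled sample.
    x = np.concatenate([f0, f1])
    order = x.argsort()
    ranks = np.empty(x.size)
    ranks[order] = np.arange(1, x.size + 1)
    for v in np.unique(x):                 # average ranks over ties
        tie = x == v
        ranks[tie] = ranks[tie].mean()
    n0, n1 = len(f0), len(f1)
    u1 = ranks[n0:].sum() - n1 * (n1 + 1) / 2
    auc = u1 / (n0 * n1)
    return max(auc, 1 - auc)               # orientation-free separation quality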
In all the figures presented below, only the most revealing information characteristics are retained in order to simplify the visual understanding of the graphs and get rid of unrepresentative results associated with calculation errors for low SNRs. This approach has no effect on the qualitative conclusions reached at the end of the subsection.
Figure 2 shows a comparison of the AUC ROC metric and the probability of detection P r d for harmonic signals (38) for spectral information characteristics.
Figure 3 shows a comparison between the AUC ROC metric and the detection probability Pr_d for harmonic signals, focusing on information characteristics calculated from a spectrogram. It can be seen that in the case of pure harmonic signals, the spectral characteristics perform much better than their time-frequency counterparts.
In turn, the reverse situation is observed for LFM chirp signals (39). In this case, the spectral characteristics degrade significantly even at substantial signal-to-noise ratios, as shown in Figure 4, whereas their time-frequency counterparts show good detection quality, as illustrated in Figure 5.
A similar experiment conducted for model cavitation signals of marine vessels (42) demonstrates similar detection quality for both types of information characteristics, as shown in Figure 6 and Figure 7.
The presented graphs correspond to the intuitive idea that time-frequency distributions and features based on them are able to distinguish signals whose frequency composition changes significantly over a time equal to the duration of the window under consideration.
Therefore, in the case of simple harmonic signals, the constant components have a positive effect on the spectral characteristics, since the spectral window is simply larger than the atomic windows used to calculate the spectrogram; thus the spectral features show adequate signal–noise separation quality at lower SNRs.
However, in the case of LFM chirp signals, the spectral composition changes during the observation window, i.e., the frequency transformation over the entire window can no longer adequately reflect the signal features. In the time-frequency distribution matrix, by contrast, frequencies are localized much better, and therefore features based on time-frequency transformations stand out favorably in the problem of detecting such signals.
Lastly, entropy characteristics based on the P_S^C distribution (19) are compared with the other criteria. This comparison for LFM chirp signals is illustrated in Figure 8. It can be seen that the detection quality of H^(α)(Ω(P_S^C)) (20) lies between those of the spectral and spectrogram-based characteristics, which is of research interest and may be studied in future work.

6.4. H / C Plane for Classification of Real Signals

Next, we consider the real signals from the datasets described in Section 6.2. It is worth noting that the recordings were not subjected to any pre-processing except resampling to a common sampling frequency, which is necessary for uniformity of the signal window sizes across all classes of the training dataset, as well as centering and normalization, which are standard procedures when working with acoustic signals using machine learning methods.
Entropy characteristics are associated with an effective signal classification mechanism based on analysis of the H/C plane [31,32]. Without going into the details given in the cited articles, this approach is justified by the fact that the complexity estimate provides additional insight into the details of the probability distribution of the system that are not captured by entropy alone. It can also help reveal information related to the correlation structure between the components of the studied physical process [31]. Hence, the entropy-complexity plane H/C makes it possible to explore hidden parameters of the signals [6] and can be used to classify them.
Figure 9, Figure 10 and Figure 11 show such planes constructed from signals of the classes of background marine noise, cargo ship, and bioacoustic signatures of whales. The approximate number of signal windows of each class, and accordingly of the dots of each color on the graphs, is 20,000 in this experiment.
It can be seen that while the spectral characteristics do not allow us to reliably separate the described signal classes, the time-frequency characteristics successfully cope with this task.
In addition, it is worth noting how different time-frequency distributions affect the appearance of such diagrams. Figure 12 shows the H/C planes constructed for the conventional (7) and reassigned (11) spectrograms. It can be seen that for the reassigned spectrogram the clouds of points corresponding to different classes are grouped more tightly around their centers of mass, and the distances between the classes become greater.
The demonstrated results show that information features of frequency and time-frequency distributions of signals can be used to solve the problem of signal classification, which is the subject of the next subsection.

6.5. Using Entropy Features to Classify Signals with Machine Learning Methods

To study the possibilities of classifying real signals using the proposed features and to numerically evaluate the quality of such classification, a machine learning method was used, namely the XGBoost [33] gradient boosting algorithm over decision trees, one of the most popular approaches for machine learning problems on tabular data. The results were obtained using the XGBoost library for the Python programming language.
For each class of signals, a training dataset of 20,000 signal windows with 30 features per window was calculated. It is important that the training and test datasets are separated before the feature calculation stage, i.e., it is guaranteed that the test dataset contains signals that are not present in the training dataset. The test dataset contains 5000 signal windows for each class. Four classes are considered in the experiment (a minimal training sketch follows the list):
  • natural marine background noise (Noise);
  • bioacoustic signals of whales (Whale);
  • hydroacoustic signals of a tugboat (Tug);
  • hydroacoustic signals of a passenger ship (Passenger).
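The following sketch trains such a classifier under assumed variable names; the placeholder random data stands in for the real feature tables, and all hyperparameters other than the learning rate are illustrative.

import numpy as np
import xgboost as xgb
from sklearn.metrics import f1_score, matthews_corrcoef

rng = np.random.default_rng(0)
X_train, y_train = rng.random((1000, 30)), rng.integers(0, 4, 1000)  # placeholders for
X_test, y_test = rng.random((250, 30)), rng.integers(0, 4, 250)      # the real feature tables

clf = xgb.XGBClassifier(
    objective="multi:softprob",
    learning_rate=0.001,       # small step for smooth convergence, as in the text
    n_estimators=500,          # illustrative; the text reports many more trees
    eval_metric="mlogloss",    # the loss tracked in Figure 14
)
clf.fit(X_train, y_train, eval_set=[(X_train, y_train), (X_test, y_test)], verbose=False)
y_pred = clf.predict(X_test)
print(f1_score(y_test, y_pred, average="macro"), matthews_corrcoef(y_test, y_pred))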
As a result of training the classifier, the following metrics were obtained:
Accuracy = 0.95, Macro-F1 = 0.95, MCC = 0.93,
where Macro-F1 is the F1 measure averaged over all classes and MCC is the Matthews correlation coefficient. Figure 13 shows the normalized confusion matrix of the trained model.
Figure 14 shows the training performance graph, i.e., the dependence of the loss function mlogloss (multiclass logarithmic loss) on the iteration for the training and test data. The large final number of trees is associated with the chosen learning rate lr = 0.001, which guarantees smoothness of the gradient convergence process.
Furthermore, it is interesting to examine the feature importances of the trained classifier. Figure 15 makes obvious the influence of the reassigned spectrogram distribution P_R on the final decision, since the features associated with it have the highest importance and weight in the decision-making of the obtained classifier.
Thus, it can be stated that the information features considered in the current work achieve good classification quality for acoustic signals of different natures and can be used in machine learning detection and classification systems.

7. Conclusions

The paper provides a time-frequency analysis of the problem of detecting and classifying acoustic signals based on information (entropy) criteria. A new method for calculating the discrete distribution in the time-frequency domain is proposed, including the use of a spectrogram and a reassigned spectrogram. Further information properties of the Ω matrix and the Π vector in the problem of distinguishing close hypotheses for weak signal detection have yet to be established.
To justify the applicability of the proposed characteristics and validate their classification quality, modeling on synthetic signals and numerical verification of the multiclass classification solution based on machine learning methods on real hydroacoustic recordings were carried out. The obtained high classification results (F1 = 0.95) confirm the potential of the proposed characteristics.
Future work will be devoted to an additional study of the problem of classifying similar classes of signals, as well as the joint use of the proposed information characteristics and classical signal features to improve the quality of classification.

Author Contributions

Conceptualization, A.G. and P.L.; methodology, A.G. and P.L.; software, P.L. and L.B.; validation, P.L., A.G. and L.B.; formal analysis, A.G. and L.B.; investigation, A.G., P.L., L.B. and V.B.; resources, P.L. and L.B.; data curation, P.L.; writing—original draft preparation, A.G., P.L., L.B. and V.B.; writing—review and editing, A.G., P.L., L.B. and V.B.; visualization, P.L. and L.B.; supervision, A.G.; project administration, A.G. and P.L.; funding acquisition, A.G. and P.L. All authors have read and agreed to the published version of the manuscript.

Funding

The work was partially supported by the Russian Science Foundation under grant no 23-19-00134.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in https://www.frdr-dfdr.ca/repo/dataset/4a3113e6-1d58-6bb4-aaf2-a9adf75165be (accessed on 10 August 2025), https://github.com/irfankamboh/DeepShip (accessed on 10 August 2025) and https://github.com/xiaoyangdu22/QiandaoEar22 (accessed on 10 August 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AUC ROCArea Under the Receiver Operating Characteristic Curve
CFARConstant False Alarm Rate
ECGElectrocardiogram
EEGElectroencephalogram
JSDJensen–Shannon Divergence
LFMLinear Frequency Modulation
SNRSignal-To-Noise Ratio
STFTShort-Time Fourier transform
TFDTime-Frequency Distribution
VADVoice Activity Detection

References

  1. Cohen, L. Time-Frequency Analysis; Electrical Engineering Signal Processing; Prentice Hall PTR: Hoboken, NJ, USA, 1995; p. 320. [Google Scholar]
  2. Boashash, B.; Khan, N.A.; Ben-Jabeur, T. Time–frequency features for pattern recognition using high-resolution TFDs: A tutorial review. Digit. Signal Process. 2015, 40, 1–30. [Google Scholar] [CrossRef]
  3. Malarvili, M.B.; Sucic, V.; Mesbah, M.; Boashash, B. Renyi entropy of quadratic time-frequency distributions: Effects of signals parameters. In Proceedings of the 2007 9th International Symposium on Signal Processing and Its Applications, Sharjah, United Arab Emirates, 12–15 February 2007. [Google Scholar] [CrossRef]
  4. Sucic, V.; Saulig, N.; Boashash, B. Analysis of local time-frequency entropy features for nonstationary signal components time supports detection. Digit. Signal Process. 2014, 34, 56–66. [Google Scholar] [CrossRef]
  5. Sucic, V.; Saulig, N.; Boashash, B. Estimating the number of components of a multicomponent nonstationary signal using the short-term time-frequency Rényi entropy. EURASIP J. Adv. Signal Process. 2011, 2011, 125. [Google Scholar] [CrossRef]
  6. Babikov, V.G.; Galyaev, A.A. Information diagrams and their capabilities for classifying weak signals. Probl. Inf. Transm. 2024, 60, 127–140. [Google Scholar] [CrossRef]
  7. Bačnar, D.; Saulig, N.; Vuksanović, I.P.; Lerga, J. Entropy-Based Concentration and Instantaneous Frequency of TFDs from Cohen’s, Affine, and Reassigned Classes. Sensors 2022, 22, 3727. [Google Scholar] [CrossRef]
  8. Aviyente, S. Divergence measures for time-frequency distributions. In Proceedings of the Seventh International Symposium on Signal Processing and Its Applications, Paris, France, 1–4 July 2003; Volume 1, pp. 121–124. [Google Scholar] [CrossRef]
  9. Zarjam, P.; Azemi, G.; Mesbah, M.; Boashash, B. Detection of newborns’ EEG seizure using time-frequency divergence measures. In Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; Volume 5, pp. 429–432. [Google Scholar] [CrossRef]
  10. Porta, A.; Baselli, G.; Lombardi, F.; Montano, N.; Malliani, A.; Cerutti, S. Conditional entropy approach for the evaluation of the coupling strength. Biol. Cybern. 1999, 81, 119–129. [Google Scholar] [CrossRef]
  11. Baraniuk, R.; Flandrin, P.; Janssen, A.; Michel, O. Measuring time-frequency information content using the Renyi entropies. IEEE Trans. Inf. Theory 2001, 47, 1391–1409. [Google Scholar] [CrossRef]
  12. Michel, O.; Baraniuk, R.; Flandrin, P. Time-frequency based distance and divergence measures. In Proceedings of the IEEE-SP International Symposium on Time- Frequency and Time-Scale Analysis, Philadelphia, PA, USA, 25–28 October 1994. [Google Scholar] [CrossRef]
  13. Flandrin, P.; Baraniuk, R.; Michel, O. Time-frequency complexity and information. In Proceedings of the ICASSP ’94, IEEE International Conference on Acoustics, Speech and Signal Processing, Adelaide, SA, Australia, 19–22 April 1994. [Google Scholar] [CrossRef]
  14. Kalra, M.; Kumar, S.; Das, B. Moving Ground Target Detection With Seismic Signal Using Smooth Pseudo Wigner–Ville Distribution. IEEE Trans. Instrum. Meas. 2020, 69, 3896–3906. [Google Scholar] [CrossRef]
  15. Xu, Y.; Zhao, Y.; Jin, C.; Qu, Z.; Liu, L.; Sun, X. Salient target detection based on pseudo-Wigner-Ville distribution and Rényi entropy. Opt. Lett. 2010, 35, 475–477. [Google Scholar] [CrossRef]
  16. Vranković, A.; Ipšić, I.; Lerga, J. Entropy-Based Extraction of Useful Content from Spectrograms of Noisy Speech Signals. In Proceedings of the 2021 International Symposium ELMAR, Zadar, Croatia, 13–15 September 2021. [Google Scholar] [CrossRef]
  17. Liu, C.; Gaetz, W.; Zhu, H. Estimation of Time-Varying Coherence and Its Application in Understanding Brain Functional Connectivity. EURASIP J. Adv. Signal Process. 2010, 2010, 390910. [Google Scholar] [CrossRef]
  18. Moukadem, A.; Dieterlen, A.; Brandt, C. Shannon Entropy based on the S-Transform Spectrogram applied on the classification of heart sounds. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 704–708. [Google Scholar] [CrossRef]
  19. Batistić, L.; Lerga, J.; Stanković, I. Detection of motor imagery based on short-term entropy of time–frequency representations. Biomed. Eng. Online 2023, 22, 41. [Google Scholar] [CrossRef]
  20. Sang, Y.F.; Wang, D.; Wu, J.C.; Zhu, Q.P.; Wang, L. Entropy-Based Wavelet De-noising Method for Time Series Analysis. Entropy 2009, 11, 1123–1147. [Google Scholar] [CrossRef]
  21. Auger, F.; Flandrin, P.; Lin, Y.T.; McLaughlin, S.; Meignen, S.; Oberlin, T.; Wu, H.T. Time-frequency reassignment and synchrosqueezing: An overview. IEEE Signal Process. Mag. 2013, 30, 32–41. [Google Scholar] [CrossRef]
  22. Galyaev, A.A.; Babikov, V.G.; Lysenko, P.V.; Berlin, L.M. A New Spectral Measure of Complexity and Its Capabilities for Detecting Signals in Noise. Dokl. Math. 2024, 110, 361–368. [Google Scholar] [CrossRef]
  23. Galyaev, A.A.; Berlin, L.M.; Lysenko, P.V.; Babikov, V.G. Order statistics of the normalized spectral distribution for detecting weak signals in white noise. Autom. Remote Control 2024, 85, 1041–1055. [Google Scholar] [CrossRef]
  24. López-Ruiz, R.; Mancini, H.; Calbet, X. A statistical measure of complexity. Phys. Lett. A 1995, 209, 321–326. [Google Scholar] [CrossRef]
  25. Liu, Z.; Lü, L.; Yang, C.; Jiang, Y.; Huang, L.; Du, J. DEMON Spectrum Extraction Method Using Empirical Mode Decomposition. In Proceedings of the 2018 OCEANS—MTS/IEEE Kobe Techno-Oceans (OTO), Kobe, Japan, 28–31 May 2018; pp. 1–5. [Google Scholar] [CrossRef]
  26. Kudryavtsev, A.A.; Luginets, K.P.; Mashoshin, A.I. Amplitude Modulation of Underwater Noise Produced by Seagoing Vessels. Akust. Zhurnal 2003, 49, 224–228. [Google Scholar] [CrossRef]
  27. Kirsebom, O.S.; Frazao, F.; Simard, Y.; Roy, N.; Matwin, S.; Giard, S. Performance of a deep neural network at detecting North Atlantic right whale upcalls. J. Acoust. Soc. Am. 2020, 147, 2636–2646. [Google Scholar] [CrossRef]
  28. Du, X.; Hong, F. QiandaoEar22: A high-quality noise dataset for identifying specific ship from multiple underwater acoustic targets using ship-radiated noise. EURASIP J. Adv. Signal Process. 2024, 2024, 96. [Google Scholar] [CrossRef]
  29. Irfan, M.; Jiangbin, Z.; Ali, S.; Iqbal, M.; Masood, Z.; Hamid, U. DeepShip: An underwater acoustic benchmark dataset and a separable convolution based autoencoder for classification. Expert Syst. Appl. 2021, 183, 115270. [Google Scholar] [CrossRef]
  30. libAudioFlux contributors. libAudioFlux/audioFlux: v0.1.9; Zenodo: Geneva, Switzerland, 2024. [Google Scholar] [CrossRef]
  31. Ribeiro, H.V.; Zunino, L.; Lenzi, E.K.; Santoro, P.A.; Mendes, R.S. Complexity-Entropy Causality Plane as a Complexity Measure for Two-Dimensional Patterns. PLoS ONE 2012, 7, e40689. [Google Scholar] [CrossRef] [PubMed]
  32. Wang, J.; Chen, Z. Feature Extraction of Ship-Radiated Noise Based on Intrinsic Time-Scale Decomposition and a Statistical Complexity Measure. Entropy 2019, 21, 1079. [Google Scholar] [CrossRef]
  33. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef]
Figure 1. Histogram and AUC ROC graph for H(P_F), calculated for Q = 20,000 experiments.
Figure 2. The quality of harmonic signal detection for information characteristics calculated from the spectrum.
Figure 3. The quality of harmonic signal detection for information characteristics calculated from a spectrogram.
Figure 4. The quality of LFM chirp signal detection for information characteristics calculated from the spectrum.
Figure 5. The quality of LFM chirp signal detection for information characteristics calculated from the spectrogram.
Figure 6. The quality of detection of the simulated acoustic radiation of a marine vessel for information characteristics calculated from the spectrum.
Figure 7. The quality of detection of the simulated acoustic radiation of a marine vessel for information characteristics calculated from the spectrogram.
Figure 8. The quality of LFM chirp signal detection for different entropy measures.
Figure 9. Classification planes for the Shannon entropy H and the corresponding complexity C_JSD for the spectrum P_F and spectrogram P_S distributions.
Figure 10. Classification planes for the Rényi entropy H^(α) and the corresponding complexity C_{J^(α)}^(α) for the spectrum P_F and spectrogram P_S distributions.
Figure 11. Classification planes for the Rényi entropy H^(α) and the corresponding complexity C_TV^(α) for the spectrum P_F and spectrogram P_S distributions.
Figure 12. Classification planes for the Shannon entropy H and the corresponding complexity C_TV for the conventional P_S and reassigned P_R spectrogram distributions.
Figure 13. Confusion matrix of the trained classifier.
Figure 14. XGBoost classifier training graph.
Figure 15. The importance of features in classification.

