Article

Interpretable Evaluation of Sparse Time–Frequency Distributions: 2D Metric Based on Instantaneous Frequency and Group Delay Analysis

by
Vedran Jurdana
Department of Automation and Electronics, Faculty of Engineering, University of Rijeka, 51000 Rijeka, Croatia
Mathematics 2025, 13(6), 898; https://doi.org/10.3390/math13060898
Submission received: 18 February 2025 / Revised: 3 March 2025 / Accepted: 7 March 2025 / Published: 7 March 2025

Abstract

Compressive sensing in the ambiguity domain offers an efficient method for reconstructing high-quality time–frequency distributions (TFDs) across diverse signals. However, evaluating the quality of these reconstructions presents a significant challenge due to the potential loss of auto-terms when a regularization parameter is inappropriate. Traditional global metrics have inherent limitations, while the state-of-the-art local Rényi entropy (LRE) metric provides a single-value assessment but lacks interpretability and positional information of auto-terms. This paper introduces a novel performance criterion that leverages instantaneous frequency and group delay estimations directly in the 2D time–frequency plane, offering a more nuanced evaluation by individually assessing the preservation of auto-terms, resolution quality, and interference suppression in TFDs. Experimental results on noisy synthetic and real-world gravitational signals demonstrate the effectiveness of this measure in assessing reconstructed TFDs, with a focus on auto-term preservation. The proposed metric offers advantages in interpretability and memory efficiency, while its application to meta-heuristic optimization yields high-performing reconstructed TFDs significantly quicker than the existing LRE-based metric. These benefits highlight its usability in advanced methods and machine-related applications.

1. Introduction

Time–frequency distributions (TFDs) are essential for analyzing non-stationary signals, such as biomedical, seismic, and radar signals, due to their ability to capture dynamic changes in signal characteristics [1,2,3,4,5,6,7,8,9,10]. However, linear methods, including short-time Fourier transform, continuous wavelet transform, and S-transform, are constrained by the Heisenberg uncertainty principle, limiting their resolution [11,12]. Quadratic TFDs (QTFDs) often suffer from cross-term interference that obscures useful signal components, referred to as auto-terms [13,14]. Advanced methods, including synchrosqueezing transform [12], synchroextracting transform [15], and sparse representations utilizing compressive sensing (CS) [16,17,18,19,20], address these limitations, with CS-based approaches proving especially effective across diverse signal types [19,21,22,23,24,25].
However, assessing sparse TFDs remains a challenging task, as improper reconstruction parameter choices can lead to artifacts or overly sparse TFDs with insufficient auto-term detail [17,26,27]. Recent studies have highlighted that traditional global measures, which evaluate TFDs as a whole, disregard the positions of auto-terms and treat them equivalently to interference [27,28]. Specifically, these measures often show improved evaluation results in the absence of interference and auto-terms alike, leading to misleading and inaccurate assessments in the context of reconstructed TFDs. Consequently, global measures tend to favor overly sparse solutions, which do not adequately represent the underlying signal structure [27]. Due to these limitations, reconstruction parameter selection is often performed manually or experimentally, a process that is time-consuming, subjective, and prone to inaccuracies [19,26,29].
The study in [27] demonstrated that reconstructed TFDs require evaluation methods that incorporate auto-term positional information. This led to the development of a state-of-the-art solution using the localized Rényi entropy (LRE) method [30,31]. The LRE-based measure evaluates reconstructed TFDs by comparing the estimated local number of signal components in time and frequency slices before and after reconstruction [27]. This method has enabled more reliable comparisons of reconstructed TFDs and facilitated the use of meta-heuristic algorithms for automatic reconstruction parameter optimization [27,28].
However, two major limitations of the LRE-based metric remain unresolved. First, the LRE method estimates the local number of components along time and frequency slices, effectively reducing the 2D spatial context of auto-terms to two 1D vectors. This dimensionality reduction results in a loss of positional information, meaning that the LRE only provides a number of components per slice, but not their exact location. Second, it consolidates unwanted reconstruction artifacts, low-resolution components, and auto-term loss into a single scalar value. This lack of depth can make the results non-interpretable for end-users, who may struggle to identify the specific cause of improper reconstruction. Similarly, automated systems may fail to converge to optimal solutions during meta-heuristic optimization. These limitations highlight a critical gap in the development of performance metrics capable of operating directly within the 2D time–frequency (TF) plane.
This paper addresses this gap by introducing a novel performance metric that leverages instantaneous frequency (IF) and group delay (GD) estimates to assess sparse TFDs. Unlike existing methods, this approach focuses on signal auto-terms while preserving the unbiased 2D spatial information of their positions. The novelty of this metric lies in its ability to decompose the evaluation into distinct components, providing detailed assessments that are interpretable. Building on the localized evaluation principle of the LRE-based approach, the proposed metric advances this concept by extracting information at the precise positions of auto-terms. For each time and frequency slice, the reconstructed TFD is analyzed around the auto-term peak indicated by the estimated IF or GD, respectively. This enables a more precise assessment of auto-term preservation and resolution quality. Furthermore, the method iteratively removes evaluated auto-terms to facilitate a comprehensive assessment of interference. To enhance efficiency and reduce user dependency, the approach integrates the component alignment map (CAM) [29], which automatically identifies optimal TFD regions for localization along time or frequency slices.
The key properties of the proposed measure include the following:
  • Decomposed Evaluation: Unlike the LRE-based metric, the proposed measure decomposes into three components providing detailed assessments of auto-term preservation, resolution quality, and interference suppression.
  • Improved Interpretability: The proposed measure not only effectively penalizes the loss of auto-terms but also provides interpretable results. This assists users in fine-tuning reconstruction parameters (e.g., the regularization parameter) without the need to visually inspect reconstructed TFDs.
  • Optimization-Ready Design: The components of the proposed measure can serve as objective functions for multi-objective optimization algorithms, enabling a robust, fast, and efficient framework for automated parameter tuning in noisy environments.
  • Memory Efficiency: The effect of the regularization parameter can be described without storing large TFD images.
The effectiveness of the proposed measure is demonstrated through the evaluation of several reconstructed TFDs generated using the two-step iterative shrinkage/thresholding (TwIST) algorithm [32]. The study evaluates both noisy synthetic signals and real-world gravitational signals which present cases of linear and non-linear components, intersecting components, and components with different amplitudes in noise, illustrating the practical utility and robustness of the method.
The remainder of this paper is organized as follows. Section 2.1 reviews sparse TFD reconstruction, while Section 2.2 introduces common metrics for evaluating sparse TFDs, focusing on the current LRE-based measure. Motivated by the limitations of the LRE-based metric described in Section 2.3, Section 2.4 introduces the proposed measure based on the estimated IFs and GDs. Section 3 presents the experimental simulation results, and Section 4 concludes the paper.

2. Materials and Methods

2.1. Sparse Time–Frequency Distributions

The Wigner–Ville distribution (WVD) is a fundamental TFD defined as [13]:
$W_z(t,f) = \int_{-\infty}^{\infty} z\!\left(t + \frac{\tau}{2}\right) z^{*}\!\left(t - \frac{\tau}{2}\right) e^{-j2\pi f \tau}\, d\tau,$
where $z(t) = \sum_{m=1}^{M} A_m(t)\, e^{j\varphi_m(t)}$ represents the analytic form of a non-stationary signal with M components. Here, $A_m(t)$ and $\varphi_m(t)$ denote the instantaneous amplitude and phase of the m-th signal component, respectively. The WVD is known for providing precise IF and GD estimates for signals with a single linear frequency-modulated (LFM) component. The IF, $f_0^m(t)$, reflects the dominant frequency of the m-th component at a given time, while the GD indicates the time at which a particular frequency appears [13]. To address the limitations of the WVD, cross-term suppression in multi-component signals is typically managed in the ambiguity function (AF), $A_z(\nu,\tau)$, defined as [13]
$A_z(\nu,\tau) = \iint_{-\infty}^{\infty} W_z(t,f)\, e^{j2\pi(f\tau - \nu t)}\, dt\, df.$
This approach leads to QTFDs, denoted as $\rho_z(t,f)$:
$\rho_z(t,f) = \iint_{-\infty}^{\infty} A_z(\nu,\tau)\, g(\nu,\tau)\, e^{j2\pi(\nu t - f\tau)}\, d\nu\, d\tau,$
where $g(\nu,\tau)$ serves as a low-pass filter kernel in the AF [13].
Despite the improvements offered by QTFDs, there remains an inherent trade-off between auto-term resolution and cross-term suppression, which advanced methods seek to overcome [13]. In this study, a CS-based method is utilized, which allows signal reconstruction from a small subset of AF samples, namely the CS-AF region $A_z^{CS}(\nu,\tau)$. This CS-AF region is centered around the AF origin and typically takes a rectangular shape so as to include only auto-terms [17,19,21]. The reconstruction employs algorithms that seek the optimal sparse TFD $\Upsilon_z(t,f)$ [19,21]:
$\Upsilon_z(t,f) = \Psi^{H} \cdot A_z^{CS}(\nu,\tau),$
where $\Psi^{H}$ is the Hermitian transpose of a domain transformation matrix. Since the cardinality of $A_z^{CS}(\nu,\tau)$ is smaller than that of $\Upsilon_z(t,f)$ (with dimensions $N_t \times N_f$, where $N_t$ and $N_f$ are the number of time instances and frequency bins, respectively), multiple solutions for $\Upsilon_z(t,f)$ are possible. To identify the optimal solution, a regularization function is employed to emphasize the desired characteristics of the solution. Sparsity is typically promoted using the $\ell_1$ norm in the reconstruction process [14,18,19,21,26,33], leading to the following optimization problem [26,33]:
$\Upsilon_z^{\ell_1}(t,f) = \underset{\Upsilon_z(t,f)}{\arg\min}\ \|\Upsilon_z(t,f)\|_1, \quad \text{subject to: } \|\Upsilon_z(t,f) - \Psi^{H} A_z^{CS}(\nu,\tau)\|_2^2 \leq \epsilon,$
where ϵ denotes the tolerance level for the solution. The closed-form solution of this optimization problem is given by [26]
$\Upsilon_z^{\ell_1}(t,f) = \mathrm{soft}_{\lambda}\{\Upsilon_z(t,f)\},$
where $\lambda > 0$ is the regularization parameter, while $\mathrm{soft}_{\lambda}\{\Upsilon_z(t,f)\}$ is the soft-threshold function given as [26]:
$\mathrm{soft}_{\lambda}\{\Upsilon_z(t,f)\} = \mathrm{sgn}\left(\Upsilon_z(t,f)\right) \max\left(|\Upsilon_z(t,f)| - \lambda,\ 0\right).$
The selection of λ is crucial, as an incorrect value can either lead to the inclusion of interference if too low, or degrade auto-term preservation if too high [19,26].
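To make the effect of (7) concrete, a minimal NumPy sketch of the element-wise soft-thresholding operator is given below; it assumes the TFD is stored as a real-valued array, and the function name is illustrative rather than taken from any specific toolbox.

```python
import numpy as np

def soft_threshold(tfd, lam):
    """Element-wise soft-thresholding, soft_lambda{.} in Eq. (7).

    Samples with magnitude below `lam` are zeroed; the remaining samples
    are shrunk towards zero by `lam`, which is how the l1 regularization
    promotes sparsity in the reconstructed TFD.
    """
    return np.sign(tfd) * np.maximum(np.abs(tfd) - lam, 0.0)
```

Larger values of λ therefore discard more low-amplitude samples, which is the origin of the over-sparsification risk discussed above.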
The TwIST algorithm is given as [32]
$\Upsilon_z^{\ell_1}(t,f)[n+1] = (1 - \alpha_T)\, \Upsilon_z^{\ell_1}(t,f)[n-1] + (\alpha_T - \beta_T)\, \Upsilon_z^{\ell_1}(t,f)[n] + \beta_T\, \mathrm{soft}_{\lambda}\!\left\{ \Upsilon_z^{\ell_1}(t,f)[n] + \Psi^{H}\!\left( A_z^{CS}(\nu,\tau) - \Psi\, \Upsilon_z^{\ell_1}(t,f)[n] \right) \right\},$
where $\alpha_T \in (0,1]$ and $\beta_T \in (0, 2\alpha_T]$, with $\alpha_T, \beta_T \in \mathbb{R}$, are the relaxation parameters.
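The following schematic sketch illustrates the iteration in (8). The operators Psi and PsiH are assumed to be supplied as callables mapping between the TF plane and the CS-AF domain, the stopping rule is a fixed iteration count rather than the tolerance ϵ, and the soft_threshold helper sketched after (7) is reused; none of these names correspond to an actual implementation from the cited works.

```python
import numpy as np

def twist_reconstruct(A_cs, Psi, PsiH, lam, alpha_T=0.9, beta_T=1.0, n_iter=200):
    """Schematic TwIST iteration from Eq. (8) (illustrative only).

    A_cs : selected CS-AF region (array)
    Psi  : callable, TF plane -> CS-AF domain
    PsiH : callable, CS-AF domain -> TF plane (Hermitian transpose of Psi)
    """
    y_prev = PsiH(A_cs)        # initial estimate, plays the role of Upsilon[n-1]
    y_curr = y_prev.copy()     # Upsilon[n]
    for _ in range(n_iter):
        # step towards consistency with the measured CS-AF samples
        consistent = y_curr + PsiH(A_cs - Psi(y_curr))
        y_next = ((1.0 - alpha_T) * y_prev
                  + (alpha_T - beta_T) * y_curr
                  + beta_T * soft_threshold(consistent, lam))
        y_prev, y_curr = y_curr, y_next
    return y_curr
```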

2.2. Evaluating Sparse TFDs: The Local Rényi Entropy Approach with Meta-Heuristic Optimization

Accurate assessment of TFD reconstruction quality is essential for selecting appropriate reconstruction parameters and ensuring reliable signal analysis. Conventional global measures, such as the Rényi entropy, are defined as [34,35]
$R = \frac{1}{1-\alpha_R} \log_2 \iint_{-\infty}^{\infty} \left( \frac{\rho_z(t,f)}{\iint_{-\infty}^{\infty} \rho_z(t,f)\, dt\, df} \right)^{\alpha_R} dt\, df,$
where α R is typically an odd integer, and the energy concentration measure (ECM) is given as [36]
$ECM = \frac{1}{N_t N_f} \left( \iint_{-\infty}^{\infty} \left| \frac{\rho_z(t,f)}{\iint_{-\infty}^{\infty} \rho_z(t,f)\, dt\, df} \right|^{\frac{1}{2}} dt\, df \right)^{2},$
which has proven inadequate for sparse TFDs [27,29]. These measures tend to favor overall sparsity rather than retaining the localized structure of signal auto-terms, resulting in inaccurate assessment [27,29]. To address this issue, the LRE method has been utilized as a solution that provides localized information on signal components. The LRE approach calculates local component numbers by comparing localized Rényi entropies of the signal and reference TFDs, as follows [27,31]:
$M_t^{\rho_z(t,f)}(t_0) = 2^{\,R\left(\chi_{t_0}\{\rho_z(t,f)\}\right) - R\left(\chi_{t_0}\{\rho_{\mathrm{ref}}(t,f)\}\right)},$
$M_f^{\rho_z(t,f)}(f_0) = 2^{\,R\left(\chi_{f_0}\{\rho_z(t,f)\}\right) - R\left(\chi_{f_0}\{\rho_{\mathrm{ref}}(t,f)\}\right)},$
where the subscripts t and f indicate localization through time or frequency slices, respectively, $t_0$ and $f_0$ represent the targeted time or frequency slice, respectively, and $\rho_{\mathrm{ref}}(t,f)$ is the reference QTFD. The operators $\chi_{t_0}$ and $\chi_{f_0}$ isolate TFD samples within the intervals $[t_0 - \Theta_t/2,\ t_0 + \Theta_t/2]$ and $[f_0 - \Theta_f/2,\ f_0 + \Theta_f/2]$, respectively, where $\Theta_t$ and $\Theta_f$ control the window lengths [27,31]. Note that $\rho_{\mathrm{ref}}(t,f)$ in (11) comprises a single component with a constant normalized frequency of 0.1, whereas in (12) it corresponds to a delta function located at t = 15, as in [27,29]. Also, the same QTFD has to be used for calculating $\rho_z(t,f)$ and $\rho_{\mathrm{ref}}(t,f)$ in order for the reference component to be suitable for the analyzed TFD [27,31].
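For illustration, a minimal NumPy sketch of the localized estimate in (11) is given below. It assumes the TFDs are stored as real-valued arrays indexed as [time, frequency]; the helper names and the slicing convention are assumptions made for this sketch rather than the exact implementation of [27,31].

```python
import numpy as np

def renyi_entropy(tfd_patch, alpha_r=3):
    """Rényi entropy, Eq. (9), of an energy-normalized TFD patch."""
    p = np.abs(tfd_patch) / np.sum(np.abs(tfd_patch))
    return np.log2(np.sum(p ** alpha_r)) / (1.0 - alpha_r)

def local_components_time(tfd_sig, tfd_ref, t0, theta_t=11, alpha_r=3):
    """Local number of components around the time slice t0, Eq. (11)."""
    half = theta_t // 2
    window = slice(max(t0 - half, 0), t0 + half + 1)   # the chi_{t0} operator
    return 2.0 ** (renyi_entropy(tfd_sig[window, :], alpha_r)
                   - renyi_entropy(tfd_ref[window, :], alpha_r))
```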
The LRE-based metric for evaluating reconstructed TFDs resembles the mean squared error (MSE) between local numbers of components in the original and reconstructed TFDs [27]:
$\mathrm{MSE}_t = \frac{1}{N_t} \sum_{t=1}^{N_t} \left( \frac{M_t^{\rho_z(t,f)}(t) - M_t^{\Upsilon_z^{\ell_1}(t,f)}(t)}{\max\left( M_t^{\rho_z(t,f)}(t),\ M_t^{\Upsilon_z^{\ell_1}(t,f)}(t) \right)} \right)^{2},$
$\mathrm{MSE}_f = \frac{1}{N_f} \sum_{f=1}^{N_f} \left( \frac{M_f^{\rho_z(t,f)}(f) - M_f^{\Upsilon_z^{\ell_1}(t,f)}(f)}{\max\left( M_f^{\rho_z(t,f)}(f),\ M_f^{\Upsilon_z^{\ell_1}(t,f)}(f) \right)} \right)^{2}.$
The MSE increases when the TFD is excessively sparse, lacking sufficient auto-term retention, and when it has low-resolution auto-terms with unresolved cross-terms. Given that the reconstructed TFD examples in this study were obtained using the TwIST algorithm, M t Υ z 1 ( t , f ) ( t ) (and M f Υ z 1 ( t , f ) ( f ) ) involves a reference TFD obtained using the TwIST algorithm in order for the comparison to be valid [27].
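A compact sketch of the normalized error in (13) and (14) is shown below; the small guard against empty slices (eps) is an assumption added for numerical safety and is not part of the original formulation.

```python
import numpy as np

def lre_mse(m_orig, m_rec, eps=1e-12):
    """Normalized MSE between local component counts, Eqs. (13)-(14).

    m_orig, m_rec : 1D arrays of local component numbers per slice,
                    estimated from the original and reconstructed TFD.
    """
    m_orig = np.asarray(m_orig, dtype=float)
    m_rec = np.asarray(m_rec, dtype=float)
    denom = np.maximum(np.maximum(m_orig, m_rec), eps)  # avoid division by zero
    return float(np.mean(((m_orig - m_rec) / denom) ** 2))
```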
The aforementioned LRE-based measures, MSE t and MSE f , served as objective functions within a multi-objective optimization framework implemented to optimize the TwIST algorithm, as defined by [27]
$\underset{\alpha_T,\ \beta_T,\ \lambda}{\text{minimize}}\ \left\{ \mathrm{MSE}_t,\ \mathrm{MSE}_f,\ ECM \right\}, \quad \text{s.t. } \alpha_T \in [0,1],\ \beta_T \in [0, 2\alpha_T],\ \lambda \in [0, 100].$
Minimizing the E C M promotes high resolution and suppressed interference in the reconstructed TFD, while minimizing MSE t and MSE f mitigates auto-term loss. This multi-objective optimization was performed utilizing a particle swarm optimization (MOPSO) algorithm [37,38], consistent with prior applications in [27,28]. Inherent to multi-objective optimization is the concept that a single feasible solution cannot simultaneously minimize all objective functions; improvement in one objective typically results in the degradation of at least one other. Consequently, MOPSO generates a Pareto front, representing the set of non-dominated solutions that define the trade-off surface between competing objectives [37,38].
To formally define dominance, consider two reconstructed TFDs, denoted as solutions x and y. Solution x is said to dominate solution y, denoted $x \succ y$, if and only if the following conditions are met:
$\forall\ \text{objectives: } \mathrm{MSE}_t(x) \leq \mathrm{MSE}_t(y)\ \wedge\ \mathrm{MSE}_f(x) \leq \mathrm{MSE}_f(y)\ \wedge\ ECM(x) \leq ECM(y),$
$\exists\ \text{objective: } \mathrm{MSE}_t(x) < \mathrm{MSE}_t(y)\ \vee\ \mathrm{MSE}_f(x) < \mathrm{MSE}_f(y)\ \vee\ ECM(x) < ECM(y).$
In essence, solution x dominates y if it is at least as good in all objectives and strictly better in at least one. The Pareto front represents the set of solutions for which no other solution dominates them.
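The dominance test in (16) reduces to a simple element-wise comparison of objective vectors; a generic sketch (valid for any number of minimized objectives) is given below.

```python
def dominates(obj_x, obj_y):
    """True if solution x Pareto-dominates y, Eq. (16); all objectives are minimized."""
    at_least_as_good = all(fx <= fy for fx, fy in zip(obj_x, obj_y))
    strictly_better = any(fx < fy for fx, fy in zip(obj_x, obj_y))
    return at_least_as_good and strictly_better

# example: dominates((0.10, 0.12, 0.30), (0.15, 0.12, 0.35)) -> True
```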
To select a single optimal solution from the Pareto front, the fuzzy satisfying method (FSM) was employed [39], consistent with the approach in [27]. FSM provides a mechanism for aggregating the multiple objectives into a single composite score, allowing for the identification of the solution that best satisfies the preferences encoded within the fuzzy membership functions.
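A common formulation of the FSM assigns each objective a linear membership function between the best and worst values found on the Pareto front and then selects the solution maximizing the smallest membership. Whether [27,39] use exactly this membership shape is not restated here, so the sketch below should be read as one representative variant.

```python
import numpy as np

def fuzzy_satisfying_choice(front):
    """Pick one solution from a Pareto front (rows = solutions, cols = objectives).

    Linear membership mu = (f_max - f) / (f_max - f_min) per objective,
    then max-min aggregation; returns the row index of the chosen solution.
    """
    front = np.asarray(front, dtype=float)
    f_min, f_max = front.min(axis=0), front.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)  # avoid division by zero
    mu = (f_max - front) / span
    return int(np.argmax(mu.min(axis=1)))
```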

2.3. Limitations of the Existing Approach

To illustrate the limitations of the existing LRE-based measure, a synthetic signal, z S 1 ( t ) , composed of one LFM and one sinusoidal frequency-modulated (SFM) component, was employed:
$z_{S1}(t) = e^{j2\pi\left(0.35t + 0.0004t^2\right)} + e^{j\left(0.4\pi t + 25.6\left(\sin(0.0245t - \pi/2) + 1\right)\right)},$
where the signal length is $N_t = 128$ samples. Figure 1 portrays four reconstructed TFDs derived using the TwIST algorithm with a varying regularization parameter λ. The λ values were specifically chosen to highlight their impact on reconstruction performance: a high λ (Figure 1b) resulted in an overly sparse TFD with the SFM component absent, a low λ (Figure 1d) yielded a blurred TFD with substantial cross-term interference, and intermediate λ values (Figure 1a,c) produced TFDs with retained auto-terms, albeit with varying degrees of resolution and cross-term contamination.
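For reference, the discrete samples of $z_{S1}(t)$ in (17) can be generated with a few lines of NumPy; the signs follow the formula as reconstructed above.

```python
import numpy as np

t = np.arange(128)                       # N_t = 128 samples, Eq. (17)
lfm = np.exp(1j * 2 * np.pi * (0.35 * t + 0.0004 * t ** 2))
sfm = np.exp(1j * (0.4 * np.pi * t + 25.6 * (np.sin(0.0245 * t - np.pi / 2) + 1)))
z_s1 = lfm + sfm                         # analytic two-component test signal
```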
The LRE-based measure, as defined in (13) and (14), operates by comparing the local number of components estimated via LRE in the original and reconstructed TFDs. Figure 2 presents these LRE estimations for the corresponding TFDs in Figure 1. As expected, a loss of auto-terms is reflected in a decreased M t Υ z 1 ( t , f ) ( t ) (Figure 2b), while the presence of interference or blurred auto-terms leads to an inflated M t Υ z 1 ( t , f ) ( t ) relative to M t ρ z ( t , f ) ( t ) (Figure 2d).
Despite its ability to circumvent the limitations of global measures and avoid favoring overly sparse TFDs, the LRE-based measure exhibits several inherent limitations.
Limitations Regarding Spatial Information: First, LRE fundamentally captures only one-dimensional information about auto-terms within the TFD, despite the inherent two-dimensional nature of the TF plane. This reduction in spatial information means that, for each time or frequency slice, the estimated local number of components only indicates the number of auto-terms, without specifying its precise location within that slice. This leads to a critical ambiguity: LRE cannot distinguish between a well-localized auto-term and a blurred, less informative representation. Moreover, LRE measures the entropy of the entire slice, regardless of whether the entropy originates from a single, concentrated energy atom or from a combination of a lower-resolution auto-term and cross-term artifacts. This is visually illustrated by comparing the TFDs in Figure 1a,c and their corresponding LRE estimations in Figure 2a,c. In this case, the estimated number of components is higher in time slices containing lower-resolution auto-terms compared to those with better-resolved auto-terms but also some cross-term contamination. This demonstrates that relying solely on LRE can lead to misinterpretations regarding the true quality of the reconstructed TFD.
Dependence on Reference Signal: Second, the LRE is inherently sensitive to the choice of reference signal. Prior research has demonstrated that the accuracy of LRE estimates diminishes when applied to components whose alignment deviates significantly from the reference signal [27]. While employing both $M_t(t)$ and $M_f(f)$ can partially mitigate this issue, consider two simple LFM components, $e^{j2\pi(0.0009765t^2)}$ and $e^{j2\pi(0.5t - 0.0009765t^2)}$, which, despite their distinct time–frequency characteristics, yield identical LRE estimates: $M_t(t) = M_f(f) = 1$. This demonstrates a fundamental limitation in the ability of LRE to fully capture the nuances of different signal components.
Lack of Interpretability: Furthermore, the MSE t , f values, while normalized to the range [ 0 , 1 ] as in [27], lack intrinsic interpretability. Table 1 presents the obtained MSE t , f values for the reconstructed TFDs in Figure 1. Based solely on these values, it is impossible to discern whether an increase in MSE stems from reconstructed interference, low-resolution auto-terms, or the complete loss of an auto-term. While access to the M t Υ z 1 ( t , f ) ( t ) and M t ρ z ( t , f ) ( t ) curves could provide some additional insight, even this is not always conclusive. For instance, as illustrated in Figure 2c, a decrease in M t Υ z 1 ( t , f ) ( t ) does not invariably indicate a loss of the auto-term. Consequently, the only reliable approach for discerning the true cause of the MSE value is to visually analyze the reconstructed TFD itself, which necessitates the storage and processing of potentially large data matrices, leading to memory inefficiency.
Challenges for Automated Optimization: The aforementioned limitations pose significant challenges for automated systems, particularly meta-heuristic optimization algorithms tasked with identifying optimal parameter settings. Specifically, the conflicting nature of the optimization objectives in (15) exacerbates the problem. Efforts to minimize E C M (i.e., drive the TFD towards an empty representation) can inadvertently lead to an increase in MSE t , f , hindering the algorithm’s ability to converge to a truly optimal solution.
In summary, these limitations—the loss of spatial information, the dependence on reference signals, the lack of interpretability in MSE values, and the challenges they pose for automated optimization—provide a compelling motivation for developing performance criteria that directly leverage the two-dimensional information inherent in auto-terms, as detailed in the subsequent section.

2.4. The Proposed Measure Based on the Instantaneous Frequency and Group Delay

The proposed measure aims to enhance TFD assessment by leveraging IF and GD estimations. In contrast to the LRE approach, which captures predominantly one-dimensional information, IF and GD estimations provide precise auto-term positioning within the two-dimensional TF plane. This enables a more comprehensive and nuanced evaluation based on three core criteria:
  • Auto-Term Preservation: Quantifying the degree to which signal auto-terms are retained in the reconstructed TFD.
  • Auto-Term Resolution: Assessing the sharpness and clarity of the reconstructed auto-terms.
  • Interference Suppression: Measuring the effectiveness of suppressing cross-terms and noise artifacts.
Each criterion is quantified by a corresponding metric component: ξ a t (auto-term preservation), ξ r (auto-term resolution), and ξ c t (cross-term suppression).

2.4.1. Auto-Term Preservation ( ξ a t )

The auto-term preservation component, $\xi_{at}$, evaluates the extent to which auto-terms are preserved in the TFD reconstruction. The approach involves analyzing each estimated IF and GD point to determine if a corresponding signal component is present within a defined neighborhood. For each estimated IF point, $f_0^m$, corresponding to the m-th signal component, the auto-term peak frequency, $f_x^m$, is identified by searching for the first local maximum in the TFD amplitude within the frequency range $[f_0^m - \Delta_\xi/2,\ f_0^m + \Delta_\xi/2]$ that is closest to $f_0^m$. The parameter $\Delta_\xi$ defines a user-specified tolerance range, representing the maximum allowable deviation from the estimated IF within which a TFD value is considered a valid auto-term. The “first local maximum closest to” criterion is used to resolve cases where multiple maxima may be present within the tolerance range. An analogous procedure is applied for GD analysis. For each GD estimate, $\tau_d^m$, the auto-term peak time, $t_x^m$, is identified as the time of the first local maximum within the time range $[\tau_d^m - \Delta_\xi/2,\ \tau_d^m + \Delta_\xi/2]$ that is closest to $\tau_d^m$.
If no positive maximum is detected within the specified range, it indicates a failure to preserve the auto-term, and ξ a t is penalized by adding a value of 1. Otherwise, if an auto-term is detected, the value corresponding to the distance between f 0 m and f x m (or τ d m and t x m ) is added to ξ a t , calculated using a Gaussian function:
$\xi_{at} = \begin{cases} 1 - \exp\left( -\dfrac{(f_0^m - f_x^m)^2}{2\sigma_G^2} \right), & \text{if IF is used}, \\ 1 - \exp\left( -\dfrac{(\tau_d^m - t_x^m)^2}{2\sigma_G^2} \right), & \text{if GD is used}, \end{cases}$
where σ G is a user-defined tuning parameter that controls the width of the Gaussian function. A larger σ G results in a wider Gaussian function, indicating a lower sensitivity to positional errors, while a smaller σ G results in a narrower Gaussian function, indicating a higher sensitivity. This Gaussian weighting penalizes auto-terms that are located further away from their expected positions, reflecting a degradation in preservation accuracy. The parameter Δ ξ accounts for potential biases introduced by different TFDs and noise levels [13], while the Gaussian function offers a flexible mechanism for controlling the sensitivity to positional deviations. When the auto-term maximum coincides exactly with the estimated IF or GD (i.e., f x m = f 0 m or t x m = τ d m ), ξ a t = 0 , indicating no penalization. The final value of ξ a t is computed as the mean of all computed values and is bounded within the range [ 0 , 1 ] , where lower values indicate better auto-term preservation.
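The per-point contribution to $\xi_{at}$ can be sketched as follows; the peak-picking rule follows the description above (first local maximum closest to the estimate within ±Δξ/2), while the function name, the treatment of the window edges, and the bin-index representation of the IF estimate are assumptions made for this illustration.

```python
import numpy as np

def xi_at_contribution(tfd_slice, f0, delta_xi, sigma_g):
    """Auto-term preservation penalty for one estimated IF (or GD) point, Eq. (18).

    tfd_slice : 1D array, TFD amplitude along one time (or frequency) slice
    f0        : estimated IF (or GD) expressed as a bin index in that slice
    Returns (penalty, peak_bin); peak_bin is None when no auto-term is found.
    """
    lo = max(int(f0 - delta_xi // 2), 0)
    hi = min(int(f0 + delta_xi // 2), len(tfd_slice) - 1)
    win = tfd_slice[lo:hi + 1]
    # candidate peaks: positive samples strictly larger than both neighbours
    peaks = [lo + k for k in range(1, len(win) - 1)
             if win[k] > win[k - 1] and win[k] > win[k + 1] and win[k] > 0]
    if not peaks:
        return 1.0, None                            # auto-term lost: full penalty
    peak = min(peaks, key=lambda k: abs(k - f0))    # local maximum closest to f0
    return 1.0 - np.exp(-(f0 - peak) ** 2 / (2.0 * sigma_g ** 2)), peak
```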

2.4.2. Auto-Term Resolution ( ξ r )

The auto-term resolution component, $\xi_r$, quantifies the sharpness and clarity of the reconstructed auto-terms within the TFD. Specifically, it calculates the mean width of the main lobe for each detected auto-term. For each auto-term, the frequency bins (or time samples) to the left and right of the auto-term maximum are identified at the points where the TFD amplitude decreases to $\sqrt{2}/2$ of the maximum amplitude. This threshold corresponds to the −3 dB point, commonly used in signal processing to define the effective bandwidth or main-lobe width [40]. These points establish the boundaries of the main lobe. The width of the main lobe is then normalized by either $N_t$ or $N_f$, depending on whether the auto-term was identified using IF or GD estimates, respectively. This normalization ensures that $\xi_r$ is interpretable across different TFD dimensions.
Unlike previous approaches that rely on normalized instantaneous bandwidth [40], this method provides a direct measure of the spatial extent of the auto-term in the TF plane. The resulting $\xi_r$ value is bounded within the range $[1/N_{t,f},\ 1]$, where smaller values indicate higher resolution (i.e., sharper auto-terms). The ideal auto-term resolution occurs when the auto-term occupies only a single sample or bin, resulting in $\xi_r = 1/N_{t,f}$.
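A sketch of the −3 dB main-lobe width used for $\xi_r$ is given below; it walks outwards from the detected peak until the amplitude falls below $\sqrt{2}/2$ of the peak value, and counting the peak bin itself is a convention chosen so that a single-bin auto-term yields the ideal width of one sample.

```python
import numpy as np

def mainlobe_width(tfd_slice, peak_bin):
    """Main-lobe width (in bins) around a detected auto-term peak (-3 dB rule)."""
    thr = tfd_slice[peak_bin] * np.sqrt(2.0) / 2.0
    left = peak_bin
    while left > 0 and tfd_slice[left - 1] >= thr:
        left -= 1
    right = peak_bin
    while right < len(tfd_slice) - 1 and tfd_slice[right + 1] >= thr:
        right += 1
    return right - left + 1   # divide by N_t or N_f to obtain the xi_r contribution
```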

2.4.3. Interference Suppression ( ξ c t )

The interference suppression component, ξ c t , evaluates the effectiveness of suppressing cross-terms and noise artifacts within the TFD. To calculate this metric, the identified auto-terms are first removed from the TFD. The remaining samples are then considered to be primarily composed of cross-terms and noise. The metric is then calculated as
$\xi_{ct} = \frac{1}{N_t N_f}\, \big\| \rho^{(\mathrm{cross})}(t,f) \big\|_0,$
where $\|\rho^{(\mathrm{cross})}(t,f)\|_0$ represents the $\ell_0$ norm, which counts the number of non-zero elements in the TFD after auto-term removal. The result is then normalized by the total number of samples in the TFD ($N_t \times N_f$). Consequently, $\xi_{ct}$ can be interpreted as the fraction of TFD samples attributed to cross-terms and noise. This metric is bounded between $[0, 1]$, where lower values indicate better cross-term suppression. An ideal value of $\xi_{ct} = 0$ indicates that all interference terms have been completely eliminated from the TFD.
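Because (19) is simply a normalized $\ell_0$ count, its implementation is a one-liner, sketched here under the assumption that the evaluated auto-terms have already been zeroed out of the array.

```python
import numpy as np

def xi_ct(tfd_without_autoterms):
    """Fraction of non-zero samples remaining after auto-term removal, Eq. (19)."""
    n_t, n_f = tfd_without_autoterms.shape
    return np.count_nonzero(tfd_without_autoterms) / (n_t * n_f)
```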
Figure 3 provides an illustrative example of the proposed measure applied to time slices of reconstructed TFDs for the signal $z_{S1}(t)$. Figure 3a depicts a time slice from an overly sparse TFD. For the first estimated IF, $f_0^1$, no auto-term is detected within the tolerance range, resulting in a penalty of 1 for $\xi_{at}$. For the second estimated IF, $f_0^2$, an auto-term is detected, and the corresponding $\xi_{at}$ value is calculated using (18). The resolution of this auto-term is then assessed as $\xi_r = |f_l^2 - f_r^2| / N_t$. Since the TFD is overly sparse, $\xi_{ct} = 0$. Conversely, Figure 3b illustrates a time slice from a TFD reconstructed with a low regularization parameter (λ), leading to low-resolution auto-terms and substantial interference. While both auto-terms are detected, their resolution is lower than in Figure 3a. Furthermore, this time slice contains a significant number of interference components, resulting in a higher $\xi_{ct}$ value compared to the time slice in Figure 3a.

2.4.4. Component Alignment Map Integration

To automate the selection of appropriate IF and GD estimates for TFD analysis, the proposed measure leverages the CAM, as introduced in [29]. CAM automatically determines TFD regions where localization is best performed using time slices (i.e., IF analysis) or frequency slices (i.e., GD analysis). CAM analysis exploits the fact that LRE estimates artificially increase when a component’s alignment deviates significantly from the reference signal, suggesting the need for an alternative localization approach. The CAM method extracts TFD segments where such increases occur and then applies connectivity analysis to identify discontinuous component estimates resulting from unsuitable localization approaches [29]. The CAM generates a binary map, where C A M ( t , f ) = 1 indicates TFD regions where localization in time slices (IF analysis) is preferred, and C A M ( t , f ) = 0 indicates regions where localization in frequency slices (GD analysis) is preferred. In the context of the proposed measure, CAM facilitates automatic selection of IF or GD estimates based on the local characteristics of the TFD.

2.4.5. Multi-Objective Optimization

Furthermore, the proposed measure components, ξ a t , ξ r , and ξ c t , can serve as objective functions to be minimized within a multi-objective optimization framework, analogous to the approach described in (15):
$\underset{\alpha_T,\ \beta_T,\ \lambda}{\text{minimize}}\ \left\{ \xi_{at},\ \xi_r,\ \xi_{ct} \right\}, \quad \text{s.t. } \alpha_T \in [0,1],\ \beta_T \in [0, 2\alpha_T],\ \lambda \in [0, 100].$
A key advantage of the proposed measure components is that they are not inherently conflicting, unlike the LRE-based objective functions in (15). For example, achieving a minimal ξ a t = 0 , signifying perfect auto-term preservation with accurate positioning, does not necessarily imply an increase in ξ r or ξ c t . This property facilitates more robust and efficient optimization, as the algorithm can independently optimize each component without being constrained by inherent trade-offs.

2.4.6. Summary of the Proposed Performance Measure

The key steps of the proposed measure can be summarized as follows (a compact illustrative sketch combining these steps is given after the list):
  1. Initialization: Initialize the auto-term preservation ($\xi_{at}$) and auto-term resolution ($\xi_r$) components to zero.
  2. IF Analysis (Time Slices): For each estimated IF point where $CAM(t,f) = 1$, search for the auto-term maximum within the corresponding time slice.
  3. Auto-Term Preservation Assessment: If no positive maximum is detected within the tolerance range, increment $\xi_{at}$ by 1 to penalize the lack of auto-term preservation and skip steps 4 and 5. Otherwise, increment $\xi_{at}$ based on the distance between the estimated IF and the detected auto-term maximum, calculated using the Gaussian function in (18).
  4. Auto-Term Resolution Assessment: Locate the frequency bins to the left and right of the detected auto-term maximum where the TFD amplitude is equal to $\sqrt{2}/2$ of the maximum value. Increment $\xi_r$ with the width of this frequency range.
  5. Auto-Term Removal: Remove the considered auto-term from the TFD in a local neighborhood around the detected auto-term maximum.
  6. GD Analysis (Frequency Slices): Repeat steps 2–5 for each estimated GD point where $CAM(t,f) = 0$, using frequency slices.
  7. Normalization: Normalize $\xi_{at}$ by the total number of estimated IF and GD points, and normalize $\xi_r$ by the total number of detected auto-term maxima.
  8. Interference Suppression Assessment: After removing all identified auto-terms, determine the amount of interference by calculating the number of non-zero elements in the remaining TFD using (19).
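The compact sketch below strings the above steps together. The indexing conventions (TFD stored as an $N_t \times N_f$ array indexed [time, frequency], IF points given as (t, f0) bin pairs, GD points as (f, τd) bin pairs) and the slice-local removal of evaluated auto-terms are assumptions made for illustration; it reuses the xi_at_contribution and mainlobe_width helpers sketched in Sections 2.4.1 and 2.4.2.

```python
import numpy as np

def evaluate_tfd(tfd, cam, if_points, gd_points, delta_xi, sigma_g):
    """Illustrative driver computing (xi_at, xi_r, xi_ct) for one reconstructed TFD."""
    tfd = tfd.astype(float).copy()          # working copy; auto-terms get removed
    n_t, n_f = tfd.shape
    penalties, widths = [], []
    for t, f0 in if_points:                 # IF analysis where CAM(t,f) = 1
        if cam[t, f0] != 1:
            continue
        pen, peak = xi_at_contribution(tfd[t, :], f0, delta_xi, sigma_g)
        penalties.append(pen)
        if peak is not None:
            w = mainlobe_width(tfd[t, :], peak)
            widths.append(w / n_t)
            tfd[t, max(peak - w, 0):peak + w + 1] = 0.0   # remove evaluated auto-term
    for f, tau_d in gd_points:              # GD analysis where CAM(t,f) = 0
        if cam[tau_d, f] != 0:
            continue
        pen, peak = xi_at_contribution(tfd[:, f], tau_d, delta_xi, sigma_g)
        penalties.append(pen)
        if peak is not None:
            w = mainlobe_width(tfd[:, f], peak)
            widths.append(w / n_f)
            tfd[max(peak - w, 0):peak + w + 1, f] = 0.0
    xi_at = float(np.mean(penalties)) if penalties else 0.0
    xi_r = float(np.mean(widths)) if widths else 0.0
    xi_ct_val = np.count_nonzero(tfd) / (n_t * n_f)
    return xi_at, xi_r, xi_ct_val
```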

3. Experimental Results and Discussion

To evaluate the performance of the proposed measure, a series of experiments were conducted using both synthetic and real-world signals. The synthetic signals allowed for controlled assessment of the measure’s sensitivity to various signal parameters and noise levels, while the real-world signal provided insight into its applicability in practical scenarios.

3.1. Selection of the Signal Examples

The performance of the proposed measure was evaluated using three synthetic signals. The first signal, z S 1 ( t ) with N t = 128 samples, is defined in (17). The second signal, z S 2 ( t ) with N t = 256 samples, consists of four time-limited LFM components with different amplitudes embedded in AWGN with SNR = 3 dB:
$z_{S2}(t) = \Pi\!\left(\frac{t-70}{40}\right)\left[ e^{2j\pi\left(0.0037(t-50)^2\right)} + 1.2\, e^{2j\pi\left(0.2(t-50) + 0.0037(t-50)^2\right)} \right] + \Pi\!\left(\frac{t-220}{40}\right)\left[ e^{2j\pi\left(0.0037(t-200)^2\right)} + 1.2\, e^{2j\pi\left(0.2(t-200) + 0.0037(t-200)^2\right)} \right].$
Here, the rectangular function is defined as $\Pi\!\left(\frac{t - (t_0^k + t_f^k)/2}{T_k}\right)$, where $t_0^k$, $t_f^k$, and $T_k$ denote the starting time, ending time, and duration of the signal component, respectively. The introduction of time-limited components and AWGN simulates more realistic signal conditions and allows for assessing the measure's robustness to noise and interference. The third signal, $z_{S3}(t)$ with $N_t = 512$ samples, comprises two quadratic frequency-modulated (QFM) components:
$z_{S3}(t) = e^{j2\pi\left(0.3531t - 0.0016t^2 + 0.0000041t^3\right)} + e^{j2\pi\left(0.1469t + 0.0016t^2 - 0.0000041t^3\right)}.$
In addition to the synthetic signals, a real-world gravitational wave signal (this research has made use of data, software, and/or web tools obtained from the LIGO Open Science Center (https://losc.ligo.org (accessed on 17 February 2025)), a service of LIGO Laboratory and the LIGO Scientific Collaboration. LIGO is funded by the U.S. National Science Foundation), [28,41,42], denoted as z G ( t ) , was included in the evaluation. To prepare the signal for analysis, established pre-processing techniques were employed, including downsampling the original signal by a factor of 14, resulting in N t = 256 samples, corresponding to a duration of 0.25 to 0.45 s and a frequency range of [ 0 , 512 ] Hz. This downsampling procedure is consistent with prior work and serves to reduce computational complexity without significantly compromising the essential signal characteristics [28].
Figure 4 illustrates the WVDs of the considered signals z S 1 ( t ) , z S 2 ( t ) , z S 3 ( t ) , and z G ( t ) . It is evident from these figures that the WVDs are characterized by significant cross-term interference (and noise for z S 2 ( t ) ), due to the lack of any filtering. These artifacts underscore the need for effective cross-term suppression techniques, such as the CS-based approach employed in this study.

3.2. Parameter Settings and Computational Resources

The calculation of the LRE involved the following parameters: α R = 3 and Θ t = Θ f = 11 , consistent with recommendations in prior literature [31,43]. The CS-AF areas for the signals z S 1 ( t ) , z S 2 ( t ) , z S 3 ( t ) , and z G ( t ) were determined to be 25 × 11 , 17 × 31 , 41 × 23 , and 35 × 15 , respectively, using the approach detailed in [26]. The tolerance level for the reconstruction algorithm was set to ϵ = 10 3 , as in [19,26,27,28]. For meta-heuristic optimization, the MOPSO algorithm was employed with parameter settings as specified in [27].
The proposed measure's parameters, $\Delta_\xi$ and $\sigma_G$, were optimized for each synthetic signal by minimizing the $\ell_1$-norm between the reconstructed TFD, $\Upsilon_z^{\ell_1}(t,f)$, and an ideal reference TFD, $\hat{\rho}_z(t,f)$, when optimizing with MOPSO. The ideal reference TFDs were generated based on the true IF and GD trajectories of the signals. The optimized values were found to be $\Delta_\xi = N_t/8$ and $\sigma_G = 10$. The $\ell_1$-norm is calculated as
$\ell_1\text{-norm} = \left\| \Upsilon_z^{\ell_1}(t,f) - \hat{\rho}_z(t,f) \right\|_1.$
Algorithm execution times, averaged over 1000 independent runs, were measured on a PC with an AMD Ryzen 7 3700X processor (3.60 GHz base clock) and 32 GB of DDR4 RAM.

3.3. Component Alignment Maps and IF/GD Estimates

Figure 5 and Figure 6 depict the CAMs and estimated GDs and IFs for the signals, respectively. These estimates were derived using a blind source separation algorithm and an automatic IF and GD estimation method as detailed in [29]. As illustrated, signals z S 1 ( t ) and z S 3 ( t ) are well suited for localization through time slices, while signal z S 2 ( t ) is better suited for localization through frequency slices. The gravitational wave signal, z G ( t ) , exhibits characteristics that necessitate both time and frequency localizations, highlighting the adaptability of the proposed measure.

3.4. Results: Synthetic and Real-World Signals

The reconstructed TFDs for the experimental tests were generated using the TwIST algorithm with distinct λ values to represent different reconstruction outcomes. Case 1 utilized a high λ parameter, resulting in significant auto-term loss. Case 2 employed λ values that led to unsuccessful reconstructions dominated by interference. Case 3 TFDs were generated with λ values between those of Cases 1 and 2, achieving a balance of improved auto-term preservation and reduced interference. Figure 7 illustrates these reconstructed TFDs for all considered signals across Cases 1–3, while Table 2 presents a comparative analysis of the proposed measure against existing measures in evaluating the reconstructed TFD performance.
The results, based on the $\ell_1$-norm, indicate that the Case 3 reconstructed TFDs consistently provide a closer approximation to the ideal TFDs than those in Cases 1 and 2 across all signal examples. In line with prior research [27], the global measures E C M and R tend to favor overly sparse TFDs, incorrectly identifying Case 1 as optimal due to their inherent disregard for crucial auto-term information. For signals z S 1 ( t ) and z S 2 ( t ) , the LRE-based metric effectively penalizes the TFDs in Cases 1 and 2, assigning them higher error values and correctly identifying the reconstructed TFD in Case 3 as the best performing. However, this trend does not hold for signal z S 3 ( t ) , where MSE f erroneously identifies the overly sparse reconstructed TFD in Case 1 as optimal. It is important to note that MSE t , f values were not computed for the WVD due to two primary reasons: first, LRE is typically applied to QTFDs where interference is mitigated; and second, the appropriate reference TFD for the WVD would also be a WVD, which is inherently incomparable to the reference TFDs used for the reconstructed TFDs.
In contrast, the proposed measure components offer more nuanced insights into TFD performance. Across all signal examples, ξ a t = 0 indicates that the WVD and the reconstructed TFDs in Case 2 retain all auto-terms, while the reconstructed TFDs in Case 3 exhibit superior performance compared to those in Case 1. Furthermore, ξ r effectively identifies the TFDs with the highest auto-term resolution, with the reconstructed TFDs in Case 3 surpassing those in Case 2, which suffer from poor resolution. As for ξ c t , it accurately identifies the overly sparse TFDs as free from both interference and noise artifacts, a characteristic not shared by the WVD. It also confirms that the reconstructed TFDs in Case 3 demonstrate a lower degree of unsuppressed interference compared to those in Case 2. Consequently, when the proposed measure components are equally averaged, the reconstructed TFDs in Case 3 are consistently identified as the best performing for all three signal examples.
These observations concerning the measures’ performance in evaluating reconstructed TFDs extend to the real-world gravitational signal z G ( t ) . Figure 8 illustrates the Case 1–3 reconstructed TFDs for this signal, while Table 3 presents the corresponding performance results. Consistent with the trends observed for the synthetic signals, the E C M and R measures favor overly sparse TFDs, and the measure MSE f incorrectly identifies the corrupted reconstructed TFD in Case 2 as the best performing. The proposed measure component ξ a t highlights the WVD as having full retention of auto-terms. However, in contrast to the synthetic signals, ξ a t reveals that the reconstructed TFDs in Case 2 are missing several auto-term samples. This discrepancy arises from the limitations of the rectangular CS-AF area, which fails to adapt its vertices appropriately for higher-order FM components present in signal z G ( t ) [28]. The ξ r component correctly identifies the overly sparse reconstructed TFDs in Case 1 as exhibiting the best auto-term resolution and minimal interference. In summary, the averaged proposed measure components accurately demonstrate that the reconstructed TFDs in Case 3 offer the best overall performance among the tested cases.

3.5. Meta-Heuristic Optimization Comparison

Figure 9 presents a visual comparison of optimized reconstructed TFDs obtained using MOPSO with two distinct sets of objective functions: the proposed ( ξ a t , ξ r , ξ c t ) measures and the existing ( MSE t , f , E C M ) measures. Table 4 and Table 5 provide a corresponding numerical comparison.
The results indicate that optimization with the proposed ( ξ a t , ξ r , ξ c t ) measures consistently yields reconstructed TFDs with well-preserved auto-terms, high resolution, and suppressed interference. Furthermore, the proposed measures led to a consistent improvement in optimized reconstructed TFDs for all considered signals compared to the results obtained using ( MSE t , f , E C M ) . Specifically, the $\ell_1$-norm, a measure of the deviation from an ideal TFD, was improved by 3.68% to 65.10%, demonstrating the superior ability of ( ξ a t , ξ r , ξ c t ) to generate reconstructed TFDs that closely approximate the ideal.
It is important to note that for all considered signals, the proposed ( ξ a t , ξ r , ξ c t ) measures resulted in better preservation of auto-terms, as evidenced by consistently lower ξ a t values. While the ( MSE t , f , E C M ) objectives sometimes yielded slightly higher auto-term resolution (higher ξ r ), this improvement often came at the cost of losing essential parts of the auto-terms (see Figure 9a,c). For the noisy signals z S 2 ( t ) and z G ( t ) , the ( MSE t , f , E C M ) objectives resulted in lower λ values, indicative of less effective interference suppression (see Figure 9b,d), as confirmed by generally worse ξ c t values, except in the case of z S 3 ( t ) .
Finally, the time required to achieve optimized reconstructed TFDs using the proposed ( ξ a t , ξ r , ξ c t ) objectives was significantly reduced, ranging from 44.17% to 60.63%, compared to the time required for the existing ( MSE t , f , E C M ) objectives. This reduction in computational cost highlights the efficiency of the proposed measure for meta-heuristic optimization.

3.6. Noise Sensitivity Analysis

To further evaluate the robustness of meta-heuristic optimization using the proposed objective functions, synthetic signal examples were embedded in AWGN at SNR levels ranging from 0 dB to 9 dB. The results, summarized in Table 6, demonstrate a consistent improvement when using the proposed ( ξ a t , ξ r , ξ c t ) measures over the existing ( MSE t , f , E C M ) measures across all tested SNR levels.
Specifically, the $\ell_1$-norm, a measure of the deviation from the ideal TFD, was improved by 16.26% at 0 dB SNR, 61.06% at 3 dB SNR, 57.67% at 6 dB SNR, and 51.79% at 9 dB SNR. This indicates that the proposed objective functions lead to more accurate TFD reconstructions, even in the presence of significant noise.
Furthermore, the results suggest that the performance degradation due to decreasing AWGN SNR levels is less pronounced when optimizing with the proposed ( ξ a t , ξ r , ξ c t ) objectives compared to the ( MSE t , f , E C M ) objectives. This highlights the superior noise resilience of the proposed approach.

3.7. Interpretation of the Results

As shown in [27], global measures E C M and R tend to favor overly sparse TFDs (Case 1) for all signal examples, limiting their applicability when auto-term preservation is a priority. While the LRE-based metric successfully penalizes the reconstructed TFDs in Cases 1 and 2 with higher error values for signals z S 1 ( t ) and z S 2 ( t ) , its performance is inconsistent for signals z S 3 ( t ) and z G ( t ) . This stems from the LRE limitation when analyzing intersecting components, as in signal z S 3 ( t ) , where a drop in the estimated local number of components leads to favoring overly sparse TFDs. In the case of signal z G ( t ) , Figure 10 illustrates that the hyperbolic behavior of the signal’s auto-term makes it unsuitable for LRE localization. Specifically, both LRE estimations exhibit an artificial increase when the component changes its slope with respect to the time or frequency axis, causing MSE f to favor the corrupted reconstructed TFD in Case 2.
Such misclassifications can confuse end-users when selecting the best-performing reconstructed TFD. Furthermore, even when the LRE-based measure provides a correct assessment, it lacks the interpretability needed to guide the adjustment of the λ parameter. Instead, users must analyze the relationship between the M t ρ z ( t , f ) ( t ) and M t Υ z 1 ( t , f ) ( t ) curves to infer the presence of over-sparsity or interference. However, this analysis can be ambiguous, especially when the number of local components increases post-reconstruction, resulting in a higher M t Υ z 1 ( t , f ) ( t ) curve compared to M t ρ z ( t , f ) ( t ) . This discrepancy can stem from low-resolution auto-terms or the reconstruction of cross-terms and noise, yet the precise cause remains elusive.
Conversely, the proposed measure components provide clearer insights than the LRE metric. For instance, a low ξ a t combined with a high ξ r suggests well-preserved auto-terms with reduced resolution. A significantly high ξ c t indicates interference in the reconstructed TFDs, suggesting that λ is set too low (as in Case 2 with λ = 0.01 ). In contrast, a high ξ a t with very low ξ r and ξ c t = 0 points to an overly sparse reconstructed TFD with inconsistent auto-terms, implying that λ is set too high (as in Case 1). In both scenarios, the proposed measures offer guidance on whether λ should be increased or decreased, providing a distinct advantage over the existing LRE-based metric used in [27]. The capacity to decouple ξ a t , ξ r , and ξ c t is a particular advantage, since it enables an unambiguous interpretation of the result.
Quantitatively, a ξ r of 0.0056 for Case 3’s reconstructed TFD for signal z S 3 ( t ) signifies a very high resolution of auto-terms, averaging just two samples per estimated IF. Also, a ξ c t of 0.4149 for Case 2’s reconstructed TFD for signal z S 1 ( t ) signifies that 41.49% of the TFD (of size 128 × 128 ) is corrupted by interference. These quantitative indicators directly enhance the interpretability of the proposed measure.
In addition to the increased interpretability, the findings demonstrate that the proposed measure components, when averaged equally, successfully highlight the best-performing reconstructed TFD. These findings and conclusions drawn from the synthetic signals also extend to the real-world gravitational signal. The various presented signal examples with multiple LFM and QFM components, with varying amplitudes and intersecting cases, demonstrate the robustness and suitability of the proposed measure for a wide range of signals.
When implementing the proposed measure components in meta-heuristic optimization, the results shown in Table 4 and Table 5 illustrate that reconstructed TFDs optimized using the proposed measure’s components exhibit well-preserved auto-terms and suppressed interference. Even though optimizing with the LRE-based measure can lead to reconstructed TFDs with preserved auto-terms and less interference, the usage of the proposed measures further improves optimization performance. To complement these results, Table 6 indicates a meta-heuristic optimization improvement in noisy environments, where reconstructed TFDs optimized using the proposed measure’s components proved to be better than those obtained using the LRE-based measure across all considered SNR levels. This highlights the proposed measure over the existing LRE-based measure in noisy environments, where the limitations of the LRE are pronounced.
Even though the CS-based TFD reconstruction method is typically considered for offline analysis, the results illustrate that the proposed measure is beneficial for optimizing TFDs, as the optimization time is significantly reduced by using the proposed measures as objective functions. The reasons for this are twofold. Firstly, the proposed measures are not as conflicting as E C M and MSE t , f . Secondly, MSE t , f can yield the same increased value for both overly sparse TFDs (high λ ) and TFDs with interference (low λ ), which confuses and slows the convergence of meta-heuristics. While memory efficiency for storing information is not always a primary concern in this field, it is important to consider the memory requirements associated with different approaches. Storing λ values alongside the corresponding ( ξ a t , ξ r , ξ c t ) measures provides a comprehensive understanding of the impact of λ on TFD performance. Analyzing the existing ( MSE t , f , E C M ) measures in a similar way would necessitate substantially greater memory resources, as it would involve storing not only ( λ , MSE t , f , E C M ) but also the M t , f vectors from both the original and reconstructed TFDs, and in many cases, the full reconstructed TFDs. The memory footprint would increase significantly, potentially reaching hundreds of megabytes.
The inclusion of CAM in the proposed measure enables the automatic selection of IF and GD estimates, circumventing the need for manual selection of time and frequency localization. Therefore, users are not required to manually identify and separate the components best suited for localization via time or frequency slices, which increases the overall convenience of the method. However, it should be noted that, by its very construction, the inclusion of CAM and a mandatory matrix with IF and/or GD estimates indicates that the proposed measure requires more inputs than the LRE.
The selection of the LRE method also warrants discussion. This study uses the original LRE approach, which underpins the LRE-based measures in [27,28]. This original approach has proven robust and applicable to a diverse set of signals. An iterative LRE approach proposed in [44] may be used in special cases of signals comprising multiple components with significant differences in amplitudes. However, the iterative LRE introduces several other limitations, such as incorrect component removal for intersecting components and significantly higher sensitivity to noise, making it applicable only in special cases. A study in [45] circumvented some limitations of the original and iterative LRE approaches by using convolutional neural networks. This involves neural networks trained on a diverse dataset of signals to predict the local number of components from an input QTFD. This study did not utilize that approach for two reasons. Firstly, the network is trained on QTFDs, meaning that it can predict local numbers of components for a starting TFD, but not for a reconstructed TFD. This would require retraining the network on this specific task. Secondly, the estimates provided by the network are very volatile, and further research on smoothing filters is required before they can be used reliably.
The optimization of the σ G and Δ ξ parameters relies on numerical simulations of sparse TFD reconstructions. Furthermore, for the overall evaluation, the measure's components were averaged with equal weights, meaning that all three components are treated as equally important. Users are encouraged to re-tune the parameters σ G and Δ ξ , or to emphasize individual components, if they find this beneficial for specific TFDs or advanced methods.
The enhanced assessment of reconstructed TFDs using the proposed measure has significant potential in this field. The improvement is demonstrated through the interpretability of the proposed measure when compared to the existing methods. This opens up many research directions in machine-related applications.

4. Conclusions

CS-based methods have emerged as powerful tools for generating high-performing TFDs. However, a critical challenge remains: the potential loss of auto-terms during reconstruction, which hinders accurate TFD assessment and the selection of optimal reconstruction parameters. The existing measure, which is based on the LRE, addresses some of these limitations, but it also exhibits shortcomings.
This study addresses these challenges by presenting a novel performance measure for evaluating TFD reconstruction quality, with a specific focus on robust auto-term preservation. By incorporating IF and GD estimates directly within the 2D TF plane, the proposed measure offers a comprehensive and nuanced assessment of reconstructed TFDs. This assessment is based on three key, decoupled components: auto-term preservation ( ξ a t ), which quantifies the retention of signal components; auto-term resolution ( ξ r ), which measures the sharpness and clarity of the reconstructed components; and cross-term suppression ( ξ c t ), which evaluates the effectiveness in reducing interference and noise artifacts.
Experimental results, obtained from a diverse set of synthetic signals (encompassing linear and non-linear FM components, intersecting components, and components with varying amplitudes in noise) and real-world gravitational wave data, demonstrate the effectiveness of the proposed metric in detecting common reconstruction issues. Notably, unlike previous LRE-based metrics, the proposed measure reliably distinguishes between overly sparse TFDs and those contaminated by interference, enabling users to gain valuable insights for fine-tuning reconstruction parameters and improving TFD quality.
This study further demonstrates that, when integrated into an automated meta-heuristic optimization framework, the proposed metric significantly enhances the quality of TFDs, leading to improved auto-term preservation and reduced interference, even under challenging noisy conditions. Moreover, the optimized reconstructed TFDs were obtained significantly faster using the proposed measure's components as objective functions, highlighting its computational efficiency. Also, the effect of the regularization parameter can be recorded more efficiently by relying only on the proposed measure's values rather than on memory-inefficient full TFDs, highlighting its practical applicability.
These findings underscore the significant value of the proposed metric not only for individual users seeking to improve TFD reconstruction but also for machine-driven applications that require an interpretable, reliable, and computationally efficient measure of TFD quality. Given its effectiveness and efficiency, the proposed measure can serve as a valuable feature extraction function for future research directions in deep learning applications.

Funding

This research was funded by University of Rijeka, Croatia; grant number: uniri-mladi-tehnic-23-2 (Analysis of non-stationary signals using time-frequency algorithms and deep learning).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AF: Ambiguity function
AWGN: Additive white Gaussian noise
CS: Compressive sensing
CAM: Component alignment map
ECM: Energy concentration measure
FM: Frequency modulation
FSM: Fuzzy satisfying method
FT: Fourier transform
GD: Group delay
IF: Instantaneous frequency
LFM: Linear frequency modulation
LIGO: Laser Interferometer Gravitational-Wave Observatory
LRE: Localized Rényi entropy
MOPSO: Multi-objective particle swarm optimization
MSE: Mean squared error
QFM: Quadratic frequency modulation
QTFD: Quadratic time–frequency distribution
SFM: Sinusoidal frequency modulation
SNR: Signal-to-noise ratio
TF: Time–frequency
TFD: Time–frequency distribution
TwIST: Two-step iterative shrinkage/thresholding
WVD: Wigner–Ville distribution

References

  1. Liu, N.; Gao, J.; Zhang, B.; Li, F.; Wang, Q. Time–Frequency Analysis of Seismic Data Using a Three Parameters S Transform. IEEE Geosci. Remote Sens. Lett. 2018, 15, 142–146. [Google Scholar] [CrossRef]
  2. Yi, X.; Yang, S.; Dong, R.; Ren, Q.; Shuai, T.; Li, G.; Gong, W. A composite clock for robust time-frequency signal generation system onboard a navigation satellite. GPS Solut. 2023, 28, 6. [Google Scholar] [CrossRef]
  3. Wu, D.; Yang, Z.; Ruan, Y.; Chen, X. Blind single-channel lamb wave mode separation using independent component analysis on time-frequency signal representation. Appl. Acoust. 2024, 216, 109810. [Google Scholar] [CrossRef]
  4. Song, Y.; Hu, Y.; Chen, X.; Chen, H.; Zhang, P. Time-Reassigning Transform for the Time–Frequency Analysis of Seismic Data. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4505811. [Google Scholar] [CrossRef]
  5. Song, K.; Nie, J.; Li, Y.; Li, J.; Song, P.; Ercisli, S. Regional soil water content monitoring based on time-frequency spectrogram of low-frequency swept acoustic signal. Geoderma 2024, 441, 116765. [Google Scholar] [CrossRef]
  6. Lekkas, G.; Vrochidou, E.; Papakostas, G.A. Time–Frequency Transformations for Enhanced Biomedical Signal Classification with Convolutional Neural Networks. BioMedInformatics 2025, 5, 7. [Google Scholar] [CrossRef]
  7. Adamczyk, K.; Polak, A.G. Online Algorithm for Deriving Heart Rate Variability Components and Their Time–Frequency Analysis. Appl. Sci. 2025, 15, 1210. [Google Scholar] [CrossRef]
  8. Zhang, H.; Wu, Q.; Tang, W.; Yang, J. Acoustic Signal-Based Defect Identification for Directed Energy Deposition-Arc Using Wavelet Time–Frequency Diagrams. Sensors 2024, 24, 4397. [Google Scholar] [CrossRef]
  9. Łuczak, D. Data-Driven Machine Fault Diagnosis of Multisensor Vibration Data Using Synchrosqueezed Transform and Time-Frequency Image Recognition with Convolutional Neural Network. Electronics 2024, 13, 2411. [Google Scholar] [CrossRef]
  10. Wei, M.; Sun, X.; Zong, J. Time–Frequency Domain Seismic Signal Denoising Based on Generative Adversarial Networks. Appl. Sci. 2024, 14, 4496. [Google Scholar] [CrossRef]
  11. Liu, N.; Gao, J.; Jiang, X.; Zhang, Z.; Wang, Q. Seismic Time–Frequency Analysis via STFT-Based Concentration of Frequency and Time. IEEE Geosci. Remote Sens. Lett. 2017, 14, 127–131. [Google Scholar] [CrossRef]
  12. Daubechies, I.; Lu, J.; Wu, H.T. Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool. Appl. Comput. Harmon. Anal. 2011, 30, 243–261. [Google Scholar] [CrossRef]
  13. Boashash, B. Time-Frequency Signal Analysis and Processing, A Comprehensive Reference, 2nd ed.; EURASIP and Academic Press Series in Signal and Image Processing; Elsevier: London, UK, 2016. [Google Scholar]
  14. Stankovic, L.; Dakovic, M.; Thayaparan, T. Time-Frequency Signal Analysis with Applications; Artech House Publishers: Boston, MA, USA, 2013. [Google Scholar]
  15. Yu, G.; Yu, M.; Xu, C. Synchroextracting Transform. IEEE Trans. Ind. Electron. 2017, 64, 8042–8054. [Google Scholar] [CrossRef]
  16. Gholami, A. Sparse time–frequency decomposition and some applications. IEEE Trans. Geosci. Remote Sens. 2013, 51, 3598–3604. [Google Scholar] [CrossRef]
  17. Flandrin, P.; Borgnat, P. Time-frequency energy distributions meet compressed sensing. IEEE Trans. Signal Process. 2010, 58, 2974–2982. [Google Scholar] [CrossRef]
  18. Stanković, L.; Orović, I.; Stanković, S.; Amin, M. Compressive sensing based separation of nonstationary and stationary signals overlapping in time-frequency. IEEE Trans. Signal Process. 2013, 61, 4562–4572. [Google Scholar] [CrossRef]
  19. Volarić, I. Signal Concentration Enhancement in the Time-Frequency Domain Using Adaptive Compressive Sensing. Ph.D. Thesis, Faculty of Engineering, University of Rijeka, Rijeka, Croatia, 2017. [Google Scholar]
  20. Shareef, S.M.; Rao, M.V.G. Separation of overlapping non-stationary signals and compressive sensing-based reconstruction using instantaneous frequency estimation. Digit. Signal Process. 2024, 155, 104737. [Google Scholar] [CrossRef]
  21. Sejdić, E.; Orović, I.; Stanković, S. Compressive sensing meets time-frequency: An overview of recent advances in time-frequency processing of sparse signals. Digit. Signal Process. 2018, 77, 22–35. [Google Scholar] [CrossRef]
  22. Wang, S.; Cheng, C.; Zhou, J.; Qin, F.; Feng, Y.; Ding, B.; Zhao, Z.; Chen, X. Reassignment-enable reweighted sparse time-frequency analysis for sparsity-assisted aeroengine rub-impact fault diagnosis. Mech. Syst. Signal Process. 2023, 183, 109602. [Google Scholar] [CrossRef]
  23. Tu, Q.; Wu, K.; Cheng, E.; Yuan, F. A noise robust sparse time-frequency representation method for measuring underwater gas leakage rate. J. Acoust. Soc. Am. 2024, 155, 2503–2516. [Google Scholar] [CrossRef]
  24. Yang, Y.; Gao, J.; Wang, Z.; Li, Z. Seismic Absorption Qualitative Indicator via Sparse Group-Lasso-Based Time–Frequency Representation. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1680–1684. [Google Scholar] [CrossRef]
  25. Xing, L.; Wen, Y.; Xiao, S.; Zhang, J.; Zhang, D. Compressed-sensing-based time–frequency representation for disturbance characterization of Maglev on-board distribution systems. Electronics 2020, 9, 1909. [Google Scholar] [CrossRef]
  26. Volaric, I.; Sucic, V.; Stankovic, S. A Data Driven Compressive Sensing Approach for Time-Frequency Signal Enhancement. Signal Process. 2017, 141, 229–239. [Google Scholar] [CrossRef]
  27. Jurdana, V.; Volaric, I.; Sucic, V. Sparse time-frequency distribution reconstruction based on the 2D Rényi entropy shrinkage algorithm. Digit. Signal Process. 2021, 118, 103225. [Google Scholar] [CrossRef]
  28. Jurdana, V.; Lopac, N.; Vrankic, M. Sparse Time-Frequency Distribution Reconstruction Using the Adaptive Compressed Sensed Area Optimized with the Multi-Objective Approach. Sensors 2023, 23, 4148. [Google Scholar] [CrossRef]
  29. Jurdana, V.; Vrankic, M.; Lopac, N.; Jadav, G.M. Method for automatic estimation of instantaneous frequency and group delay in time-frequency distributions with application in EEG seizure signals analysis. Sensors 2023, 23, 4680. [Google Scholar] [CrossRef]
  30. Sucic, V.; Saulig, N.; Boashash, B. Estimating the number of components of a multicomponent nonstationary signal using the short-term time-frequency Rényi entropy. EURASIP J. Adv. Signal Process. 2011, 2011, 125. [Google Scholar] [CrossRef]
  31. Sucic, V.; Saulig, N.; Boashash, B. Analysis of local time-frequency entropy features for nonstationary signal components time supports detection. Digit. Signal Process. 2014, 34, 56–66. [Google Scholar] [CrossRef]
  32. Bioucas-Dias, J.M.; Figueiredo, M.A.T. A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans. Image Process. 2007, 16, 2992–3004. [Google Scholar] [CrossRef]
  33. Zhang, Z.; Xu, Y.; Yang, J.; Li, X.; Zhang, D. A survey of sparse representation: Algorithms and applications. IEEE Access 2015, 3, 490–530. [Google Scholar] [CrossRef]
  34. Baraniuk, R.G.; Flandrin, P.; Janssen, A.J.E.M.; Michel, O.J.J. Measuring time-frequency information content using the Rényi entropies. IEEE Trans. Inf. Theory 2001, 47, 1391–1409. [Google Scholar] [CrossRef]
  35. Aviyente, S.; Williams, W.J. Minimum entropy time-frequency distributions. IEEE Signal Process. Lett. 2005, 12, 37–40. [Google Scholar] [CrossRef]
  36. Stanković, L. A measure of some time–frequency distributions concentration. Signal Process. 2001, 81, 621–631. [Google Scholar] [CrossRef]
  37. Durillo, J.J.; García-Nieto, J.; Nebro, A.J.; Coello, C.A.C.; Luna, F.; Alba, E. Multi-objective particle swarm optimizers: An experimental comparison. In Proceedings of the Evolutionary Multi-Criterion Optimization, Berlin, Heidelberg, 7–10 April 2009; pp. 495–509. [Google Scholar] [CrossRef]
  38. Garg, H. A Hybrid PSO-GA Algorithm for Constrained Optimization Problems. Appl. Math. Comput. 2016, 274, 292–305. [Google Scholar] [CrossRef]
  39. Soroudi, A.; Ehsan, M.; Zareipour, H. A practical eco-environmental distribution network planning model including fuel cells and non-renewable distributed energy resources. Renew. Energy 2011, 36, 179–188. [Google Scholar] [CrossRef]
  40. Boashash, B.; Sucic, V. Resolution measure criteria for the objective assessment of the performance of quadratic time-frequency distributions. IEEE Trans. Signal Process. 2003, 51, 1253–1263. [Google Scholar] [CrossRef]
  41. Lopac, N.; Hržić, F.; Vuksanović, I.P.; Lerga, J. Detection of non-stationary GW signals in high noise from Cohen’s class of time-frequency representations using deep learning. IEEE Access 2021, 10, 2408–2428. [Google Scholar] [CrossRef]
  42. Lopac, N. Detection of Gravitational-Wave Signals from Time-Frequency Distributions Using Deep Learning. Ph.D. Thesis, University of Rijeka, Faculty of Engineering, Rijeka, Croatia, 2022. [Google Scholar]
  43. Saulig, N.; Lerga, J.; Miličić, S.; Tomasović, Z. Block-adaptive Rényi entropy-based denoising for non-stationary signals. Sensors 2022, 22, 8251. [Google Scholar] [CrossRef]
  44. Saulig, N.; Pustelnik, N.; Borgnat, P.; Flandrin, P.; Sucic, V. Instantaneous counting of components in nonstationary signals. In Proceedings of the 21st European Signal Processing Conference (EUSIPCO 2013), Marrakech, Morocco, 9–13 September 2013; pp. 1–5. [Google Scholar]
  45. Jurdana, V.; Baressi Šegota, S. Convolutional Neural Networks for Local Component Number Estimation from Time–Frequency Distributions of Multicomponent Nonstationary Signals. Mathematics 2024, 12, 1661. [Google Scholar] [CrossRef]
Figure 1. Reconstructed TFDs for the signal z_S1(t) obtained using the TwIST algorithm with α_T = 0.91, β_T = 0.8 and (a) λ = 4; (b) λ = 12; (c) λ = 0.5; (d) λ = 0.01.
Figure 2. Local number of components estimated from the starting TFDs (blue solid line) and reconstructed TFDs (dashed red line) for the signal z_S1(t) obtained using the TwIST algorithm with α_T = 0.91, β_T = 0.8 and (a) λ = 4; (b) λ = 12; (c) λ = 0.5; (d) λ = 0.01.
Figure 3. Illustrative example of the proposed measure applied to time slices of reconstructed TFDs of signal z_S1(t): (a) time slice from an overly sparse TFD, illustrating a missing auto-term; (b) time slice from a TFD with low regularization, showing low-resolution auto-terms and reconstructed interference. The green lines represent signal auto-terms, while the blue lines represent regions classified as interference.
Figure 4. WVDs of the considered synthetic and real-world gravitational signals: (a) z_S1(t); (b) z_S2(t); (c) z_S3(t); (d) z_G(t).
Figure 5. Obtained CAMs of the considered synthetic and real-world gravitational signals: (a) z_S1(t); (b) z_S2(t); (c) z_S3(t); (d) z_G(t). TFD regions in black indicate that localization in time slices is preferred (these regions contain IFs), while regions in white indicate that localization in frequency slices is preferred (these regions contain GDs).
Figure 6. Estimated IFs and GDs of the considered synthetic and real-world gravitational signals: (a) z_S1(t); (b) z_S2(t); (c) z_S3(t); (d) z_G(t).
Figure 7. Reconstructed TFDs of the synthetic signals obtained using the TwIST algorithm with α_T = 0.91, β_T = 0.8, and: (a) z_S2(t), λ = 15 (Case 1); (b) z_S2(t), λ = 0.01 (Case 2); (c) z_S2(t), λ = 2.4 (Case 3); (d) z_S3(t), λ = 18 (Case 1); (e) z_S3(t), λ = 0.01 (Case 2); (f) z_S3(t), λ = 1 (Case 3).
Figure 8. Reconstructed TFDs of the real-world gravitational signal z_G(t) obtained using the TwIST algorithm with α_T = 0.91, β_T = 0.8, and (a) λ = 2 (Case 1); (b) λ = 0.01 (Case 2); (c) λ = 1 (Case 3).
Figure 9. Optimized reconstructed TFDs of the considered synthetic and real-world gravitational signals obtained using the TwIST algorithm and MOPSO with the following objective functions: (a) z_S1(t), α_T = 0.92, β_T = 0.85, λ = 9, (MSE_{t,f}, ECM); (b) z_S2(t), α_T = 0.91, β_T = 0.89, λ = 1.5, (MSE_{t,f}, ECM); (c) z_S3(t), α_T = 0.93, β_T = 0.85, λ = 6.8, (MSE_{t,f}, ECM); (d) z_G(t), α_T = 0.91, β_T = 0.84, λ = 0.6, (MSE_{t,f}, ECM); (e) z_S1(t), α_T = 0.93, β_T = 0.86, λ = 8, (ξ_at, ξ_r, ξ_ct); (f) z_S2(t), α_T = 0.92, β_T = 0.89, λ = 3.5, (ξ_at, ξ_r, ξ_ct); (g) z_S3(t), α_T = 0.92, β_T = 0.85, λ = 6, (ξ_at, ξ_r, ξ_ct); (h) z_G(t), α_T = 0.90, β_T = 0.82, λ = 1.2, (ξ_at, ξ_r, ξ_ct).
Figure 10. Local number of components estimated from the starting TFDs for the signal z_G(t): (a) M_t ρ_z(t,f) (t); (b) M_f ρ_z(t,f) (f).
Table 1. Evaluation of reconstructed TFDs obtained using the TwIST algorithm, assessed with the LRE-based metric, for the considered synthetic signal z_S1(t).
         λ = 4     λ = 12    λ = 0.5   λ = 0.01
MSE_t    0.0055    0.2435    0.0513    0.2795
MSE_f    0.0464    0.2042    0.0728    0.0851
Table 2. Performance comparison of reconstructed TFDs obtained using the TwIST algorithm for the synthetic signals z_S1(t), z_S2(t) and z_S3(t). Values in bold indicate superior reconstructed TFD for each measure.
                        ℓ1 norm   ECM      R         MSE_t    MSE_f    ξ_at     ξ_r      ξ_ct     (ξ_at + ξ_r + ξ_ct)/3
z_S1(t)
WVD (in Figure 4a)      0.0664    2.8844   9.3373    NaN      NaN      0        0.0377   0.2887   0.1088
Case 1 (in Figure 1b)   0.0173    0.0332   8.6945    0.2435   0.2042   0.2062   0.0126   0        0.0729
Case 2 (in Figure 1d)   0.0303    0.5069   11.5049   0.2795   0.0851   0        0.0403   0.4194   0.1532
Case 3 (in Figure 1a)   0.0156    0.0733   9.7451    0.0055   0.0464   0.0272   0.0209   0.0045   0.0175
z_S2(t)
WVD (in Figure 4b)      0.0478    5.4959   11.2195   NaN      NaN      0        0.0068   0.2999   0.1022
Case 1 (in Figure 7a)   0.0105    0.0087   8.7107    0.1441   0.2627   0.6974   0.0081   0        0.2352
Case 2 (in Figure 7b)   0.0214    0.2256   12.2857   0.0769   0.0471   0        0.0145   0.1460   0.0535
Case 3 (in Figure 7c)   0.0206    0.0323   10.4706   0.0368   0.0412   0.0835   0.0083   0.0070   0.0329
z_S3(t)
WVD (in Figure 4c)      0.0277    4.6185   12.4047   NaN      NaN      0        0.0110   0.1803   0.0638
Case 1 (in Figure 7d)   0.0091    0.0227   11.9602   0.1525   0.0325   0.2616   0.0094   0        0.0903
Case 2 (in Figure 7e)   0.0119    0.1103   13.5495   0.1922   0.0795   0        0.0159   0.0436   0.0198
Case 3 (in Figure 7f)   0.0083    0.0309   12.1215   0.1471   0.0331   0.0151   0.0056   0.0147   0.0118
Table 3. Performance comparison of reconstructed TFDs obtained using the TwIST algorithm for the real-world gravitational signal z_G(t). Values in bold indicate superior reconstructed TFD for each measure.
                        ECM      R         MSE_t    MSE_f    ξ_at     ξ_r      ξ_ct     (ξ_at + ξ_r + ξ_ct)/3
z_G(t)
WVD (in Figure 4d)      2.4022   9.9063    NaN      NaN      0        0.0128   0.2645   0.0925
Case 1 (in Figure 8a)   0.0098   8.2625    0.0436   0.0362   0.3904   0.0131   0        0.1347
Case 2 (in Figure 8b)   0.0821   10.1755   0.0299   0.0176   0.0017   0.0147   0.0653   0.0306
Case 3 (in Figure 8c)   0.0196   9.0683    0.0160   0.0186   0.0477   0.0164   0.0028   0.0223
Table 4. Performance comparison of reconstructed TFDs obtained using the TwIST algorithm and MOPSO for the synthetic signals z_S1(t), z_S2(t) and z_S3(t). Values in bold indicate superior reconstructed TFD for each measure.
                           ℓ1 norm   ECM      R         ξ_at     ξ_r      ξ_ct     (ξ_at + ξ_r + ξ_ct)/3   Optimization Time [s]
z_S1(t)
LRE-based (in Figure 9a)   0.0136    0.0428   9.0454    0.1089   0.0147   0        0.0412                  230.75
Proposed (in Figure 9e)    0.0131    0.0482   9.1859    0.0361   0.0158   0        0.0173                  128.83
z_S2(t)
LRE-based (in Figure 9b)   0.0225    0.0428   10.6976   0.0452   0.0068   0.0175   0.0232                  412.02
Proposed (in Figure 9f)    0.0089    0.0541   11.2254   0.0365   0.0132   0.0064   0.0187                  207.90
z_S3(t)
LRE-based (in Figure 9c)   0.0088    0.0156   11.4335   0.1268   0.0044   0        0.0437                  2140.05
Proposed (in Figure 9g)    0.0071    0.0178   11.5551   0.0234   0.0045   0.0045   0.0108                  855.88
Table 5. Performance comparison of reconstructed TFDs obtained using the TwIST algorithm and MOPSO for the real-world gravitational signal z_G(t). Values in bold indicate superior reconstructed TFD for each measure.
                           ECM      R        ξ_at     ξ_r      ξ_ct     (ξ_at + ξ_r + ξ_ct)/3   Optimization Time [s]
z_G(t)
LRE-based (in Figure 9d)   0.0168   8.7811   0.0941   0.0109   0.0053   0.0368                  251.11
Proposed (in Figure 9h)    0.0151   8.7792   0.0501   0.0145   0.0012   0.0219                  98.87
Table 6. Performance comparison of reconstructed TFDs obtained using the TwIST algorithm and MOPSO for the synthetic signals z_S1(t), z_S2(t) and z_S3(t) embedded in AWGN with SNR in the range [0, 9] dB. Values in bold indicate the superior reconstructed TFD based on the ℓ1-norm metric.
SNR [dB]     0        3        6        9
z_S1(t)
LRE-based    0.0477   0.0211   0.0167   0.0141
Proposed     0.0398   0.0169   0.0139   0.0133
z_S2(t)
LRE-based    0.0689   0.0226   0.0189   0.0112
Proposed     0.0577   0.0088   0.0080   0.0054
z_S3(t)
LRE-based    0.0587   0.0256   0.0141   0.0101
Proposed     0.0512   0.0178   0.0122   0.0089
