Proceeding Paper

A Rapid, Fully Automated Denoising Method for Time Series Utilizing Wavelet Theory †

Centre for Simulation, Analytics and Modelling, Department of Management—Innovation, Technology and Entrepreneurship, University of Exeter Business School, Exeter EX4 4PU, UK
Presented at the 11th International Conference on Time Series and Forecasting, Gran Canaria, Spain, 16–18 July 2025.
Eng. Proc. 2025, 101(1), 18; https://doi.org/10.3390/engproc2025101018
Published: 25 August 2025

Abstract

A wavelet-based noise reduction method for time series is proposed. Traditional denoising techniques often adopt a “trial-and-error” approach, which can prove inefficient and may result in suboptimal filtering outcomes. In contrast, our method systematically selects the most suitable wavelet function from a predefined set, along with its associated tuning parameters, to ensure an optimal denoising process. The denoised series produced by this approach maximizes a suitable objective function based on information-theoretic divergence. This is particularly significant for economic time series, which are frequently characterized by non-linear dynamics and erratic patterns, often influenced by measurement errors and various external disturbances. The method’s performance is evaluated using time series data derived from the Business Confidence Climate Survey, which is freely and publicly available from the Italian National Institute of Statistics. The results of our empirical analysis demonstrate the effectiveness of the proposed method in delivering robust filtering capabilities, adeptly distinguishing informative signals from noise, and successfully eliminating uninformative components from the time series. This capability not only enhances the clarity of the data but also significantly improves the overall reliability of subsequent analyses, such as forecasting.

1. Introduction

Among the disturbances affecting the statistical properties of time series, noise is particularly challenging, especially in economics. It arises from various phenomena, including attrition and measurement errors. Unlike controlled experiments, which can be repeated, only one realization of a time series is typically available, making it difficult to extract relevant information. Certified economic data, often released by central banks and statistical bureaus, are crucial for decision-making. Our denoising method leverages wavelets, which are local orthonormal bases that decompose functions into different scales. Wavelets are widely accepted in various fields, such as engineering and economics, particularly for applications involving discrete time. They were introduced in economics in the 1990s, highlighting their modeling capabilities and enhancements in forecasting performance [1]. Donoho pioneered wavelet-based noise reduction methods, leading to various techniques that improve signal clarity in the presence of noise [2]. Our method capitalizes on wavelets’ adaptability to complex signals, which is crucial for economic time series characterized by non-linear dynamics and erratic patterns. We utilize a model-free approach, relying on the assumption of white noise corruption. This design minimizes uncertainty by avoiding complex statistical models, focusing instead on optimizing a limited set of tuning parameters. The selection process for the optimal waveform usually requires a trial-and-error approach; however, constraining the candidate waveforms can lead to suboptimal results due to bias and increased uncertainty.

2. Outline of the Method

A “good” denoiser should preserve important features while filtering out unnecessary noise. Our method can be considered risk-free as it disengages if it cannot separate the signal from white noise. This property has clear advantages, including redirecting the analyst to more suitable denoisers and minimizing the risk of producing distorted filtered series.
Our method processes a predefined set of wavelets individually, performing a meta-parameter grid search to optimize three parameters: decomposition levels, admissible vanishing moments, and threshold levels. The choice of the “best” wavelet involves two functions: a discriminating function that rules out waveforms incapable of producing a white noise sequence of residuals, and an information-theoretic function that selects the waveform maximizing the entropy of this residual sequence.
Our procedure has several strengths:
  • It reduces uncertainty related to:
    (a) model building (model uncertainty), i.e., the uncertainty linked to the various steps required for constructing the final model;
    (b) parameter tuning: a poorly calibrated model can exhibit excessive sensitivity or fail to capture relevant information, resulting in a biased representation of the original series.
  • It is agnostic. The method can handle any number and type of waveforms. If it cannot disentangle the signal from noise, it applies no filtering.
  • It is fully automatic. Once the grid search strategy is chosen, no further interventions are needed, avoiding the manual selection of wavelets and tuning parameters, thereby lowering the overall uncertainty of the analysis.
  • It is fast. In all tested configurations, the computational time has proven reasonable, considering the typical time spans of macroeconomic time series. A model-based approach was discarded to maintain low levels of uncertainty, as building, estimating, and validating suitable models for numerous time series can be cumbersome, especially under time constraints, which is often the case for press releases or policy feedback.

2.1. Statement of the Problem

Let $\{x_t;\ t = 1, 2, \dots, T\}$ denote the time series of a given stochastic process, which is expected to possess the following characteristics: (i) it is real-valued, (ii) it has a continuous spectral density, (iii) the data are identically and independently distributed, and (iv) it may not necessarily be stationary. The lack of stationarity, as well as of other desirable properties, can arise from the nature of many real-life time series, such as the Business Confidence Indexes utilized in the empirical section of this paper. These indexes reflect the judgments made by economic actors, who are often irrational and influenced by biases stemming from personal involvement and a lack of complete information.
Formally, the underlying data generating process can be represented as follows:
$$x_t = s_t + \delta_t + u_t; \qquad u_t \sim \mathrm{GWN}(0, \sigma_u^2); \qquad \delta_t = g(t) \tag{1}$$
Here, $s_t$ is the signal—i.e., the portion of the observed data containing the slow-varying, more consolidated components—expected to be (at least) locally stationary and linear. In a sense, we can think of $s_t$ as being generated by more rational interpretations of future economic scenarios, and thus it is less affected by contingent situations. The erratic, emotion-driven dynamics, on the other hand, are captured by the term $\delta_t = g(t)$, which, like $s_t$, cannot be observed except through its effects on $x_t$. The treatment of $\delta_t$ is, however, the more critical of the two because, as previously pointed out, the information of interest—which should be preserved as much as possible—is primarily contained in $\delta_t$ (and only secondarily in $s_t$).
In more detail, while $\delta_t$ is crucial for the real-time assessment of the economy (or one of its sectors), $s_t$ is typically employed in a backward-looking fashion, i.e., to study the evolution pattern of a given sentiment indicator over the years or to measure the impact of past initiatives, for example by performing intervention analysis; see, for example, [3] or [4]. In general, the extraction of $s_t$ is a less complicated matter than that of $\delta_t$, and can be performed in various ways using simpler approaches (e.g., the Hodrick–Prescott or Baxter–King filters). Finally, $u_t$ is a residual sequence of the Gaussian White Noise (GWN) type, assumed to have a mean of 0 and constant (unknown) variance $\sigma_u^2$. As will be explained later, an approximately Gaussian distribution of the residuals is required for the discriminating function to work properly.
The proposed procedure aims to reduce $\sigma_u^2$ by replacing $x_t$ in (1) with its “optimal” filtered counterpart, denoted as $\tilde{x}^*_t$, according to a suitable cost function. Formally, a filter is an operation performed on a given time series—called the input series or simply input ($x_t$)—to obtain a new series—called the output series or simply output ($\tilde{x}^*_t$)—which shows a reduced amplitude at one or more frequencies. In symbols, we have:
$$x_t - \tilde{x}^*_t = \tilde{u}^*_t; \qquad \tilde{u}^*_t \sim \mathrm{GWN}(0, \sigma^2_{\tilde{u}^*}); \qquad \sigma^2_{\tilde{u}^*} < \sigma_u^2 \tag{2}$$
As will be outlined in Section 2.2, there are no guarantees that the inequality in (2) holds with probability one. Therefore, if $\sigma^2_{\tilde{u}^*} \geq \sigma_u^2$, the procedure should be abandoned in favor of a more suitable one.
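To make the decomposition in Equation (1) and the success criterion of Equation (2) concrete, the following Python sketch simulates a hypothetical series with a smooth signal, an erratic component, and Gaussian white noise; every functional form and parameter value in it is an illustrative assumption, not part of the proposed method.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 396  # sample size comparable to the series analyzed in Section 3.3

# Hypothetical ingredients of the DGP in Eq. (1): all forms and values below
# are illustrative assumptions, not prescriptions of the paper.
t = np.arange(T)
s = 0.5 * np.sin(2 * np.pi * t / 120)          # s_t: slow-varying, locally stationary signal
delta = 0.3 * np.sin(2 * np.pi * t / 7) ** 3   # delta_t = g(t): erratic, fast-moving component
u = rng.normal(0.0, 0.25, T)                   # u_t ~ GWN(0, sigma_u^2)
x = s + delta + u                              # observed series x_t

def meets_criterion(x, x_filtered, sigma_u2):
    """Success criterion of Eq. (2): the variance of the filtering residuals
    must fall below the noise variance (known here only because data are simulated)."""
    return np.var(x - x_filtered) < sigma_u2
```

In a simulation like this the true $\sigma_u^2$ is known, so the inequality in Equation (2) can be checked directly; with real data it must instead be judged through the residual diagnostics of Section 2.2.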

2.2. The Discriminating Function

Equation (2) refers to the optimal filtered series $\tilde{x}^*_t$—i.e., the one meeting predetermined optimality conditions—and to its residuals $\tilde{u}^*_t$, both found iteratively by our procedure. Essentially, the procedure is designed to sequentially and exhaustively apply a predefined set of $N$ different wavelet filters to the original time series, according to a grid search strategy involving a small set of wavelet hyperparameters. As a result, a set of filtered series—i.e., $\{\tilde{x}_{a,t};\ a = 1, 2, \dots, N;\ t = 1, \dots, T\}$—is obtained. By computing the following $N$ differences (one for each denoised version of the original series):
$$x_t - \tilde{x}_{1,t} = \tilde{u}_{1,t}, \quad x_t - \tilde{x}_{2,t} = \tilde{u}_{2,t}, \quad \dots, \quad x_t - \tilde{x}_{N,t} = \tilde{u}_{N,t}, \tag{3}$$
or, more compactly, $x_t - \{\tilde{x}_{a,t}\}$, the $T \times N$ matrix of residuals $\tilde{U} \equiv \{\tilde{u}_{a,t};\ a = 1, 2, \dots, N;\ t = 1, \dots, T\}$ is generated. Here, each column vector: (i) can be seen as a $T$-dimensional point in the space $\mathbb{R}^T$ ($\tilde{u}_{a,t} \in \mathbb{R}^T$) and (ii) belongs to a set denoted by $\Omega$, with cardinality $|\Omega| = N$ (the symbol $|\cdot|$ denotes the cardinality function).
Even though one can legitimately expect that—out of the $N$ filtered series—a larger proportion of filter configurations generates residuals that are not white noise (for instance, those embedding underlying linear structures), it is also reasonable (see Equation (5) and the related discussion) to find a number $p \leq N$ of them producing white noise residual sequences. These series are of particular interest and will be stored in a competition set, denoted by $\Omega_p$, from which the “best” one—already denoted as $\tilde{x}^*_t$—will be extracted, i.e., $\tilde{x}^*_t \in \Omega_p \equiv \{\tilde{x}_{1,t}, \tilde{x}_{2,t}, \dots, \tilde{x}_{p,t}\} \subseteq \Omega$.
Let $\tilde{U} \equiv \{\tilde{u}_{1,t}, \tilde{u}_{2,t}, \dots, \tilde{u}_{N,t}\}$ be the set of $N$ vectors of residuals computed in (3), and let $\alpha$ be a predetermined significance level associated with the discriminating function $\mathcal{L}$. The filtered series generating residual sequences for which $p.\mathrm{val}(\mathcal{L}) > \alpha$—denoted by $\tilde{U}_p$—are stored in a competition set called $\Omega_p$. Formally:
$$\tilde{U} \in \Omega_p \Leftrightarrow p.\mathrm{val}(\mathcal{L}(\tilde{U})) > \alpha, \tag{4}$$
where the symbol $\Leftrightarrow$ denotes the statement “if and only if”. It is worth noting that, by design, the search strategy is performed using a wide range of filter configurations, which may vary in adequacy for the time series under investigation. As a result, in general, only a small number of residual sequences satisfy the $p$-value condition of Equation (4), which is why, in practice, we usually observe that
$$|\Omega_p| < |\Omega| \tag{5}$$
At this point, one might consider the construction of the competition set $\Omega_p$ an unnecessary step, given that maximizing the Ljung–Box test’s $p$-value is a necessary and sufficient condition:
$$\tilde{x}^*_t : \max_{\tilde{u} \in \tilde{U}} p.\mathrm{val}(\mathcal{L}(\tilde{u})), \qquad p.\mathrm{val}(\mathcal{L}(\tilde{u})) > \alpha \tag{6}$$
In principle, such an objection is correct, and (6) provides a valid approach. However, we want the residuals $\tilde{u}^*_t$ generated by the winning filtered series not only to be maximally uninformative, but also to exhibit the highest relative weight with respect to the original time series. Specifically, even if the condition represented in Equation (6) is satisfactory from a theoretical perspective, it may not represent an operational solution, as the extracted time series can be practically indistinguishable from the original one. This occurs when the wavelet filter manages to eliminate only a tiny portion of noise—which does satisfy Equation (6)—while leaving the data largely unaltered. Formally, we have that
$$\frac{\sigma^2(\tilde{u}^*)}{\sigma^2(x_t)} = \epsilon, \tag{7}$$
where $\epsilon$ denotes a very “small” quantity.
As already pointed out, the cardinality of $\Omega_p$ is not necessarily greater than one; two limiting cases can occur, namely
$$|\Omega_p| = 1 \tag{8}$$
and
$$|\Omega_p| = 0. \tag{9}$$
Under these conditions, if $|\Omega_p| = 1$ (as expressed in Equation (8)), one is left with no option but to use $\tilde{x}^*_t$, which hopefully satisfies the conditions in Equation (4). The least favorable situation is expressed by Equation (9), i.e., when none of the considered wavelet filters is able to verify Equation (4). In such a circumstance, one might either accept progressively lower statistical significance levels $\alpha$ in the condition $p.\mathrm{val}(\mathcal{L}) > \alpha$, or decide to abandon our procedure.
In Section 3, a target function $I$ will be presented. Its objective is to find the maximizer of the variance ratio $\sigma^2(\tilde{u}^*)/\sigma^2(x_t)$ and thus to solve or alleviate—when possible—the problem described in the context of Equation (6).
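As a minimal sketch of how the discriminating function could be implemented, the fragment below uses the Ljung–Box test from statsmodels in the role of $\mathcal{L}$; the dictionary of candidate filtered series, the lag choice, and the value of $\alpha$ are assumptions of the example rather than prescriptions of the paper.

```python
from statsmodels.stats.diagnostic import acorr_ljungbox

def competition_set(x, candidates, alpha=0.05, lags=12):
    """Build Omega_p (Eq. (4)): keep only the filtered series whose residuals
    x - x_tilde are compatible with white noise according to the Ljung-Box test."""
    omega_p = {}
    for name, x_tilde in candidates.items():
        residuals = x - x_tilde
        lb = acorr_ljungbox(residuals, lags=[lags], return_df=True)
        p_value = float(lb["lb_pvalue"].iloc[0])
        if p_value > alpha:                 # p.val(L) > alpha
            omega_p[name] = (x_tilde, p_value)
    return omega_p                          # may be empty, the case of Eq. (9)
```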

3. The Target Function I

The solution devised to avoid—when possible—the situation described by Equation (6) is to apply a suitable target function $I$ to the subset $\Omega_p$. Under the conditions of Equation (6), and provided that $\Omega_p$ contains more than one element (i.e., the strict version of Equation (8), $|\Omega_p| > 1$), the target function is employed to select a better filtered version of the given time series. In practice, we need to pick a new filtered series that is not too similar to the original one, but is still able to pass the Portmanteau test. In this setup, a more meaningful filtered series is one that shows the maximum level of discrepancy compared to the original series, even if this comes at the expense of the whiteness of the vector $\tilde{u}_t$. The function adopted for this purpose is the Kullback–Leibler (KL) divergence (also called relative entropy), which can be roughly defined as a measure of how one probability distribution differs from a second, reference probability distribution. KL divergence is pivotal in information theory, as it is related to fundamental quantities such as self-information, mutual information, Shannon entropy, conditional entropy, and cross-entropy. Unlike similar measures (e.g., the variation of information), it is not a metric, since it is not symmetric and does not satisfy the triangle inequality. KL divergence is applied in many contexts, such as characterizing the relative Shannon entropy in information systems, assessing randomness in continuous time series, or evaluating the information gain achieved by different statistical models.
Roughly speaking, the KL divergence measures how one probability density function (PDF) differs from a second one used as a benchmark. Let $\mathbf{x}$ be the $T$-dimensional vector $(x_1, \dots, x_T)$ of realizations of $X_t$ (our time series); we define a function $f(\mathbf{x}|\theta_0)$ identifying the “reference” PDF and a set of $m$ $f$-approximating functions $g_i(\mathbf{x}|\hat{\theta}_i)$, $i = 1, 2, \dots, m$.
The KL information number between a given model $f_0(\mathbf{x})$ and an approximating one $g_i(\mathbf{x})$ can be expressed, for continuous distributions, by
$$I(f_0(\mathbf{x}|\theta_0), g_i(\mathbf{x}|\theta_i)) = \int f(\mathbf{x}|\theta_0) \ln \frac{f(\mathbf{x}|\theta_0)}{g_i(\mathbf{x}|\theta_i)}\, d\mathbf{x} \tag{10}$$
For discrete probability distributions, say P and Q, defined on the same probability space, the Kullback–Leibler divergence between P and Q is defined to be
$$I(P\,\|\,Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)} \tag{11}$$
The relation expressed in (10)–(11) quantifies the amount of information lost due to the application of approximating functions under the condition of i.i.d. observations. However, in the present context, such a condition is violated and, therefore, the KL discrepancy as expressed above is not applicable. Nevertheless, we can still use this theoretical framework by replacing the density functions $f(\cdot)$ and $g(\cdot)$ of (10) with the spectral densities computed on the “true” DGP and on its observed realizations, here denoted by the symbols $F(\cdot)$ and $G(\cdot)$, respectively, i.e.,
$$I(F_0(\mathbf{x}|\theta_0), G_i(\mathbf{x}|\theta_i)) = \int F(\mathbf{x}|\theta_0) \ln \frac{F(\mathbf{x}|\theta_0)}{G_i(\mathbf{x}|\theta_i)}\, d\mathbf{x} \tag{12}$$
Usually, we want $I(\cdot)$ to be as small as possible. For example, Akaike’s well-known Information Criterion aims to select the model with the smallest Kullback–Leibler divergence from the “true” spectral density $F_0$ generating the data $\mathbf{x}$. In this regard, $F_0$ (i.e., the “truth”) is something we want the spectral density computed on our approximating model to be as close to as possible, i.e.,
$$\arg\min_{\theta} I(F_0(\mathbf{x}|\theta_0), G_i(\mathbf{x}|\theta_i)) \tag{13}$$
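As an illustration only, the divergence between spectral densities in Equation (12) can be approximated by replacing the unknown densities with normalized periodograms; the sketch below does this with scipy, and the normalization and the small regularizing constant are assumptions of the example.

```python
import numpy as np
from scipy.signal import periodogram
from scipy.stats import entropy

def spectral_kl(x, y, eps=1e-12):
    """KL divergence between the normalized periodograms of two series, used as
    a plug-in estimate of the spectral divergence in Eq. (12)."""
    _, f_x = periodogram(x, detrend="linear")
    _, f_y = periodogram(y, detrend="linear")
    p = (f_x + eps) / (f_x + eps).sum()     # normalize so both behave like discrete PDFs
    q = (f_y + eps) / (f_y + eps).sum()
    return float(entropy(p, q))             # sum_k p_k * log(p_k / q_k), cf. Eq. (11)
```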

3.1. The Target Function I

In our particular context, the Kullback–Leibler divergence $I(\cdot)$ will be employed to solve the problem of the selected filtered time series passing, on the one hand, the Ljung–Box test—i.e., $p.\mathrm{val}(\mathcal{L}) > 0.05$ (see Equation (4))—but showing, on the other hand, very “small” differences compared to the original time series. In this sense, the function $I$ is used in a way diametrically opposite to the purpose it was designed for. In fact, here we are not interested in maximizing the closeness of an approximating function to the true, unknown DGP (as in the case of Equation (13)), but in finding, among the time series $\{\tilde{x}^*_1, \tilde{x}^*_2, \dots, \tilde{x}^*_w\} \in \Omega_p$, the one that generates the greatest amount of discrepancy with respect to the original time series $x_t$. In symbols:
$$\mathrm{Max}_2 : \max_{\mathbf{C}} I(f_0(\mathbf{x}|\theta_0), g_i(\mathbf{x}|\mathbf{C})) \tag{14}$$
In this equation, the “truth” is what we actually observe, i.e., the time series; therefore, $f_0$ is simply the identity function. On the other hand, $g_i(\cdot|\mathbf{C})$ is the wavelet (approximating) function, defined through the 3-dimensional vector of tuning parameters $\mathbf{C}$. As will be illustrated in Section 3.2, those parameters are: the number of resolution levels, the number of vanishing moments, and the threshold.
In our theoretical framework, the maximization of the target function (Equation (14)) is approximately equivalent to the maximization of the residual variance $\mathrm{var}(\tilde{x}_t - x_t)$ and, consequently, of the magnitude of $\epsilon$ (see Equation (7)). To see this, we recall that signal and noise are uncorrelated—provided that the condition of Equation (4) is verified—and, therefore, the spectral density of the original series, say $F(\lambda)$, can be expressed as the sum of a constant quantity, i.e., the spectral density of the noise $F_e(\lambda)$, plus the spectral density of the filtered series $F^*(\lambda)$. By plugging the last two densities into Equation (12), we can re-express it as follows:
$$I(F(\cdot), F^*(\cdot)) = \int F(z) \log \frac{F(z)}{F^*(z)}\, dz = \int F(z) \log \frac{F(z)}{F(z) - F_e(z)}\, dz = -\int F(z) \log \left\{ 1 - \frac{F_e(z)}{F(z)} \right\} dz \tag{15}$$
Now, given that $F_e(\cdot)$ is “small” compared to $F(\cdot)$, we can use the approximation $\log(1 - x) \approx -x$ to rewrite (15) as follows:
$$I(F, F^*) \approx \int F(z)\, \frac{F_e(z)}{F(z)}\, dz = \int F_e(z)\, dz = \mathrm{var}(e) \tag{16}$$
By virtue of this approximation, the following is true:
$$\max_{\mathbf{C}} I(f_0(\mathbf{x}|\theta_0), g_i(\mathbf{x}|\mathbf{C})) \approx \max_{\mathbf{C}} \mathrm{var}(\tilde{x}_t - x_t) \tag{17}$$
In practice, recalling that the subset $\Omega_p$ contains all the filtered time series $\{\tilde{x}^*_{i,t}\}_{i=1}^{w}$ such that $p.\mathrm{val}(\mathcal{L}) > 0.05$, under the strict version of Equation (8) (i.e., $|\Omega_p| > 1$), the function $I(\cdot)$ is applied pairwise to all the elements of the set $\Omega_p$, as is now explained. Let $P(\cdot)$ and $Q(\cdot)$ be the frequency distributions of the filtered and original data, respectively; we have that
$$I(P\,\|\,Q_{x^*}), \qquad x^* \in \Omega_p,$$
so that the series $\tilde{x}^*_0 \in \Omega_p$, which satisfies
$$\max_{x^* \in \Omega_p} I(P\,\|\,Q_{x^*}),$$
will be the selected denoised version of the original series.
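Combining the two previous sketches (it reuses spectral_kl and the dictionary returned by competition_set), one possible, non-authoritative implementation of this selection rule is:

```python
def select_denoised(x, omega_p):
    """Selection rule of Section 3.1: among the candidates surviving the
    Ljung-Box screen, return the filtered series most divergent from x."""
    if not omega_p:
        return None                              # Omega_p empty: abandon the procedure
    best = max(omega_p, key=lambda k: spectral_kl(omega_p[k][0], x))
    return best, omega_p[best][0]
```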

3.2. The Adopted Wavelet Filters

It is assumed that the white noise component translates into the wavelet domain and, as such, is captured by a set of coefficients across the different bands into which the signal is decomposed.
The wavelet function is effectively a multi-level band-pass filter that progressively halves its bandwidth across the scaling levels $j = 1, 2, \dots, J$, with $J$ typically determined arbitrarily. Covering the entire spectrum in this way would require an infinite number of levels; the scaling function therefore filters the lowest level of the transform and ensures that the entire spectrum is covered. For a detailed explanation, see [5]. Our denoising method is of the wavelet shrinkage type. As such, it is particularly suitable for disentangling information from noise with minimum computational complexity. More specifically, in the wavelet domain, the information has a coherent structure whose energy is captured and “stored” by a limited number of high-magnitude coefficients. The remaining coefficients exhibit small magnitudes and account for incoherent, low-energy structures (the noise). The sparse structure of the coefficients reflecting the signal is exploited by a threshold-driven shrinkage mechanism to separate the coefficients carrying useful information from those accounting for noisy structures. In our setup, the noise threshold is of the hard type, i.e.,
$$\tilde{w}_{j,i} = w_{j,i} : |w_{j,i}| \geq \lambda_j, \qquad 0 < \lambda_j \tag{19}$$
with $w_{j,i}$ and $\tilde{w}_{j,i}$ being, respectively, the original (noisy) and thresholded (denoised) wavelet coefficients pertaining to the decomposition level $j$. Even if (19) allows $\lambda$ to vary across the $j$ decomposition levels, for the sake of readability—and without loss of generality—in what follows we will restrict our attention to the case of a single threshold. Therefore, (19) reads as
$$\tilde{w}_{j,i} = w_{j,i} : |w_{j,i}| \geq \lambda, \qquad 0 < \lambda$$
As will be clarified later, our method envisions the threshold $\lambda$ being selected out of a set of candidates, i.e., $\Lambda \equiv [\lambda_1, \dots, \lambda_r]$, according to a grid search procedure.
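A minimal sketch of the hard-thresholding step, written with PyWavelets; note that it uses the ordinary decimated DWT rather than the MODWT employed in the paper, and the wavelet name, decomposition level, and threshold value are arbitrary placeholders.

```python
import pywt

def hard_threshold_denoise(x, wavelet="coif3", level=5, lam=4.0):
    """Hard-threshold wavelet shrinkage: detail coefficients with |w| < lam are
    zeroed, the remaining ones are kept unchanged (cf. Eq. (19))."""
    coeffs = pywt.wavedec(x, wavelet, level=level, mode="periodization")
    approx, details = coeffs[0], coeffs[1:]
    details = [pywt.threshold(d, value=lam, mode="hard") for d in details]
    x_tilde = pywt.waverec([approx] + details, wavelet, mode="periodization")
    return x_tilde[: len(x)]   # waverec can return one extra sample for odd-length inputs
```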
It is evident that the coefficients $w_{j,i}$ depend on the selected filter. For the purposes of this study, a comprehensive selection of filters has been considered, including: “haar”, “d2”, “d4”, “d6”, “d8”, “d10”, “d12”, “d14”, “d16”, “d18”, “d20”, “s2”, “s4”, “s6”, “s8”, “s10”, “s12”, “s14”, “s16”, “s18”, “s20”, “l2”, “l4”, “l6”, “l14”, “l18”, “l20”, “c6”, “c12”, “c18”, “c24”, and “c30”. We assume that each filter is represented as $H_1, H_2, \dots, H_h$ and that, collectively, these are organized within a set denoted by $\mathcal{W} \equiv [H_1, H_2, \dots, H_h]$.
The rationale underpinning the proposed method derives from the observation that traditional wavelet denoising approaches—characterized by a series of “a priori” selections—can introduce substantial uncertainty into the analytical framework. The uncertainties under consideration are primarily linked to the selection of the waveform, as well as to the associated number of vanishing moments, the number of decomposition levels, and the choice of threshold level. The proposed algorithm, as summarized below, deviates from conventional methodologies by assuming no prior knowledge of the structure of the denoiser. Instead, it is specifically designed to systematically evaluate the various denoising parameters with respect to the discriminating function $\mathcal{L}(\cdot)$ introduced in Section 2.2 and the target function $I$ of Section 3.

The Algorithm

  1. A waveform $H_s \in \mathcal{W}$ is selected;
  2. A threshold $\lambda \in \Lambda$ is chosen;
  3. The set $\mathcal{K} \equiv \{K_1, \dots, K_k\}$, composed of the different numbers of decomposition levels to be tested, is selected. It is required that $1 \leq K \leq M$, where $M = \log_2(N)$ and $N = \mathrm{length}(X_t)$;
  4. Without loss of generality, we assume the algorithm starts with $H = H_1$, $\lambda = \lambda_1$, and $K = K_1$;
  5. The MODWT of the original signal is computed using $K_1$ decomposition levels. As a result, $K_1$ components become available, i.e., $\{C_{1,1}, C_{1,2}, \dots, C_{1,K_1}\}$;
  6. The triple $(H_1, \lambda_1, K_1)$ is applied to the original signal to obtain the denoised components, i.e., $\{\tilde{C}_{1,1}, \tilde{C}_{1,2}, \dots, \tilde{C}_{1,K_1}\}$;
  7. The inverse MODWT is applied to $\{\tilde{C}_{1,1}, \tilde{C}_{1,2}, \dots, \tilde{C}_{1,K_1}\}$; the first denoised series, $\tilde{x}_{1,t}$, is obtained;
  8. Steps 5 to 7 are repeated until all remaining combinations of the elements belonging to $\mathcal{W}$, $\Lambda$, and $\mathcal{K}$ have been applied, resulting in the respective denoised versions of the original signal, i.e., $\tilde{x}_{2,t}, \tilde{x}_{3,t}, \dots, \tilde{x}_{N,t}$;
  9. By subjecting the set of all denoised series $\tilde{x}_{1,t}, \tilde{x}_{2,t}, \dots, \tilde{x}_{N,t}$ to the conditions $\mathrm{Max}_1$ (Section 2.2) and $\mathrm{Max}_2$ (Section 3.1), the final series $\tilde{x}^*_t$ is selected.
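Putting the steps above together, a hedged end-to-end sketch might look as follows; it reuses hard_threshold_denoise, competition_set, and select_denoised from the earlier sketches, substitutes the decimated DWT for the MODWT, and the candidate grids of waveforms and thresholds are purely illustrative.

```python
import numpy as np
import pywt

def automatic_denoise(x, waveforms=("haar", "db4", "sym8", "coif3"),
                      thresholds=(2.0, 3.0, 4.0, 5.0), alpha=0.05):
    """End-to-end sketch: grid-search waveform, threshold and decomposition level,
    screen the residuals with the Ljung-Box test (Max_1), then pick the candidate
    maximizing the spectral KL divergence from the original series (Max_2)."""
    x = np.asarray(x, dtype=float)
    candidates = {}
    for w in waveforms:
        max_k = pywt.dwt_max_level(len(x), pywt.Wavelet(w).dec_len)
        for lam in thresholds:
            for k in range(1, max_k + 1):
                candidates[(w, lam, k)] = hard_threshold_denoise(x, wavelet=w, level=k, lam=lam)
    omega_p = competition_set(x, candidates, alpha=alpha)
    if not omega_p:
        return None          # no configuration passes Eq. (4): the method disengages
    return select_denoised(x, omega_p)
```

Returning None when no configuration passes the Ljung–Box screen mirrors the “risk-free” behavior described in Section 2: the method simply disengages instead of forcing a distorted filter.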

3.3. Empirical Experiment

In this study, we analyze a comprehensive dataset consisting of 236 time series belonging to the category of Business Confidence Indexes, produced by the Italian National Institute of Statistics, spanning from January 1986 to December 2018 (396 observations per series). In this section, we present two specific outcomes derived from this broader empirical experiment. By focusing on these distinct outcomes, we seek to illustrate the efficacy of our denoising procedure and the insights it offers regarding the underlying economic conditions reflected in the data.
In Figure 1, we present the optimal waveform applied to the Istat series titled “Liquidity vs. Operational Needs”. This waveform is classified as “c24”, which denotes a Coiflet filter characterized by 24 coefficients and 12 vanishing moments. The analysis employs five levels of resolution and a threshold value of 3.9. The residuals generated by this filtering procedure pass the Portmanteau test, reinforcing our confidence in the method’s ability to effectively distinguish and eliminate uninformative (noise) components from the signal. A similarly favorable outcome is observed for the time series entitled “Assessment of Finished Goods Inventories”, depicted in Figure 2. In this case, the optimal filter is of the type “c18”, indicating a Coiflet filter with 18 coefficients and 12 vanishing moments, also utilizing five levels of resolution but with a threshold set at 4.8. This consistent performance across both analyses underscores the robustness of our denoising methodology and its applicability in extracting meaningful insights from complex economic indicators.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ramsey, J.B. The contribution of wavelets to the analysis of economic and financial data. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 1999, 357, 2593–2606.
  2. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627.
  3. Box, G.E.P.; Tiao, G.C. Intervention analysis with applications to economic and environmental problems. J. Am. Stat. Assoc. 1975, 70, 70–79.
  4. Box, G.E.P.; Hillmer, S.; Tiao, G.C. Analysis and Modeling of Seasonal Time Series; NBER: Cambridge, MA, USA, 1979; pp. 309–346.
  5. Valens, C. A Really Friendly Guide to Wavelets; The University of New Mexico: Albuquerque, NM, USA, 1999.
Figure 1. Upper panel: original time series (continuous line) of the series “Liquidity vs. Operational Needs” and its filtered version (dashed line). Lower panel: autocorrelation function of the residuals $(\tilde{x}^*_t - x_t)$.
Figure 2. Upper panel: original time series (continuous line) of the variable “Assessment of Finished Goods Inventories” and its filtered version (dashed line). Lower panel: autocorrelation function of the residuals $(\tilde{x}^*_t - x_t)$.