Applied Sciences
  • Article
  • Open Access

10 June 2025

Transient Time Reduction in Time-Varying Digital Filters via Second-Order Section Optimization

Faculty of Electrical Engineering, West Pomeranian University of Technology, 70-310 Szczecin, Poland
* Author to whom correspondence should be addressed.

Abstract

Time-varying digital filters are widely used in dynamic signal processing applications, but their transient response can significantly impact performance, particularly in real-time systems. This study focuses on reducing transient time in time-varying filters through second-order section (SOS) optimization. By employing a numerical optimization approach, we selectively adjust the coefficients of a single SOS within a higher-order filter to minimize the transient period while maintaining overall stability. Using a sequential quadratic programming (SQP) algorithm, we determine a time-varying coefficient trajectory over a finite horizon, ensuring a rapid convergence to steady-state behavior. Experimental results demonstrate that this targeted coefficient adaptation reduces transient time by up to 80% compared to conventional static designs, with minimal computational overhead. Additionally, a comparative analysis with traditional linear time-invariant (LTI) filters highlights the advantage of this method in suppressing transient oscillations while preserving long-term filter characteristics. The proposed approach provides a practical and efficient strategy for enhancing filter responsiveness in applications requiring both stability and real-time adaptability. These findings suggest that selective time variation in SOS decomposition can be a valuable tool in digital filter design, improving efficiency without excessive memory or processing demands.

1. Introduction

Digital filters are fundamental tools in signal processing, widely used in applications ranging from telecommunications and control engineering to audio engineering. Traditionally, filters are designed with fixed coefficients optimized to meet specific performance criteria such as frequency response and stability. However, in dynamic systems or time-varying environments, static filters may fail to deliver sufficient responsiveness to changing signal characteristics [1,2,3]. To address this, researchers have developed both adaptive filtering techniques—such as LMS, RLS, or Wiener filters—and time-varying filter architectures that modify filter characteristics over time [4,5,6]. Adaptive filters typically update their coefficients in real time, guided by an error signal derived from system feedback. While powerful, they are often computationally intensive and may require high memory throughput, making them less suitable for real-time or embedded systems with strict resource constraints. In contrast, time-varying filters with precomputed parameter trajectories offer a fixed, feedforward method for enhancing transient performance without incurring significant runtime/online cost.
Time-varying filters, characterized by coefficients that change over time, offer enhanced flexibility and adaptability in signal processing. Despite their advantages, time-varying filters introduce new challenges, such as increased computational complexity and potential stability issues. One critical concern is transient time—the period required for a filter to settle into its steady-state response (typically to within 2% of the final value). This aspect is particularly important in real-time applications, where delays can degrade system performance. Higher-order filters, while providing better selectivity, are especially prone to prolonged transients, further complicating the design process.
This paper investigates the use of numerical optimization of second-order section (SOS) coefficients to minimize transient times in time-varying filters, offering a pathway to more efficient signal processing.
Transient time is a crucial factor in filter performance, representing the duration required for a filter to reach its steady-state response after a change in input or configuration. Additionally, a filter’s initial conditions (hot/cold start) significantly influence transient duration. In many applications, especially real-time systems, long transient times can introduce unacceptable delays, reducing overall system responsiveness and accuracy. Higher-order filters, while providing precise control over signal characteristics, often exhibit longer transient times, making their practical implementation more challenging. Reducing transient time without compromising filter stability and performance is a significant design challenge that demands innovative solutions.
The primary objective of this paper is to minimize transient time in digital filters with time-varying coefficients. To achieve this, we focus on decomposing higher-order filters into second-order sections (SOSs) to improve numerical stability and parameter control. A numerical optimization approach is proposed to fine-tune the coefficients of one or more SOSs, effectively reducing transient behavior. The goal is to strike a balance between transient time reduction and other critical performance metrics, including stability, steady-state accuracy, and memory consumption. By demonstrating the effectiveness of this method, this study aims to provide a practical framework for designing time-varying filters with improved responsiveness while maintaining efficient memory usage for storing time-varying coefficients.

3. Materials and Methods

3.1. Time-Varying Filters

The concept of time-varying coefficients has primarily been explored in the context of adaptive filtering techniques, where the filter’s coefficients evolve over time to address specific problems or align with the chosen method. In this work, we propose using a predefined control rule to achieve the fastest possible response from the equalized filter while preserving its long-term frequency characteristics. Unlike adaptive filtering, our approach involves calculating the control rule once during the design phase, allowing it to be applied consistently whenever a relevant excitation occurs. The difference equation of a digital linear time-variant system (LTV) can be written as follows:
$$ y[n] = -\sum_{k=1}^{M} a_k[n]\, y[n-k] + \sum_{k=0}^{N} b_k[n]\, x[n-k], $$
where
$M$ — The number of previous outputs $y[n-k]$ included in the equation.
$N$ — The number of current and previous inputs $x[n-k]$ considered.
$a_k[n]$ — Time-varying coefficients for the feedback terms (previous outputs).
$b_k[n]$ — Time-varying coefficients for the feedforward terms (inputs).
$y[n]$ — Current output at time step $n$.
$x[n]$ — Current input at time step $n$.
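For illustration, the recursion above can be evaluated directly sample by sample. The following minimal Python/NumPy sketch (an illustrative stand-in, not the authors' MATLAB implementation) assumes the coefficient trajectories are supplied as arrays indexed by the time step n:

```python
# Minimal sketch: direct evaluation of the LTV difference equation above,
# assuming the coefficient trajectories a[n, k] and b[n, k] are supplied as
# NumPy arrays of shape (len(x), M) and (len(x), N + 1), respectively.
import numpy as np

def ltv_filter(x, a, b):
    """y[n] = -sum_k a_k[n] y[n-k] + sum_k b_k[n] x[n-k], evaluated sample by sample."""
    M = a.shape[1]          # number of feedback taps (k = 1..M)
    N = b.shape[1] - 1      # number of feedforward taps (k = 0..N)
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        acc = 0.0
        for k in range(N + 1):              # feedforward part
            if n - k >= 0:
                acc += b[n, k] * x[n - k]
        for k in range(1, M + 1):           # feedback part
            if n - k >= 0:
                acc -= a[n, k - 1] * y[n - k]
        y[n] = acc
    return y
```

When the coefficient arrays are constant over time, the loop reduces to the familiar LTI direct-form recursion.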

3.2. Transient Behavior Analysis

Transient time in digital filters is the interval required for the filter’s output to stabilize after a sudden change in the input signal, such as a step input. The duration of the transient response is influenced by the filter’s parameters, including the pole locations, which dictate stability and speed of convergence. Poles closer to the unit circle in the z-plane lead to longer transient times, as the system response decays more slowly. Additionally, the filter order is a significant factor. Higher-order filters typically produce more complex and extended transient behaviors due to the interplay of multiple poles and zeros.
Figure 1 depicts an exemplary step response of the 4th-order low-pass Butterworth filter. The transient time, measured to a 1% steady-state band, settles at 0.024 s after the step excitation.
Figure 1. Step response and transient time of an exemplary 4th-order low-pass Butterworth filter (sampling frequency Fs = 1 kHz; cutoff frequency Fc = 50 Hz).
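The settling time quoted above can also be measured programmatically. The sketch below, written in Python with SciPy as a stand-in for the MATLAB toolchain used by the authors, designs the same 4th-order Butterworth filter and locates the last sample at which the step response leaves a ±1% band around its final value; the printed result should be in the vicinity of the 0.024 s reported in Figure 1.

```python
# Sketch of the 1% settling-time measurement for the filter of Figure 1
# (4th-order low-pass Butterworth, Fs = 1 kHz, Fc = 50 Hz), unit-step input.
import numpy as np
from scipy import signal

fs = 1000.0
sos = signal.butter(4, 50, btype="low", fs=fs, output="sos")

n = np.arange(2000)
x = np.ones_like(n, dtype=float)          # unit step
y = signal.sosfilt(sos, x)

final = y[-1]                             # steady-state value (DC gain ~ 1)
tol = 0.01 * abs(final)                   # 1% band around the final value
outside = np.where(np.abs(y - final) > tol)[0]
settle_idx = outside[-1] + 1 if outside.size else 0
print(f"1% settling time: {settle_idx / fs:.4f} s")
```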
Time-varying coefficients can further complicate transient behavior, introducing variability that may prolong or destabilize the response, underscoring the need for careful parameter optimization.

3.3. Second-Order Section Decomposition

Higher-order digital filters often encounter numerical challenges, such as coefficient quantization sensitivity and overflow, which can degrade performance in low-precision or fixed-point implementations. By decomposing the filter into multiple second-order sections (SOS), these numerical issues are localized and thus more effectively mitigated. Each SOS is represented as a biquad transfer function:
$$ H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}. $$
This corresponds to the time-domain difference equation:
$$ y[n] = -a_1 y[n-1] - a_2 y[n-2] + b_0 x[n] + b_1 x[n-1] + b_2 x[n-2]. $$
A higher-order IIR filter of order $N$ is typically realized by cascading $M = \lfloor N/2 \rfloor$ second-order sections, with an optional first-order section for odd-order filters:
$$ H_{\mathrm{total}}(z) = \prod_{i=1}^{M} H_i(z). $$
The decomposition into SOS format is commonly performed using polynomial factorization of the filter’s numerator and denominator.
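As an illustration of this factorization step, the following Python/SciPy sketch (used here as a stand-in for the MATLAB routine employed by the authors) factors a transfer function into second-order sections with tf2sos and verifies that the product of the section responses reproduces the original frequency response:

```python
# Sketch of the SOS factorization: tf2sos factors the numerator/denominator
# polynomials into second-order blocks, and the cascade H(z) = prod H_i(z)
# reproduces the original transfer function up to numerical round-off.
import numpy as np
from scipy import signal

b, a = signal.ellip(6, 1, 40, 100, btype="low", fs=1000)   # 6th-order elliptic
sos = signal.tf2sos(b, a)                                  # shape (3, 6): [b0 b1 b2 a0 a1 a2]

w, h_full = signal.freqz(b, a, worN=1024, fs=1000)
h_prod = np.ones_like(h_full)
for section in sos:                                        # multiply the section responses
    _, h_i = signal.freqz(section[:3], section[3:], worN=1024, fs=1000)
    h_prod *= h_i

print(np.max(np.abs(h_full - h_prod)))                     # tiny (numerical round-off)
```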
SOS, with its fewer coefficients and simpler pole-zero structure, is less susceptible to instability and rounding errors. This modular design facilitates targeted optimization, as each second-order section can be tuned independently for optimal performance, reducing the overall computational effort. From a resource perspective, SOS decomposition lowers memory usage and power consumption, making it particularly valuable for real-time and embedded systems. It also enhances flexibility by allowing sections that handle more critical frequency components to use higher-precision resources as needed. Moreover, the ability to add or remove sections without overhauling the entire filter design increases scalability—an important benefit in adaptive or time-varying scenarios where filter parameters may need to change rapidly. Ultimately, SOS decomposition provides a robust framework for achieving high filter performance, efficient resource utilization, and straightforward optimization in higher-order digital filter designs.
This paper extends the benefits of SOS decomposition by proposing the introduction of time-varying coefficients over a finite horizon in order to improve the transient time of the whole design.

3.4. Decomposition Approach

Decomposing a higher-order digital filter into second-order sections (SOSs) provides a stable and numerically efficient means of realizing the transfer function. In this work, we use MATLAB R2023b’s built-in transfer-function-to-SOS conversion, which takes the filter’s numerator and denominator polynomials (in transfer function form) and factors them into multiple second-order blocks. Each block corresponds to a pair of conjugate poles (and zeros), thereby reducing sensitivity to rounding errors and making it straightforward to manage overflow concerns. We opt for a cascade structure since it simplifies the allocation of scale factors for each block, ensuring a more uniform distribution of gain and helping maintain stability. Moreover, by breaking the filter into multiple SOSs, we can individually optimize or tweak specific sections as needed, offering flexibility for further performance refinements without rewriting the entire filter design.
Figure 2 presents the magnitude response of the full 6th-order low-pass elliptic filter (sampling frequency Fs = 1 kHz; cutoff frequency Fc = 100 Hz; passband ripple Rp = 1 dB; stopband attenuation Rs = 40 dB) and the magnitude responses of the three second-order sections.
Figure 2. Magnitude responses: 6th-order low-pass elliptic filter decomposition.
Figure 3 depicts the step responses of the full and SOS decomposition of the same low-pass elliptic filter.
Figure 3. Step responses: 6th-order low-pass elliptic filter decomposition.
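A sketch of how the curves in Figures 2 and 3 can be reproduced is given below, assuming the same design parameters and using Python/SciPy in place of MATLAB; each biquad row returned by the designer is run on its own and then as part of the full cascade:

```python
# Sketch reproducing the per-section and full-cascade step responses of
# Figures 2 and 3, assuming the same elliptic design parameters.
import numpy as np
from scipy import signal

fs = 1000.0
sos = signal.ellip(6, 1, 40, 100, btype="low", fs=fs, output="sos")  # 3 sections

x = np.ones(200)                                   # unit step, 0.2 s
y_full = signal.sosfilt(sos, x)                    # full cascade
y_sections = [signal.sosfilt(s[np.newaxis, :], x)  # each biquad on its own
              for s in sos]

for i, y_i in enumerate(y_sections, start=1):
    print(f"SOS #{i}: final value {y_i[-1]:+.3f}")
print(f"Full filter: final value {y_full[-1]:+.3f}")
```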
We propose to focus on one of the sections and to apply the time-varying coefficient methodology to reduce the transient time of the whole design.

3.5. Numerical Optimization

We focus on optimizing a single second-order section (SOS) whose time-varying difference equation can be expressed as in (3), where $a_k[n]$ and $b_k[n]$ are the time-varying coefficients for $k \in \{0, 1, 2\}$. Over a finite horizon $H$, these coefficients are allowed to vary from sample to sample, while for $n > H$ they revert to their stationary values, which ensures long-term stability. The objective function is formulated to minimize the transient response by penalizing deviations of the filter output $y[n]$ from a desired steady-state target $y_{ss}$ during the initial $H$ samples:
$$ J(\theta) = \sum_{n=1}^{H} \big( y[n;\theta] - y_{ss} \big)^2, $$
where
$$ \theta = \{\, a_1[n],\, a_2[n],\, b_0[n],\, b_1[n],\, b_2[n] \;\big|\; n = 1, \ldots, H \,\}, $$
collectively represents all time-varying coefficients $a_k[n]$ and $b_k[n]$ for $1 \le n \le H$. The cost function $J(\theta)$ encodes the principle that the filter output should converge as quickly as possible to $y_{ss}$, thereby shortening the transient. Additionally, a set of stability constraints ensures that the time-varying SOS remains a valid, stable candidate. Further details on the stability of time-varying filters can be found in the authors’ previous works. To solve this constrained minimization problem, we employ the Sequential Quadratic Programming (SQP) algorithm, chosen for its robust handling of nonlinear constraints. We have also proposed a bounding constraint of the form
$$ -1 \le a_k[n],\, b_k[n] \le 1, $$
which is a simple yet effective means to prevent runaway filter gains that might otherwise prolong or destabilize the transient.
Once optimized, the coefficient set is stored and used in real-time operation during known transient events (e.g., a step input or signal onset). After H samples, the coefficients revert to their stationary (LTI) values:
$$ a_i[n] = a_i^{\mathrm{LTI}}, \qquad b_i[n] = b_i^{\mathrm{LTI}} \quad \text{for } n > H. $$
This hybrid approach maintains the filter’s desired long-term frequency response while minimizing initial settling time.
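A minimal sketch of this optimization is given below. It uses SciPy's SLSQP solver (an SQP-type method) as a stand-in for the MATLAB SQP routine used by the authors, takes section #2 of the elliptic filter from Section 3.4 as the section to be made time-varying, and assumes a horizon of H = 5 samples and a unit-step excitation; the additional time-varying stability constraints mentioned above are omitted here for brevity.

```python
# Sketch of the finite-horizon coefficient optimization (SLSQP as an SQP-type
# stand-in); horizon, excitation, and initial guess are illustrative choices.
import numpy as np
from scipy import signal
from scipy.optimize import minimize

fs = 1000.0
H = 5                                              # finite horizon (samples)
sos = signal.ellip(6, 1, 40, 100, btype="low", fs=fs, output="sos")
sec = sos[1]                                       # section #2: [b0 b1 b2 1 a1 a2]
b_lti, a_lti = sec[:3], sec[3:]
y_ss = b_lti.sum() / a_lti.sum()                   # steady-state output for a unit step

x = np.ones(300)                                   # step excitation

def run_tv_biquad(theta, x):
    """Biquad with time-varying coefficients for n < H, stationary ones afterwards."""
    coeffs = theta.reshape(H, 5)                   # rows: [a1, a2, b0, b1, b2]
    y = np.zeros_like(x)
    for n in range(len(x)):
        a1, a2, b0, b1, b2 = coeffs[n] if n < H else (a_lti[1], a_lti[2], *b_lti)
        y[n] = b0 * x[n]
        if n >= 1:
            y[n] += b1 * x[n - 1] - a1 * y[n - 1]
        if n >= 2:
            y[n] += b2 * x[n - 2] - a2 * y[n - 2]
    return y

def cost(theta):
    """J(theta): squared deviation from y_ss over the first H samples."""
    y = run_tv_biquad(theta, x)
    return np.sum((y[:H] - y_ss) ** 2)

# Start from the stationary coefficients, clipped to the +/-1 bound above.
# (The paper's time-varying stability constraints are not implemented here.)
theta0 = np.clip(np.tile(np.r_[a_lti[1:], b_lti], H), -1.0, 1.0)
res = minimize(cost, theta0, method="SLSQP",
               bounds=[(-1.0, 1.0)] * theta0.size, options={"maxiter": 300})
print("optimized trajectory:\n", res.x.reshape(H, 5))
```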

4. Results

4.1. Optimization Outcomes

As an example, let us use the elliptic filter described in Section 3.4. Figure 4 presents the step response of the #2 SOS of the final design.
Figure 4. Step response of #2 SOS of the 6th-order elliptic filter.
Table 1 presents the coefficients of the stationary #2 SOS.
Table 1. Linear time-invariant coefficients of the #2 SOS.
Table 2 outlines the performance of the section in terms of transient time duration.
Table 2. Transient time data for classical stationary SOS.
As one can notice, this section’s step response is characterized by significant ripple behavior.
By introducing the iterative SQP solver described in Section 3.5, we obtained a new time-varying SOS with a significantly improved transient time. Table 3 and Table 4 present the set of coefficients ($a_k[n]$ and $b_k[n]$) that change over a span of five samples (the horizon).
Table 3. Time-varying set of $a_k[n]$ coefficients.
Table 4. Time-varying set of $b_k[n]$ coefficients.
As mentioned in Section 3.5, after reaching the horizon (five samples) the coefficients settle on the original #2 SOS parameters to preserve the desired frequency characteristic. Figure 5 outlines the graphical comparison of the LTI and LTV #2 SOS instances.
Figure 5. Comparison of step responses of the #2 SOS designed with classical methodology and the proposed time-varying approach.
Table 5 summarizes the reduction in transient time achieved by using the time-varying coefficients.
Table 5. Transient time data for time-varying SOS.

4.2. Comparison with Baseline

The previous section proved the usefulness of the time-varying concept in terms of reducing the transient time of one of the second-order sections. For the whole design to be useful, one needs to focus on the performance of the final (full) filter design. Figure 6 presents the comparison of the complete static solution with the novel approach to improving one of the SOS parts.
Figure 6. Comparison of full classical time-invariant filter with the novel design with one time-varying SOS.
Figure 6 compares the step responses of a time-invariant, SOS-based digital filter (blue dashed line) with a time-varying, SOS-based design (red solid line). The baseline refers to a standard sixth-order elliptic filter, implemented using three fixed second-order sections. All coefficients remain constant over time, and no transient optimization is applied. In the proposed LTV design, the same sixth-order elliptic filter structure is used, but only one SOS (#2) has its coefficients varied across five samples based on the SQP-optimized trajectory. The other two SOSs remain identical to those in the baseline. Notably, the time-varying filter converges more rapidly to the steady-state level, displaying less overshoot and fewer oscillations during the early samples. This outcome underscores the advantage of selectively adjusting SOS coefficients to reduce unwanted oscillations and accelerate the settling process, reducing the transient time of the filtering structure. Table 6 summarizes the overall reduction in the transient time of the complete design when compared to the classical time-invariant structure.
Table 6. Transient time comparison between classical (LTI—linear time-invariant) and the proposed (LTV—linear time-varying) method.
As one can notice, the reduction in the transient time has reached up to 80% for the 5% threshold, which should be considered a major improvement.
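Continuing the sketch from Section 3.5, the stored trajectory can be applied to section #2 while the other two sections remain stationary, and the 5% settling times of the all-LTI and partially time-varying cascades can then be compared. The coefficient values of Tables 3 and 4 are not reproduced here, so the illustrative trajectory res.x from the earlier sketch stands in for them, and the printed numbers will not match Table 6 exactly:

```python
# Continuation of the Section 3.5 sketch: section #2 runs with the optimized
# trajectory for the first H samples; sections #1 and #3 stay stationary.
def full_filter(x_in, tv_theta=None):
    """Cascade of the three sections; section #2 optionally time-varying."""
    y1 = signal.sosfilt(sos[:1], x_in)              # section #1 (stationary)
    if tv_theta is None:
        y2 = signal.sosfilt(sos[1:2], y1)           # section #2, stationary baseline
    else:
        y2 = run_tv_biquad(tv_theta, y1)            # section #2, time-varying for n < H
    return signal.sosfilt(sos[2:], y2)              # section #3 (stationary)

def settling_time(y, tol=0.05):
    """Last time the response leaves a +/- tol band around its final value."""
    final = y[-1]
    outside = np.where(np.abs(y - final) > tol * abs(final))[0]
    return (outside[-1] + 1) / fs if outside.size else 0.0

y_lti = full_filter(x)                              # all-LTI baseline
y_ltv = full_filter(x, tv_theta=res.x)              # one time-varying SOS
print("5% settling time, LTI:", settling_time(y_lti), "s")
print("5% settling time, LTV:", settling_time(y_ltv), "s")
```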

4.3. Robustness Analysis

A natural way to highlight the benefits of introducing time-varying coefficients into a single second-order section is to compare how both the classical, fully LTI filter and the partially time-varying design perform under changing signal conditions. Specifically, analyzing their time-domain behavior when subjected to different noise levels or varying signal frequencies provides clear evidence of how transient response and overall performance are impacted. By overlaying their outputs on a noisy input signal, one can observe which approach settles faster and better suppresses undesired oscillations. This direct, side-by-side comparison helps quantify whether allowing even a single SOS to vary in time can significantly reduce the transient period without degrading the filter’s long-term characteristics.
Figure 7 illustrates the exemplary performance of both the classical and the proposed structures in a noisy environment.
Figure 7. Classical linear time-invariant filtering structure compared with the proposed linear time-varying SOS on an exemplary test signal.
In the first experiment, we added a high-frequency (300 Hz) noise component on top of an otherwise simple step input signal. The dashed black trace in Figure 7 shows how the noise significantly distorts the step’s clean rise. A conventional time-invariant filter (blue line) reduces much of the noise but still exhibits a notable transient, including an overshoot and oscillations as it settles. By contrast, modifying a single second-order section to have time-varying coefficients (red line) shortens the settling period while maintaining effective noise suppression. This illustrates how the proposed approach can outperform a purely LTI filter when faced with strong high-frequency interference.
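An illustrative reconstruction of this first test is sketched below, reusing full_filter and settling_time from the previous sketches; the interference amplitude of 0.2 is an assumption, and the step is placed at n = 0 so that it coincides with the start of the optimized coefficient trajectory (trigger timing is discussed further in Section 5):

```python
# Illustrative reconstruction of the step-plus-300-Hz-interference test,
# reusing full_filter, settling_time, fs, and res from the earlier sketches.
t = np.arange(0, 0.5, 1 / fs)
x_noisy = 1.0 + 0.2 * np.sin(2 * np.pi * 300 * t)   # step at n = 0 plus 300 Hz interference
y_base = full_filter(x_noisy)                       # classical LTI response
y_prop = full_filter(x_noisy, tv_theta=res.x)       # proposed partially time-varying response
print("5% settling, LTI vs LTV:", settling_time(y_base), settling_time(y_prop))
```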
To further evaluate the robustness of the proposed method, we examined a more challenging case where the interfering signal lies just above the passband of the elliptic filter. Specifically, we added a 110 Hz sinusoidal component—which is only 10 Hz above the cutoff frequency of the sixth-order low-pass elliptic filter (Fc = 100 Hz)—to a step signal. This setup stresses the filter’s ability to suppress high-frequency content close to the transition band, while still preserving a rapid transient response.
Figure 8 illustrates the exemplary performance of both the classical and the proposed structures in this more challenging noisy environment.
Figure 8. Comparison of classical time-invariant filter and proposed time-varying SOS filter under step input with added 110 Hz interference. The 110 Hz component lies just above the 100 Hz cutoff, creating a near-edge suppression challenge. The time-varying design shows faster and cleaner convergence.
As shown in Figure 8, the classical time-invariant filter (blue) exhibits a relatively slow settling behavior with noticeable oscillations caused by the nearby frequency interference. In contrast, the proposed design with one time-varying SOS (red) converges more quickly and cleanly, effectively dampening the transient while maintaining similar steady-state accuracy. This result highlights the method’s strength in handling tight spectral proximity scenarios, which are common in dense signal environments.
Figure 9 presents the classical notch-type IIR filter compared to another implementation of the proposed method.
Figure 9. Comparison of a classical IIR notch filter with the proposed time-varying methodology.
In this experiment, we employ an IIR notch filter designed to attenuate a 100 Hz disturbance while preserving both the step component and any relevant lower frequencies. The dashed black curve again shows the input signal, now containing a step, a 5 Hz “useful” component, and strong 100 Hz noise. The notch filter’s time-invariant version (blue) substantially reduces the 100 Hz amplitude but still suffers from a longer transient and overshoot. By allowing one second-order section to have time-varying coefficients (red), the filter settles faster and more cleanly, with less oscillatory behavior. These results demonstrate that the proposed time-varying SOS technique can be seamlessly integrated into notch-type IIR filters, enabling them to reduce transient times while maintaining precise attenuation at the targeted notch frequency.
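A sketch of this notch-filter setup is shown below, using SciPy's iirnotch design (a single second-order notch section) with an assumed quality factor of Q = 10 and assumed component amplitudes; the time-varying counterpart would be obtained by applying the optimization of Section 3.5 to this one section.

```python
# Illustrative setup for the notch-filter experiment: a 100 Hz notch applied to
# a step plus a 5 Hz "useful" component plus 100 Hz interference. Q and the
# amplitudes are assumptions; they are not taken from the paper.
import numpy as np
from scipy import signal

fs = 1000.0
b_n, a_n = signal.iirnotch(100, Q=10, fs=fs)        # classical LTI notch at 100 Hz
t = np.arange(0, 0.5, 1 / fs)
x_test = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 100 * t)
y_notch = signal.lfilter(b_n, a_n, x_test)          # step and 5 Hz kept, 100 Hz attenuated
```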

5. Discussion

In this study, we found that introducing time-varying coefficients into only one second-order section can substantially shorten the overall transient time, offering a clear advantage over a fully time-invariant design. At the same time, the computational overhead remains relatively modest, since the method requires storing just a limited set of coefficients within a user-defined horizon rather than adapting the entire filter for the time-varying concept. These optimized coefficients, derived through a nonlinear routine that carefully balances settling speed and stability, cannot be readily approximated by a simple mathematical function or curve, so the most reliable approach is to store them directly. Although this adds some memory usage—particularly in low-power or embedded contexts—its footprint is still far smaller than that of a fully time-varying filter architecture. By confining the mechanism to one SOS, the structure preserves the benefits of time-varying behavior exactly where it is most needed, while leaving the rest of the filter in a low-complexity, stationary form. Moreover, empirical tests confirm that this targeted approach preserves steady-state performance across a range of signal and noise conditions. Thus, it provides a practical middle ground, allowing significant transient improvements in return for only a slight increase in storage and computational effort.
Although the approach of using time-varying coefficients in one second-order section has demonstrated notable benefits, it is not universally advantageous across all filter structures or use cases. For instance, Butterworth filters, with their relatively low selectivity, gain much less from this technique, suggesting that filters offering sharper roll-offs or more complex pole-zero placements stand to benefit most. Additionally, while the coefficient adaptation process proves straightforward under a “cold start,” where the filter begins with zero initial conditions, the performance in a “hot start” scenario—when the filter is already operating—requires further investigation. Another critical concern is determining the precise moment at which to initiate time-varying behavior; if it is triggered too early or too late, the potential gains can be diminished or overshadowed by unnecessary overhead. Likewise, rapid changes in the input signal, such as narrow “stairs,” can disrupt the predefined horizon if the filter coefficients are only optimized for a single transient event. These issues underscore the method’s current limitations and highlight areas where additional research—on trigger mechanisms, adaptive horizons, and robust designs for rapidly shifting signals—remains necessary.

6. Conclusions

In this paper, we presented a novel method for introducing time-varying coefficients into a single second-order section (SOS), thereby reducing transient times without incurring excessive computational overhead. By selectively adapting only one SOS, we strike a balance between fully time-varying filtering and a purely static approach. Our analysis indicates that filters with sharper roll-offs, such as elliptic or Chebyshev designs, derive the most significant benefits, although even low-selectivity filters show moderate improvements. Practical demonstrations revealed a pronounced reduction in settling time under varying noise levels and signal conditions, underscoring the technique’s robustness. We also highlighted the importance of carefully choosing the horizon and triggering mechanism to maximize performance gains. Overall, our findings confirm that a selective time-varying strategy can markedly enhance responsiveness while preserving the filter’s stability.
The time-varying SOS strategy has immediate relevance in scenarios demanding both rapid settling and stable operation, such as live audio signal processing and real-time communication systems. By reducing the filter’s transient period, latency-sensitive applications—like wireless data transmission, virtual reality audio rendering, and adaptive control systems—can benefit from quicker responsiveness. Automotive sensor fusion is another promising domain, where faster convergence aids in smoother integration of rapidly updated sensor inputs. The method’s targeted adaptation also suits power-constrained embedded devices that need to optimize transient behavior without expending excessive computational resources. Finally, its flexibility and moderate complexity make it a practical choice for any system where short-lived disturbances must be addressed quickly while maintaining robust long-term performance.
Future work will involve refining the triggering mechanism for time-varying updates, especially in “hot start” scenarios where the filter has already been operating. Another promising line of research is to develop dynamic or adaptive horizons that better handle signals with narrow “stairs” or sudden changes. Extending this partial adaptation strategy to higher-order or multidimensional filters also merits investigation, as it may open doors to more sophisticated applications like 3D audio or MIMO communication systems. Real-time implementation is equally important, ensuring that the computational overhead of coefficient updates remains feasible for embedded and low-power devices. Finally, an in-depth exploration of the cost–benefit balance between the extra memory needs for storing time-varying parameters and the performance gains in transient reduction can guide practical design choices.

Author Contributions

Conceptualization, P.O. and J.P.; methodology, P.O. and J.P.; software, P.O.; validation, P.O. and J.P.; formal analysis, J.P.; investigation, P.O. and J.P.; resources, P.O.; data curation, P.O.; writing—original draft preparation, P.O.; writing—review and editing, J.P.; visualization, P.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Niedźwiecki, M.; Pietrzak, P. High-Precision FIR-Model-Based Dynamic Weighing System. IEEE Trans. Instrum. Meas. 2016, 65, 2349–2359. [Google Scholar] [CrossRef]
  2. Niedźwiecki, M.; Gańcza, A.; Żuławiński, W.; Wyłomańska, A. Robust local basis function algorithms for identification of time-varying FIR systems in impulsive noise environments. In Proceedings of the 2024 IEEE 63rd Conference on Decision and Control (CDC), Milan, Italy, 16–19 December 2024; pp. 3463–3470. [Google Scholar] [CrossRef]
  3. Jaskuła, M.; Kaszyński, R. Using the parametric time-varying analog filter to average-evoked potential signals. IEEE Trans. Instrum. Meas. 2004, 53, 709–715. [Google Scholar] [CrossRef]
  4. Wang, D.; Bazzi, A.; Chafii, M. RIS-Enabled Integrated Sensing and Communication for 6G Systems. In Proceedings of the 2024 IEEE Wireless Communications and Networking Conference (WCNC), Dubai, United Arab Emirates, 21–24 April 2024; pp. 1–6. [Google Scholar] [CrossRef]
  5. Yu, C.; Gu, R.; Wang, Y. The Application of Improved Variable Step-Size LMS Algorithm in Sonar Signal Processing. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 12–14 June 2020; pp. 1856–1860. [Google Scholar] [CrossRef]
  6. Ali, A.; Moinuddin, M.; Al-Naffouri, T.Y. NLMS is More Robust to Input-Correlation Than LMS: A Proof. IEEE Signal Process. Lett. 2022, 29, 279–283. [Google Scholar] [CrossRef]
  7. Pei, S.-C.; Tseng, C.-C. Elimination of AC interference in electrocardiogram using IIR notch filter with transient suppression. IEEE Trans. Biomed. Eng. 1995, 42, 1128–1132. [Google Scholar] [PubMed]
  8. Kocoń, S.; Piskorowski, J. Time-Varying IIR Notch Filter with Reduced Transient Response Based on the Bézier Curve Pole Radius Variability. Appl. Sci. 2019, 9, 1309. [Google Scholar] [CrossRef]
  9. Dewald, K.; Bersier, A.; Gardella, P.J.; Jacoby, D. IIR filter transient suppression by signal shifting. In Proceedings of the 2014 IEEE Biennial Congress of Argentina (ARGENCON), Bariloche, Argentina, 11–13 June 2014; pp. 153–158. [Google Scholar]
  10. Piskorowski, J. Digital Q-Varying Notch IIR Filter with Transient Suppression. IEEE Trans. Instrum. Meas. 2010, 59, 866–872. [Google Scholar] [CrossRef]
  11. Tan, L.; Jiang, J.; Wang, L. Pole-Radius-Varying IIR Notch Filter with Transient Suppression. IEEE Trans. Instrum. Meas. 2012, 61, 1684–1691. [Google Scholar] [CrossRef]
  12. Okoniewski, P.; Piskorowski, J. Short Transient Parameter-Varying IIR Filter Based on Analog Oscillatory System. Appl. Sci. 2019, 9, 2013. [Google Scholar] [CrossRef]
  13. Okoniewski, P.; Osypiuk, R.; Piskorowski, J. Short-Transient Discrete Time-Variant Filter Dedicated for Correction of the Dynamic Response of Force/Torque Sensors. Electronics 2020, 9, 1291. [Google Scholar] [CrossRef]
  14. Gutierrez de Anda, M.A.; Meza Dector, I. A second-order lowpass parameter-varying filter based on the interconnection of first-order stages. IEEE Trans. Circuits Syst. I 2011, 58, 1840–1853. [Google Scholar] [CrossRef]
  15. de la Garza, K.T.; Gomez, J.T.; de Lamare, R.C.; Garcia, M.J.F.-G. A variational approach for designing infinite impulse response filters with time-varying parameters. IEEE Trans. Circuits Syst. I 2018, 65, 1303–1313. [Google Scholar] [CrossRef]
  16. Amini, S.; Mozaffari Tazehkand, B. Design of feedback-structured IIR notch filter with transient suppression using gain variation. Biomed. Signal Process. Control 2022, 71, 103075. [Google Scholar] [CrossRef]
  17. Sharma, A.; Kumar Rawat, T.; Agrawal, A. Design and FPGA implementation of lattice wave digital notch filter with minimal transient duration. IET Signal Process. 2020, 14, 440–447. [Google Scholar] [CrossRef]
  18. Jayant, H.K.; Rana, K.P.S.; Kumar, V.; Nair, S.S.; Mishra, P. Efficient IIR notch filter design using minimax optimization for 50Hz noise suppression in ECG. In Proceedings of the International Conference on Signal Processing, Computing and Control (ISPCC), Waknaghat, India, 24–26 September 2015; pp. 290–295. [Google Scholar]
  19. Laroche, J. On the stability of time-varying recursive filters. J. Audio Eng. Soc. 2007, 55, 460–471. [Google Scholar]
  20. Werner, K.J.; McClellan, R. Time-Varying Filter Stability and State Matrix Products. In Proceedings of the 25th International Conference on Digital Audio Effects, Vienna, Austria, 6–10 September 2022; pp. 101–108. [Google Scholar]
  21. Kamen, E.W. The poles and zeros of a linear time-varying system. Linear Algebra Its Appl. 1988, 98, 263–289. [Google Scholar] [CrossRef]
  22. Zhu, J.; Johnson, C.D. Unified canonical forms for matrices over a differential ring. Linear Algebra Its Appl. 1991, 147, 201–248. [Google Scholar] [CrossRef]
  23. Ye, H.; Song, Y. Backstepping design embedded with time-varying command filters. IEEE Trans. Circuits Syst. II 2022, 69, 2832–2836. [Google Scholar] [CrossRef]
  24. Jelfs, B.; Sun, S.; Ghorbani, K.; Gilliam, C. An adaptive all-pass filter for time-varying delay estimation. IEEE Signal Process. Lett. 2021, 28, 628–632. [Google Scholar] [CrossRef]
  25. Wu, L.; Zhao, Y.; He, L.; He, S.; Ren, G. A time-varying filtering algorithm based on short-time fractional Fourier transform. In Proceedings of the International Conference on Computing, Networking and Communications (ICNC), Big Island, HI, USA, 17–20 February 2020; pp. 555–560. [Google Scholar]
  26. Cui, L.; Wang, X.; Wang, H.; Ma, J. Research on remaining useful life prediction of rolling element bearings based on time-varying Kalman filter. IEEE Trans. Instrum. Meas. 2020, 69, 2858–2867. [Google Scholar] [CrossRef]
  27. Chilakawad, A.; Kulkarni, P.N. Time varying IIR filters for binaural hearing aids. In Proceedings of the International Conference on Smart Systems for Applications in Electrical Sciences (ICSSES), Tumakuru, India, 3–4 May 2024; pp. 1–5. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
