Article

Transient Time Reduction in Time-Varying Digital Filters via Second-Order Section Optimization

by
Piotr Okoniewski
* and
Jacek Piskorowski
Faculty of Electrical Engineering, West Pomeranian University of Technology, 70-310 Szczecin, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(12), 6512; https://doi.org/10.3390/app15126512
Submission received: 28 March 2025 / Revised: 4 June 2025 / Accepted: 6 June 2025 / Published: 10 June 2025

Abstract

Time-varying digital filters are widely used in dynamic signal processing applications, but their transient response can significantly impact performance, particularly in real-time systems. This study focuses on reducing transient time in time-varying filters through second-order section (SOS) optimization. By employing a numerical optimization approach, we selectively adjust the coefficients of a single SOS within a higher-order filter to minimize the transient period while maintaining overall stability. Using a sequential quadratic programming (SQP) algorithm, we determine a time-varying coefficient trajectory over a finite horizon, ensuring a rapid convergence to steady-state behavior. Experimental results demonstrate that this targeted coefficient adaptation reduces transient time by up to 80% compared to conventional static designs, with minimal computational overhead. Additionally, a comparative analysis with traditional linear time-invariant (LTI) filters highlights the advantage of this method in suppressing transient oscillations while preserving long-term filter characteristics. The proposed approach provides a practical and efficient strategy for enhancing filter responsiveness in applications requiring both stability and real-time adaptability. These findings suggest that selective time variation in SOS decomposition can be a valuable tool in digital filter design, improving efficiency without excessive memory or processing demands.

1. Introduction

Digital filters are fundamental tools in signal processing, widely used in applications ranging from telecommunications and control engineering to audio engineering. Traditionally, filters are designed with fixed coefficients optimized to meet specific performance criteria such as frequency response and stability. However, in dynamic systems or time-varying environments, static filters may fail to deliver sufficient responsiveness to changing signal characteristics [1,2,3]. To address this, researchers have developed both adaptive filtering techniques—such as LMS, RLS, or Wiener filters—and time-varying filter architectures that modify filter characteristics over time [4,5,6]. Adaptive filters typically update their coefficients in real time, guided by an error signal derived from system feedback. While powerful, they are often computationally intensive and may require high memory throughput, making them less suitable for real-time or embedded systems with strict resource constraints. In contrast, time-varying filters with precomputed parameter trajectories offer a fixed, feedforward method for enhancing transient performance without incurring significant runtime/online cost.
Time-varying filters, characterized by coefficients that change over time, offer enhanced flexibility and adaptability in signal processing. Despite their advantages, time-varying filters introduce new challenges, such as increased computational complexity and potential stability issues. One critical concern is transient time—the period required for a filter to settle into its steady-state response (typically within 2%). This aspect is particularly important in real-time applications, where delays can degrade system performance. Higher-order filters, while providing better selectivity, are especially prone to prolonged transients, further complicating the design process.
This paper investigates the use of numerical optimization of second-order section (SOS) coefficients to minimize transient times in time-varying filters, offering a pathway to more efficient signal processing.
Transient time is a crucial factor in filter performance, representing the duration required for a filter to reach its steady-state response after a change in input or configuration. Additionally, a filter’s initial conditions (hot/cold start) significantly influence transient duration. In many applications, especially real-time systems, long transient times can introduce unacceptable delays, reducing overall system responsiveness and accuracy. Higher-order filters, while providing precise control over signal characteristics, often exhibit longer transient times, making their practical implementation more challenging. Reducing transient time without compromising filter stability and performance is a significant design challenge that demands innovative solutions.
The primary objective of this paper is to minimize transient time in digital filters with time-varying coefficients. To achieve this, we focus on decomposing higher-order filters into second-order sections (SOSs) to improve numerical stability and parameter control. A numerical optimization approach is proposed to fine-tune the coefficients of one or more SOSs, effectively reducing transient behavior. The goal is to strike a balance between transient time reduction and other critical performance metrics, including stability, steady-state accuracy, and memory consumption. By demonstrating the effectiveness of this method, this study aims to provide a practical framework for designing time-varying filters with improved responsiveness while maintaining efficient memory usage for storing time-varying coefficients.

2. Related Work

2.1. Literature Review

One early approach to mitigate long transients was to manipulate a filter’s initial conditions rather than its coefficients. Preloading the filter output with appropriate non-zero initial values can counteract the natural transient decay. Pei and Tseng [7] demonstrated this technique for an ECG notch filter, effectively suppressing startup oscillation in a 50/60 Hz notch by initializing the filter’s output to cancel the impulse response. Later, Kocoń and Piskorowski extended this idea by designing an FIR notch filter derived from an IIR prototype with carefully set initial conditions, achieving significant transient suppression [8]. Another method is to analytically compute the required initial outputs of internal delay elements (filter memory) for a given initial input, often by projecting the initial portion of the input signal onto the filter’s homogeneous response. Dewald et al. [9] introduced an iterative signal-shifting algorithm to automate this process. In their approach, the input signal is shifted in time and fed into the filter repeatedly, adjusting the shift until the filter’s internal memory converges to values that produce no transient. These methods treat the filter’s transient as a signal to be canceled via initial state injection.
Additionally, a more general strategy is to make the filter coefficients time-varying during the transient. Instead of keeping a fixed pole and zero configuration, the filter starts with a gentler configuration (to produce a fast decay) and then smoothly transitions to the final, more selective configuration. Piskorowski’s 2010 work introduced a Q-factor-varying notch filter where the pole radius is initially reduced (broadening the notch but damping the ringing) and then increased exponentially to the desired high Q value [10]. This approach significantly shortened the transient duration while still producing a sharp notch in steady state.
Similarly, Tan et al. [11] proposed a pole-radius variation scheme for notch filters, demonstrating transient suppression by dynamically adjusting the pole radius over time. In the low-pass filter domain, Okoniewski and Piskorowski [12] developed a time-varying IIR low-pass filter based on an analog oscillatory model. They discretized a second-order system and temporarily modulated its natural frequency and damping ratio to quickly attenuate the step-response transient. By adjusting these parameters (essentially the pole locations) during the initial response, they suppressed overshoot and reduced settling time. An iterative optimization procedure was applied to fine-tune the parameter trajectory for minimal settling time. This trend has continued in various applications, including sensing and robotics. For instance, a time-variant filter for force/torque sensors modulates its coefficients at contact onset, reducing impact force transients [13].
Gutierrez de Anda and Meza Dector [14] demonstrated a second-order low-pass filter that automatically adjusts its parameters (via a nonlinear control loop) immediately after a sudden change in the input. By momentarily widening the filter’s bandwidth and/or altering its damping factor, the filter settles much faster than a conventional static design, all while preserving the intended low-pass frequency response once the parameters revert. They also analyzed the stability of this linear time-varying (LTV) filter, showing that it maintains bounded-input bounded-output stability under the parameter adaptation scheme. Building on the idea of time-varying coefficients, de la Garza et al. [15] proposed a variational approach to design the optimal time-variation trajectory for an IIR filter’s cutoff frequency. Using calculus of variations, they derived a closed-form time-course for the filter’s pole movement that minimizes the rise time (settling time) of the step response. This optimal time-varying design achieved a shorter transient than previous ad hoc parameter variation rules, highlighting the benefit of an optimized coefficient schedule.
Amini and Mozaffari Tazehkand [16] recently presented a feedback-structured IIR notch filter with transient suppression achieved by continuously varying the feedback gain. In their design, the pole radius (which determines the notch sharpness and transient length) is kept lower than its final value at the moment of filter startup, thus damping the initial response. Sharma et al. [17] took a similar time-varying approach by leveraging a lattice wave digital filter structure for the notch. Their lattice notch filter begins with a relatively wider notch (lower Q) and progressively reduces the notch bandwidth as time advances, effectively shortening the ringing duration. The lattice wave digital implementation ensures numerical robustness and was shown to produce minimal transient overshoot, which the authors validated through FPGA implementation results.
In addition to time-varying strategies, there are also design-time techniques to improve notch filter performance. Jayant et al. [18] pursued a minimax optimization approach for a fixed-coefficient notch filter targeting 50 Hz noise in ECG. By formulating the filter design as an optimization problem, they adjusted the pole-zero locations to minimize the worst-case error between the desired ideal notch response and the realized filter response.
Introducing time variation in filter coefficients raises important stability considerations. In linear time-invariant (LTI) filters, stability is assured by having all poles inside the unit circle (for discrete-time filters). With time-varying filters, however, poles move over time and traditional LTI stability criteria no longer directly apply [19,20].
Kamen’s work [21] introduced an algebraic theory of poles and zeros for linear time-varying systems, providing a groundwork for understanding how system eigenvalues generalize in the time-varying case. This framework helped define notions of “instantaneous” poles or spectral values that evolve with time, which is critical for discussing stability beyond the static pole locations of LTI systems. Notably, Zhu [22] presented a necessary and sufficient criterion for exponential stability of linear time-varying (LTV) discrete-time systems.
Recent studies have extended time-varying filter techniques across various domains. For instance, Ye and Song [23] embed a command filter with time-varying gain into a backstepping control scheme, simplifying the controller design for high-order systems while preserving prescribed transient performance. Jelfs et al. [24] develop an adaptive all-pass filter to track nonstationary propagation delays, employing an LMS-style coefficient update that continuously adjusts a filter’s parameters for accurate time-varying delay estimation. Furthermore, Wu et al. [25] introduce a novel time-varying filtering algorithm based on a short-time fractional Fourier transform with time-dependent transform order, achieving effective filtering of multi-component signals whose spectral characteristics change over time.
In addition to these algorithmic developments, time-varying filters have proven effective in specific applications. Cui et al. [26] present a Kalman filtering approach for bearing prognostics in which the filter’s dynamics evolve with the system’s degradation state, allowing the model to automatically adapt across different wear stages and improve remaining-life prediction accuracy. Similarly, in the audio domain, Chilakawad and Kulkarni [27] implement time-varying IIR filtering in binaural hearing aids to accommodate device nonlinearities and dynamically changing acoustic environments, illustrating the broad applicability of time-varying filter design in diverse real-world systems.

2.2. Gap Identification

A key challenge in the current state of the art in time-varying filter design is the flexible shaping of transient time. In this paper, we shift the focus to another practical aspect of these techniques: the memory required to store time-varying parameters. Previous studies have proposed selecting coefficient sets so that parameter values can be approximated using simple curve-fitting techniques, reducing the need for extensive storage.
This paper introduces an alternative approach to selecting coefficients, aiming to achieve efficient transient time reduction while minimizing memory usage. Specifically, the method focuses on optimizing a single section within a decomposed higher-order digital filter, balancing transient performance with reduced storage requirements.

3. Materials and Methods

3.1. Time-Varying Filters

The concept of time-varying coefficients has primarily been explored in the context of adaptive filtering techniques, where the filter’s coefficients evolve over time to address specific problems or align with the chosen method. In this work, we propose using a predefined control rule to achieve the fastest possible response from the equalized filter while preserving its long-term frequency characteristics. Unlike adaptive filtering, our approach involves calculating the control rule once during the design phase, allowing it to be applied consistently whenever a relevant excitation occurs. The difference equation of a linear time-variant (LTV) digital system can be written as follows:
y[n] = −Σ_{k=1}^{M} a_k[n]·y[n−k] + Σ_{k=0}^{N} b_k[n]·x[n−k],
where
M — the number of previous outputs y[n−k] included in the equation;
N — the number of current and previous inputs x[n−k] considered;
a_k[n] — the time-varying coefficients of the feedback terms (previous outputs);
b_k[n] — the time-varying coefficients of the feedforward terms (inputs);
y[n] — the current output at time step n;
x[n] — the current input at time step n.
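As a minimal illustration, the LTV difference equation above can be evaluated directly; the sketch below is a plain Python/NumPy implementation (used here in place of the authors' MATLAB environment) in which each sample n draws its coefficients from precomputed tracks:

```python
import numpy as np

def ltv_filter(x, a, b):
    """Evaluate y[n] = -sum_{k=1}^{M} a_k[n]*y[n-k] + sum_{k=0}^{N} b_k[n]*x[n-k].

    a: shape (len(x), M), row n holds a_1[n] ... a_M[n].
    b: shape (len(x), N+1), row n holds b_0[n] ... b_N[n].
    """
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = 0.0
        for k in range(b.shape[1]):          # feedforward terms
            if n - k >= 0:
                acc += b[n, k] * x[n - k]
        for k in range(1, a.shape[1] + 1):   # feedback terms
            if n - k >= 0:
                acc -= a[n, k - 1] * y[n - k]
        y[n] = acc
    return y

# With constant coefficient tracks this reduces to an ordinary LTI IIR filter:
# a single pole at z = -0.5 driven by an impulse gives 1, -0.5, 0.25, ...
print(ltv_filter([1.0, 0.0, 0.0], np.full((3, 1), 0.5), np.ones((3, 1))))
```

With time-varying tracks, each row of `a` and `b` simply holds that sample's coefficient set, which is exactly the storage model discussed later in the paper.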

3.2. Transient Behavior Analysis

Transient time in digital filters is the interval required for the filter’s output to stabilize after a sudden change in the input signal, such as a step input. The duration of the transient response is influenced by the filter’s parameters, including the pole locations, which dictate stability and speed of convergence. Poles closer to the unit circle in the z-plane lead to longer transient times, as the system response decays more slowly. Additionally, the filter order is a significant factor. Higher-order filters typically produce more complex and extended transient behaviors due to the interplay of multiple poles and zeros.
Figure 1 depicts an exemplary step response of a 4th-order low-pass Butterworth filter. The measured 1% transient time settled at 0.024 s after the step excitation.
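A settling-time measurement of this kind can be sketched in a few lines of Python; the sampling and cutoff frequencies below are assumptions, since the text does not state them for Figure 1:

```python
import numpy as np
from scipy import signal

# Assumed design parameters (not specified in the text for Figure 1).
fs = 1000.0   # sampling frequency, Hz
fc = 100.0    # cutoff frequency, Hz

sos = signal.butter(4, fc, fs=fs, output='sos')   # 4th-order Butterworth
step = np.ones(int(0.1 * fs))                     # 0.1 s unit step
y = signal.sosfilt(sos, step)

def settling_time(y, target, tol, fs):
    """First time after which |y - target| stays within tol * |target|."""
    outside = np.abs(y - target) > tol * abs(target)
    idx = np.nonzero(outside)[0]
    return (idx[-1] + 1) / fs if idx.size else 0.0

t_settle = settling_time(y, target=1.0, tol=0.01, fs=fs)
print(f"1% settling time: {t_settle * 1e3:.1f} ms")
```

The same routine with `tol=0.05` yields the 5% threshold used in the comparison tables later in the paper.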
Time-varying coefficients can further complicate transient behavior, introducing variability that may prolong or destabilize the response, underscoring the need for careful parameter optimization.

3.3. Second-Order Section Decomposition

Higher-order digital filters often encounter numerical challenges, such as coefficient quantization sensitivity and overflow, which can degrade performance in low-precision or fixed-point implementations. By decomposing the filter into multiple second-order sections (SOS), these numerical issues are localized and thus more effectively mitigated. Each SOS is represented as a biquad transfer function:
H(z) = (b_0 + b_1 z^{−1} + b_2 z^{−2}) / (1 + a_1 z^{−1} + a_2 z^{−2}).
This corresponds to the time-domain difference equation:
y[n] = −a_1 y[n−1] − a_2 y[n−2] + b_0 x[n] + b_1 x[n−1] + b_2 x[n−2].
A higher-order IIR filter of order N is typically realized by cascading M = ⌈N/2⌉ second-order sections, with an optional first-order section for odd-order filters:
H_total(z) = ∏_{i=1}^{M} H_i(z).
The decomposition into SOS format is commonly performed using polynomial factorization of the filter’s numerator and denominator.
SOS, with its fewer coefficients and simpler pole-zero structure, is less susceptible to instability and rounding errors. This modular design facilitates targeted optimization, as each second-order section can be tuned independently for optimal performance, reducing the overall computational effort. From a resource perspective, SOS decomposition lowers memory usage and power consumption, making it particularly valuable for real-time and embedded systems. It also enhances flexibility by allowing sections that handle more critical frequency components to use higher-precision resources as needed. Moreover, the ability to add or remove sections without overhauling the entire filter design increases scalability—an important benefit in adaptive or time-varying scenarios where filter parameters may need to change rapidly. Ultimately, SOS decomposition provides a robust framework for achieving high filter performance, efficient resource utilization, and straightforward optimization in higher-order digital filter designs.
This paper extends the benefits of SOS decomposition by introducing time-varying coefficients over a finite horizon to improve the transient time of the whole design.

3.4. Decomposition Approach

Decomposing a higher-order digital filter into second-order sections (SOSs) provides a stable and numerically efficient means of realizing the transfer function. In this work, we use MATLAB R2023b’s built-in tf2sos function, which takes the filter’s numerator and denominator polynomials (in transfer-function form) and factors them into multiple second-order blocks. Each block corresponds to a pair of conjugate poles (and zeros), thereby reducing sensitivity to rounding errors and making it straightforward to manage overflow concerns. We opt for a cascade structure since it simplifies the allocation of scale factors for each block, ensuring a more uniform distribution of gain and helping maintain stability. Moreover, by breaking the filter into multiple SOSs, we can individually optimize or tweak specific sections as needed, offering flexibility for further performance refinements without rewriting the entire filter design.
Figure 2 presents the magnitude response of the full 6th-order low-pass elliptic filter (sampling frequency Fs = 1 kHz; cutoff frequency Fc = 100 Hz; passband ripple Rp = 1 dB; stopband attenuation Rs = 40 dB) and the magnitude responses of the three second-order sections.
Figure 3 depicts the step responses of the full and SOS decomposition of the same low-pass elliptic filter.
We propose to focus on one of the sections and apply the time-varying coefficient methodology to reduce the transient time of the whole design.
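The decomposition step can be sketched with scipy's `ellip` and `tf2sos` standing in for the MATLAB routines, using the filter specifications quoted above; the check confirms that the cascade of three biquads reproduces the full sixth-order response:

```python
import numpy as np
from scipy import signal

# 6th-order low-pass elliptic filter with the specifications given in the
# text (Fs = 1 kHz, Fc = 100 Hz, Rp = 1 dB, Rs = 40 dB). scipy's ellip and
# tf2sos play the role of the MATLAB design and decomposition functions.
fs = 1000.0
b, a = signal.ellip(6, 1, 40, 100, fs=fs)   # transfer-function form
sos = signal.tf2sos(b, a)                   # three second-order sections

# The cascade of the three biquads must reproduce the full frequency response.
w, h_full = signal.freqz(b, a, worN=512, fs=fs)
h_cascade = np.ones_like(h_full)
for section in sos:
    _, h_sec = signal.freqz(section[:3], section[3:], worN=512, fs=fs)
    h_cascade *= h_sec

print("sections:", sos.shape[0],
      "max cascade error:", np.max(np.abs(h_full - h_cascade)))
```

Each row of `sos` holds one biquad's coefficients (b0, b1, b2, 1, a1, a2), so a single section can be pulled out and modified without touching the rest of the cascade, which is exactly the freedom the proposed method exploits.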

3.5. Numerical Optimization

We focus on optimizing a single second-order section (SOS) whose time-varying difference equation can be expressed as in (3), where a_k[n] and b_k[n] are the time-varying coefficients for k ∈ {0, 1, 2}. Over a finite horizon H, these coefficients are allowed to vary from sample to sample, while for n > H they revert to their stationary values, which ensures long-term stability. The objective function is formulated to minimize the transient response by penalizing deviations of the filter output y[n] from a desired steady-state target y_ss during the initial H samples:
J(θ) = Σ_{n=1}^{H} (y[n; θ] − y_ss)²,
where
θ = {a_1[n], a_2[n], b_0[n], b_1[n], b_2[n] | n = 1, …, H},
collectively represents all time-varying coefficients a_k[n] and b_k[n] for 1 ≤ n ≤ H. The cost function J(θ) encodes the principle that the filter output should converge as quickly as possible to y_ss, thereby reducing the transient. Additionally, a set of stability constraints ensures that the time-varying SOS remains a valid candidate. Further details on time-varying filter stability can be found in the authors’ previous works. To solve this constrained minimization problem, we employ the sequential quadratic programming (SQP) algorithm for its robust handling of nonlinear constraints. We have also proposed a bounding constraint of the form
−1 ≤ a_k[n], b_k[n] ≤ 1,
which is a simple yet effective means to prevent runaway filter gains that might otherwise prolong or destabilize the transient.
Once optimized, the coefficient set is stored and used in real-time operation during known transient events (e.g., a step input or signal onset). After H samples, the coefficients revert to their stationary (LTI) values:
a_i[n] = a_i^{LTI},   b_i[n] = b_i^{LTI}   for n > H.
This hybrid approach maintains the filter’s desired long-term frequency response while minimizing initial settling time.
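The optimization loop can be sketched with scipy's SLSQP solver (an SQP implementation) in place of the MATLAB routine. The biquad below is hypothetical, chosen only to exhibit a slowly decaying oscillatory step response; the paper's actual SOS #2 coefficients are given in its tables. The cost is the objective defined above, evaluated over a horizon of H = 5 samples, with the coefficients reverting to their stationary values afterwards:

```python
import numpy as np
from scipy.optimize import minimize

H = 5        # optimization horizon, in samples
N_SIM = 40   # simulated step-response length

# Hypothetical stationary biquad with a slowly decaying, oscillatory step
# response (pole radius ~0.95); it stands in for the paper's SOS #2.
a_lti = np.array([-0.19, 0.9025])         # a1, a2
b_lti = np.array([0.10, 0.20, 0.10])      # b0, b1, b2
y_ss = b_lti.sum() / (1.0 + a_lti.sum())  # steady-state step-response value

def simulate(theta):
    """Step response: time-varying coefficients for n < H, LTI afterwards."""
    coeffs = np.asarray(theta).reshape(H, 5)   # row n: a1, a2, b0, b1, b2
    y = np.zeros(N_SIM)
    for n in range(N_SIM):
        a1, a2, b0, b1, b2 = coeffs[n] if n < H else (*a_lti, *b_lti)
        ym1 = y[n - 1] if n >= 1 else 0.0
        ym2 = y[n - 2] if n >= 2 else 0.0
        xm1 = 1.0 if n >= 1 else 0.0           # unit-step input history
        xm2 = 1.0 if n >= 2 else 0.0
        y[n] = -a1 * ym1 - a2 * ym2 + b0 * 1.0 + b1 * xm1 + b2 * xm2
    return y

def cost(theta):
    """J(theta): squared deviation from y_ss over the horizon."""
    return float(np.sum((simulate(theta)[:H] - y_ss) ** 2))

theta0 = np.tile(np.concatenate([a_lti, b_lti]), H)  # start from LTI values
res = minimize(cost, theta0, method='SLSQP',
               bounds=[(-1.0, 1.0)] * theta0.size)
print("J before:", cost(theta0), " J after:", res.fun)
```

The bounds implement the ±1 coefficient constraint from the text, and `simulate` already performs the reversion to the stationary coefficients after the horizon, mirroring the hybrid operation described above.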

4. Results

4.1. Optimization Outcomes

As an example, let us use the elliptic filter introduced in Section 3.4. Figure 4 presents the step response of #2 SOS of the final design.
Table 1 presents the coefficients of the stationary #2 SOS.
Table 2 outlines the performance of the section in terms of transient time duration.
As one can notice, this section’s step response is characterized by significant ripple behavior.
By introducing the iterative SQP solver described in Section 3.5, we obtained a new time-varying SOS with a significantly improved transient time. Table 3 and Table 4 present the set of coefficients (a_k[n] and b_k[n]) that change over the span of five samples (the horizon).
As mentioned in Section 3.5, after reaching the horizon (five samples) the coefficients settle on the original #2 SOS parameters to preserve the desired frequency characteristic. Figure 5 outlines the graphical comparison of the LTI and LTV #2 SOS instances.
Table 5 summarizes the improvement in the reduction in the transient time by using the time-varying coefficients.

4.2. Comparison with Baseline

The previous section proved the usefulness of the time-varying concept in terms of reducing the transient time of one of the second-order sections. For the whole design to be useful, one needs to focus on the performance of the final (full) filter design. Figure 6 presents the comparison of the complete static solution with the novel approach to improving one of the SOS parts.
Figure 6 compares the step responses of a time-invariant, SOS-based digital filter (blue dashed line) with a time-varying, SOS-based design (red solid line). The baseline refers to a standard sixth-order elliptic filter, implemented using three fixed second-order sections. All coefficients remain constant over time, and no transient optimization is applied. In the proposed LTV design, the same sixth-order elliptic filter structure is used, but only one SOS (#2) has its coefficients varied across five samples based on the SQP-optimized trajectory. The other two SOSs remain identical to those in the baseline. Notably, the time-varying filter converges more rapidly to the steady-state level, displaying less overshoot and fewer oscillations during the early samples. This outcome underscores the advantage of selectively adjusting SOS coefficients to reduce unwanted oscillations and accelerate the settling process, reducing the transient time of the filtering structure. Table 6 summarizes the overall improvement in the reduction in the transient time of the complete design when compared to the classical time-invariant structure.
As one can notice, the reduction in the transient time has reached up to 80% for the 5% threshold, which should be considered a major improvement.

4.3. Robustness Analysis

A natural way to highlight the benefits of introducing time-varying coefficients into a single second-order section is to compare how both the classical, fully LTI filter and the partially time-varying design perform under changing signal conditions. Specifically, analyzing their time-domain behavior when subjected to different noise levels or varying signal frequencies provides clear evidence of how transient response and overall performance are impacted. By overlaying their outputs on a noisy input signal, one can observe which approach settles faster and better suppresses undesired oscillations. This direct, side-by-side comparison helps quantify whether allowing even a single SOS to vary in time can significantly reduce the transient period without degrading the filter’s long-term characteristics.
Figure 7 illustrates the performance of both the classical and the proposed structures in a noisy environment.
In the first experiment, we added a high-frequency (300 Hz) noise component on top of an otherwise simple step input signal. The dashed black trace in Figure 7 shows how the noise significantly distorts the step’s clean rise. A conventional time-invariant filter (blue line) reduces much of the noise but still exhibits a notable transient, including an overshoot and oscillations as it settles. By contrast, modifying a single second-order section to have time-varying coefficients (red line) shortens the settling period while maintaining effective noise suppression. This illustrates how the proposed approach can outperform a purely LTI filter when faced with strong high-frequency interference.
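The LTI baseline of this first experiment can be reproduced in a few lines; the step time and noise amplitude below are assumptions, since the text does not give them numerically:

```python
import numpy as np
from scipy import signal

# Assumed test-signal parameters (step time and noise amplitude are not
# stated numerically in the text).
fs = 1000.0
t = np.arange(0.0, 0.5, 1.0 / fs)
x = np.where(t >= 0.05, 1.0, 0.0) + 0.3 * np.sin(2 * np.pi * 300.0 * t)

# Baseline: the static 6th-order elliptic filter from the text
# (Fc = 100 Hz, Rp = 1 dB, Rs = 40 dB), realized as second-order sections.
sos = signal.ellip(6, 1, 40, 100, fs=fs, output='sos')
y = signal.sosfilt(sos, x)

# Well after the step, the 300 Hz ripple should be suppressed by the 40 dB
# stopband while the step level is preserved (up to the 1 dB passband ripple).
tail = y[int(0.3 * fs):]
print("tail ripple (std):", np.std(tail), " tail level (mean):", np.mean(tail))
```

This sketch covers only the time-invariant (blue) trace of Figure 7; reproducing the red trace additionally requires the SQP-optimized coefficient trajectory for SOS #2 from the tables.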
To further evaluate the robustness of the proposed method, we examined a more challenging case where the interfering signal lies just above the passband of the elliptic filter. Specifically, we added a 110 Hz sinusoidal component—which is only 10 Hz above the cutoff frequency of the sixth-order low-pass elliptic filter (Fc = 100 Hz)—to a step signal. This setup stresses the filter’s ability to suppress high-frequency content close to the transition band, while still preserving a rapid transient response.
Figure 8 illustrates the performance of both the classical and the proposed structures in a more challenging noisy environment.
As shown in Figure 8, the classical time-invariant filter (blue) exhibits a relatively slow settling behavior with noticeable oscillations caused by the nearby frequency interference. In contrast, the proposed design with one time-varying SOS (red) converges more quickly and cleanly, effectively dampening the transient while maintaining similar steady-state accuracy. This result highlights the method’s strength in handling tight spectral proximity scenarios, which are common in dense signal environments.
Figure 9 presents the classical notch-type IIR filter compared to another implementation of the proposed method.
In this experiment, we employ an IIR notch filter designed to attenuate a 100 Hz disturbance while preserving both the step component and any relevant lower frequencies. The dashed black curve again shows the input signal, now containing a step, a 5 Hz “useful” component, and strong 100 Hz noise. The notch filter’s time-invariant version (blue) substantially reduces the 100 Hz amplitude but still suffers from a longer transient and overshoot. By allowing one second-order section to have time-varying coefficients (red), the filter settles faster and more cleanly, with less oscillatory behavior. These results demonstrate that the proposed time-varying SOS technique can be seamlessly integrated into notch-type IIR filters, enabling them to reduce transient times while maintaining precise attenuation at the targeted notch frequency.
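For reference, a time-invariant notch of the kind used in this experiment can be designed with scipy's `iirnotch`; the quality factor below is an assumption, as the paper does not specify its notch design. The check confirms that the 5 Hz useful component passes almost untouched while the 100 Hz disturbance is strongly attenuated:

```python
import numpy as np
from scipy import signal

fs = 1000.0
f0 = 100.0   # disturbance frequency to be removed
Q = 30.0     # notch quality factor (an assumption; the text does not give it)

b, a = signal.iirnotch(f0, Q, fs=fs)

# The notch should leave the 5 Hz "useful" component essentially untouched
# while strongly attenuating the 100 Hz disturbance.
w, h = signal.freqz(b, a, worN=4096, fs=fs)
g5 = abs(h[np.argmin(np.abs(w - 5.0))])
g100 = abs(h[np.argmin(np.abs(w - 100.0))])
print(f"gain at 5 Hz: {g5:.3f}, gain at 100 Hz: {g100:.4f}")
```

The sharper the notch (higher Q), the closer its poles sit to the unit circle and the longer its transient, which is precisely the trade-off the time-varying SOS technique is designed to relax.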

5. Discussion

In this study, we found that introducing time-varying coefficients into only one second-order section can substantially shorten the overall transient time, offering a clear advantage over a fully time-invariant design. At the same time, the computational overhead remains relatively modest, since the method requires storing just a limited set of coefficients within a user-defined horizon rather than adapting the entire filter for the time-varying concept. These optimized coefficients, derived through a nonlinear routine that carefully balances settling speed and stability, cannot be readily approximated by a simple mathematical function or curve, so the most reliable approach is to store them directly. Although this adds some memory usage—particularly in low-power or embedded contexts—its footprint is still far smaller than that of a fully time-varying filter architecture. By confining the mechanism to one SOS, the structure preserves the benefits of time-varying behavior exactly where it is most needed, while leaving the rest of the filter in a low-complexity, stationary form. Moreover, empirical tests confirm that this targeted approach preserves steady-state performance across a range of signal and noise conditions. Thus, it provides a practical middle ground, allowing significant transient improvements in return for only a slight increase in storage and computational effort.
Although the approach of using time-varying coefficients in one second-order section has demonstrated notable benefits, it is not universally advantageous across all filter structures or use cases. For instance, Butterworth filters, with their relatively low selectivity, gain much less from this technique, suggesting that filters offering sharper roll-offs or more complex pole-zero placements stand to benefit most. Additionally, while the coefficient adaptation process proves straightforward under a “cold start,” where the filter begins with zero initial conditions, the performance in a “hot start” scenario—when the filter is already operating—requires further investigation. Another critical concern is determining the precise moment at which to initiate time-varying behavior; if it is triggered too early or too late, the potential gains can be diminished or overshadowed by unnecessary overhead. Likewise, rapid changes in the input signal, such as narrow “stairs,” can disrupt the predefined horizon if the filter coefficients are only optimized for a single transient event. These issues underscore the method’s current limitations and highlight areas where additional research—on trigger mechanisms, adaptive horizons, and robust designs for rapidly shifting signals—remains necessary.

6. Conclusions

In this paper, we presented a novel method for introducing time-varying coefficients into a single second-order section (SOS), thereby reducing transient times without incurring excessive computational overhead. By selectively adapting only one SOS, we strike a balance between fully time-varying filtering and a purely static approach. Our analysis indicates that filters with sharper roll-offs, such as elliptic or Chebyshev designs, derive the most significant benefits, although even low-selectivity filters show moderate improvements. Practical demonstrations revealed a pronounced reduction in settling time under varying noise levels and signal conditions, underscoring the technique’s robustness. We also highlighted the importance of carefully choosing the horizon and triggering mechanism to maximize performance gains. Overall, our findings confirm that a selective time-varying strategy can markedly enhance responsiveness while preserving the filter’s stability.
The time-varying SOS strategy has immediate relevance in scenarios demanding both rapid settling and stable operation, such as live audio signal processing and real-time communication systems. By reducing the filter’s transient period, latency-sensitive applications—like wireless data transmission, virtual reality audio rendering, and adaptive control systems—can benefit from quicker responsiveness. Automotive sensor fusion is another promising domain, where faster convergence aids in smoother integration of rapidly updated sensor inputs. The method’s targeted adaptation also suits power-constrained embedded devices that need to optimize transient behavior without expending excessive computational resources. Finally, its flexibility and moderate complexity make it a practical choice for any system where short-lived disturbances must be addressed quickly while maintaining robust long-term performance.
Future work will involve refining the triggering mechanism for time-varying updates, especially in “hot start” scenarios where the filter has already been operating. Another promising line of research is to develop dynamic or adaptive horizons that better handle signals with narrow “stairs” or sudden changes. Extending this partial adaptation strategy to higher-order or multidimensional filters also merits investigation, as it may open doors to more sophisticated applications like 3D audio or MIMO communication systems. Real-time implementation is equally important, ensuring that the computational overhead of coefficient updates remains feasible for embedded and low-power devices. Finally, an in-depth exploration of the cost–benefit balance between the extra memory needs for storing time-varying parameters and the performance gains in transient reduction can guide practical design choices.

Author Contributions

Conceptualization, P.O. and J.P.; methodology, P.O. and J.P.; software, P.O.; validation, P.O. and J.P.; formal analysis, J.P.; investigation, P.O. and J.P.; resources, P.O.; data curation, P.O.; writing—original draft preparation, P.O.; writing—review and editing, J.P.; visualization, P.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Step response and transient time of an exemplary 4th-order low-pass Butterworth filter (sampling frequency Fs = 1 kHz; cutoff frequency Fc = 50 Hz).
Figure 2. Magnitude responses: 6th-order low-pass elliptic filter decomposition.
Figure 3. Step responses: 6th-order low-pass elliptic filter decomposition.
Figure 4. Step response of #2 SOS of the 6th-order elliptic filter.
Figure 5. Comparison of step responses of the #2 SOS designed with classical methodology and the proposed time-varying approach.
Figure 6. Comparison of full classical time-invariant filter with the novel design with one time-varying SOS.
Figure 7. Classical linear time-invariant filtering structure compared with the proposed linear time-varying SOS on an exemplary test signal.
Figure 8. Comparison of classical time-invariant filter and proposed time-varying SOS filter under step input with added 110 Hz interference. The 110 Hz component lies just above the 100 Hz cutoff, creating a near-edge suppression challenge. The time-varying design shows faster and cleaner convergence.
Figure 9. Comparison of a classical IIR notch filter with the proposed time-varying methodology.
Table 1. Linear time-invariant coefficients of the #2 SOS.

Coefficient index k | 0 | 1 | 2
a_k | 1 | −1.58314580285905 | 0.867241363463590
b_k | 1 | −1.38888837010072 | 1
Table 2. Transient time data for classical stationary SOS.

Transient Time Threshold | Sample Index | Time
1% | 54 | 0.053 s
2% | 47 | 0.046 s
5% | 31 | 0.030 s
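The transient times of Table 2 can, in principle, be reproduced by simulating the step response of the #2 SOS with the Table 1 coefficients and locating the last sample outside a threshold band around the steady-state value. The settling criterion, the 1 kHz sampling rate, and the indexing convention are our assumptions, so small offsets against the table are possible.

```python
# Hedged sketch: measuring the transient time of the static #2 SOS.
# The settling criterion and the sample-index convention are assumptions
# about how Table 2 was produced and may differ by one sample.

def step_response(b, a, n_samples):
    """Step response of a biquad b = (b0, b1, b2), a = (1, a1, a2)."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for _ in range(n_samples):
        xn = 1.0                                 # unit-step input
        yn = b[0]*xn + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y

def transient_time(y, y_ss, threshold):
    """Index of the last sample outside the +/- threshold band."""
    last = 0
    for n, yn in enumerate(y):
        if abs(yn - y_ss) > threshold * abs(y_ss):
            last = n
    return last

b = (1.0, -1.38888837010072, 1.0)                 # Table 1 numerator
a = (1.0, -1.58314580285905, 0.867241363463590)   # Table 1 denominator
y = step_response(b, a, 400)
y_ss = sum(b) / sum(a)                            # DC gain H(z=1)
for thr in (0.01, 0.02, 0.05):
    n = transient_time(y, y_ss, thr)
    print(f"{thr:.0%}: sample {n}, t = {n / 1000:.3f} s")  # assumed Fs = 1 kHz
```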
Table 3. Time-varying set of a_k(n) coefficients.

Sample | a0(n) | a1(n) | a2(n)
n = 1 | 1 | 0.1058 | −0.0093
n = 2 | 1 | 0.1097 | 0.5325
n = 3 | 1 | 0.1595 | 0.4362
n = 4 | 1 | 0.3704 | 0.2778
n = 5 | 1 | −1 | 0.6328
Table 4. Time-varying set of b_k(n) coefficients.

Sample | b0(n) | b1(n) | b2(n)
n = 1 | 0.9727 | 0.5254 | 0.6714
n = 2 | 0.1920 | 0.5303 | 0.7413
n = 3 | 0.5733 | 0.7991 | 0.7783
n = 4 | 0.9613 | 0.991 | 0.9952
n = 5 | 0.4464 | 0.4646 | 0.4498
Table 5. Transient time data for time-varying SOS.

Transient Time Threshold | Sample Index | Time
1% | 3 | 0.02 s
2% | 3 | 0.02 s
5% | 3 | 0.02 s
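As an illustration, the trajectories of Tables 3 and 4 can be applied for the first five samples and then replaced by the static Table 1 coefficients. This sketch again assumes a direct-form I realization, a cold start, and a unit-step input; since the realization used during the time-varying phase affects the result, the exact figures of Table 5 need not be reproduced by this sketch.

```python
# Sketch: one SOS driven by the Table 3/4 coefficient trajectories for the
# first five samples, then frozen at the Table 1 LTI values.  Direct-form I
# is an assumption; the paper's realization may differ.

A_TV = [(0.1058, -0.0093), (0.1097, 0.5325), (0.1595, 0.4362),
        (0.3704, 0.2778), (-1.0, 0.6328)]            # (a1(n), a2(n)); a0 = 1
B_TV = [(0.9727, 0.5254, 0.6714), (0.1920, 0.5303, 0.7413),
        (0.5733, 0.7991, 0.7783), (0.9613, 0.991, 0.9952),
        (0.4464, 0.4646, 0.4498)]                    # (b0(n), b1(n), b2(n))
B_LTI = (1.0, -1.38888837010072, 1.0)                # Table 1 numerator
A_LTI = (-1.58314580285905, 0.867241363463590)       # Table 1 (a1, a2)

def tv_step_response(n_samples):
    """Step response of the time-varying SOS over n_samples samples."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for n in range(n_samples):
        if n < len(B_TV):                            # optimized horizon
            b0, b1, b2 = B_TV[n]
            a1, a2 = A_TV[n]
        else:                                        # frozen LTI tail
            b0, b1, b2 = B_LTI
            a1, a2 = A_LTI
        xn = 1.0                                     # unit-step input
        yn = b0*xn + b1*x1 + b2*x2 - a1*y1 - a2*y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y
```

Because the coefficients revert to the Table 1 values after the horizon, the long-term behavior (and in particular the DC gain) is identical to that of the static section; only the transient differs.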
Table 6. Transient time comparison between classical (LTI—linear time-invariant) and the proposed (LTV—linear time-varying) method.

Transient Time Threshold | Sample Index (LTI) | Time (LTI) | Sample Index (LTV) | Time (LTV)
1% | 198 | 0.197 s | 118 | 0.117 s
2% | 176 | 0.175 s | 68 | 0.067 s
5% | 86 | 0.085 s | 16 | 0.015 s
