# Live Convolution with Time-Varying Filters


## Abstract


## 1. Introduction

#### 1.1. Time-Varying Filters

#### 1.2. Convolution and Other Sound Transformations, Live Use

“Live sampling during performance … uses the Now as its subject” [31].

## 2. Time-Varying Finite Impulse Response Filters

## 3. Dynamic Replacement of Impulse Responses

- It provides the minimum possible latency for a filter update and even allows convolution with a filter to start in parallel with the generation/recording of the filter impulse response itself.
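The intended semantics can be sketched in plain C++ (an illustrative model for this article, not the paper's partitioned implementation): each filter tap keeps applying the old coefficients to input samples that arrived before the switch point, and the new coefficients to samples that arrived after it, so old and new filters are convolved with separate segments of the input.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative model of stepwise IR replacement (not the actual Csound code):
// at time ns we start loading hB; tap j applies hB[j] only if the input
// sample it multiplies, x(n - j), arrived at or after the switch point ns.
// Old coefficients are thus convolved with old input, new ones with new.
std::vector<double> stepwise_replace(const std::vector<double> &x,
                                     const std::vector<double> &hA,
                                     const std::vector<double> &hB,
                                     int ns) {
  const int N = static_cast<int>(hA.size());
  std::vector<double> y(x.size(), 0.0);
  for (int n = 0; n < static_cast<int>(x.size()); n++)
    for (int j = 0; j < N && j <= n; j++) {
      const std::vector<double> &h = (n - j >= ns) ? hB : hA;
      y[n] += h[j] * x[n - j];
    }
  return y;
}
```

Before `ns` the output equals a plain convolution with `hA`; from `ns + N - 1` onwards it equals a plain convolution with `hB`, with a transition region of exactly one filter length in between.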

#### Example

## 4. Time-Varying Convolution

#### 4.1. Fixing Coefficients

#### 4.2. Test Signals

## 5. Implementation

#### 5.1. Direct Convolution

#### 5.2. Fast Convolution

- Overlap-add algorithm (OLA): N samples of each input are collected and padded with zeros to form a $2N$ block; the transforms are applied to these blocks and their product is taken. The output is obtained by taking the inverse FFT of the convolution spectrum every N samples. Since this yields a $2N$ block of samples, each output block must overlap the next by N samples (the convolution is actually $2N-1$ samples long, but the last sample of the block is expected to be zero). In a streaming process, this can be achieved by saving the last N samples of the previous output and mixing them with the first N samples of the current one; the final N samples of the current output are then saved as the overlapped mix is produced. This process is demonstrated in Figure 9.
- Overlap-save algorithm (OLS): $2N$ samples are collected from one of the inputs, and N samples from the other, padded to the filter length. The signals are aligned in such a way that the second half of each input block becomes the first half of the next. The product of their spectra is taken and then converted back to the time domain. The first N samples of this block are discarded, and the second half is output. In a streaming implementation, each iteration saves the second half of the last input block (N samples) to use as the first half of the next DFT input. A flowchart for this algorithm is shown in Figure 10. Since this algorithm depends on the circular property of the DFT, which cannot be guaranteed with a fully time-varying impulse response, it cannot be used in a practical implementation of the TVFIR described by Equations (24)–(26): the OLS algorithm expects the impulse response data not to vary over the duration of the convolution, which is not the case if both signals are continuously varying. Even in the more restricted scheme of stepwise replacement of impulse responses, the OLS algorithm does not appear to be applicable: with overlapping input blocks, we can no longer assume that the coefficients of the old and new filter are convolved with separate segments of the input signal.
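The streaming structure of OLA can be sketched in plain C++. In this sketch (ours, for illustration), the forward/inverse FFTs and spectral product are stood in for by a direct block convolution, which computes the same result, so the partitioning and tail overlap can be seen in isolation:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Overlap-add sketch: the input is cut into length-N blocks, each block is
// convolved with the filter (a length-N block yields 2N-1 samples), and the
// tail of each block's result is added into the head of the next block's.
// The FFT-based version computes the same block convolution spectrally.
std::vector<double> overlap_add(const std::vector<double> &x,
                                const std::vector<double> &h, size_t N) {
  std::vector<double> y(x.size() + h.size() - 1, 0.0);
  for (size_t b = 0; b < x.size(); b += N) {   // next input block
    size_t len = std::min(N, x.size() - b);
    for (size_t i = 0; i < len; i++)           // block convolution
      for (size_t j = 0; j < h.size(); j++)
        y[b + i + j] += x[b + i] * h[j];       // tail overlaps the next block
  }
  return y;
}
```

Because overlap-add merely regroups the terms of the convolution sum, its output is identical to that of a single direct convolution over the whole input.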

#### 5.3. Partitioned Convolution

#### 5.4. Csound Opcodes

Two new unit generators (opcodes) have been added to Csound. The first is `liveconv`, which is an extensive modification of the existing `ftconv` unit generator. It implements partitioned convolution, employing an external function table as a means of sourcing one of the two input signals (nominally the impulse response). The second is `tvconv`, which takes two audio signal inputs and applies the process for a given filter and partition length. In this section, we examine these two implementations in some detail.

#### 5.4.1. `liveconv`

The `liveconv` opcode implements dynamic replacement of impulse responses (see Section 3). It employs partitioned convolution with the overlap-add (OLA) scheme. The opcode takes one input signal and a table holding the impulse response (IR) data:

`ares liveconv ain, ift, iplen, kupdate, kclear`

- `ares`: output signal.
- `ain`: input signal.
- `ift`: table number for storing the impulse response (IR) for convolution. The table may be filled with new data at any time while the convolution is running.
- `iplen`: length of the impulse response partition in samples; must be an integer power of two. Lower settings allow for shorter output delay but will increase CPU usage.
- `kupdate`: flag indicating whether the IR table should be updated. If `kupdate` = 1, the IR table `ift` is loaded partition by partition, starting with the next partition. If `kupdate` = −1, the IR table `ift` is unloaded (cleared to zero) partition by partition, starting with the next partition. Other values have no effect.
- `kclear`: flag for clearing all internal buffers. If `kclear` has any value ≠ 0, the internal buffers are cleared immediately. This operation is not free of artefacts.
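The `kupdate` mechanism can be sketched as follows (a hypothetical model for illustration; names such as `IRUpdater`, `trigger`, and `tick` are ours, not part of the Csound source): the new IR table is copied into the running filter one partition per partition period, so coefficients only ever change at partition boundaries.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of liveconv-style partition-by-partition IR loading.
// trigger() corresponds to kupdate = 1; tick() is called once per partition
// period and copies at most one partition from the table into the live filter.
struct IRUpdater {
  std::vector<double> filt; // live filter coefficients
  size_t plen;              // partition length (iplen)
  size_t next = 0;          // start index of the next partition to load
  bool loading = false;

  IRUpdater(size_t filter_len, size_t partition_len)
      : filt(filter_len, 0.0), plen(partition_len) {}

  void trigger() { loading = true; next = 0; }

  void tick(const std::vector<double> &table) {
    if (!loading) return;
    std::copy(table.begin() + next, table.begin() + next + plen,
              filt.begin() + next);
    next += plen;
    if (next >= filt.size()) loading = false; // whole IR loaded
  }
};
```

Because only one partition changes per period, the convolution already running with the old coefficients is never disturbed mid-partition, which is what allows replacement without interrupting the output stream.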

`tvconv` without freezing.

Algorithm 1: Liveconv opcode implementation. IR loading is marked in blue.

#### 5.4.2. `tvconv`

The `tvconv` opcode takes two input signals and implements time-varying convolution. Nominally, one of these signals can be taken as the impulse response and the other as the input signal but, in practice, no such distinction is made. The opcode takes the length of the filter and of its partitions as parameters, and includes switches to optionally fix coefficients instead of updating them continuously:

`asig tvconv ain1, ain2, xupdate1, xupdate2, ipartsize, ifilsize`

- `ain1, ain2`: input signals.
- `xupdate1, xupdate2`: update switches u for each input signal. If $u=0$, there is no update from the respective input signal, thus fixing the filter coefficients. If $u>0$, the input signal updates the filter as normal. This parameter can be driven from an audio signal, which works on a sample-by-sample basis; from a control signal, which works on a block of samples at a time (the block size depending on the `ksmps` system parameter); or it can be a constant. Each input signal can be independently frozen using this parameter.
- `ipartsize`: partition size, an integer P, $0<P\le N$, where N is the filter size. For values $P>1$, the actual partition size is quantised to $Q={2}^{k}$, $k\in \mathbb{Z}$, $Q\le P$.
- `ifilsize`: filter size, an integer N, $N\ge P$, where P is the partition size. For partition sizes $P>1$, the actual filter size is quantised to $O={2}^{k}$, $k\in \mathbb{Z}$, $O\le N$.
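The power-of-two quantisation described for `ipartsize` and `ifilsize` amounts to rounding the requested size down to the nearest power of two. A minimal sketch (the helper name is ours, not taken from the opcode source):

```cpp
#include <cassert>
#include <cstdint>

// Round a requested size P > 1 down to the nearest power of two Q <= P,
// as described for ipartsize and ifilsize above.
uint32_t quantise_pow2(uint32_t P) {
  uint32_t Q = 1;
  while (Q * 2 <= P)
    Q *= 2;
  return Q;
}
```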

The `tvconv` opcode is implemented by the `TVConv` class. In this code, there are, in fact, two implementations of the process, which are employed according to the partition size:

- For partition size = 1: direct convolution in the time domain is used, and any filter size is allowed. The following method in
`TVConv`implements this (listing 1. The vectors`in`and`ir`hold the two delay lines, which take their inputs from the signals in`inp`and`irp`. The variables`frz1`and`frz2`are signals that control the freezing/updating operation for each input.Listing 1: Direct convolution implementation. `int dconv() {``csnd::AudioSig insig(this, inargs(0));``csnd::AudioSig irsig(this, inargs(1));``csnd::AudioSig outsig(this, outargs(0));``auto irp = irsig.begin();``auto inp = insig.begin();``auto frz1 = inargs(2);``auto frz2 = inargs(3);``auto inc1 = csound->is_asig(frz1);``auto inc2 = csound->is_asig(frz2);``for (auto &s : outsig) {``if(*frz1 > 0) *itn = *inp;``if(*frz2 > 0) *itr = *irp;``itn++, itr++;``if(itn == in.end()) {``itn = in.begin();``itr = ir.begin();``}``s = 0.;``for (csnd::AuxMem<MYFLT>::iterator it1 = itn,``it2 = ir.end() - 1; it2 >= ir.begin();``it1++, it2--) {``if(it1 == in.end()) it1 = in.begin();``s += *it1 * *it2;``}``frz1 += inc1, frz2 += inc2;``inp++, irp++;``}``return OK;``}` - For partition size $>1$, partitioned convolution is used (listing 2), through an overlap-add algorithm. In this case, the process is implemented in the spectral domain, and in order to make it as efficient as possible, power-of-two partition and filter sizes are enforced internally.
Listing 2: Partitioned convolution implementation. `int pconv() {``csnd::AudioSig insig(this, inargs(0));``csnd::AudioSig irsig(this, inargs(1));``csnd::AudioSig outsig(this, outargs(0));``auto irp = irsig.begin();``auto inp = insig.begin();``auto *frz1 = inargs(2);``auto *frz2 = inargs(3);``auto inc1 = csound->is_asig(frz1);``auto inc2 = csound->is_asig(frz2);``for (auto &s : outsig) {``if(*frz1 > 0) itn[n] = *inp;``if(*frz2 > 0) itr[n] = *irp;``s = out[n] + saved[n];``saved[n] = out[n + pars];``if (++n == pars) {``cmplx *ins, *irs, *ous = to_cmplx(out.data());``std::copy(itn, itn + ffts, itnsp);``std::copy(itr, itr + ffts, itrsp);``std::fill(out.begin(), out.end(), 0.);``// FFT``csound->rfft(fwd, itnsp);``csound->rfft(fwd, itrsp);``// increment iterators``itnsp += ffts, itrsp += ffts;``itn += ffts, itr += ffts;``if (itnsp == insp.end()) {``itnsp = insp.begin();``itrsp = irsp.begin();``itn = in.begin();``itr = ir.begin();``}``// spectral delay line``for (csnd::AuxMem<MYFLT>::iterator it1 = itnsp,``it2 = irsp.end() - ffts; it2 >= irsp.begin();``it1 += ffts, it2 -= ffts) {``if (it1 == insp.end()) it1 = insp.begin();``ins = to_cmplx(it1);``irs = to_cmplx(it2);``// spectral product``for (uint32_t i = 1; i < pars; i++)``ous[i] += ins[i] * irs[i];``ous[0] += real_prod(ins[0], irs[0]);``}``// IFFT``csound->rfft(inv, out.data());``n = 0;``}``frz1 += inc1, frz2 += inc2;``irp++, inp++;``}``return OK;``}`

## 6. Applications and Use Cases

#### 6.1. Liveconvolver

#### Performative Roles with Liveconv

#### 6.2. TV Convolver

#### Practical Experiments with Tvconv

#### 6.3. Demo Sounds

#### 6.4. Future Work

## 7. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

- Dolson, M. Recent Advances in Musique Concrète at CARL. In Proceedings of the 1985 International Computer Music Conference (ICMC), Burnaby, BC, Canada, 19–22 August 1985.
- Roads, C. Musical Sound Transformation by Convolution. In Proceedings of the 1993 International Computer Music Conference, Opening a New Horizon (ICMC), Tokyo, Japan, 10–15 September 1993.
- Truax, B. Convolution Techniques. Available online: https://www.sfu.ca/~truax/Convolution%20Techniques.pdf (accessed on 31 October 2017).
- Truax, B. Sound, Listening and Place: The aesthetic dilemma. Organ. Sound **2012**, 17, 193–201.
- Moore, A. Sonic Art: An Introduction to Electroacoustic Music Composition; Routledge: London, UK, 2016.
- Gardner, W.G. Efficient Convolution without Input-Output Delay. J. Audio Eng. Soc. **1995**, 43, 127–136.
- Brandtsegg, Ø. Cross Adaptive Processing as Musical Intervention; Exploring Radically New Modes of Musical Interaction in Live Performance. Available online: http://crossadaptive.hf.ntnu.no/ (accessed on 7 January 2018).
- Brandtsegg, Ø.; Saue, S. Live Convolution with Time-Variant Impulse Response. In Proceedings of the 20th International Conference on Digital Audio Effects (DAFx-17), Edinburgh, UK, 5–9 September 2017; pp. 239–246.
- Shmaliy, Y. Continuous-Time Systems; Springer: Heidelberg, Germany, 2007.
- Cherniakov, M. An Introduction to Parametric Digital Filters and Oscillators; John Wiley & Sons: New York, NY, USA, 2003.
- Zetterberg, L.H.; Zhang, Q. Elimination of transients in adaptive filters with application to speech coding. Signal Process. **1988**, 15, 419–428.
- Verhelst, W.; Nilens, P. A modified-superposition speech synthesizer and its applications. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’86), Tokyo, Japan, 7–11 April 1986; Volume 11, pp. 2007–2010.
- Wishnick, A. Time-Varying Filters for Musical Applications. In Proceedings of the 17th International Conference on Digital Audio Effects (DAFx-14), Erlangen, Germany, 1–5 September 2014; pp. 69–76.
- Ding, Y.; Rossum, D. Filter morphing for audio signal processing. In Proceedings of the IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, NY, USA, 15–18 October 1995; pp. 217–221.
- Zoelzer, U.; Redmer, B.; Bucholtz, J. Strategies for Switching Digital Audio Filters; Audio Engineering Society Convention 95; Audio Engineering Society: New York, NY, USA, 1993.
- Carty, B. Movements in Binaural Space: Issues in HRTF Interpolation and Reverberation, with Applications to Computer Music; Lambert Academic Publishing: Duesseldorf, Germany, 2012.
- Jot, J.M.; Larcher, V.; Warusfel, O. Digital Signal Processing Issues in the Context of Binaural and Transaural Stereophony; Audio Engineering Society Convention 98; Audio Engineering Society: New York, NY, USA, 1995.
- Lee, K.S.; Abel, J.S.; Välimäki, V.; Stilson, T.; Berners, D.P. The switched convolution reverberator. J. Audio Eng. Soc. **2012**, 60, 227–236.
- Välimäki, V.; Laakso, T.I. Suppression of transients in variable recursive digital filters with a novel and efficient cancellation method. IEEE Trans. Signal Process. **1998**, 46, 3408–3414.
- Mourjopoulos, J.N.; Kyriakis-Bitzaros, E.D.; Goutis, C.E. Theory and real-time implementation of time-varying digital audio filters. J. Audio Eng. Soc. **1990**, 38, 523–536.
- Abel, J.S.; Berners, D. The Time-Varying Bilinear Transform; Audio Engineering Society Convention 141; Audio Engineering Society: New York, NY, USA, 2016.
- Wefers, F.; Vorländer, M. Efficient time-varying FIR filtering using crossfading implemented in the DFT domain. In Proceedings of the 7th Forum Acusticum, Cracow, Poland, 7–12 September 2014.
- Wefers, F. Partitioned Convolution Algorithms for Real-Time Auralization; Logos Verlag Berlin GmbH: Berlin, Germany, 2015; Volume 20.
- Vickers, E. Frequency-Domain Implementation of Time-Varying FIR Filters; Audio Engineering Society Convention 133; Audio Engineering Society: New York, NY, USA, 2012.
- Settel, Z.; Lippe, C. Real-time timbral transformation: FFT-based resynthesis. Contemp. Music Rev. **1994**, 10, 171–179.
- Wishart, T.; Emmerson, S. On Sonic Art; Contemporary Music Studies; Routledge: New York, NY, USA, 1996.
- Roads, C. Composing Electronic Music: A New Aesthetic; Oxford University Press: Oxford, UK, 2015.
- Engum, T. Real-time Control and Creative Convolution. In Proceedings of the International Conference on New Interfaces for Musical Expression, Oslo, Norway, 30 May–1 June 2011; pp. 519–522.
- Aimi, R.M. Hybrid Percussion: Extending Physical Instruments Using Sampled Acoustics. Ph.D. Thesis, Massachusetts Institute of Technology, Department of Architecture, Program in Media Arts and Sciences, Cambridge, MA, USA, 2007.
- Schwarz, D.; Tremblay, P.A.; Harker, A. Rich Contacts: Corpus-Based Convolution of Contact Interaction Sound for Enhanced Musical Expression. In Proceedings of the International Conference on New Interfaces for Musical Expression, London, UK, 30 June–3 July 2014; pp. 247–250.
- Morris, J.M. Ontological Substance and Meaning in Live Electroacoustic Music. In Genesis of Meaning in Sound and Music, Proceedings of the 5th International Symposium on Computer Music Modeling and Retrieval, Copenhagen, Denmark, 19–23 May 2008; Revised Papers; Ystad, S., Kronland-Martinet, R., Jensen, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 216–226.
- Kagel, M. Transición II; Phonophonie Liner Notes. Available online: http://www.moderecords.com/catalog/127kagel.html (accessed on 4 December 2017).
- Emmerson, S. Living Electronic Music; Ashgate: Farnham, UK, 2007.
- Casserley, L. A Digital Signal Processing Instrument for Improvised Music. J. Electroacoust. Music **1998**, 11, 25–29.
- Oppenheim, A.V.; Schafer, R.W.; Buck, J.R. Discrete-Time Signal Processing, 2nd ed.; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 1999.
- Stockham, T.G., Jr. High-speed convolution and correlation. In Proceedings of the Spring Joint Computer Conference, ACM, Boston, MA, USA, 26–28 April 1966; pp. 229–233.
- Laroche, J. On the stability of time-varying recursive filters. J. Audio Eng. Soc. **2007**, 55, 460–471.
- Lazzarini, V. Computer Music Instruments; Springer: Heidelberg, Germany, 2017.
- Lazzarini, V.; Kleimola, J.; Timoney, J.; Välimäki, V. Five Variations on a Feedback Theme. In Proceedings of the 12th International Conference on Digital Audio Effects, Como, Italy, 1–4 September 2009; pp. 139–145.
- Lazzarini, V.; Kleimola, J.; Timoney, J.; Välimäki, V. Aspects of Second-order Feedback AM synthesis. In Proceedings of the International Computer Music Conference, Huddersfield, UK, 31 July–5 August 2011; pp. 92–98.
- Kleimola, J.; Lazzarini, V.; Välimäki, V.; Timoney, J. Feedback amplitude modulation synthesis. EURASIP J. Adv. Signal Process. **2011**, 2011.
- Timoney, J.; Pekonen, J.; Lazzarini, V.; Välimäki, V. Dynamic Signal Phase Distortion Using Coefficient-Modulated Allpass Filters. J. Audio Eng. Soc. **2014**, 62, 596–610.
- Lazzarini, V.; Ffitch, J.; Yi, S.; Heintz, J.; Brandtsegg, Ø.; McCurdy, I. Csound: A Sound and Music Computing System; Springer: Heidelberg, Germany, 2016.
- Saue, S. Liveconv Source Code. Available online: https://github.com/csound/csound/blob/develop/Opcodes/liveconv.c (accessed on 7 January 2018).
- Lazzarini, V. The Csound Plugin Opcode Framework. In Proceedings of the 14th Sound and Music Computing Conference 2017, Aalto University, Espoo, Finland, 5–8 July 2017; pp. 267–274.
- Lazzarini, V. Supporting an Object-Oriented Approach to Unit Generator Development: The Csound Plugin Opcode Framework. Appl. Sci. **2017**, 7, 970.
- Walsh, R. Cabbage: A Framework for Audio Software Development. Available online: http://cabbageaudio.com/ (accessed on 7 January 2018).
- Brandtsegg, Ø. Liveconvolver: Csound Instrument Based around the Liveconvolver Opcode. Available online: https://github.com/Oeyvind/liveconvolver (accessed on 7 January 2018).
- Brandtsegg, Ø. A Toolkit for Experimentation with Signal Interaction. In Proceedings of the 18th International Conference on Digital Audio Effects (DAFx-15), Trondheim, Norway, 30 November–3 December 2015; pp. 42–48.
- Wigan, E.R.; Alkin, E.G. The BBC Research Labs Frequency Shift PA Stabiliser: A Field Report. BBC Internal Memoranda, 1960. Available online: http://works.bepress.com/edmund-wigan/17/ (accessed on 7 January 2018).
- Brandtsegg, Ø. Liveconvolver Experiences, San Diego. Available online: http://crossadaptive.hf.ntnu.no/index.php/2017/06/07/liveconvolver-experiences-san-diego/ (accessed on 7 January 2018).
- Wærstad, B.I. Live Convolution Session in Oslo, March 2017. Available online: http://crossadaptive.hf.ntnu.no/index.php/2017/06/07/live-convolution-session-in-oslo-march-2017/ (accessed on 7 January 2018).
- Brandtsegg, Ø. Session with 4 Singers, Trondheim, August 2017. Available online: http://crossadaptive.hf.ntnu.no/index.php/2017/10/09/session-with-4-singers-trondheim-august-2017/ (accessed on 7 January 2018).
- Brandtsegg, Ø. Convolution Demo Sounds. Available online: http://crossadaptive.hf.ntnu.no/index.php/2017/12/07/convolution-demo-sounds/ (accessed on 7 January 2018).
- Donahue, C.; Erbe, T.; Puckette, M. Extended Convolution Techniques for Cross-Synthesis. In Proceedings of the International Computer Music Conference 2016, Utrecht, The Netherlands, 12–16 September 2016; pp. 249–252.

**Figure 2.** Demonstration of dynamic filter replacement:

**Top**: The two filter impulse responses ${h}_{A}\left(n\right)$ and ${h}_{B}\left(n\right)$.

**Bottom**: The input signal $x\left(n\right)$.

**Figure 3.** Demonstration of dynamic filter replacement.

**Top**: The output from convolving the input $x\left(n\right)$ with impulse response ${h}_{A}\left(n\right)$ before ${n}_{s}=1$ s.

**Middle**: The output from convolving the input $x\left(n\right)$ with impulse response ${h}_{B}\left(n\right)$ after ${n}_{s}=1$ s.

**Bottom**: The output from convolving the input $x\left(n\right)$ with ${h}_{A}\left(n\right)$ and its stepwise replacement with ${h}_{B}\left(n\right)$ starting at ${n}_{s}=1$ s. The vertical lines mark the time indices ${n}_{s}$, ${n}_{s}+N/3$, ${n}_{s}+2N/3$, and ${n}_{s}+N$ in the transition region.

**Figure 4.** Demonstration of dynamic filter replacement: The content of the filter impulse response buffer at four different points in time: before the transition (1.0 s), 1/3 into the transition (1.5 s), 2/3 into the transition (2.0 s), and after the transition (2.5 s). These time indices are marked with vertical lines in Figure 3.

**Figure 5.** Time-varying convolution using a pulse train with frequency ${f}_{s}/1024$ Hz and a sine wave of 100 Hz as inputs, with filter size $N=1024$ and sampling rate ${f}_{s}=$ 44,100 Hz.

**Figure 6.** Time-varying convolution using a pulse train with frequency ${f}_{s}/1124$ Hz and a sine wave of 100 Hz as inputs, with filter size $N=1024$ and sampling rate ${f}_{s}=$ 44,100 Hz.

**Figure 7.** Time-varying convolution using a pulse train with frequency ${f}_{s}/924$ Hz and a sine wave of 100 Hz as inputs, with filter size $N=1024$ and sampling rate ${f}_{s}=$ 44,100 Hz.

**Figure 13.** Liveconvolver instrument user interface. To visualize where in time the impulse response is taken from, we use a circular colouring scheme to display the circular input buffer (the thin coloured band labeled “input” in the image). The IR (the broader coloured band at the bottom of the image) is represented using the same colours. Position in the input buffer is thus represented by colour.

**Figure 17.** Example of tvconv buffer content when freezing is allowed only at filter boundaries. One contiguous block of audio remains in the filter when frozen.

**Figure 18.** Example of tvconv buffer content if freezing is allowed at an arbitrary point. Old content remains in the latter part of the buffer, while the first part has been overwritten with new content.

**Figure 19.** Impulse response with an initial section low on perceptual features; the transient occurs later in the filter.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Brandtsegg, Ø.; Saue, S.; Lazzarini, V.
Live Convolution with Time-Varying Filters. *Appl. Sci.* **2018**, *8*, 103.
https://doi.org/10.3390/app8010103
