Special Issue "Digital Audio Effects"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Acoustics and Vibrations".

Deadline for manuscript submissions: closed (15 December 2019).

Special Issue Editors

Prof. Vesa Välimäki
Guest Editor
Department of Signal Processing and Acoustics, School of Electrical Engineering, Aalto University, P.O. Box 13000, FI-00076 Aalto, Espoo, Finland
Interests: acoustic signal processing; audio signal processing; audio systems; music technology
Assoc. Prof. Federico Fontana
Guest Editor
University of Udine, Udine, Italy
Interests: interactive audio signal processing; design and evaluation of musical interfaces

Special Issue Information

Dear colleagues,

Digital audio effect applications are pervasive in many fields, from musical signal analysis and synthesis to music production, and from acoustics to machine listening. Innovations in this area are increasingly specialised and advanced, and can be rooted in several technical, artistic and psychological disciplines.

In this Special Issue we welcome both original research papers and review articles on diverse topics such as:

  • Capture and analysis of audio and music
  • Representation, transformation and modelling of audio signals
  • Transmission and resynthesis of audio
  • Effects and manipulation of musical sound
  • Perception, psychoacoustics and evaluation
  • Spatial sound analysis, coding and synthesis
  • Audio source separation
  • Physical, virtual acoustic and analogue modelling
  • Sound synthesis, composition and sonification
  • Hardware and software design for digital audio effects

Submissions will be judged on their academic quality, novelty, and relevance to the topic of digital audio effects, through peer review.

Additionally, authors of excellent contributions to relevant conferences such as the 22nd International Conference on Digital Audio Effects (DAFx-19) in Birmingham, UK, will be invited to submit an extended version of their paper to this Special Issue.

Prof. Dr. Vesa Välimäki
Assoc. Prof. Federico Fontana
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (13 papers)


Editorial


Open Access Editorial
Special Issue on Digital Audio Effects
Appl. Sci. 2020, 10(7), 2449; https://doi.org/10.3390/app10072449 - 03 Apr 2020
Abstract
Digital audio effects (DAFx) play a constantly increasing role in music, which inspires their design and is branded in its turn by their peculiar action [...] Full article
(This article belongs to the Special Issue Digital Audio Effects)

Research


Open Access Article
Multisensory Plucked Instrument Modeling in Unity3D: From Keytar to Accurate String Prototyping
Appl. Sci. 2020, 10(4), 1452; https://doi.org/10.3390/app10041452 - 21 Feb 2020
Cited by 1
Abstract
Keytar is a plucked guitar simulation mockup developed with Unity3D that provides auditory, visual, and haptic feedback to the player through a Phantom Omni robotic arm. Starting from a description of the implementation of the virtual instrument, we discuss our ongoing work. The ultimate goal is the creation of a set of software tools available for developing plucked instruments in Unity3D. Using such tools, sonic interaction designers can efficiently simulate plucked string prototypes and realize multisensory interactions with virtual instruments for unprecedented purposes, such as testing innovative plucked string interfaces or training machine learning algorithms with data about the dynamics of the performance, which are immediately accessible from the machine. Full article

Open Access Article
Third-Octave and Bark Graphic-Equalizer Design with Symmetric Band Filters
Appl. Sci. 2020, 10(4), 1222; https://doi.org/10.3390/app10041222 - 11 Feb 2020
Cited by 1
Abstract
This work proposes graphic equalizer designs with third-octave and Bark frequency divisions using symmetric band filters with a prescribed Nyquist gain to reduce approximation errors. Both designs utilize an iterative weighted least-squares method to optimize the filter gains, accounting for the interaction between the different band filters, to ensure excellent accuracy. A third-octave graphic equalizer with a maximum magnitude-response error of 0.81 dB is obtained, which outperforms the previous state-of-the-art design. The corresponding error for the Bark equalizer, which is the first of its kind, is 1.26 dB. This paper also applies a recently proposed neural gain control in which the filter gains are predicted with a multilayer perceptron having two hidden layers. After the training, the resulting network quickly and accurately calculates the filter gains for third-octave and Bark graphic equalizers with maximum errors of 0.86 dB and 1.32 dB, respectively, which are not much more than those of the corresponding weighted least-squares designs. Computing the filter gains is about 100 times faster with the neural network than with the original optimization method. The proposed designs are easy to apply and may thus lead to widespread use of accurate auditory graphic equalizers. Full article
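The gain-interaction problem this abstract mentions can be illustrated with a small linear solve: each band filter leaks into neighbouring bands, so the realized dB response at the band centre frequencies is a weighted combination of all commanded gains. The sketch below is our own simplified stand-in (a direct solve on a hypothetical 3-band interaction matrix), not the paper's iterative weighted least-squares design.

```python
def solve_gains(B, t):
    """Solve B g = t by Gaussian elimination with partial pivoting.

    B[i][j]: dB contribution of band filter j at centre frequency i
    t[i]:    commanded dB gain at centre frequency i
    Returns per-filter gains g that realize t despite band overlap.
    """
    n = len(B)
    A = [row[:] + [t[i]] for i, row in enumerate(B)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    g = [0.0] * n
    for r in range(n - 1, -1, -1):
        g[r] = (A[r][n] - sum(A[r][c] * g[c] for c in range(r + 1, n))) / A[r][r]
    return g
```

With a diagonal B the gains equal the commands; off-diagonal leakage makes the solved gains over- or undershoot the commands so that the summed response still hits the targets.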

Open Access Article
Denoising Directional Room Impulse Responses with Spatially Anisotropic Late Reverberation Tails
Appl. Sci. 2020, 10(3), 1033; https://doi.org/10.3390/app10031033 - 04 Feb 2020
Cited by 2
Abstract
Directional room impulse responses (DRIR) measured with spherical microphone arrays (SMA) enable the reproduction of room reverberation effects on three-dimensional surround-sound systems (e.g., Higher-Order Ambisonics) through multichannel convolution. However, such measurements inevitably contain a nondecaying noise floor that may produce an audible “infinite reverberation effect” upon convolution. If the late reverberation tail can be considered a diffuse field before reaching the noise floor, the latter may be removed and replaced with an extension of the exponentially-decaying tail synthesized as a zero-mean Gaussian noise. This has previously been shown to preserve the diffuse-field properties of the late reverberation tail when performed in the spherical harmonic domain (SHD). In this paper, we show that in the case of highly anisotropic yet incoherent late fields, the spatial symmetry of the spherical harmonics is not conducive to preserving the energy distribution of the reverberation tail. To remedy this, we propose denoising in an optimized spatial domain obtained by plane-wave decomposition (PWD), and demonstrate that this method equally preserves the incoherence of the late reverberation field. Full article
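The core denoising idea, stripped of its spatial (spherical-harmonic or plane-wave) aspect, is to cut the measurement at the point where the noise floor takes over and continue the exponential decay with synthetic Gaussian noise. A single-channel sketch under our own simplifying assumptions (splice point and T60 known; in practice both are estimated):

```python
import math
import random

def replace_noise_floor(rir, noise_start, t60, fs, seed=0):
    """Single-channel sketch: from `noise_start` onward, replace the
    measured tail (noise floor) with zero-mean Gaussian noise whose
    envelope continues the exponential decay (-60 dB over t60 seconds).
    """
    rng = random.Random(seed)
    # RMS of the last 256 samples before the splice sets the start level
    win = rir[max(0, noise_start - 256):noise_start]
    rms0 = math.sqrt(sum(s * s for s in win) / len(win))
    out = list(rir)
    for n in range(noise_start, len(rir)):
        env = rms0 * 10.0 ** (-3.0 * (n - noise_start) / (t60 * fs))
        out[n] = rng.gauss(0.0, env)
    return out
```

The paper's contribution concerns doing this for all channels at once while preserving an anisotropic spatial energy distribution, which this scalar sketch deliberately ignores.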

Open Access Article
Musical Emotion Recognition with Spectral Feature Extraction Based on a Sinusoidal Model with Model-Based and Deep-Learning Approaches
Appl. Sci. 2020, 10(3), 902; https://doi.org/10.3390/app10030902 - 30 Jan 2020
Cited by 1
Abstract
This paper presents a method for extracting novel spectral features based on a sinusoidal model. The method is focused on characterizing the spectral shapes of audio signals using spectral peaks in frequency sub-bands. The extracted features are evaluated for predicting the levels of emotional dimensions, namely arousal and valence. Principal component regression, partial least squares regression, and deep convolutional neural network (CNN) models are used as prediction models for the levels of the emotional dimensions. The experimental results indicate that the proposed features include additional spectral information that common baseline features may not include. Since the quality of audio signals, especially timbre, plays a major role in affecting the perception of emotional valence in music, the inclusion of the presented features will contribute to decreasing the prediction error rate. Full article
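A heavily simplified stand-in for "spectral peaks in frequency sub-bands" is to pick the strongest local maximum of the magnitude spectrum inside each band. This toy version (band edges and return format are our own choices, not the paper's feature definition) shows the shape of such a feature extractor:

```python
def subband_peak_features(mag, edges):
    """Return (bin, magnitude) of the strongest local spectral peak in
    each sub-band, or None if a band contains no local maximum.

    mag:   magnitude spectrum (list of floats)
    edges: band boundaries as bin indices, e.g. [1, 8, 32, 128]
    """
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        best = None
        for k in range(max(lo, 1), min(hi, len(mag) - 1)):
            if mag[k] > mag[k - 1] and mag[k] >= mag[k + 1]:  # local max
                if best is None or mag[k] > best[1]:
                    best = (k, mag[k])
        feats.append(best)
    return feats
```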

Open Access Article
Real-Time Guitar Amplifier Emulation with Deep Learning
Appl. Sci. 2020, 10(3), 766; https://doi.org/10.3390/app10030766 - 21 Jan 2020
Cited by 2
Abstract
This article investigates the use of deep neural networks for black-box modelling of audio distortion circuits, such as guitar amplifiers and distortion pedals. Both a feedforward network, based on the WaveNet model, and a recurrent neural network model are compared. To determine a suitable hyperparameter configuration for the WaveNet, models of three popular audio distortion pedals were created: the Ibanez Tube Screamer, the Boss DS-1, and the Electro-Harmonix Big Muff Pi. It is also shown that three minutes of audio data is sufficient for training the neural network models. Real-time implementations of the neural networks were used to measure their computational load. To further validate the results, models of two valve amplifiers, the Blackstar HT-5 Metal and the Mesa Boogie 5:50 Plus, were created, and subjective tests were conducted. The listening test results show that the models of the first amplifier could be identified as different from the reference, but the sound quality of the best models was judged to be excellent. In the case of the second guitar amplifier, many listeners were unable to hear the difference between the reference signal and the signals produced with the two largest neural network models. This study demonstrates that the neural network models can convincingly emulate highly nonlinear audio distortion circuits, whilst running in real-time, with some models requiring only a relatively small amount of processing power to run on a modern desktop computer. Full article
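The feedforward WaveNet-style models mentioned here are built from stacks of causal dilated convolutions, whose receptive field grows exponentially with depth. The toy sketch below (pure Python, placeholder weights, no training) only illustrates that structural idea, not the paper's trained models:

```python
def causal_dilated_conv(x, taps, dilation):
    """y[n] = sum_k taps[k] * x[n - k*dilation], with zero-padded past."""
    out = []
    for n in range(len(x)):
        acc = 0.0
        for k, w in enumerate(taps):
            i = n - k * dilation
            acc += w * x[i] if i >= 0 else 0.0
        out.append(acc)
    return out

def receptive_field(kernel, dilations):
    """Number of input samples one output sample can depend on."""
    return 1 + sum((kernel - 1) * d for d in dilations)
```

Stacking kernel-size-2 layers with dilations 1, 2, 4, 8 yields a 16-sample receptive field from only four layers, which is why such models can capture amplifier dynamics at low cost.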

Open Access Article
Deep Learning for Black-Box Modeling of Audio Effects
Appl. Sci. 2020, 10(2), 638; https://doi.org/10.3390/app10020638 - 16 Jan 2020
Cited by 3
Abstract
Virtual analog modeling of audio effects consists of emulating the sound of an audio processor reference device. This digital simulation is normally done by designing mathematical models of these systems. It is often difficult because it seeks to accurately model all components within the effect unit, which usually contains various nonlinearities and time-varying components. Most existing methods for audio effects modeling are either simplified or optimized to a very specific circuit or type of audio effect and cannot be efficiently translated to other types of audio effects. Recently, deep neural networks have been explored as black-box modeling strategies to solve this task, i.e., by using only input–output measurements. We analyse different state-of-the-art deep learning models based on convolutional and recurrent neural networks, feedforward WaveNet architectures and we also introduce a new model based on the combination of the aforementioned models. Through objective perceptual-based metrics and subjective listening tests we explore the performance of these models when modeling various analog audio effects. Thus, we show virtual analog models of nonlinear effects, such as a tube preamplifier; nonlinear effects with memory, such as a transistor-based limiter and nonlinear time-varying effects, such as the rotating horn and rotating woofer of a Leslie speaker cabinet. Full article

Open Access Article
Flow Synthesizer: Universal Audio Synthesizer Control with Normalizing Flows
Appl. Sci. 2020, 10(1), 302; https://doi.org/10.3390/app10010302 - 31 Dec 2019
Cited by 1
Abstract
The ubiquity of sound synthesizers has reshaped modern music production, and novel music genres are now sometimes even entirely defined by their use. However, the increasing complexity and number of parameters in modern synthesizers make them extremely hard to master. Hence, the development of methods allowing to easily create and explore with synthesizers is a crucial need. Recently, we introduced a novel formulation of audio synthesizer control based on learning an organized latent audio space of the synthesizer’s capabilities, while constructing an invertible mapping to the space of its parameters. We showed that this formulation allows to simultaneously address automatic parameters inference, macro-control learning, and audio-based preset exploration within a single model. We showed that this formulation can be efficiently addressed by relying on Variational Auto-Encoders (VAE) and Normalizing Flows (NF). In this paper, we extend our results by evaluating our proposal on larger sets of parameters and show its superiority in both parameter inference and audio reconstruction against various baseline models. Furthermore, we introduce disentangling flows, which allow to learn the invertible mapping between two separate latent spaces, while steering the organization of some latent dimensions to match target variation factors by splitting the objective as partial density evaluation. We show that the model disentangles the major factors of audio variations as latent dimensions, which can be directly used as macro-parameters. We also show that our model is able to learn semantic controls of a synthesizer, while smoothly mapping to its parameters. Finally, we introduce an open-source implementation of our models inside a real-time Max4Live device that is readily available to evaluate creative applications of our proposal. Full article
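The normalizing-flow machinery underlying this work rests on the change-of-variables formula: an invertible map sends data to a simple base density, and the log-determinant of its Jacobian corrects the density. A one-dimensional affine toy (entirely our own illustration, far simpler than the paper's flows) makes the formula concrete:

```python
import math

def affine_flow_logpdf(x, a, b):
    """Log-density of x under a 1-D affine flow z = a*x + b with a
    standard normal base distribution:
        log p(x) = log N(a*x + b; 0, 1) + log|a|
    where log|a| is the log-determinant of the Jacobian of the map.
    """
    z = a * x + b
    return -0.5 * (z * z + math.log(2.0 * math.pi)) + math.log(abs(a))
```

Because the map is invertible and the Jacobian term is included, the resulting density integrates to one for any a != 0, which is the property that lets flows be trained by exact maximum likelihood.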

Open Access Article
Frequency-Dependent Schroeder Allpass Filters
Appl. Sci. 2020, 10(1), 187; https://doi.org/10.3390/app10010187 - 25 Dec 2019
Cited by 2
Abstract
Since the introduction of feedforward–feedback comb allpass filters by Schroeder and Logan, its popularity has not diminished due to its computational efficiency and versatile applicability in artificial reverberation, decorrelation, and dispersive system design. In this work, we present an extension to the Schroeder allpass filter by introducing frequency-dependent feedforward and feedback gains while maintaining the allpass characteristic. By this, we directly improve upon the design of Dahl and Jot which exhibits a frequency-dependent absorption but does not preserve the allpass property. At the same time, we also improve upon Gerzon’s allpass filter as our design is both less restrictive and computationally more efficient. We provide a complete derivation of the filter structure and its properties. Furthermore, we illustrate the usefulness of the structure by designing an allpass decorrelation filter with frequency-dependent decay characteristics. Full article

Open Access Article
Antiderivative Antialiasing for Stateful Systems
Appl. Sci. 2020, 10(1), 20; https://doi.org/10.3390/app10010020 - 18 Dec 2019
Cited by 1
Abstract
Nonlinear systems, such as guitar distortion effects, play an important role in musical signal processing. One major problem encountered in digital nonlinear systems is aliasing distortion. Consequently, various aliasing reduction methods have been proposed in the literature. One of these is based on using the antiderivative of the nonlinearity and has proven effective, but is limited to memoryless systems. In this work, it is extended to a class of stateful systems which includes but is not limited to systems with a single one-port nonlinearity. Two examples from the realm of virtual analog modeling show its applicability to and effectiveness for commonly encountered guitar distortion effect circuits. Full article
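The memoryless antiderivative method that this paper takes as its starting point replaces the sample-wise nonlinearity with the averaged slope of its antiderivative between consecutive samples. A minimal sketch for y = tanh(x), using its antiderivative F(x) = log(cosh(x)):

```python
import math

def tanh_adaa1(x):
    """First-order antiderivative antialiasing of y = tanh(x).

    Each output is (F(x[n]) - F(x[n-1])) / (x[n] - x[n-1]) with
    F(x) = log(cosh(x)), which suppresses aliasing compared with
    applying tanh per sample. Memoryless case only; extending the
    idea to stateful systems is what the paper addresses.
    """
    F = lambda v: math.log(math.cosh(v))
    out, prev = [], 0.0
    for s in x:
        if abs(s - prev) < 1e-9:
            out.append(math.tanh(0.5 * (s + prev)))  # ill-conditioned: fall back
        else:
            out.append((F(s) - F(prev)) / (s - prev))
        prev = s
    return out
```

Note the guard against nearly equal consecutive samples, where the difference quotient becomes numerically ill-conditioned and the plain nonlinearity at the midpoint is used instead.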

Open Access Article
Learning Low-Dimensional Embeddings of Audio Shingles for Cross-Version Retrieval of Classical Music
Appl. Sci. 2020, 10(1), 19; https://doi.org/10.3390/app10010019 - 18 Dec 2019
Cited by 1
Abstract
Cross-version music retrieval aims at identifying all versions of a given piece of music using a short query audio fragment. One previous approach, which is particularly suited for Western classical music, is based on a nearest neighbor search using short sequences of chroma features, also referred to as audio shingles. From the viewpoint of efficiency, indexing and dimensionality reduction are important aspects. In this paper, we extend previous work by adapting two embedding techniques; one is based on classical principle component analysis, and the other is based on neural networks with triplet loss. Furthermore, we report on systematically conducted experiments with Western classical music recordings and discuss the trade-off between retrieval quality and embedding dimensionality. As one main result, we show that, using neural networks, one can reduce the audio shingles from 240 to fewer than 8 dimensions with only a moderate loss in retrieval accuracy. In addition, we present extended experiments with databases of different sizes and different query lengths to test the scalability and generalizability of the dimensionality reduction methods. We also provide a more detailed view into the retrieval problem by analyzing the distances that appear in the nearest neighbor search. Full article
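An audio shingle as used here is simply a stack of consecutive chroma frames treated as one high-dimensional vector; the 240 dimensions mentioned in the abstract are consistent with 20 frames of 12-dimensional chroma (the exact frame count is our assumption). A minimal sketch of shingle construction:

```python
def make_shingles(chroma, frames=20):
    """Stack `frames` consecutive 12-d chroma vectors into one shingle.

    chroma: list of 12-element lists (one per analysis frame)
    Returns one flattened `frames * 12`-dimensional vector per position,
    hopping by one frame; these are the points fed to nearest-neighbor
    search (and, in the paper, to the dimensionality-reduction models).
    """
    return [
        [v for frame in chroma[i:i + frames] for v in frame]
        for i in range(len(chroma) - frames + 1)
    ]
```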

Review


Open Access Review
A History of Audio Effects
Appl. Sci. 2020, 10(3), 791; https://doi.org/10.3390/app10030791 - 22 Jan 2020
Cited by 1
Abstract
Audio effects are an essential tool that the field of music production relies upon. The ability to intentionally manipulate and modify a piece of sound has opened up considerable opportunities for music making. The evolution of technology has often driven new audio tools and effects, from early architectural acoustics through electromechanical and electronic devices to the digitisation of music production studios. Throughout time, music has constantly borrowed ideas and technological advancements from all other fields and contributed back to the innovative technology. This is defined as transsectorial innovation and fundamentally underpins the technological developments of audio effects. The development and evolution of audio effect technology is discussed, highlighting major technical breakthroughs and the impact of available audio effects. Full article

Other

Open Access Meeting Report
22nd International Conference on Digital Audio Effects DAFx 2019 (2–6 September 2019, Birmingham, United Kingdom)
Appl. Sci. 2020, 10(3), 1048; https://doi.org/10.3390/app10031048 - 05 Feb 2020
Cited by 1
Abstract
This meeting report gives an overview of the DAFx 2019 conference held in September 2019 at Birmingham City University, Birmingham, UK. The conference had the same theme as this special issue: digital audio effects. In total, 51 papers were presented at DAFx 2019 either in oral or in poster sessions. The conference had 157 delegates, almost half from industry and the rest from universities around the world. As the number of submissions and participants remains sufficiently high, it is planned that the DAFx conference series will be continued every autumn. Full article