Music and the Machine: Contemporary Music Production

A special issue of Arts (ISSN 2076-0752). This special issue belongs to the section "Musical Arts and Theatre".

Deadline for manuscript submissions: closed (5 July 2019) | Viewed by 66099

Special Issue Editors


Mr. Kirk McNally
Guest Editor
Assistant Professor of Music Technology, School of Music, Faculty of Fine Arts, University of Victoria, Victoria, BC V8T 3X7, Canada
Interests: music and sound recording; electronic and electro-acoustic performance

Dr. Brecht De Man
Guest Editor
Founder, Semantic Audio Labs, iCentrum, Holt Street, Birmingham B7 4BP, UK
Interests: intelligent music production systems; perceptual evaluation of musical signals

Special Issue Information

Dear Colleagues,

Every day I watched how a bare metal frame, rolling down the line would come off the other end, a spanking brand new car. What a great idea! Maybe, I could do the same thing with my music. Create a place where a kid off the street could walk in one door, an unknown, go through a process, and come out another door, a star.

—Berry Gordy, founder of the Motown record label.

Formulas, systems, and methods are not new to music production. The Brill Building songwriters, Motown, the Spice Girls and their boy-band contemporaries, and, more recently, K-pop acts have all used carefully constructed systems to churn out hits. Until recently, however, taking the human actors out of these systems and handing creative control over to computers—for either music creation or music production—was unthinkable. Today, that idea is very real, with more and more music being made by, and with assistance from, algorithmic and learning systems. This Special Issue invites contributions from authors working in the complementary fields of automatic music creation and automatic music production, in order to develop a broader understanding of these disciplines, what their future holds, and how they are changing our relationship with the making of music.

Mr. Kirk McNally
Dr. Brecht De Man
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Arts is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • music production
  • contemporary
  • algorithm
  • automatic music creation

Published Papers (5 papers)


Research

13 pages, 488 KiB  
Article
Artificial Intelligence & Popular Music: SKYGGE, Flow Machines, and the Audio Uncanny Valley
by Melissa Avdeeff
Arts 2019, 8(4), 130; https://doi.org/10.3390/arts8040130 - 11 Oct 2019
Cited by 16 | Viewed by 15629
Abstract
This article presents an overview of the first AI-human collaborated album, Hello World, by SKYGGE, which utilizes Sony’s Flow Machines technologies. This case study is situated within a review of current and emerging uses of AI in popular music production, and connects those uses with myths and fears that have circulated in discourses concerning the use of AI in general, and how these fears connect to the idea of an audio uncanny valley. By proposing the concept of an audio uncanny valley in relation to AIPM (artificial intelligence popular music), this article offers a lens through which to examine the more novel and unusual melodies and harmonization made possible through AI music generation, and questions how this content relates to wider speculations about posthumanism, sincerity, and authenticity in both popular music, and broader assumptions of anthropocentric creativity. In its documentation of the emergence of a new era of popular music, the AI era, this article surveys: (1) The current landscape of artificial intelligence popular music focusing on the use of Markov models for generative purposes; (2) posthumanist creativity and the potential for an audio uncanny valley; and (3) issues of perceived authenticity in the technologically mediated “voice”.
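As a concrete illustration of the Markov-model generation this abstract refers to, the sketch below trains a pitch-transition table on a toy corpus and random-walks through it. The corpus, the first-order state, and the uniform sampling are illustrative assumptions for the sketch, not SKYGGE's or Flow Machines' actual method.

```python
import random
from collections import defaultdict

def train_markov(melodies, order=1):
    """Count transitions between successive pitches in a corpus."""
    transitions = defaultdict(list)
    for melody in melodies:
        for i in range(len(melody) - order):
            state = tuple(melody[i:i + order])
            transitions[state].append(melody[i + order])
    return transitions

def generate(transitions, seed, length=16):
    """Random-walk through the learned transition table."""
    melody = list(seed)
    state = tuple(seed)
    for _ in range(length - len(seed)):
        candidates = transitions.get(state)
        if not candidates:  # dead end: restart from a random known state
            state = random.choice(list(transitions))
            candidates = transitions[state]
        melody.append(random.choice(candidates))
        state = tuple(melody[-len(state):])
    return melody

# Toy corpus of MIDI pitch sequences (C major noodling), purely for illustration.
corpus = [[60, 62, 64, 65, 67, 65, 64, 62], [60, 64, 67, 64, 60, 62, 64, 60]]
table = train_markov(corpus, order=1)
print(generate(table, seed=[60], length=12))
```

Raising the order makes the output track the corpus more closely; very high orders simply quote it, which is one way the "uncanny" quality the article discusses can arise.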

14 pages, 248 KiB  
Article
Approaches in Intelligent Music Production
by David Moffat and Mark B. Sandler
Arts 2019, 8(4), 125; https://doi.org/10.3390/arts8040125 - 25 Sep 2019
Cited by 15 | Viewed by 8391
Abstract
Music production technology has made few advancements over the past few decades. State-of-the-art approaches are based on traditional studio paradigms with new developments primarily focusing on digital modelling of analog equipment. Intelligent music production (IMP) is the approach of introducing some level of artificial intelligence into the space of music production, which has the ability to change the field considerably. There are a multitude of methods that intelligent systems can employ to analyse, interact with, and modify audio. Some systems interact and collaborate with human mix engineers, while others are purely black box autonomous systems, which are uninterpretable and challenging to work with. This article outlines a number of key decisions that need to be considered while producing an intelligent music production system, and identifies some of the assumptions and constraints of each of the various approaches. One of the key aspects to consider in any IMP system is how an individual will interact with the system, and to what extent they can consistently use any IMP tools. The other key aspects are how the target or goal of the system is created and defined, and the manner in which the system directly interacts with audio. The potential for IMP systems to produce new and interesting approaches for analysing and manipulating audio, both for the intended application and creative misappropriation, is considerable.
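A minimal sketch of one classic knowledge-engineered IMP task of the kind this article surveys: automatic level balancing. Using RMS as the loudness proxy and "equal loudness across stems" as the target are simplifying assumptions here; real systems use perceptual loudness models and far richer mixing rules.

```python
import numpy as np

def auto_balance(stems):
    """Scale each stem to the mean RMS level, then sum into a mono mix."""
    rms = np.array([np.sqrt(np.mean(s ** 2)) for s in stems])
    gains = rms.mean() / rms                  # per-stem linear gain
    mix = sum(g * s for g, s in zip(gains, stems))
    mix /= max(1.0, np.max(np.abs(mix)))      # normalise only if the sum clips
    return mix, gains

# Toy stems: a quiet sine "bass" against a loud noise "drum" bed.
sr = 44100
t = np.arange(sr) / sr
bass = 0.05 * np.sin(2 * np.pi * 110 * t)
drums = 0.4 * np.random.randn(sr)
mix, gains = auto_balance([bass, drums])
print("applied gains:", np.round(gains, 2))
```

Even this toy exposes the design decisions the article foregrounds: the target (equal loudness) is baked in rather than negotiated with an engineer, and the system acts on audio directly with no human in the loop.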

15 pages, 286 KiB  
Article
Artificial Intelligence and Music: Open Questions of Copyright Law and Engineering Praxis
by Bob L. T. Sturm, Maria Iglesias, Oded Ben-Tal, Marius Miron and Emilia Gómez
Arts 2019, 8(3), 115; https://doi.org/10.3390/arts8030115 - 6 Sep 2019
Cited by 24 | Viewed by 28270
Abstract
The application of artificial intelligence (AI) to music stretches back many decades, and presents numerous unique opportunities for a variety of uses, such as the recommendation of recorded music from massive commercial archives, or the (semi-)automated creation of music. Due to unparalleled access to music data and effective learning algorithms running on high-powered computational hardware, AI is now producing surprising outcomes in a domain fully entrenched in human creativity—not to mention a revenue source around the globe. These developments call for a close inspection of what is occurring, and consideration of how it is changing and can change our relationship with music for better and for worse. This article looks at AI applied to music from two perspectives: copyright law and engineering praxis. It grounds its discussion in the development and use of a specific application of AI in music creation, which raises further and unanticipated questions. Most of the questions collected in this article are open as their answers are not yet clear at this time, but they are nonetheless important to consider as AI technologies develop and are applied more widely to music, not to mention other domains centred on human creativity.

28 pages, 6178 KiB  
Article
User-Influenced/Machine-Controlled Playback: The variPlay Music App Format for Interactive Recorded Music
by Justin Paterson, Rob Toulson and Russ Hepworth-Sawyer
Arts 2019, 8(3), 112; https://doi.org/10.3390/arts8030112 - 3 Sep 2019
Viewed by 6120
Abstract
This paper concerns itself with an autoethnography of the five-year ‘variPlay’ project. This project drew from three consecutive rounds of research funding to develop an app format that could host both user interactivity to change the sound of recorded music in real-time, and a machine-driven mode that could autonomously remix, playing back a different version of a song upon every listen, or changing part way on user demand. The final funded phase involved commercialization, with the release of three apps using artists from the roster of project partner, Warner Music Group. The concept and operation of the app is discussed, alongside reflection on salient matters such as product development, music production, mastering, and issues encountered through the commercialization itself. The final apps received several thousand downloads around the world, in territories such as France, USA, and Mexico. Opportunities for future development are also presented.
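To make the machine-driven playback mode concrete, here is a minimal sketch of a per-listen version selector. The version names, stem lists, weights, and no-immediate-repeat rule are all invented for illustration; the abstract does not publish variPlay's actual selection logic.

```python
import random

# Hypothetical versions of one song, each a different stem combination.
VERSIONS = {
    "radio":    {"stems": ["drums", "bass", "vox", "synth"], "weight": 0.5},
    "acoustic": {"stems": ["guitar", "vox"],                 "weight": 0.3},
    "club":     {"stems": ["drums", "bass", "synth_big"],    "weight": 0.2},
}

def pick_version(history):
    """Weighted random choice, avoiding an immediate repeat of the last listen."""
    last = history[-1] if history else None
    names = [n for n in VERSIONS if n != last]
    weights = [VERSIONS[n]["weight"] for n in names]
    choice = random.choices(names, weights=weights, k=1)[0]
    history.append(choice)
    return VERSIONS[choice]["stems"]

history = []
for listen in range(3):
    print(f"listen {listen + 1}: playing stems {pick_version(history)}")
```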

21 pages, 3945 KiB  
Article
Learning to Build Natural Audio Production Interfaces
by Bryan Pardo, Mark Cartwright, Prem Seetharaman and Bongjun Kim
Arts 2019, 8(3), 110; https://doi.org/10.3390/arts8030110 - 29 Aug 2019
Cited by 1 | Viewed by 5559
Abstract
Improving audio production tools provides a great opportunity for meaningful enhancement of creative activities due to the disconnect between existing tools and the conceptual frameworks within which many people work. In our work, we focus on bridging the gap between the intentions of both amateur and professional musicians and the audio manipulation tools available through software. Rather than force nonintuitive interactions, or remove control altogether, we reframe the controls to work within the interaction paradigms identified by research done on how audio engineers and musicians communicate auditory concepts to each other: evaluative feedback, natural language, vocal imitation, and exploration. In this article, we provide an overview of our research on building audio production tools, such as mixers and equalizers, to support these kinds of interactions. We describe the learning algorithms, design approaches, and software that support these interaction paradigms in the context of music and audio production. We also discuss the strengths and weaknesses of the interaction approach we describe in comparison with existing control paradigms.
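A minimal sketch of the natural-language interaction this abstract describes: mapping a word like "warm" or "bright" onto an EQ curve. The descriptor-to-gain table below is hard-coded for illustration; the authors' research learns such mappings from listener data rather than fixing them by hand.

```python
import numpy as np

BANDS = np.array([100, 400, 1600, 6400])   # band centre frequencies in Hz
DESCRIPTOR_GAINS_DB = {                    # assumed shapes, not learned ones
    "warm":   [+4.0, +2.0, -1.0, -3.0],
    "bright": [-2.0, -1.0, +2.0, +5.0],
}

def eq_by_word(signal, sr, word):
    """Apply a crude FFT-domain EQ whose shape is looked up by descriptor."""
    gains_db = np.array(DESCRIPTOR_GAINS_DB[word])
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    # Interpolate the band gains across the spectrum on a log-frequency axis.
    curve_db = np.interp(np.log10(np.maximum(freqs, 1.0)),
                         np.log10(BANDS), gains_db)
    return np.fft.irfft(spectrum * 10 ** (curve_db / 20.0), n=len(signal))

sr = 44100
noise = np.random.randn(sr)
warmer = eq_by_word(noise, sr, "warm")
print("energy before/after:", np.sum(noise ** 2).round(1), np.sum(warmer ** 2).round(1))
```

The interesting research problem sits above this sketch: learning what "warm" means to a given user, from feedback or vocal imitation, instead of assuming a fixed curve.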
