Article

Design and Implementation of a Musical System for the Development of Creative Activities Through Electroacoustics in Educational Contexts

by Esteban Peris 1, Adolf Murillo 2,* and Jesús Tejada 2

1 Departament d’Educació i Didàctiques Específiques, Universitat Jaume I de Castelló, 12071 Castelló de la Plana, Spain
2 Institut de Creativitat i Innovacions Educatives, Universitat de València, 46022 València, Spain
* Author to whom correspondence should be addressed.
Signals 2025, 6(2), 16; https://doi.org/10.3390/signals6020016
Submission received: 27 August 2024 / Revised: 21 January 2025 / Accepted: 7 March 2025 / Published: 1 April 2025

Abstract

In the field of music education, the incorporation of technology originally designed for professionals presents both significant opportunities and challenges. These technologies, although advanced and powerful, are often not adapted to meet the specific needs of the educational environment. Therefore, this study details the design and implementation process of a system consisting of a hardware device called “Play Box” and its associated software, “Imaginary Play Box”. The design science research methodology (DSRM), specifically adapted to software development, was used to structure the project. The three phases shown in this study ranged from the conception of an initial prototype to the realisation of working software. During the design phase, a questionnaire was developed to evaluate various aspects of the software, such as the visual interface, the programming of components, and the sound interactivity provided by the Play Box. Panels of experts in music pedagogy and in MAX-MSP programming were used to obtain critical feedback. This expert feedback was crucial for iterating on and refining the software, culminating in a beta version optimised for the creation of electroacoustic music in music education.

1. Introduction

The adoption of digital technologies in music pedagogy has emerged as a central axis of discussion in modern education, as recent studies have shown [1]. The emergence of such technologies has reshaped the landscape of traditional music teaching and creation, as reported by Himonides and Purves [2]. In the face of this evolution, the specialised academic literature in the field has highlighted the urgency of integrating these innovative tools into music teacher training study plans, as suggested by William Bauer or Jonathan Savage [3,4], with a particular focus on collaborative methodologies that foster creativity [5].
From a theoretical perspective [3], the inclusion of technological resources in the context of music education is postulated as a highly valuable didactic mechanism, capable of enriching students’ understanding of fundamental musical concepts and increasing their participation in learning. Webster [6] supports this position, proposing that digital technologies offer students enriching opportunities for individual and collective music creation and production, especially by applying pedagogical approaches centred on practical tasks [7].
Thus, the incorporation of digital technologies in music production has facilitated an unprecedented level of experimentation in the generation of sounds and instruments, leading, according to Ruthmann and Mantie [8], to the emergence of new sonorities and expanding the boundaries of our definition of music. This expansion has extended music education into contemporary dimensions of expression, reducing conflicts between the traditional Western tonal music model and other musical practises [9,10,11]. Music technology has favoured the development of tools that allow sounds to be manipulated and altered dynamically, fostering creative and novel interactions with sound, which has opened up new pathways for music making in music education [2,12,13].
Additionally, the development of new musical instruments and the exploration of new sonorities have a considerable impact on compositional methodologies and performance techniques, as highlighted by Simon Emmerson [14]. These instruments and technologies not only bring new sonic possibilities but also influence the creative process and the interaction between students and their instruments.
Although technological advances have driven significant transformation in many educational fields, music education continues to show a remarkable affinity for traditional educational paradigms that, surprisingly, have failed to integrate key technological innovations that would facilitate sound exploration and creation. As Jeffrey Martin [15] argues, these conventional educational models tend to disregard technological developments that encourage a more experimental and sound-centred approach.
Resistance to adopting methodologies that favour sound experimentation manifests itself in a preference for technologies that replicate the tonal, traditional models historically predominant in the field of music education [16,17,18]. These traditional paradigms act as restrictive filters that limit the inclusion of innovative approaches capable of transcending the conventional conception of the musical note to address richer and more experimental dimensions of sound learning, according to researchers [19,20,21,22].
These findings underline the need to review and reformulate pedagogical approaches in music education to incorporate emerging technologies that promote greater diversity in learning and sound creation practises, aligning with contemporary pedagogical trends that emphasise experimentation and innovation.
This rethinking demands a constant relearning and re-evaluation of the ways in which we organise and conceptualise sound, according to Holland and Chapman [23]. This sound-centred approach underlines the need for new methodologies that allow students to interact with sound in a more direct and tangible way, thus promoting deeper and more experiential learning.
On the other hand, there is a growing concern among certain sectors of society and educational leaders about the excessive use of screens and technology by young people [24]. This trend often translates into a reticence towards the use of technology within the educational context, adopting positions that could be described as anti-technology [25,26]. Contrary to this sentiment, the hardware design proposed in this study seeks to integrate real-world instruments that allow students to reconfigure their interactions through physical contact with the instrument, with the aim of establishing a bridge between the analogue and the digital. The creation of hybrid systems, combining hardware and software, can enhance embodied experiences with sound [27,28,29]. Along these lines, embodied cognition theory argues that knowledge develops not through passive perception but through dynamic interaction with the environment, highlighting the importance of the body as an essential intermediary in the connection between mind and matter [30,31,32]. This approach criticises previous cognitive paradigms for their limited focus on decontextualised mental activity and stresses the relevance of multimodal and phenomenological processes in cognition.
In music education, the incorporation of commercial technology originally designed for professionals presents significant opportunities and challenges. These technologies, while advanced and powerful, are often not adapted to meet the specific needs of the educational environment [33]. Professional tools are robust and have functionalities that can be overwhelming for students and educators due to their complexity and lack of pedagogical adaptability [34].
One of the main challenges is the interface of these tools, which is often not designed for a user who is in the process of learning and can be overly complicated. This can lead to a complex learning curve, where students spend more time trying to understand how technology works than learning musical concepts. Furthermore, the lack of flexibility in these technologies can inhibit the creative and personalised exploration that is crucial in music education [35].
To address these challenges, it is vital to develop additional interfaces or modules that simplify the use of these technologies without compromising their advanced capabilities. This would allow students and educators to take full advantage of the possibilities offered by these tools without being overwhelmed by their complexity. In addition, it is essential to provide specialised training for educators, equipping them with the necessary skills to effectively integrate these technologies into their teaching methods.
Therefore, the aim of this study is to present the design of hardware and software tools that offer greater adaptability to educational needs within a sound-based approach, ensuring that they are accessible and relevant to students and educators alike. This will not only maximise the educational potential of music technology but also open up new avenues for more creative and innovative teaching in the field of music education.

2. Method

The methodology adopted to carry out this project was the design science research methodology (DSRM) [36], oriented to product creation and educational innovation [37]. This section details the four phases of the model, namely (1) problem identification; (2) background and planning; (3) innovations and iterations; and (4) results and final design (Figure 1).

2.1. Background

In the initial phase of the research, issues essential to the development of a hardware/software system oriented towards music education and grounded in a sound-based approach were identified.
Among the problems identified, the following stand out: (1) the shortage of educational software that integrates the pedagogical knowledge of experts in both music education and programming; (2) existing commercial software often does not meet the pedagogical requirements of a sound-based approach; and (3) the integration of music technologies in non-professional educational contexts presents significant challenges for teachers, resulting in educational practises that limit creativity and are often disconnected from creative and physical sound experiences. These elements are crucial to ensure that the design and development of the proposed system are both functional and innovative within the field of music education.
The “Play Box” hardware and its associated software, “Imaginary Play Box”, are part of a set of educational software designed by this research group and oriented towards the design of tools, both analogue and digital, for sound creation and experimentation in primary and secondary school classrooms and conservatories (Aglaya.org, n.d.) [38]. Previously, this research team had designed music software such as Aglaya Play, developed for collaborative sound creation controlled with mobile devices [22], and Acouscapes, developed for the creation of and experimentation with soundscapes [39].
Based on this previous experience in the design of educational music software and on the problems identified, a first alpha prototype of the hardware and software was built for internal testing of the system; this prototype is presented below.

2.1.1. Hardware Description: Play Box

The research team saw the design and incorporation of non-conventional instruments as an opportunity to facilitate processes of sound experimentation through the direct manipulation of the musical object. In this sense, what was sought was a greater relationship between the body and the musical object.
According to Murillo and Riaño [40], this type of relationship can be promising for the understanding of sound phenomena, as the performer establishes connections by interacting with the physical elements that make up the instrument, providing information related to timbre, dynamics, and other musical elements that depend on the type of interactions that take place between the body and the manipulated object (instrument).

2.1.2. Play Box Design Elements (Hardware)

The experimental “Play Box” device was inspired by the “acoustic laptop” (https://youtu.be/2g3hVm-KfD0?si=fcSo9ULgH_AZTI5g) (accessed: 1 July 2024). It has a rectangular metal structure with compact dimensions of 10 cm long, 6 cm wide, and 3 cm high. Internally, it has two contact microphones that capture the sounds emitted by integrated acoustic elements, such as springs of various tensions, a music box with a rotating mechanism that plays melodies, a scraper, multi-pitch blades, metal spirals, and elastic bands of various tensions (Figure 2 and Figure 3). Some of these elements are interchangeable or adjustable, allowing for new sonorities. Additionally, it includes an audio jack output that allows the signal to be amplified or transformed through connection to an audio interface, to effects pedalboard hardware, or to the Imaginary Play Box v.1 software under study in this research.
The design of this instrument prioritised aspects such as portability and small size, facilitating its transport and enhancing its functionality in performance contexts, offering the performer freedom of movement and the possibility of interacting dynamically with the sound space. This mobility is enhanced by the wireless amplification capability, which allows for a flexible spatial arrangement of the instrument during use.
The sound components of the “Play Box” are interchangeable, and their shapes and tensions can be readjusted, which provides great variability in the manipulation and transformation of sound. Likewise, in its execution, various sound activators can be used and combined, such as hands, metal or wooden sticks, and other objects that facilitate exploration, providing a wide range of textures and sound combinations, significantly enriching the sound spectrum of the instrument (Figure 3).
The device used in the tests (Figure 4) for wireless transmission between the Play Box and the sound card is the commercial AirBorne 2.4 GHz Instrument model manufactured by Harley Benton (Musikhaus Thomann, Treppendorf, Germany), a plug-and-play system. The device, which comprises a transmitter and a receiver, transmits on the 2.4 GHz radio band with a signal range of up to 30 m (depending on room conditions). The input impedance (transmitter) is approx. 1 MOhm and the output impedance (receiver) is approx. 2 kOhm, with a latency of <5 ms. The frequency response of the device is 20 Hz–20 kHz, with a resolution of 24 bits at a 48 kHz sample rate. Both the transmitter and receiver are battery powered, with a battery life of up to 4.5 h (depending on the operating situation).

2.2. Initial Programming of the Imaginary Play Box Software v.1.1

The Imaginary Play Box has been programmed exclusively using Max-MSP v.8 (hereafter referred to as Max) [41].
Max programming was performed in a modular, graphical environment by adding interconnected objects (Figure 5). These objects were differentiated according to whether they handled control data (MIDI), an audio signal, or a video signal, with each group of objects corresponding to a different block of the programme as follows: (1) objects in the Max block handle MIDI/control data (continuous grey cables); (2) objects in the MSP block handle audio signals (a tilde (~) is appended to the object name, and connections use dashed yellow cables); (3) objects in the Jitter block handle video signals (the prefix “jit.” precedes the object name, and connections use green dashed cables). In addition to these object blocks, version 8 of the programme added multi-channel expansion, with its own object group identified by the prefix “mc.” in front of the object name and blue dotted cables [42,43,44].
The software was organised with a graphical interface divided into six modules (Figure 6), each providing a different digital sound transformation technique as follows: (1) granular synthesis, (2) delay, (3) reverb, (4) bandpass filter bank, (5) external player, and (6) looper.

2.2.1. Granular

The “Granular” module allows for the transformation of audio files using granular synthesis.
Granular synthesis is the practical application of the quantum sound concept presented by physicist Dennis Gabor [45]. According to Gabor, the quantum is the acoustically indivisible unit of information on which all larger-scale sound phenomena are based [45].
Granular sound synthesis works by disintegrating a continuous sound into a large number of small, independent, time-overlapping sound events called sonic grains, each with a duration of between 1 and 100 ms and a waveform shaped by an amplitude envelope [46,47].
In the ENV GEN block (Figure 7), the duration and amplitude values are assigned to generate an envelope. Then, in the OSC block, the output frequency is assigned, and the different sound events are generated according to the envelope created in the ENV GEN block. Finally, from the grain spatial position input of the OUTN module, the output channel is assigned to each of the sound events.
For this module, a graphic control window has been programmed through which, by means of digital potentiometers, the different parameters necessary for granular synthesis can be set and varied quickly, clearly, and intuitively, together with the spatialisation and the number of granulators acting on the audio file at the same time.
From the potentiometers, it is possible to control and set the following parameters: 1. duration of the micro fragments into which to divide the audio file (grains); 2. panning of the sound left–right; and 3. amplitude of the grains (Figure 8).
The control window is completed by a slider to control the output volume of the module and two LEDs showing the amplitude of the output signal of the left and right channels.
As the granular synthesis implemented here does not operate on a live stream directly but requires an audio file to read from, a buffer has been programmed for live audio input from microphones; it records one second of incoming audio and rewrites it every second.
The module can also receive audio routed from the external sampler module. To do this, the audio file coming from the external sampler is written into the buffer that the synthesiser reads from. Finally, a bell-shaped (Gaussian) envelope has been chosen as the amplitude envelope that shapes the sonic grains (Figure 9).
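To make the principle concrete for readers without a Max background, the following short Python/NumPy sketch illustrates granular synthesis as described above: short, Gaussian-windowed grains are read from a one-second buffer and overlapped at random positions. It is an illustrative sketch, not the authors' Max patch; the sample rate, grain density, and function names are assumptions.

```python
import numpy as np

SR = 48_000  # sample rate in Hz (assumption)

def gaussian_grain(buffer, start, dur_ms, sr=SR):
    """Extract one grain and shape it with a bell-shaped (Gaussian) envelope."""
    n = int(sr * dur_ms / 1000.0)
    grain = buffer[start:start + n]
    t = np.linspace(-1.0, 1.0, len(grain))
    env = np.exp(-0.5 * (t / 0.4) ** 2)  # Gaussian amplitude envelope
    return grain * env

def granulate(buffer, out_len, dur_ms=50, density=400, sr=SR, seed=0):
    """Overlap-add many grains taken from random positions in the source buffer."""
    rng = np.random.default_rng(seed)
    out = np.zeros(out_len)
    n_grain = int(sr * dur_ms / 1000.0)
    for _ in range(density):
        src = rng.integers(0, len(buffer) - n_grain)        # read position in buffer
        dst = rng.integers(0, out_len - n_grain)            # write position in output
        g = gaussian_grain(buffer, src, dur_ms, sr)
        out[dst:dst + len(g)] += rng.uniform(0.2, 1.0) * g  # random grain amplitude
    return out / max(1e-9, np.max(np.abs(out)))             # normalise

# One second of "live input" (here: noise) standing in for the rolling buffer.
live_buffer = np.random.default_rng(1).normal(size=SR)
texture = granulate(live_buffer, out_len=2 * SR)
```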
The module was programmed to choose the control parameters randomly in a previously fixed range, which is known as controlled randomness [48,49,50,51]. The different parameters, with the exception of the number of granulators acting at the same time, have been programmed by setting adjustable minimum and maximum values so that the user indicates a minimum and maximum value for each of the controllable parameters and the synthesiser chooses random values within the set range.
The randomisation algorithm follows Brownian motion, a concept from physics that is widely used in stochastic music (music based on randomness) [50,52,53].
For this purpose, the granulator has been programmed around the Brownian~ object (an external object from the RTC-Lib library for algorithmic composition with Max, programmed by Karlheinz Essl and others), a random number generator based on Brownian motion that produces random numbers between a minimum and a maximum value, excluding the maximum. The synthesiser contains several of these interconnected objects, so that it receives the values set for each of the parameters, performs the stochastic operations, and returns random values within the input range (Figure 10).
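As an illustration only (not the Brownian~ object itself), the following Python sketch shows one way such controlled randomness can be realised: a bounded random walk whose values stay between a user-defined minimum (inclusive) and maximum (exclusive). The step size and seed are arbitrary assumptions.

```python
import numpy as np

def brownian_walk(n, lo, hi, step=0.05, start=None, seed=0):
    """Bounded Brownian-style random walk for a control parameter.

    Values stay in [lo, hi): the maximum is excluded, mirroring the behaviour
    described for the random generator used in the granulator. `step` scales
    the size of each random increment (an assumption in this sketch).
    """
    rng = np.random.default_rng(seed)
    span = hi - lo
    x = (hi + lo) / 2.0 if start is None else start
    out = np.empty(n)
    for i in range(n):
        x += rng.normal(0.0, step * span)          # random increment
        x = min(max(x, lo), np.nextafter(hi, lo))  # clip to [lo, hi)
        out[i] = x
    return out

# Example: grain durations wandering between a user-set 10 ms and 100 ms.
durations_ms = brownian_walk(16, lo=10.0, hi=100.0)
```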

2.2.2. Delay and Reverb

The next sound transformation block is a joint pair of modules that allows delay (a delayed copy of the signal perceived as an echo) and reverberation (shorter delays that are superimposed on the original signal) to be added to the sound (Figure 11).
As with the granulator module, users set the value of the different parameters by means of potentiometers. In the case of the delay module, there are two potentiometers that allow the delay time to be set between 0 and 1000 ms and the feedback time between 0 and 100 ms.
Users can control the volume of the input signal and of the delay separately via sliders, and the module also offers the possibility of adding a transposition of between 1 and 5 octaves to the delay, which can enrich the resulting sound. The delay programming in Max is shown in Figure 12.
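For readers who prefer a textual formulation, the sketch below shows the generic structure of such a feedback delay line in Python/NumPy. It is not the authors' Max patch: the feedback is expressed as a coefficient rather than a time, the mix control is an added assumption, and the optional octave transposition is omitted.

```python
import numpy as np

SR = 48_000  # sample rate (assumption)

def feedback_delay(x, delay_ms, feedback, mix=0.5, sr=SR):
    """Simple feedback delay line: y[n] = x[n] + feedback * y[n - D]."""
    d = max(1, int(sr * delay_ms / 1000.0))
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = y[n - d] if n >= d else 0.0
        y[n] = x[n] + feedback * delayed
    return (1.0 - mix) * x + mix * y  # blend dry and delayed signals

# Example: a single click repeated as decaying echoes every 400 ms.
click = np.zeros(2 * SR)
click[0] = 1.0
echoes = feedback_delay(click, delay_ms=400, feedback=0.6)
```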
The reverb module allows reverb to be added to the signal, with the reverb parameters controlled by potentiometers: the decay time (signal attenuation time) and the mix of the original and processed signal (wet/dry). It also provides the possibility of routing the signal from the different loopers and using it as the input signal for the module, so that delay or reverb can be added to any sound signal. A schematic of the reverberator’s operation is shown in Figure 13.
Figure 13. Schematic of operation of the implemented reverberator [41]. The transmission formula is as follows:
$$ y[n] + a_M \, y[n-M] = x[n] + x[n-M] $$
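Read literally, the difference equation above defines a recursive comb-like section with an M-sample delay in both the feedforward and feedback paths. The Python sketch below applies it sample by sample; the values of M and a_M are arbitrary assumptions chosen only for illustration, not parameters taken from the Max patch.

```python
import numpy as np

def comb_section(x, M, a_M):
    """Apply y[n] + a_M * y[n - M] = x[n] + x[n - M] sample by sample."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        x_d = x[n - M] if n >= M else 0.0  # delayed input
        y_d = y[n - M] if n >= M else 0.0  # delayed output (feedback path)
        y[n] = x[n] + x_d - a_M * y_d
    return y

# Example: an impulse through one section (M of ~23 ms at 48 kHz, arbitrary a_M).
impulse = np.zeros(48_000)
impulse[0] = 1.0
tail = comb_section(impulse, M=1104, a_M=-0.7)
```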

2.2.3. Bandpass Filter Bank

The software also implements a subtractive synthesis module. Subtractive synthesis is one of the most common methods used to create electronic music sounds: it consists of removing (subtracting) a certain range of frequencies from a complex waveform by using filters. This module is based on two bandpass filter banks, one of 10 filters and one of 50 filters, which can be selected by the user (Figure 14). For the programming of the module in Max, the native fffb~ (fast fixed filter bank) object has been chosen. The fffb~ object implements a bank of bandpass filters in such a way that it receives a single input signal for all filters, while the output of each filter is available separately. The educational purpose of the software and the complexity of working with audio filters have also been taken into account, so the module has been programmed to be easy and intuitive to use without previous knowledge.
The number of filters in the bank is set by default, and the three parameters necessary for the operation of the filters (the centre frequency at which each filter operates, the Q factor (bandwidth), and the filter gain) are chosen and set randomly by the module itself from a predefined range as follows: frequency between 10 and 10,000 Hz; Q factor between 0 and 9999; and gain between 0.5 and 9.5.
Figure 15 shows the operating scheme of a resonant bandpass filter, according to the following formula:
$$ H(z) = \frac{g\,(1 - r\,z^{-2})}{1 + c_1 z^{-1} + c_2 z^{-2}} $$
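The transfer function above corresponds to the difference equation y[n] = g(x[n] − r·x[n−2]) − c1·y[n−1] − c2·y[n−2]. The Python sketch below implements a small bank of such resonators; the mapping from centre frequency and Q to (r, c1, c2, g), the fixed Q, and the frequency range in the example are assumptions of this sketch, not necessarily the internal design of fffb~.

```python
import numpy as np

SR = 48_000  # sample rate (assumption)

def reson_coeffs(freq, q, sr=SR):
    """One common pole/zero placement for H(z) = g(1 - r z^-2)/(1 + c1 z^-1 + c2 z^-2)."""
    theta = 2 * np.pi * freq / sr
    bw = freq / max(q, 1e-6)                 # bandwidth in Hz from the Q factor
    R = np.exp(-np.pi * bw / sr)             # pole radius derived from the bandwidth
    c1, c2 = -2 * R * np.cos(theta), R * R   # denominator coefficients (poles at R e^{+-j theta})
    r = 1.0                                  # zeros at DC and Nyquist
    z = np.exp(1j * theta)                   # normalise gain to 1 at the centre frequency
    g = abs(1 + c1 / z + c2 / z**2) / abs(1 - r / z**2)
    return r, c1, c2, g

def reson(x, freq, q, sr=SR):
    """Apply one resonant bandpass section sample by sample."""
    r, c1, c2, g = reson_coeffs(freq, q, sr)
    y = np.zeros(len(x))
    for n in range(len(x)):
        x2 = x[n - 2] if n >= 2 else 0.0
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y[n] = g * (x[n] - r * x2) - c1 * y1 - c2 * y2
    return y

def filter_bank(x, freqs, q=20):
    """One input signal for all filters; each filter's output is available separately."""
    return [reson(x, f, q) for f in freqs]

# Example: white noise through a 10-filter bank with random centre frequencies.
rng = np.random.default_rng(0)
noise = rng.normal(size=SR // 2)
outputs = filter_bank(noise, freqs=rng.uniform(100, 10_000, size=10))
```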
The different functionalities of this module are activated or deactivated by means of switches (Max programming in Figure 16). Thus, the user can switch the module on and off, choose between the 10-filter bank and the 50-filter bank, trigger the frequency shift, and choose the input signal of the module from among the external player, white noise, or live input from a microphone. A slider controls the volume of the module’s output signal. The frequency response of one of the bandpass filters is shown in Figure 17.

2.2.4. External Sampler

This module consists of an external audio file player and allows an audio file to be added to the live input signal from a microphone and/or the player’s signal to be routed to the different modules so that it becomes the working audio signal (Figure 18). In addition to the usual player controls, there are options to loop the file, modify the pitch, and play the file in reverse. These options are activated or deactivated by means of switches. Similarly, by activating switches, the signal from the player is routed to the granular, delay, or looper modules (the signal can only be routed to one module at a time; the module is programmed in such a way that activating the routing switch of one module automatically deactivates the routing switches of the others, as sketched below). The output volume of the module is controlled by a slider, and an indicator light shows the output signal strength. The Max programming of this object is shown in Figure 19.
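The exclusive routing logic described in parentheses can be summarised with a small state model. The sketch below is illustrative Python, not the Max patch; the class and destination names simply follow the modules mentioned above.

```python
class ExclusiveRouter:
    """Route the sampler's signal to at most one destination module at a time."""

    DESTINATIONS = ("granular", "delay", "looper")

    def __init__(self):
        self.switches = {d: False for d in self.DESTINATIONS}

    def toggle(self, destination):
        """Turn one routing switch on; any other active switch is turned off."""
        turning_on = not self.switches[destination]
        for d in self.DESTINATIONS:
            self.switches[d] = False
        self.switches[destination] = turning_on
        return self.switches

router = ExclusiveRouter()
router.toggle("granular")  # {'granular': True, 'delay': False, 'looper': False}
router.toggle("delay")     # {'granular': False, 'delay': True, 'looper': False}
```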

2.2.5. Looper

The looper module allows for sound creation based on the repetition of audio fragments [54,55]. For this purpose, the module offers four loopers that can either be made to work separately or in synchronisation (Figure 20).
Each of the loopers takes audio from a buffer (one for each looper) that has been programmed to capture three seconds of audio (from a microphone input) and then rewrite itself until the recording stops. These buffers receive the audio file coming from the external sampler module in case of routing of the signal from this module to the looper module.
In terms of controls, each looper has a switch that activates and deactivates audio capture, together with operating commands for play, pause, reverse (the looper plays the audio from the end to the beginning), and clear (which clears the buffer content). In addition, the possibility of transforming the resulting audio has been included: users can vary the looper’s playback speed manually, by means of an LFO, or by time stretching. The LFO and time stretch are activated by a switch and controlled by a potentiometer. To change the speed manually, it is not necessary to activate a switch; it is sufficient to manipulate the potentiometer directly.
The output volume of each looper is controlled by a slider, which also functions as a signal strength indicator. Finally, on the left side of the module there are three larger switches that affect the looper block as a whole: they record the same audio into all the buffers, activate/deactivate the whole looper, and synchronise the operation of the four loopers. The Max programming of this object is shown in Figure 21.
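As an illustration of the looper behaviour described above (a three-second record buffer, loop playback, reverse, and variable speed), the following Python sketch uses a circular buffer with linear interpolation. It is a simplified model under assumed parameters, not the Max implementation; the LFO and time-stretch options are omitted.

```python
import numpy as np

SR = 48_000  # sample rate (assumption)

class Looper:
    """Three-second circular record buffer with variable-speed loop playback."""

    def __init__(self, seconds=3, sr=SR):
        self.buffer = np.zeros(int(seconds * sr))
        self.write = 0    # write position while recording
        self.phase = 0.0  # fractional read position while playing

    def record(self, block):
        """Write an incoming audio block into the buffer, wrapping around."""
        for s in block:
            self.buffer[self.write] = s
            self.write = (self.write + 1) % len(self.buffer)

    def play(self, n, speed=1.0, reverse=False):
        """Read n samples from the loop; speed != 1 changes pitch and duration."""
        out = np.empty(n)
        step = -speed if reverse else speed
        L = len(self.buffer)
        for i in range(n):
            j = int(self.phase) % L
            frac = self.phase - int(self.phase)
            out[i] = (1 - frac) * self.buffer[j] + frac * self.buffer[(j + 1) % L]
            self.phase = (self.phase + step) % L
        return out

    def clear(self):
        """Clear the buffer content."""
        self.buffer[:] = 0.0

# Example: capture one second of noise, then play it back at half speed, reversed.
loop = Looper()
loop.record(np.random.default_rng(0).normal(size=SR))
octave_down_reversed = loop.play(2 * SR, speed=0.5, reverse=True)
```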
Finally, at the bottom right of the interface, we find the general control block of the software from which we can activate/deactivate the audio, choose the audio input and output source, activate the recording of the general sound output of the software, and control the gain of the input signal and the volume of the general output.
Phase 3. Innovations and iterations
After the design and programming of the software, a first version (functional prototype) was shared with experts in both Max programming and education to obtain their assessment by means of a survey. The survey requested a quantitative assessment, asking them to rate from 1 to 5 different items related to the design and functionality of the software, as well as a qualitative assessment and proposals for improvement.

2.3. Results of Software Evaluation by Experts

2.3.1. Max and Music Technology Expert Ratings

For the evaluation of the first functional version of the software, the team requested the collaboration of five experts with recognised experience both in programming with Max and in the field of music technology. The criteria for their selection were to have more than 10 years of experience in Max programming and to teach music technology in educational centres.
In order to collect their quantitative assessment, the survey was sent to each of them. For the Max and technology experts, the survey consisted of seven evaluation blocks, one for each software module. Each block included 5 or 6 questions depending on the block, with an ordinal 5-point scale where 1 was the lowest score and 5 the highest.
For the analysis of the experts’ quantitative assessment, the maximum possible score for each block was calculated; the experts’ scores were then summed and expressed as a percentage of that maximum for each block.
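In other words, each block's percentage is simply the experts' summed score divided by the block's maximum possible score. The short Python snippet below reproduces this calculation from the totals reported in Table 1 (the table reports these values rounded to whole percentages); it is included only to make the procedure explicit.

```python
# Block totals and maxima as reported in Table 1 (five experts, 5-point items).
blocks = {
    "Granular": (69, 100), "Delay": (92, 125), "Reverb": (66, 100),
    "BPFB": (76, 125), "Sampler": (95, 125), "Looper": (157, 200),
}
for name, (score, maximum) in blocks.items():
    print(f"{name}: {100 * score / maximum:.1f}%")  # percentage of the maximum possible score
```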
The data in Table 1 show that all modules were rated above 60% by the experts, which can be considered a high rating, but there is also a wide difference between the module with the lowest rating, the BPFB (bandpass filter bank) module, and the rest of the modules. The best rated module is the looper module, with 78%.
The assessment of the modules is fairly homogeneous, with a difference of 0.177 (17.7 percentage points) between the proportions obtained by the lowest and highest rated modules.
The high ratings of the individual modules translate into a high overall rating of the software by the experts.
The overall rating of the software is 82% of the highest possible score, which can be considered as a very satisfactory rating as a starting point for the programming part of the first functional version of the software but with a wide margin for improvement, as indicated by the experts in their qualitative assessments and proposals for improvement.
In the qualitative assessments and proposals for improvement, most of the experts agreed that a series of modifications and improvements to the BPFB module would be necessary. There was considerable agreement in the responses, which suggested adding controls that would allow for greater intervention in the module. These suggestions mostly concern the ability to choose the number of filters to be activated, rather than having only two pre-set selection options, as well as adding the functionality to set manually, from the module interface, the different parameters necessary for the operation of the filters, including cut-off frequency, filter gain, bandwidth and Q factor, or the frequency attenuation curve.
In their replies, the experts considered the inability to intervene manually in the filters to be a significant shortcoming, which explains the low rating of this module.
On the other hand, another of the most recurrent suggestions from experts is to add presets to the reverb module that allow for the selection of reverberations that imitate specific architectural spaces, such as a church, auditorium, or room.

2.3.2. Experts in Music Education

As for the experts in education, the collaboration of six actively teaching experts in music education was requested; they were sent a survey with an evaluation similar to the one sent to the experts in Max and music technology. The criteria applied to their selection were having experience in the use of technology in the classroom and having the Play Box hardware, which was provided by the research team of this study.
The survey consisted of a block of rating items for the interface and one block for each module (six in total), in which the experts were asked to rate 1. the ability to generate and transform sound from an external instrument; 2. the ability of the software as a whole to generate and transform sound; and 3. the suitability of the software for educational contexts in the classroom.
For the analysis of the experts’ quantitative evaluation, the same procedure was followed as in the previous analysis, i.e., the maximum possible score for each block was calculated, and the sum of the experts’ evaluations was then expressed as a percentage of that maximum.
Analysis of the data (Table 2) shows that both the interface and each of the modules are rated at over 64%, with 65% being the lowest and 85% the highest.
Similarly, it can be seen that the modules with the lowest ratings are the modules dedicated to synthesis (granular: 78% and BPFB: 77%) and the looper module (65%), the latter being the module with the lowest rating, completely opposite to the result obtained by this module in the rating by the Max and technology experts.
The best rated modules are the sampler and reverb, with 85% each and the delay, with 84%.
We can infer from the above data that the synthesis modules may be more complex and less intuitive for students to use in the classroom, while the modules offering delay and reverb, as well as the sampler, may be simpler and more intuitive to use.
However, this difference in ratings between modules is not reflected in the overall ratings for generating and transforming sound, which reached 137 out of 150 (91%) for the modules taken together and 26 out of 30 (86%) for the overall capacity of the software (Table 3).
Regarding the suitability of the software for classroom use, the experts’ rating was again very high, with 27 out of 30 (90%), which shows that the software was generally very well received. In terms of qualitative assessments and proposals for improvement, almost all experts pointed to the need to increase the size of the controllers, considering them to be too small and impractical.
Likewise, the majority said that consideration should be given to the possibility of adding some kind of pop-up window to provide explanations in the modules that they consider to be more complex to use, including granular, BPFB, and looper.
On the other hand, there is agreement between education experts and Max and technology experts on the recommendation to add some selectable presets with reverberations of different architectural spaces to the reverb module.

3. Implementation of Innovations from the Expert Panels

In the process that followed the analysis of the data collected through the collaboration with the experts, a first iteration of the design was carried out. The innovations implemented are detailed below.

3.1. Changes in the Graphical Interface of the Software Imaginary Play Box v.1.2

Following the analysis of the experts’ evaluations, the qualitative suggestions made by the experts were taken into account, and some aspects and parameters of the software were modified based on the contributions and proposals for improvement received (Figure 22).
As mentioned in the data analysis section of the music education experts’ evaluations, most of them agreed that it would be desirable to reduce the font size of the text in the modules and to increase the size of the controllers. These changes have therefore been made to the interface.
It has also been considered appropriate to change the background colours of the interface, as well as to unify the colours of the on-off buttons for the sake of coherence.

3.2. Transformations Applied to Max Programming

With regard to the suggestions and proposals for improvement made by the Max experts, the decision was made to recode the software to add the proposals relating to providing a greater possibility of intervention in the filters in the bandpass filter bank module and to add presets for selecting reverberation modes based on architectural spaces.
Likewise, a series of pop-up windows will be added to the Max programming in the synthesis modules and the looper module to provide basic information on the principles of the type of synthesis used and the operation of the modules.
These programming modifications will be implemented in a beta version that will start its classroom iterations of the software in phase 3 of the project.

3.3. Phase 4 Beta Design and Introduction of the Proposed Innovation in the Educational Context

The project is currently at the beginning of phase 4, the introduction of the proposed innovation in the educational context with students (first iteration), data collection and analysis, and subsequent iterations [37].
During the school year 2024/25, the hardware and a beta version of the software will be sent to the collaborating schools to test its functionality in real educational contexts and start collecting test data.
Once the data have been collected, phase 4 will begin; this is the phase that provides results, solutions to the problem posed, and the final design of innovations in the classrooms, which is the conclusion of the research [37].

4. Some Educational Applications of the System

During the development of the tools presented in this study, various sound creation workshops were carried out with the participation of students from different educational contexts; these workshops served as a guideline for the design of the didactic activities described here (Figure 23). The practical applications presented below aim to offer the reader some simple ideas about the didactic possibilities of the system in music education.
Vignette number 1: A sound story
Students will select a short literary narrative to turn into sound. Initially, a planning phase will be carried out, in which sound elements representing the actions, characters, and environments described in the text will be identified and assigned. Then, using the Play Box hardware, students will develop a process of sound exploration to search for sounds that fit the characteristics of the text, encouraging the use of experimental and creative methods. The post-production phase will involve the digital manipulation of these sounds through the effects of the Imaginary Play Box v.1 software. Then, using a recording of the story played back on the external sampler, precise synchronisation with the narrative structure of the story will be sought. The activity will culminate in a group presentation in which each team will present their sound work, leading to a critical discussion of the aesthetic and technical decisions taken.
Vignette number 2: Composing sound for video
A video clip without sound will be selected for this activity. Students will carry out a detailed analysis of the visual content to determine specific sound requirements, such as ambience, dialogue, and incidental music. Using the Play Box, participants will experiment with generating sounds that correspond to the visual dynamics of the clip, promoting innovation in sound manipulation and capture. The integration of these sounds will be carried out in the software, where timing and final mix adjustments will be made. Two versions will be presented: one recorded in real time using the Play Boxes connected to the software, and another produced in post-production, involving recording work with other software in combination with the Imaginary Play Box.
Vignette number 3: Sound library generation
In the design of different projects, the creation of original sound is a common creative activity in the professional world and is very motivating for students. This activity will therefore begin with a short debate in which students discuss and define the categories of sounds that will be most useful for their library, considering a variety of contexts and applications. In the sound creation process, the Play Box can be combined with other sound sources, from ambient sounds to vocals, which can then be mixed through the different effects modules that make up the Imaginary Play Box software v.1. Detailed organisation and cataloguing will allow for efficient and effective retrieval of the sounds for their application in different pedagogical and creative contexts.

5. Conclusions

This study presented the innovative design of the software “Imaginary Play Box”, aimed at facilitating creative processes through a sound-based approach [21]. This type of study responds to contemporary needs for the integration of digital technologies in music education, highlighted by authors such as Chen, O’Neill, Himonides, or Purves [1,2], who underline the transformation of traditional teaching paradigms through the adoption of new technologies.
The development of this software benefited from interdisciplinary collaboration between music pedagogy experts and Max programmers, which deeply informed the final software design in both pedagogical and technical terms. Using the design science research methodology [37], the project was able to iterate effectively on the initial design through continuous feedback from expert panels, ensuring that the final product was not only innovative but also practical and theoretically solid.
The present hardware–software ensemble embodies a vision of music teaching that departs from the traditional model based on Western classical music and its characteristic systems of tuning and scalar organisation [56,57], focusing instead on teaching that places sound and its own characteristics at the centre of learning [58,59,60].
This approach to teaching is closely tied to technology, so the different phases of the design could serve as a guide for other researchers interested in the design of educational music software. The “Imaginary Play Box” facilitates an immersive interaction with sound, which could help students to understand musical concepts intuitively and could also push the boundaries of traditional music learning, as demonstrated by the exploration of new sonorities generated through digital technology [8].
On the other hand, the results obtained indicate that the “Imaginary Play Box” could have a significant impact on this type of musical learning, facilitating students’ exploration and development of creative skills in an intuitive and effective way. This software, in combination with the Play Box hardware, represents a valuable set of tools that can be integrated into music curricula to enhance teaching and creative learning, as the combination of both systems facilitates an approach to the sound phenomenon from a physical experience that is completed or expanded through the different modules that make up the software [27]. However, it will be central to understand and consider the significance of teachers in the interpretation and adoption of innovation in practice [61,62].
This study shows how technology can be designed and applied to effectively transform music education, aligning with pedagogical paradigms that promote experimentation and innovation. Future research should continue to explore how the integration of hardware and software can be optimised to foster creativity and interaction in music learning, enhancing students’ expression and skill development through their actual implementation in educational contexts.

Author Contributions

Conceptualization, E.P. and A.M.; methodology, A.M.; software, E.P.; validation, J.T.; investigation, E.P. and A.M.; writing—original draft preparation, E.P., A.M. and J.T.; writing—review and editing, E.P., A.M. and J.T.; supervision, J.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data of this research can be consulted at https://mega.nz/folder/ATwEBBzY#Fa7D1R0pM1_0_XeuPPqDQw (accessed on 1 August 2024). Additional information can be found on our website: www.aglaya.org (accessed on 1 August 2024). The software can be downloaded at https://www.aglaya.org/descargas (accessed on 1 August 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, J.C.W.; O'Neill, S.A. Computer-mediated composition pedagogy: Students engagement and learning in popular music and classical music. Music Educ. Res. 2020, 22, 185–200. [Google Scholar]
  2. Himonides, E.; Purves, R. The role of technology. In Music Education in the 21st Century in the United Kingdom: Achievements, Analysis and Aspirations; Hallam, S., Creech, A., Eds.; Institute of Education: Dublin, Ireland, 2010; pp. 123–140. [Google Scholar]
  3. Bauer, W. Music Learning Today: Digital Pedagogy for Creating, Performing, and Responding to Music; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  4. Savage, J. Technology and the Music Teacher. In Music and Music Education in People's Lives: An Oxford Handbook of Music Education; McPherson, G., Welch, G., Eds.; Oxford University Press: Oxford, UK, 2019; pp. 567–582. [Google Scholar]
  5. Bautista, A.; Toh, G.Z.; Mancenido, Z.; Wong, J. Student-centered pedagogies in the Singapore music classroom: A case study on collaborative composition. Aust. J. Educ. 2021, 43, 1–25. [Google Scholar]
  6. Webster, R.P. Computer-based technology. In The Child as Musician: A Handbook of Musical Development; McPherson, G., Ed.; Oxford University Press: Oxford, UK, 2016; pp. 500–519. [Google Scholar]
  7. Virtaluoto, J.; Suojanen, T.; Isohella, S. Minimalism Heuristics Revisited: Developing a Practical Review Tool. Tech. Commun. 2021, 68, 20–36. [Google Scholar]
  8. Ruthmann, A.; Mantie, R. The Oxford Handbook of Technology and Music Education; Oxford Academic: Oxford, UK, 2017. [Google Scholar]
  9. Jorgensen, E.R. Values and Music Education; Indiana University Press: Bloomington, IN, USA, 2021. [Google Scholar]
  10. O'Neill, S.A. Transformative music engagement and musical flourishing. In The Child as Musician, 2nd ed.; McPherson, G.E., Ed.; Oxford University Press: Oxford, UK, 2016; pp. 606–625. [Google Scholar]
  11. Yang, X. The perspectives of teaching electroacoustic music in the digital environment in higher music education. Interact. Learn. Environ. 2022, 32, 1183–1193. [Google Scholar]
  12. Grossman, P.; McDonald, M. Back to the Future: Directions for Research in Teaching and Teacher Education. Am. Educ. Res. J. 2008, 45, 184–205. [Google Scholar]
  13. Tan, A.-G.; Yukiko, T.; Oie, M.; Mito, H. Creativity and music education: A state of art reflection. In Creativity in Music Education; Yukiko, T., Tan, A.-G., Oie, M., Eds.; Springer: London, UK, 2019; pp. 3–16. [Google Scholar]
  14. Emmerson, S. Living Electronic Music; Ashgate: Farnham, UK, 2007. [Google Scholar]
  15. Martin, J. Tradition and Transformation: Addressing the gap between electroacoustic music and the middle and secondary school curriculum. Organ Sound 2013, 18, 101–107. [Google Scholar]
  16. Bull, A. Class, control, and Classical Music; Oxford University Press: Oxford, UK, 2019. [Google Scholar]
  17. Bull, A.; Scharff, C. ‘McDonald's music’ versus ‘serious music’: How production and consumption practices help to re-produce class inequality in the classical music profession. Cult. Sociol. 2017, 11, 283–301. [Google Scholar]
  18. Dwyer, R. Music Teachers' Values and Beliefs; Routledge: London, UK, 2016. [Google Scholar]
  19. Landy, L. Understanding the Art of Sound Organization; The MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  20. Landy, L. Making Music with Sounds; Routledge: London, UK, 2012. [Google Scholar]
  21. Holland, D. A constructivist approach for opening minds to sound-based music. J. Music Technol. Educ. 2015, 8, 23–29. [Google Scholar]
  22. Murillo, A.; Riaño, M.E.; Tejada, J. Aglaya Play: Designing a Software solution for group compositions in the music classroom. Music Technol. Educ. 2021, 13, 239–261. [Google Scholar]
  23. Holland, D.; Chapman, D. Introducing New Audiences to Sound-Based Music through Creative Engagement. Organ Sound 2019, 24, 240–251. [Google Scholar]
  24. Kazi, S. Screens, Swipes, and Society: The Future of Digital Citizenship in an Ever-Changing Tech Landscape. Childhood Educ. 2024, 100, 48–51. [Google Scholar]
  25. Gall, M.R.; Breeze, N. The sub-culture of music and ICT in the classroom. Technol. Pedagog. Educ. 2007, 16, 41–56. [Google Scholar]
  26. Gall, M. Trainee teachers' perceptions: Factors that constrain the use of music technology in teaching placements. Music Technol. Educ. 2013, 6, 5–27. [Google Scholar]
  27. Leman, M. Embodied Music Cognition and Mediation Technology; The MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  28. Leman, M. Musical entrainment subsumes bodily gestures: Its definition needs a spatiotemporal dimension. Empir. Musicol. Rev. 2012, 7, 63–67. [Google Scholar]
  29. Leman, M.; Maes, P.-J. The Role of Embodiment in the Perception of Music. Empir. Musicol. Rev. 2012, 9, 236–246. [Google Scholar]
  30. Maturana, H.R.; Varela, F.J. Autopoiesis and Cognition: The Realization of the Living; Reidel: Dordrecht, The Netherlands, 1980. [Google Scholar]
  31. Maturana, H.R.; Varela, F.J. The Tree of Knowledge: The Biological Roots of Human Understanding; New Science Library: Boston, MA, USA, 1987. [Google Scholar]
  32. Varela, F.J.; Thompson, E.; Rosch, E. The Embodied Mind: Cognitive Science and Human Experience; The MIT Press: Cambridge, MA, USA, 1991. [Google Scholar]
  33. Hein, E. The Promise and Pitfalls of the Digital Studio. In The Oxford Handbook of Technology and Music Education; Ruthmann, A., Mantie, R., Eds.; Oxford University Press: Oxford, UK, 2017; pp. 383–395. [Google Scholar]
  34. Howell, G. Getting in the way? Limitations of technology in community music. In The Oxford Handbook of Technology and Music Education; Ruthmann, A., Mantie, R., Eds.; Oxford University Press: Oxford, UK, 2017; pp. 449–463. [Google Scholar]
  35. Peppler, K. Interest-Driven Music Education: Youth, Technology, and Music Making Today. In The Oxford Handbook of Technology and Music Education; Ruthmann, S.A., Mantie, R., Eds.; Oxford University Press: Oxford, UK, 2017; pp. 191–202. [Google Scholar]
  36. Peffers, K.; Tuunanen, T.; Rothenberger, M.A.; Chatterjee, S. A design science research methodology for information systems research. J. Manage. Inform. Syst. 2007, 24, 45–77. [Google Scholar]
  37. Štemberger, T.; Cencič, M. Design Based Research: The Way of Developing and Implementing Educational Innovation. World J. Educ. Technol. 2016, 8, 180–189. [Google Scholar]
  38. Aglaya.org. Available online: https://www.aglaya.org/about (accessed on 15 August 2024).
  39. Tejada, J.; Murillo, A.; Berenguer, J.M. Acouscapes: A software for ecoacoustic education and soundscape composition in primary and secondary education. Organ Sound 2023, 29, 55–63. [Google Scholar]
  40. Murillo, A.; Riaño, M.E. Play Box: A sound artefact to approach sound experimentation in the classroom. An exploratory study based on an electroacoustic creation intervention in initial teacher training. Rev. Interuniv. Form. P 2023, 98, 93–116. [Google Scholar]
  41. Cycling'74; IRCAM. Max-MSP v.8, Computer Software: Paris, France, 2019.
  42. Cipriani, A.; Giri, M. Electronic Music and Sound Design: Theory and Practice with Max 8; Contemponet: Roma, Italy, 2019. [Google Scholar]
  43. Manzo, V.J. Max/MSP/Jitter for Music: A Practical Guide to Developing Interactive Music Systems for Education and More; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
  44. Taylor, G. Step by Step: Adventures in Sequencing with Max/MSP.; Cycling ‘74: San Francisco, CA, USA, 2018. [Google Scholar]
  45. Gabor, D. Acoustical quanta and the theory of hearing. Nature 1947, 159, 591–594. [Google Scholar]
  46. Roads, C. Microsound; The MIT Press: Cambridge, MA, USA, 2001. [Google Scholar]
  47. Vaggione, H. Articulating Microtime. Comput. Music J. 1996, 20, 33–38. [Google Scholar]
  48. Morgan, R.P. La música del Siglo XX.; Akal Ediciones: Madrid, Spain, 1994. [Google Scholar]
  49. Pritchett, J. The Music of John Cage; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  50. Roig-Francolí, M.A. Understanding Post-Tonal Music; McGraw-Hill Education: New York, NY, USA, 2008. [Google Scholar]
  51. Xenakis, I. Formalized Music: Thought and Mathematics in Composition; Pendragon Press: Stuyvesant, NY, USA, 1992. [Google Scholar]
  52. de León, L.P.; Betored, P.S.; Mayo, R.M.D.; Prada, R.P. The Language of Aleatoric Music: Trends, Reflections, and Didactic Proposals; Wanceulen Editorial S.L.: Sevilla, Spain, 2023. [Google Scholar]
  53. Santamaría, J. Brownian Motion: A Paradigm of Soft Matter and Biology. Cienc. Exact Fís Nat. 2013, 106, 39–54. [Google Scholar]
  54. Manning, P. Electronic and Computer Music; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
  55. Roads, C. Composing Electronic Music: A New Aesthetic; Oxford University Press: Oxford, UK, 2015. [Google Scholar]
  56. Kühn, C. History Of Musical Composition In Annotated Examples; Idea Books: Amsterdam, The Netherlands, 2004. [Google Scholar]
  57. Schoenberg, A. Theory of Harmony; University of California Press: Oakland, CA, USA, 2010. [Google Scholar]
  58. Chion, M. Sound: An Acoulogical Treatise; Duke University Press: Durham, NC, USA, 2016. [Google Scholar]
  59. Schafer, R.M. The Composer in the Classroom; Melos: Buenos Aires, Argentina, 2007. [Google Scholar]
  60. Schaeffer, P. Treatise on Musical Objects; Éditions du Seuil: Paris, France, 1966. [Google Scholar]
  61. Tejada, J.; Thayer, T. Design and validation of a music technology course for initial music teacher education based on the TPACK Model and the Project-Based Learning approach. J. Music Technol. Educ. 2019, 12, 225–246. [Google Scholar]
  62. Könings, K.D.; Brand-Gruwel, S.; van Merriënboer, J.J.G. Teachers' perspectives on innovations: Implications for educational design. Teach. Teach. Educ. 2007, 23, 985–997. [Google Scholar]
Figure 1. DSRM process model [36].
Figure 2. Schematic and internal image of the Play Box.
Figure 3. External image of the Play Box hardware.
Figure 4. Play Box connected to a wireless sound system.
Figure 5. Matrix object in the different Max modules.
Figure 6. Graphical interface initial version.
Figure 7. Schematic diagram of the operation of a granulator [46].
Figure 8. Granular module.
Figure 9. Overview of the programming of the granular module in Max with the amplitude envelope on the bottom right.
Figure 10. Granulator operating core.
Figure 11. Delay and reverb module.
Figure 12. Max delay programming.
Figure 14. Bandpass filter bank module.
Figure 15. Operation diagram of a resonant bandpass filter [41].
Figure 16. Max programming of the bandpass filter bank module.
Figure 17. Frequency response of one of the bandpass filters.
Figure 18. External sampler module.
Figure 19. Programming the external sampler module in Max.
Figure 20. Looper module.
Figure 21. Max programming of a looper.
Figure 22. New graphical environment of the software Imaginary Play Box v.1.2.
Figure 23. Students’ sound experimentation processes with the Play Box and different hardware effect modules.
Table 1. Scoring for each of the assessment blocks.

Module | Granular (Max 100) | Delay (Max 125) | Reverb (Max 100) | BPFB (Max 125) | Sampler (Max 125) | Looper (Max 200)
Score  | 69                 | 92              | 66               | 76             | 95                | 157
%      | 69                 | 74              | 66               | 60             | 76                | 78
Table 2. Scoring of the interface assessment blocks and the different modules.

Module | Interface (Max 120) | Granular (Max 120) | Delay (Max 120) | Reverb (Max 120) | BPFB (Max 120) | Sampler (Max 120) | Looper (Max 120)
Score  | 92                  | 85                 | 101             | 102              | 93             | 102               | 79
%      | 76                  | 71                 | 84              | 85               | 77             | 85                | 66
Table 3. Overall rating of software for generating and transforming sound.

       | Global by Module (Max 150) | Global Overall (Max 30)
Score  | 137                        | 26
%      | 91                         | 86
