Article

Algorithmic Music for Therapy: Effectiveness and Perspectives

by Alfredo Raglio, Paola Baiardi, Giuseppe Vizzari, Marcello Imbriani, Mauro Castelli, Sara Manzoni, Francisco Vico and Luca Manzoni

1 Istituti Clinici Scientifici Maugeri IRCCS, 27100 Pavia, Italy
2 Dipartimento di Informatica Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, Viale Sarca, 336, 20126 Milano, Italy
3 NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
4 ETSI Informatica, Andalucia Tech, University of Malaga, 29071 Malaga, Spain
5 Dipartimento di Matematica e Geoscienze, Università degli Studi di Trieste, Via Valerio 12/1, 34127 Trieste, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(19), 8833; https://doi.org/10.3390/app11198833
Submission received: 7 September 2021 / Revised: 16 September 2021 / Accepted: 18 September 2021 / Published: 23 September 2021
(This article belongs to the Special Issue Generative Models in Artificial Intelligence and Their Applications)

Abstract

This study assessed the short-term effects of conventional (i.e., human-composed) and algorithmic music on the relaxation level. It also investigated whether algorithmic compositions are perceived as music and are distinguishable from human-composed music. Three hundred twenty healthy volunteers were recruited and randomly allocated to two groups where they listened to either their preferred music or algorithmic music. Another 179 healthy subjects were allocated to four listening groups that listened, respectively, to music composed and performed by a human; music composed by a human and performed by a machine; music composed by a machine and performed by a human; and music composed and performed by a machine. In the first experiment, participants underwent one of the two music listening conditions (preferred or algorithmic music) in a comfortable state. In the second experiment, participants were asked to evaluate, through an online questionnaire, the musical excerpts they listened to. The Visual Analogue Scale was used to evaluate their relaxation levels before and after the music listening experience. Other outcomes were evaluated through the responses to the questionnaire. The relaxation level obtained with the music created by the algorithms is comparable to the one achieved with preferred music. Statistical analysis shows that the relaxation level is not affected by the composer, the performer, or the existence of musical training. On the other hand, the perceived effect is related to the performer. Finally, music composed by an algorithm and performed by a human is not distinguishable from that composed by a human.

1. Introduction

Music Therapy was born and developed as a discipline strongly characterized by the relational component. It is mainly based on active techniques, especially sound-music improvisation [1,2]: patient and therapist interact using rhythmic-melodic musical instruments, expressing emotions and activating a process of sharing and (co)regulation. This allows for significant effects at the communication/relationship level, but also on behavioral, psychological, and cognitive aspects [3,4,5]. Alongside this approach, active rehabilitation techniques supported by music have also been developed; Neurological Music Therapy can be considered an example [6,7,8,9]. However, as shown in the literature, the use of music listening is also growing in the field of music therapy, showing how the musical contents (independently of the relational component) have an important impact on human beings, producing significant effects at the physiological, psychological, and behavioral level [4,10,11,12,13]. Music listening is widely used in some pathological contexts, such as the treatment of pain [14,15] and cancer [16], in hospital settings to reduce anxiety and stress resulting from hospitalization or medical interventions [17,18], and in the treatment of some psycho-behavioral symptoms that characterize chronic or degenerative diseases [19,20,21,22].
The complexity of the musical phenomenon makes a scientific approach to therapy with music difficult, and with it the understanding of cause–effect connections (what produces what). In this regard, the possible introduction of information technology in the music and music therapy fields can be a valuable and important support. Recently, artificial intelligence has been used in various areas, including music composition [23,24]. Generally, this allows for the creation of new music styles or of music based on a previous style [25,26]. However, the possibility of creating music for therapy remains largely unexplored. Some previous studies describe the Melomics-Health algorithm that introduced this possibility [27,28,29,30]. Melomics-Health can create music based on specific therapeutic needs, mainly to reduce stress, anxiety [27], and pain [3]. In further detail, algorithmic music relies on composition methods developed for specific therapeutic use [29]. Thus, it does not constitute a musical genre, but a way of composing music. The algorithm simplifies the complexity of musical structures by composing melodies characterized by musical parameters (timbre, tempo, intervals, tonal/modal background, pitch range, duration of sounds, pauses) controlled and modulated in relation to therapeutic needs [29]. On the one hand, this allows for standardized musical proposals; on the other hand, it allows people to relate the musical structure to its potential effects.
Algorithmic listening differs from the main approaches to music listening usually used in the music therapy context, which are based mainly on conventional musical repertoires, familiar to the patients and therefore predictable, and which also take into account their musical tastes [31]. Although neuroscience has shown the impact of music based on these characteristics [32,33], therapeutic musical choices should also consider musical parameters, especially when music is used not only to produce well-being but also to reduce specific symptoms such as anxiety, stress, and pain [27,34,35]. With these premises, two musical models aimed at activating or relaxing have been created through Melomics-Health.
In particular, researchers have paid attention to relaxation, which is a fundamental and transversal topic in the field of therapy. This study completes previous research in which predictive factors of success in the use of conventional or Melomics-Health music approaches were identified through machine learning techniques [36]. The objectives of this study are (a) to understand whether listening to algorithmic music is more or less relaxing than listening to conventional music (Phase 1 of the study); (b) to assess whether algorithmic music is perceived as music or not, how much the composer and performer (human or machine) affect music listening concerning relaxation, and whether music composed by a human can be distinguished from that composed by an algorithm (Phase 2 of the study).

2. Materials and Methods

2.1. Phase 1

This part of the study involved the same sample of healthy subjects described in the abovementioned article by Raglio et al. [36], consisting of 323 healthy volunteers (162 females and 161 males). The distribution of volunteers by age is the following: 109 between 25 and 44 years, 141 between 45 and 64 years, and 64 at least 65 years old. A total of 156 participants had musical training, while the remaining 167 were without musical training (here, musical training is defined as at least three years of musical education and practice). In the considered population, less than 1% rarely listen to music, 32% listen to music infrequently (not daily), 38% listen daily for less than one hour, while the remaining subjects listen to music at least one hour a day. The volunteers were allocated to two homogeneous groups (with and without musical training or practice) and stratified by gender, age, and education. Each final subgroup was then formally randomized to conventional (self-selected) or Melomics-Health music listening. For music generation, the pipeline consisted of MIDI file generation with the Music21 Python library (http://web.mit.edu/music21/, accessed on 3 September 2021), followed by synthesis with Apple Logic Pro 10 and the Native Instruments sound library (Kontakt 5). A sample of algorithmically generated music is available at https://open.spotify.com/album/4QBVhPmHTwXN9IWiB8ZgJS (accessed on 3 September 2021). We chose these professional music-production tools to avoid creating music that the volunteers would perceive as “mechanical”. To this end, we also relied on the expertise of musicians who helped us set the parameters of these tools. After randomization, the participants underwent one of the two music listening conditions (with earphones) in a state of comfort. Each music listening session lasted approximately 9 min. In the conventional music listening group, before the experiment, the researcher asked each subject to select a list of 2–3 preferred relaxing pieces of music; the only restriction was a total length of 9 min at most. In the Melomics-Health group, participants listened to three relaxing pieces of music with a total length of 9 min composed by the Melomics-Health algorithm. Before and after the music listening experience, both groups (Individualized Music Listening Group = IL and Melomics-Health Group = MH) completed the Visual Analogue Scale (VAS) to evaluate their level of relaxation. In addition, at the end of the experience, the participants were asked to express, again on the VAS, how much they enjoyed the music listening and how strongly it evoked images and/or emotions.
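As an illustration of this kind of generation pipeline, the sketch below shows how a short melody with fixed, controlled parameters (a single clarinet timbre, slow tempo, narrow pitch range, long note durations) could be written to a MIDI file with the music21 library. It is not the Melomics-Health algorithm, and all parameter values are illustrative assumptions.

# A minimal sketch, assuming the music21 package is installed; it only
# illustrates a melody whose parameters (timbre, tempo, pitch range, note
# durations) are fixed in advance, not the Melomics-Health composition method.
import random

from music21 import instrument, note, stream, tempo

random.seed(42)  # reproducible toy example

melody = stream.Stream()
melody.append(instrument.Clarinet())           # single timbre, as in the study
melody.append(tempo.MetronomeMark(number=60))  # slow tempo for a relaxing character

PITCHES = ["C4", "D4", "E4", "G4", "A4"]       # narrow, consonant pitch range (assumed)
DURATIONS = [1.0, 2.0, 3.0]                    # long note values, in quarter lengths (assumed)

for _ in range(32):
    n = note.Note(random.choice(PITCHES))
    n.quarterLength = random.choice(DURATIONS)
    melody.append(n)

# The resulting MIDI file could then be rendered with a software instrument,
# analogously to the Logic Pro/Kontakt 5 synthesis step described above.
melody.write("midi", fp="relaxing_sketch.mid")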

2.2. Phase 2

This part of the study involved 179 healthy subjects, with and without musical training or practice, who participated in an online questionnaire. A first form, distributed through several channels (different websites and social media), was used to recruit potential participants: it provided information about the goals of the study, the estimated duration of the overall music listening and questionnaire completion, and the fact that participation required a quiet situation in which the listener would not disturb or be disturbed by people nearby, preferably listening to the different pieces with earphones. This first form gathered contact information that was used to send a link to a second form containing the actual music listening experience and questionnaire. A professional composer was asked to create a relaxing piece using the same parameters used by the algorithm. Therefore, pieces composed for the same instrument (clarinet) and with overlapping characteristics (tempo, duration, harmonic background, pitch range) were used. After recruitment, the participants were randomized to achieve a balance between the four different groups: the first listened to pieces composed and performed by a human (HH group); the second listened to pieces composed by a human and performed by a machine (HC group); the third listened to pieces composed and performed by a machine (CC group); and the fourth listened to pieces composed by a machine and performed by a human (CH group). Participants were asked to evaluate whether what they had listened to (1) could be considered music; (2) produced an immediate relaxation/de-activation; (3) was composed by a human being or a machine. To keep the two tasks separate, the first required listening to three 90 s pieces, while the second required listening to two 60 s pieces. Statistical analyses were based on the Mann–Whitney U test [37] to compare the relaxation levels for the different types of listening, as shown in Tables 1 and 3, and on the chi-squared test to compare responses to questions with a yes/no answer (music/non-music; human/artificial composer), as shown in Tables 2 and 4. The research was approved by the local ethics committee (Protocol Number 2175CE, 11 January 2018), and participants signed an informed consent form before enrollment.
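As a minimal illustration of these two tests, the sketch below runs a two-sided Mann–Whitney U test on hypothetical VAS relaxation scores for two listening groups and a chi-squared test on a hypothetical 2 x 2 table of music/non-music answers. It uses SciPy and contains no study data; all counts and scores are invented placeholders.

# A minimal sketch of the two statistical tests described above, run on
# hypothetical data (none of the study's actual measurements appear here).
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical VAS relaxation scores (0-10) for two listening groups.
vas_group_a = rng.uniform(4, 10, size=45)
vas_group_b = rng.uniform(3, 10, size=45)

u_stat, p_value = mannwhitneyu(vas_group_a, vas_group_b, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

# Hypothetical counts of "music" vs. "non-music" answers in two groups,
# arranged as a 2 x 2 contingency table for the chi-squared test.
contingency = np.array([[40, 5],    # group A: music, non-music
                        [33, 12]])  # group B: music, non-music
chi2, p_value, dof, _ = chi2_contingency(contingency)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")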

3. Results

3.1. Phase 1

Figure 1a shows that the relaxation level obtained with the music created by the algorithms (MH) is similar to the one achieved with the pieces of music individually selected by the participants (IL).
Moreover, musical training does not result in a different variation of the relaxation level. Figure 1b displays how much the participants liked the music they listened to. The algorithmically created music does not reach the same VAS values as the individually selected music. Also in this case, similar behavior can be observed regardless of the presence or absence of musical training. Figure 1c,d shows how IL and MH music can evoke images or emotions in the participants. Individualized listening has a greater evocative value for both images and emotions; however, algorithmic music also has an effect in this regard. The impact of music on the evocative aspect is maintained in both groups (IL and MH), with and without musical training. Only the subjects with training in the MH group gave a slightly lower score than the subjects without training for the evocative value of images produced by algorithmic music, just as the subjects with training in the IL group showed greater emotional sensitivity than the subjects without training. The aforementioned results have been validated through the statistical procedure reported in Table 1.

3.2. Phase 2

Figure 2 shows how participants perceived the sonorous stimuli (music or non-music) that experimenters proposed to them.
The majority of the participants perceived what they had listened to as music, independently of musical training (a–d). However, when the composer was a computer, more participants identified what they had listened to as “non-music” (c,d). The same trend appears when the analyses are restricted to the subjects with (e–h) or without (i–l) musical training. Table 2 contains the p-values resulting from the chi-squared test (with α = 0.05) assessing whether the perception of the music (music or non-music) differed across the four combinations of human/computer composer/performer.
Based on the p-values, it is possible to state that, in some cases, the perceived effect of the music is affected by the composer. A reasonable hypothesis is that the algorithmic music is not “competent” enough and is therefore noticed by the participants. In particular, focusing on the combinations HH/CH and CC/HC, the p-values indicate that the perceived effect is different even though the performer is the same (a human in the former pair, and a computer in the latter pair). This result is consistent with the existing literature demonstrating that a computer performer can easily bias musical perception [38]. Focusing on the specific groups of participants (with and without musical training), music performed by a human being or by a computer is perceived by the participants in the same manner, regardless of whether the composer is a human or a computer. The only statistically significant difference among participants with musical training arises when focusing on the combinations HC/CH and HH/CC: when the composer and the performer are not the same entity, the participants perceived what they had heard differently (music or non-music). Similarly, for participants without musical training, the only statistically significant difference can be noticed when comparing CC to HH. Figure 3 shows that the immediate level of relaxation was comparable across participants in the HC/CC and HH/CH subgroups (a). Subjects without musical training (b) obtained the same relaxation level in the HH, CH, and CC subgroups and a lower relaxation level in the HC subgroup. Subjects with musical training (c) obtained a greater level of relaxation in the HC and CC subgroups and a lower and similar level of relaxation in the HH and CH subgroups. The resulting p-values (Table 3) suggest that no statistically significant difference emerged across the considered groups of participants.
Finally, Figure 4 summarizes how different groups of listeners distinguished music composed by a human from that composed by an algorithm. Comparing plots (a) and (b), it seems that it is not simple to identify the human composer, regardless of whether the performer is a computer or a human being; that is, the performer does not appear to play a significant role in recognizing that a piece of music was composed by a human. Moreover, comparing plots (c) and (d), in which the composer is a computer, it seems that the composer can be identified when the performer is also a computer. On the other hand, it is difficult for the participants to identify the composer when a human performs music composed by a computer.
The second row of plots presents a similar analysis but focuses only on the participants with musical training. Plots (e)–(h) strengthen the previous findings. In particular, plot (h) makes it clear that the participants can identify the computer as the composer when the music is performed by the computer itself. This analysis suggests that the performer is the key element, as is quite evident in the pairs of plots (e)/(f) and (g)/(h), where a different performer makes it simpler for the participants to identify the composer. A performer usually adds a layer of expressiveness on top of the composed music; to reduce potential differences in execution between the human and the machine (expressiveness, dynamics, tempo, etc.), the human performer was asked to play the pieces avoiding variations and seeking both regularity and uniformity in the execution. Still, for the listener, the performer has a relevant effect on the identification of the composer of the music.
The last row of plots in Figure 4 considers the participants without musical training. For this subset of participants, the four plots suggest that it is generally difficult for them to recognize the composer of the music, except for music composed by a human and performed by a computer (j). Additionally, it is interesting to compare plots (e) and (h) in the second row (participants with musical training) with plots (i) and (l) in the third row (participants without musical training). From this comparison, it seems that when the performer and the composer are the same entity (human or computer), it is simple for the participants with musical training to correctly identify the composer. On the other hand, participants without musical training still have some difficulties in identifying the composer. To validate the aforementioned qualitative findings, we performed a statistical analysis. Table 4 reports the p-values of the chi-squared test (with α = 0.05) comparing the perception of the composer as human or artificial among all four possible combinations of human/computer composer/performer.
As one can observe (Table 4), the p-values suggest that, for the vast majority of the considered composer/performer combinations, what really matters for the perception of the composer is the performer. In particular, when the performer is a human being, the composer is not distinguishable for the participants, independently of their musical training. The p-values also yield other findings: for the participants, independently of their musical training, there is a statistically significant difference between the combinations CC and HC; in other words, when the performer is a computer, participants identify the composer with a different level of confidence depending on who composed the music. Focusing on the whole group of participants and on those with musical training, there is a statistically significant difference between the combinations HH and CC; in other words, the composer is identified with a different level of confidence when both the performer and the composer are humans versus when both are computers.
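For illustration, pairwise comparisons of this kind (underlying Tables 2 and 4) can be sketched as follows: every pair of listening conditions is compared with a chi-squared test on a 2 x 2 table of yes/no counts. The counts below are hypothetical placeholders, not the study data.

# A sketch of how a pairwise p-value matrix like Tables 2 and 4 could be
# computed: each pair of listening conditions (HH, HC, CH, CC) is compared
# with a chi-squared test on a 2 x 2 table of yes/no counts (all hypothetical).
from itertools import combinations

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of (human, computer) answers to "who composed this piece?"
responses = {
    "HH": (30, 15),
    "HC": (27, 18),
    "CH": (26, 19),
    "CC": (14, 31),
}

for a, b in combinations(responses, 2):
    table = np.array([responses[a], responses[b]])
    _, p, _, _ = chi2_contingency(table)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{a} vs {b}: p = {p:.5f} ({verdict} at alpha = 0.05)")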

4. Discussion

4.1. Phase 1

From these results, it is possible to state that the relaxation level obtained through the music created by the algorithms is similar to the one achieved with the musical pieces individually selected by the participants. Participants belonging to the IL group seemed satisfied with the choices they made. On the other hand, participants belonging to the MH group rated the experience as less pleasant. This is probably due to the fact that, in this case, the music was proposed by the experimenter and unknown to the subjects. We think that enjoyment may also be related to the level of familiarity or predictability of the music. While this limitation might have partially biased the result, we expect these conditions to actually occur in real-life situations, where people could favor music that they know and enjoy. In this sense, repeated listening could also increase the level of enjoyment. Despite this, in the MH group, the enjoyment score reached a high level (6/10 on the VAS). This observation is in line with the existing literature showing that well-known music has a potential effect given its recognition and predictability characteristics, leading to a reward effect [31,39,40,41]. However, it is not possible to exclude that musical parameters and structures can also determine important effects. For this reason, algorithmic music (which tends to exclude cultural and recognition aspects, relying on peculiar structural and parametric aspects of music) can be an important tool for studying the links between music and the effects produced [27,29]. Concerning the possibility that IL and MH music can evoke images and emotions in the participants, the results of the study showed that algorithmically created music can also induce this process. This result makes the two types of music more comparable. Again, it is conceivable that individualized listening can produce more images and emotions related to personal memories or experiences. However, it is important to emphasize that MH music was created for therapeutic purposes and that, for this purpose, the evocative component is less important. In addition, a less evocative or emotional exposure could facilitate the process of relaxation. According to the p-values, it is clear that individualized listening and Melomics-Health produced different effects in terms of relaxation level, evoked emotions and images, and appreciation of the music in the considered groups of participants, independently of musical training. However, it is important to highlight that, similarly to individualized music listening, Melomics-Health can produce relaxation and evoke images and emotions. This part of the study analyzed the short-term effect of music listening only in healthy subjects. It will be important to plan future studies involving different types of pathological populations with increased and prolonged exposure to the musical stimulus. This could, on the one hand, create more familiarity with algorithmic music and, on the other, strengthen therapeutic changes at the psycho-physiological level.

4.2. Phase 2

Concerning the perceived effect (music or non-music), the results of the statistical analysis corroborate the qualitative findings: the effect is mostly related to the performer, while the fact that the music is composed by a human or by a computer does not significantly affect its perception. Moreover, the levels of relaxation are comparable independently of the composer, the performer, and musical training. This is the most important result of this study because it confirms the suitability of algorithmically created music for therapeutic purposes. Finally, we explored whether the music composed by a computer is distinguishable from that composed by a human being. Based on the results obtained, we can state the following: when music is performed by a human being, it is difficult to identify the computer as the composer. Similarly, it is difficult for participants without musical training to identify the human composer of music performed by a computer. When the composer and the performer are both computers, participants with musical training are more likely than participants without musical training to detect the composer. This finding suggests the suitability of Melomics-Health algorithms for creating music that is potentially not perceived as artificial by the patients (usually without musical training). Additionally, the statistical analysis confirms the qualitative results previously discussed. One of the limitations of the study, especially regarding Phase 2, was that younger people with adequate technological expertise probably adhered more easily to the experiment, while, conversely, older people with little computer knowledge may have had some difficulties in participating. A further limitation is that it was not technically possible to control whether the music was played completely while participants completed the online questionnaire. Nonetheless, two factors lead us to consider that participants actually followed the proposed procedure: first, several recruited participants did not complete the listening experience or the questionnaire (informally, some of the participants who completed the procedure reported that the overall experience was quite long, and potentially too long); second, the questionnaire included open questions about situations or images suggested or evoked by the listening experience, and most of the completed questionnaires reported very plausible answers, suggesting that the participants actually listened to the proposed pieces and devoted significant time and attention to the overall experience. Finally, from a musical point of view, the brevity of the music listening experience, especially in the second phase of the study, allowed us to verify only the immediate impact of music on relaxation. Future studies should verify the long-term effects of algorithmic music through randomized controlled clinical trials in different fields.

5. Conclusions

This study investigated the relaxation level (one of the most relevant aims of non-pharmacological therapies) induced by algorithmic music, comparing it with that obtained with conventional music. Subsequently, an extensive study was performed to capture important features characterizing algorithmic music. First, a study was performed to assess whether algorithmic music is perceived as music; a second step investigated how much the composer and performer impact the relaxation level produced through music listening; finally, the study analyzed whether music composed by a human can be distinguished from that composed by an algorithm. Based on the results obtained, it is possible to state that the relaxation levels obtained through conventional music and algorithmic music are comparable. Musical training does not significantly influence the relaxation level. Concerning the second part of the study, whether the participants perceived what they heard as music is mostly related to the performer of the piece, while whether the composer is a computer or a human is not relevant. Similarly, what makes music composed by an algorithm distinguishable from that composed by a human is, again, the performer. In other words, using music composed by a computer for therapeutic reasons is a viable option. In particular, algorithmic music does not reduce the relaxation level of the listeners when compared to the relaxation level obtained with music composed by a human; what really matters is the performer. It will be important to evaluate outcomes through more objective assessments (biological markers, neuroimaging, electrophysiological responses, etc.) in clinical settings, including other outcomes such as pain, stress, and anxiety. Moreover, for the second phase of this study, as participants were listening at home, many factors were not controlled (e.g., the environment in which the listening was performed, whether the listening was completed, and so on). For this reason, it would be important to further validate the results of this study in a fully controlled environment, thus guaranteeing the same listening conditions for all participants. Another aim of research in this field could be the creation of an adaptive algorithm that, starting from the therapist’s proposal, composes the most appropriate and tailored music for every need at all times, based on specific recommendations by the patient. This introduces the possibility of creating an adaptive mechanism such that the algorithm can follow individual patient changes throughout the therapeutic treatment. In this approach to music therapy, the therapist could be considered not only a professional trained in the use of music in the therapeutic relationship but also a professional who “designs” music according to the therapeutic needs of the patient by integrating musical, psychological, medical, and neuroscientific knowledge.

Author Contributions

Conceptualization, A.R., P.B., M.I., M.C., F.V. and L.M.; Data curation, A.R. and G.V.; Funding acquisition, M.C.; Investigation, G.V., M.I., M.C., S.M. and L.M.; Methodology, A.R., M.C., F.V. and L.M.; Software, L.M.; Validation, A.R. and M.C.; Visualization, L.M.; Writing—original draft, A.R., G.V., M.I., M.C., S.M., F.V. and L.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was partially supported by FCT, Portugal, through funding of the project GADgET (DSAIPA/DS/0022/2018), and the financial support from the Slovenian Research Agency (research core funding no. P5-0410).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Raglio, A. When music becomes music therapy. Psychiatry Clin. Neurosci. 2011, 65, 682–683. [Google Scholar] [CrossRef] [PubMed]
  2. Gold, C.; Solli, H.P.; Krüger, V.; Lie, S.A. Dose–response relationship in music therapy for people with serious mental disorders: Systematic review and meta-analysis. Clin. Psychol. Rev. 2009, 29, 193–207. [Google Scholar] [CrossRef] [PubMed]
  3. Raglio, A. More music, more health! J. Public Health 2020. [Google Scholar] [CrossRef]
  4. Raglio, A.; Oasi, O. Music and health: What interventions for what results? Front. Psychol. 2015, 6, 230. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Koelsch, S. A neuroscientific perspective on music therapy. Ann. N. Y. Acad. Sci. 2009, 1169, 374–384. [Google Scholar] [CrossRef] [PubMed]
  6. Raglio, A. Music and neurorehabilitation: Yes, we can! Funct. Neurol. 2018, 33, 173–174. [Google Scholar] [PubMed]
  7. Scholz, D.S.; Rohde, S.; Nikmaram, N.; Brückner, H.P.; Großbach, M.; Rollnik, J.D.; Altenmüller, E.O. Sonification of arm movements in stroke rehabilitation—A novel approach in neurologic music therapy. Front. Neurol. 2016, 7, 106. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Altenmüller, E.; Schlaug, G. Apollo’s gift: New aspects of neurologic music therapy. Prog. Brain Res. 2015, 217, 237–252. [Google Scholar]
  9. Thaut, M.; Hoemberg, V. Handbook of Neurologic Music Therapy; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  10. Sihvonen, A.J.; Särkämö, T.; Leo, V.; Tervaniemi, M.; Altenmüller, E.; Soinila, S. Music-based interventions in neurological rehabilitation. Lancet Neurol. 2017, 16, 648–660. [Google Scholar] [CrossRef] [Green Version]
  11. Simon, H.B. Music as medicine. Am. J. Med. 2015, 128, 208–210. [Google Scholar] [CrossRef]
  12. Chanda, M.L.; Levitin, D.J. The neurochemistry of music. Trends Cogn. Sci. 2013, 17, 179–193. [Google Scholar] [CrossRef] [Green Version]
  13. Boso, M.; Politi, P.; Barale, F.; Emanuele, E. Neurophysiology and neurobiology of the musical experience. Funct. Neurol. 2006, 21, 187. [Google Scholar]
  14. Lee, J.H. The effects of music on pain: A meta-analysis. J. Music. Ther. 2016, 53, 430–477. [Google Scholar] [CrossRef]
  15. Linnemann, A.; Kappert, M.B.; Fischer, S.; Doerr, J.M.; Strahler, J.; Nater, U.M. The effects of music listening on pain and stress in the daily life of patients with fibromyalgia syndrome. Front. Hum. Neurosci. 2015, 9, 434. [Google Scholar] [CrossRef] [Green Version]
  16. Bradt, J.; Dileo, C.; Magill, L.; Teague, A. Music interventions for improving psychological and physical outcomes in cancer patients. Cochrane Database Syst. Rev. 2016, 8. [Google Scholar] [CrossRef]
  17. Raglio, A. Therapeutic use of music in hospitals: A possible intervention model. Am. J. Med. Qual. 2019, 34, 618–620. [Google Scholar] [CrossRef]
  18. Hole, J.; Hirsch, M.; Ball, E.; Meads, C. Music as an aid for postoperative recovery in adults: A systematic review and meta-analysis. Lancet 2015, 386, 1659–1671. [Google Scholar] [CrossRef]
  19. Gaviola, M.A.; Inder, K.J.; Dilworth, S.; Holliday, E.G.; Higgins, I. Impact of individualised music listening intervention on persons with dementia: A systematic review of randomised controlled trials. Australas. J. Ageing 2020, 39, 10–20. [Google Scholar] [CrossRef]
  20. Raglio, A.; Attardo, L.; Gontero, G.; Rollino, S.; Groppo, E.; Granieri, E. Effects of music and music therapy on mood in neurological patients. World J. Psychiatry 2015, 5, 68. [Google Scholar] [CrossRef]
  21. Gerdner, L.A. Individualized music for dementia: Evolution and application of evidence-based protocol. World J. Psychiatry 2012, 2, 26. [Google Scholar] [CrossRef]
  22. Särkämö, T.; Tervaniemi, M.; Laitinen, S.; Forsblom, A.; Soinila, S.; Mikkonen, M.; Autti, T.; Silvennoinen, H.M.; Erkkilä, J.; Laine, M.; et al. Music listening enhances cognitive recovery and mood after middle cerebral artery stroke. Brain 2008, 131, 866–876. [Google Scholar] [CrossRef] [Green Version]
  23. Lopez-Rincon, O.; Starostenko, O.; Ayala-San Martín, G. Algoritmic music composition based on artificial intelligence: A survey. In Proceedings of the 2018 IEEE International Conference on Electronics, Communications and Computers (CONIELECOMP), Cholula, Mexico, 21–23 February 2018; pp. 187–193. [Google Scholar]
  24. Liu, C.H.; Ting, C.K. Computational intelligence in music composition: A survey. IEEE Trans. Emerg. Top. Comput. Intell. 2016, 1, 2–15. [Google Scholar] [CrossRef]
  25. De Prisco, R.; Zaccagnino, G.; Zaccagnino, R. EvoComposer: An Evolutionary Algorithm for 4-Voice Music Compositions. Evol. Comput. 2020, 28, 489–530. [Google Scholar] [CrossRef]
  26. Wiggins, G.A. Computer models of musical creativity: A review of computer models of musical creativity by David Cope. Lit. Linguist. Comput. 2008, 23, 109–116. [Google Scholar] [CrossRef] [Green Version]
  27. Raglio, A.; Bellandi, D.; Gianotti, M.; Zanacchi, E.; Gnesi, M.; Monti, M.; Montomoli, C.; Vico, F.; Imbriani, C.; Giorgi, I.; et al. Daily music listening to reduce work-related stress: A randomized controlled pilot trial. J. Public Health 2020, 42, e81–e87. [Google Scholar] [CrossRef]
  28. Raglio, A. What Happens When Algorithmic Music Meets Pain Medicine. Pain Med. 2020, 21, 3736–3737. [Google Scholar]
  29. Raglio, A.; Vico, F. Music and technology: The curative algorithm. Front. Psychol. 2017, 8, 2055. [Google Scholar] [CrossRef] [Green Version]
  30. Requena, G.; Sánchez, C.; Corzo-Higueras, J.L.; Reyes-Alvarado, S.; Rivas-Ruiz, F.; Vico, F.; Raglio, A. Melomics music medicine (M3) to lessen pain perception during pediatric prick test procedure. Pediatr. Allergy Immunol. 2014, 25, 721–724. [Google Scholar] [CrossRef] [PubMed]
  31. Zatorre, R.J. Why Do We Love Music? In Cerebrum: The Dana Forum on Brain Science; Dana Foundation: New York, NY, USA, 2018; Volume 2018. [Google Scholar]
  32. Holbrook, M.B.; Schindler, R.M. Some exploratory findings on the development of musical tastes. J. Consum. Res. 1989, 16, 119–124. [Google Scholar] [CrossRef]
  33. Way, S.F.; Gil, S.; Anderson, I.; Clauset, A. Environmental changes and the dynamics of musical identity. In Proceedings of the International AAAI Conference on Web and Social Media, Munich, Germany, 11–14 June 2019; Volume 13, pp. 527–536. [Google Scholar]
  34. Martin-Saavedra, J.S.; Vergara-Mendez, L.D.; Pradilla, I.; Velez-van Meerbeke, A.; Talero-Gutierrez, C. Standardizing music characteristics for the management of pain: A systematic review and meta-analysis of clinical trials. Complement. Ther. Med. 2018, 41, 81–89. [Google Scholar] [CrossRef]
  35. Guétin, S.; Giniès, P.; Siou, D.K.A.; Picot, M.C.; Pommié, C.; Guldner, E.; Gosp, A.M.; Ostyn, K.; Coudeyre, E.; Touchon, J. The effects of music intervention in the management of chronic pain: A single-blind, randomized, controlled trial. Clin. J. Pain 2012, 28, 329–337. [Google Scholar] [CrossRef] [PubMed]
  36. Raglio, A.; Imbriani, M.; Imbriani, C.; Baiardi, P.; Manzoni, S.; Gianotti, M.; Castelli, M.; Vanneschi, L.; Vico, F.; Manzoni, L. Machine learning techniques to predict the effectiveness of music therapy: A randomized controlled trial. Comput. Methods Programs Biomed. 2020, 185, 105160. [Google Scholar] [CrossRef] [PubMed]
  37. Sheskin, D.J. Handbook of Parametric and Nonparametric Statistical Procedures; CRC Press: Boca Raton, FL, USA, 2020. [Google Scholar]
  38. Kirke, A.; Miranda, E.R. A survey of computer systems for expressive music performance. ACM Comput. Surv. CSUR 2009, 42, 1–41. [Google Scholar] [CrossRef]
  39. Blood, A.J.; Zatorre, R.J. Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc. Natl. Acad. Sci. USA 2001, 98, 11818–11823. [Google Scholar] [CrossRef] [Green Version]
  40. Salimpoor, V.N.; Benovoy, M.; Larcher, K.; Dagher, A.; Zatorre, R.J. Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nat. Neurosci. 2011, 14, 257–262. [Google Scholar] [CrossRef]
  41. Salimpoor, V.N.; van den Bosch, I.; Kovacevic, N.; McIntosh, A.R.; Dagher, A.; Zatorre, R.J. Interactions between the nucleus accumbens and auditory cortices predict music reward value. Science 2013, 340, 216–219. [Google Scholar] [CrossRef]
Figure 1. Points of variation in relaxation level (a) and enjoying the music (b), evoked images (c) and evoked emotional levels (d) assessed by the Visual Analogue Scale (VAS) in subjects recruited for the study (all, without and with training) in both groups (Individualized Music Listening Group = IL and Melomics-Health Group = MH). On each box, the central mark is the median, the edges of the box are the 25th and 75th percentiles, and the whiskers extend to the most extreme data points that are not considered outliers.
Figure 2. Number of participants in the considered groups: whole group (a–d), with musical training (e–h), and without musical training (i–l), that perceived what they heard as music or non-music. HH = music composed and performed by a human; HC = music composed by a human and performed by a machine; CH = music composed by a machine and performed by a human; CC = music composed and performed by a machine; w/ = with; w/o = without.
Figure 3. Boxplots summarizing the relaxation level of the different groups of participants (whole group, with and without musical training). In each box, the central mark is the median, the edges of the box are the 25th and 75th percentiles, and the whiskers extend to the most extreme data points that are not considered outliers. HH = music composed and performed by a human; HC = music composed by a human and performed by a machine; CH = music composed by a machine and performed by a human; CC = music composed and performed by a machine. w/ = with; w/o = without. (a) refers to all the participants; (b) refers to the participants without musical training; (c) refers to the participants with musical training.
Figure 4. Number of participants in the considered groups (whole group, with musical training, and without musical training) who identified the composer of the music they listened to as Human or Computer. HH = music composed and performed by a human; HC = music composed by a human and performed by a machine; CH = music composed by a machine and performed by a human; CC = music composed and performed by a machine; w/ = with; w/o = without.
Table 1. Results of a two-tailed Mann–Whitney U-test comparing the Individualized Listening group with the Melomics-Health group: All Subjects, Subjects with musical training and Subjects without musical training.
                     Relaxation Levels   Liked the Music   Evoked Images   Evoked Emotions
All subjects         1.71 × 10^-10       2.01 × 10^-30     2.48 × 10^-12   3.68 × 10^-19
Musical training     2.07 × 10^-5        1.80 × 10^-12     1.19 × 10^-4    1.07 × 10^-6
No musical training  1.02 × 10^-6        3.34 × 10^-20     3.51 × 10^-10   4.06 × 10^-15
Table 2. Chi-squared test (with α = 0.05) assessing whether the perception of the music (music or non-music) was different for all the possible combinations of human/computer performer/composer. HH = music composed and performed by a human; HC = music composed by a human and performed by a machine; CH = music composed by a machine and performed by a human; CC = music composed and performed by a machine.
All Subjects
       HH        HC        CH        CC
HH     1         0.28313   0.001     0.0003
HC     0.28313   1         0.05217   0.02241
CH     0.001     0.05217   1         0.79334
CC     0.0003    0.02241   0.79334   1

With musical training
       HH        HC        CH        CC
HH     1         0.78885   0.00284   0.00909
HC     0.78885   1         0.03044   0.07029
CH     0.00284   0.03044   1         0.95809
CC     0.00909   0.07029   0.95809   1

Without musical training
       HH        HC        CH        CC
HH     1         0.48585   0.1659    0.02745
HC     0.48585   1         0.68641   0.21269
CH     0.1659    0.68641   1         0.53559
CC     0.02745   0.21269   0.53559   1
Table 3. Two-tailed Mann–Whitney U test (with α = 0.05) comparing the relaxation level achieved and considering the combinations of performer/composer (HH, HC, CH, and CC) on the entire population and the subgroups with and without musical training. HH = music composed and performed by a human; HC = music composed by a human and performed by a machine; CH = music composed by a machine and performed by a human; CC = music composed and performed by a machine.
All Subjects
       HH        HC        CH        CC
HH     1         0.28313   0.001     0.0003
HC     0.28313   1         0.05217   0.02241
CH     0.001     0.05217   1         0.79334
CC     0.0003    0.02241   0.79334   1

With musical training
       HH        HC        CH        CC
HH     1         0.78885   0.00284   0.00909
HC     0.78885   1         0.03044   0.07029
CH     0.00284   0.03044   1         0.95809
CC     0.00909   0.07029   0.95809   1

Without musical training
       HH        HC        CH        CC
HH     1         0.48585   0.1659    0.02745
HC     0.48585   1         0.68641   0.21269
CH     0.1659    0.68641   1         0.53559
CC     0.02745   0.21269   0.53559   1
Table 4. Chi-squared test (with α = 0.05) comparing the perception of the composer as human or artificial among all the possible combinations of human/computer performer/composer. HH = music composed and performed by a human; HC = music composed by a human and performed by a machine; CH = music composed by a machine and performed by a human; CC = music composed and performed by a machine.
All Subjects
       HH        HC        CH        CC
HH     1         0.64537   0.29525   0.00528
HC     0.64537   1         0.11102   0.00111
CH     0.29525   0.11102   1         0.11378
CC     0.00528   0.00111   0.11378   1

With musical training
       HH        HC        CH        CC
HH     1         0.62849   0.14344   0.0012
HC     0.62849   1         0.50469   0.0153
CH     0.14344   0.50469   1         0.08687
CC     0.0012    0.0153    0.08687   1

Without musical training
       HH        HC        CH        CC
HH     1         0.16093   0.8744    0.66243
HC     0.16093   1         0.21389   0.04366
CH     0.8744    0.21389   1         0.64532
CC     0.66243   0.04366   0.64532   1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
