Search Results (33)

Search Parameters:
Keywords = sad music

16 pages, 643 KiB  
Article
Cross-Cultural Biases of Emotion Perception in Music
by Marjorie G. Li, Kirk N. Olsen and William Forde Thompson
Brain Sci. 2025, 15(5), 477; https://doi.org/10.3390/brainsci15050477 - 29 Apr 2025
Cited by 1 | Viewed by 1750
Abstract
Objectives: Emotion perception in music is shaped by cultural background, yet the extent of cultural biases remains unclear. This study investigated how Western listeners perceive emotion in music across cultures, focusing on the accuracy and intensity of emotion recognition and the musical features that predict emotion perception. Methods: White-European (Western) listeners from the UK, USA, New Zealand, and Australia (N = 100) listened to 48 ten-second excerpts of Western classical and Chinese traditional bowed-string music that were validated by experts to convey happiness, sadness, agitation, and calmness. After each excerpt, participants rated the familiarity, enjoyment, and perceived intensity of the four emotions. Musical features were computationally extracted for regression analyses. Results: Western listeners experienced Western classical music as more familiar and enjoyable than Chinese music. Happiness and sadness were recognised more accurately in Western classical music, whereas agitation was more accurately identified in Chinese music. The perceived intensity of happiness and sadness was greater for Western classical music; conversely, the perceived intensity of agitation was greater for Chinese music. Furthermore, emotion perception was influenced by both culture-shared (e.g., timbre) and culture-specific (e.g., dynamics) musical features. Conclusions: Our findings reveal clear cultural biases in the way individuals perceive and classify music, highlighting how these biases are shaped by the interaction between cultural familiarity and the emotional and structural qualities of the music. We discuss the possibility that purposeful engagement with music from diverse cultural traditions—especially in educational and therapeutic settings—may cultivate intercultural empathy and an appreciation of the values and aesthetics of other cultures. Full article
(This article belongs to the Special Issue Advances in Emotion Processing and Cognitive Neuropsychology)

24 pages, 992 KiB  
Systematic Review
Enhancing Emotional Intelligence in Autism Spectrum Disorder Through Intervention: A Systematic Review
by Laura García-García, Manuel Martí-Vilar, Sergio Hidalgo-Fuentes and Javier Cabedo-Peris
Eur. J. Investig. Health Psychol. Educ. 2025, 15(3), 33; https://doi.org/10.3390/ejihpe15030033 - 10 Mar 2025
Viewed by 2922
Abstract
Limitations in some emotional characteristics that are conceptualized in the definition of emotional intelligence can be seen among people with autism spectrum disorder. The main objective of this study is the analysis of the effectiveness of interventions directed to enhance emotional recognition and emotional regulation among this specific population. A systematic review was carried out in databases such as Psycinfo, WoS, SCOPUS, and PubMed, identifying a total of 572 articles, of which 29 met the inclusion criteria. The total sample included 1061 participants, mainly children aged between 4 and 13 years. The analyzed interventions focused on improving emotional recognition, with significant results in the identification of emotions such as happiness, sadness, and anger, although some showed limitations in the duration of these effects. The most used programs included training in facial recognition, virtual reality, and the use of new technologies such as robots. These showed improvements in both emotional recognition and social skills. Other types of interventions such as music therapy or the use of drama techniques were also implemented. However, a gender bias and lack of consistency between results from different cultures were observed. The conclusions indicate that, although the interventions reviewed seem effective, more research is needed to maximize their impact on the ASD population. Full article

23 pages, 2556 KiB  
Article
Investigation of Deficits in Auditory Emotional Content Recognition by Adult Cochlear Implant Users through the Study of Electroencephalographic Gamma and Alpha Asymmetry and Alexithymia Assessment
by Giulia Cartocci, Bianca Maria Serena Inguscio, Andrea Giorgi, Dario Rossi, Walter Di Nardo, Tiziana Di Cesare, Carlo Antonio Leone, Rosa Grassia, Francesco Galletti, Francesco Ciodaro, Cosimo Galletti, Roberto Albera, Andrea Canale and Fabio Babiloni
Brain Sci. 2024, 14(9), 927; https://doi.org/10.3390/brainsci14090927 - 17 Sep 2024
Cited by 3 | Viewed by 1647
Abstract
Background/Objectives: Given the importance of emotion recognition for communication purposes, and the impairment for such skill in CI users despite impressive language performances, the aim of the present study was to investigate the neural correlates of emotion recognition skills, apart from language, in adult unilateral CI (UCI) users during a music in noise (happy/sad) recognition task. Furthermore, asymmetry was investigated through electroencephalographic (EEG) rhythm, given the traditional concept of hemispheric lateralization for emotional processing, and the intrinsic asymmetry due to the clinical UCI condition. Methods: Twenty adult UCI users and eight normal hearing (NH) controls were recruited. EEG gamma and alpha band power was assessed as there is evidence of a relationship between gamma and emotional response and between alpha asymmetry and tendency to approach or withdraw from stimuli. The TAS-20 questionnaire (alexithymia) was completed by the participants. Results: The results showed no effect of background noise, while supporting that gamma activity related to emotion processing shows alterations in the UCI group compared to the NH group, and that these alterations are also modulated by the etiology of deafness. In particular, relative higher gamma activity in the CI side corresponds to positive processes, correlated with higher emotion recognition abilities, whereas gamma activity in the non-CI side may be related to positive processes inversely correlated with alexithymia and also inversely correlated with age; a correlation between TAS-20 scores and age was found only in the NH group. Conclusions: EEG gamma activity appears to be fundamental to the processing of the emotional aspect of music and also to the psychocognitive emotion-related component in adults with CI. Full article
(This article belongs to the Special Issue Recent Advances in Hearing Impairment)

16 pages, 963 KiB  
Article
“Beyond Quantum Music”—A Pioneering Art and Science Project as a Platform for Building New Instruments and Creating a New Musical Genre
by Sonja Lončar and Andrija Pavlović
Arts 2024, 13(4), 127; https://doi.org/10.3390/arts13040127 - 25 Jul 2024
Cited by 2 | Viewed by 2465 | Correction
Abstract
In this text, we discuss the “Beyond Quantum Music” project, which inspired pianists, composers, researchers, and innovators Sonja Lončar and Andrija Pavlović (LP Duo) to go beyond the boundaries of classical and avant-garde practices to create a new style in composition and performance on two unique DUALITY hybrid pianos that they invented and developed to create a new stage design for multimedia concert performances and establish a new musical genre as a platform for future musical expression. “Beyond Quantum Music” is a continuation of the groundbreaking art and science project “Quantum Music”, which began in 2015; we envisioned it as a long-term project. In order to build an experimental dialogue between music and quantum physics, we created the DUALITY Portable Hybrid Piano System. This innovative instrument was essential for expanding the current sound of the classical piano. As a result, new compositions and new piano sounds were produced using various synthesizers and sound samples derived from scientific experiments. The key place for this dialogue between music and science was the Delft University of Technology, the Netherlands, where Andrija Pavlović, as a Kavli artist in residence, and Sonja Lončar, as an expert, spent several months in 2022 collaborating with scientists to compose new music. Later on, we collaborated with the visual artist “Incredible Bob” to develop the idea for the multimedia concert “LP Duo plays Beyond Quantum Music” to be performed at various locations, including the Scientific Institute MedILS Split (Croatia), the Theater Hall JDP Belgrade (Serbia), the Congress Hall TU Delft (the Netherlands), and open-air concerts at the Kaleidoskop Festival (Novi Sad, Serbia) and Ars Electronica Festival in Linz (Austria). Full article
(This article belongs to the Special Issue Applied Musicology and Ethnomusicology)

12 pages, 269 KiB  
Article
Effects of Sadness and Fear on Moral Judgments in Public Emergency Events
by Mufan Zheng, Shiyao Qin and Junhua Zhao
Behav. Sci. 2024, 14(6), 468; https://doi.org/10.3390/bs14060468 - 31 May 2024
Cited by 1 | Viewed by 2013
Abstract
With the rapid development of society and the deteriorating natural environment, there has been an increase in public emergencies. This study aimed to explore how sadness and fear in the context of public emergencies influence moral judgments. This research first induced feelings of sadness and fear by using videos about public emergencies and music, and then used moral scenarios from the CNI model (C parameter: sensitivity to consequences; N parameter: sensitivity to norms; I parameter: general preference for inaction) to assess participants’ moral thinking. In Study 1, participants were divided into a sadness group and a neutral group, while in Study 2, participants were divided into a fear group and a neutral group. During the experiment, participants were exposed to different videos related to public emergencies to induce the corresponding emotions, and emotional music was continuously played throughout the entire experiment. Participants were then asked to answer questions requiring moral judgments. The results showed that based on the CNI model, sadness induced in the context of public emergencies significantly increased the C parameter, without affecting the N or I parameters. Fear increased the I parameter, without affecting the C or N parameters. That is, sadness and fear induced in the context of a public emergency can influence moral judgments. Specifically, sadness increases individuals’ sensitivity to consequences and fear increases the general preference for inaction in moral judgments. Full article
16 pages, 2437 KiB  
Article
Electrophysiological Correlates of Vocal Emotional Processing in Musicians and Non-Musicians
by Christine Nussbaum, Annett Schirmer and Stefan R. Schweinberger
Brain Sci. 2023, 13(11), 1563; https://doi.org/10.3390/brainsci13111563 - 7 Nov 2023
Cited by 2 | Viewed by 2321
Abstract
Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150–250 ms) and N400 (300–500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500–1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation. Full article
(This article belongs to the Special Issue The Role of Sounds and Music in Emotion and Cognition)

12 pages, 278 KiB  
Article
Unlocking the Beat: Dopamine and Eye Blink Response to Classical Music
by Leigh M. Riby, Sam K. Fenwick, Dimana Kardzhieva, Beth Allan and Deborah McGann
NeuroSci 2023, 4(2), 152-163; https://doi.org/10.3390/neurosci4020014 - 20 Jun 2023
Cited by 2 | Viewed by 7698
Abstract
The present study examined music-induced dopamine release, as measured by a proxy measure of spontaneous eye blinks. Specifically, we explored the effects of uplifting and sombre tones in different sections of Vivaldi’s Four Seasons to investigate the affective content of musical pieces within one composition. Seventeen participants listened to four concertos (Major modes: “Spring” and “Autumn”, Minor modes: “Summer” and “Winter”) and a silence condition while completing a three-stimulus odd-ball attention task. Electrooculograms were recorded from electrodes placed above and under the left eye. Self-reported arousal and music preference measures were also gathered during the testing session. In addition, the P3a Event-Related Potential (ERP) component was analysed as another potential index of dopamine function. Results revealed significant differences in the blink rates during music listening and silence, with the largest effect observed for the sad, melancholic “Winter” concerto. However, no significant correlation was found between blink rate and music preference or arousal. Furthermore, no reliable association was found between blink rate and the P3a ERP component, suggesting that these measures tap into different aspects of dopamine function. These findings contribute to understanding the link between dopamine and blink rate, particularly in response to classical music. Crucially, the study’s discovery that the “Winter” concerto, with its sorrowful tone, significantly increased the blink rate highlights the significance of sad music and perhaps the programmatic qualities of this concerto to induce a strong emotional response. Full article
19 pages, 413 KiB  
Article
Music, Pleasure, and Meaning: The Hedonic and Eudaimonic Motivations for Music (HEMM) Scale
by Merrick Powell, Kirk N. Olsen and William Forde Thompson
Int. J. Environ. Res. Public Health 2023, 20(6), 5157; https://doi.org/10.3390/ijerph20065157 - 15 Mar 2023
Cited by 4 | Viewed by 3710
Abstract
Many people listen to music that conveys challenging emotions such as sadness and anger, despite the commonly assumed purpose of media being to elicit pleasure. We propose that eudaimonic motivation, the desire to engage with aesthetic experiences to be challenged and facilitate meaningful experiences, can explain why people listen to music containing such emotions. However, it is unknown whether music containing violent themes can facilitate such meaningful experiences. In this investigation, three studies were conducted to determine the implications of eudaimonic and hedonic (pleasure-seeking) motivations for fans of music with violent themes. In Study 1, we developed and tested a new scale and showed that fans exhibit high levels of both types of motivation. Study 2 further validated the new scale and provided evidence that the two types of motivations are associated with different affective outcomes. Study 3 revealed that fans of violently themed music exhibited higher levels of eudaimonic motivation and lower levels of hedonic motivation than fans of non-violently themed music. Taken together, the findings support the notion that fans of music with violent themes are driven to engage with this music to be challenged and to pursue meaning, as well as to experience pleasure. Implications for fans’ well-being and future applications of the new measure are discussed. Full article
32 pages, 460 KiB  
Review
Affective Responses to Music: An Affective Science Perspective
by Federico Lauria
Philosophies 2023, 8(2), 16; https://doi.org/10.3390/philosophies8020016 - 23 Feb 2023
Cited by 2 | Viewed by 9888
Abstract
Music has strong emotional powers. How are we to understand affective responses to music? What does music teach us about emotions? Why are musical emotions important? Despite the rich literature in philosophy and the empirical sciences, particularly psychology and neuroscience, little attention has been paid to integrating these approaches. This extensive review aims to redress this imbalance and establish a mutual dialogue between philosophy and the empirical sciences by presenting the main philosophical puzzles from an affective science perspective. The chief problem is contagion. Sometimes, listeners perceive music as expressing some emotion and this elicits the same emotion in them. Contagion is perplexing because it collides with the leading theory of emotions as experiences of values. This article mostly revolves around the critical presentation of the philosophical solutions to this problem in light of recent developments in emotion theory and affective science. It also highlights practical issues, particularly the role of musical emotions in well-being and health, by tackling the paradox of sad music, i.e., the question of why people enjoy sad music. It thus bridges an important gap between theoretical and real-life issues as well as between philosophical and empirical investigations on affective responses to music. Full article
11 pages, 1104 KiB  
Article
The Effect of Sadness on Visual Artistic Creativity in Non-Artists
by Massimiliano Palmiero, Laura Piccardi, Marco Giancola, Raffaella Nori and Paola Guariglia
Brain Sci. 2023, 13(1), 149; https://doi.org/10.3390/brainsci13010149 - 15 Jan 2023
Cited by 2 | Viewed by 3585
Abstract
The study of the relationships between mood and creativity is long-standing. In this study, the effects of mood states on artistic creativity were investigated in ninety non-artist participants. Mood states were induced by instructing participants to listen to self-selected happy, sad, or neutral music for ten minutes. Then, all participants were asked to make two artistic drawings. To check for mood manipulation, the Profile of Mood States (POMS) was administered before and after listening to the self-selected music. After the mood induction, the negative group reported higher scores than the other two groups in the ‘depression’ subscale and lower scores than the other two groups in the ‘vigour’ subscale of the POMS; the positive mood group showed more vigour than the negative mood group. Yet, three independent judges assigned higher ratings of creativity and emotionality to the drawings produced by participants in the negative mood group than drawings produced by participants in the other two groups. These results confirmed that specific negative mood states (e.g., sadness) positively affect artistic creativity, probably because participants are more likely to engage in mood-repairing. Limitations and future research directions are presented. Full article
(This article belongs to the Special Issue The Role of Sounds and Music in Emotion and Cognition)

20 pages, 3432 KiB  
Article
EEG Emotion Recognition Applied to the Effect Analysis of Music on Emotion Changes in Psychological Healthcare
by Tie Hua Zhou, Wenlong Liang, Hangyu Liu, Ling Wang, Keun Ho Ryu and Kwang Woo Nam
Int. J. Environ. Res. Public Health 2023, 20(1), 378; https://doi.org/10.3390/ijerph20010378 - 26 Dec 2022
Cited by 10 | Viewed by 4389
Abstract
Music therapy is increasingly being used to promote physical health. Emotion semantic recognition is more objective and provides direct awareness of the real emotional state based on electroencephalogram (EEG) signals. Therefore, we proposed a music therapy method to carry out emotion semantic matching between the EEG signal and music audio signal, which can improve the reliability of emotional judgments, and, furthermore, deeply mine the potential influence correlations between music and emotions. Our proposed EER model (EEG-based Emotion Recognition Model) could identify 20 types of emotions based on 32 EEG channels, and the average recognition accuracy was above 90% and 80%, respectively. Our proposed music-based emotion classification model (MEC model) could classify eight typical emotion types of music based on nine music feature combinations, and the average classification accuracy was above 90%. In addition, the semantic mapping was analyzed according to the influence of different music types on emotional changes from different perspectives based on the two models, and the results showed that the joy type of music video could improve fear, disgust, mania, and trust emotions into surprise or intimacy emotions, while the sad type of music video could reduce intimacy to the fear emotion. Full article
(This article belongs to the Special Issue Disease Prediction, Machine Learning, and Healthcare)

18 pages, 2888 KiB  
Article
Movie Reviews Classification through Facial Image Recognition and Emotion Detection Using Machine Learning Methods
by Tehseen Mazhar, Muhammad Amir Malik, Muhammad Asgher Nadeem, Syed Agha Hassnain Mohsan, Inayatul Haq, Faten Khalid Karim and Samih M. Mostafa
Symmetry 2022, 14(12), 2607; https://doi.org/10.3390/sym14122607 - 9 Dec 2022
Cited by 18 | Viewed by 4495
Abstract
The critical component of HCI is face recognition technology. Emotional computing heavily relies on the identification of facial emotions. Applications for emotion-driven face animation and dynamic assessment are numerous (FER). Universities have started to support real-world face expression recognition research as a result. Short video clips are continually uploaded and shared online, building up a library of videos on various topics. The enormous amount of movie data appeals to system engineers and researchers of autonomous emotion mining and sentiment analysis. The main idea is that categorizing things may be done by looking at how individuals feel about specific issues. People might choose to have a basic or complex facial appearance. People worldwide continually express their feelings through their faces, whether they are happy, sad, or uncertain. An online user can visually express themselves through a video’s editing, music, and subtitles. Additionally, before the video data can be used, noise in the data must frequently be eliminated. Automatically figuring out how someone feels in a video is a challenging task that will only get harder over time. Therefore, this paper aims to show how facial recognition video analysis can be used to show how sentiment analysis can help with business growth and essential decision-making. To determine how people are affected by reviewers’ writing, we use a technique for deciding emotions in this analysis. The feelings in movies are assessed using machine learning algorithms to categorize them. A lightweight machine learning algorithm is proposed to help in Aspect-oriented emotion classification for movie reviews. Moreover, to analyze real and published datasets, experimental results are compared with different Machine Learning algorithms, i.e., Naive Bayes, Support Vector Machine, Random Forest, and CNN. The proposed approach obtained 84.72 accuracy and 79.24 sensitivity. Furthermore, the method has a specificity of 90.64 and a precision of 90.2. Thus, the proposed method significantly increases the accuracy and sensitivity of the emotion detection system from facial feature recognition. Our proposed algorithm has shown contribution to detect datasets of different emotions with symmetric characteristics and symmetrically-designed facial image recognition tasks. Full article

21 pages, 4046 KiB  
Article
Toward Affirmation of Recovery of Deeply Embedded Autobiographical Memory with Background Music and Identification of an EEG Biomarker in Combination with EDA Signal Using Wearable Sensors
by Rupak Kumar Das, Nabiha Zainab Imtiaz and Arshia Khan
Clin. Transl. Neurosci. 2022, 6(4), 26; https://doi.org/10.3390/ctn6040026 - 6 Dec 2022
Cited by 1 | Viewed by 3545
Abstract
There is no disputing the role that background music plays in memory recall. Music has the power to activate the brain and trigger deeply ingrained memories. For dementia patients, background music is a common therapy because of this. Previous studies used music to recall lyrics, series of words, and long- and short-term memories. In this research, electroencephalogram (EEG) and electrodermal activity (EDA) data are collected from 40 healthy participants using wearable sensors during nine music sessions (three happy, three sad, and three neutral). A post-study survey is given to all participants after each piece of music to know if they recalled any autobiographical memories. The main objective is to find an EEG biomarker using the collected qualitative and quantitative data for autobiographical memory recall. The study finds that for all four EEG channels, alpha power rises considerably (on average 16.2%) during the memory “recall” scenario (F3: p = 0.0066, F7: p = 0.0386, F4: p = 0.0023, and F8: p = 0.0288) compared to the “no-recall” situation. Beta power also increased significantly for two channels (F3: p = 0.0100 and F4: p = 0.0210) but not for others (F7: p = 0.6792 and F8: p = 0.0814). Additionally, the phasic standard deviation (p = 0.0260), phasic max (p = 0.0011), phasic energy (p = 0.0478), tonic min (p = 0.0092), tonic standard deviation (p = 0.0171), and phasic energy (p = 0.0478) are significantly different for the EDA signal. The authors conclude by interpreting increased alpha power (8–12 Hz) as a biomarker for autobiographical memory recall. Full article

18 pages, 279 KiB  
Article
Over-Generalizing, Under-Promising, and Over-Promising: Singing Sadness and Joy in the Church
by Daniel Jesse
Religions 2022, 13(12), 1172; https://doi.org/10.3390/rel13121172 - 1 Dec 2022
Cited by 3 | Viewed by 2857
Abstract
In this article, I examine the emotional content of songs sung in Christian churches. An analysis of the lyrical content of the songs that have been tracked by Christian Copyright Licensing International (CCLI) from 1988 to 2018, shows there is a definition of the Christian life that is set before the church and in turn sung by it. The word “joy” appears 37 times and the word “praise” is used 152 times in the 133 songs that comprise the contemporary praise and worship hymnody in the defined time period. In the same time frame, the word sad or any of its derivatives (sadly, sadness, etc.) never occurs in the group of songs that are being discussed. Nor is the word “sorrow” ever used. There are two conclusions that can be drawn from the lack of the use of the word sad. The first is that sadness is undervalued. The second conclusion is that the word “sad” is not a good song word, meaning that it is awkward to sing and fit in the rhythm or meter of a song. The first conclusion relates to the lexical value of a word and the second to the semantic value. To understand the emotional content of music, the texts which provide a lexical meaning need to be examined. Secondly, the semantic meaning, which is composed of the cultural connotations, needs to be considered. The first part, the lexical, is considered by looking at only the text. The second, the semantic, involves looking at how the words and music (both apart and together) conceal and reveal meanings that surpass the lexical level. Thus, the first part of the present work will look at the lyric’s words devoid of context while the second part of the essay will examine the fullness of the songs. As the semantic levels are explored, they will be brought together with the lyrics and the previous level and the question of whether there is an overpromising of joy in the songs will be answered. Full article
27 pages, 11431 KiB  
Article
“Found in Translation”: An Evolutionary Framework for Auditory–Visual Relationships
by Ana Rodrigues, Bruna Sousa, Amílcar Cardoso and Penousal Machado
Entropy 2022, 24(12), 1706; https://doi.org/10.3390/e24121706 - 22 Nov 2022
Viewed by 1810
Abstract
The development of computational artifacts to study cross-modal associations has been a growing research topic, as they allow new degrees of abstraction. In this context, we propose a novel approach to the computational exploration of relationships between music and abstract images, grounded by findings from cognitive sciences (emotion and perception). Due to the problem’s high-level nature, we rely on evolutionary programming techniques to evolve this audio–visual dialogue. To articulate the complexity of the problem, we develop a framework with four modules: (i) vocabulary set, (ii) music generator, (iii) image generator, and (iv) evolutionary engine. We test our approach by evolving a given music set to a corresponding set of images, steered by the expression of four emotions (angry, calm, happy, sad). Then, we perform preliminary user tests to evaluate if the user’s perception is consistent with the system’s expression. Results suggest an agreement between the user’s emotional perception of the music–image pairs and the system outcomes, favoring the integration of cognitive science knowledge. We also discuss the benefit of employing evolutionary strategies, such as genetic programming on multi-modal problems of a creative nature. Overall, this research contributes to a better understanding of the foundations of auditory–visual associations mediated by emotions and perception. Full article