Search Results (13)

Search Parameters:
Keywords = auditory emotional arousal

22 pages, 1195 KiB  
Article
Harmonizing Sight and Sound: The Impact of Auditory Emotional Arousal, Visual Variation, and Their Congruence on Consumer Engagement in Short Video Marketing
by Qiang Yang, Yudan Wang, Qin Wang, Yushi Jiang and Jingpeng Li
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 69; https://doi.org/10.3390/jtaer20020069 - 8 Apr 2025
Cited by 1 | Viewed by 3234
Abstract
Social media influencers strategically design the auditory and visual features of short videos to enhance consumer engagement. Among these, auditory emotional arousal and visual variation play crucial roles, yet their interactive effects remain underexplored. Drawing on multichannel integration theory, this study applies multimodal machine learning to analyze 12,842 short videos from Douyin, integrating text analysis, sound recognition, and image processing. The results reveal an inverted U-shaped relationship between auditory emotional arousal and consumer engagement, where moderate arousal maximizes interaction while excessively high or low arousal reduces engagement. Visual variation, however, exhibits a positive linear effect, with greater variation driving higher engagement. Notably, audiovisual congruence significantly enhances engagement, as high alignment between arousal and visual variation optimizes consumer information processing. These findings advance short video marketing research by uncovering the multisensory interplay in consumer engagement. They also provide practical guidance for influencers in optimizing voice and visual design strategies to enhance content effectiveness. Full article
(This article belongs to the Topic Interactive Marketing in the Digital Era)
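The inverted U reported above is the kind of curvilinear effect usually tested by adding a quadratic arousal term to an engagement regression. A minimal sketch of that test (not the authors' pipeline; the variable names and synthetic data are assumptions):

```python
# Hypothetical illustration: regress engagement on arousal and arousal^2.
# A significantly negative coefficient on the squared term is the standard
# signature of an inverted U-shaped relationship.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
arousal = rng.uniform(0.0, 1.0, 1000)                 # per-video arousal score
engagement = -4.0 * (arousal - 0.5) ** 2 + rng.normal(0.0, 0.2, 1000)

X = sm.add_constant(np.column_stack([arousal, arousal ** 2]))
fit = sm.OLS(engagement, X).fit()
print(fit.params)      # coefficient on arousal**2 should come out negative
print(fit.pvalues)
```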

22 pages, 873 KiB  
Article
EEG-Based Music Emotion Prediction Using Supervised Feature Extraction for MIDI Generation
by Oscar Gomez-Morales, Hernan Perez-Nastar, Andrés Marino Álvarez-Meza, Héctor Torres-Cardona and Germán Castellanos-Dominguez
Sensors 2025, 25(5), 1471; https://doi.org/10.3390/s25051471 - 27 Feb 2025
Cited by 2 | Viewed by 1629
Abstract
Advancements in music emotion prediction are driving AI-driven algorithmic composition, enabling the generation of complex melodies. However, bridging neural and auditory domains remains challenging due to the semantic gap between brain-derived low-level features and high-level musical concepts, making alignment computationally demanding. This study proposes a deep learning framework for generating MIDI sequences aligned with labeled emotion predictions through supervised feature extraction from neural and auditory domains. EEGNet is employed to process neural data, while an autoencoder-based piano algorithm handles auditory data. To address modality heterogeneity, Centered Kernel Alignment is incorporated to enhance the separation of emotional states. Furthermore, regression between feature domains is applied to reduce intra-subject variability in extracted Electroencephalography (EEG) patterns, followed by the clustering of latent auditory representations into denser partitions to improve MIDI reconstruction quality. Using musical metrics, evaluation on real-world data shows that the proposed approach improves emotion classification (namely, between arousal and valence) and the system’s ability to produce MIDI sequences that better preserve temporal alignment, tonal consistency, and structural integrity. Subject-specific analysis reveals that subjects with stronger imagery paradigms produced higher-quality MIDI outputs, as their neural patterns aligned more closely with the training data. In contrast, subjects with weaker performance exhibited auditory data that were less consistent. Full article
(This article belongs to the Special Issue Advances in ECG/EEG Monitoring)
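Centered Kernel Alignment, credited above with sharpening the separation of emotional states across the neural and auditory modalities, has a compact linear form. A generic sketch of linear CKA (the textbook formulation, not the paper's implementation; the feature matrices are placeholders):

```python
# Linear Centered Kernel Alignment (CKA) between two feature matrices
# (samples x features). Values near 1 indicate highly aligned representations.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    X = X - X.mean(axis=0)                  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Placeholder stand-ins for EEG-derived and latent auditory features
eeg_feats = np.random.randn(200, 64)
audio_feats = np.random.randn(200, 32)
print(linear_cka(eeg_feats, audio_feats))
```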

31 pages, 14510 KiB  
Article
Combined Effects of the Visual–Acoustic Environment on Public Response in Urban Forests
by Yuxiang Lan, Yuanyang Tang, Zhanhua Liu, Xiong Yao, Zhipeng Zhu, Fan Liu, Junyi Li, Jianwen Dong and Ye Chen
Forests 2024, 15(5), 858; https://doi.org/10.3390/f15050858 - 14 May 2024
Cited by 3 | Viewed by 1890 | Correction
Abstract
Urban forests are increasingly recognized as vital components of urban ecosystems, offering a plethora of physiological and psychological benefits to residents. However, the existing research has often focused on single dimensions of either visual or auditory experiences, overlooking the combined impact of audio–visual environments on public health and well-being. This study addresses this gap by examining the effects of composite audio–visual settings within three distinct types of urban forests in Fuzhou, China: mountain, mountain–water, and waterfront forests. Through field surveys and quantitative analysis at 24 sample sites, we assessed visual landscape elements, soundscapes, physiological indicators (e.g., heart rate, skin conductance), and psychological responses (e.g., spiritual vitality, stress relief, emotional arousal, attention recovery) among 77 participants. Our findings reveal that different forest types exert varying influences on visitors’ physiology and psychology, with waterfront forests generally promoting relaxation and mountain–water forests inducing a higher degree of tension. Specific audio–visual elements, such as plants, water scenes, and natural sounds, positively affect psychological restoration, whereas urban noise is associated with increased physiological stress indicators. In conclusion, the integrated effects of audio–visual landscapes significantly shape the multisensory experiences of the public in urban forests, underscoring the importance of optimal design that incorporates natural elements to create restorative environments beneficial to the health and well-being of urban residents. These insights not only contribute to the scientific understanding of urban forest impacts but also inform the design and management of urban green spaces for enhanced public health outcomes. Full article

15 pages, 2332 KiB  
Article
Enhancing Dimensional Emotion Recognition from Speech through Modulation-Filtered Cochleagram and Parallel Attention Recurrent Network
by Zhichao Peng, Hua Zeng, Yongwei Li, Yegang Du and Jianwu Dang
Electronics 2023, 12(22), 4620; https://doi.org/10.3390/electronics12224620 - 12 Nov 2023
Viewed by 1659
Abstract
Dimensional emotion can better describe rich and fine-grained emotional states than categorical emotion. In the realm of human–robot interaction, the ability to continuously recognize dimensional emotions from speech empowers robots to capture the temporal dynamics of a speaker’s emotional state and adjust their interaction strategies in real-time. In this study, we present an approach to enhance dimensional emotion recognition through modulation-filtered cochleagram and parallel attention recurrent neural network (PA-net). Firstly, the multi-resolution modulation-filtered cochleagram is derived from speech signals through auditory signal processing. Subsequently, the PA-net is employed to establish multi-temporal dependencies from diverse scales of features, enabling the tracking of the dynamic variations in dimensional emotion within auditory modulation sequences. The results obtained from experiments conducted on the RECOLA dataset demonstrate that, at the feature level, the modulation-filtered cochleagram surpasses other assessed features in its efficacy to forecast valence and arousal. Particularly noteworthy is its pronounced superiority in scenarios characterized by a high signal-to-noise ratio. At the model level, the PA-net attains the highest predictive performance for both valence and arousal, clearly outperforming alternative regression models. Furthermore, the experiments carried out on the SEWA dataset demonstrate the substantial enhancements brought about by the proposed method in valence and arousal prediction. These results collectively highlight the potency and effectiveness of our approach in advancing the field of dimensional speech emotion recognition. Full article
(This article belongs to the Special Issue Applied AI in Emotion Recognition)
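Valence and arousal prediction on RECOLA-style corpora is conventionally scored with the concordance correlation coefficient (CCC), which penalizes both decorrelation and bias; the abstract does not name the paper's exact metric, so the sketch below is generic rather than the authors' evaluation code:

```python
# Concordance correlation coefficient (CCC) between a gold-standard trace and
# a predicted trace; the standard score for dimensional emotion regression.
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2.0 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

gold = np.sin(np.linspace(0.0, 6.0, 300))           # stand-in arousal trace
pred = gold + np.random.normal(0.0, 0.1, 300)       # stand-in model output
print(ccc(gold, pred))                              # close to 1 = good fit
```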

11 pages, 651 KiB  
Review
The Psychoneurobiology of Insomnia: Hyperarousal and REM Sleep Instability
by Dieter Riemann, Raphael J. Dressle, Fee Benz, Laura Palagini and Bernd Feige
Clin. Transl. Neurosci. 2023, 7(4), 30; https://doi.org/10.3390/ctn7040030 - 28 Sep 2023
Cited by 10 | Viewed by 9353
Abstract
Chronic insomnia (insomnia disorder—ID) afflicts up to 10% of the adult population, increases with age, and affects more women than men. ID is associated with significant daytime impairments and an increased risk of developing major somatic and mental disorders, especially depression and anxiety disorders. Almost all insomnia models assume persistent hyperarousal on the cognitive, emotional, cortical and physiological levels as a central pathophysiological component. The marked discrepancy between only minor objective alterations in polysomnographic parameters of sleep continuity and the profound subjective impairment in patients with insomnia is still puzzling. We and others have proposed that alterations in the microstructure of sleep, especially in REM sleep (REM sleep instability), may explain this discrepancy and be at the core of the experience of fragmented and poor sleep in ID. The REM sleep instability concept is based on evidence showing REM time to be related to subjective wake time in insomnia, as well as on increased micro- and macro-arousals during REM sleep in insomnia patients compared to good-sleeper controls. Our own work showed that ID patients awoken from REM sleep reported the perception of having been awake more frequently than good sleepers did, and also reported more negative ideations. Continuous measurement of event-related potentials throughout the whole night demonstrated reduced P2 amplitudes specifically during phasic REM sleep in insomnia, which points to a mismatch negativity in ID reflecting automatic change detection in the auditory system and a concomitant orienting response. REM sleep represents the most highly aroused brain state during sleep and thus might be particularly prone to fragmentation in individuals with persistent hyperarousal, resulting in a more conscious-like wake experience that reflects the pre-sleep concerns of patients with ID, i.e., worries about poor sleep and its consequences, thus leading to the subjective over-estimation of nocturnal waking time and the experience of disrupted and non-restorative sleep. Chronic REM sleep instability might also lead to dysfunction in a ventral emotional neural network, including limbic and paralimbic areas activated during REM sleep. Along with a postulated weakened functioning of a dorsal executive neural network, including frontal and prefrontal areas, this might contribute to emotional and cognitive alterations and an elevated risk of developing depression and anxiety. Full article
(This article belongs to the Special Issue Sleep–Wake Medicine)

15 pages, 2491 KiB  
Review
The Role of Sound in Livestock Farming—Selected Aspects
by Katarzyna Olczak, Weronika Penar, Jacek Nowicki, Angelika Magiera and Czesław Klocek
Animals 2023, 13(14), 2307; https://doi.org/10.3390/ani13142307 - 14 Jul 2023
Cited by 18 | Viewed by 4777
Abstract
To ensure the optimal living conditions of farm animals, it is essential to understand how their senses work and the way in which they perceive their environment. Most animals have a different hearing range compared to humans; thus, some aversive sounds may go unnoticed by caretakers. The auditory pathways may act through the nervous system on the cardiovascular, gastrointestinal, endocrine, and immune systems. Therefore, noise may lead to behavioral activation (arousal), pain, and sleep disorders. Sounds on farms may be produced by machines, humans, or animals themselves. It is worth noting that vocalization may be very informative to the breeder as it is an expression of an emotional state. This information can be highly beneficial in maintaining a high level of livestock welfare. Moreover, understanding learning theory, conditioning, and the potential benefits of certain sounds can guide the deliberate use of techniques in farm management to reduce the aversiveness of certain events. Full article
(This article belongs to the Section Animal Welfare)

11 pages, 305 KiB  
Article
Vowel Length Expands Perceptual and Emotional Evaluations in Written Japanese Sound-Symbolic Words
by Zihan Lin, Nan Wang, Yan Yan and Toshimune Kambara
Behav. Sci. 2021, 11(6), 90; https://doi.org/10.3390/bs11060090 - 21 Jun 2021
Cited by 6 | Viewed by 3176
Abstract
In this study, we examined whether vowel length affected the perceptual and emotional evaluations of Japanese sound-symbolic words. The perceptual and emotional features of Japanese sound-symbolic words, which included short and long vowels, were evaluated by 209 native Japanese speakers. The results showed that subjective evaluations of familiarity, visual imageability, auditory imageability, tactile imageability, emotional valence, arousal, and length were significantly higher for sound-symbolic words with long vowels than for those with short vowels. Additionally, the subjective evaluation of speed was significantly higher for written Japanese sound-symbolic words with short vowels than for those with long vowels. The current findings suggest that vowel length in written Japanese sound-symbolic words increases subjective perceptual and emotional evaluations of those words. Full article
7 pages, 229 KiB  
Brief Report
Association of Anxiety Awareness with Risk Factors of Cognitive Decline in MCI
by Ariela Gigi and Merav Papirovitz
Brain Sci. 2021, 11(2), 135; https://doi.org/10.3390/brainsci11020135 - 21 Jan 2021
Cited by 9 | Viewed by 2608
Abstract
Studies demonstrate that anxiety is a risk factor for cognitive decline. However, existing findings on anxiety incidence among people with mild cognitive impairment (MCI) mostly come from studies of general anxiety evaluated by subjective questionnaires. This study aimed to compare subjective and objective anxiety (using autonomic measures), both as a general tendency and as a reaction to memory examination. Participants were 50 adults aged 59–82 years who were divided into an MCI group and a control group according to their objective cognitive performance in the Rey Auditory Verbal Learning Test. Objective changes in the anxiety response were measured by skin conductivity during all tests and questionnaires. To evaluate subjective anxiety as a reaction to memory loss, a “state-anxiety” questionnaire was administered immediately after completing the memory tests. Our main finding was that although both healthy and memory-impaired participants exhibited elevations in physiological arousal during the memory test, only healthy participants reported enhanced state anxiety (p = 0.025). Our results suggest that people with MCI have impaired awareness of their emotional state. Full article
(This article belongs to the Special Issue The Risk Factors of Neurocognitive Dysfunction)
14 pages, 2661 KiB  
Article
An Auditory-Perceptual and Pupillometric Study of Vocal Strain and Listening Effort in Adductor Spasmodic Dysphonia
by Mojgan Farahani, Vijay Parsa, Björn Herrmann, Mason Kadem, Ingrid Johnsrude and Philip C. Doyle
Appl. Sci. 2020, 10(17), 5907; https://doi.org/10.3390/app10175907 - 26 Aug 2020
Cited by 6 | Viewed by 2740
Abstract
This study evaluated ratings of vocal strain and perceived listening effort by normal hearing participants while listening to speech samples produced by talkers with adductor spasmodic dysphonia (AdSD). In addition, objective listening effort was measured through concurrent pupillometry to determine whether listening to disordered voices changed arousal as a result of emotional state or cognitive load. Recordings of the second sentence of the “Rainbow Passage” produced by talkers with varying degrees of AdSD served as speech stimuli. Twenty naïve young adult listeners perceptually evaluated these stimuli on the dimensions of vocal strain and listening effort using two separate visual analogue scales. While making the auditory-perceptual judgments, listeners’ pupil characteristics were objectively measured in synchrony with the presentation of each voice stimulus. Data analyses revealed moderate-to-high inter- and intra-rater reliability. A significant positive correlation was found between the ratings of vocal strain and listening effort. In addition, listeners displayed greater peak pupil dilation (PPD) when listening to more strained and effortful voice samples. Findings from this study suggest that when combined with an auditory-perceptual task, non-volitional physiologic changes in pupil response may serve as an indicator of listening and cognitive effort or arousal. Full article
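Peak pupil dilation is typically computed by baseline-correcting each trial's pupil trace and taking the maximum, then correlating across stimuli with the perceptual ratings. A hypothetical sketch (the sampling rate, window lengths, and synthetic data are assumptions, not the study's parameters):

```python
# Baseline-correct each trial's pupil trace, extract peak pupil dilation (PPD),
# and correlate PPD with listening-effort ratings across stimuli.
import numpy as np
from scipy.stats import spearmanr

fs = 500                                       # assumed sampling rate (Hz)
pupil = np.random.randn(20, 5 * fs)            # 20 stimuli x 5 s of diameter
baseline = pupil[:, : fs // 2].mean(axis=1, keepdims=True)   # first 0.5 s
ppd = (pupil - baseline).max(axis=1)           # peak dilation per stimulus

effort = np.random.rand(20)                    # stand-in visual-analogue ratings
rho, p = spearmanr(ppd, effort)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```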

18 pages, 1454 KiB  
Article
Seeing a Face in a Crowd of Emotional Voices: Changes in Perception and Cortisol in Response to Emotional Information across the Senses
by Sarah C. Izen, Hannah E. Lapp, Daniel A. Harris, Richard G. Hunter and Vivian M. Ciaramitaro
Brain Sci. 2019, 9(8), 176; https://doi.org/10.3390/brainsci9080176 - 25 Jul 2019
Cited by 3 | Viewed by 4184
Abstract
One source of information we glean from everyday experience, which guides social interaction, is assessing the emotional state of others. Emotional state can be expressed through several modalities: body posture or movements, body odor, touch, facial expression, or the intonation in a voice. Much research has examined emotional processing within one sensory modality or the transfer of emotional processing from one modality to another. Yet, less is known regarding interactions across different modalities when perceiving emotions, despite our common experience of seeing emotion in a face while hearing the corresponding emotion in a voice. Our study examined if visual and auditory emotions of matched valence (congruent) conferred stronger perceptual and physiological effects compared to visual and auditory emotions of unmatched valence (incongruent). We quantified how exposure to emotional faces and/or voices altered perception using psychophysics and how it altered a physiological proxy for stress or arousal using salivary cortisol. While we found no significant advantage of congruent over incongruent emotions, we found that changes in cortisol were associated with perceptual changes. Following exposure to negative emotional content, larger decreases in cortisol, indicative of less stress, correlated with more positive perceptual after-effects, indicative of stronger biases to see neutral faces as happier. Full article
(This article belongs to the Special Issue Perceptual and Affective Mechanisms in Facial Expression Recognition)

17 pages, 1405 KiB  
Article
A Crucial Role of Attention in Lateralisation of Sound Processing?
by Martine Hausberger, Hugo Cousillas, Anaïke Meter, Genta Karino, Isabelle George, Alban Lemasson and Catherine Blois-Heulin
Symmetry 2019, 11(1), 48; https://doi.org/10.3390/sym11010048 - 3 Jan 2019
Cited by 12 | Viewed by 4090
Abstract
Studies on auditory laterality in vertebrates have revealed asymmetries in processing, particularly of species-specific signals, and suggest that each hemisphere may process different features according to their functional “value”. Processing of novel, intense, emotion-inducing, or finer individual features may require attention, and we hypothesised that the “functional pertinence” of the stimuli may modulate attentional processes and hence the lateralisation of sound processing. Behavioural measures in “(food) distracted” captive Campbell’s monkeys and electrophysiological recordings in anaesthetised (versus awake) European starlings were performed during the broadcast of auditory stimuli with different functional “saliences” (e.g., familiar/novel). In Campbell’s monkeys, only novel sounds elicited lateralised responses, with a right-hemisphere preference. Unfamiliar sounds elicited more head movements, reflecting enhanced attention, whereas familiar sounds (usual in the home environment) elicited few responses and thus might not be arousing enough to stimulate attention. In starlings, in field L, individual identity was processed more in the right hemisphere when awake, whereas under anaesthesia the left hemisphere was more involved in processing potentially socially meaningless sounds. These results suggest that the attention-getting property of stimuli may be an apt concept for explaining hemispheric auditory specialisation. An attention-based model may reconcile the existing hypotheses of right-hemisphere lateralisation based on arousal/intensity or on individual features. Full article
(This article belongs to the Special Issue Left Versus Right Asymmetries of Brain and Behaviour)

13 pages, 2414 KiB  
Article
Evaluation of Sheep Anticipatory Response to a Food Reward by Means of Functional Near-Infrared Spectroscopy
by Matteo Chincarini, Lina Qiu, Lorenzo Spinelli, Alessandro Torricelli, Michela Minero, Emanuela Dalla Costa, Massimo Mariscoli, Nicola Ferri, Melania Giammarco and Giorgio Vignola
Animals 2019, 9(1), 11; https://doi.org/10.3390/ani9010011 - 29 Dec 2018
Cited by 17 | Viewed by 5072
Abstract
Anticipatory behaviour towards an oncoming food reward can be triggered via classical conditioning, implies the activation of neural networks, and may serve to study the emotional state of animals. The aim of this study was to investigate how the anticipatory response to a food reward affects cerebral cortex activity in sheep. Eight ewes from the same flock were trained to associate a neutral auditory stimulus (water bubble) with the presence of a food reward (maize grains). Once conditioned, the sheep were trained to wait 15 s behind a gate before accessing a bucket with food (anticipation phase). For 6 days, the sheep underwent two sessions of six consecutive trials each. Behavioural reactions were filmed, and changes in cortical oxy- and deoxy-hemoglobin concentration ([ΔO2Hb] and [ΔHHb], respectively) following neuronal activation were recorded by functional near-infrared spectroscopy (fNIRS). Compared to baseline, during the anticipation phase the sheep increased their active behaviour, kept the head oriented towards the gate (Wilcoxon’s signed-rank test; p ≤ 0.001), and showed a more asymmetric ear posture (Wilcoxon’s signed-rank test; p ≤ 0.01), most likely reflecting a learnt association and increased arousal. Trial-averaged [ΔO2Hb] and [ΔHHb] showed cortical activation during the anticipation phase in almost every individual sheep (Student’s t-test; p ≤ 0.05). The sheep showed a greater response in the right hemisphere than in the left hemisphere, possibly indicating a negative affective state, such as frustration. The behavioural and cortical changes observed during anticipation of a food reward reflect a learnt association and increased arousal, but no clear emotional valence of the sheep’s subjective experience. Future work should take into consideration possible factors affecting the accuracy of the measurements, such as probe location and scalp vascularization. Full article
(This article belongs to the Section Animal Welfare)
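The paired baseline-versus-anticipation comparisons reported above map onto standard non-parametric and parametric tests. An illustrative sketch with placeholder numbers (scipy's Wilcoxon signed-rank test for the behavioural measures; a per-sheep one-sample t-test stands in for the trial-averaged [ΔO2Hb] analysis):

```python
# Paired behavioural comparison (baseline vs. anticipation) via Wilcoxon's
# signed-rank test, and per-animal cortical activation via one-sample t-test.
import numpy as np
from scipy.stats import wilcoxon, ttest_1samp

n_sheep = 8
orient_base = np.random.rand(n_sheep) * 5          # s oriented to gate, baseline
orient_antic = orient_base + 4.0 + np.random.rand(n_sheep)
stat, p = wilcoxon(orient_base, orient_antic)
print(f"Wilcoxon: W = {stat}, p = {p:.3f}")

d_o2hb = np.random.normal(0.5, 0.3, size=(n_sheep, 12))   # sheep x trials
for s in range(n_sheep):                           # activation vs. no change
    t, p = ttest_1samp(d_o2hb[s], 0.0)
    print(f"sheep {s}: t = {t:.2f}, p = {p:.3f}")
```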

10 pages, 1152 KiB  
Article
The Sound of Words Evokes Affective Brain Responses
by Arash Aryani, Chun-Ting Hsu and Arthur M. Jacobs
Brain Sci. 2018, 8(6), 94; https://doi.org/10.3390/brainsci8060094 - 23 May 2018
Cited by 23 | Viewed by 6481
Abstract
The long history of poetry and the arts, as well as recent empirical results, suggests that the way a word sounds (e.g., soft vs. harsh) can convey affective information related to emotional responses (e.g., pleasantness vs. harshness). However, the neural correlates of the affective potential of the sound of words remain unknown. In an fMRI study involving passive listening, we focused on the affective dimension of arousal and presented words organized into two discrete groups of sublexical (i.e., sound) arousal (high vs. low), while controlling for lexical (i.e., semantic) arousal. Words sounding highly arousing, compared to their low-arousing counterparts, resulted in an enhanced BOLD signal in the bilateral posterior insula, the right auditory and premotor cortex, and the right supramarginal gyrus. This finding provides the first evidence of the neural correlates of affectivity in the sound of words. Given the similarity of this neural network to that of nonverbal emotional expressions and affective prosody, our results support a unifying view that suggests a core neural network underlying any type of affective sound processing. Full article
