Search Results (109)

Search Parameters:
Keywords = music stimuli

16 pages, 506 KiB  
Article
Exploring the Link Between Sound Quality Perception, Music Perception, Music Engagement, and Quality of Life in Cochlear Implant Recipients
by Ayşenur Karaman Demirel, Ahmet Alperen Akbulut, Ayşe Ayça Çiprut and Nilüfer Bal
Audiol. Res. 2025, 15(4), 94; https://doi.org/10.3390/audiolres15040094 - 2 Aug 2025
Abstract
Background/Objectives: This study investigated the association between cochlear implant (CI) users’ assessed perception of musical sound quality and their subjective music perception and music-related quality of life (QoL). The aim was to provide a comprehensive evaluation by integrating a relatively objective Turkish Multiple Stimulus with Hidden Reference and Anchor (TR-MUSHRA) test and a subjective music questionnaire. Methods: Thirty CI users and thirty normal-hearing (NH) adults were assessed. Perception of sound quality was measured using the TR-MUSHRA test. Subjective assessments were conducted with the Music-Related Quality of Life Questionnaire (MuRQoL). Results: TR-MUSHRA results showed that while NH participants rated all filtered stimuli as perceptually different from the original, CI users provided similar ratings for stimuli with adjacent high-pass filter settings, indicating less differentiation in perceived sound quality. On the MuRQoL, groups differed on the Frequency subscale but not the Importance subscale. Critically, no significant correlation was found between the TR-MUSHRA scores and the MuRQoL subscale scores in either group. Conclusions: The findings demonstrate that TR-MUSHRA is an effective tool for assessing perceived sound quality relatively objectively, but there is no relationship between perceiving sound quality differences and measures of self-reported musical engagement and its importance. Subjective music experience may represent different domains beyond the perception of sound quality. Therefore, successful auditory rehabilitation requires personalized strategies that consider the multifaceted nature of music perception beyond simple perceptual judgments.

24 pages, 4226 KiB  
Article
Digital Signal Processing of the Inharmonic Complex Tone
by Tatjana Miljković, Jelena Ćertić, Miloš Bjelić and Dragana Šumarac Pavlović
Appl. Sci. 2025, 15(15), 8293; https://doi.org/10.3390/app15158293 - 25 Jul 2025
Abstract
In this paper, a set of digital signal processing (DSP) procedures tailored for the analysis of complex musical tones with prominent inharmonicity is presented. These procedures are implemented within a MATLAB-based application and organized into three submodules. The application follows a structured DSP chain: basic signal manipulation; spectral content analysis; estimation of the inharmonicity coefficient and the number of prominent partials; design of a dedicated filter bank; signal decomposition into subchannels; subchannel analysis and envelope extraction; and, finally, recombination of the subchannels into a wideband signal. Each stage in the chain is described in detail, and the overall process is demonstrated through representative examples. The concept and the accompanying application are initially intended for rapid post-processing of recorded signals, offering a tool for enhanced signal annotation. Additionally, the built-in features for subchannel manipulation and recombination enable the preparation of stimuli for perceptual listening tests. The procedures have been tested on a set of recorded tones from various string instruments, including those with pronounced inharmonicity, such as the piano, harp, and harpsichord.
(This article belongs to the Special Issue Musical Acoustics and Sound Perception)
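The inharmonicity-estimation stage in a chain like the one above can be sketched briefly. The abstract does not give the paper's exact procedure; the sketch below assumes the standard stiff-string partial model f_n = n·f0·√(1 + B·n²) and fits the coefficient B by least squares from measured partial frequencies:

```python
import numpy as np

def partial_freqs(f0, B, n_partials):
    """Stiff-string partial frequencies: f_n = n * f0 * sqrt(1 + B * n^2)."""
    n = np.arange(1, n_partials + 1)
    return n * f0 * np.sqrt(1.0 + B * n**2)

def estimate_inharmonicity(f0, measured):
    """Least-squares estimate of B from measured partial frequencies.

    Rearranging the model gives (f_n / (n*f0))^2 - 1 = B * n^2, a line
    through the origin in n^2, so B = sum(x*y) / sum(x*x) with x = n^2.
    """
    n = np.arange(1, len(measured) + 1)
    y = (np.asarray(measured) / (n * f0)) ** 2 - 1.0
    x = (n**2).astype(float)
    return float(np.dot(x, y) / np.dot(x, x))

# Synthetic piano-like tone: A2 (110 Hz) with a hypothetical B = 4e-4
partials = partial_freqs(110.0, 4e-4, 12)
print(estimate_inharmonicity(110.0, partials))  # recovers B ≈ 4e-4
```

On real recordings the measured partials would come from the spectral-analysis stage (e.g., peak picking on an FFT), and the fit would be correspondingly noisy; the synthetic tone here only checks the algebra.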

13 pages, 3767 KiB  
Article
An Analysis of Audio Information Streaming in Georg Philipp Telemann’s Sonata in C Major for Recorder and Basso Continuo, Allegro (TWV 41:C2)
by Adam Rosiński
Arts 2025, 14(4), 76; https://doi.org/10.3390/arts14040076 - 14 Jul 2025
Abstract
This paper presents an analysis of G. P. Telemann’s Sonata in C Major for Recorder and Basso Continuo (TWV 41:C2, Allegro), with the aim of investigating the occurrence of perceptual streams. The presence of perceptual streams in musical works helps to organise the sound stimuli received by the listener in a specific manner. This enables each listener to perceive the piece in an individual and distinctive manner, granting primacy to selected sounds over others. Directing the listener’s attention to particular elements of the auditory image leads to the formation of specific mental representations. This, in turn, results in distinctive interpretations of the acoustic stimuli. All of these processes are explored and illustrated in this analysis.
(This article belongs to the Special Issue Sound, Space, and Creativity in Performing Arts)

28 pages, 19935 KiB  
Article
Effects of Violin Back Arch Height Variations on Auditory Perception
by Luca Jost, Mehmet Ercan Altinsoy and Hannes Vereecke
Acoustics 2025, 7(2), 27; https://doi.org/10.3390/acoustics7020027 - 14 May 2025
Abstract
One of the quintessential goals of musical instrument acoustics is to improve the perceived sound produced by, e.g., a violin. To achieve this, the connections between physical (mechanical and geometrical) properties and perceived sound output need to be understood. In this article, a single facet of this complex problem will be discussed using experimental results obtained for six violins of varying back arch height. This is the first investigation of its kind to focus on back arch height. It may serve to inform instrument makers and researchers alike about the variation in sound that can be achieved by varying this parameter. The test instruments were constructed using state-of-the-art methodology to best represent the theoretical case of changing back arch height on a single instrument. Three values of back arch height (12.1, 14.8 and 17.5 mm) were investigated. The subsequent perceptual tests consisted of a free sorting task in the playing situation and three two-alternative forced choice listening tests. The descriptors “round” and “warm” were found to be linked to back arch height. The trend was non-linear, meaning that both low- and high-arch height instruments were rated as possessing more of these descriptors than their medium-arch height counterparts. Additional results were obtained using stimuli created by hybrid synthesis. However, these could not be linked to those using real playing or recordings. The results of this study serve to inform violin makers about the relative importance of back arch height and its specific influence on sound output. The discussion of the applied methodology and interpretation of results may serve to inform researchers about important new directions in the field of musical instrument acoustics.

22 pages, 1588 KiB  
Article
An Eye-Tracking Study on Text Comprehension While Listening to Music: Preliminary Results
by Georgia Andreou and Maria Gkantaki
Appl. Sci. 2025, 15(7), 3939; https://doi.org/10.3390/app15073939 - 3 Apr 2025
Abstract
The aim of the present study was to examine the effect of background music on text comprehension using eye-tracking technology. Ten Greek undergraduate students read four texts under the following four reading conditions: preferred music, non-preferred music, café noise, and in silence. Eye movements were tracked to assess visual patterns, while reading performance and attitudes were also evaluated. The results showed that fixation measures remained stable across conditions, suggesting that early visual processing is not significantly influenced by auditory distractions. However, reading performance significantly declined under non-preferred music, highlighting its disruptive impact on cognitive processing. Participants also reported greater difficulty and fatigue in this condition, consistent with an increased cognitive load. In contrast, preferred music and silence were associated with enhanced understanding, confidence, and immersion; café noise also had a moderate but manageable effect on reading outcomes. These findings underscore the importance of tailoring reading environments to individual preferences in order to optimize reading performance and engagement. Future studies should focus on the effects of different musical attributes, such as tempo and genre, and use more complex reading tasks, in order to better understand how auditory stimuli interact with cognitive load and visual processing.
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)

34 pages, 740 KiB  
Systematic Review
Exploring the Intersection of ADHD and Music: A Systematic Review
by Phoebe Saville, Caitlin Kinney, Annie Heiderscheit and Hubertus Himmerich
Behav. Sci. 2025, 15(1), 65; https://doi.org/10.3390/bs15010065 - 13 Jan 2025
Abstract
Attention Deficit Hyperactivity Disorder (ADHD) is a highly prevalent neurodevelopmental disorder, affecting both children and adults, which often leads to significant difficulties with attention, impulsivity, and working memory. These challenges can impact various cognitive and perceptual domains, including music perception and performance. Despite these difficulties, individuals with ADHD frequently engage with music, and previous research has shown that music listening can serve as a means of increasing stimulation and self-regulation. Moreover, music therapy has been explored as a potential treatment option for individuals with ADHD. As there is a lack of integrative reviews on the interaction between ADHD and music, the present review aimed to fill the gap in research. Following PRISMA guidelines, a comprehensive literature search was conducted across PsychInfo (Ovid), PubMed, and Web of Science. A narrative synthesis was conducted on 20 eligible studies published between 1981 and 2023, involving 1170 participants, of whom 830 had ADHD or ADD. The review identified three main areas of research: (1) music performance and processing in individuals with ADHD, (2) the use of music listening as a source of stimulation for those with ADHD, and (3) music-based interventions aimed at mitigating ADHD symptoms. The analysis revealed that individuals with ADHD often experience unique challenges in musical tasks, particularly those related to timing, rhythm, and complex auditory stimuli perception, though these deficits did not extend to rhythmic improvisation and musical expression. Most studies indicated that music listening positively affects various domains for individuals with ADHD. Furthermore, most studies of music therapy found that it can generate significant benefits for individuals with ADHD. The strength of these findings, however, was limited by inconsistencies among the studies, such as variations in ADHD diagnosis, comorbidities, medication use, and gender. Despite these limitations, this review provides a valuable foundation for future research on the interaction between ADHD and music.
(This article belongs to the Special Issue Innovations in Music Based Interventions for Psychological Wellbeing)

17 pages, 1898 KiB  
Article
Musical Pitch Perception and Categorization in Listeners with No Musical Training Experience: Insights from Mandarin-Speaking Non-Musicians
by Jie Liang, Fen Zhang, Wenshu Liu, Zilong Li, Keke Yu, Yi Ding and Ruiming Wang
Behav. Sci. 2025, 15(1), 30; https://doi.org/10.3390/bs15010030 - 31 Dec 2024
Abstract
Pitch is a fundamental element in music. While most previous studies on musical pitch have focused on musicians, our understanding of musical pitch perception in non-musicians is still limited. This study aimed to explore how Mandarin-speaking listeners who did not receive musical training perceive and categorize musical pitch. Two experiments were conducted in the study. In Experiment 1, participants were asked to discriminate musical tone pairs with different intervals. The results showed that the closer together the tones were, the more difficult they were to distinguish. Among adjacent note pairs at major 2nd pitch distance, the A4–B4 pair was perceived as the easiest to differentiate, while the C4–D4 pair was found to be the most difficult. In Experiment 2, participants completed a tone discrimination and identification task with the C4–D4 and A4–B4 musical tone continua as stimuli. The results revealed that the C4–D4 tone continuum elicited stronger categorical perception than the A4–B4 continuum, although the C4–D4 pair was previously found to be more difficult to distinguish in Experiment 1, suggesting a complex interaction between pitch perception and categorization processing. Together, these two experiments revealed the cognitive mechanism underlying musical pitch perception in ordinary populations and provided insights into future musical pitch training strategies.
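A tone continuum of the kind used in Experiment 2 is typically built by stepping in equal cents between the endpoint notes. The abstract does not give the step count or reference tuning, so the sketch below assumes C4 = 261.63 Hz and nine stimuli spanning the 200-cent C4–D4 interval:

```python
import numpy as np

C4 = 261.63  # Hz (assumed equal-temperament reference; not stated in the abstract)

def continuum(f_low, cents_span, n_steps):
    """Frequencies in equal-cent steps from f_low across cents_span cents."""
    cents = np.linspace(0.0, cents_span, n_steps)
    return f_low * 2.0 ** (cents / 1200.0)

# 9 stimuli from C4 up a whole tone (200 cents) to ≈ D4
freqs = continuum(C4, 200.0, 9)
print(np.round(freqs, 2))  # first entry 261.63, last entry ≈ D4
```

Each frequency would then drive a tone synthesizer (or pitch-shifter) to produce the actual stimuli; the perceptual question is where along these equal physical steps the category boundary falls.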

34 pages, 2098 KiB  
Review
Physiological Entrainment: A Key Mind–Body Mechanism for Cognitive, Motor and Affective Functioning, and Well-Being
by Marco Barbaresi, Davide Nardo and Sabrina Fagioli
Brain Sci. 2025, 15(1), 3; https://doi.org/10.3390/brainsci15010003 - 24 Dec 2024
Abstract
Background: The human sensorimotor system can naturally synchronize with environmental rhythms, such as light pulses or sound beats. Several studies showed that different styles and tempos of music, or other rhythmic stimuli, have an impact on physiological rhythms, including electrocortical brain activity, heart rate, and motor coordination. Such synchronization, also known as the “entrainment effect”, has been identified as a crucial mechanism impacting cognitive, motor, and affective functioning. Objectives: This review examines theoretical and empirical contributions to the literature on entrainment, with a particular focus on the physiological mechanisms underlying this phenomenon and its role in cognitive, motor, and affective functions. We also address the inconsistent terminology used in the literature and evaluate the range of measurement approaches used to assess entrainment phenomena. Finally, we propose a definition of “physiological entrainment” that emphasizes its role as a fundamental mechanism that encompasses rhythmic interactions between the body and its environment, to support information processing across bodily systems and to sustain adaptive motor responses. Methods: We reviewed the recent literature through the lens of the “embodied cognition” framework, offering a unified perspective on the phenomenon of physiological entrainment. Results: Evidence from the current literature suggests that physiological entrainment produces measurable effects, especially on neural oscillations, heart rate variability, and motor synchronization. Eventually, such physiological changes can impact cognitive processing, affective functioning, and motor coordination. Conclusions: Physiological entrainment emerges as a fundamental mechanism underlying the mind–body connection. Entrainment-based interventions may be used to promote well-being by enhancing cognitive, motor, and affective functions, suggesting potential rehabilitative approaches to enhancing mental health.
(This article belongs to the Special Issue Exploring the Role of Music in Cognitive Processes)

24 pages, 9053 KiB  
Article
An Ensemble Deep Learning Approach for EEG-Based Emotion Recognition Using Multi-Class CSP
by Behzad Yousefipour, Vahid Rajabpour, Hamidreza Abdoljabbari, Sobhan Sheykhivand and Sebelan Danishvar
Biomimetics 2024, 9(12), 761; https://doi.org/10.3390/biomimetics9120761 - 14 Dec 2024
Abstract
In recent years, significant advancements have been made in the field of brain–computer interfaces (BCIs), particularly in the area of emotion recognition using EEG signals. The majority of earlier research in this field has missed the spatial–temporal characteristics of EEG signals, which are critical for accurate emotion recognition. In this study, a novel approach is presented for classifying emotions into three categories, positive, negative, and neutral, using a custom-collected dataset. The dataset used in this study was specifically collected for this purpose from 16 participants, comprising EEG recordings corresponding to the three emotional states induced by musical stimuli. A multi-class Common Spatial Pattern (MCCSP) technique was employed for the processing stage of the EEG signals. These processed signals were then fed into an ensemble model comprising three autoencoders with Convolutional Neural Network (CNN) layers. A classification accuracy of 99.44 ± 0.39% for the three emotional classes was achieved by the proposed method. This performance surpasses previous studies, demonstrating the effectiveness of the approach. The high accuracy indicates that the method could be a promising candidate for future BCI applications, providing a reliable means of emotion detection.
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)
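Multi-class CSP can be formulated in several ways; the paper's exact MCCSP variant is not given in the abstract. A common one-vs-rest formulation solves, for each class, a generalized eigenproblem between that class's covariance and the total covariance, keeping the filters with the largest eigenvalues. A minimal sketch under that assumption:

```python
import numpy as np
from scipy.linalg import eigh

def class_cov(trials):
    """Average trace-normalized spatial covariance over trials (channels x samples)."""
    covs = [X @ X.T / np.trace(X @ X.T) for X in trials]
    return np.mean(covs, axis=0)

def mccsp_filters(trials_by_class, n_filters=2):
    """One-vs-rest multi-class CSP.

    For each class k, solve C_k w = lambda * C_total w and keep the
    eigenvectors with the largest eigenvalues: projections along them have
    maximal variance for class k relative to all classes combined.
    """
    covs = [class_cov(t) for t in trials_by_class]
    c_total = np.sum(covs, axis=0)
    filters = []
    for c_k in covs:
        _, vecs = eigh(c_k, c_total)       # eigenvalues ascending
        filters.append(vecs[:, -n_filters:].T)
    return np.vstack(filters)              # (n_classes * n_filters, n_channels)

def log_var_features(W, X):
    """Log of normalized variance per filtered channel: the usual CSP feature."""
    v = np.var(W @ X, axis=1)
    return np.log(v / v.sum())
```

Features computed this way per trial would then feed the downstream classifier (here, the autoencoder/CNN ensemble); many pipelines also band-pass filter the EEG before the covariance step.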

18 pages, 5749 KiB  
Article
Multivariantism of Auditory Perceptions as a Significant Element of the Auditory Scene Analysis Concept
by Adam Rosiński
Arts 2024, 13(6), 180; https://doi.org/10.3390/arts13060180 - 9 Dec 2024
Abstract
The concept of auditory scene analysis, popularized in scientific experiments by A. S. Bregman, the primary architect of the perceptual streaming theory, and his research team, along with more recent analyses by subsequent researchers, highlights a specific scientific gap that has not been thoroughly explored in previous studies. This article seeks to expand on this concept by introducing the author’s observation of the multivariant nature of auditory perception. This notion suggests that listeners focusing on different components of an auditory image (such as a musical piece) may perceive the same sounds but interpret them as distinct sound structures. Notably, even the same listener may perceive various structures (different mental figures) when re-listening to the same piece, depending on which musical elements they focus on. The thesis of multivariantism was examined and confirmed through the analysis of selected classical music pieces, providing concrete evidence of different interpretations of the same sound stimuli. To enhance clarity and understanding, the introduction to multivariantism was supplemented with graphic examples from the visual arts, which were then related to musical art through score excerpts from the works of composers such as C. Saint-Saëns, F. Liszt, and F. Mendelssohn Bartholdy.
(This article belongs to the Special Issue Applied Musicology and Ethnomusicology)

25 pages, 609 KiB  
Article
Emotion-Driven Music and IoT Devices for Collaborative Exer-Games
by Pedro Álvarez, Jorge García de Quirós and Javier Fabra
Appl. Sci. 2024, 14(22), 10251; https://doi.org/10.3390/app142210251 - 7 Nov 2024
Abstract
Exer-games are interactive experiences in which participants engage in physical exercises to achieve specific goals. Some of these games have a collaborative nature, wherein the actions and achievements of one participant produce immediate effects on the experiences of others. Music serves as a stimulus that can be integrated into these games to influence players’ emotions and, consequently, their actions. In this paper, a framework of music services designed to enhance collaborative exer-games is presented. These services provide the necessary functionality to generate personalized musical stimuli that regulate players’ affective states, induce changes in their physical performance, and improve the game experience. The solution requires determining the emotions that each song may evoke in players. These emotions are considered when recommending the songs that are used as part of the stimuli. Personalization seeds based on players’ listening histories are also integrated in the recommendations in order to foster the effects of those stimuli. Emotions and seeds are computed from the information available in Spotify data services, one of the most popular commercial music providers. Two small-scale experiments present promising preliminary results on how the players’ emotional responses match the affective information included in the musical elements of the solution. The added value of these affective services is that they are integrated into an ecosystem of Internet of Things (IoT) devices and cloud computing resources to support the development of a new generation of emotion-based exer-games.
(This article belongs to the Special Issue Recent Advances in Information Retrieval and Recommendation Systems)

15 pages, 3317 KiB  
Article
Musicianship Modulates Cortical Effects of Attention on Processing Musical Triads
by Jessica MacLean, Elizabeth Drobny, Rose Rizzi and Gavin M. Bidelman
Brain Sci. 2024, 14(11), 1079; https://doi.org/10.3390/brainsci14111079 - 29 Oct 2024
Abstract
Background: Many studies have demonstrated the benefits of long-term music training (i.e., musicianship) on the neural processing of sound, including simple tones and speech. However, the effects of musicianship on the encoding of simultaneously presented pitches, in the form of complex musical chords, are less well established. Presumably, musicians’ stronger familiarity and active experience with tonal music might enhance harmonic pitch representations, perhaps in an attention-dependent manner. Additionally, attention might influence chordal encoding differently across the auditory system. To this end, we explored the effects of long-term music training and attention on the processing of musical chords at the brainstem and cortical levels. Method: Young adult participants were separated into musician and nonmusician groups based on the extent of formal music training. While recording EEG, listeners heard isolated musical triads that differed only in the chordal third: major, minor, and detuned (4% sharper third from major). Participants were asked to correctly identify chords via key press during active stimulus blocks and watched a silent movie during passive blocks. We logged behavioral identification accuracy and reaction times and calculated information transfer based on the behavioral chord confusion patterns. EEG data were analyzed separately to distinguish between cortical (event-related potential, ERP) and subcortical (frequency-following response, FFR) evoked responses. Results: We found musicians were (expectedly) more accurate, though not faster, than nonmusicians in chordal identification. For subcortical FFRs, responses showed stimulus chord effects but no group differences. However, for cortical ERPs, whereas musicians displayed P2 (~150 ms) responses that were invariant to attention, nonmusicians displayed reduced P2 during passive listening. Listeners’ degree of behavioral information transfer (i.e., success in distinguishing chords) was also better in musicians and correlated with their neural differentiation of chords in the ERPs (but not high-frequency FFRs). Conclusions: Our preliminary results suggest long-term music training strengthens even the passive cortical processing of musical sounds, supporting more automated brain processing of musical chords with less reliance on attention. Our results also suggest that the degree to which listeners can behaviorally distinguish chordal triads is directly related to their neural specificity to musical sounds primarily at cortical rather than subcortical levels. FFR attention effects were likely not observed due to the use of high-frequency stimuli (>220 Hz), which restrict FFRs to brainstem sources.
(This article belongs to the Section Sensory and Motor Neuroscience)

22 pages, 1401 KiB  
Review
From Sound to Movement: Mapping the Neural Mechanisms of Auditory–Motor Entrainment and Synchronization
by Marija Pranjić, Thenille Braun Janzen, Nikolina Vukšić and Michael Thaut
Brain Sci. 2024, 14(11), 1063; https://doi.org/10.3390/brainsci14111063 - 25 Oct 2024
Abstract
Background: Humans exhibit a remarkable ability to synchronize their actions with external auditory stimuli through a process called auditory–motor or rhythmic entrainment. Positive effects of rhythmic entrainment have been demonstrated in adults with neurological movement disorders, yet the neural substrates supporting the transformation of auditory input into timed rhythmic motor outputs are not fully understood. We aimed to systematically map and synthesize the research on the neural correlates of auditory–motor entrainment and synchronization. Methods: Following the PRISMA-ScR guidelines for scoping reviews, a systematic search was conducted across four databases (MEDLINE, Embase, PsycInfo, and Scopus) for articles published between 2013 and 2023. Results: From an initial return of 1430 records, 22 studies met the inclusion criteria and were synthesized based on the neuroimaging modality. There is converging evidence that auditory–motor synchronization engages bilateral cortical and subcortical networks, including the supplementary motor area, premotor cortex, ventrolateral prefrontal cortex, basal ganglia, and cerebellum. Specifically, the supplementary motor area and the basal ganglia are essential for beat-based timing and internally guided rhythmic movements, while the cerebellum plays an important role in tracking and processing complex rhythmic patterns and synchronizing to the external beat. Self-paced tapping is associated with additional activations in the prefrontal cortex and the basal ganglia, suggesting that tapping in the absence of auditory cues requires more neural resources. Lastly, existing studies indicate that movement rate and the type of music further modulate the EEG power in the alpha and beta frequency bands. Conclusions: These findings are discussed in the context of clinical implications and rhythm-based therapies.
(This article belongs to the Special Issue Focusing on the Rhythmic Interventions in Movement Disorders)

11 pages, 240 KiB  
Review
Recent Developments in the Non-Pharmacological Management of Children’s Behavior Based on Distraction Techniques: A Concise Review
by Jieyi Chen, Ke Deng, Dikuan Yu, Cancan Fan, Limin Liu, Haijing Gu, Fang Huang and Yongbiao Huo
Healthcare 2024, 12(19), 1940; https://doi.org/10.3390/healthcare12191940 - 27 Sep 2024
Abstract
Oral diseases and conditions affect children’s oral health and negatively influence their overall health. Early detection and intervention are important in mitigating these negative consequences. However, dental fear and anxiety (DFA) regarding dental procedures often hinder children from seeking necessary dental care. Non-pharmacological behavior management strategies, such as distraction techniques, are commonly adopted to manage children’s behaviors. Distraction techniques have been developed rapidly in recent years and are widely accepted by both health professionals and parents due to their noninvasive and low-cost nature. This concise review aims to summarize current distraction techniques applied during dental treatments, especially for children. The most commonly reported techniques for children are audio distraction, audio-visual distraction, tactile distraction, olfactory distraction, and gustatory distraction. Audio distraction techniques involving music and storytelling help children relax. Audio-visual distraction techniques help to divert children’s attention from the dental treatment. Tactile stimuli can reduce the transmission of pain signals. Olfactory stimuli can help children feel comfortable and relaxed. Gustatory distraction involving sweet substances can create a positive environment. These distraction techniques effectively reduce DFA in children and improve their satisfaction with dental procedures. As technology continues to develop, further research is needed to provide more robust, evidence-based guidance for dentists using distraction techniques.
(This article belongs to the Special Issue Prevention and Management of Oral Diseases Among Children)
23 pages, 2556 KiB  
Article
Investigation of Deficits in Auditory Emotional Content Recognition by Adult Cochlear Implant Users through the Study of Electroencephalographic Gamma and Alpha Asymmetry and Alexithymia Assessment
by Giulia Cartocci, Bianca Maria Serena Inguscio, Andrea Giorgi, Dario Rossi, Walter Di Nardo, Tiziana Di Cesare, Carlo Antonio Leone, Rosa Grassia, Francesco Galletti, Francesco Ciodaro, Cosimo Galletti, Roberto Albera, Andrea Canale and Fabio Babiloni
Brain Sci. 2024, 14(9), 927; https://doi.org/10.3390/brainsci14090927 - 17 Sep 2024
Cited by 3 | Viewed by 1652
Abstract
Background/Objectives: Given the importance of emotion recognition for communication, and the impairment of this skill in CI users despite their impressive language performance, the aim of the present study was to investigate the neural correlates of emotion recognition skills, apart from language, in adult unilateral CI (UCI) users during a music-in-noise (happy/sad) recognition task. Furthermore, asymmetry was investigated through electroencephalographic (EEG) rhythms, given the traditional concept of hemispheric lateralization for emotional processing and the intrinsic asymmetry of the clinical UCI condition. Methods: Twenty adult UCI users and eight normal-hearing (NH) controls were recruited. EEG gamma- and alpha-band power was assessed, as there is evidence of a relationship between gamma activity and emotional response, and between alpha asymmetry and the tendency to approach or withdraw from stimuli. Participants also completed the TAS-20 alexithymia questionnaire. Results: The results showed no effect of background noise, while supporting that emotion-related gamma activity is altered in the UCI group compared to the NH group, and that these alterations are further modulated by the etiology of deafness. In particular, relatively higher gamma activity on the CI side corresponded to positive processes and correlated with higher emotion recognition abilities, whereas gamma activity on the non-CI side appeared related to positive processes inversely correlated with alexithymia and with age; a correlation between TAS-20 scores and age was found only in the NH group. Conclusions: EEG gamma activity appears fundamental both to the processing of the emotional aspects of music and to the psychocognitive, emotion-related component in adults with CI. Full article
(This article belongs to the Special Issue Recent Advances in Hearing Impairment)