Search Results (92)

Search Parameters:
Keywords = style of music

15 pages, 415 KiB  
Article
Enhancing MusicGen with Prompt Tuning
by Hohyeon Shin, Jeonghyeon Im and Yunsick Sung
Appl. Sci. 2025, 15(15), 8504; https://doi.org/10.3390/app15158504 - 31 Jul 2025
Viewed by 213
Abstract
Generative AI has been gaining attention across various creative domains. In particular, MusicGen stands out as a representative approach capable of generating music based on text or audio inputs. However, it has limitations in producing high-quality outputs for specific genres and fully reflecting user intentions. This paper proposes a prompt tuning technique that effectively adjusts the output quality of MusicGen without modifying its original parameters and optimizes its ability to generate music tailored to specific genres and styles. Experiments were conducted to compare the performance of the traditional MusicGen with the proposed method and evaluate the quality of generated music using the Contrastive Language-Audio Pretraining (CLAP) and Kullback–Leibler Divergence (KLD) scoring approaches. The results demonstrated that the proposed method significantly improved the output quality and musical coherence, particularly for specific genres and styles. Compared with the traditional model, the CLAP score increased by 0.1270 and the KLD score by 0.00403 on average. The effectiveness of prompt tuning in optimizing MusicGen's performance validates the proposed method and highlights its potential for advancing generative AI-based music generation tools.
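
For readers who want to experiment, the following is a minimal sketch of the soft-prompt idea, not the authors' code: a small block of learnable prompt vectors is prepended to the text-conditioning embeddings of a frozen generator, and only those vectors are optimized. The stand-in Transformer encoder, all dimensions, and the placeholder loss are illustrative assumptions; the actual MusicGen pipeline and the CLAP/KLD evaluation are not reproduced here.

```python
# Soft prompt tuning around a frozen conditional generator (schematic).
# A stand-in encoder replaces MusicGen's frozen text conditioner; only the
# prompt embeddings receive gradients, mirroring the idea of steering output
# without touching pretrained weights. Sizes and loss are assumptions.
import torch
import torch.nn as nn

class SoftPromptedEncoder(nn.Module):
    def __init__(self, frozen_encoder: nn.Module, d_model: int, n_prompt: int = 8):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():          # freeze the pretrained part
            p.requires_grad = False
        # learnable "soft prompt" vectors prepended to every text embedding
        self.prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, seq, d_model) embeddings of the text condition
        batch = text_emb.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.encoder(torch.cat([prompt, text_emb], dim=1))

d_model = 64
frozen = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, 4, batch_first=True), num_layers=2)
model = SoftPromptedEncoder(frozen, d_model)
opt = torch.optim.Adam([model.prompt], lr=1e-3)      # only the prompt is optimized

text_emb = torch.randn(2, 10, d_model)               # dummy text-condition embeddings
target = torch.randn(2, 18, d_model)                 # dummy target representation
loss = nn.functional.mse_loss(model(text_emb), target)  # placeholder objective
loss.backward()
opt.step()
print(float(loss))
```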

24 pages, 6637 KiB  
Article
Style, Tradition, and Innovation in the Sacred Choral Music of Rhona Clarke
by Laura Sheils and Róisín Blunnie
Religions 2025, 16(8), 984; https://doi.org/10.3390/rel16080984 - 29 Jul 2025
Viewed by 675
Abstract
Sacred choral music continues to hold a significant place in contemporary concert settings, with historical and newly composed works featuring in today’s choral programmes. Contemporary choral composers have continued to engage with the longstanding tradition of setting sacred texts to music, bringing fresh interpretations through their innovative compositional techniques and fusion of styles. Irish composer Rhona Clarke’s (b. 1958) expansive choral oeuvre includes a wealth of both sacred and secular compositions but reveals a notable propensity for the setting of sacred texts in Latin. Her synthesis of archaic and contemporary techniques within her work demonstrates both the solemn and visceral aspects of these texts, as well as a clear nod to tradition. This article focuses on Clarke’s choral work O Vis Aeternitatis (2020), a setting of a text by the medieval musician and saint Hildegard of Bingen (c. 1150). Through critical score analysis, we investigate the piece’s melodic, harmonic, and textural frameworks; the influence of Hildegard’s original chant; and the use of extended vocal techniques and contrasting vocal timbres as we articulate core characteristics of Clarke’s compositional style and underline her foregrounding of the more visceral aspects of Hildegard’s words. Clarke’s fusion of creative practices from past and present spotlights moments of dramatic escalation and spiritual importance, and exhibits the composer’s distinctive compositional voice as she reimagines Hildegard’s text for the twenty-first century.
(This article belongs to the Special Issue Sacred Music: Creation, Interpretation, Experience)

29 pages, 21077 KiB  
Article
Precise Recognition of Gong-Che Score Characters Based on Deep Learning: Joint Optimization of YOLOv8m and SimAM/MSCAM
by Zhizhou He, Yuqian Zhang, Liumei Zhang and Yuanjiao Hu
Electronics 2025, 14(14), 2802; https://doi.org/10.3390/electronics14142802 - 11 Jul 2025
Viewed by 234
Abstract
In the field of music notation recognition, while recognition technology for common notation systems such as staff notation has become quite mature, recognition techniques for traditional Chinese notation systems such as guqin tablature (jianzipu) and Kunqu opera gongchepu remain relatively underdeveloped. Kunqu opera’s Gongche notation is an important carrier of China’s thousand-year musical culture, and its digital preservation and inheritance hold significant cultural and practical value. Addressing the unique characteristics of Gongche notation, this study moves beyond recognition technologies designed for Western staff notation: by constructing a deep learning model adapted to the morphology of Chinese character-style notation symbols, it provides technical support for an intelligent processing system for Chinese musical documents, thereby promoting the innovative development and inheritance of traditional music in the era of artificial intelligence. This paper constructs the LGRC2024 (Gong-che notation based on Lilu Qu Pu) dataset and employs data augmentation operations such as image translation, rotation, and noise processing to enhance its diversity. For recognition of Gong-che notation, the YOLOv8 model was adopted, and the performances of its lightweight (n) and medium-weight (m) versions were compared and analyzed; the superior-performing YOLOv8m was selected as the base model. To further improve performance, SimAM, Triplet Attention, and the Multi-scale Convolutional Attention Module (MSCAM) were introduced. The experimental results show that selecting YOLOv8m raised accuracy from 65.9% to 78.2%, and the attention-augmented YOLOv8m models achieved recognition accuracies of 80.4%, 81.8%, and 83.6%, respectively, with the MSCAM variant performing best in all aspects.
(This article belongs to the Special Issue New Trends in AI-Assisted Computer Vision)
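
SimAM, one of the attention modules the authors graft onto YOLOv8m, is parameter-free and can be reproduced from its published energy formula; the sketch below shows such a block plus a placeholder Ultralytics fine-tuning call. The dataset YAML name and hyperparameters are hypothetical, and wiring the module into YOLOv8's backbone (via a custom model YAML) is omitted.

```python
# Parameter-free SimAM attention (Yang et al., 2021): each activation is
# weighted by an energy score computed from its deviation from the channel
# mean. Shapes below are illustrative.
import torch
import torch.nn as nn

class SimAM(nn.Module):
    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)) ** 2
        v = d.sum(dim=(2, 3), keepdim=True) / n       # per-channel variance
        e_inv = d / (4 * (v + self.eps)) + 0.5        # inverse energy per unit
        return x * torch.sigmoid(e_inv)

print(SimAM()(torch.randn(1, 16, 32, 32)).shape)      # torch.Size([1, 16, 32, 32])

# Baseline fine-tuning on a Gong-che character dataset (hypothetical paths):
# from ultralytics import YOLO
# model = YOLO("yolov8m.pt")
# model.train(data="lgrc2024.yaml", epochs=100, imgsz=640)
```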

21 pages, 564 KiB  
Article
Sounding Identity: A Technical Analysis of Singing Styles in the Traditional Music of Sub-Saharan Africa
by Alfred Patrick Addaquay
Arts 2025, 14(3), 68; https://doi.org/10.3390/arts14030068 - 16 Jun 2025
Viewed by 963
Abstract
This article presents an in-depth examination of the technical and cultural dimensions of singing practices within the traditional music of sub-Saharan Africa. Utilizing an extensive body of theoretical and ethnomusicological research, comparative transcription, and culturally situated observation, it presents a comprehensive framework for understanding the significance of the human voice in various performance contexts. The study revolves around a tripartite model—auditory clarity, ambiguous auditory clarity, and occlusion—that delineates the varying levels of audibility of vocal lines amidst intricate instrumental arrangements. The article examines case studies from West, East, and Southern Africa, highlighting essential vocal techniques such as straight tone, nasal resonance, ululation, and controlled (or delayed) vibrato. It underscores the complex interplay between language, melody, and rhythm in tonal languages. The analysis delves into the influence of sound reinforcement technologies on vocal presence and cultural authenticity, positing that PA systems have the capacity to either enhance or disrupt the equilibrium between traditional aesthetics and modern requirements. This research is firmly rooted in a blend of African and Western theoretical frameworks, drawing upon the contributions of Nketia, Agawu, Chernoff, and Kubik. It proposes a nuanced methodology that integrates technical analysis with cultural significance. It posits that singing in African traditional music transcends mere expression, serving as a vessel for collective memory, identity, and the socio-musical framework. The article concludes by emphasizing the enduring strength and flexibility of African vocal traditions, illustrating their capacity for evolution while preserving fundamental communicative and artistic values.
22 pages, 3451 KiB  
Article
LSTM-Based Music Generation Technologies
by Yi-Jen Mon
Computers 2025, 14(6), 229; https://doi.org/10.3390/computers14060229 - 11 Jun 2025
Viewed by 643
Abstract
In deep learning, Long Short-Term Memory (LSTM) is a well-established and widely used approach for music generation. Nevertheless, creating musical compositions that match the quality of those created by human composers remains a formidable challenge. The intricate nature of musical components, including pitch, intensity, rhythm, notes, chords, and more, necessitates the extraction of these elements from extensive datasets, making the preliminary work arduous. To address this, we employed various tools to deconstruct the musical structure, conduct step-by-step learning, and then reconstruct it. This article primarily presents the techniques for dissecting musical components in the preliminary phase. Subsequently, it introduces the use of LSTM to build a deep learning network architecture, enabling the learning of musical features and temporal coherence. Finally, through in-depth analysis and comparative studies, this paper validates the efficacy of the proposed research methodology, demonstrating its ability to capture musical coherence and generate compositions with similar styles.
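
The following is a minimal sketch of an LSTM next-note model of the kind described. Tokenization (for example, extracting note events from MIDI with a library such as music21), the vocabulary size, and the shapes are illustrative assumptions; the authors' feature-deconstruction pipeline is not reproduced.

```python
# LSTM next-note model (schematic): note-event tokens are embedded and a
# two-layer LSTM is trained with teacher forcing to predict the next token.
# Sampling from the output softmax would generate new sequences.
import torch
import torch.nn as nn

class NoteLSTM(nn.Module):
    def __init__(self, vocab: int = 128, emb: int = 64, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, tokens):                       # tokens: (batch, time)
        h, _ = self.lstm(self.embed(tokens))
        return self.head(h)                          # logits over next tokens

model = NoteLSTM()
seq = torch.randint(0, 128, (4, 32))                 # dummy note-token batch
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(                  # next-note prediction loss
    logits.reshape(-1, 128), seq[:, 1:].reshape(-1))
loss.backward()
print(float(loss))
```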

16 pages, 553 KiB  
Article
Improving Phrase Segmentation in Symbolic Folk Music: A Hybrid Model with Local Context and Global Structure Awareness
by Xin Guan, Zhilin Dong, Hui Liu and Qiang Li
Entropy 2025, 27(5), 460; https://doi.org/10.3390/e27050460 - 24 Apr 2025
Viewed by 488
Abstract
The segmentation of symbolic music phrases is crucial for music information retrieval and structural analysis. However, existing BiLSTM-CRF methods mainly rely on local semantics, making it difficult to capture long-range dependencies, leading to inaccurate phrase boundary recognition across measures or themes. Traditional Transformer models use static embeddings, limiting their adaptability to different musical styles, structures, and melodic evolutions. Moreover, multi-head self-attention struggles with local context modeling, causing the loss of short-term information (e.g., pitch variation, melodic integrity, and rhythm stability), which may result in over-segmentation or merging errors. To address these issues, we propose a segmentation method integrating local context enhancement and global structure awareness. This method overcomes traditional models’ limitations in long-range dependency modeling, improves phrase boundary recognition, and adapts to diverse musical styles and melodies. Specifically, dynamic note embeddings enhance contextual awareness across segments, while an improved attention mechanism strengthens both global semantics and local context modeling. Combining these strategies ensures reasonable phrase boundaries and prevents unnecessary segmentation or merging. The experimental results show that our method outperforms the state-of-the-art methods for symbolic music phrase segmentation, with phrase boundaries better aligned to musical structures.
(This article belongs to the Section Multidisciplinary Applications)
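
For orientation, here is a schematic per-note boundary tagger in the spirit of the BiLSTM baselines the paper discusses. It omits the CRF layer and none of the paper's dynamic embeddings or hybrid local/global attention are reproduced; the note-feature dimensions are assumptions.

```python
# Phrase-boundary tagging as sequence labeling: each note gets a boundary /
# continuation label from a BiLSTM over simple note features.
import torch
import torch.nn as nn

class BoundaryTagger(nn.Module):
    def __init__(self, n_feats: int = 8, hidden: int = 128):
        super().__init__()
        self.proj = nn.Linear(n_feats, hidden)        # pitch, duration, onset, ...
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)          # boundary vs. continuation

    def forward(self, notes):                         # notes: (batch, time, n_feats)
        h, _ = self.bilstm(self.proj(notes))
        return self.head(h)                           # per-note boundary logits

model = BoundaryTagger()
notes = torch.randn(2, 64, 8)                         # dummy note-feature sequences
labels = torch.randint(0, 2, (2, 64))                 # dummy boundary labels
logits = model(notes)
loss = nn.functional.cross_entropy(logits.reshape(-1, 2), labels.reshape(-1))
print(float(loss))
```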

16 pages, 2662 KiB  
Article
Uplifting Moods: Augmented Reality-Based Gamified Mood Intervention App with Attention Bias Modification
by Yun Jung Yeh, Sarah S. Jo and Youngjun Cho
Software 2025, 4(2), 8; https://doi.org/10.3390/software4020008 - 1 Apr 2025
Cited by 1 | Viewed by 777
Abstract
Attention Bias Modification (ABM) is a cost-effective mood intervention with the potential to be used in daily settings beyond clinical environments. However, its interactivity and user engagement are known to be limited and underexplored. Here, we propose Uplifting Moods, a novel mood intervention app that combines gamified ABM and augmented reality (AR) to address the limitation associated with the repetitive nature of ABM. By harnessing mobile AR’s low-cost, portable, and accessible characteristics, the approach helps users easily take part in ABM, positively shifting their emotions. We conducted a mixed-methods study with 24 participants, involving a controlled experiment with the Self-Assessment Manikin as its primary measure and a semi-structured interview. Our analysis reports that the approach uniquely adds fun, exploratory, and challenging features, improving engagement and leaving users feeling more cheerful and less under control. It also highlights the importance of personalization and of considering gaming style, music preference, and socialization in designing a daily AR ABM game as an effective mental wellbeing intervention.

17 pages, 674 KiB  
Article
Graph Neural Network and LSTM Integration for Enhanced Multi-Label Style Classification of Piano Sonatas
by Sibo Zhang, Yang Liu and Mengjie Zhou
Sensors 2025, 25(3), 666; https://doi.org/10.3390/s25030666 - 23 Jan 2025
Cited by 1 | Viewed by 1024
Abstract
In the field of musicology, the automatic style classification of compositions such as piano sonatas presents significant challenges because of their intricate structural and temporal characteristics. Traditional approaches often fail to capture the nuanced relationships inherent in musical works. This paper addresses the limitations of traditional neural networks in piano sonata style classification and feature extraction by proposing a novel integration of graph convolutional neural networks (GCNs), graph attention networks (GATs), and Long Short-Term Memory (LSTM) networks for the automatic multi-label classification of piano sonatas. Specifically, the method combines the graph convolution operations of GCNs, the attention mechanism of GATs, and the gating mechanism of LSTMs to perform, layer by layer, graph structure representation, feature extraction, attention weighting, and encoding of the time-dependent features of the music data. The aim is to optimize the representation of the structural and temporal features of musical elements, as well as the dependencies among the discovered features, so as to improve classification performance. In addition, we utilize MIDI files of several piano sonatas to construct a dataset spanning the 17th to the 19th centuries (i.e., the late Baroque, Classical, and Romantic periods). The experimental results demonstrate that the proposed method improves the accuracy of style classification by 15% over baseline schemes.
(This article belongs to the Section Intelligent Sensors)
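
A rough sketch of such a GCN, GAT, and LSTM pipeline follows: each time window of a piece is treated as a small note graph, graph layers embed it, and an LSTM reads the window embeddings in order before a multi-label head. It requires torch_geometric; the window construction, dimensions, and label set are assumptions, not the authors' configuration.

```python
# Graph layers embed per-window note graphs; an LSTM encodes the sequence of
# window embeddings; a sigmoid head yields one logit per style label.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, GATConv, global_mean_pool

class GraphLSTMTagger(nn.Module):
    def __init__(self, n_feats=8, hidden=64, n_labels=3):
        super().__init__()
        self.gcn = GCNConv(n_feats, hidden)
        self.gat = GATConv(hidden, hidden, heads=2, concat=False)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_labels)        # one logit per style label

    def forward(self, windows):
        # windows: list of (x, edge_index) graphs, one per time window
        embs = []
        for x, edge_index in windows:
            h = torch.relu(self.gcn(x, edge_index))
            h = torch.relu(self.gat(h, edge_index))
            batch = torch.zeros(h.size(0), dtype=torch.long)
            embs.append(global_mean_pool(h, batch))     # (1, hidden) per window
        seq = torch.stack(embs, dim=1)                  # (1, n_windows, hidden)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])                    # multi-label logits

g = lambda: (torch.randn(5, 8), torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]]))
model = GraphLSTMTagger()
logits = model([g() for _ in range(4)])                 # 4 dummy windows
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.tensor([[1., 0., 1.]]))               # multi-label target
print(float(loss))
```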

17 pages, 1637 KiB  
Article
Advancements in End-to-End Audio Style Transformation: A Differentiable Approach for Voice Conversion and Musical Style Transfer
by Shashwat Aggarwal, Shashwat Uttam, Sameer Garg, Shubham Garg, Kopal Jain and Swati Aggarwal
AI 2025, 6(1), 16; https://doi.org/10.3390/ai6010016 - 17 Jan 2025
Cited by 1 | Viewed by 1926
Abstract
Introduction: This study introduces a fully differentiable, end-to-end audio transformation network designed to overcome the limitations of existing audio transformation systems by operating directly on acoustic features. Methods: The proposed method employs an encoder–decoder architecture with a global conditioning mechanism. It eliminates the need for parallel utterances, intermediate phonetic representations, and speaker-independent ASR systems. The system is evaluated on tasks of voice conversion and musical style transfer using subjective and objective metrics. Results: Experimental results demonstrate the model’s efficacy, achieving competitive performance in both seen and unseen target scenarios. The proposed framework outperforms seven existing systems for audio transformation and aligns closely with state-of-the-art methods. Conclusion: This approach simplifies feature engineering, ensures vocabulary independence, and broadens the applicability of audio transformations across diverse domains, such as personalized voice assistants and musical experimentation.
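
The following is a schematic of an encoder-decoder with global conditioning in the spirit described: a target-identity embedding is broadcast along time and fused with the encoded acoustic features before decoding. Layer types, sizes, and the fusion rule are assumptions, not the authors' exact architecture.

```python
# Global conditioning for conversion (schematic): encode source mel frames,
# concatenate a broadcast target embedding, decode to converted mel frames.
import torch
import torch.nn as nn

class ConditionedConverter(nn.Module):
    def __init__(self, n_mels=80, hidden=128, n_targets=10):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True)
        self.target_emb = nn.Embedding(n_targets, hidden)   # global condition
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    def forward(self, mels, target_id):
        h, _ = self.encoder(mels)                        # (batch, time, hidden)
        cond = self.target_emb(target_id)                # (batch, hidden)
        cond = cond.unsqueeze(1).expand(-1, h.size(1), -1)
        y, _ = self.decoder(torch.cat([h, cond], dim=-1))
        return self.out(y)                               # converted mel frames

model = ConditionedConverter()
src = torch.randn(2, 100, 80)                            # source mel-spectrograms
tgt_id = torch.tensor([3, 7])                            # desired target identities
print(model(src, tgt_id).shape)                          # torch.Size([2, 100, 80])
```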

34 pages, 2098 KiB  
Review
Physiological Entrainment: A Key Mind–Body Mechanism for Cognitive, Motor and Affective Functioning, and Well-Being
by Marco Barbaresi, Davide Nardo and Sabrina Fagioli
Brain Sci. 2025, 15(1), 3; https://doi.org/10.3390/brainsci15010003 - 24 Dec 2024
Cited by 1 | Viewed by 3880
Abstract
Background: The human sensorimotor system can naturally synchronize with environmental rhythms, such as light pulses or sound beats. Several studies showed that different styles and tempos of music, or other rhythmic stimuli, have an impact on physiological rhythms, including electrocortical brain activity, heart rate, and motor coordination. Such synchronization, also known as the “entrainment effect”, has been identified as a crucial mechanism impacting cognitive, motor, and affective functioning. Objectives: This review examines theoretical and empirical contributions to the literature on entrainment, with a particular focus on the physiological mechanisms underlying this phenomenon and its role in cognitive, motor, and affective functions. We also address the inconsistent terminology used in the literature and evaluate the range of measurement approaches used to assess entrainment phenomena. Finally, we propose a definition of “physiological entrainment” that emphasizes its role as a fundamental mechanism that encompasses rhythmic interactions between the body and its environment, to support information processing across bodily systems and to sustain adaptive motor responses. Methods: We reviewed the recent literature through the lens of the “embodied cognition” framework, offering a unified perspective on the phenomenon of physiological entrainment. Results: Evidence from the current literature suggests that physiological entrainment produces measurable effects, especially on neural oscillations, heart rate variability, and motor synchronization. Eventually, such physiological changes can impact cognitive processing, affective functioning, and motor coordination. Conclusions: Physiological entrainment emerges as a fundamental mechanism underlying the mind–body connection. Entrainment-based interventions may be used to promote well-being by enhancing cognitive, motor, and affective functions, suggesting potential rehabilitative approaches to enhancing mental health.
(This article belongs to the Special Issue Exploring the Role of Music in Cognitive Processes)

14 pages, 2940 KiB  
Communication
Potential Note Degree of Khong Wong Yai Based on Rhyme Structure and Pillar Tone as a Novel Approach for Musical Analysis Using Multivariate Statistics: A Case Study of the Composition Sadhukarn from Thailand, Laos, and Cambodia
by Sumetus Eambangyung
Stats 2024, 7(4), 1513-1526; https://doi.org/10.3390/stats7040089 - 20 Dec 2024
Viewed by 917
Abstract
Diverse multivariate statistics are powerful tools for musical analysis. A recent study identified relationships among different versions of the composition Sadhukarn from Thailand, Laos, and Cambodia using non-metric multidimensional scaling (NMDS) and cluster analysis. However, the datasets used for NMDS and cluster analysis require musical knowledge and complicated manual conversion of notations. This work aims to (i) evaluate a novel approach, based on multivariate statistics of the potential note degree of rhyme structure and pillar tone (Look Tok), for musical analysis of the 26 versions of the composition Sadhukarn from Thailand, Laos, and Cambodia; (ii) compare the multivariate results obtained by this novel approach with those obtained from datasets prepared by the published manual-conversion method; and (iii) investigate the impact of normalization on the results obtained by the new method. The results show that the novel approach established in this study successfully groups the 26 Sadhukarn versions according to their countries of origin. The results obtained by the novel approach on the full version were comparable to those obtained by the manual conversion approach, whereas the normalization process caused a loss of identity and uniqueness. In conclusion, the novel approach based on the full version can be considered a useful alternative for musical analysis based on multivariate statistics, and it can be applied to other music genres, forms, and styles, as well as other musical instruments.
(This article belongs to the Section Multivariate Analysis)
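
The statistical core (NMDS plus hierarchical clustering) is easy to sketch with standard tooling. In the sketch below, random placeholder vectors stand in for the note-degree encodings the study derives from rhyme structure and pillar tones; they are not the study's data, and the raw-versus-normalized comparison the paper makes is left out.

```python
# NMDS ordination and Ward clustering of 26 hypothetical version profiles.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.integers(0, 20, size=(26, 7)).astype(float)  # 26 versions x 7 note degrees

D = squareform(pdist(X))                     # pairwise Euclidean distances
nmds = MDS(n_components=2, metric=False,     # non-metric multidimensional scaling
           dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(D)               # 2-D ordination of the 26 versions

tree = linkage(pdist(X), method="ward")      # agglomerative clustering
labels = fcluster(tree, t=3, criterion="maxclust")   # e.g., 3 country groups
print(coords.shape, labels)
```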

17 pages, 3902 KiB  
Article
Dual-Path Beat Tracking: Combining Temporal Convolutional Networks and Transformers in Parallel
by Nikhil Thapa and Joonwhoan Lee
Appl. Sci. 2024, 14(24), 11777; https://doi.org/10.3390/app142411777 - 17 Dec 2024
Viewed by 1851
Abstract
The Transformer, a deep learning architecture, has shown exceptional adaptability across fields, including music information retrieval (MIR). Transformers excel at capturing global, long-range dependencies in sequences, which is valuable for tracking rhythmic patterns over time. Temporal Convolutional Networks (TCNs), with their dilated convolutions, are effective at processing local, temporal patterns with reduced complexity. Combining these complementary characteristics, global sequence modeling from Transformers and local temporal detail from TCNs, enhances beat tracking while reducing the model’s overall complexity. To capture beat intervals of varying lengths and ensure optimal alignment of beat predictions, the model employs a Dynamic Bayesian Network (DBN), followed by Viterbi decoding for effective post-processing. This system is evaluated across diverse public datasets spanning various music genres and styles, achieving performance on par with current state-of-the-art methods yet with fewer trainable parameters. We also explore the interpretability of the model, using Grad-CAM to visualize its learned features and offer insights into how the TCN-Transformer hybrid captures rhythmic patterns in the data.
(This article belongs to the Special Issue AI in Audio Analysis: Spectrogram-Based Recognition)
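
A minimal sketch of the dual-path idea follows: a dilated-convolution branch and a Transformer branch read the same spectrogram frames in parallel, and their outputs are fused into a per-frame beat activation. Channel counts and the fusion rule are assumptions, and the DBN/Viterbi post-processing (available, for example, via madmom's DBNBeatTrackingProcessor) is omitted.

```python
# Dual-path beat activation (schematic): TCN branch for local patterns,
# Transformer branch for global context, concatenated before a sigmoid head.
import torch
import torch.nn as nn

class DualPathBeatNet(nn.Module):
    def __init__(self, n_bins=81, d=64):
        super().__init__()
        self.tcn = nn.Sequential(                      # local branch: dilated convs
            nn.Conv1d(n_bins, d, 3, padding=1, dilation=1), nn.ELU(),
            nn.Conv1d(d, d, 3, padding=2, dilation=2), nn.ELU(),
            nn.Conv1d(d, d, 3, padding=4, dilation=4), nn.ELU(),
        )
        self.inp = nn.Linear(n_bins, d)                # global branch: Transformer
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * d, 1)                # fused beat activation

    def forward(self, spec):                           # spec: (batch, time, n_bins)
        local = self.tcn(spec.transpose(1, 2)).transpose(1, 2)
        glob = self.transformer(self.inp(spec))
        act = self.head(torch.cat([local, glob], dim=-1))
        return torch.sigmoid(act).squeeze(-1)          # per-frame beat probability

net = DualPathBeatNet()
frames = torch.randn(2, 200, 81)                       # dummy log-spectrogram frames
print(net(frames).shape)                               # torch.Size([2, 200])
```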

14 pages, 199 KiB  
Article
Liturgical Gift or Theological Burden? Teenagers and Ecumenical Liturgical Exchange Events
by Nelson Robert Cowan and Emily Snider Andrews
Religions 2024, 15(12), 1478; https://doi.org/10.3390/rel15121478 - 5 Dec 2024
Viewed by 2030
Abstract
Assumptions about the preferences of teenagers in corporate worship regarding format, style, musical selections, and other experiences abound. Recognizing that teenagers are far from homogenous, we sought to listen deeply to how they process and define their experiences of worship, particularly through the lens of encountering liturgical difference. Our research team spent one week with approximately 35 highly religious, majority-Evangelical teenagers at Animate 2023 in Birmingham, Alabama—a summer camp with an emphasis in worship and the arts. Based on data from individual interviews and focus groups, this paper articulates some of our findings—namely that these highly devoted teenage worshipers demonstrate liturgical curiosity, delight in their own agency, and often desire to adopt practices that are foreign to them, even when some of those elements are deemed “weird”. The lived experiences of young people are often missing from conversations about their liturgical practices in both the Church and academy. While this study is not generalizable, it offers a micro glimpse into one worship arts camp, aiming to provide tangible data points to address this lacuna.
(This article belongs to the Special Issue Contemporary Worship Music and Intergenerational Formation)
18 pages, 4245 KiB  
Systematic Review
The Effect of Music Distraction on Dental Anxiety During Invasive Dental Procedures in Children and Adults: A Meta-Analysis
by Kung-Chien Shih, Wei-Ti Hsu, Jia-Li Yang, Kee-Ming Man, Kuen-Bao Chen and Wei-Yong Lin
J. Clin. Med. 2024, 13(21), 6491; https://doi.org/10.3390/jcm13216491 - 29 Oct 2024
Cited by 1 | Viewed by 2732
Abstract
Background: Dental anxiety and odontophobia are common issues, leading to challenges with oral hygiene and dental health. Music distraction offers an effective and side-effect-free way to alleviate pain and increase the acceptability of dental treatments. Our meta-analysis aimed to assess the efficacy of music distraction in reducing patient anxiety during invasive dental procedures in children and adults. Methods: The PubMed, Web of Science, and Embase databases were searched for controlled clinical trials using the keywords “music” and “dental anxiety”. The main outcome measured was the anxiety score. A meta-analysis was conducted using a random-effects model to estimate the standardized mean differences (SMDs). Subgroup analyses were conducted based on age group, music preference, and music style. The research protocol has been registered with PROSPERO (Registration ID: CRD42022357961). Results: A total of 24 controlled clinical trials involving 1830 participants met the inclusion criteria for the meta-analysis. Music distraction significantly reduced dental anxiety during invasive procedures under local anesthesia (SMD, −0.50; 95% CI, −0.80 to −0.21; p = 0.0009; I2 = 83%). Our subgroup analysis revealed that music distraction was more effective in adults (SMD, −0.51; p = 0.0007) than in children (SMD, −0.47; p = 0.13). Regarding music selection, music chosen by the participant (SMD, −1.01; p = 0.008) had a stronger anxiolytic effect than music chosen by the investigators (SMD, −0.24; p = 0.02). Regarding music style, classical music (SMD, −0.69; p = 0.009) was associated with better anxiolytic effects in adults. Conclusions: Our meta-analysis supports the use of music to alleviate dental anxiety during invasive procedures. Listening to classical or self-selected music can serve as an effective adjunct to outpatient surgical care in dental clinics.
(This article belongs to the Topic Advances in Dental Health)
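
As a worked illustration of the pooling step, the sketch below combines hypothetical per-study SMDs under a DerSimonian-Laird random-effects model, the family of model this meta-analysis uses; the numbers are invented and are not the review's data.

```python
# Random-effects pooling of standardized mean differences (DerSimonian-Laird).
import numpy as np

smd = np.array([-0.8, -0.3, -0.6, -0.2])    # per-study SMDs (hypothetical)
var = np.array([0.05, 0.04, 0.06, 0.03])    # their sampling variances

w = 1 / var                                  # inverse-variance (fixed) weights
mu_fe = (w * smd).sum() / w.sum()
Q = (w * (smd - mu_fe) ** 2).sum()           # Cochran's Q heterogeneity statistic
df = len(smd) - 1
C = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (Q - df) / C)                # between-study variance (DL estimator)

w_re = 1 / (var + tau2)                      # random-effects weights
mu = (w_re * smd).sum() / w_re.sum()
se = (1 / w_re.sum()) ** 0.5
i2 = max(0.0, (Q - df) / Q) * 100            # I^2: share of variance from heterogeneity
print(f"pooled SMD {mu:.2f} (95% CI {mu - 1.96 * se:.2f} to {mu + 1.96 * se:.2f}), "
      f"I^2 = {i2:.0f}%")
```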

16 pages, 7880 KiB  
Communication
Multimodal Drumming Education Tool in Mixed Reality
by James Pinkl, Julián Villegas and Michael Cohen
Multimodal Technol. Interact. 2024, 8(8), 70; https://doi.org/10.3390/mti8080070 - 5 Aug 2024
Cited by 1 | Viewed by 2520
Abstract
First-person VR- and MR-based Action Observation research has thus far yielded both positive and negative findings in studies assessing such tools’ potential to teach motor skills. Drumming, particularly polyrhythmic drumming, is a challenging motor skill to learn and has remained largely unexplored in the field of Action Observation. In this contribution, a multimodal tool designed to teach rudimental and polyrhythmic drumming was developed and tested in a 20-subject study. The tool presented subjects with a first-person MR perspective via a head-mounted display, giving users simultaneous visual exposure to virtual content and their physical surroundings. When compared against a control group practicing via video demonstrations, results showed increased rhythmic accuracy across four exercises. Specifically, a difference of 239 ms (z-ratio = 3.520, p < 0.001) was found between the timing errors of subjects who practiced with our multimodal mixed reality development and those who practiced with video, demonstrating the potential of such affordances. This research contributes to ongoing work in the fields of Action Observation and Mixed Reality, providing evidence that Action Observation techniques can be an effective practice method for drumming.
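
As an illustration of how rhythmic accuracy of the kind compared above can be scored, the sketch below matches each performed drum onset to the nearest reference onset and reports mean absolute timing error in milliseconds. This is a generic metric consistent with the timing-error comparison reported, not the study's exact analysis pipeline.

```python
# Mean absolute timing error between performed and reference onsets.
import numpy as np

def mean_timing_error_ms(performed_s, reference_s):
    performed = np.asarray(performed_s)
    reference = np.asarray(reference_s)
    # nearest reference onset for every performed hit
    errors = np.abs(performed[:, None] - reference[None, :]).min(axis=1)
    return 1000 * errors.mean()

reference = np.arange(0.0, 4.0, 0.5)          # quarter notes at 120 BPM (seconds)
performed = reference + np.random.default_rng(1).normal(0, 0.03, reference.size)
print(f"{mean_timing_error_ms(performed, reference):.1f} ms")
```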
