Search Results (138)

Search Parameters:
Keywords = classical music

17 pages, 501 KiB  
Article
Nurse-Led Binaural Beat Intervention for Anxiety Reduction in Pterygium Surgery: A Randomized Controlled Trial
by Punchiga Ratanalerdnawee, Mart Maiprasert, Jakkrit Klaphajone, Pongsiri Khunngam and Phawit Norchai
Nurs. Rep. 2025, 15(8), 282; https://doi.org/10.3390/nursrep15080282 - 31 Jul 2025
Viewed by 216
Abstract
Background/Objectives: Anxiety before ophthalmic surgery under local anesthesia may hinder patient cooperation and surgical outcomes. Nurse-led auditory interventions offer a promising non-pharmacological approach to perioperative anxiety management. This study evaluated the effectiveness of superimposed binaural beats (SBBs)—classical music layered with frequency differentials—in reducing anxiety during pterygium surgery with conjunctival autografting. Methods: In this randomized controlled trial, 111 adult patients scheduled for elective pterygium excision with conjunctival autografting under local anesthesia were allocated to one of three groups: SBBs, plain music (PM), or silence (control). A trained perioperative nurse administered all auditory interventions. The patients’ anxiety was assessed using the State–Trait Anxiety Inventory—State (STAI-S), and physiological parameters (blood pressure, heart rate, respiratory rate, and oxygen saturation) were recorded before and after surgery. Results: The SBB group showed significantly greater reductions in their STAI-S scores (p < 0.001), systolic blood pressure (p = 0.011), heart rate (p = 0.003), and respiratory rate (p = 0.009) compared to the PM and control groups. No adverse events occurred. Conclusions: SBBs are a safe, nurse-delivered auditory intervention that significantly reduces perioperative anxiety and supports physiological stability. Their integration into routine nursing care for minor ophthalmic surgeries is both feasible and beneficial. Trial Registration: This study was registered with the Thai Clinical Trials Registry (TCTR) under registration number TCTR20250125002 on 25 January 2025. Full article
(This article belongs to the Section Mental Health Nursing)
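For readers unfamiliar with the stimulus format, a minimal sketch of how a superimposed binaural beat can be constructed follows (not taken from the article): the left and right channels carry pure tones offset by a small frequency differential, layered under the music. The carrier frequency, 10 Hz differential, and mix level here are illustrative assumptions, not the trial's stimulus parameters.

import numpy as np

def superimposed_binaural_beat(music, sr, carrier_hz=240.0, beat_hz=10.0, beat_gain=0.2):
    """Layer a binaural beat under a mono music signal (illustrative values only)."""
    t = np.arange(len(music)) / sr
    left = music + beat_gain * np.sin(2 * np.pi * carrier_hz * t)               # left-ear carrier
    right = music + beat_gain * np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)  # offset carrier
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))       # normalize to avoid clipping

sr = 44100
music = np.zeros(30 * sr)                        # 30 s of silence standing in for a music excerpt
stimulus = superimposed_binaural_beat(music, sr)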

18 pages, 4696 KiB  
Article
A Deep-Learning Framework with Multi-Feature Fusion and Attention Mechanism for Classification of Chinese Traditional Instruments
by Jinrong Yang, Fang Gao, Teng Yun, Tong Zhu, Huaixi Zhu, Ran Zhou and Yikun Wang
Electronics 2025, 14(14), 2805; https://doi.org/10.3390/electronics14142805 - 12 Jul 2025
Viewed by 343
Abstract
Chinese traditional instruments are diverse and encompass a rich variety of timbres and rhythms, presenting considerable research potential. This work proposes a deep-learning framework for the automated classification of Chinese traditional instruments, addressing the challenges of acoustic diversity and cultural preservation. By integrating two datasets, CTIS and ChMusic, we constructed a combined dataset comprising four instrument families: wind, percussion, plucked string, and bowed string. Three time-frequency features, namely MFCC, CQT, and Chroma, were extracted to capture diverse sound information. A convolutional neural network architecture was designed, incorporating 3-channel spectrogram feature stacking and a hybrid channel–spatial attention mechanism to enhance the extraction of critical frequency bands and feature weights. Experimental results demonstrated that the feature-fusion method improved classification performance compared to using any single feature as input, and the attention mechanism further boosted test accuracy to 98.79%, outperforming baseline models by 2.8% and achieving superior F1 scores and recall compared to classical architectures. An ablation study confirmed the contribution of the attention mechanism. This work validates the efficacy of deep learning in preserving intangible cultural heritage through precise analysis, offering a feasible methodology for the classification of Chinese traditional instruments. Full article
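The abstract above describes 3-channel stacking of MFCC, CQT, and Chroma maps followed by a hybrid channel–spatial attention CNN. A minimal PyTorch sketch of that general shape is given below; the layer sizes, reduction ratio, and CBAM-style attention form are assumptions rather than the authors' exact architecture, and the input is assumed to be the three feature maps resized to a common grid.

import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style hybrid attention: reweight channels, then spatial positions."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_gate(x)                              # channel attention
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * self.spatial_gate(pooled)                      # spatial attention

class InstrumentFamilyCNN(nn.Module):
    """Classify a 3-channel stack of MFCC / CQT / Chroma maps into 4 instrument families."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            ChannelSpatialAttention(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, 3, freq_bins, time_frames)
        return self.classifier(self.features(x))

logits = InstrumentFamilyCNN()(torch.randn(2, 3, 128, 128))   # toy batch of stacked feature maps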

10 pages, 451 KiB  
Article
PF2N: Periodicity–Frequency Fusion Network for Multi-Instrument Music Transcription
by Taehyeon Kim, Man-Je Kim and Chang Wook Ahn
Mathematics 2025, 13(11), 1708; https://doi.org/10.3390/math13111708 - 23 May 2025
Viewed by 558
Abstract
Automatic music transcription in multi-instrument settings remains a highly challenging task due to overlapping harmonics and diverse timbres. To address this, we propose the Periodicity–Frequency Fusion Network (PF2N), a lightweight and modular component that enhances transcription performance by integrating both spectral and periodicity-domain representations. Inspired by traditional combined frequency and periodicity (CFP) methods, the PF2N reformulates CFP as a neural module that jointly learns harmonically correlated features across the frequency and cepstral domains. Unlike handcrafted alignments in classical approaches, the PF2N performs data-driven fusion using a learnable joint feature extractor. Extensive experiments on three benchmark datasets (Slakh2100, MusicNet, and MAESTRO) demonstrate that the PF2N consistently improves transcription accuracy when incorporated into state-of-the-art models. The results confirm the effectiveness and adaptability of the PF2N, highlighting its potential as a general-purpose enhancement for multi-instrument AMT systems. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
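The PF2N builds on combined frequency and periodicity (CFP) analysis: a frequency-domain representation and a periodicity (cepstral) representation are fused by a learnable module. A minimal sketch of that idea follows; the frame parameters and fusion dimensions are assumptions, and the code does not reproduce the PF2N architecture itself.

import numpy as np
import torch
import torch.nn as nn

def cfp_features(y, n_fft=2048, hop=512):
    """Frequency- and periodicity-domain representations (a minimal CFP-like front end)."""
    window = np.hanning(n_fft)
    frames = np.stack([y[i:i + n_fft] * window
                       for i in range(0, len(y) - n_fft, hop)])
    spec = np.abs(np.fft.rfft(frames, axis=1))            # frequency domain, (frames, bins)
    ceps = np.abs(np.fft.irfft(np.log1p(spec), axis=1))   # periodicity (cepstral) domain
    return spec, ceps

class JointFusion(nn.Module):
    """Learnable fusion of the two domains (stands in for the PF2N fusion block)."""
    def __init__(self, spec_bins, ceps_bins, out_dim=128):
        super().__init__()
        self.proj_spec = nn.Linear(spec_bins, out_dim)
        self.proj_ceps = nn.Linear(ceps_bins, out_dim)
        self.mix = nn.Linear(2 * out_dim, out_dim)

    def forward(self, spec, ceps):                         # (frames, bins) each
        fused = torch.cat([self.proj_spec(spec), self.proj_ceps(ceps)], dim=-1)
        return torch.relu(self.mix(fused))                 # per-frame joint feature

spec, ceps = cfp_features(np.random.randn(44100))          # ~1 s of toy audio
fusion = JointFusion(spec.shape[1], ceps.shape[1])
joint = fusion(torch.tensor(spec, dtype=torch.float32),
               torch.tensor(ceps, dtype=torch.float32))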

12 pages, 1760 KiB  
Article
Familiar Music Reduces Mind Wandering and Boosts Behavioral Performance During Lexical Semantic Processing
by Gavin M. Bidelman and Shi Feng
Brain Sci. 2025, 15(5), 482; https://doi.org/10.3390/brainsci15050482 - 2 May 2025
Viewed by 867
Abstract
Music has been shown to increase arousal and attention and even facilitate processing during non-musical tasks, including those related to speech and language functions. Mind wandering has been studied in many sustained attention tasks. Here, we investigated the intersection of these two phenomena: the role of mind wandering while listening to familiar/unfamiliar musical excerpts, and its effects on concurrent linguistic processing. We hypothesized that familiar music would be less distracting than unfamiliar music, cause less mind wandering, and consequently benefit concurrent speech perception. Participants (N = 96 young adults) performed a lexical-semantic congruity task in which they judged the relatedness of visually presented word pairs while listening to non-vocal classical music (familiar or unfamiliar orchestral pieces) or a non-music environmental sound clip (control) played in the background. Mind wandering episodes were probed intermittently during the task by explicitly asking listeners whether their mind was wandering at that moment. The primary outcomes were accuracy and reaction times measured during the lexical-semantic judgment task across the three background conditions (familiar, unfamiliar, and control). We found that listening to familiar music, relative to unfamiliar music or environmental noise, was associated with faster lexical-semantic decisions and a lower incidence of mind wandering. Mind wandering frequency was similar when the task was performed while listening to familiar music and while listening to the control environmental sounds. We infer that familiar music increases task enjoyment, reduces mind wandering, and promotes more rapid lexical access during concurrent lexical processing by modulating task-related attentional resources. The implications of using music as an aid during academic study and cognitive tasks are discussed. Full article
(This article belongs to the Section Behavioral Neuroscience)

16 pages, 643 KiB  
Article
Cross-Cultural Biases of Emotion Perception in Music
by Marjorie G. Li, Kirk N. Olsen and William Forde Thompson
Brain Sci. 2025, 15(5), 477; https://doi.org/10.3390/brainsci15050477 - 29 Apr 2025
Cited by 1 | Viewed by 1750
Abstract
Objectives: Emotion perception in music is shaped by cultural background, yet the extent of cultural biases remains unclear. This study investigated how Western listeners perceive emotion in music across cultures, focusing on the accuracy and intensity of emotion recognition and the musical features that predict emotion perception. Methods: White-European (Western) listeners from the UK, USA, New Zealand, and Australia (N = 100) listened to 48 ten-second excerpts of Western classical and Chinese traditional bowed-string music that were validated by experts to convey happiness, sadness, agitation, and calmness. After each excerpt, participants rated the familiarity, enjoyment, and perceived intensity of the four emotions. Musical features were computationally extracted for regression analyses. Results: Western listeners experienced Western classical music as more familiar and enjoyable than Chinese music. Happiness and sadness were recognised more accurately in Western classical music, whereas agitation was more accurately identified in Chinese music. The perceived intensity of happiness and sadness was greater for Western classical music; conversely, the perceived intensity of agitation was greater for Chinese music. Furthermore, emotion perception was influenced by both culture-shared (e.g., timbre) and culture-specific (e.g., dynamics) musical features. Conclusions: Our findings reveal clear cultural biases in the way individuals perceive and classify music, highlighting how these biases are shaped by the interaction between cultural familiarity and the emotional and structural qualities of the music. We discuss the possibility that purposeful engagement with music from diverse cultural traditions—especially in educational and therapeutic settings—may cultivate intercultural empathy and an appreciation of the values and aesthetics of other cultures. Full article
(This article belongs to the Special Issue Advances in Emotion Processing and Cognitive Neuropsychology)
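The abstract notes that musical features were computationally extracted for regression analyses. A minimal sketch of one way to do this is shown below; the specific descriptors (tempo, spectral centroid as a timbre proxy, RMS as a dynamics proxy), the rating scale, and the placeholder data are assumptions, not the authors' feature set or pipeline.

import numpy as np
import librosa
from sklearn.linear_model import LinearRegression

def excerpt_features(path):
    """A few coarse descriptors: tempo, timbre proxy (spectral centroid), dynamics proxy (RMS)."""
    y, sr = librosa.load(path, duration=10.0)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    return [float(np.atleast_1d(tempo)[0]),
            float(librosa.feature.spectral_centroid(y=y, sr=sr).mean()),
            float(librosa.feature.rms(y=y).mean())]

# Placeholder data standing in for real per-excerpt features and listener ratings
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 3))               # 48 excerpts x 3 features
ratings = rng.uniform(1, 7, size=48)       # mean perceived-emotion intensity per excerpt
model = LinearRegression().fit(X, ratings)
print(dict(zip(["tempo", "centroid", "rms"], model.coef_)))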

17 pages, 3023 KiB  
Article
SEM-Net: A Social–Emotional Music Classification Model for Emotion Regulation and Music Literacy in Individuals with Special Needs
by Yu-Chi Chou, Shan-Ken Chien, Pen-Chiang Chao, Yuan-Jin Lin, Chih-Yun Chen, Kuang-Kai Yeh, Yen-Chia Peng, Chen-Hao Tsao, Shih-Lun Chen and Kuo-Chen Li
Appl. Sci. 2025, 15(8), 4191; https://doi.org/10.3390/app15084191 - 10 Apr 2025
Viewed by 703
Abstract
This study aims to establish an innovative AI-based social–emotional music classification model named SEM-Net, specifically designed to integrate three core positive social–emotional elements—positive outlook, empathy, and problem-solving—into classical music, facilitating accurate emotional classification of musical excerpts related to emotional states. SEM-Net employs a convolutional neural network (CNN) architecture composed of 17 meticulously structured layers to capture complex emotional and musical features effectively. To further enhance the precision and robustness of the classification system, advanced social–emotional music feature preprocessing and sophisticated feature extraction techniques were developed, significantly improving the model’s predictive performance. Experimental results demonstrate that SEM-Net achieves an impressive final classification accuracy of 94.13%, substantially surpassing the baseline method by 54.78% and outperforming other widely used deep learning architectures, including conventional CNN, LSTM, and Transformer models, by at least 27%. The proposed SEM-Net system facilitates emotional regulation and meaningfully enhances emotional and musical literacy, social communication skills, and overall quality of life for individuals with special needs, offering a practical, scalable, and accessible tool that contributes significantly to personalized emotional growth and social–emotional learning. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Computer Vision)
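SEM-Net's feature preprocessing is only summarized in the abstract; as a rough illustration of the kind of front end such a classifier might consume, here is a log-mel preprocessing sketch with the three social–emotional classes named above. The sample rate, mel-band count, and frame count are assumptions.

import numpy as np
import librosa

EMOTION_CLASSES = ["positive_outlook", "empathy", "problem_solving"]   # from the abstract

def preprocess_excerpt(path, sr=22050, n_mels=64, frames=256):
    """Fixed-size log-mel input for a CNN classifier (shapes are illustrative)."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    logmel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels),
                                 ref=np.max)
    if logmel.shape[1] < frames:                         # pad short excerpts
        logmel = np.pad(logmel, ((0, 0), (0, frames - logmel.shape[1])))
    return logmel[:, :frames]                            # (n_mels, frames)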

23 pages, 8225 KiB  
Article
Parallel Net: Frequency-Decoupled Neural Network for DOA Estimation in Underwater Acoustic Detection
by Zhikai Yang, Xinyu Zhang, Zailei Luo, Tongsheng Shen, Mengda Cui and Xionghui Li
J. Mar. Sci. Eng. 2025, 13(4), 724; https://doi.org/10.3390/jmse13040724 - 4 Apr 2025
Viewed by 534
Abstract
Under wideband interference conditions, traditional neural networks often suffer from low accuracy in single-frequency direction-of-arrival (DOA) estimation and face challenges in detecting single-frequency sound sources. To address this limitation, we propose a novel model called Parallel Net. The architecture adopts a frequency-parallel design: it first employs a recurrent neural network, the generalized feedback gated recurrent unit (GFGRU), to independently extract features from each frequency component, and then it fuses these features through an attention mechanism. This design significantly enhances the network’s capability in estimating the DOA of single-frequency signals. The simulation results demonstrate that when the signal-to-noise ratio (SNR) exceeds −10 dB, Parallel Net achieves a mean absolute error (MAE) below 2°, outperforming traditional frequency-coherent neural networks and the MUSIC algorithm, and reduces the error to half that of classical beamforming (CBF). Further validation on the SWellEx-96 experiment confirms the model’s effectiveness in detecting single-frequency sources under wideband interference. Parallel Net exhibits superior sidelobe suppression and fewer spurious peaks compared to CBF, achieves higher accuracy than MUSIC, and produces smoother and more continuous DOA trajectories than conventional neural network models. Full article
(This article belongs to the Topic Advances in Underwater Acoustics and Aeroacoustics)
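The frequency-parallel idea described above — per-frequency recurrent feature extraction followed by attention-based fusion into an angle spectrum — can be sketched as follows. A standard GRU stands in for the paper's GFGRU, and the sensor count, hidden size, and angle grid are assumptions.

import torch
import torch.nn as nn

class ParallelDOANet(nn.Module):
    """One shared GRU runs over each frequency bin's snapshots independently;
    an attention layer then fuses bins into a spatial (angle) spectrum."""
    def __init__(self, n_sensors=8, hidden=64, n_angles=181):
        super().__init__()
        in_dim = 2 * n_sensors                      # real + imaginary parts per sensor
        self.per_freq_rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)            # scores each frequency component
        self.head = nn.Linear(hidden, n_angles)     # angle grid, e.g. -90..90 deg

    def forward(self, x):
        # x: (batch, n_freqs, n_snapshots, 2 * n_sensors)
        b, f, t, d = x.shape
        _, h = self.per_freq_rnn(x.reshape(b * f, t, d))       # last hidden state per bin
        h = h.squeeze(0).reshape(b, f, -1)                     # (batch, n_freqs, hidden)
        w = torch.softmax(self.attn(h), dim=1)                 # attention over frequencies
        fused = (w * h).sum(dim=1)                             # (batch, hidden)
        return self.head(fused)                                # angle-spectrum logits

spectrum = ParallelDOANet()(torch.randn(4, 32, 10, 16))        # toy batch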

22 pages, 626 KiB  
Article
Absorbed Concert Listening: A Qualitative, Phenomenological Inquiry
by Simon Høffding, Remy Haswell-Martin and Nanette Nielsen
Philosophies 2025, 10(2), 38; https://doi.org/10.3390/philosophies10020038 - 27 Mar 2025
Cited by 1 | Viewed by 1757
Abstract
This paper pursues a phenomenological investigation of the nature of absorbed listening in Western classical music concert audiences. This investigation is based on a dataset of 16 in-depth phenomenological interviews with audience members from three classical concerts with the Stavanger Symphony Orchestra and the Norwegian Radio Orchestra conducted in spring 2024. We identify seven major themes, namely “sharedness”, “attention”, “spontaneous thought/mental imagery”, “modes of listening”, “absorption”, “distraction”, and “strong emotional experiences”, and interpret these in light of relevant ideas in phenomenology, cognitive psychology, and ecological aesthetics, more precisely “passive synthesis” from Husserl, the “sense of agency” from Gallagher, and “mind surfing” from Høffding, Nielsen, and Laeng. We show that, like absorbed musical performance, absorbed musical listening comes in many shapes and can be grasped as instantiating variations of passive synthesis, the sense of agency, and mind surfing. We conclude that absorbed listening circles around a kind of paradox of passivity, characterised by a sense of loss of egoic control arising from particular forms of invested, intensive perceptual, cognitive, and affective engagement. Full article
(This article belongs to the Special Issue The Aesthetics of the Performing Arts in the Contemporary Landscape)

20 pages, 3601 KiB  
Article
Full-Scale Piano Score Recognition
by Xiang-Yi Zhang and Jia-Lien Hsu
Appl. Sci. 2025, 15(5), 2857; https://doi.org/10.3390/app15052857 - 6 Mar 2025
Viewed by 851
Abstract
Sheet music is one of the most efficient methods for storing music. Meanwhile, a large amount of sheet-music image data is stored in paper form rather than in a computer-readable format. Digitizing sheet music is therefore an essential task, so that the encoded music can be effectively used for tasks such as editing or playback. Although a few studies have focused on recognizing sheet music images with simpler structures—such as monophonic scores, or scores containing only clefs, time signatures, key signatures, and notes—in this paper we address classical sheet music that also contains dynamics symbols and articulation signs. This study therefore augments the GrandStaff dataset by concatenating single-line scores into multi-line scores and adding various classical-music dynamics symbols not included in the original GrandStaff dataset. Given a full-scale piano score in pages, our approach first applies three YOLOv8 models to perform three tasks: 1. converting a full page of sheet music into multiple single-line scores; 2. recognizing the classes and absolute positions of dynamics symbols in the score; and 3. finding the relative positions of dynamics symbols in the score. The identified dynamics symbols are then removed from the original score, and the remaining score serves as the input to a Convolutional Recurrent Neural Network (CRNN) for the following steps. The CRNN outputs KERN notation (KERN, a core pitch/duration representation for common-practice music notation) without dynamics symbols. By combining the CRNN output with the relative and absolute position information of the dynamics symbols, the final output is obtained. The results show that, with the assistance of YOLOv8, there is a significant improvement in accuracy. Full article
(This article belongs to the Special Issue Integration of AI in Signal and Image Processing)
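The detection stage (the three YOLOv8 models) is omitted here; as an illustration of the transcription half, below is a minimal CRNN of the kind that maps a cleaned single-line score image to per-column token logits for CTC-style decoding into KERN tokens. The vocabulary size and layer sizes are assumptions, not the authors' model.

import torch
import torch.nn as nn

class ScoreCRNN(nn.Module):
    """Conv features over the staff image, a bidirectional LSTM over the width
    (time) axis, and per-column token logits for CTC-style decoding."""
    def __init__(self, n_tokens=200, height=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.rnn = nn.LSTM(64 * (height // 4), 256, bidirectional=True, batch_first=True)
        self.out = nn.Linear(512, n_tokens + 1)    # +1 for the CTC blank symbol

    def forward(self, x):                          # x: (batch, 1, height, width)
        f = self.conv(x)                           # (batch, 64, height/4, width/4)
        f = f.permute(0, 3, 1, 2).flatten(2)       # (batch, width/4, 64 * height/4)
        h, _ = self.rnn(f)
        return self.out(h)                         # (batch, width/4, n_tokens + 1)

logits = ScoreCRNN()(torch.randn(2, 1, 128, 1024))  # toy batch of single-line score images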

17 pages, 4566 KiB  
Article
Vocal Directivity of the Greek Singing Voice on the First Three Formant Frequencies
by Georgios Dedousis, Konstantinos Bakogiannis, Areti Andreopoulou and Anastasia Georgaki
Acoustics 2025, 7(1), 13; https://doi.org/10.3390/acoustics7010013 - 4 Mar 2025
Viewed by 1148
Abstract
This study explores the relationship between formant frequencies and the directivity patterns of the Greek singing voice. Recordings were conducted in a controlled acoustic environment with four professional singers, two trained in classical music and two in Byzantine chant. Using microphones placed symmetrically on a hemispherical structure, participants sang the Greek vowels across different registers. Directivity patterns were analyzed in third-octave bands centered on each singer’s first three formant frequencies (F1, F2, F3). The results indicate that directivity patterns vary with register and center frequency, with differences observed across vowels and singers. These findings contribute to vocal production research and the development of simulation, auralization, and virtual reality applications for speech and music. Full article
(This article belongs to the Special Issue Developments in Acoustic Phonetic Research)
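Formant frequencies such as F1–F3 are commonly estimated from sustained vowels by LPC root-finding; a textbook sketch follows, which is not necessarily the procedure used in the study. The LPC order and the toy vowel are assumptions.

import numpy as np
import librosa

def first_formants(y, sr, n_formants=3, lpc_order=12):
    """Estimate F1-F3 from a sustained vowel via LPC root-finding (textbook method)."""
    a = librosa.lpc(y, order=lpc_order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]      # keep upper half-plane roots
    freqs = sorted(np.angle(roots) * sr / (2 * np.pi))
    return [f for f in freqs if f > 90][:n_formants]        # drop near-DC artifacts

# Toy "vowel": two resonances plus a little noise
sr = 16000
t = np.arange(sr) / sr
vowel = (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
         + 0.01 * np.random.randn(t.size))
print(first_formants(vowel.astype(np.float32), sr))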

44 pages, 15045 KiB  
Perspective
Exploring the Creative Art of Sergei Kuriokhin—Avant-Garde Musician, Cultural Theorist, and Cineast: Four Sergei(s) and Two Memoir Interviews
by Sergei Chubraev
Arts 2025, 14(2), 23; https://doi.org/10.3390/arts14020023 - 1 Mar 2025
Viewed by 758
Abstract
This text explores the life and legacy of Sergei Kuriokhin, a multifaceted artist who profoundly impacted Soviet and post-Soviet culture. Known for his radical experimentation in music, theater, and film, Kuriokhin defied conventional genres through his groundbreaking project, ‘Pop Mechanics’, which blended jazz, classical music, rock, circus acts, and more. His provocative performances often included surreal elements and bizarre satire, challenging cultural norms and the boundaries of Soviet censorship. Kuriokhin’s influence extended into politics, where his satirical “Lenin was a Mushroom” program questioned historical and ideological narratives, stirring public debate. His charisma, intellectual depth, and penchant for the absurd made him a central figure in Leningrad’s avant-garde scene. Kuriokhin collaborated with prominent artists and philosophers, leaving an indelible mark on Russian art and political discourse. This work, presented through the reflections of his close associates, offers insights into his lasting impact on Russian culture, blending history with personal mythologies. Full article

21 pages, 9659 KiB  
Article
Variable Properties of Auditory Scene Analysis in Music
by Adam Rosiński
Arts 2025, 14(1), 19; https://doi.org/10.3390/arts14010019 - 14 Feb 2025
Viewed by 998
Abstract
This article explores the variable properties of auditory image analysis during the perception of musical works, which are influenced by the specific elements to which the listener directs their attention. Traditional analyses of musical compositions typically involve brief comparisons with auditory phenomena described in scientific studies, such as those by A.S. Bregman. However, these analyses are often limited, offering only a narrow perspective on the works. In contrast, the approach presented in this article extends the theories and experiments developed by Bregman and others, providing a more comprehensive understanding of entire compositions or selected sections rather than focusing solely on isolated passages. This broader framework enhances auditory image analysis and serves as a foundation for further research. The expanded analysis integrates with music theory, enabling a deeper exploration of musical structures, particularly in the context of perceiving multilayered music where multiple sound sources may share similar acoustic features. The author illustrates how acoustic and perceptual factors contribute to complex mental representations through graphic and musical examples. To substantiate the claims, classical works by composers such as F. Chopin, A. Guilmant, and J.S. Bach are analysed, further highlighting the variable properties of auditory image analysis. Full article

29 pages, 4553 KiB  
Article
Simultaneous Source Number Detection and DOA Estimation Using Deep Neural Network and K2-Means Clustering with Prior Knowledge
by Aifei Liu, Yuan Zhou, Zi Li, Yuxuan Xie, Cao Zeng and Zhiling Liu
Electronics 2025, 14(4), 713; https://doi.org/10.3390/electronics14040713 - 12 Feb 2025
Cited by 1 | Viewed by 744
Abstract
Source number detection and Direction-of-Arrival (DOA) estimation are usually addressed in two stages, leading to a high computational load. This paper proposes a simple solution to efficiently estimate the source number and DOAs using a deep neural network (DNN) and clustering, named DNN-C. Observing that sources in space are usually few, DNN-C uses a simple fully connected DNN to obtain a spatial spectrum. Then, K2-means clustering is specially designed to extract the source information from the obtained spatial spectrum. In particular, to give the proposed DNN-C the ability to detect mixed sources, we first develop a new strategy for training data generation and provide a guideline for data balance settings. We then exploit prior knowledge of array signal processing and the spatial spectrum to obtain a peak vector, propose adding a virtual peak to the peak vector, and thus transform the task of source detection into a binary clustering problem of noise and sources. Overall, DNN-C provides a lightweight solution for implementing source number detection and DOA estimation simultaneously and efficiently. Its testing time is about half that of the classical solution (i.e., minimum description length and multiple signal classification, abbreviated MDL-MUSIC) when the grid step is 1°. Importantly, it is robust to nonuniform noise by nature and can identify the absence of sources. The effectiveness of DNN-C is verified by simulation results. Furthermore, the DNN-C model trained on simulated data shows its generalization to real data measured by a circular array of eight sensors. Full article
(This article belongs to the Section Circuit and Signal Processing)
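The clustering stage described above — appending a virtual peak to the peaks of a DNN-estimated spatial spectrum and splitting them into noise and source clusters — can be sketched as follows. Standard k-means with k = 2 stands in for the paper's K2-means, and the peak picking, noise level, and toy spectrum are assumptions.

import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

def detect_sources(spatial_spectrum, angles, noise_level=0.05):
    """Cluster spectrum peaks (plus a virtual noise peak) into noise vs. sources."""
    idx, _ = find_peaks(spatial_spectrum)
    heights = np.append(spatial_spectrum[idx], noise_level)        # virtual peak at the noise level
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(heights.reshape(-1, 1))
    source_label = labels[np.argmax(heights)]                      # cluster holding the tallest peak
    is_source = labels[:-1] == source_label                        # drop the virtual peak again
    doas = angles[idx][is_source]
    return len(doas), doas                                         # estimated source number and DOAs

angles = np.arange(-90.0, 91.0, 1.0)                               # 1 degree grid
spectrum = (np.exp(-0.5 * ((angles - 20) / 2) ** 2)                # toy sources at +20 and -40 deg
            + np.exp(-0.5 * ((angles + 40) / 2) ** 2)
            + 0.02 * np.abs(np.random.randn(angles.size)))
print(detect_sources(spectrum, angles))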

20 pages, 1336 KiB  
Essay
Leningrad Contemporary Music Club: An Early Bird of Soviet Musical Underground
by Alexander Kan
Arts 2025, 14(1), 13; https://doi.org/10.3390/arts14010013 - 5 Feb 2025
Viewed by 1096
Abstract
This essay discusses the genesis, evolution, and impact of the Leningrad Contemporary Music Club (CMC), a pivotal hub for avant-garde and experimental music in the late Soviet Union. Founded amidst the socio-political constraints of the late 1970s, the CMC emerged as a sanctuary for jazz, classical avant-garde, and progressive rock enthusiasts. This paper chronicles the CMC’s unique ability to foster creative expression within the repressive Soviet cultural framework, driven by a coalition of visionaries including such musicians as Sergey Kuryokhin and jazz theoreticians like Efim Barban. The narrative highlights the club’s seminal role in introducing Western avant-garde music to Soviet audiences, hosting groundbreaking performances, and cultivating a vibrant community of musicians, critics, and fans. Through an exploration of the CMC’s organisational strategies, cultural exchanges, and its ultimate closure following state intervention, the paper examines how the Club bridged underground and mainstream music while navigating ideological constraints. The research underscores the CMC’s legacy as a microcosm of resistance and innovation, situating its contributions within broader discussions of Soviet countercultural movements and global avant-garde practices. This work contributes to the historiography of Soviet underground culture, shedding light on the interplay between art, politics, and social transformation in late 20th-century Leningrad. Full article

17 pages, 674 KiB  
Article
Graph Neural Network and LSTM Integration for Enhanced Multi-Label Style Classification of Piano Sonatas
by Sibo Zhang, Yang Liu and Mengjie Zhou
Sensors 2025, 25(3), 666; https://doi.org/10.3390/s25030666 - 23 Jan 2025
Cited by 1 | Viewed by 1024
Abstract
In the field of musicology, the automatic style classification of compositions such as piano sonatas presents significant challenges because of their intricate structural and temporal characteristics. Traditional approaches often fail to capture the nuanced relationships inherent in musical works. This paper addresses the limitations of traditional neural networks in piano sonata style classification and feature extraction by proposing a novel integration of graph convolutional neural networks (GCNs), graph attention networks (GATs), and Long Short-Term Memory (LSTM) networks to perform automatic multi-label classification of piano sonatas. Specifically, the method combines the graph convolution operations of GCNs, the attention mechanism of GATs, and the gating mechanism of LSTMs to perform, layer by layer, graph-structure representation, feature extraction, weight allocation, and encoding of the time-dependent features of the music data. The aim is to optimize the representation of the structural and temporal features of musical elements, as well as the dependencies among the discovered features, so as to improve classification performance. In addition, we use MIDI files of several piano sonatas to construct a dataset spanning the 17th to the 19th centuries (i.e., the late Baroque, Classical, and Romantic periods). The experimental results demonstrate that the proposed method improves the accuracy of style classification by 15% over baseline schemes. Full article
(This article belongs to the Section Intelligent Sensors)
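A minimal sketch of the layered idea — graph convolution over a note/segment graph, a graph-attention step, an LSTM over time-ordered node features, and a sigmoid head for multi-label style output — is given below. The graph construction, layer sizes, and single-head attention form are assumptions; the paper's exact GCN/GAT/LSTM stacking is not reproduced.

import torch
import torch.nn as nn

class GraphAttnLSTMClassifier(nn.Module):
    """Graph convolution + single-head graph attention + LSTM + multi-label head."""
    def __init__(self, in_dim=16, hidden=64, n_labels=3):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hidden)            # weights of a dense-adjacency GCN layer
        self.attn_score = nn.Linear(2 * hidden, 1)      # GAT-style pairwise attention scores
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_labels)

    def forward(self, x, adj):
        # x: (batch, n_nodes, in_dim); adj: (batch, n_nodes, n_nodes), row-normalized
        h = torch.relu(self.gcn(adj @ x))                                   # graph convolution
        pairs = torch.cat([h.unsqueeze(2).expand(-1, -1, h.size(1), -1),
                           h.unsqueeze(1).expand(-1, h.size(1), -1, -1)], dim=-1)
        scores = self.attn_score(pairs).squeeze(-1).masked_fill(adj == 0, -1e9)
        h = torch.softmax(scores, dim=-1) @ h                               # attention-weighted mix
        out, _ = self.lstm(h)                                               # nodes in temporal order
        return torch.sigmoid(self.head(out[:, -1]))                        # multi-label probabilities

# Toy example: 2 pieces, 32 time-ordered segments each, self-loop-only adjacency
adj = torch.eye(32).unsqueeze(0).repeat(2, 1, 1)
probs = GraphAttnLSTMClassifier()(torch.randn(2, 32, 16), adj)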
