Search Results (7)

Search Parameters:
Keywords = MUSHRA

17 pages, 531 KiB  
Article
Exploring the Link Between Sound Quality Perception, Music Perception, Music Engagement, and Quality of Life in Cochlear Implant Recipients
by Ayşenur Karaman Demirel, Ahmet Alperen Akbulut, Ayşe Ayça Çiprut and Nilüfer Bal
Audiol. Res. 2025, 15(4), 94; https://doi.org/10.3390/audiolres15040094 - 2 Aug 2025
Viewed by 64
Abstract
Background/Objectives: This study investigated the association between cochlear implant (CI) users’ assessed perception of musical sound quality and their subjective music perception and music-related quality of life (QoL). The aim was to provide a comprehensive evaluation by integrating a relatively objective Turkish Multiple Stimulus with Hidden Reference and Anchor (TR-MUSHRA) test and a subjective music questionnaire. Methods: Thirty CI users and thirty normal-hearing (NH) adults were assessed. Perception of sound quality was measured using the TR-MUSHRA test. Subjective assessments were conducted with the Music-Related Quality of Life Questionnaire (MuRQoL). Results: TR-MUSHRA results showed that while NH participants rated all filtered stimuli as perceptually different from the original, CI users provided similar ratings for stimuli with adjacent high-pass filter settings, indicating less differentiation in perceived sound quality. On the MuRQoL, groups differed on the Frequency subscale but not the Importance subscale. Critically, no significant correlation was found between the TR-MUSHRA scores and the MuRQoL subscale scores in either group. Conclusions: The findings demonstrate that TR-MUSHRA is an effective tool for assessing perceived sound quality relatively objectively, but there is no relationship between perceiving sound quality differences and measures of self-reported musical engagement and its importance. Subjective music experience may represent different domains beyond the perception of sound quality. Therefore, successful auditory rehabilitation requires personalized strategies that consider the multifaceted nature of music perception beyond simple perceptual judgments.
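
For context on how MUSHRA ratings like these are typically handled: below is a minimal Python sketch of assessor post-screening and score aggregation, following the ITU-R BS.1534-3 convention of excluding assessors who rate the hidden reference below 90 in more than 15% of trials. The array shapes and values are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical MUSHRA ratings: shape (assessors, trials, conditions),
# with condition 0 as the hidden reference (rated high by attentive listeners).
rng = np.random.default_rng(0)
ratings = rng.integers(40, 101, size=(20, 12, 5)).astype(float)
ratings[:, :, 0] = rng.integers(90, 101, size=(20, 12))

def post_screen(ratings, ref_idx=0, threshold=90, max_fail=0.15):
    """Drop assessors who rate the hidden reference below `threshold`
    in more than `max_fail` of trials (ITU-R BS.1534-3 convention)."""
    fail_rate = np.mean(ratings[:, :, ref_idx] < threshold, axis=1)
    return ratings[fail_rate <= max_fail]

kept = post_screen(ratings)
per_assessor = kept.mean(axis=1)          # average over trials
means = per_assessor.mean(axis=0)         # per-condition panel means
ci95 = 1.96 * per_assessor.std(axis=0, ddof=1) / np.sqrt(len(per_assessor))
print(means.round(1), ci95.round(1))
```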

19 pages, 4510 KiB  
Article
Combining MUSHRA Test and Fuzzy Logic in the Evaluation of Benefits of Using Hearing Prostheses
by Piotr Szymański, Tomasz Poremski and Bożena Kostek
Electronics 2023, 12(20), 4345; https://doi.org/10.3390/electronics12204345 - 19 Oct 2023
Cited by 1 | Viewed by 1482
Abstract
Assessing the effectiveness of hearing aid fittings based on the benefits they provide is crucial but intricate. While objective metrics of hearing aids such as gain, frequency response, and distortion are measurable, they do not directly indicate user benefits. Hearing aid performance assessment encompasses various aspects, such as compensating for hearing loss and user satisfaction. The authors suggest enhancing the widely used APHAB (Abbreviated Profile of Hearing Aid Benefit) questionnaire by integrating it with the MUSHRA test. APHAB, a self-completed user questionnaire, evaluates specific sound scenarios on a seven-point scale, with each point identified by a letter, a percentage, and a verbal description. Given the complexities this poses, especially for older users, we propose converting the seven-point APHAB scale to a clearer 100-point MUSHRA scale using fuzzy logic rules. The paper begins by presenting the goals of the study, which focus on assessing the benefits of hearing aid use, especially for the elderly population. The introductory part includes an overview of methods for evaluating the effectiveness of hearing aid use. The methodology for data collection is then presented, followed by a method modification that combines the MUSHRA (MUltiple Stimuli with Hidden Reference and Anchor) test, fuzzy logic processing, and the commonly used APHAB hearing aid benefit questionnaire. The results of this process are examined, and the findings are summarized in the form of fuzzy logic-based rules, followed by a short discussion. Finally, the overall conclusion and possible future directions for developing the method are presented.
(This article belongs to the Section Computer Science & Engineering)
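
The abstract describes mapping the seven-point APHAB scale onto the 100-point MUSHRA scale with fuzzy logic. Below is a minimal sketch of that general idea, using triangular membership functions and centroid defuzzification; the membership placement is an illustrative assumption, not the authors' actual rule base.

```python
import numpy as np

# Hypothetical mapping: the seven APHAB points (A..G) are modeled as
# triangular fuzzy sets over a 0-100 scale; centroid defuzzification
# yields a MUSHRA-like score. Placement is illustrative, not the paper's.
APHAB_POINTS = "ABCDEFG"          # A = "Always (99%)" ... G = "Never (1%)"
CENTERS = np.linspace(0, 100, 7)  # evenly spaced set centers (assumption)

def triangular(x, center, width=100 / 6):
    """Triangular membership function centered at `center`."""
    return np.clip(1 - np.abs(x - center) / width, 0, 1)

def aphab_to_mushra(letter):
    """Defuzzify a single APHAB response onto the 0-100 MUSHRA scale."""
    x = np.linspace(0, 100, 1001)
    mu = triangular(x, CENTERS[APHAB_POINTS.index(letter)])
    return float(np.sum(mu * x) / np.sum(mu))  # centroid defuzzification

print(aphab_to_mushra("E"))  # ~66.7 with these illustrative centers
```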

21 pages, 1905 KiB  
Article
Particle-Velocity-Based Mixed-Source Sound Field Translation for Binaural Reproduction
by Huanyu Zuo, Lachlan I. Birnie, Prasanga N. Samarasinghe, Thushara D. Abhayapala and Vladimir Tourbabin
Appl. Sci. 2023, 13(11), 6449; https://doi.org/10.3390/app13116449 - 25 May 2023
Cited by 1 | Viewed by 1640
Abstract
The rise of virtual reality has created a demand for sound field reproduction techniques that allow the user to interact with, and move within, acoustic reproductions with six degrees of freedom. To this end, a mixed-source model of near-field and far-field virtual sources has been introduced to improve the performance of sound field translation in binaural reproductions of spatial audio recordings. Previous works, however, expand the sound field in terms of the mixed sources based on sound pressure. In this paper, we develop a new mixed-source expansion based on particle velocity, which contributes to a more precise reconstruction of the interaural phase difference and, therefore, to improved human perception of sound localization. We represent particle velocity over space using velocity coefficients in the spherical harmonic domain, and the driving signals of the virtual mixed sources are estimated by constructing cost functions to optimize the velocity coefficients. Compared to the state-of-the-art method, the sound-pressure-based mixed-source expansion, we show through numerical simulations that the proposed particle-velocity-based mixed-source expansion has better reconstruction performance in sparse solutions, allowing for sound field translation with better perceptual immersion over a larger space. Finally, we perceptually validate the proposed method through a Multiple Stimulus with Hidden Reference and Anchor (MUSHRA) experiment for a single-source scenario. The experimental results support the better perceptual immersion of the proposed method.
(This article belongs to the Special Issue Spatial Audio and Signal Processing)
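
The driving-signal estimation described above amounts to optimizing velocity coefficients in the spherical harmonic domain. Below is a heavily simplified sketch of the core step, posed as a regularized least-squares fit; the matrix G, the dimensions, and the ridge penalty are placeholders, not the paper's formulation (which favors sparse solutions).

```python
import numpy as np

# Rough sketch: solve for driving signals d of L virtual mixed sources so
# that their combined velocity coefficients approximate the recorded ones:
#   minimize ||G d - v||^2 + lam * ||d||^2
# G (N x L) maps each virtual source to spherical-harmonic velocity
# coefficients; both G and v are random placeholders here.
rng = np.random.default_rng(1)
N, L = 16, 64                        # coefficient count, virtual source count
G = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)

lam = 1e-2                           # Tikhonov regularization weight
# Closed-form ridge solution: d = (G^H G + lam I)^{-1} G^H v
d = np.linalg.solve(G.conj().T @ G + lam * np.eye(L), G.conj().T @ v)
print(np.linalg.norm(G @ d - v))     # residual of the fit
```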

29 pages, 2963 KiB  
Article
Perceptual Similarities between Artificial Reverberation Algorithms and Real Reverberation
by Huan Mi, Gavin Kearney and Helena Daffern
Appl. Sci. 2023, 13(2), 840; https://doi.org/10.3390/app13020840 - 7 Jan 2023
Cited by 2 | Viewed by 3057
Abstract
This paper presents a study evaluating the perceptual similarity between artificial reverberation algorithms and acoustic measurements. An online headphone-based listening test was conducted and data were collected from 20 expert assessors. Seven reverberation algorithms were tested: the Dattorro, Directional Feedback Delay Network (DFDN), Feedback Delay Network (FDN), Gardner, Moorer, and Schroeder algorithms, plus a new Hybrid Moorer–Schroeder (HMS) algorithm. A solo cello piece, male speech, female singing, and a drumbeat were rendered with the seven reverberation algorithms at three different reverberation times (0.266 s, 0.95 s, and 2.34 s) as the test conditions. The test was based on the Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) paradigm. The reference conditions consisted of the same audio samples convolved with measured binaural room impulse responses (BRIRs) with the same three reverberation times. The anchor was dual-mono 3.5 kHz low-pass filtered audio. The similarity between the test audio and the reference audio was scored on a scale of 0 to 100. Statistical analysis of the results shows that the Gardner and HMS reverberation algorithms are good candidates for exploring artificial reverberation in Augmented Reality (AR) scenarios in future research.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality - 2nd Volume)
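
Among the algorithms tested, the Schroeder reverberator is the classic building block: parallel feedback comb filters feeding series allpass filters. Below is a compact sketch of that topology; the delay times and gains are textbook-style placeholders, not the paper's settings.

```python
import numpy as np

def feedback_comb(x, delay, g):
    """y[n] = x[n] + g * y[n - delay] (feedback comb filter)."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    """Schroeder allpass: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(x, fs=44100):
    # Textbook-style delay times (seconds) and gains, not the paper's values.
    combs = [(0.0297, 0.77), (0.0371, 0.80), (0.0411, 0.82), (0.0437, 0.85)]
    wet = sum(feedback_comb(x, int(fs * d), g) for d, g in combs)
    for d, g in [(0.005, 0.7), (0.0017, 0.7)]:   # series allpasses
        wet = allpass(wet, int(fs * d), g)
    return wet

impulse = np.zeros(44100)
impulse[0] = 1.0
ir = schroeder_reverb(impulse)   # impulse response of the reverberator
```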

18 pages, 3663 KiB  
Article
Implementing a Statistical Parametric Speech Synthesis System for a Patient with Laryngeal Cancer
by Krzysztof Szklanny and Jakub Lachowicz
Sensors 2022, 22(9), 3188; https://doi.org/10.3390/s22093188 - 21 Apr 2022
Cited by 4 | Viewed by 2723
Abstract
Total laryngectomy, i.e., the surgical removal of the larynx, has a profound influence on a patient’s quality of life. The procedure results in a loss of natural voice, which constitutes a significant socio-psychological problem for the patient. The main aim of the study was to develop a statistical parametric speech synthesis system for a patient with laryngeal cancer, on the basis of the patient’s speech samples recorded shortly before the surgery, and to check whether it was possible to generate speech quality close to that of the original recordings. The recordings used a representative corpus of the Polish language, consisting of 2150 sentences. The recorded voice showed signs of dysphonia, which was confirmed by the auditory-perceptual RBH scale (roughness, breathiness, hoarseness) and by acoustic analysis using the AVQI (Acoustic Voice Quality Index). The speech synthesis model was trained using the Merlin repository. Twenty-five experts participated in MUSHRA listening tests, rating the synthetic voice at 69.4 on a 0–100 scale relative to the professional voice-over talent recording, which is a very good result. The authors compared the quality of the synthetic voice to that of another synthetic speech model trained on the same corpus, but with the speech samples recorded by a voice-over talent. The same experts rated that voice at 63.63, which means the synthetic voice of the patient with laryngeal cancer obtained a higher score than the one based on the talent’s recordings. As such, the method enabled the creation of a statistical parametric speech synthesizer for patients awaiting total laryngectomy. As a result, the solution can improve both the quality of life and the mental wellbeing of the patient.
(This article belongs to the Special Issue Analytics and Applications of Audio and Image Sensing Techniques)
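
Below is a minimal sketch of how two systems' MUSHRA ratings from the same panel of 25 experts might be compared with a paired t-test; the score arrays are simulated around the reported means (69.4 and 63.63), not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-expert mean MUSHRA scores (n = 25) for two voices;
# simulated around the study's reported panel means, purely for illustration.
rng = np.random.default_rng(2)
patient_voice = np.clip(rng.normal(69.4, 10, 25), 0, 100)
talent_voice = np.clip(rng.normal(63.63, 10, 25), 0, 100)

# Paired comparison: the same 25 experts rated both voices.
t, p = stats.ttest_rel(patient_voice, talent_voice)
print(f"mean diff = {np.mean(patient_voice - talent_voice):.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```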

21 pages, 3411 KiB  
Article
AMBIQUAL: Towards a Quality Metric for Headphone Rendered Compressed Ambisonic Spatial Audio
by Miroslaw Narbutt, Jan Skoglund, Andrew Allen, Michael Chinen, Dan Barry and Andrew Hines
Appl. Sci. 2020, 10(9), 3188; https://doi.org/10.3390/app10093188 - 3 May 2020
Cited by 23 | Viewed by 8064
Abstract
Spatial audio is essential for creating a sense of immersion in virtual environments. Efficient encoding methods are required to deliver spatial audio over networks without compromising Quality of Service (QoS). Streaming service providers such as YouTube typically transcode content into various bit rates and need a perceptually relevant audio quality metric to monitor users’ perceived quality and spatial localization accuracy. The aim of the paper is two-fold: first, to investigate the effect of Opus codec compression on the quality of spatial audio as perceived by listeners, using subjective listening tests; second, to introduce AMBIQUAL, a full-reference objective metric for spatial audio quality, which derives both listening quality and localization accuracy metrics directly from the B-format Ambisonic audio. We compare AMBIQUAL quality predictions with subjective quality assessments across a variety of audio samples that have been compressed using the Opus 1.2 codec at various bit rates. Listening quality and localization accuracy of first- and third-order Ambisonics were evaluated. Several fixed and dynamic audio sources (single and multiple) were used to evaluate localization accuracy. The results show good correlation, for both listening quality and localization accuracy, between objective scores from AMBIQUAL and subjective scores obtained during listening tests.
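
Below is a minimal sketch of the validation step described above: correlating a full-reference metric's predictions against mean subjective MUSHRA scores across conditions. The score arrays are placeholders, not AMBIQUAL's actual outputs.

```python
import numpy as np
from scipy import stats

# Placeholder scores across, e.g., bitrate/order conditions: objective
# similarity predictions vs. mean subjective MUSHRA ratings.
objective = np.array([0.52, 0.64, 0.71, 0.80, 0.88, 0.95])
mushra = np.array([41.0, 55.0, 62.0, 71.0, 83.0, 92.0])

pearson_r, _ = stats.pearsonr(objective, mushra)      # linear agreement
spearman_rho, _ = stats.spearmanr(objective, mushra)  # rank agreement
print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")
```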

22 pages, 4118 KiB  
Article
A Multi-Frame PCA-Based Stereo Audio Coding Method
by Jing Wang, Xiaohan Zhao, Xiang Xie and Jingming Kuang
Appl. Sci. 2018, 8(6), 967; https://doi.org/10.3390/app8060967 - 12 Jun 2018
Cited by 5 | Viewed by 4427
Abstract
With the increasing demand for high-quality audio, stereo audio coding has become more and more important. In this paper, a multi-frame coding method based on Principal Component Analysis (PCA) is proposed for the compression of audio signals, including both mono and stereo signals. The PCA-based method transforms the input audio spectral coefficients into eigenvectors of covariance matrices and reduces the coding bitrate by grouping such eigenvectors into a smaller number of vectors. The multi-frame joint technique makes the PCA-based method more efficient and feasible. This paper also proposes a quantization method that utilizes Pyramid Vector Quantization (PVQ) to quantize the PCA matrices with few bits. Parametric coding algorithms are also employed alongside PCA to ensure the high efficiency of the proposed audio codec. Subjective listening tests using Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) have shown that the proposed PCA-based coding method is efficient at processing stereo audio.
(This article belongs to the Special Issue Modelling, Simulation and Data Analysis in Acoustical Problems)
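
Below is a minimal sketch of the multi-frame PCA idea: treat consecutive frames of spectral coefficients as variables, decompose their covariance, and transmit only the strongest components. The frame/bin counts and the number of retained components are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# One multi-frame group of spectral coefficients: each row is a frame.
rng = np.random.default_rng(3)
n_frames, n_bins = 8, 256
X = rng.standard_normal((n_frames, n_bins))

Xc = X - X.mean(axis=1, keepdims=True)          # center each frame
C = np.cov(Xc, rowvar=True)                     # (n_frames, n_frames)
eigvals, eigvecs = np.linalg.eigh(C)            # ascending eigenvalues
W = eigvecs[:, np.argsort(eigvals)[::-1][:3]]   # top-3 eigenvectors

Y = W.T @ Xc                                    # 3 coded vectors instead of 8
X_hat = W @ Y + X.mean(axis=1, keepdims=True)   # decoder-side reconstruction
err = np.sum((X - X_hat) ** 2) / np.sum(X ** 2)
print(f"kept 3/{n_frames} vectors, relative error = {err:.3f}")
```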
