Search Results (118)

Search Parameters:
Keywords = HEARRING classification

34 pages, 761 KB  
Review
Retrocochlear Auditory Dysfunctions (RADs) and Their Treatment: A Narrative Review
by Domenico Cuda, Patrizia Mancini, Giuseppe Chiarella and Rosamaria Santarelli
Audiol. Res. 2026, 16(1), 5; https://doi.org/10.3390/audiolres16010005 (registering DOI) - 23 Dec 2025
Abstract
Background/Objectives: Retrocochlear auditory dysfunctions (RADs), including auditory neuropathy (AN) and auditory processing disorders (APD), encompass disorders characterized by impaired auditory processing beyond the cochlea. This narrative review critically examines their distinguishing features, synthesizing recent advances in classification, pathophysiology, clinical presentation, and treatment. Methods: This narrative review involved a comprehensive literature search across major electronic databases (e.g., PubMed, Scopus) to identify and synthesize relevant studies on the classification, diagnosis, and management of AN and APD. The goal was to update the view on etiologies (genetic/non-genetic) and individualized rehabilitative strategies. Diagnosis relies on a comprehensive assessment, including behavioral, electrophysiological, and imaging tests. Rehabilitation is categorized into bottom-up and top-down approaches. Results: ANSD is defined by neural desynchronization with preserved outer hair cell function, resulting in abnormal auditory brainstem responses and poor speech discrimination. The etiologies (distal/proximal) influence the prognosis for interventions, particularly cochlear implants (CI). APD involves central processing deficits, often with normal peripheral hearing and heterogeneous symptoms affecting speech perception and localization. Rehabilitation is multidisciplinary, utilizing bottom-up strategies (e.g., auditory training, CI) and compensatory top-down approaches. Remote microphone systems are highly effective in improving the signal-to-noise ratio. Conclusions: Accurate diagnosis and personalized, multidisciplinary management are crucial for optimizing communication and quality of life. Evidence suggests that combined bottom-up and top-down interventions may yield superior outcomes. However, methodological heterogeneity limits the generalizability of protocols, highlighting the need for further targeted research. Full article

13 pages, 1912 KB  
Article
Vibro-Acoustic Radiation Analysis for Detecting Otitis Media with Effusion
by Gyuyoung Yi, Jonghoon Jeon, Kyunglae Gu, Junhong Park and Jae Ho Chung
Appl. Sci. 2026, 16(1), 4; https://doi.org/10.3390/app16010004 - 19 Dec 2025
Viewed by 82
Abstract
Otitis media with effusion (OME) is a common middle ear disease characterized by fluid accumulation without acute infection, leading to conductive hearing loss. Conventional diagnostic tools, such as tympanometry and otoscopy, have limited sensitivity and rely on expert interpretation. This study investigates vibro-acoustic radiation (VAR) as a novel, non-invasive, and objective method for OME detection. VAR signals were obtained from 36 OME patients (43 ears) and 15 normal ears using bone-conduction excitation and stereo microphones, and the frequency response functions were analyzed. OME increases the mechanical loading of the tympanic membrane and ossicular chain, thereby modifying sound transmission across the middle ear. Using a simplified theoretical model, we estimated acoustic parameters of the ear canal, eardrum, and middle ear, including specific acoustic impedance and resonance frequency ranges, to interpret changes in VAR. VAR analysis revealed significantly reduced signal amplitude in the 8–10 kHz range in OME ears compared with normal ears (p < 0.05). A classification algorithm based on these features achieved 86.7% accuracy, 85.0% sensitivity, and 80.0% specificity, with an area under the ROC curve of 0.986. These findings suggest that VAR has strong potential as a non-invasive diagnostic tool for OME, warranting validation in larger clinical studies. Full article
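The reported figures (86.7% accuracy, 85.0% sensitivity, 80.0% specificity) are standard confusion-matrix metrics. A minimal sketch of how such metrics are computed, using hypothetical counts rather than the study's actual data:

```python
# Minimal sketch: binary diagnostic metrics from confusion-matrix
# counts, of the kind reported for the VAR classifier.
# The counts below are illustrative, not the study's data.

def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Return accuracy, sensitivity, and specificity."""
    return {
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

# Hypothetical counts: 17 of 20 diseased ears and 8 of 10 normal
# ears classified correctly.
m = diagnostic_metrics(tp=17, fn=3, tn=8, fp=2)
print(m)  # sensitivity 0.85, specificity 0.80
```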

14 pages, 1426 KB  
Article
Trends and Incidence of Hearing Implant Utilization in Italy: A Population-Based Study
by Enrico Ciminello, Domenico Cuda, Francesca Forli, Anna Rita Fetoni, Stefano Berrettini, Eugenio Mattei, Tiziana Falcone, Adriano Cuccu, Paola Ciccarelli, Stefania Ceccarelli and Marina Torre
Audiol. Res. 2025, 15(6), 175; https://doi.org/10.3390/audiolres15060175 - 14 Dec 2025
Viewed by 162
Abstract
Background/Objectives: Cochlear implants (CIs) and other implantable hearing devices are crucial to treat hearing loss. The aim of this study was to analyze the temporal trends of implantation for hearing devices in Italy between 2001 and 2023, with stratification by age. Methods: This population-based study explored Hospital Discharge Records and used codes from the International Classification of Diseases, 9th revision—Clinical Modification (ICD9-CM) to identify cochlear and non-cochlear implants. Patients were partitioned into six age classes: <1, 1–2, 3–17, 18–65, 66–80, and >80; and time series for counts and incidence rates (IRs) per 1,000,000 inhabitants with confidence intervals (CI95%) were explored overall and by age class. Trends were assessed by incidence rate ratio and Cox–Stuart test with a significance threshold for p-values at 0.05. Results: 22,850 (83.6%) records for cochlear and 4476 (16.4%) for non-cochlear implants were extracted. Cochlear implant volumes shifted from 537 procedures in 2001 to 1595 in 2023 (p < 0.01), while the IR increased (p < 0.01) from 9.4 (CI95%: 9.7, 10.3) in 2001 to 27 (CI95%: 25.7, 28.4) in 2023. The volumes of implanted CIs increased in both children and adults. Volumes for non-cochlear implants increased between 2001 and 2010, from 62 to 254, and remained stable afterwards. The IR shifted from 1.1 (CI95%: 0.8, 1.4) in 2001 to 4.1 (CI95%: 3.6, 4.7) in 2023. Conclusions: These trends highlight the importance of monitoring the efficacy and safety of hearing devices, and the establishment of the Italian Implantable Hearing Device Registry at the Italian National Institute of Health is a first step in this direction. Full article
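The incidence rates quoted above (events per 1,000,000 inhabitants with a CI95%) can be reproduced with a normal approximation to the Poisson count. A minimal sketch, using the paper's 2023 cochlear-implant count and an assumed Italian population of roughly 59 million (the population figure is an assumption, not taken from the study):

```python
import math

def incidence_rate_ci(events: int, population: int, scale: float = 1_000_000):
    """Incidence rate per `scale` inhabitants with a normal-approximation
    95% CI based on the Poisson variance (Var[count] = count)."""
    rate = events / population * scale
    half = 1.96 * math.sqrt(events) / population * scale
    return rate, rate - half, rate + half

# 1595 procedures in 2023; population of ~59 million is assumed.
rate, lo, hi = incidence_rate_ci(1595, 59_000_000)
print(f"{rate:.1f} (CI95%: {lo:.1f}, {hi:.1f})")  # 27.0 (CI95%: 25.7, 28.4)
```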

16 pages, 2557 KB  
Article
Cochlear Implantation in Children with Inner Ear Malformations: Auditory Outcomes, Safety and the Role of Anatomical Severity
by Miriam González-García, Cristina Alonso-González, Francisco Ropero-Romero, Estefanía Berrocal-Postigo, Francisco Javier Aguilar-Vera, Concepción Gago-Torres, Leyre Andrés-Ustárroz, Manuel Lazo-Maestre, M. Amparo Callejón-Leblic and Serafín Sánchez-Gómez
J. Clin. Med. 2025, 14(22), 8245; https://doi.org/10.3390/jcm14228245 - 20 Nov 2025
Viewed by 583
Abstract
Background/Objectives: Cochlear implantation (CI) has been shown to be effective in children with inner ear malformations (IEMs). However, outcomes vary with malformation type and anatomical complexity. Advances in radiological classification may improve the understanding of such variability to better guide patient counseling. We aimed to assess one-year post-implant auditory outcomes in children with IEMs using radiology-based classifications, and to explore genetic and perinatal predictors. We also propose a preliminary severity score derived from the INCAV system. Methods: Out of 303 pediatric CI recipients assessed at a tertiary center, we retrospectively analyzed 41 children (82 ears) diagnosed with IEMs. Malformations were categorized with the Sennaroğlu system and re-coded using INCAV, from which a severity score was derived. Postoperative outcomes were assessed in 56 implanted ears, including pure-tone average (PTA), word recognition score (WRS), and post-surgical complications. Statistical analyses included Spearman’s correlation, linear regression, and exploratory discriminant MANOVA. Results: The most frequent malformation was enlarged vestibular aqueduct (33%), followed by incomplete partition type II (22%). CI was performed in 56 malformed ears with a complication rate of 10.7%. PTA and WRS correlated with the INCAV-derived severity score, with higher severity linked to poorer thresholds and lower WRS. Linear regression showed severity explained ~20% of PTA variance, with outcomes more frequently impaired in ears with scores > 3. Exploratory analysis revealed inter-subject variability, with partial separation of mild versus moderate/severe groups mainly driven by PTA and WRS. Conclusions: CI in pediatric IEMs is safe and consistently improves hearing thresholds. 
PTA was the most robust predictor of performance, while the INCAV-derived severity score, though exploratory, may provide additional value for anatomical stratification, prognostic counseling, and rehabilitation planning. Full article
(This article belongs to the Special Issue The Challenges and Prospects in Cochlear Implantation)

14 pages, 1737 KB  
Article
Classification of Speech and Associated EEG Responses from Normal-Hearing and Cochlear Implant Talkers Using Support Vector Machines
by Shruthi Raghavendra, Sungmin Lee and Chin-Tuan Tan
Audiol. Res. 2025, 15(6), 158; https://doi.org/10.3390/audiolres15060158 - 18 Nov 2025
Viewed by 385
Abstract
Background/Objectives: Speech produced by individuals with hearing loss differs notably from that of normal-hearing (NH) individuals. Although cochlear implants (CIs) provide sufficient auditory input to support speech acquisition and control, there remains considerable variability in speech intelligibility among CI users. As a result, speech produced by CI talkers often exhibits distinct acoustic characteristics compared to that of NH individuals. Methods: Speech data were obtained from eight cochlear-implant (CI) and eight normal-hearing (NH) talkers, while electroencephalogram (EEG) responses were recorded from 11 NH listeners exposed to the same speech stimuli. Support Vector Machine (SVM) classifiers employing 3-fold cross-validation were evaluated using classification accuracy as the performance metric. This study evaluated the efficacy of Support Vector Machine (SVM) algorithms using four kernel functions (Linear, Polynomial, Gaussian, and Radial Basis Function) to classify speech produced by NH and CI talkers. Six acoustic features—Log Energy, Zero-Crossing Rate (ZCR), Pitch, Linear Predictive Coefficients (LPC), Mel-Frequency Cepstral Coefficients (MFCCs), and Perceptual Linear Predictive Cepstral Coefficients (PLP-CC)—were extracted. These same features were also extracted from electroencephalogram (EEG) recordings of NH listeners who were exposed to the speech stimuli. The EEG analysis leveraged the assumption of quasi-stationarity over short time windows. Results: Classification of speech signals using SVMs yielded the highest accuracies of 100% and 94% for the Energy and MFCC features, respectively, using Gaussian and RBF kernels. EEG responses to speech achieved classification accuracies exceeding 70% for ZCR and Pitch features using the same kernels. Other features such as LPC and PLP-CC yielded moderate to low classification performance. 
Conclusions: The results indicate that both speech-derived and EEG-derived features can effectively differentiate between CI and NH talkers. Among the tested kernels, Gaussian and RBF provided superior performance, particularly when using Energy and MFCC features. These findings support the application of SVMs for multimodal classification in hearing research, with potential applications in improving CI speech processing and auditory rehabilitation. Full article
(This article belongs to the Section Hearing)
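Among the six acoustic features above, the zero-crossing rate (ZCR) is the simplest to illustrate. A minimal stand-alone sketch of ZCR extraction over a signal frame (the framing and classifier stages of the study are not reproduced here):

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive-sample pairs whose signs differ --
    one of the six features used alongside Log Energy, Pitch,
    LPC, MFCC, and PLP-CC in the SVM study."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:])
        if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

# A signal alternating in sign every sample has the maximum ZCR of 1.0;
# a monotone positive signal has a ZCR of 0.0.
print(zero_crossing_rate([1, -1, 1, -1, 1]))  # 1.0
print(zero_crossing_rate([1, 2, 3, 4]))       # 0.0
```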

23 pages, 1370 KB  
Systematic Review
PMP22-Related Neuropathies: A Systematic Review
by Carlo Alberto Cesaroni, Laura Caiazza, Giulia Pisanò, Martina Gnazzo, Giulia Sigona, Susanna Rizzi, Agnese Pantani, Daniele Frattini and Carlo Fusco
Genes 2025, 16(11), 1279; https://doi.org/10.3390/genes16111279 - 29 Oct 2025
Viewed by 1352
Abstract
Background. PMP22-related neuropathies comprise a spectrum of predominantly demyelinating disorders, most commonly Charcot–Marie–Tooth type 1A (CMT1A; 17p12 duplication) and hereditary neuropathy with liability to pressure palsies (HNPP; 17p12 deletion), with rarer phenotypes due to PMP22 sequence variants (CMT1E, Dejerine–Sottas syndrome [DSS]). Methods. We conducted a PRISMA-compliant systematic review (PROSPERO ID: 1139921) of PubMed and Scopus (January 2015–August 2025). Eligible studies reported genetically confirmed PMP22-related neuropathies with clinical and/or neurophysiological data. Owing to heterogeneous reporting, we synthesized pooled counts and proportions without meta-analysis, explicitly tracking missing denominators. Results. One hundred twenty-seven studies (n = 4493 patients) were included. Sex was available for 995 patients (males 53.8% [535/995]; females 46.2% [460/995]); mean age at onset was 23.7 years in males and 16.4 years in females. Phenotypic classification was reported for 4431/4493 (75.4% CMT1A, 20.9% HNPP, 2.6% CMT1E, 1.2% DSS). Across phenotypes, weakness/foot drop was the leading presenting symptom when considering only cohorts that explicitly reported it (e.g., 65.3% in CMT1A; 76.0% in HNPP); sensory complaints (numbness, paresthesia/dysesthesia) were variably documented. Neurophysiology consistently showed demyelinating patterns, with median and ulnar nerves most frequently abnormal among assessed nerves; in HNPP, deep peroneal and sural involvement were also common in evaluated subsets. Comorbidities clustered by phenotype: orthopedic/neuromuscular features (pes cavus/hammer toes, scoliosis/kyphosis, tremor) in CMT1A and DSS; broader metabolic/autoimmune and neurodevelopmental associations in HNPP; and higher syndromic/ocular/hearing involvement in CMT1E. Genetically, 75.6% (3241/4291) had 17p12 duplication, 19.6% (835/4291) 17p12 deletion, and 4.8% (215/4291) PMP22 sequence variants with marked allelic heterogeneity. 
Among 2571 cases with available methods, MLPA was most used (41.9%), followed by NGS (20.4%) and Sanger sequencing (17.8%). Main limitations include heterogeneous and incomplete reporting across studies (especially symptoms and nerve-specific data) and the absence of a formal risk-of-bias appraisal, which preclude meta-analysis and may skew phenotype proportions toward more frequently reported entities (e.g., CMT1A). Conclusions. Recent literature confirms that PMP22 copy-number variants account for the vast majority of cases, while sequence-level variants underpin a minority with distinct phenotypes (notably CMT1E/DSS). Routine MLPA, complemented by targeted/NGS, optimizes diagnostic yield. Standardized reporting of nerve-conduction parameters and symptom denominators is urgently needed to enable robust cross-study comparisons in both pediatric and adult populations. Full article
(This article belongs to the Section Neurogenomics)

22 pages, 4342 KB  
Article
Cloud-Based Personalized sEMG Classification Using Lightweight CNNs for Long-Term Haptic Communication in Deaf-Blind Individuals
by Kaavya Tatavarty, Maxwell Johnson and Boris Rubinsky
Bioengineering 2025, 12(11), 1167; https://doi.org/10.3390/bioengineering12111167 - 27 Oct 2025
Viewed by 765
Abstract
Deaf-blindness, particularly in progressive conditions such as Usher syndrome, presents profound challenges to communication, independence, and access to information. Existing tactile communication technologies for individuals with Usher syndrome are often limited by the need for close physical proximity to trained interpreters, typically requiring hand-to-hand contact. In this study, we introduce a novel, cloud-based, AI-assisted gesture recognition and haptic communication system designed for long-term use by individuals with Usher syndrome, whose auditory and visual abilities deteriorate with age. Central to our approach is a wearable haptic interface that relocates tactile input and output from the hands to an arm-mounted sleeve, thereby preserving manual dexterity and enabling continuous, bidirectional tactile interaction. The system uses surface electromyography (sEMG) to capture user-specific muscle activations in the hand and forearm and employs lightweight, personalized convolutional neural networks (CNNs), hosted on a centralized server, to perform real-time gesture classification. A key innovation of the system is its ability to adapt over time to each user’s evolving physiological condition, including the progressive loss of vision and hearing. Experimental validation using a public dataset, along with real-time testing involving seven participants, demonstrates that personalized models consistently outperform cross-user models in terms of accuracy, adaptability, and usability. This platform offers a scalable, longitudinally adaptable solution for non-visual communication and holds significant promise for advancing assistive technologies for the deaf-blind community. Full article
(This article belongs to the Section Biosignal Processing)

16 pages, 659 KB  
Article
The Standardized Prevalence Ratios of Occupational and Chronic Diseases Among Korean Firefighters Compared with the General Population
by Soo Jin Kim and Seunghon Ham
Fire 2025, 8(10), 408; https://doi.org/10.3390/fire8100408 - 21 Oct 2025
Viewed by 1271
Abstract
(1) Background: Firefighters, exposed to diverse and unpredictable occupational environments, face cumulatively increased physical health risks. The purpose of this study was to assess the standardized prevalence ratios (SPRs) of occupational and chronic diseases in firefighters and the general population, categorized into pre-disease and disease stages; (2) Methods: This study was a community-based, retrospective, cross-sectional study. Data sources included the occupational health examination of 7024 firefighters and the National Health and Nutrition Examination Survey of 1485 members of the general population in 2019. Statistical analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA). SPRs of chronic and occupational diseases were calculated for each pre-disease and disease stage, and chi-square tests were performed; (3) Results: Data were analyzed from a cohort of 7024 firefighters who consented to the access and use of their occupational health examination results; 91.9% (n = 6456) were male, the average age was 43 years, and the average length of service was 15.3 years. Among the five classifications of the occupational health examination results, 26.7% (n = 1877) were A, 19.2% (n = 1352) were C1, 42.4% (n = 2980) were C2, 1.5% (n = 108) were D1, and 10% (n = 705) were D2. As a result of calculating the SPRs compared to the general population, in the pre-disease stage, obesity SPR = 1.29 (95% confidence interval [CI] 1.23 to 1.34), hypertension SPR = 1.52 (95% CI 1.47 to 1.57), diabetes mellitus SPR = 1.07 (95% CI 1.02 to 1.11), and metabolic syndrome SPR = 1.62 (95% CI 1.57 to 1.66) were all higher in the firefighter group. On the other hand, in the disease stage, metabolic syndrome and complex pulmonary ventilation impairment had higher SPRs than the general population, but the differences were not statistically significant.
However, at the disease stage, SPRs for obesity, hypertension, diabetes, and noise-induced hearing loss were higher and statistically significant in the general population; (4) Conclusions: The SPRs for firefighters produced in this study clearly demonstrate the healthy worker effect. The SPRs, derived from a cross-sectional study, highlight the need for future cohort building of firefighters to track and monitor health outcomes, as well as systematic and thorough health management interventions to prevent progression from pre-disease to disease. Therefore, this study can be utilized in the development of mid-to-long-term firefighter health promotion programs and health and safety plans to minimize firefighters’ physical health and occupational exposures. Full article
(This article belongs to the Special Issue Wildfire Smoke Effects on Public Health)
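An SPR of the form "1.52 (95% CI 1.47 to 1.57)" is the ratio of observed to expected cases, with a CI from the sampling variability of the observed count. A minimal sketch using a log-normal approximation and hypothetical counts (the study's actual observed/expected counts are not reported in the abstract):

```python
import math

def spr_with_ci(observed: int, expected: float):
    """Standardized prevalence ratio with an approximate 95% CI
    (log-normal approximation: SE of ln(SPR) ~= 1/sqrt(observed))."""
    spr = observed / expected
    se_log = 1 / math.sqrt(observed)
    lo = spr * math.exp(-1.96 * se_log)
    hi = spr * math.exp(1.96 * se_log)
    return spr, lo, hi

# Hypothetical example: 1520 cases observed against 1000 expected
# from general-population prevalence.
spr, lo, hi = spr_with_ci(1520, 1000.0)
print(f"SPR = {spr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The CI width depends on the observed count, so the study's narrower intervals simply reflect its larger denominators.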

18 pages, 4337 KB  
Article
A Transformer-Based Multimodal Fusion Network for Emotion Recognition Using EEG and Facial Expressions in Hearing-Impaired Subjects
by Shuni Feng, Qingzhou Wu, Kailin Zhang and Yu Song
Sensors 2025, 25(20), 6278; https://doi.org/10.3390/s25206278 - 10 Oct 2025
Viewed by 1562
Abstract
Hearing-impaired people face challenges in expressing and perceiving emotions, and traditional single-modal emotion recognition methods demonstrate limited effectiveness in complex environments. To enhance recognition performance, this paper proposes a multimodal fusion neural network based on a multimodal multi-head attention fusion neural network (MMHA-FNN). This method utilizes differential entropy (DE) and bilinear interpolation features as inputs, learning the spatial–temporal characteristics of brain regions through an MBConv-based module. By incorporating the Transformer-based multi-head self-attention mechanism, we dynamically model the dependencies between EEG and facial expression features, enabling adaptive weighting and deep interaction of cross-modal characteristics. The experiment conducted a four-classification task on the MED-HI dataset (15 subjects, 300 trials). The taxonomy included happy, sad, fear, and calmness, where ‘calmness’ corresponds to a low-arousal neutral state as defined in the MED-HI protocol. Results indicate that the proposed method achieved an average accuracy of 81.14%, significantly outperforming feature concatenation (71.02%) and decision layer fusion (69.45%). This study demonstrates the complementary nature of EEG and facial expressions in emotion recognition among hearing-impaired individuals and validates the effectiveness of feature layer interaction fusion based on attention mechanisms in enhancing emotion recognition performance. Full article
(This article belongs to the Section Biomedical Sensors)
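The differential entropy (DE) feature used as EEG input above has a closed form when each band-filtered signal is modeled as Gaussian. A minimal sketch of that formula (the Gaussian-band assumption is standard in DE-based EEG work, but the study's exact preprocessing is not reproduced here):

```python
import math

def differential_entropy(variance: float) -> float:
    """DE of a Gaussian-distributed band-filtered EEG signal:
    0.5 * ln(2 * pi * e * sigma^2). Under the Gaussian assumption
    this is equivalent, up to a constant, to log band power."""
    return 0.5 * math.log(2 * math.pi * math.e * variance)

# DE grows logarithmically with band power:
print(differential_entropy(1.0))   # ~1.419
print(differential_entropy(10.0))  # ~2.570
```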

26 pages, 7995 KB  
Article
Smart Home Control Using Real-Time Hand Gesture Recognition and Artificial Intelligence on Raspberry Pi 5
by Thomas Hobbs and Anwar Ali
Electronics 2025, 14(20), 3976; https://doi.org/10.3390/electronics14203976 - 10 Oct 2025
Viewed by 2872
Abstract
This paper outlines the process of developing a low-cost system for home appliance control via real-time hand gesture classification using Computer Vision and a custom lightweight machine learning model. This system strives to enable those with speech or hearing disabilities to interface with smart home devices in real time using hand gestures, such as is possible with voice-activated ‘smart assistants’ currently available. The system runs on a Raspberry Pi 5 to enable future IoT integration and reduce costs. The system also uses the official camera module v2 and 7-inch touchscreen. Frame preprocessing uses MediaPipe to assign hand coordinates, and NumPy tools to normalise them. A machine learning model then predicts the gesture. The model, a feed-forward network consisting of five fully connected layers, was built using Keras 3 and compiled with TensorFlow Lite. Training data utilised the HaGRIDv2 dataset, modified to consist of 15 one-handed gestures from its original of 23 one- and two-handed gestures. When used to train the model, validation metrics of 0.90 accuracy and 0.31 loss were returned. The system can control both analogue and digital hardware via GPIO pins and, when recognising a gesture, averages 20.4 frames per second with no observable delay. Full article
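The normalisation step described above (MediaPipe hand coordinates normalised before classification) can be sketched in a few lines. This is a plain-Python stand-in under assumed conventions (wrist-relative translation, max-magnitude scaling), not the paper's actual NumPy pipeline:

```python
def normalise_landmarks(landmarks):
    """Translate hand landmarks so the wrist (landmark 0) is the
    origin, then scale so the largest coordinate magnitude is 1.
    `landmarks` is a list of (x, y) tuples, wrist first.
    (Illustrative stand-in for the NumPy normalisation step.)"""
    wx, wy = landmarks[0]
    rel = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
    return [(x / scale, y / scale) for x, y in rel]

# Toy 3-point "hand": wrist plus two fingertip points.
print(normalise_landmarks([(2, 3), (4, 3), (2, 7)]))
# [(0.0, 0.0), (0.5, 0.0), (0.0, 1.0)]
```

Scale-invariant inputs like these let a small fully connected network generalise across hand sizes and camera distances.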

16 pages, 390 KB  
Article
Association Between Polypharmacy and Self-Reported Hearing Disability: An Observational Study Using ATC Classification and HHIE-S-It Questionnaire
by Francesco Martines, Pietro Salvago, Gianluca Lavanco, Ginevra Malta and Fulvio Plescia
Audiol. Res. 2025, 15(5), 135; https://doi.org/10.3390/audiolres15050135 - 10 Oct 2025
Viewed by 717
Abstract
Background: Hearing loss is today one of the most significant health problems affecting the world’s population. This clinical condition, which manifests particularly in adulthood, can arise from, or be aggravated by, both specific pathologies and the concurrent intake of multiple classes of drugs. Methods: To explore this relationship, the present non-interventional observational study investigated worsening hearing ability in 1651 patients aged between 18 and 99 years. A thorough patient history allowed us to evaluate the pathological profiles, pharmacological profiles, and therapeutic regimens adopted, and to assess their association with self-reported hearing loss, measured by the HHIE-S-It questionnaire. Furthermore, given the presence of multimorbidity, the possible correlation between self-reported hearing loss and specific classes of drugs, categorized using the Anatomical Therapeutic Chemical (ATC) classification system, was evaluated. Results: Patients taking drugs, in both mono- and polytherapy regimens, had greater hearing deficits than patients not taking drugs. An apparent dose–response effect was also observed, in which the risk of moderate to severe impairment progressively increased with the number of drugs taken. Several classes of drugs, particularly those used to treat diseases of the cardiovascular system, as well as drugs for acid-related disorders, were significantly linked to an increased risk of perceived hearing impairment. By contrast, antidiabetic agents appeared to offer a potential protective effect. Conclusion: This study showed that both the number of drugs taken and certain specific categories of drugs can contribute to perceived hearing impairment.
While this evidence highlights the importance of integrating audiological evaluation into the management of patients in polypharmacy, the cross-sectional nature of the design precludes the inference of causality. This evidence still favors safer and more personalized therapeutic strategies. Full article
(This article belongs to the Section Hearing)

14 pages, 1917 KB  
Article
Moroccan Sign Language Recognition with a Sensory Glove Using Artificial Neural Networks
by Hasnae El Khoukhi, Assia Belatik, Imane El Manaa, My Abdelouahed Sabri, Yassine Abouch and Abdellah Aarab
Digital 2025, 5(4), 53; https://doi.org/10.3390/digital5040053 - 8 Oct 2025
Viewed by 1079
Abstract
Every day, countless individuals with hearing or speech disabilities struggle to communicate effectively, as their conditions limit conventional verbal interaction. For them, sign language becomes an essential and often sole tool for expressing thoughts and engaging with others. However, the general public’s limited understanding of sign language poses a major barrier, often resulting in social, educational, and professional exclusion. To bridge this communication gap, the present study proposes a smart wearable glove system designed to translate Arabic sign language (ArSL), especially Moroccan sign language (MSL), into a written alphabet in real time. The glove integrates five MPU6050 motion sensors, one on each finger, capable of capturing detailed motion data, including angular velocity and linear acceleration. These motion signals are processed using an Artificial Neural Network (ANN), implemented directly on a Raspberry Pi Pico through embedded machine learning techniques. A custom dataset comprising labeled gestures corresponding to the MSL alphabet was developed for training the model. Following the training phase, the neural network attained a gesture recognition accuracy of 98%, reflecting strong performance in terms of reliability and classification precision. We developed an affordable and portable glove system aimed at improving daily communication for individuals with hearing impairments in Morocco, contributing to greater inclusivity and improved accessibility. Full article
17 pages, 2255 KB  
Article
Electromyography-Based Sign Language Recognition: A Low-Channel Approach for Classifying Fruit Name Gestures
by Kudratjon Zohirov, Mirjakhon Temirov, Sardor Boykobilov, Golib Berdiev, Feruz Ruziboev, Khojiakbar Egamberdiev, Mamadiyor Sattorov, Gulmira Pardayeva and Kuvonch Madatov
Signals 2025, 6(4), 50; https://doi.org/10.3390/signals6040050 - 25 Sep 2025
Abstract
This paper presents a method for recognizing sign language gestures corresponding to fruit names using electromyography (EMG) signals. The proposed system performs classification with a limited number of EMG channels, aiming to reduce computational complexity while maintaining high recognition accuracy. The dataset (DS) contains EMG signals from 46 hearing-impaired people performing the fruit names apple, pear, apricot, nut, cherry, and raspberry in sign language (SL). Based on this DS, the gestures were classified with five algorithms—Random Forest (RF), k-Nearest Neighbors, Logistic Regression, Support Vector Machine, and neural networks—and the best-performing algorithm was identified. The best result was obtained for the word cherry with the RF algorithm, which achieved 97% accuracy. Full article
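A minimal sketch of the model-comparison step the abstract describes: four of the five listed classifiers are scored with 5-fold cross-validation on synthetic features. The feature set (a handful of per-channel summary statistics) and class geometry are assumptions; real EMG would first need filtering, windowing, and feature extraction, and the paper's own dataset is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for windowed EMG features (e.g., RMS and mean absolute
# value per channel) from a small number of channels; six classes for the
# six fruit signs.
N_CLASSES, N_PER_CLASS, N_FEATS = 6, 40, 8
centers = rng.normal(0, 3, size=(N_CLASSES, N_FEATS))
X = np.vstack([c + rng.normal(0, 0.6, (N_PER_CLASS, N_FEATS)) for c in centers])
y = np.repeat(np.arange(N_CLASSES), N_PER_CLASS)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "kNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
}
# Mean 5-fold cross-validated accuracy per model.
results = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, acc in results.items():
    print(f"{name}: {acc:.2f}")
```

With clean synthetic clusters all models score near 1.0; on real low-channel EMG the spread between algorithms is what motivates a comparison like the paper's.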
(This article belongs to the Special Issue Advances in Signal Detecting and Processing)
14 pages, 775 KB  
Article
Prognostic Significance of Isolated Low-Frequency Hearing Loss: A Longitudinal Audiometric Study
by Junhun Lee, Chul Young Yoon, Jiwon Kim and Young Joon Seo
J. Clin. Med. 2025, 14(19), 6749; https://doi.org/10.3390/jcm14196749 - 24 Sep 2025
Abstract
Background/Objectives: Hearing loss is a prevalent sensory impairment in older adults, linked to reduced quality of life, cognitive decline, and social isolation. While it usually begins in the high-frequency range, some individuals present with isolated low-frequency hearing loss (LFHL). The long-term prognostic implications of such frequency-specific patterns remain unclear. This study aimed to evaluate the risk of long-term hearing deterioration by initial hearing loss type: LFHL, high-frequency hearing loss (HFHL), and combined-frequency hearing loss (CFHL). Methods: We retrospectively analyzed pure-tone audiometry (PTA) data from 10,261 patients who underwent at least two pure-tone audiometry assessments between 2011 and 2022 at a tertiary hospital. Each ear was treated as an independent observation. Hearing loss was defined as a threshold > 20 dB HL at 250, 500, 4000, or 8000 Hz. Participants were classified into normal hearing (NH), LFHL, HFHL, and CFHL groups. The outcome was a final four-frequency pure-tone average (4PTA) ≥ 40 dB HL. Logistic regression adjusted for age and sex was used, with subgroup analyses by follow-up duration. Results: HFHL (OR = 1.66, 95% CI: 1.47–1.89) and CFHL (OR = 2.23, 95% CI: 1.97–2.53) showed significantly higher risks of hearing loss compared with NH. LFHL did not show a significant increase (OR = 0.94, 95% CI: 0.76–1.16). These results were consistent across follow-up durations, with CFHL showing the most extensive deterioration. Conclusion: HFHL is a strong predictor of long-term auditory decline, and risk is further elevated with CFHL. In contrast, isolated LFHL was not associated with increased risk, suggesting relatively favorable outcomes. Frequency-specific classification may aid risk stratification and long-term monitoring strategies. Full article
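The analysis above reports age- and sex-adjusted odds ratios from logistic regression. The sketch below shows how such an adjusted OR is obtained: simulate a cohort where high-frequency loss truly doubles the odds of the outcome, fit a near-unpenalized logistic regression on exposure plus covariates, and exponentiate the exposure coefficient. The simulated effect sizes and covariate structure are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000

# Simulated cohort: age, sex, and a baseline HFHL indicator.
age = rng.normal(60, 10, n)
sex = rng.integers(0, 2, n)
hfhl = rng.integers(0, 2, n)  # 1 = high-frequency hearing loss at baseline

# Simulated outcome (final 4PTA >= 40 dB HL) with a true OR of 2.0 for HFHL.
logit = -2.0 + 0.03 * (age - 60) + 0.1 * sex + np.log(2.0) * hfhl
p = 1.0 / (1.0 + np.exp(-logit))
y = rng.random(n) < p

# Exposure first, then adjustment covariates; large C ~ no regularization,
# so coefficients approximate the maximum-likelihood fit.
X = np.column_stack([hfhl, age, sex])
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)
or_hfhl = np.exp(model.coef_[0][0])
print(f"adjusted OR for HFHL: {or_hfhl:.2f}")
```

The recovered OR should land near the simulated value of 2.0, analogous to the HFHL estimate of 1.66 (95% CI 1.47–1.89) reported in the abstract.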
11 pages, 894 KB  
Article
AI-Based Prediction of Bone Conduction Thresholds Using Air Conduction Audiometry Data
by Chul Young Yoon, Junhun Lee, Jiwon Kim, Sunghwa You, Chanbeom Kwak and Young Joon Seo
J. Clin. Med. 2025, 14(18), 6549; https://doi.org/10.3390/jcm14186549 - 17 Sep 2025
Abstract
Background/Objectives: This study evaluated the feasibility of predicting bone conduction (BC) thresholds and classifying air–bone gap (ABG) status using only air conduction (AC) data obtained from pure tone audiometry (PTA). Methods: A total of 60,718 PTA records from five tertiary hospitals in the Republic of Korea were utilized. Input features included AC thresholds (0.25–8 kHz), age, and sex, while outputs were BC thresholds (0.25–4 kHz) and ABG classification based on 10 dB and 15 dB criteria. Five machine learning models—deep neural network (DNN), long short-term memory (LSTM), bidirectional LSTM (BiLSTM), random forest (RF), and extreme gradient boosting (XGB)—were trained using 5-fold cross-validation with the Synthetic Minority Over-sampling Technique (SMOTE). Model performance was evaluated based on accuracy, sensitivity, precision, and F1 score under ±5 dB and ±10 dB thresholds for BC prediction. Results: LSTM and BiLSTM outperformed DNN in predicting BC thresholds, achieving ~60% accuracy within ±5 dB and ~80% within ±10 dB. For ABG classification, all models performed better with the 10 dB criterion than with the 15 dB criterion. Tree-based models (RF, XGB) achieved the highest classification accuracy (up to 0.512) and precision (up to 0.827). Confidence intervals for all metrics were within ±0.01, indicating stable results. Conclusions: AI models can accurately predict BC thresholds and ABG status using AC data alone. These findings support the integration of AI-driven tools into clinical audiology and telemedicine, particularly for remote screening and diagnosis. Future work should focus on clinical validation and implementation to expand accessibility in hearing care. Full article
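The ±5 dB / ±10 dB accuracy figures above are tolerance-band metrics: a predicted BC threshold counts as correct if it falls within the stated margin of the measured one. A minimal sketch of that metric, with made-up threshold values purely for illustration:

```python
import numpy as np

def tolerance_accuracy(y_true, y_pred, tol_db):
    """Fraction of predicted thresholds within +/- tol_db of the measured ones."""
    diff = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    return float(np.mean(diff <= tol_db))

# Hypothetical measured vs. predicted BC thresholds (dB HL) for five ears.
true_bc = np.array([10, 25, 40, 55, 30])
pred_bc = np.array([12, 33, 38, 50, 29])

acc5 = tolerance_accuracy(true_bc, pred_bc, 5)    # 4 of 5 within +/-5 dB
acc10 = tolerance_accuracy(true_bc, pred_bc, 10)  # all 5 within +/-10 dB
print(f"within +/-5 dB:  {acc5:.1f}")
print(f"within +/-10 dB: {acc10:.1f}")
```

Loosening the tolerance can only raise this metric, which is why the paper's ~60% (±5 dB) versus ~80% (±10 dB) pattern is expected in shape, even though the exact values come from the models.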