Search Results (73)

Search Parameters:
Keywords = auditory encoding

26 pages, 8271 KB  
Article
Enhancing EEG Decoding with Selective Augmentation Integration
by Jianbin Ye, Yanjie Sun, Man Xiao, Bo Liu and Kele Xu
Sensors 2026, 26(2), 399; https://doi.org/10.3390/s26020399 - 8 Jan 2026
Viewed by 195
Abstract
Deep learning holds considerable promise for electroencephalography (EEG) analysis but faces challenges due to scarce and noisy EEG data, and the limited generality of existing data augmentation techniques. To address these issues, we propose an end-to-end EEG augmentation framework with an adaptive mechanism. This approach utilizes contrastive learning to mitigate representational distortions caused by augmentation, thereby strengthening the encoder’s feature learning. A selective augmentation strategy is further incorporated to dynamically determine optimal augmentation combinations based on performance. We also introduce NeuroBrain, a novel neural architecture specifically designed for auditory EEG decoding. It effectively captures both local and global dependencies within EEG signals. Comprehensive evaluations on the SparrKULee and WithMe datasets confirm the superiority of our proposed framework and architecture, demonstrating a 29.42% performance gain over HappyQuokka and a 5.45% accuracy improvement compared to EEGNet. These results validate our method’s efficacy in tackling key challenges in EEG analysis and advancing the state of the art. Full article
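
As a rough illustration of one ingredient this abstract describes (contrastive learning across augmented views of the same EEG epoch), the sketch below pairs a standard NT-Xent contrastive loss with simple noise and amplitude-scaling augmentations. The encoder and augmentation functions are placeholders, and the selective-augmentation search is not shown; this is not the authors' NeuroBrain architecture or framework.

```python
# Minimal sketch, assuming synthetic data and placeholder augmentations (not the paper's code):
# an NT-Xent contrastive loss between two augmented views of the same EEG epoch.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss over a batch of paired embeddings z1, z2 (shape [B, D])."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # 2B x D, unit-norm rows
    sim = z @ z.t() / temperature                                  # cosine similarity logits
    sim = sim.masked_fill(torch.eye(sim.size(0), dtype=torch.bool), float("-inf"))
    B = z1.size(0)
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])    # index of each positive pair
    return F.cross_entropy(sim, targets)

def augment_noise(x, sigma=0.05):                 # additive Gaussian noise
    return x + sigma * torch.randn_like(x)

def augment_scale(x, low=0.9, high=1.1):          # random amplitude scaling per epoch
    return x * torch.empty(x.size(0), 1, 1).uniform_(low, high)

# Toy usage: batch of 8 epochs, 64 channels, 256 samples; a trivial "encoder" stands in.
x = torch.randn(8, 64, 256)
encoder = torch.nn.Flatten()
loss = nt_xent(encoder(augment_noise(x)), encoder(augment_scale(x)))
print(float(loss))
```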

17 pages, 1461 KB  
Article
Semantic Latent Geometry Reveals Imagination–Perception Structure in EEG
by Hossein Ahmadi, Martina Impagnatiello and Luca Mesin
Appl. Sci. 2026, 16(2), 661; https://doi.org/10.3390/app16020661 - 8 Jan 2026
Viewed by 150
Abstract
We investigate whether representation-level, semantic diagnostics expose structure in electroencephalography (EEG) beyond conventional accuracy when contrasting perception and imagination and relating outcomes to self-reported imagery ability. Using a task-independent encoder that preserves scalp topology and temporal dependencies, we learn semantic features from multi-subject, multi-modal EEG (pictorial, orthographic, auditory) and evaluate subject-independent decoding with lightweight heads, achieving state-of-the-art or better accuracy with low variance across subjects. To probe the latent space directly, we introduce threshold-resolved correlation pruning and derive the Semantic Sensitivity Index (SSI) and cross-modal overlap (CMO). While correlations between Vividness of Visual Imagery Questionnaire (VVIQ)/Bucknell Auditory Imagery Scale (BAIS) and leave-one-subject-out (LOSO) accuracy are small and imprecise at n = 12, the semantic diagnostics reveal interpretable geometry: for several subjects, imagination retains a more compact, non-redundant latent subset than perception (positive SSI), and a substantial cross-modal core emerges (CMO ≈ 0.5–0.8). These effects suggest that accuracy alone under-reports cognitive organization in the learned space and that semantic compactness and redundancy patterns capture person-specific phase preferences. Given the small cohort and the subjectivity of questionnaires, the findings argue for semantic, representation-aware evaluation as a necessary complement to accuracy in EEG-based decoding and trait linkage. Full article
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)
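
The paper's exact SSI and CMO definitions are not given in this listing; purely as an illustration of what "threshold-resolved correlation pruning" could look like, the toy sketch below greedily prunes mutually correlated latent dimensions at several thresholds and contrasts the retained sets between two synthetic conditions. The contrast and overlap quantities here are invented proxies, not the paper's indices.

```python
# Illustrative sketch on synthetic data; the pruning rule and the "contrast"/"overlap"
# summaries are assumptions, not the published SSI/CMO formulas.
import numpy as np

def prune_correlated(features, threshold):
    """Greedily keep latent dimensions whose |corr| with every already-kept one is <= threshold."""
    corr = np.abs(np.corrcoef(features, rowvar=False))
    keep = []
    for j in range(features.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
perception = rng.standard_normal((200, 32))                      # trials x latent dims (toy)
imagination = perception @ rng.standard_normal((32, 32)) * 0.5 + rng.standard_normal((200, 32))

for thr in (0.3, 0.5, 0.7):                                       # "threshold-resolved" sweep
    kept_p = set(prune_correlated(perception, thr))
    kept_i = set(prune_correlated(imagination, thr))
    contrast = (len(kept_i) - len(kept_p)) / 32                   # toy compactness contrast
    overlap = len(kept_p & kept_i) / max(len(kept_p | kept_i), 1) # toy cross-condition overlap
    print(f"thr={thr:.1f}  kept(perc)={len(kept_p)}  kept(imag)={len(kept_i)}  "
          f"contrast={contrast:+.2f}  overlap={overlap:.2f}")
```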

25 pages, 6176 KB  
Article
Audiovisual Brain Activity Recognition Based on Symmetric Spatio-Temporal–Frequency Feature Association Vectors
by Yang Xi, Lu Zhang, Chenxue Wu, Bingjie Shi and Cunzhen Li
Symmetry 2025, 17(12), 2175; https://doi.org/10.3390/sym17122175 - 17 Dec 2025
Viewed by 239
Abstract
The neural mechanisms of auditory and visual processing are not only a core research focus in cognitive neuroscience but also hold critical importance for the development of brain–computer interfaces, neurological disease diagnosis, and human–computer interaction technologies. However, EEG-based studies on classifying auditory and visual brain activities largely overlook the in-depth utilization of spatial distribution patterns and frequency-specific characteristics inherent in such activities. This paper proposes an analytical framework that constructs symmetrical spatio-temporal–frequency feature association vectors to represent brain activities by computing EEG microstates across multiple frequency bands and brain functional connectivity networks. We then construct an Adaptive Tensor Fusion Network (ATFN) that leverages these feature association vectors to recognize brain activities related to auditory, visual, and audiovisual processing. The ATFN includes a feature fusion and selection module based on differential feature enhancement, a feature encoding module enhanced with attention mechanisms, and a multilayer-perceptron classifier, enabling efficient recognition of audiovisual brain activities. The results show that the classification accuracy for auditory, visual, and audiovisual brain activity reaches 96.97% using the ATFN, demonstrating that the proposed symmetric spatio-temporal–frequency feature association vectors effectively characterize visual, auditory, and audiovisual brain activities. The symmetrical spatio-temporal–frequency feature association vectors establish a computable mapping that captures the intrinsic correlations among temporal, spatial, and frequency features, offering a more interpretable method to represent brain activities. The proposed ATFN provides an effective recognition framework for brain activity, with potential applications in brain–computer interfaces and neurological disease diagnosis. Full article
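
As background on one ingredient mentioned here, EEG microstate maps are typically obtained by clustering scalp topographies at global field power (GFP) peaks. The sketch below uses plain k-means on synthetic data as a simplified stand-in; real microstate analysis uses polarity-invariant modified k-means, applied per frequency band, and the connectivity and fusion steps of the paper are not reproduced.

```python
# Simplified microstate sketch on synthetic, already band-filtered data (not the authors' pipeline).
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
eeg = rng.standard_normal((32, 5000))           # channels x samples

gfp = eeg.std(axis=0)                            # global field power per sample
peaks, _ = find_peaks(gfp)                       # moments of maximal topographic strength
maps = eeg[:, peaks].T                           # one topography per GFP peak

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(maps)
labels = km.predict(eeg.T)                       # assign every sample to a microstate class

# Simple temporal statistics that could feed a feature-association vector.
coverage = np.bincount(labels, minlength=4) / labels.size
print("microstate coverage:", np.round(coverage, 3))
```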

19 pages, 3468 KB  
Article
Sensory Representation of Neural Networks Using Sound and Color for Medical Imaging Segmentation
by Irenel Lopo Da Silva, Nicolas Francisco Lori and José Manuel Ferreira Machado
J. Imaging 2025, 11(12), 449; https://doi.org/10.3390/jimaging11120449 - 15 Dec 2025
Viewed by 338
Abstract
This paper introduces a novel framework for sensory representation of brain imaging data, combining deep learning-based segmentation with multimodal visual and auditory outputs. Structural magnetic resonance imaging (MRI) predictions are converted into color-coded maps and stereophonic/MIDI sonifications, enabling intuitive interpretation of cortical activation patterns. High-precision U-Net models efficiently generate these outputs, supporting clinical decision-making, cognitive research, and creative applications. Spatial, intensity, and anomalous features are encoded into perceivable visual and auditory cues, facilitating early detection and introducing the concept of “auditory biomarkers” for potential pathological identification. Despite current limitations, including dataset size, absence of clinical validation, and heuristic-based sonification, the pipeline demonstrates technical feasibility and robustness. Future work will focus on clinical user studies, the application of functional MRI (fMRI) time-series for dynamic sonification, and the integration of real-time emotional feedback in cinematic contexts. This multisensory approach offers a promising avenue for enhancing the interpretability of complex neuroimaging data across medical, research, and artistic domains. Full article
(This article belongs to the Section Medical Imaging)
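
The sonification heuristics themselves are not given in the abstract; as a hedged illustration of the general idea, the sketch below maps per-region statistics from a toy segmentation mask to MIDI-style pitch, velocity, and stereo-pan values. All mapping rules and names are illustrative assumptions, not the paper's pipeline.

```python
# Toy sonification mapping: invented rules on synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
mri = rng.random((64, 64))                                            # toy 2-D slice, values in [0, 1]
segmentation = (mri > 0.6).astype(int) + (mri > 0.85).astype(int)     # labels 0 (background), 1, 2

events = []
for label in (1, 2):
    ys, xs = np.nonzero(segmentation == label)
    if xs.size == 0:
        continue
    mean_intensity = mri[ys, xs].mean()
    pitch = int(48 + round(mean_intensity * 36))                       # intensity -> MIDI note 48-84
    velocity = int(40 + 80 * xs.size / segmentation.size)              # larger regions play louder
    pan = xs.mean() / segmentation.shape[1]                            # left/right position of region
    events.append({"label": label, "note": pitch, "velocity": velocity, "pan": round(float(pan), 2)})

print(events)
```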

18 pages, 916 KB  
Article
Real-Time Electroencephalography-Guided Binaural Beat Audio Enhances Relaxation and Cognitive Performance: A Randomized, Double-Blind, Sham-Controlled Repeated-Measures Crossover Trial
by Chanaka N. Kahathuduwa, Jessica Blume, Chinnadurai Mani and Chathurika S. Dhanasekara
Physiologia 2025, 5(4), 44; https://doi.org/10.3390/physiologia5040044 - 24 Oct 2025
Viewed by 5391
Abstract
Background/Objectives: Binaural beat audio has gained popularity as a non-invasive tool to promote relaxation and enhance cognitive performance, though empirical support has been inconsistent. We developed a novel algorithm integrating real-time electroencephalography (EEG) feedback to dynamically tailor binaural beats to induce relaxed brain states. This study aimed to examine the efficacy and feasibility of this algorithm in a clinical trial. Methods: In a randomized, double-blinded, sham-controlled crossover trial, 25 healthy adults completed two 30 min sessions (EEG-guided intervention versus sham). EEG (Fp1) was recorded using a consumer-grade single-electrode headset, with auditory stimulation adjusted in real time based on EEG data. Outcomes included EEG frequency profiles, stop signal reaction time (SSRT), and novelty encoding task performance. Results: The intervention rapidly reduced dominant EEG frequency in all participants, with 100% achieving <8 Hz and 96% achieving <4 Hz within a median of 7.4 and 9.0 min, respectively. Compared to the sham, the intervention was associated with a faster novelty encoding reaction time (p = 0.039, dz = −0.225) and trends towards improved SSRT (p = 0.098, dz = −0.209), increased boundary separation in stop trials (p = 0.065, dz = 0.350), and improved inhibitory drift rate (p = 0.067, dz = 0.452) within the limits of the exploratory nature of these findings. Twenty-four (96%) participants reached a target level of <4 Hz with the intervention, while none reached this level with the sham. Conclusions: Real-time EEG-guided binaural beats may rapidly induce low-frequency brain states while potentially preserving or enhancing aspects of executive function. These findings support the feasibility of personalized, closed-loop auditory entrainment for promoting “relaxed alertness.” The results are preliminary and hypothesis-generating, warranting larger, multi-channel EEG studies in ecologically valid contexts. Full article
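
To make the closed-loop idea concrete, here is a minimal sketch of estimating the dominant EEG frequency from a short single-channel window (Welch PSD) and stepping the binaural-beat offset toward a low-frequency target. The sampling rate, the entrainment rule, and the 4 Hz target are assumptions for illustration, not the trial's algorithm.

```python
# Minimal closed-loop sketch on simulated data; parameters and the update rule are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256                                            # assumed headset sampling rate (Hz)

def dominant_frequency(window, fs=FS, fmin=1.0, fmax=30.0):
    """Return the frequency with maximal Welch power within [fmin, fmax]."""
    freqs, psd = welch(window, fs=fs, nperseg=fs * 2)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(psd[band])]

def next_beat_frequency(dominant, target=4.0, step=1.0):
    """Place the beat a small step below the measured dominant frequency,
    never below the low-frequency target (illustrative rule, not the trial's)."""
    return max(target, dominant - step)

rng = np.random.default_rng(3)
t = np.arange(0, 4, 1 / FS)
window = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)   # fake 10 Hz alpha
f_dom = dominant_frequency(window)
print("dominant approx", round(float(f_dom), 1), "Hz; next beat:",
      round(float(next_beat_frequency(f_dom)), 2), "Hz")
```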

17 pages, 5644 KB  
Article
Mutation Spectrum of GJB2 in Taiwanese Patients with Sensorineural Hearing Loss: Prevalence, Pathogenicity, and Clinical Implications
by Yi-Feng Lin, Che-Hong Chen, Chang-Yin Lee, Hung-Ching Lin and Yi-Chao Hsu
Int. J. Mol. Sci. 2025, 26(17), 8213; https://doi.org/10.3390/ijms26178213 - 24 Aug 2025
Viewed by 3790
Abstract
Hearing loss is often caused by genetic and environmental factors, with inherited mutations responsible for 50–60% of cases. The GJB2 gene, encoding connexin 26, is a major contributor to nonsyndromic sensorineural hearing loss (NSHL) due to its role in cellular communication critical for auditory function. In Taiwan, common deafness-associated genes include GJB2, SLC26A4, OTOF, MYO15A, and MTRNR1, which are similar to those found in other populations. GJB2 mutations are the most common pathogenic variants, and the hearing level in children with GJB2 p.V37I/p.V37I or p.V37I/c.235delC is estimated to deteriorate at approximately 1 decibel hearing level (dB HL) per year. We identified another common mutation in the Taiwan Biobank, GJB2 p.I203T; individuals carrying this mutation experienced more severe hearing loss, suggesting a synergistic effect of these mutations on auditory impairment, and we therefore recommend whole-gene GJB2 screening for clinical management and prevention strategies in Taiwan. This study used data from the Taiwan Biobank to analyze allele frequencies of GJB2 gene variants. Predictive software (PolyPhen-2 version 2.2, SIFT for missense variants 6.2.1, MutationTaster Ensembl 112 and AlphaMissense CC BY-NC-SA 4.0) assessed the pathogenicity of specific mutations. Additionally, 82 unrelated NSHL patients were screened for mutations in these genes using PCR and DNA sequencing. The study explored the correlation between genetic mutations and the severity of hearing loss in patients. Several common GJB2 mutation sites were identified from the Taiwan Biobank, including GJB2 p.V37I (7.7%), GJB2 p.I203T (6%), GJB2 p.V27I (31%), and GJB2 p.E114G (22%). Bioinformatics analysis classified GJB2 p.I203T as pathogenic, while GJB2 p.V27I and GJB2 p.E114G were considered polymorphisms. Patients with the GJB2 p.I203T mutation experienced more severe hearing loss, underscoring this variant's potential contribution to auditory impairment. The mutation patterns of GJB2 in the Taiwanese population are similar to those in other East Asian regions. Although GJB2 mutations represent the predominant genetic cause of hereditary hearing loss, the corresponding mutant proteins exhibit detectable aggregation, particularly at cell–cell junctions, suggesting at least partial trafficking to the plasma membrane. Genetic screening for these mutations, especially GJB2 p.I203T (6%), GJB2 p.V27I (31%), and GJB2 p.E114G (22%), is essential for the effective diagnosis and management of NSHL in Taiwan. These findings have significant clinical and public health implications for the development of preventive and therapeutic strategies. Full article
(This article belongs to the Special Issue Hearing Loss: Recent Progress in Molecular Genomics)

29 pages, 2426 KB  
Review
Transmembrane Protein 43: Molecular and Pathogenetic Implications in Arrhythmogenic Cardiomyopathy and Various Other Diseases
by Buyan-Ochir Orgil, Mekaea S. Spaulding, Harrison P. Smith, Zainab Baba, Neely R. Alberson, Enkhzul Batsaikhan, Jeffrey A. Towbin and Enkhsaikhan Purevjav
Int. J. Mol. Sci. 2025, 26(14), 6856; https://doi.org/10.3390/ijms26146856 - 17 Jul 2025
Cited by 1 | Viewed by 1809
Abstract
Transmembrane protein 43 (TMEM43 or LUMA) encodes a highly conserved protein found in the nuclear and endoplasmic reticulum membranes of many cell types and the intercalated discs and adherens junctions of cardiac myocytes. TMEM43 is involved in facilitating intra/extracellular signal transduction to the nucleus via the linker of the nucleoskeleton and cytoskeleton complex. Genetic mutations may result in reduced TMEM43 expression and altered TMEM43 protein cellular localization, resulting in impaired cell polarization, intracellular force transmission, and cell–cell connections. The p.S358L mutation causes arrhythmogenic right ventricular cardiomyopathy type-5 and is associated with increased absorption of lipids, fatty acids, and cholesterol in the mouse small intestine, which may promote fibro-fatty replacement of cardiac myocytes. Mutations (p.E85K and p.I91V) have been identified in patients with Emery–Dreifuss Muscular Dystrophy-related myopathies. Other mutations also lead to auditory neuropathy spectrum disorder-associated hearing loss and have a negative association with cancer progression and tumor cell survival. This review explores the pathogenesis of TMEM43 mutation-associated diseases in humans, highlighting animal and in vitro studies that describe the molecular details of disease processes and clinical, histologic, and molecular manifestations. Additionally, we discuss TMEM43 expression-related conditions and how each disease may progress to severe and life-threatening states. Full article

15 pages, 2337 KB  
Article
Is It About Speech or About Prediction? Testing Between Two Accounts of the Rhythm–Reading Link
by Susana Silva, Ana Rita Batista, Nathércia Lima Torres, José Sousa, Aikaterini Liapi, Styliani Bairami and Vasiliki Folia
Brain Sci. 2025, 15(6), 642; https://doi.org/10.3390/brainsci15060642 - 14 Jun 2025
Viewed by 1261
Abstract
Background/Objectives: The mechanisms underlying the positive association between reading and rhythmic skills remain unclear. Our goal was to systematically test between two major explanations: the Temporal Sampling Framework (TSF), which highlights the relation between rhythm and speech encoding, and a competing explanation based on rhythm’s role in enhancing prediction within visual and auditory sequences. Methods: We compared beat versus duration perception for their associations with encoding and sequence learning (prediction-related) tasks, using both visual and auditory sequences. We also compared these associations for Portuguese vs. Greek participants, since Portuguese stress-timed rhythm is more compatible with music-like beats lasting around 500 ms, in contrast to the syllable-timed rhythm of Greek. If rhythm acts via speech encoding, its effects should be more salient in Portuguese. Results: Consistent with the TSF’s predictions, we found a significant association between beat perception and auditory encoding in Portuguese but not in Greek participants. Correlations between time perception and sequence learning in both modalities were either null or insufficiently supported in both groups. Conclusions: Altogether, the evidence supported the TSF-related predictions over the Rhythm-as-Predictor (RaP) hypothesis. Full article
(This article belongs to the Section Neurolinguistics)

23 pages, 1843 KB  
Article
Fish Oil Supplementation Attenuates Offspring’s Neurodevelopmental Changes Induced by a Maternal High-Fat Diet in a Rat Model
by Yasna Muñoz, Heidy Kaune, Alexies Dagnino-Subiabre, Gonzalo Cruz, Jorge Toledo, Rodrigo Valenzuela, Renato Moraga, Luis Tabilo, Cristian Flores, Alfredo Muñoz, Nicolás Crisosto, Juan F. Montiel and Manuel Maliqueo
Nutrients 2025, 17(10), 1741; https://doi.org/10.3390/nu17101741 - 21 May 2025
Cited by 2 | Viewed by 2536
Abstract
Background/Objectives: A maternal high-fat diet (HFD) impairs brain structure in offspring. In turn, fish oil (FO) rich in n-3 polyunsaturated fatty acids (PUFAs) has neuroprotective effects. Therefore, we investigated whether maternal HFD exposure affected the neurological reflexes, neuron morphology, and n-3 PUFA levels in the cerebral cortex of the offspring and whether these effects were mitigated by maternal FO consumption. Methods: Female Sprague Dawley rats received a control diet (CD, 10% Kcal fat) or HFD (45% Kcal fat) five weeks before mating and throughout pregnancy and lactation. From mating, a subgroup of HFD was supplemented with 11.4% FO into the diet (HFD-FO). Neurological reflexes were evaluated from postnatal day (PND) 3 until PND20. Brains were removed at PND22 for neuron morphology analysis. Moreover, fatty acid composition and transcripts of genes encoding for factors associated with synapse transmission (SNAP-25), plasticity (BDNF), transport of DHA (MFSD2a), and inflammation (NF-κB and IL-1β) were quantified in prefrontal, motor, and auditory cortices. Results: FO diminished the effects of HFD on the number of thin and mushroom-shaped dendritic spines in the cerebral cortex in both sexes. It also reversed the HFD effects on the motor and auditory reflexes in female and male offspring, respectively. In males, FO up-regulated Bdnf transcript levels in the motor cortex compared with CD and HFD. In females, n-3 PUFAs were higher in HFD and HFD-FO than in CD in the auditory cortex. Conclusions: Our results highlight the protective role of maternal dietary n-3 PUFAs in counteracting the effects induced by HFD on the acquisition of neurological reflexes and neuronal morphology in the cerebral cortex of the offspring of both sexes. Full article
(This article belongs to the Special Issue Dietary Fatty Acids and Metabolic Health)

18 pages, 4885 KB  
Article
Decoding Poultry Welfare from Sound—A Machine Learning Framework for Non-Invasive Acoustic Monitoring
by Venkatraman Manikandan and Suresh Neethirajan
Sensors 2025, 25(9), 2912; https://doi.org/10.3390/s25092912 - 5 May 2025
Cited by 6 | Viewed by 4195
Abstract
Acoustic monitoring presents a promising, non-invasive modality for assessing animal welfare in precision livestock farming. In poultry, vocalizations encode biologically relevant cues linked to health status, behavioral states, and environmental stress. This study proposes an integrated analytical framework that combines signal-level statistical analysis with machine learning and deep learning classifiers to interpret chicken vocalizations in a welfare assessment context. The framework was evaluated using three complementary datasets encompassing health-related vocalizations, behavioral call types, and stress-induced acoustic responses. The pipeline employs a multistage process comprising high-fidelity signal acquisition, feature extraction (e.g., mel-frequency cepstral coefficients, spectral contrast, zero-crossing rate), and classification using models including Random Forest, HistGradientBoosting, CatBoost, TabNet, and LSTM. Feature importance analysis and statistical tests (e.g., t-tests, correlation metrics) confirmed that specific MFCC bands and spectral descriptors were significantly associated with welfare indicators. LSTM-based temporal modeling revealed distinct acoustic trajectories under visual and auditory stress, supporting the presence of habituation and stressor-specific vocal adaptations over time. Model performance, validated through stratified cross-validation and multiple statistical metrics (e.g., F1-score, Matthews correlation coefficient), demonstrated high classification accuracy and generalizability. Importantly, the approach emphasizes model interpretability, facilitating alignment with known physiological and behavioral processes in poultry. The findings underscore the potential of acoustic sensing and interpretable AI as scalable, biologically grounded tools for real-time poultry welfare monitoring, contributing to the advancement of sustainable and ethical livestock production systems. Full article
(This article belongs to the Special Issue Sensors in 2025)
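
As a compact illustration of the feature-plus-classifier pipeline described (MFCCs, spectral contrast, and zero-crossing rate feeding a Random Forest), the sketch below runs on synthetic tones that stand in for vocalization clips. File handling, the other classifiers (HistGradientBoosting, CatBoost, TabNet, LSTM), and the statistical validation are omitted, and the labels are placeholders.

```python
# Sketch of an acoustic-feature pipeline on synthetic clips; not the authors' dataset or models.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

SR = 22050

def clip_features(y, sr=SR):
    """Summarize frame-wise MFCC, spectral-contrast, and ZCR features by mean and std."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    zcr = librosa.feature.zero_crossing_rate(y)
    parts = [f.mean(axis=1) for f in (mfcc, contrast, zcr)] + \
            [f.std(axis=1) for f in (mfcc, contrast, zcr)]
    return np.concatenate(parts)

rng = np.random.default_rng(4)
def fake_clip(f0):                                  # 1-second tone + noise stands in for a call
    t = np.arange(0, 1.0, 1 / SR)
    return np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)

X = np.stack([clip_features(fake_clip(f)) for f in (400, 420, 900, 950, 410, 930)])
y = np.array([0, 0, 1, 1, 0, 1])                    # 0 = "baseline", 1 = "stress-like" (toy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```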

26 pages, 15804 KB  
Article
Acoustic Event Detection in Vehicles: A Multi-Label Classification Approach
by Anaswara Antony, Wolfgang Theimer, Giovanni Grossetti and Christoph M. Friedrich
Sensors 2025, 25(8), 2591; https://doi.org/10.3390/s25082591 - 19 Apr 2025
Viewed by 2376
Abstract
Autonomous driving technologies for environmental perception are mostly based on visual cues obtained from sensors like cameras, RADAR, or LiDAR. They capture the environment as if seen through “human eyes”. If this visual information is complemented with auditory information, thereby also providing “ears”, driverless cars can become more reliable and safer. In this paper, an Acoustic Event Detection model is presented that can detect various acoustic events in an automotive context along with their time of occurrence to create an audio scene description. The proposed detection methodology uses the pre-trained network Bidirectional Encoder representation from Audio Transformers (BEATs) and a single-layer neural network trained on the database of real audio recordings collected from different cars. The performance of the model is evaluated for different parameters and datasets. The segment-based results for a duration of 1 s show that the model performs well for 11 sound classes with a mean accuracy of 0.93 and F1-Score of 0.39 for a confidence threshold of 0.5. The threshold-independent metric mAP has a value of 0.77. The model also performs well for sound mixtures containing two overlapping events with mean accuracy, F1-Score, and mAP equal to 0.89, 0.42, and 0.658, respectively. Full article
(This article belongs to the Section Vehicular Sensing)
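
The evaluation side of such a system (a 0.5-confidence threshold for F1 plus threshold-independent mAP over multi-label segments) can be sketched as below. The embeddings are random placeholders standing in for pre-trained features such as BEATs, and the single-layer head is approximated with one-vs-rest logistic regression; none of this reproduces the paper's data or model.

```python
# Hedged evaluation sketch on synthetic embeddings and toy multi-label targets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score, average_precision_score

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 64))               # 300 one-second segments x 64-dim embeddings
Y = (rng.random((300, 4)) < 0.3).astype(int)     # 4 possibly overlapping event classes (toy)

head = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X[:200], Y[:200])
scores = head.predict_proba(X[200:])             # per-class probabilities per segment
pred = (scores >= 0.5).astype(int)               # confidence threshold of 0.5

print("macro F1 @0.5:", round(f1_score(Y[200:], pred, average="macro", zero_division=0), 3))
print("mAP:", round(average_precision_score(Y[200:], scores, average="macro"), 3))
```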

18 pages, 2560 KB  
Article
Exploring Vibrotactile Displays to Support Hazard Awareness in Multitasking Control Tasks for Heavy Machinery Work
by S. M. Ashif Hossain, Allen Yin and Thomas K. Ferris
Safety 2025, 11(1), 26; https://doi.org/10.3390/safety11010026 - 11 Mar 2025
Cited by 4 | Viewed by 2356
Abstract
(1) Background: The safe execution of heavy machinery operations and high-risk construction tasks requires operators to manage multiple tasks, with a constant awareness of coworkers and hazards. With high demands on visual and auditory resources, vibrotactile feedback systems offer a solution to enhance awareness without overburdening vision or hearing. (2) Aim: This study evaluates the impact of vibrotactile feedback regarding proximity to hazards on multitasking performance and cognitive workload in order to support hazard awareness in a controlled task environment. (3) Method: Twenty-four participants performed a joystick-controlled navigation task and a concurrent mental spatial rotation task. Proximity to hazards in the navigation task was conveyed via different encodings of vibrotactile feedback: No Vibration, Intensity-Modulation, Pulse Duration, and Pulse Spacing. Performance metrics, including obstacle collisions, target hits, contact time, and accuracy, were assessed alongside perceived workload. (4) Results: Intensity-Modulated feedback reduced obstacle collisions and proximity time, while lowering workload, compared to No Vibration. No significant effects were found on spatial rotation accuracy, indicating that vibrotactile feedback effectively guides navigation and supports spatial awareness. (5) Conclusions: This study highlights the potential of vibrotactile feedback to improve navigation performance and hazard awareness, offering valuable insights into multimodal safety systems in high-demand environments. Full article

13 pages, 1754 KB  
Article
Cross-Modal Interactions and Movement-Related Tactile Gating: The Role of Vision
by Maria Casado-Palacios, Alessia Tonelli, Claudio Campus and Monica Gori
Brain Sci. 2025, 15(3), 288; https://doi.org/10.3390/brainsci15030288 - 8 Mar 2025
Cited by 1 | Viewed by 2293
Abstract
Background: When engaging with the environment, multisensory cues interact and are integrated to create a coherent representation of the world around us, a process that has been suggested to be affected by the lack of visual feedback in blind individuals. In addition, the presence of voluntary movement can be responsible for suppressing somatosensory information processed by the cortex, which might lead to a worse encoding of tactile information. Objectives: In this work, we aim to explore how cross-modal interaction can be affected by active movements and the role of vision in this process. Methods: To this end, we measured the precision of 18 blind individuals and 18 age-matched sighted controls in a velocity discrimination task. The participants were instructed to detect the faster stimulus between a sequence of two in both passive and active touch conditions. The sensory stimulation could be either just tactile or audio–tactile, where a non-informative sound co-occurred with the tactile stimulation. The measure of precision was obtained by computing the just noticeable difference (JND) of each participant. Results: The results show worse precision with the audio–tactile sensory stimulation in the active condition for the sighted group (p = 0.046) but not for the blind one (p = 0.513). For blind participants, only the movement itself had an effect. Conclusions: For sighted individuals, the presence of noise from active touch made them vulnerable to auditory interference. However, the blind group exhibited less sensory interaction, experiencing only the detrimental effect of movement. Our work should be considered when developing next-generation haptic devices. Full article
(This article belongs to the Special Issue Multisensory Perception of the Body and Its Movement)
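
For readers unfamiliar with the just noticeable difference (JND) measure used here, the sketch below fits a cumulative-Gaussian psychometric function to synthetic "judged faster" proportions and reads off a JND. The 75%-point convention and the data are illustrative assumptions, not the study's analysis.

```python
# JND sketch on synthetic psychometric data; conventions for the JND differ across studies.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: proportion of 'comparison faster' responses."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Velocity difference of comparison vs. standard (cm/s) and proportion judged "faster" (toy data).
dv = np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4], dtype=float)
p_faster = np.array([0.02, 0.05, 0.15, 0.35, 0.50, 0.68, 0.85, 0.95, 0.98])

(mu, sigma), _ = curve_fit(psychometric, dv, p_faster, p0=(0.0, 1.0))
jnd = norm.ppf(0.75, loc=mu, scale=sigma) - mu       # 75%-point convention
print(f"PSE = {mu:.2f} cm/s, JND = {jnd:.2f} cm/s")
```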

15 pages, 4108 KB  
Article
Vocal Emotion Perception and Musicality—Insights from EEG Decoding
by Johannes M. Lehnen, Stefan R. Schweinberger and Christine Nussbaum
Sensors 2025, 25(6), 1669; https://doi.org/10.3390/s25061669 - 8 Mar 2025
Cited by 2 | Viewed by 2228
Abstract
Musicians have an advantage in recognizing vocal emotions compared to non-musicians, a performance advantage often attributed to enhanced early auditory sensitivity to pitch. Yet a previous ERP study only detected group differences from 500 ms onward, suggesting that conventional ERP analyses might not be sensitive enough to detect early neural effects. To address this, we re-analyzed EEG data from 38 musicians and 39 non-musicians engaged in a vocal emotion perception task. Stimuli were generated using parameter-specific voice morphing to preserve emotional cues in either the pitch contour (F0) or timbre. By employing a neural decoding framework with a Linear Discriminant Analysis classifier, we tracked the evolution of emotion representations over time in the EEG signal. Converging with the previous ERP study, our findings reveal that musicians—but not non-musicians—exhibited significant emotion decoding between 500 and 900 ms after stimulus onset, a pattern observed for F0-Morphs only. These results suggest that musicians’ superior vocal emotion recognition arises from more effective integration of pitch information during later processing stages rather than from enhanced early sensory encoding. Our study also demonstrates the potential of neural decoding approaches using EEG brain activity as a biological sensor for unraveling the temporal dynamics of voice perception. Full article
(This article belongs to the Special Issue Sensing Technologies in Neuroscience and Brain Research)
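
A minimal sketch of time-resolved decoding with a Linear Discriminant Analysis classifier, in the spirit of the framework described, is shown below on simulated trials: at each time point the spatial pattern across channels is classified and accuracy is tracked over the epoch. Channel counts, the injected late "effect", and the cross-validation settings are placeholders, not the authors' pipeline.

```python
# Time-resolved LDA decoding sketch on simulated epochs (not the study's data or code).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_channels, n_times = 120, 32, 100
X = rng.standard_normal((n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)                 # two emotion categories (toy labels)
X[y == 1, :5, 50:80] += 0.5                          # inject a late "effect" on 5 channels

accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding:", round(accuracy.max(), 3), "at sample", int(accuracy.argmax()))
```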

21 pages, 2146 KB  
Perspective
Preclinical Models to Study the Molecular Pathophysiology of Meniere’s Disease: A Pathway to Gene Therapy
by Prathamesh T. Nadar-Ponniah and Jose A. Lopez-Escamez
J. Clin. Med. 2025, 14(5), 1427; https://doi.org/10.3390/jcm14051427 - 20 Feb 2025
Cited by 2 | Viewed by 2657
Abstract
Background: Meniere’s disease (MD) is a set of rare disorders that affects >4 million people worldwide. Individuals with MD suffer from episodes of vertigo associated with fluctuating sensorineural hearing loss and tinnitus. Hearing loss can involve one or both ears. Over 10% of the reported cases are observed in families, suggesting a significant genetic contribution. The condition is polygenic with >20 genes, and several patterns of inheritance have been reported, including autosomal dominant, autosomal recessive, and digenic inheritance across multiple MD families. Preclinical research using animal models has been an indispensable tool for studying the neurophysiology of the auditory and vestibular systems and for better understanding the functional role of genes involved in hearing and vestibular dysfunction. While mouse models are the most commonly used preclinical model, this review analyzes alternative animal and non-animal models that can be used to study MD genes. Methods: A literature search of the 21 genes reported for familial MD and the preclinical models used to investigate their functional role was performed. Results: Comparing the homology of proteins encoded by these genes to other model organisms revealed Drosophila and zebrafish as cost-effective models to screen multiple genes and study the pathophysiology of MD. Conclusions: Murine models are preferred for a quantitative neurophysiological assessment of hearing and vestibular functions to develop drug or gene therapy. Full article
(This article belongs to the Special Issue Recent Developments in Hearing and Balance Disorders: 2nd Edition)
