Search Results (54)

Search Parameters:
Keywords = auditory masking

16 pages, 4487 KB  
Article
A Modeling Approach to Aggregated Noise Effects of Offshore Wind Farms in the Canary and North Seas
by Ion Urtiaga-Chasco and Alonso Hernández-Guerra
J. Mar. Sci. Eng. 2026, 14(1), 2; https://doi.org/10.3390/jmse14010002 - 19 Dec 2025
Viewed by 450
Abstract
Offshore wind farms (OWFs) represent an increasingly important renewable energy source, yet their environmental impacts, particularly underwater noise, require systematic study. Estimating the operational source level (SL) of a single turbine and predicting sound pressure levels (SPLs) at sensitive locations can be challenging. Here, we integrate a turbine SL prediction algorithm with open-source propagation models in a Jupyter Notebook (version 7.4.7) to streamline aggregated SPL estimation for OWFs. Species-specific audiograms and weighting functions are included to assess potential biological impacts. The tool is applied to four planned OWFs, two in the Canary region and two in the Belgian and German North Seas, under conservative assumptions. Results indicate that at 10 m/s wind speed, a single turbine’s SL reaches 143 dB re 1 µPa in the one-third octave band centered at 160 Hz. Sensitivity analyses indicate that variations in wind speed can cause the operational source level at 160 Hz to increase by up to approximately 2 dB re 1 µPa2/Hz from the nominal value used in this study, while differences in sediment type can lead to transmission loss variations ranging from 0 to on the order of 100 dB, depending on bathymetry and range. Maximum SPLs of 112 dB re 1 µPa are predicted within OWFs, decreasing to ~50 dB re 1 µPa at ~100 km. Within OWFs, Low-Frequency (LF) cetaceans and Phocid Carnivores in Water (PCW) would likely perceive the noise; National Marine Fisheries Service (NMFS) marine mammals’ auditory-injury thresholds are not exceeded, but behavioral-harassment thresholds may be crossed. Outside the farms, only LF audiograms are crossed. In high-traffic North Sea regions, OWF noise is largely masked, whereas in lower-noise areas, such as the Canary Islands, it can exceed ambient levels, highlighting the importance of site-specific assessments, accurate ambient noise monitoring and propagation modeling for ecological impact evaluation. Full article
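
As a rough illustration of the aggregation step described in this abstract, the sketch below sums the received levels from a hypothetical grid of turbines on an energy basis, using simple spherical spreading in place of the open-source propagation models the authors employ. The turbine layout, receiver position, absorption term, and source level are assumptions for illustration only, not values from the study.

```python
import numpy as np

def transmission_loss(range_m, alpha_db_per_km=0.0):
    # Spherical spreading plus optional linear absorption; a crude stand-in
    # for the open-source propagation models referenced in the abstract.
    return 20.0 * np.log10(np.maximum(range_m, 1.0)) + alpha_db_per_km * range_m / 1000.0

def aggregated_spl(source_level_db, turbine_xy_m, receiver_xy_m, alpha_db_per_km=0.0):
    # Energy (incoherent) summation of the levels received from each turbine.
    ranges = np.linalg.norm(turbine_xy_m - receiver_xy_m, axis=1)
    received = source_level_db - transmission_loss(ranges, alpha_db_per_km)
    return 10.0 * np.log10(np.sum(10.0 ** (received / 10.0)))

# Hypothetical 3 x 3 farm with 1 km spacing; SL = 143 dB re 1 uPa (160 Hz band)
turbines = 1000.0 * np.array([(x, y) for x in range(3) for y in range(3)], dtype=float)
receiver = np.array([50_000.0, 0.0])  # receiver roughly 50 km from the farm
print(f"Aggregated SPL: {aggregated_spl(143.0, turbines, receiver):.1f} dB re 1 uPa")
```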

17 pages, 881 KB  
Article
Electrophysiological Evidence of Early Auditory Dysfunction in Personal Listening Device Users: Insights from ABR with Ipsilateral Masking
by A. P. Divya, Praveen Prakash, Sreeraj Konadath, Reesha Oovattil Hussain, Vijaya Kumar Narne and Sunil Kumar Ravi
Diagnostics 2025, 15(21), 2672; https://doi.org/10.3390/diagnostics15212672 - 23 Oct 2025
Viewed by 724
Abstract
Background: Recreational noise exposure from personal listening devices (PLDs) may lead to hidden hearing loss (HHL), affecting auditory nerve function despite normal pure-tone audiometry (PTA) and otoacoustic emissions (OAE). Subclinical auditory damage at the synaptic level often goes undetected by conventional assessments, emphasizing the need for more sensitive measures. We recorded click ABRs in the presence of various levels of ipsilateral maskers for better identification of auditory damage at the synaptic level; these results could help to develop a better objective diagnostic tool that can detect hidden hearing loss. Objective: To examine the effects of PLD usage on extended high-frequency audiometric thresholds and on click-evoked auditory brainstem responses (ABR) with and without ipsilateral masking in individuals with normal hearing. Materials and Methods: Thirty-five young adults aged 18–35 years (18 PLD users, 17 controls) with clinically normal hearing were recruited. Extended high-frequency audiometry (EHFA) was conducted from 9 to 16 kHz. Click-evoked ABRs were recorded at 80 dB nHL under unmasked and ipsilateral broadband noise-masked conditions at 50, 60, and 70 dB SPL. ABR analyses included absolute and relative amplitudes (V/I) and latencies of waves I, III, and V. Results: PLD users demonstrated significantly elevated extended high-frequency thresholds compared to controls. ABR analyses revealed reduced Wave I amplitudes across stimulus conditions in PLD users, while Wave V amplitudes were largely preserved, resulting in consistently higher V/I amplitude ratios under masked conditions. No group differences were observed for Wave III amplitudes or absolute/interpeak latencies, except for a modest prolongation of I–III latency at one masker level in PLD users. Conclusions: Conventional audiological tests may not detect early auditory damage; however, extended high-frequency audiometry and ABR with ipsilateral masking demonstrate greater sensitivity in identifying noise-induced functional changes within the auditory brainstem pathways. Full article

13 pages, 1420 KB  
Article
Comparison of Prototype Transparent Mask, Opaque Mask, and No Mask on Speech Understanding in Noise
by Samuel R. Atcherson, Evan T. Finley and Jeanne Hahne
Audiol. Res. 2025, 15(4), 103; https://doi.org/10.3390/audiolres15040103 - 11 Aug 2025
Viewed by 1488
Abstract
Background: Face masks are used in healthcare for the prevention of the spread of disease; however, the recent COVID-19 pandemic raised awareness of the challenges of typical opaque masks that obscure nonverbal cues. In addition, various masks have been shown to attenuate speech above 1000 Hz, and the lack of nonverbal cues further degrades speech understanding in the presence of background noise. Transparent masks can help to overcome the loss of nonverbal cues, but they have greater attenuative effects on higher speech frequencies. This study evaluated a newer prototype transparent face mask redesigned from a version evaluated in a previous study. Methods: Thirty participants (10 with normal hearing, 10 with moderate hearing loss, and 10 with severe-to-profound hearing loss) were recruited. Selected lists from the Connected Speech Test (CST) were digitally recorded using male and female talkers and presented to listeners at 65 dB HL in 12 conditions against a background of 4-talker babble (+5 dB SNR): without a mask (auditory only and audiovisual), with an opaque mask (auditory only and audiovisual), and with a transparent mask (auditory only and audiovisual). Results: Listeners with normal hearing performed consistently well across all conditions. For listeners with hearing loss, speech was generally easier to understand with the male talker. Audiovisual conditions were better than auditory-only conditions, and No Mask and Transparent Mask conditions were better than Opaque Mask conditions. Conclusions: These findings continue to support the use of transparent masks to improve communication, minimize medical errors, and increase patient satisfaction. Full article
(This article belongs to the Section Hearing)

16 pages, 634 KB  
Review
White Noise Exemplifies the Constrained Disorder Principle-Based Concept of Overcoming Malfunctions
by Sagit Stern Shavit and Yaron Ilan
Appl. Sci. 2025, 15(16), 8769; https://doi.org/10.3390/app15168769 - 8 Aug 2025
Viewed by 3934
Abstract
The Constrained Disorder Principle (CDP) characterizes systems by their inherent variability, which is regulated within dynamic boundaries to ensure optimal function and adaptability. In biological systems, this variability, or “noise”, is crucial for resilience and flexibility at various scales, ranging from genes and cells to more complex organ systems. Disruption of the boundaries that control this noise—whether through amplification or suppression—can lead to malfunctions and result in pathological conditions. White noise (WN), defined by equal intensity across all audible frequencies, is an exemplary clinical application of the CDP. It has been shown to stabilize disrupted processes and restore functional states by utilizing its stochastic properties within the auditory system. This paper explores WN-based therapies, specifically for the masking, habituation, and alleviation of tinnitus, a subjective perception of sound. It describes the potential to improve WN-based therapies’ effectiveness by applying the CDP and CDP-based second-generation artificial intelligence systems. Understanding the characteristics and limitations of these approaches is essential for their effective implementation across various fields. Full article

22 pages, 4121 KB  
Article
An Integrated Spatial-Spectral Denoising Framework for Robust Electrically Evoked Compound Action Potential Enhancement and Auditory Parameter Estimation
by Fan-Jie Kung
Sensors 2025, 25(11), 3523; https://doi.org/10.3390/s25113523 - 3 Jun 2025
Viewed by 764
Abstract
The electrically evoked compound action potential (ECAP) is a crucial physiological signal used by clinicians to evaluate auditory nerve functionality. Clean ECAP recordings help to accurately estimate auditory neural activity patterns and ECAP magnitudes, particularly through the panoramic ECAP (PECAP) framework. However, noise—especially in low-signal-to-noise ratio (SNR) conditions—can lead to significant errors in parameter estimation. This study proposes a two-stage preprocessing denoising (TSPD) algorithm to address this issue and enhance ECAP signals. First, an ECAP matrix is constructed using the forward-masking technique, representing the signal as a two-dimensional image. This matrix undergoes spatial noise reduction via an improved spatial median (I-Median) filter. In the second stage, the denoised matrix is vectorized and further processed using a log-spectral amplitude (LSA) Wiener filter for spectral domain denoising. The enhanced vector is then reconstructed into the ECAP matrix for parameter estimation using PECAP. The above integrated spatial-spectral denoising framework is denoted as PECAP-TSPD in this work. Evaluations are conducted using a simulation-based ECAP model mixed with simulated and experimental noise, designed to emulate the spatial characteristics of real ECAPs. Three objective quality measures—namely, normalized root mean square error (RMSE), two-dimensional correlation coefficient (TDCC), and structural similarity index (SSIM)—are used. Simulated and experimental results show that the proposed PECAP-TSPD method has the lowest average RMSE of PECAP magnitudes (1.952%) and auditory neural patterns (1.407%), highest average TDCC (0.9988), and average SSIM (0.9931) compared to PECAP (6.446%, 5.703%, 0.9859, 0.8997), PECAP with convolutional neural network (CNN)-based denoising mask (PECAP-CNN) (9.700%, 7.111%, 0.9766, 0.8832), and PECAP with improved median filtering (PECAP-I-Median) (4.515%, 3.321%, 0.9949, 0.9470) under impulse noise conditions. Full article
(This article belongs to the Section Intelligent Sensors)
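
The two-stage structure described above (spatial denoising of the 2-D ECAP image, then spectral denoising of the vectorized signal) can be sketched as follows. Note the substitutions: the paper's I-Median and log-spectral amplitude (LSA) Wiener filters are replaced here by a plain median filter and a basic power-spectral Wiener gain, and the ECAP matrix and noise power are synthetic, so this is a structural illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def spatial_median_stage(ecap_matrix, size=3):
    # Stage 1: spatial denoising of the 2-D ECAP (masker x probe) image.
    # The paper uses an improved median (I-Median) filter; this plain median
    # filter is a simplified stand-in.
    return median_filter(ecap_matrix, size=size)

def spectral_wiener_stage(ecap_matrix, noise_power):
    # Stage 2: vectorize the matrix and apply a Wiener-type spectral gain.
    # The paper uses an LSA Wiener filter; this basic power-spectral gain is
    # only illustrative. noise_power is an assumed per-bin noise power.
    x = ecap_matrix.ravel()
    X = np.fft.rfft(x)
    power = np.abs(X) ** 2
    gain = np.maximum(power - noise_power, 0.0) / np.maximum(power, 1e-12)
    x_hat = np.fft.irfft(gain * X, n=x.size)
    return x_hat.reshape(ecap_matrix.shape)

# Hypothetical noisy ECAP matrix (16 masker x 16 probe electrodes)
rng = np.random.default_rng(0)
clean = np.exp(-0.1 * (np.arange(16)[:, None] - np.arange(16)[None, :]) ** 2)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = spectral_wiener_stage(spatial_median_stage(noisy), noise_power=0.1)
```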

21 pages, 3952 KB  
Article
Which Factors Enhance the Perceived Restorativeness of Streetscapes: Sound, Vision, or Their Combined Effects? Insights from Four Street Types in Nanjing, China
by Xi Lu, Jiamin Xu, Eckart Lange and Jingwen Cao
Land 2025, 14(4), 757; https://doi.org/10.3390/land14040757 - 1 Apr 2025
Cited by 4 | Viewed by 1599
Abstract
Streetscapes play a critical role in restorative landscapes, offering opportunities for promoting public well-being. Previous studies have predominantly examined the influence of visual and auditory stimuli on perceived restorativeness independently. There is a limited understanding of their interactive effects. In this research, 360 participants completed a series of experiments across four distinct street types, covering visual comfort assessment, acoustic environment assessment, and perceived restorativeness. They were assigned to either a control group or one of three experimental groups, each receiving a specific enhancement: visual stimuli, auditory stimuli, or a combination of audiovisual stimuli. The findings revealed that the experimental groups reported a greater sense of restorativeness compared to the control group. Notably, auditory stimuli demonstrated a more pronounced restorative effect than visual stimuli, while limited differences were found between auditory and audiovisual stimuli. The differences in experimental outcomes among the four street types are compared and discussed, highlighting context-specific guidelines for enhancing streetscape restorativeness. The findings highlight the value of enhancing the masking effect of the soundscape in street environmental design. The study adds a novel multi-sensory approach to the current body of research on restorative landscapes, providing significant insights for the planning and design of streetscapes. Full article

19 pages, 3137 KB  
Article
Investigating Neurophysiological, Perceptual, and Cognitive Mechanisms in Misophonia
by Chhayakanta Patro, Emma Wasko, Prashanth Prabhu and Nirmal Kumar Srinivasan
Biology 2025, 14(3), 238; https://doi.org/10.3390/biology14030238 - 26 Feb 2025
Cited by 3 | Viewed by 4072
Abstract
Misophonia is a condition characterized by intense, involuntary distress or anger in response to specific sounds, often leading to irritation or aggression. While the condition is recognized for its emotional and behavioral impacts, little is known about its physiological and perceptual effects. The current study aimed to explore the physiological correlates and perceptual consequences of misophonia through a combination of electrophysiological, perceptual, and cognitive assessments. Seventeen individuals with misophonia and sixteen control participants without the condition were compared. Participants completed a comprehensive battery of tests, including (a) cortical event-related potentials (ERPs) to assess neural responses to standard and deviant auditory stimuli, (b) the spatial release from the speech-on-speech masking (SRM) paradigm to evaluate speech segregation in background noise, and (c) the flanker task to measure selective attention and cognitive control. The results revealed that individuals with misophonia exhibited significantly smaller mean peak amplitudes of the N1 and N2 components in response to oddball tones compared to controls. This suggests a potential underlying neurobiological deficit in misophonia patients, as these components are associated with early auditory processing. However, no significant differences between each group were observed in the P1 and P2 components regarding oddball tones or in any ERP components in response to standard tones. Despite these altered neural responses, the misophonia group did not show differences in hearing thresholds, speech perception abilities, or cognitive function compared to the controls. These findings suggest that while misophonia may involve distinct neurophysiological changes, particularly in early auditory processing, it does not necessarily lead to perceptual deficits in speech perception or cognitive function. Full article
(This article belongs to the Special Issue Neural Correlates of Perception in Noise in the Auditory System)

20 pages, 9577 KB  
Article
A Novel Calculation Method to Quantify the Torque Dependency of the Masking Threshold of Tonal Powertrain Noise in Electric Vehicles
by Victor Abbink, Carsten Moll, David Landes and M. Ercan Altinsoy
Appl. Sci. 2024, 14(24), 11928; https://doi.org/10.3390/app142411928 - 20 Dec 2024
Viewed by 1137
Abstract
Tonal powertrain noise can have a strong negative impact on passengers’ quality and comfort perception in the interior of electric vehicles. Therefore, in the vehicle development process, the assessment of the perceptibility of tonal powertrain noise is essential. As wind and tire noise can mask tonal noises, engineers use modern masking models to determine the masking threshold of tonal powertrain noise from vehicle interior measurements. In the presently used method, the masking threshold is mostly generated with torque-free deceleration measurements. However, the influence of torque on the masking tire noise must be considered. As this requires time-consuming and costly road measurements, an extension of the method is being developed, which will also enable the use of roller dynamometer measurements for the assessment. For the extension of the method, however, the influence of the torque must also be considered. This paper presents a novel calculation method that quantifies the influence of torque on the masking threshold and converts masking thresholds from one arbitrary torque level to another. By identifying the frequency and speed range that is mainly affected by the torque-dependent tire noise, a regression model with respect to the tractive force on the tires can be used to calculate a torque-dependent correction factor. The developed method can significantly improve the validity of masking thresholds and, quantitatively, it generalizes well across different vehicle segments. The error can be reduced to below 2 dB above 2000 rpm and to below 1 dB above 4000 rpm. By using this method, more valid target level settings for tonal powertrain noise can be derived. Full article
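
A minimal sketch of the regression-based correction idea, assuming hypothetical calibration data and a simple linear relation between masking threshold and tractive force; the paper's actual regression model and its frequency/speed-range selection are not reproduced here.

```python
import numpy as np

# Hypothetical calibration data: masking threshold (dB) in the torque-sensitive
# frequency/speed range, measured at several tractive-force levels.
tractive_force_n = np.array([500.0, 1000.0, 2000.0, 3000.0, 4000.0])
masking_threshold_db = np.array([38.0, 40.5, 44.0, 46.5, 48.5])

# Linear regression of threshold vs. tractive force (illustrative stand-in).
slope, intercept = np.polyfit(tractive_force_n, masking_threshold_db, deg=1)

def convert_threshold(threshold_db, force_from_n, force_to_n):
    # Shift a masking threshold measured at one tractive force to another,
    # using the regression slope as a torque-dependent correction factor.
    return threshold_db + slope * (force_to_n - force_from_n)

# Example: convert a near-torque-free threshold to a 2500 N driving condition.
print(convert_threshold(39.0, force_from_n=100.0, force_to_n=2500.0))
```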

10 pages, 1618 KB  
Article
Spatial Release from Masking for Small Spatial Separations Using Simulated Cochlear Implant Speech
by Nirmal Srinivasan, SaraGrace McCannon and Chhayakant Patro
J. Otorhinolaryngol. Hear. Balance Med. 2024, 5(2), 18; https://doi.org/10.3390/ohbm5020018 - 27 Nov 2024
Cited by 2 | Viewed by 2954
Abstract
Background: Spatial release from masking (SRM) is the improvement in speech intelligibility when the masking signals are spatially separated from the target signal. Young, normal-hearing listeners have a robust auditory system that is capable of using the binaural cues even with a very small spatial separation between the target and the maskers. Prior studies exploring SRM through simulated cochlear implant (CI) speech have been completed using substantial spatial separations, exceeding 45° between the target signal and masking signals. Nevertheless, in real-world conversational scenarios, the spatial separation between the target and the maskers may be considerably less than what has been previously investigated. This study presents SRM data utilizing simulated CI speech with young, normal-hearing listeners, focusing on smaller but realistic spatial separations between the target and the maskers. Methods: Twenty-five young, normal-hearing listeners participated in this study. Speech identification thresholds, the target-to-masker ratio required to accurately identify 50% of the target words, were measured for both natural speech and simulated CI speech. Results: The results revealed that young, normal-hearing listeners had significantly higher speech identification thresholds when presented with simulated CI speech in comparison to natural speech. Furthermore, the amount of SRM was found to be greater for natural speech than for the simulated CI speech. Conclusions: The data suggest that young, normal-hearing individuals are capable of utilizing the interaural level difference cues in the simulated cochlear implant signal to achieve masking release at reduced spatial separations between the target and the maskers, highlighting the auditory system’s capability to extract these interaural cues even in the presence of degraded speech signals. Full article
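
For readers unfamiliar with the metric, SRM is simply the difference between the speech identification threshold measured with colocated maskers and the threshold measured with spatially separated maskers. The numbers below are hypothetical and are not taken from the study.

```python
def spatial_release_from_masking(colocated_tmr_db, separated_tmr_db):
    # SRM = threshold (target-to-masker ratio) with colocated maskers minus
    # threshold with separated maskers; positive values indicate a benefit.
    return colocated_tmr_db - separated_tmr_db

# Hypothetical thresholds (dB TMR) for natural vs. simulated CI speech.
print(spatial_release_from_masking(-2.0, -8.0))  # natural speech: 6 dB SRM
print(spatial_release_from_masking(4.0, 2.0))    # simulated CI speech: 2 dB SRM
```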

19 pages, 7262 KB  
Article
Comfortable Sound Design Based on Auditory Masking with Chord Progression and Melody Generation Corresponding to the Peak Frequencies of Dental Treatment Noises
by Masato Nakayama, Takuya Hayashi, Toru Takahashi and Takanobu Nishiura
Appl. Sci. 2024, 14(22), 10467; https://doi.org/10.3390/app142210467 - 13 Nov 2024
Viewed by 1973
Abstract
Noise reduction methods have been proposed for various loud noises. However, in a quiet indoor environment, even small noises often cause discomfort. One of the small noises that causes discomfort is noise with resonant frequencies. Since resonant frequencies are often high frequencies, it is difficult to apply conventional active noise control methods to them. To solve this problem, we focused on auditory masking, a phenomenon in which synthesized sounds increase the audible threshold. We have performed several studies on reducing discomfort based on auditory masking. However, it was difficult for comfortable sound design to be achieved using the previously proposed methods, even though they were able to reduce feelings of discomfort. Here, we focus on a pleasant sound: music. Comfortable sound design is made possible by introducing music theory into the design of masker signals. In this paper, we therefore propose comfortable sound design based on auditory masking with chord progression and melody generation to match the peak frequencies of dental treatment noises. Full article
(This article belongs to the Special Issue Noise Measurement, Acoustic Signal Processing and Noise Control)
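
One plausible reading of the masker-design step, sketched with hypothetical values: estimate the peak frequency of the treatment noise, snap it to the nearest equal-tempered pitch, and build a chord on that pitch from which a melody could be generated. The paper's actual chord-progression and melody-generation rules are not reproduced here.

```python
import numpy as np

A4 = 440.0
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def peak_frequency(signal, fs):
    # Dominant spectral peak of the treatment noise, in Hz.
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def nearest_note(freq_hz):
    # Snap a frequency to the nearest equal-tempered MIDI note.
    midi = int(round(69 + 12 * np.log2(freq_hz / A4)))
    return midi, NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

def major_triad(midi_root):
    # Root-position major triad built on the detected peak; a masker chord or
    # melody could then be synthesized from these pitches.
    return [midi_root, midi_root + 4, midi_root + 7]

# Hypothetical dental-drill recording: a 6 kHz tone sampled at 44.1 kHz.
fs = 44100
t = np.arange(fs) / fs
drill = np.sin(2 * np.pi * 6000 * t)
midi, name = nearest_note(peak_frequency(drill, fs))
print(name, major_triad(midi))
```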

33 pages, 10230 KB  
Article
Multi-Sensory Interaction and Spatial Perception in Urban Microgreen Spaces: A Focus on Vision, Auditory, and Olfaction
by Haohua Zheng, Man Luo, Yihan Wang and Yangyang Wei
Sustainability 2024, 16(20), 8809; https://doi.org/10.3390/su16208809 - 11 Oct 2024
Cited by 10 | Viewed by 5118
Abstract
As important recreational spaces for urban residents, urban microgreen parks enhance the urban living environment and alleviate psychological pressure on residents. The visual, auditory, and olfactory senses are crucial forms of perception in human interaction with nature, and the sustainable perceptual design of miniature green parks under their interaction has become a recent research hotspot. This study aimed to investigate the effects of the visual, acoustic, and olfactory environments (e.g., aromatic green vegetation) on human perception in miniature green parks. Participants were evenly divided into eight groups, including single-sensory groups, multi-sensory interaction groups, and a control group. Eye-tracking technology, blood pressure monitoring, the Semantic Differential (SD) scales, and the Profile of Mood State (POMS) were used to assess the effectiveness of physical and mental perception recovery in each group. The results revealed that in an urban microgreen space environment with relatively low ambient noise, visual–auditory, visual–olfactory, and visual–auditory–olfactory interactive stimuli were more effective in promoting the recovery of visual attention than single visual stimuli. Additionally, visual–auditory–olfactory interactive stimuli were able to optimize the quality of spatial perception by using positive sensory inputs to effectively mask negative experiences. Simultaneously, environments with a high proportion of natural sounds had the strongest stimuli, and in the visual–auditory group, systolic blood pressure at S7 and heart rate at S9 significantly decreased (p < 0.05), with reductions of 18.60 mmHg and 20.15 BPM, respectively. Aromatic olfactory sources were more effective in promoting physical and mental relaxation compared to other olfactory sources, with systolic blood pressure reductions of 24.40 mmHg (p < 0.01) for marigolds, 23.35 mmHg (p < 0.01) for small-leaved boxwood, and 27.25 mmHg (p < 0.05) for camphor trees. Specific auditory and olfactory conditions could guide visual focus, such as birdsong directing attention to trees, insect sounds drawing attention to herbaceous plants, floral scents attracting focus to flowers, and leaf scents prompting observation of a wider range of natural vegetation. In summary, significant differences exist between single-sensory experiences and multi-sensory modes of spatial perception and interaction in urban microgreen parks. Compared to a silent and odorless environment, the integration of acoustic and olfactory elements broadened the scope of visual attention, and in the visual–auditory–olfactory interactive perception, the combination of natural sounds and aromatic camphor tree scents had the best effect on attention recovery, thereby improving the quality of spatial perception in urban microgreen parks. Full article

9 pages, 2212 KB  
Article
Adaptive Filtering for Multi-Track Audio Based on Time–Frequency Masking Detection
by Wenhan Zhao and Fernando Pérez-Cota
Signals 2024, 5(4), 633-641; https://doi.org/10.3390/signals5040035 - 2 Oct 2024
Viewed by 2425
Abstract
There is a growing need to facilitate the production of recorded music as independent musicians are now key in preserving the broader cultural roles of music. A critical component of the production of music is multitrack mixing, a time-consuming task aimed at, among other things, reducing spectral masking and enhancing clarity. Traditionally, this is achieved by skilled mixing engineers relying on their judgment. In this work, we present an adaptive filtering method based on a novel masking detection scheme capable of identifying masking contributions, including temporal interchangeability between the masker and maskee. This information is then systematically used to design and apply filters. We implement our methods on multitrack music to improve the quality of the raw mix. Full article
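
A simplified stand-in for the detection-then-filtering idea in this abstract: flag time-frequency bins where one track exceeds another by a margin, then attenuate the masker only in those bins. The paper's detection scheme (including temporal interchangeability between masker and maskee) and its filter-design procedure are more elaborate; the parameters below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

def masking_map(masker, maskee, fs, margin_db=10.0, nperseg=2048):
    # Flag time-frequency bins where the masker exceeds the maskee by a margin.
    _, _, M = stft(masker, fs, nperseg=nperseg)
    _, _, T = stft(maskee, fs, nperseg=nperseg)
    masker_db = 20 * np.log10(np.abs(M) + 1e-12)
    maskee_db = 20 * np.log10(np.abs(T) + 1e-12)
    return masker_db - maskee_db > margin_db

def attenuate_masker(masker, mask_map, fs, reduction_db=6.0, nperseg=2048):
    # Apply a fixed attenuation to the masker only in the flagged bins, then
    # resynthesize the track with the inverse STFT.
    _, _, M = stft(masker, fs, nperseg=nperseg)
    gain = np.where(mask_map, 10 ** (-reduction_db / 20.0), 1.0)
    _, y = istft(M * gain, fs, nperseg=nperseg)
    return y
```

In a real mix, the flagged regions would typically be converted into smooth, frequency-dependent equalization curves rather than the hard binary gain used here.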

17 pages, 2684 KB  
Article
Characterization of Cochlear Implant Artifact and Removal Based on Multi-Channel Wiener Filter in Unilateral Child Patients
by Dario Rossi, Giulia Cartocci, Bianca M. S. Inguscio, Giulia Capitolino, Gianluca Borghini, Gianluca Di Flumeri, Vincenzo Ronca, Andrea Giorgi, Alessia Vozzi, Rossella Capotorto, Fabio Babiloni, Alessandro Scorpecci, Sara Giannantonio, Pasquale Marsella, Carlo Antonio Leone, Rosa Grassia, Francesco Galletti, Francesco Ciodaro, Cosimo Galletti and Pietro Aricò
Bioengineering 2024, 11(8), 753; https://doi.org/10.3390/bioengineering11080753 - 24 Jul 2024
Cited by 1 | Viewed by 2357
Abstract
Cochlear implants (CI) allow deaf patients to improve language perception and emotional valence assessment. Electroencephalographic (EEG) measures have so far been employed to improve CI programming reliability and to evaluate listening effort in auditory tasks, which are particularly useful in conditions where subjective evaluations are scarcely applicable or reliable. Unfortunately, the presence of the CI on the scalp introduces an electrical artifact coupled to EEG signals that masks physiological features recorded by electrodes close to the implant site. Currently, methods for CI artifact removal have been developed for very specific EEG montages or protocols, while others require many scalp electrodes. In this study, we propose a method based on the Multi-channel Wiener filter (MWF) to overcome those shortcomings. Nine children with unilateral CI and nine age-matched normal hearing children (control) participated in the study. EEG data were acquired on a relatively low number of electrodes (n = 16) during a resting condition and during an auditory task. The obtained results allowed us to characterize the CI artifact on the affected electrode and to significantly reduce, if not remove, it through MWF filtering. Moreover, the results indicate, by comparing the two sample populations, that the EEG data loss in CI users after filtering is minimal and that the data maintain their EEG physiological characteristics. Full article
(This article belongs to the Special Issue IoT Technology in Bioengineering Applications)
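
A minimal, full-rank multi-channel Wiener filter sketch under the usual assumptions (artifact and EEG uncorrelated, covariances estimated from marked artifact-contaminated and artifact-free segments). The channel count, segment marking, and synthetic data below are illustrative and do not reproduce the authors' estimator.

```python
import numpy as np

def mwf_weights(eeg_artifact_segments, eeg_clean_segments):
    # Simplified multi-channel Wiener filter: estimate the contaminated and
    # clean-EEG covariances from marked segments (channels x samples arrays)
    # and form W = Ryy^-1 Rdd, so that d_hat = W.T @ y estimates the clean EEG.
    Ryy = np.cov(eeg_artifact_segments)  # EEG + CI-artifact covariance
    Rdd = np.cov(eeg_clean_segments)     # artifact-free EEG covariance
    return np.linalg.solve(Ryy, Rdd)

def apply_mwf(W, eeg):
    # Filter a channels x samples EEG recording to suppress the CI artifact.
    return W.T @ eeg

# Hypothetical 16-channel recording: a sinusoidal artifact added on channel 3.
rng = np.random.default_rng(1)
clean = rng.standard_normal((16, 5000))
artifact = clean + 5.0 * np.outer(np.eye(16)[3], np.sin(np.linspace(0, 200 * np.pi, 5000)))
W = mwf_weights(artifact, clean)
denoised = apply_mwf(W, artifact)
```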

16 pages, 2980 KB  
Article
Schooling Fish from a New, Multimodal Sensory Perspective
by Matz Larsson
Animals 2024, 14(13), 1984; https://doi.org/10.3390/ani14131984 - 5 Jul 2024
Cited by 7 | Viewed by 3978
Abstract
The acoustic hypothesis suggests that schooling can result in several benefits. (1) The acoustic pattern (AP) (pressure waves and other water movements) produced by swimming is likely to serve as a signal within fish shoals, communicating useful spatial and temporal information between school members, enabling synchronized locomotion and influencing join, stay or leave decisions and shoal assortment. (2) Schooling is likely to reduce the masking of environmental signals, e.g., by auditory grouping, and fish may achieve windows of silence by simultaneously stopping their movements. (3) A solitary swimming fish produces an uncomplicated AP that will give a nearby predator’s lateral line organ (LLO) excellent information, but, if extra fish join, they will produce increasingly complex and indecipherable APs. (4) Fishes swimming close to one another will also blur the electrosensory system (ESS) of predators. Since predators use multimodal information, and since information from the LLO and the ESS is more important than vision in many situations, schooling fish may achieve increased survival by confusing these sensory systems. The combined effects of such predator confusion and other acoustical benefits may contribute to why schooling became an adaptive success. A model encompassing the complex effects of synchronized group locomotion on LLO and ESS perception might increase the understanding of schooling behavior. Full article
(This article belongs to the Special Issue Functional Morphology and Adaptations of Aquatic Life)

18 pages, 4144 KB  
Article
Auditory Sensory Gating: Effects of Noise
by Fan-Yin Cheng, Julia Campbell and Chang Liu
Biology 2024, 13(6), 443; https://doi.org/10.3390/biology13060443 - 18 Jun 2024
Cited by 1 | Viewed by 3665
Abstract
Cortical auditory evoked potentials (CAEPs) indicate that noise degrades auditory neural encoding, causing decreased peak amplitude and increased peak latency. Different types of noise affect CAEP responses, with greater informational masking causing additional degradation. In noisy conditions, attention can improve target signals’ neural encoding, reflected by an increased CAEP amplitude, which may be facilitated through various inhibitory mechanisms at both pre-attentive and attentive levels. While previous research has mainly focused on inhibition effects during attentive auditory processing in noise, the impact of noise on the neural response during the pre-attentive phase remains unclear. Therefore, this preliminary study aimed to assess the auditory gating response, reflective of the sensory inhibitory stage, to repeated vowel pairs presented in background noise. CAEPs were recorded via high-density EEG in fifteen normal-hearing adults in quiet and noise conditions with low and high informational masking. The difference between the average CAEP peak amplitude evoked by each vowel in the pair was compared across conditions. Scalp maps were generated to observe general cortical inhibitory networks in each condition. Significant gating occurred in quiet, while noise conditions resulted in a significantly decreased gating response. The gating function was significantly degraded in noise with less informational masking content, coinciding with a reduced activation of inhibitory gating networks. These findings illustrate the adverse effect of noise on pre-attentive inhibition related to speech perception. Full article
(This article belongs to the Special Issue Neural Correlates of Perception in Noise in the Auditory System)