Search Results (8)

Search Parameters:
Keywords = auditory perception inspired

15 pages, 2932 KiB  
Article
The Role of Sensory Cues in Collective Dynamics: A Study of Three-Dimensional Vicsek Models
by Poorendra Ramlall and Subhradeep Roy
Appl. Sci. 2025, 15(3), 1556; https://doi.org/10.3390/app15031556 - 4 Feb 2025
Viewed by 923
Abstract
This study presents a three-dimensional collective motion model that integrates auditory and visual sensing modalities, inspired by organisms like bats that rely on these senses for navigation. Most existing models of collective motion consider vision-based sensing, likely reflecting an inherent human bias towards visual perception. However, many organisms utilize multiple sensory modalities, and this study explores how the integration of these distinct sensory inputs influences group behavior. We investigate a generalized scenario of three-dimensional motion, an area not previously explored for combining sensory information. Through numerical simulations, we examine the combined impact of auditory and visual sensing on group behavior, contrasting these effects with those observed when relying solely on vision or audition. The results demonstrate that composite sensing allows particles to interact with more neighbors and thereby gain more information. This interaction allows the formation of a single, large, perfectly aligned group using a narrow sensing region, achievable by taking advantage of the mechanics of both auditory and visual sensing. Our findings demonstrate the importance of integrating multiple sensory modalities in shaping emergent group behavior, with potential applications in both biological studies and the development of robotic swarms.
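
The paper's exact model is not reproduced in this listing, but the core idea, a Vicsek-style alignment update whose neighborhood is the union of an omnidirectional auditory sphere and a forward vision cone, can be sketched briefly. The radii, cone half-angle, noise scheme, and all other parameter values below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def vicsek_step(pos, head, box=10.0, v0=0.1, r_aud=1.0, r_vis=2.0,
                half_angle=np.pi / 4, eta=0.1):
    """One update of a 3D Vicsek-style model with composite sensing:
    neighbors lie in an omnidirectional 'auditory' sphere (radius r_aud)
    or a forward 'vision' cone (range r_vis, half-angle half_angle)."""
    # Pairwise displacements under periodic boundary conditions.
    d = pos[None, :, :] - pos[:, None, :]
    d -= box * np.round(d / box)
    dist = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(dist, np.inf)

    aud = dist < r_aud                                 # hearing: a sphere
    cos = np.einsum("ijk,ik->ij", d, head) / dist      # angle to own heading
    vis = (dist < r_vis) & (cos > np.cos(half_angle))  # vision: a cone

    nbr = aud | vis
    np.fill_diagonal(nbr, True)                        # include self

    # Average neighbor headings, add vectorial noise, renormalize.
    mean = nbr.astype(float) @ head
    mean += eta * rng.standard_normal(mean.shape)
    head = mean / np.linalg.norm(mean, axis=1, keepdims=True)
    return (pos + v0 * head) % box, head

# Usage: 200 particles with random initial headings.
pos = rng.uniform(0.0, 10.0, (200, 3))
head = rng.standard_normal((200, 3))
head /= np.linalg.norm(head, axis=1, keepdims=True)
for _ in range(100):
    pos, head = vicsek_step(pos, head)
polarization = np.linalg.norm(head.mean(axis=0))  # 1.0 = perfect alignment
```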

16 pages, 5334 KiB  
Article
An Auditory Convolutional Neural Network for Underwater Acoustic Target Timbre Feature Extraction and Recognition
by Junshuai Ni, Fang Ji, Shaoqing Lu and Weijia Feng
Remote Sens. 2024, 16(16), 3074; https://doi.org/10.3390/rs16163074 - 21 Aug 2024
Cited by 2 | Viewed by 1526
Abstract
To extract the line-spectrum features of underwater acoustic targets in complex environments, this paper proposes an auditory convolutional neural network (ACNN), inspired by the human auditory perception mechanism, that is capable of perceiving frequency components, timbre, and critical information. The model first uses a gammatone filter bank, which mimics the excitation response of the cochlear basilar membrane, to decompose the input time-domain signal into sub-bands, guiding the network to perceive the line-spectrum frequency information of the target. A sequence of convolution layers then filters out interfering noise and enhances the line-spectrum components of each sub-band by simulating the computation of energy distribution features. An improved channel attention module next selects the line spectra most critical for recognition; within this module, a new global pooling method is proposed to better extract intrinsic properties. Finally, the sub-band information is fused by a combination layer and a single-channel convolution layer to generate an output vector with the same dimensions as the input signal. A decision module with a Softmax classifier, appended to the auditory network, recognizes the five classes of vessel targets in the ShipsEar dataset with an accuracy of 99.8%, a 2.7% improvement over the previously proposed DRACNN method, with gains of varying magnitude over the eight other methods compared. Visualization results show that the model significantly suppresses interfering noise and selectively enhances the radiated-noise line-spectrum energy of underwater acoustic targets.
(This article belongs to the Topic AI and Data-Driven Advancements in Industry 4.0)
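
The gammatone front end described in the abstract is a standard auditory-modeling component and can be sketched independently of the paper. Below is a minimal 4th-order gammatone filter bank built by direct convolution, with center frequencies spaced on the ERB-rate scale (Glasberg and Moore); the band count, frequency range, and filter length are illustrative assumptions, not the ACNN's configuration.

```python
import numpy as np

def erb(fc):
    """Equivalent rectangular bandwidth (Glasberg & Moore), in Hz."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4, b=1.019):
    """Impulse response of a gammatone filter centered at fc (Hz)."""
    t = np.arange(int(duration * fs)) / fs
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * erb(fc) * t) \
        * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))  # crude peak normalization

def gammatone_bank(x, fs, fmin=50.0, fmax=4000.0, n_bands=32):
    """Decompose x into sub-band signals, shape (n_bands, len(x)).
    Center frequencies are uniform on the ERB-rate scale."""
    erb_rate = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)
    inv_erb_rate = lambda e: (10 ** (e / 21.4) - 1.0) / 4.37e-3
    centers = inv_erb_rate(np.linspace(erb_rate(fmin), erb_rate(fmax), n_bands))
    return np.stack([np.convolve(x, gammatone_ir(fc, fs), mode="same")
                     for fc in centers])

# Usage: decompose one second of synthetic noise at 16 kHz.
fs = 16000
x = np.random.default_rng(1).standard_normal(fs)
subbands = gammatone_bank(x, fs)  # shape (32, 16000)
```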

18 pages, 3363 KiB  
Article
A Concert-Based Study on Melodic Contour Identification among Varied Hearing Profiles—A Preliminary Report
by Razvan Paisa, Jesper Andersen, Francesco Ganis, Lone M. Percy-Smith and Stefania Serafin
J. Clin. Med. 2024, 13(11), 3142; https://doi.org/10.3390/jcm13113142 - 27 May 2024
Viewed by 956
Abstract
Background: This study investigated how different hearing profiles influenced melodic contour identification (MCI) in a real-world concert setting with a live band including drums, bass, and a lead instrument. We aimed to determine the impact of various auditory assistive technologies on music perception in an ecologically valid environment. Methods: The study involved 43 participants with varying hearing capabilities: normal hearing, bilateral hearing aids, bimodal hearing, single-sided cochlear implants, and bilateral cochlear implants. Participants were exposed to melodies played on a piano or accordion, with and without an electric bass as a masker, accompanied by a basic drum rhythm. Bayesian logistic mixed-effects models were used to analyze the data. Results: The introduction of an electric bass as a masker did not significantly affect MCI performance for any hearing group when melodies were played on the piano, contrary to its effect on accordion melodies and to previous studies. Greater challenges were observed with accordion melodies, especially when accompanied by an electric bass. Conclusions: MCI performance among hearing aid users was comparable to that of the other hearing-impaired profiles, challenging the hypothesis that they would outperform cochlear implant users. A set of short melodies inspired by Western music styles was developed for future contour identification tasks.
(This article belongs to the Special Issue Advances in the Diagnosis, Treatment, and Prognosis of Hearing Loss)
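
As a reading aid only: a Bayesian logistic mixed-effects analysis of trial-level correctness, of the kind the abstract mentions, is commonly set up in Python with the bambi library, roughly as below. The file name, column names, and model formula are hypothetical, not the authors' specification.

```python
import arviz as az
import bambi as bmb
import pandas as pd

# Hypothetical trial-level data: one row per contour-identification trial,
# with columns: correct (0/1), instrument, masker, hearing_group, subject.
df = pd.read_csv("mci_trials.csv")

# Correctness ~ fixed effects (with interactions) for instrument, masker,
# and hearing profile, plus a random intercept per participant.
model = bmb.Model(
    "correct ~ instrument * masker * hearing_group + (1|subject)",
    df,
    family="bernoulli",
)
results = model.fit(draws=2000, chains=4)  # NUTS sampling via PyMC
print(az.summary(results))                 # posterior summaries per effect
```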

18 pages, 12292 KiB  
Article
Correlations among Firing Rates of Tactile, Thermal, Gustatory, Olfactory, and Auditory Sensations Mimicked by Artificial Hybrid Fluid (HF) Rubber Mechanoreceptors
by Kunio Shimada
Sensors 2023, 23(10), 4593; https://doi.org/10.3390/s23104593 - 9 May 2023
Cited by 1 | Viewed by 1998
Abstract
To advance the development of sensors built from monofunctional sensation systems capable of versatile responses to tactile, thermal, gustatory, olfactory, and auditory stimuli, mechanoreceptors fabricated as a single platform with an electric circuit require investigation. It is also essential to resolve the complicated structure of the sensor. To realize the single platform, our proposed hybrid fluid (HF) rubber mechanoreceptors, which mimic the free nerve endings, Merkel cells, Krause end bulbs, Meissner corpuscles, Ruffini endings, and Pacinian corpuscles underlying the bio-inspired five senses, are useful enough to simplify the fabrication process and resolve the complicated structure. This study used electrochemical impedance spectroscopy (EIS) to elucidate the intrinsic structure of the single platform and the physical mechanisms of the firing rate, such as slow adaptation (SA) and fast adaptation (FA), which are induced by that structure and involve the capacitance, inductance, reactance, etc., of the HF rubber mechanoreceptors. In addition, the relations among the firing rates of the various sensations were clarified. The adaptation of the firing rate in the thermal sensation is the opposite of that in the tactile sensation, while the firing rates in the gustatory, olfactory, and auditory sensations at frequencies below 1 kHz show the same adaptation as the tactile sensation. The present findings are useful not only in neurophysiology, for research into the biochemical reactions of neurons and the brain's perception of stimuli, but also in the sensor field, to advance the development of sensors mimicking bio-inspired sensations.
(This article belongs to the Special Issue Applications of Flexible Tactile Sensors in Intelligent Systems)
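
EIS analyses like the one described are usually interpreted by fitting an equivalent circuit to the measured complex impedance. As a generic illustration only (the paper's actual circuit model is not given here), the sketch below computes the impedance spectrum of a series resistance and inductance in front of a parallel RC element, showing how capacitance, inductance, and reactance shape the spectrum; all element values are arbitrary assumptions.

```python
import numpy as np

def impedance(freq, rs=100.0, rp=10e3, c=1e-7, l=1e-3):
    """Complex impedance of a series Rs + L feeding a parallel (Rp || C)
    element. Element values are illustrative, not fitted to any data."""
    w = 2 * np.pi * freq
    z_parallel = rp / (1 + 1j * w * rp * c)  # Rp in parallel with C
    return rs + 1j * w * l + z_parallel

freq = np.logspace(0, 6, 200)                # 1 Hz .. 1 MHz sweep
z = impedance(freq)
magnitude = np.abs(z)                        # |Z| for a Bode plot
phase_deg = np.degrees(np.angle(z))          # reactive behavior vs. frequency
```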

16 pages, 5113 KiB  
Article
Origami-Inspired Structure with Pneumatic-Induced Variable Stiffness for Multi-DOF Force-Sensing
by Wenchao Yue, Jiaming Qi, Xiao Song, Shicheng Fan, Giancarlo Fortino, Chia-Hung Chen, Chenjie Xu and Hongliang Ren
Sensors 2022, 22(14), 5370; https://doi.org/10.3390/s22145370 - 19 Jul 2022
Cited by 15 | Viewed by 4540
Abstract
With the emerging need for human–machine interaction, multi-modal sensory interaction is increasingly pursued beyond the common perceptual forms (visual or auditory), so developing flexible, adaptive, stiffness-variable force-sensing devices is key to further promoting human–machine fusion. However, current sensors' sensitivity is fixed and nonadjustable after fabrication, limiting further development. To solve this problem, we propose an origami-inspired structure that achieves multiple degrees of freedom (DoFs) of motion with variable stiffness for force-sensing, combining the ductility and flexibility of origami structures. In combination with pneumatic actuation, the structure can achieve and adapt compression, pitch, roll, diagonal, and array motions (five motion modes), which significantly increases force adaptability and sensing diversity. To achieve closed-loop control and avoid excessive gas injection, an ultra-flexible microfiber sensor is designed and seamlessly embedded, with an approximately linear sensitivity of ∼0.35 Ω/kPa at relative pressures of 0–100 kPa and an exponential sensitivity at relative pressures of 100–350 kPa, enabling the device to work under various conditions. The final calibration experiment demonstrates that the pre-pressure value affects the sensor's sensitivity: as pre-pressure increases from 65 to 95 kPa, the average sensitivity curve shifts rightwards in intervals of around 9 N, which markedly improves force-sensing capability in the 0–2 N range. At the relatively extreme air pressure of 100 kPa, the force sensitivity is around 11.6 Ω/N. Our proposed design, with its low fabrication cost, high integration level, and suitable sensing range, therefore shows great potential for applications in flexible force-sensing.
(This article belongs to the Special Issue Advances in Tactile Sensing and Robotic Grasping)
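
The reported sensitivities imply a piecewise resistance-pressure characteristic. Here is a minimal sketch, assuming the reported linear slope of 0.35 Ω/kPa up to 100 kPa and a continuous exponential segment from 100 to 350 kPa; the baseline resistance and the exponential rate are illustrative assumptions, since the abstract does not state them.

```python
import numpy as np

R0 = 50.0     # assumed baseline resistance in ohms (not from the paper)
K_LIN = 0.35  # reported linear sensitivity, ohm/kPa, for 0-100 kPa
ALPHA = 0.005 # assumed exponential rate (1/kPa) for 100-350 kPa

def resistance(p_kpa):
    """Sensor resistance vs. relative pressure, piecewise as reported:
    linear below 100 kPa, exponential from 100 to 350 kPa, continuous
    at the 100 kPa breakpoint."""
    p = np.asarray(p_kpa, dtype=float)
    r_at_100 = R0 + K_LIN * 100.0
    return np.where(p <= 100.0,
                    R0 + K_LIN * p,
                    r_at_100 * np.exp(ALPHA * (p - 100.0)))

p = np.linspace(0, 350, 8)
print(np.round(resistance(p), 1))  # rises linearly, then exponentially
```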

28 pages, 19143 KiB  
Article
Behavioral Outcomes and Neural Network Modeling of a Novel, Putative, Recategorization Sound Therapy
by Mithila Durai, Zohreh Doborjeh, Philip J. Sanders, Dunja Vajsakovic, Anne Wendt and Grant D. Searchfield
Brain Sci. 2021, 11(5), 554; https://doi.org/10.3390/brainsci11050554 - 27 Apr 2021
Cited by 8 | Viewed by 3916
Abstract
The mechanisms underlying sound’s effect on tinnitus perception are unclear. Tinnitus activity appears to conflict with perceptual expectations of “real” sound, making it a salient signal. Attention diverted towards tinnitus during the later stages of object processing potentially disrupts high-order auditory streaming, and its uncertain nature results in negative psychological responses. This study investigated the benefits and neurophysiological basis of passive perceptual training and informational counseling intended to recategorize the phantom perception as a more real auditory object. Specifically, it examined the underlying psychoacoustic correlates of tinnitus, the neural activities associated with tinnitus auditory streaming, and how malleable these are to targeted intervention. Eighteen participants (8 females, 10 males, mean age = 61.6 years) completed the study, which consisted of two parts: (1) an acute, 30 min exposure to a sound matched to the person’s tinnitus (Tinnitus Avatar) that was cross-faded to a selected nature sound (Cicadas, Fan, Water Sound/Rain, Birds, Water and Bird); and (2) a chronic, 3-month exposure to the same “morphed” sound. A brain-inspired spiking neural network (SNN) architecture was used to model and compare differences between electroencephalography (EEG) patterns recorded before, during, and after (3 months) the morphing sound presentation, and at post-follow-up. Results showed that the generated tinnitus avatar was a good match to an individual’s tinnitus as rated on likeness scales and was not rated as unpleasant. The five environmental sounds selected for the study were also rated as appropriate matches to individuals’ tinnitus and largely pleasant to listen to. There was a significant reduction in the Tinnitus Functional Index score, and in its subscales for intrusiveness of the tinnitus signal and ability to concentrate, at trial end compared to baseline. On severity rating scales, there was a significant decrease in how strong the tinnitus signal was rated, as well as in ratings of how easy it was to ignore. Qualitative analysis found that the environmental sound interacted with the tinnitus in a positive way; participants did not experience a change in severity, but characteristics of the tinnitus, including pitch and uniformity of sound, were reported to change. The results indicate the feasibility of the computational SNN method and provide preliminary evidence that the sound exposure may change the activation of neural tinnitus networks, with greater bilateral hemispheric involvement as the sound morphs over time into a natural environmental sound, particularly in areas related to attention and discriminatory judgments (dorsal attention network, precentral gyrus, ventral anterior network). This is the first study to attempt to recategorize tinnitus using passive auditory training with a sound that morphs from resembling the person’s tinnitus into a natural sound. These findings will be used to design future controlled trials to elucidate whether this approach differs in effect and mechanism from conventional Broadband Noise (BBN) sound therapy.
(This article belongs to the Special Issue Neurorehabilitation of Sensory Disorders)
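
The “morphed” stimulus is described as a cross-fade from a tinnitus avatar into a nature sound. A minimal sketch of an equal-power cross-fade between two equal-length signals follows; the fade law, durations, and synthetic inputs are illustrative assumptions, not the study's stimulus-generation procedure.

```python
import numpy as np

def crossfade_morph(avatar, nature):
    """Equal-power cross-fade from `avatar` into `nature` over the full
    length of the signals: cos^2 + sin^2 = 1 keeps the combined power
    roughly constant throughout the morph."""
    n = min(len(avatar), len(nature))
    t = np.linspace(0.0, np.pi / 2, n)
    return np.cos(t) * avatar[:n] + np.sin(t) * nature[:n]

# Usage with synthetic stand-ins for the two recordings (fs = 44.1 kHz).
fs = 44100
rng = np.random.default_rng(2)
avatar = rng.standard_normal(30 * fs)  # stand-in for a tinnitus avatar
nature = rng.standard_normal(30 * fs)  # stand-in for a nature recording
morphed = crossfade_morph(avatar, nature)
```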

14 pages, 3443 KiB  
Article
Bio-Inspired Modality Fusion for Active Speaker Detection
by Gustavo Assunção, Nuno Gonçalves and Paulo Menezes
Appl. Sci. 2021, 11(8), 3397; https://doi.org/10.3390/app11083397 - 10 Apr 2021
Cited by 1 | Viewed by 2398
Abstract
Human beings have developed remarkable abilities to integrate information from various sensory sources by exploiting their inherent complementarity. Perceptual capabilities are thereby heightened, enabling, for instance, the well-known "cocktail party" and McGurk effects, i.e., speech disambiguation from a panoply of sound signals. This fusion ability is also key in refining the perception of sound source location, as in distinguishing whose voice is being heard in a group conversation. Furthermore, neuroscience has identified the superior colliculus as the brain region responsible for this modality fusion, and a handful of biological models have been proposed to describe its underlying neurophysiological process. Deriving inspiration from one of these models, this paper presents a methodology for effectively fusing correlated auditory and visual information for active speaker detection. Such an ability can have a wide range of applications, from teleconferencing systems to social robotics. The detection approach initially routes auditory and visual information through two specialized neural network structures. The resulting embeddings are fused via a novel layer based on the superior colliculus, whose topological structure emulates the spatial cross-mapping of unimodal perceptual fields. The validation process employed two publicly available datasets, with the achieved results confirming and greatly surpassing initial expectations.
(This article belongs to the Special Issue Computer Vision for Mobile Robotics)
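
The fusion layer is described only at a high level, so the sketch below is a loose structural analogy rather than the authors' architecture: two unimodal embeddings are projected onto a shared map and combined multiplicatively, echoing the coincidence-style cross-mapping attributed to the superior colliculus. All dimensions and the classifier head are assumptions.

```python
import torch
import torch.nn as nn

class CrossMapFusion(nn.Module):
    """Toy audio-visual fusion: unimodal embeddings are projected onto a
    shared 'spatial map' and combined multiplicatively, loosely echoing
    superior-colliculus cross-mapping. Dimensions are illustrative."""
    def __init__(self, audio_dim=128, visual_dim=256, map_dim=64):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, map_dim)
        self.visual_proj = nn.Linear(visual_dim, map_dim)
        self.head = nn.Sequential(nn.Linear(map_dim, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, audio_emb, visual_emb):
        a = torch.tanh(self.audio_proj(audio_emb))    # (batch, map_dim)
        v = torch.tanh(self.visual_proj(visual_emb))  # (batch, map_dim)
        fused = a * v            # multiplicative cross-modal coincidence
        return self.head(fused)  # speaking / not-speaking logit

# Usage: one embedding pair per (face, audio-window) candidate.
fusion = CrossMapFusion()
logit = fusion(torch.randn(8, 128), torch.randn(8, 256))  # shape (8, 1)
```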

12 pages, 8465 KiB  
Article
A Deep Convolutional Neural Network Inspired by Auditory Perception for Underwater Acoustic Target Recognition
by Honghui Yang, Junhao Li, Sheng Shen and Guanghui Xu
Sensors 2019, 19(5), 1104; https://doi.org/10.3390/s19051104 - 4 Mar 2019
Cited by 85 | Viewed by 6072
Abstract
Underwater acoustic target recognition (UATR) using ship-radiated noise faces major challenges due to the complex marine environment. In this paper, inspired by the neural mechanisms of auditory perception, a new end-to-end deep neural network named the auditory perception inspired Deep Convolutional Neural Network (ADCNN) is proposed for UATR. In the ADCNN model, inspired by the frequency-component perception neural mechanism, a bank of multi-scale deep convolution filters is designed to decompose the raw time-domain signal into signals with different frequency components. Inspired by the plasticity neural mechanism, the parameters of these deep convolution filters are initialized randomly and then learned and optimized for UATR. Max-pooling layers and fully connected layers then extract features from each decomposed signal. Finally, in fusion layers, the features from each decomposed signal are merged and deep feature representations are extracted to classify underwater acoustic targets. The ADCNN model simulates the deep acoustic information-processing structure of the auditory system. Experimental results show that the proposed model can decompose, model, and classify ship-radiated noise signals efficiently, achieving a classification accuracy of 81.96%, the highest among the comparison experiments. These results suggest that auditory perception inspired deep learning has encouraging potential to improve the classification performance of UATR.
(This article belongs to the Special Issue Intelligent Sensor Signal in Machine Learning)
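
The front end the abstract describes (a randomly initialized, learnable bank of multi-scale convolution filters over the raw waveform, followed by pooling, fully connected feature extraction, and fusion) can be sketched as follows. Kernel sizes, strides, channel counts, and the five-class head are illustrative assumptions, not the published ADCNN configuration.

```python
import torch
import torch.nn as nn

class MultiScaleFrontEnd(nn.Module):
    """Sketch of an ADCNN-style front end: parallel 1-D convolutions with
    different kernel sizes decompose the raw waveform into 'sub-band'
    feature maps; filter weights start random and are learned end to end."""
    def __init__(self, scales=(16, 64, 256, 1024), channels=8):
        super().__init__()
        self.banks = nn.ModuleList(
            [nn.Conv1d(1, channels, kernel_size=k, stride=4, padding=k // 2)
             for k in scales])

    def forward(self, wave):  # wave: (batch, 1, samples)
        return [bank(wave) for bank in self.banks]

class TinyADCNN(nn.Module):
    """Per-scale pooling plus a fused classifier over five target classes."""
    def __init__(self, channels=8, n_classes=5):
        super().__init__()
        self.front = MultiScaleFrontEnd(channels=channels)
        self.pool = nn.AdaptiveMaxPool1d(32)
        self.classifier = nn.Linear(4 * channels * 32, n_classes)

    def forward(self, wave):
        # Pool each scale's feature map, flatten, and fuse by concatenation.
        feats = [self.pool(f).flatten(1) for f in self.front(wave)]
        return self.classifier(torch.cat(feats, dim=1))  # class logits

logits = TinyADCNN()(torch.randn(2, 1, 16000))  # shape (2, 5)
```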
