Search Results (13,753)

Search Parameters:
Keywords = acoustic

27 pages, 1898 KB  
Article
Parallel Bilingual Datasets: A Multimodal Deep Learning Framework for Proficiency and Style Classification
by Padmavathi Kesavan, Miranda Lakshmi Travis, Martin Aruldoss and Martin Wynn
Multimodal Technol. Interact. 2026, 10(5), 47; https://doi.org/10.3390/mti10050047 (registering DOI) - 30 Apr 2026
Abstract
This study presents a multimodal deep learning framework for automatic proficiency and style classification of parallel bilingual Tamil–Hindi learner data. The proposed system employs a dual-headed neural architecture to simultaneously predict proficiency levels (Basic, Advanced) and stylistic categories (Formal, Literary) using shared feature representations. A curated dataset of bilingual text samples is utilized, along with synthetic speech generated through text-to-speech (TTS) to enable controlled multimodal experimentation. Five deep learning architectures are evaluated under text-only, audio-only, and learnable fusion settings. Experimental findings indicate that text-based models consistently achieve strong performance in both proficiency and style classification tasks. In contrast, the audio-only model demonstrates limited effectiveness, highlighting the constraints of synthetic acoustic features in capturing meaningful linguistic information. The fusion models provide only marginal improvements over text-based approaches, suggesting that textual representations play a dominant role in proficiency and stylistic classification within controlled datasets. These results emphasize the importance of linguistic features over acoustic signals for automated language assessment in low-resource settings. The proposed framework provides a scalable and reproducible approach and offers a foundation for future work incorporating real speech data and more diverse linguistic inputs. Full article
22 pages, 1906 KB  
Article
Audible Sound Stress Alters Behavior and Gene Transcription, and Negatively Impacts Development, Survival and Reproductive Fitness in Spodoptera frugiperda
by Chao-Yang Duan, Yun-Ju Xiang, Jun-Bo Li, Jun-Zhong Zhang, Da-Ying Fu, Wei Gao and Jin Xu
Insects 2026, 17(5), 467; https://doi.org/10.3390/insects17050467 (registering DOI) - 30 Apr 2026
Abstract
Moth auditory systems, evolutionarily adapted and structurally diverse with ultrasonic sensitivity, underpin the development of acoustic-based pest management strategies. Here, based on hypotheses derived from previous findings, we tested whether and how audible sounds (music, bird chirp, noise; 0.25–1 kHz, 80/120 dB) affect the development, survival, behavior and fecundity, as well as the molecular responses, using both short-term and long-term exposure (three successive generations) experimental designs. Behavioral assays showed dose-specific responses: high-intensity (120 dB) bird chirp and noise suppressed larval and adult activity, while low-intensity (80 dB) counterparts promoted larval crawling. Long-term exposure revealed that bird chirp and noise significantly impaired fitness, reducing larval/pupal body weight, pupation/eclosion rates, and egg hatching rate, with 120 dB noise exerting the strongest effects; 80 dB music showed neutral or positive impacts. Transcriptomic analysis identified 71–235 differentially expressed genes (DEGs) across treatment groups, with bird chirp and noise inducing more downregulated DEGs related to metabolism, immunity, and development. Notably, all cuticle-related DEGs in the 80 dB noise group and 53.2% in the 120 dB noise group were upregulated, suggesting stress-induced cuticular remodeling. GO/KEGG enrichment indicated distinct patterns: 80 dB music, bird chirp and 120 dB noise groups only had downregulated DEGs enriched in certain terms/pathways, mainly associated with cellular components; the 80 dB noise group had upregulated DEGs enriched in sensory, cuticle, metabolism and longevity-related terms/pathways, and downregulated DEGs in metabolism and human disease-related terms/pathways. Analysis of the expression patterns of all the longevity pathway-related genes suggested that sound stress induces lifespan regulation in this insect. These findings clarify S. frugiperda’s multidimensional responses to audible sound, providing a foundation for sound-based pest management. Full article
30 pages, 4920 KB  
Review
Acoustofluidic Biosensors
by Chun-Jui Chen, Jae-Sung Kwon and Han-Sheng Chuang
Micromachines 2026, 17(5), 561; https://doi.org/10.3390/mi17050561 - 30 Apr 2026
Abstract
The rapid and precise detection of biomarkers and pathogens remains a critical challenge in clinical diagnostics. Traditional methodologies are frequently hindered by protracted workflows, complex sample preparation, and reliance on resource-intensive instrumentation. Acoustofluidics—the synergistic integration of acoustics and microfluidics—has emerged as a transformative solution for point-of-care testing (POCT). Bulk acoustic wave (BAW) and surface acoustic wave (SAW) technologies enable the contactless, label-free, and biocompatible manipulation of bioparticles across micro- and nanometer scales. This review critically examines recent advancements in BAW- and SAW-based acoustofluidic biosensors. We elucidate the fundamental principles governing distinct acoustic modes—including Quartz Crystal Microbalance (QCM), film bulk acoustic resonator (FBAR), and Solidly Mounted Resonator (SMR) for BAW and Rayleigh and Love waves for SAW—and evaluate their specific roles in liquid-phase sensing, particle sorting, and cellular focusing. Results show that integrating on-chip sample preparation accelerates diagnostic workflows, reducing assay times to under 10 min. Coupling acoustic manipulation with optical, mass-based, or electrochemical modalities effectively overcomes fundamental diffusion limits, achieving ultrasensitive, multimodal detection. We address translational challenges—acoustothermal heating, biofouling, and scalable integration. Following a discussion of clinical applications in oncology and infectious diseases, we map emerging trajectories, emphasizing AI-driven intelligent microfluidics, modular architectures, and flexible wearable platforms that will ultimately democratize continuous precision diagnostics. Full article
(This article belongs to the Special Issue Point-of-Care Testing Based on Biosensors and Biomimetic Sensors)
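Among the BAW modes surveyed above, QCM sensing rests on the Sauerbrey relation between adsorbed mass and resonant-frequency shift. A minimal sketch using standard textbook quartz constants (the constants and the 5 MHz example are not values from this review):

```python
import math

# Standard AT-cut quartz constants (CGS units); textbook values,
# not taken from this review.
RHO_Q = 2.648     # density of quartz, g/cm^3
MU_Q = 2.947e11   # shear modulus, g/(cm*s^2)

def sauerbrey_shift(f0_hz, mass_per_area_g_cm2):
    """Frequency shift (Hz) of a QCM loaded by a thin rigid film.

    Sauerbrey relation: df = -2 * f0**2 * (dm/A) / sqrt(rho_q * mu_q).
    """
    sensitivity = 2.0 * f0_hz ** 2 / math.sqrt(RHO_Q * MU_Q)  # Hz * cm^2 / g
    return -sensitivity * mass_per_area_g_cm2

# A 5 MHz crystal loaded with 1 ug/cm^2 shifts by roughly -56.6 Hz.
print(round(sauerbrey_shift(5e6, 1e-6), 1))
```

The ~56.6 Hz per µg/cm² sensitivity of a 5 MHz crystal is why QCM resolves sub-microgram surface loading; note the relation assumes a thin rigid film and degrades for viscoelastic layers in liquid.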
12 pages, 5003 KB  
Case Report
Multimodal Imaging of Oncocytic Lipoadenoma Arising from the Parotid Deep Lobe with Medial Extension into the Parapharyngeal Space: A Case Report with Histopathologic Findings and Literature Review
by Jong-Uk Lee, Hye Jin Baek, Kwang Ho Choi, Eun Cho and Hyo Jung An
Diagnostics 2026, 16(9), 1366; https://doi.org/10.3390/diagnostics16091366 - 30 Apr 2026
Abstract
Background: Oncocytic lipoadenoma is an exceptionally rare benign fat-containing salivary gland tumor that most commonly arises in the parotid gland. Previous case reports have largely focused on histopathology with limited or single-modality imaging documentation; therefore, practical preoperative radiological characterization remains challenging. Case Presentation: A 46-year-old male presented with a 2-year history of a slowly enlarging right-sided parotid mass. Computed tomography and magnetic resonance imaging showed a well-circumscribed fat-containing mass with a discrete medially enhancing solid component, mild diffusion restriction and small cystic foci without aggressive features. Ultrasonography revealed a heterogeneously hypoechoic parotid mass; however, limited acoustic penetration hindered evaluation of the deep portion. A core-needle biopsy was inconclusive, and an atypical lipomatous tumor could not be excluded. Subsequent surgical excision confirmed an oncocytic lipoadenoma, a biphasic tumor comprising mature adipose tissue and cytokeratin 7-positive oncocytic epithelial nests. The patient has remained recurrence-free for 7 years after surgery. Conclusions: Fat-containing parotid tumors can be diagnostically challenging because imaging findings are often nonspecific, and biphasic lipoepithelial entities are rarely encountered. This case highlights that awareness of the pattern of macroscopic fat with a discrete enhancing non-fat component, interpreted alongside histopathological findings, may help narrow the differential diagnosis, guide management, and reduce diagnostic uncertainty. Full article
(This article belongs to the Special Issue Advances in Oral and Maxillofacial Imaging)
21 pages, 3645 KB  
Article
A Novel Mechanism Analysis Method for the Robotic Grinding of a TC4 Workpiece Using Acoustic Emission Based on an Improved CCEEMD Algorithm
by Xiangye Zhu, Qi Liu, Liang Liang, Xiaohu Xu and Sijie Yan
Machines 2026, 14(5), 501; https://doi.org/10.3390/machines14050501 - 30 Apr 2026
Abstract
The instantaneous contact zone in robotic abrasive belt grinding involves highly coupled thermo-mechanical interactions between abrasive grains and the workpiece material. Acoustic Emission (AE) signals generated during this process are inherently nonlinear and nonstationary, posing challenges for accurate process monitoring and mechanistic understanding. To address this, this study introduces an innovative AE signal processing framework designed to elucidate the robotic grinding mechanism for Ti-6Al-4V (TC4) titanium alloy. An improved Completely Complementary Ensemble Empirical Mode Decomposition (CCEEMD) algorithm, building upon Empirical Mode Decomposition (EMD), is developed to precisely extract intrinsic mode functions (IMFs) from raw AE data. Subsequently, a novel denoising algorithm utilizing noise statistical characteristics effectively removes invalid noise from the robotic machining system. Validation through robotic grinding experiments on TC4 workpieces successfully established quantifiable relationships between extracted AE features and the underlying grinding mechanism. Significantly, implementing this methodology contributed to extending the effective service life of a structured abrasive belt by approximately 20% while increasing machining efficiency by approximately 12%. This work presents a novel methodology combining improved CCEEMD and statistical denoising for AE analysis in robotic grinding, providing a robust link between AE signatures and material removal mechanisms, ultimately enabling quantitative process optimization. Full article
(This article belongs to the Special Issue Intelligent Design and Manufacturing of Mechanical Equipment)
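The abstract does not detail the improved CCEEMD or the authors' noise-statistics denoising, but the general pattern — robustly estimate a decomposed component's noise level, then threshold it — can be sketched generically. The MAD-based universal threshold below is a common textbook choice, not the paper's algorithm:

```python
import math
import statistics

def universal_threshold(component):
    """Donoho-style universal threshold from a robust noise estimate.

    Noise sigma is estimated via the median absolute deviation
    (MAD, scaled by 0.6745 for Gaussian noise); the threshold is
    sigma * sqrt(2 * ln N).
    """
    med = statistics.median(component)
    mad = statistics.median([abs(x - med) for x in component])
    sigma = mad / 0.6745
    return sigma * math.sqrt(2.0 * math.log(len(component)))

def soft_threshold(component, t):
    """Shrink each sample toward zero; magnitudes below t become 0."""
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in component]
```

Applied per IMF, this kind of scheme keeps impulsive AE bursts while suppressing broadband background; the paper's contribution lies in how the IMFs themselves are extracted and which noise statistics drive the threshold.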
39 pages, 3200 KB  
Article
A Multimodal Audiovisual Deep Learning Framework for Early Detection of Parkinson’s Disease
by Yinpeng Guo, Hua Huo, Yulong Pei, Lan Ma, Shilu Kang, Jiaxin Xu and Aokun Mei
Electronics 2026, 15(9), 1904; https://doi.org/10.3390/electronics15091904 - 30 Apr 2026
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder primarily caused by the degeneration of dopamine-producing neurons in the substantia nigra, leading to characteristic motor symptoms such as tremors, rigidity, and bradykinesia, as well as non-motor manifestations including depression, sleep disturbances, and speech impairments. Among these symptoms, speech abnormalities affect approximately 90% of individuals with PD, making acoustic analysis a promising non-invasive cue for early detection. However, subtle speech variations are often imperceptible to the human ear, and speech-only analysis may overlook complementary visual manifestations, such as hypomimia—reduced facial expressivity commonly observed in PD patients. To address these limitations, we propose Parkinson’s Detection via Attentional Fusion Network (PDAF-Net), a novel multimodal deep learning framework for early PD detection that jointly models acoustic and facial dynamic features in a binary classification setting. The proposed architecture consists of a Dual-Stream Feature Encoder (DSFE), with an audio branch based on a one-dimensional convolutional neural network (1D-CNN) and bidirectional long short-term memory (BiLSTM), and a visual branch built upon a two-dimensional convolutional neural network (2D-CNN) and a Transformer encoder. Multimodal integration is achieved through a Cross-Attention-guided Attentional Feature Fusion (CA-AFF) module, which explicitly models bidirectional cross-modal interactions and performs adaptive feature recalibration via an iterative attentional fusion mechanism. We conducted experiments on a self-collected Chinese multimodal dataset comprising 100 PD patients and 100 healthy controls. Although the data are balanced at the subject level, sliding-window segmentation introduces sample-level imbalance; to address this issue, a class-balanced focal loss is employed. Model performance was evaluated using subject-wise five-fold cross-validation. The results demonstrate that PDAF-Net consistently outperforms unimodal baselines across multiple evaluation metrics, achieving an accuracy of 89.3%, an F1-score of 0.884, and an AUC of 0.916. These findings highlight the effectiveness of explicit cross-modal interaction modeling and adaptive feature fusion for improving automated early PD screening in real-world clinical settings. Full article
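The CA-AFF module itself is not specified in the abstract, but the scaled dot-product cross-attention it builds on — queries from one modality attending over keys/values from the other — is standard and can be sketched in plain Python (illustrative only, not the paper's implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention.

    queries come from one modality (e.g. audio frames); keys/values
    from the other (e.g. visual frames). Each argument is a list of
    equal-length vectors; returns one attended vector per query.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wj * v[i] for wj, v in zip(w, values))
                    for i in range(len(values[0]))])
    return out
```

In a fusion network this typically runs in both directions (audio attending to video and vice versa), after which the attended features are recalibrated and merged.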
16 pages, 3309 KB  
Article
Acoustic Streaming-Based 3D Cell Focusing and Plasma Separation
by Jingjing Zheng, Qian Wu, Zhenheng Lin, Xuejia Hu, Liqing Qiao, Genliang Li and Jinkun Luo
Micromachines 2026, 17(5), 560; https://doi.org/10.3390/mi17050560 - 30 Apr 2026
Abstract
Separating plasma from small-volume blood samples is important for rapid blood analysis in point-of-care testing. Microfluidic approaches provide flexible platforms for plasma extraction, but many methods either require complex pretreatment or rely on sheath-assisted or multi-step operations. In this study, we present an acoustofluidic platform that enables sheath-free three-dimensional (3D) focusing of blood cells and downstream plasma extraction in an integrated microchip. The device employs symmetric cavity-trapped bubbles to generate acoustic streaming under acoustic excitation, thereby reconstructing the local flow field and driving suspended cells toward a stable central region of the channel. Based on this mechanism, blood cells are concentrated toward the middle outlet, while plasma is collected from the two side outlets. The device remains operable over a range of inflow conditions through acoustic-voltage adjustment. Using diluted simulated blood samples, the platform achieved a plasma recovery of approximately 71% and a plasma purity of approximately 99%. In addition, cell-viability tests indicated good biocompatibility under the tested operating conditions. Owing to its simple structure, integrated design, and sheath-free operation, this platform shows potential for future miniaturized sample-preparation applications. However, further validation using real whole blood and clinically relevant plasma-quality metrics will be required in future studies. Full article
(This article belongs to the Special Issue Acoustic Microfluidics: Design, Fabrication, and Applications)
29 pages, 4742 KB  
Article
DistSense: A Distributed P2P System for Privacy-Preserving and Robust Audiovisual Activity Recognition in Smart Homes
by José Manuel Torres, Luis P. Mota, Rui S. Moreira, Christophe Soares and Pedro Sobral
Appl. Sci. 2026, 16(9), 4407; https://doi.org/10.3390/app16094407 - 30 Apr 2026
Abstract
Ambient Assisted Living (AAL) systems have become increasingly relevant as aging populations intensify the demand for technologies that promote autonomy, safety, and quality of life. However, the widespread adoption of audiovisual sensing in smart homes raises critical concerns regarding data protection, privacy, and user trust. Ensuring secure processing while maintaining accurate activity recognition remains a key challenge. This work introduces DistSense, a distributed Peer-to-Peer (P2P) system designed to enhance activity detection in domestic environments through collaborative inference among intelligent audiovisual sensors. DistSense prioritizes privacy by performing local processing, sharing only high-level events, and leveraging distributed ledger mechanisms to ensure data integrity and auditability and support cross-device validation. This collaborative strategy reduces false positives caused by occlusions, illumination variability, and acoustic noise. To assess the system, functional tests were conducted for each module, followed by two use cases evaluated in both simulated and real edge hardware environments. The trained models achieved 88% accuracy for audio and 80% for video, and the system demonstrated effective performance in detecting daily activities and domestic hazards under varying noise conditions. Results indicate that DistSense successfully balances security, user acceptance, and inference robustness, positioning it as a viable solution for privacy-preserving activity monitoring in smart home contexts. Full article
26 pages, 1120 KB  
Article
Assisted Navigation for Visually Impaired People Using 3D Audio and Stereoscopic Cameras
by José Francisco Lucio-Naranjo, Daniel Sanaguano Moreno, Roberto A. Tenenbaum, Erick P. Herrera-Granda, Luis Bravo-Moncayo and Henry Paz-Arias
Appl. Sci. 2026, 16(9), 4405; https://doi.org/10.3390/app16094405 - 30 Apr 2026
Abstract
This paper presents a prototype for an assistive navigation system that integrates three-dimensional audio spatialization with computer vision to improve the mobility of visually impaired individuals. The system uses stereoscopic depth perception and real-time point cloud reconstruction alongside a modified YOLO convolutional neural network for object detection and auralization techniques with head-related impulse response functions. Twenty participants (ten who were visually impaired and ten who were blindfolded) navigated controlled obstacle scenarios while wearing a chest-mounted camera and specialized headphones. The prototype achieved 95.00% precision in object classification across eleven obstacle categories and a 33.19% recall, indicating conservative detection behavior. The processing efficiency was 0.042489 s per image, which exceeds real-time requirements. User evaluation revealed an average collision rate of 0.5 per scenario and a mean completion time of 48 s. Statistical analysis showed no significant difference in collision rates between participant groups (p = 0.172), though visually impaired participants demonstrated faster completion times (p = 0.003). Integrating segmented, convolution-based audio processing with stereoscopic depth estimation enabled users to perceive obstacle locations through spatial sound cues, establishing a foundation for advancing assistive navigation technologies without extensive training. Full article
(This article belongs to the Section Acoustics and Vibrations)
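The auralization step described above amounts to convolving the mono source signal with a pair of head-related impulse responses (HRIRs) for the target direction. A minimal direct-form sketch — the HRIR values in the example are made-up placeholders, not measured responses:

```python
def convolve(signal, ir):
    """Direct-form convolution of a mono signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def auralize(mono, hrir_left, hrir_right):
    """Render a mono source at one fixed direction as a binaural pair.

    hrir_left / hrir_right are the head-related impulse responses for
    the chosen direction (placeholder values below, not measured data).
    """
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Source to the listener's left: stronger, earlier left-ear response.
left, right = auralize([1.0, 0.5], [0.9, 0.3], [0.0, 0.4, 0.1])
```

Real systems switch among many measured HRIR pairs (one per direction) and use block-based FFT convolution to meet the real-time budget the paper reports.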
29 pages, 1745 KB  
Article
Research on the Characteristics and Comprehensive Mitigation Measures of Vibration and Acoustic Environment in Building Clusters Above Metro Depots
by Jian Li, Xiaohong Xue, Jian Wang, Wanliang Kang, Boyang Zhang, Zhengye Huang, Yuan Mei and Xin Ke
Buildings 2026, 16(9), 1794; https://doi.org/10.3390/buildings16091794 - 30 Apr 2026
Abstract
Taking a metro over-track TOD (Transit-Oriented Development) project in Chongqing as the engineering background, this study adopts a combined research approach integrating field measurements and numerical simulation. A coupled finite element model of the train–track–tunnel–soil–building system and a regional acoustic model are established to systematically reveal the vibration response characteristics of building clusters above the depot induced by metro operation, the propagation mechanism of structure-borne secondary noise, and the distribution patterns of the regional acoustic environment, while identifying the areas where vibration and noise exceed the prescribed limits as well as the key influencing factors. On this basis, following a hierarchical mitigation strategy consisting of source control, path interruption, and receiver protection, an integrated control scheme is proposed through the coordinated application of track vibration reduction, building vibration isolation, acoustic environment optimization, and building sound insulation. The engineering applicability and control effectiveness of the proposed scheme are further verified by numerical simulation. The findings of this study can provide theoretical support and technical reference for the refined design and integrated prevention and control of vibration and acoustic environments in similar metro over-track development projects. Full article
31 pages, 5974 KB  
Article
CUCT-Net: End-to-End Signal-to-Image Learning for Quantized Speed-of-Sound Estimation and Tissue Segmentation in Ultrasound Computed Tomography
by Qinhan Gao and Mohamed Khaled Almekkawy
Sensors 2026, 26(9), 2801; https://doi.org/10.3390/s26092801 - 30 Apr 2026
Abstract
Objective: Traditional Full Waveform Inversion (FWI) methods for Ultrasound Computed Tomography (UCT) are computationally expensive and can be sensitive to strong acoustic contrasts. In this work, we propose the Multi-Channel Transducer Network (CUCT-Net), a deep learning framework that directly maps received ultrasound signals to image-space outputs for quantized speed-of-sound (SoS) estimation and for direct tissue-level segmentation over both low- and high-contrast regions, enabling end-to-end recovery of both contrast-driven and anatomically meaningful structures from raw measurements. Method: CUCT-Net uses a multi-input encoder–decoder architecture that maps raw multi-static UCT measurements to quantized SoS (or tissue-class) maps without requiring an initial guess or iterative optimization. Parallel per-transducer encoders extract view-specific features that are fused and refined by a decoder, with Shift Units (SU) used to enhance fine-scale feature modeling under sparse sensing. Experiments are performed on k-Wave simulations using (i) Shepp–Logan-inspired disc phantoms with Original/Distorted/Mixed variants and (ii) DBB-derived anatomical brain phantoms, under clean and noisy measurement conditions. Results: The proposed network achieves accurate quantized SoS estimation and direct tissue-level segmentation across synthetic and anatomically derived phantom experiments. Strong robustness to noise is demonstrated through transfer learning. Compared with FWI, CUCT-Net significantly reduces computational cost while maintaining stable performance under reduced-sensor conditions for quantized SoS estimation and complex tissue heterogeneity for segmentation. Conclusions: CUCT-Net formulates UCT as a direct signal-to-image learning problem that supports both quantized SoS estimation and tissue-level segmentation. By learning an end-to-end mapping from raw ultrasound measurements to quantized SoS or tissue representations, the proposed framework bypasses iterative inversion and achieves efficient and robust performance under reduced-sensor and strong-contrast conditions. The multi-input architecture enables effective integration of information from multiple transducers, demonstrating the feasibility and potential of data-driven end-to-end quantized SoS estimation and tissue segmentation for UCT. Full article
(This article belongs to the Section Physical Sensors)
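"Quantized" SoS estimation means the network predicts discrete speed-of-sound classes rather than a continuous map. The underlying binning of a continuous map into classes can be sketched as follows — the class edges here are illustrative guesses, not the paper's:

```python
# Illustrative speed-of-sound class edges in m/s (assumed values, not
# taken from the CUCT-Net paper): water-like / soft-tissue / bone-like.
SOS_EDGES = [1480.0, 1600.0]

def quantize_sos(sos_map):
    """Map each continuous SoS value (m/s) to a discrete class index."""
    def classify(v):
        for idx, edge in enumerate(SOS_EDGES):
            if v < edge:
                return idx
        return len(SOS_EDGES)
    return [[classify(v) for v in row] for row in sos_map]
```

Predicting these indices directly turns the inverse problem into per-pixel classification, which is what lets the same decoder head serve both SoS estimation and tissue segmentation.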
13 pages, 7866 KB  
Article
A New Type of Ultrasonic Gyroscopic Sensor Based on a Solid-State Standing-Wave Vibrator: Towards Shock-Resistant Design
by Michail Shevelko, Andrey Baranov, Ekaterina Popkova, Yasemin Staroverova, Alexander Kukaev and Sergey Shevchenko
Sensors 2026, 26(9), 2798; https://doi.org/10.3390/s26092798 - 30 Apr 2026
Abstract
This paper presents a new type of ultrasonic gyroscopic sensor based on a solid-state standing-wave vibrator, which is promising for shock-resistant applications. A theoretical model of the proposed design, which is a layered structure, and the numerical simulation of its frequency response using the developed software are presented. A test sample of the novel sensing element was made and experimental studies of its frequency response were conducted. The results showed a high correlation between the resonant frequencies both for the real sample research and numerical modeling; thus, the validity of the theoretical model was confirmed. The laboratory investigation of the developed sensing element on a test bench under rotating conditions was carried out and a shift in the standing-wave amplitude proportional to the angular velocity of rotation was revealed; thus, an informative signal for this type of gyroscopic sensor was found. It is shown that the amplitude of the output signal of the new sensor on standing waves compares favorably with the signal levels reported for similar traveling-wave solutions in previous studies. The optimization strategies for the new sensor’s design and operating mode to increase signal to noise ratio are also identified. Thus, the potential of using the developed solid-state standing-wave vibrator as a shock-resistant ultrasonic gyroscopic sensor is supported. Full article
(This article belongs to the Special Issue Ultrasonic Sensors and Ultrasonic Signal Processing)
33 pages, 2780 KB  
Review
System-Level Harmonic NVH Engineering in Electric Drivetrains: A State-of-the-Art Review from Gear Microgeometry to Sound Branding
by Krisztian Horvath
World Electr. Veh. J. 2026, 17(5), 240; https://doi.org/10.3390/wevj17050240 - 30 Apr 2026
Abstract
Electric vehicles (EVs) have fundamentally changed the noise, vibration, and harshness (NVH) landscape of automotive powertrains. In the absence of masking internal-combustion-engine noise, harmonic components such as gear whine, electric-motor orders, and inverter-related tones become more perceptible and more critical to vehicle refinement. This review synthesizes the current state of the art in harmonic NVH engineering for electric drivetrains, focusing on the interactions between gear geometry, manufacturing variability, electromechanical coupling, structural transfer, and human sound perception. Classical mechanisms of gear-mesh excitation are revisited together with emerging EV-specific challenges, including long-wavelength flank deviations, ghost orders, lightweight housing dynamics, and psychoacoustic sound-quality requirements. The review further examines recent progress in predictive and data-driven approaches, including machine-learning-based gear-noise modeling, digital-twin concepts, and virtual NVH assessment workflows. Overall, the literature shows that harmonic NVH engineering in EVs is evolving from a conventional gear-noise problem into a multidisciplinary system-level task integrating gear dynamics, manufacturing science, structural acoustics, electric-drive control, psychoacoustics, and data-driven optimization. This review provides a structured synthesis of these developments and identifies key research gaps and future directions for the next generation of refined electric drivetrains. Full article
(This article belongs to the Section Propulsion Systems and Components)
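The gear-whine and motor-order excitations discussed in this abstract occur at predictable multiples of the shaft rotation frequency. A minimal sketch of computing those order frequencies, with illustrative values (tooth count, pole pairs, and speed are not taken from the article):

```python
def shaft_orders(rpm, tooth_count, pole_pairs, n_harmonics=3):
    """Gear-mesh and motor electrical-order frequencies (Hz) for one shaft.

    Gear-mesh orders lie at k * Z * f_shaft; motor electrical orders at
    k * p * f_shaft, where Z is the tooth count and p the pole-pair count.
    """
    f_shaft = rpm / 60.0  # shaft rotation frequency in Hz
    gear_mesh = [k * tooth_count * f_shaft for k in range(1, n_harmonics + 1)]
    motor = [k * pole_pairs * f_shaft for k in range(1, n_harmonics + 1)]
    return gear_mesh, motor


# Example: 3000 rpm, 23-tooth pinion, 4 pole pairs
gear_mesh, motor = shaft_orders(3000, 23, 4)
# gear-mesh harmonics: 1150, 2300, 3450 Hz; motor orders: 200, 400, 600 Hz
```

In an EV, with no combustion noise to mask them, these discrete tones are exactly the "orders" tracked in waterfall and order-cut analyses during run-up tests.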

17 pages, 4327 KB  
Article
An Efficient High-Frequency Design Methodology for APU Inlet Mufflers Based on Axial Segmentation and Optimal Frequency Selection
by Dongwen Xue, Qun Yan, Yong Zheng, Jiafeng Yang and Yonghui Chen
Aerospace 2026, 13(5), 420; https://doi.org/10.3390/aerospace13050420 - 30 Apr 2026
Abstract
The International Civil Aviation Organization (ICAO) sets strict limits for aircraft ramp noise, a key source of which is Auxiliary Power Unit (APU) inlet noise. This paper presents a systematic and computationally efficient design methodology for APU inlet mufflers. The high-frequency noise necessitates validating a single-degree-of-freedom liner impedance model up to 10,000 Hz. The core innovation overcomes prohibitive full-passage simulation costs (days) by optimally selecting attenuation center frequencies from the source spectrum and implementing an axially segmented design. This approach enables efficient, targeted optimization (minutes per case) and leverages acoustic mode scattering at segment interfaces to enhance overall attenuation. The design is verified via high-fidelity, full-flow-path simulation. Experimental validation under various operating conditions shows good agreement with predictions, achieving approximately 9 dB reduction in overall A-weighted Sound Power Level (OASPL) with consistent performance. The results demonstrate the feasibility and effectiveness of the proposed rapid, precise, and efficient design framework.
(This article belongs to the Topic Advances in Aeroacoustics Research in Wind Engineering)
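The reported 9 dB reduction is in overall A-weighted sound power level, i.e., an energetic sum of A-weighted band levels. A minimal sketch of that metric, using the standard IEC 61672 A-weighting curve (the band levels below are illustrative, not the article's data):

```python
import math


def a_weight_db(f):
    """IEC 61672 A-weighting correction in dB at frequency f (Hz)."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    # The +2.0 dB offset normalizes the weighting to ~0 dB at 1 kHz.
    return 20.0 * math.log10(ra) + 2.0


def oaspl_a(band_freqs, band_levels_db):
    """Overall A-weighted level from per-band levels (energetic sum)."""
    total = sum(
        10.0 ** ((level + a_weight_db(f)) / 10.0)
        for f, level in zip(band_freqs, band_levels_db)
    )
    return 10.0 * math.log10(total)


# Example: three octave bands at 1, 2, and 4 kHz with equal 80 dB levels
level = oaspl_a([1000.0, 2000.0, 4000.0], [80.0, 80.0, 80.0])
```

A uniform 9 dB drop across all bands lowers the overall A-weighted level by the same 9 dB, since the energetic sum scales linearly with band energy.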

17 pages, 12453 KB  
Article
Design and Fabrication of a Chitosan-Based Diaphragm Digital Stethoscope for Heart Sound Acquisition
by María Claudia Rivas Ebner, Seong-Wan Kim, Giyeon Yu, Emmanuel Ackah, Hyun-Woo Jeong, Kyung Min Byun, Young-Seek Seok and Seung Ho Choi
Micromachines 2026, 17(5), 555; https://doi.org/10.3390/mi17050555 - 30 Apr 2026
Abstract
Cardiac auscultation remains a widely used non-invasive method for assessing cardiac function; however, conventional acoustic stethoscopes are limited by subjective interpretation and lack of digital signal-handling capabilities. This study presents the design and fabrication of a chitosan-based diaphragm digital stethoscope using a biopolymer-derived acoustic interface. Chitosan was extracted from mealworm larvae shells through sequential chemical processing and subsequently processed into a glycerol-plasticized film via solution casting to obtain a flexible diaphragm. The mechanical properties of the diaphragm were evaluated to assess its suitability for acoustic applications. The diaphragm was mechanically coupled to a piezoelectric sensor and integrated into a custom 3D-printed chest piece connected to a microcontroller-based acquisition system. Heart sound signals were acquired from four conventional auscultation sites (aortic, pulmonic, tricuspid, and mitral regions). The recorded signals were processed using band-pass filtering, envelope extraction, and time–frequency analysis to visualize waveform morphology and frequency content. The signals obtained exhibited temporal and spectral features consistent with reported phonocardiography characteristics, including identifiable S1 and S2 components. These results demonstrate the feasibility of using chitosan-based diaphragm materials for heart sound acquisition in a digital stethoscope configuration, providing a low-complexity platform for further development of biopolymer-based acoustic sensing devices.
(This article belongs to the Section B:Biology and Biomedicine)
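The envelope-extraction step mentioned in the abstract is commonly implemented as rectification followed by short-window smoothing, which makes the S1 and S2 bursts stand out as peaks. A minimal sketch of that step alone (the authors' actual pipeline, including the preceding band-pass filter, is not reproduced here; window length is an assumed illustrative value):

```python
def envelope(signal, fs, window_ms=20.0):
    """Amplitude envelope via full-wave rectification and a moving average.

    signal: sequence of samples; fs: sampling rate in Hz.
    Returns a list the same length as the input.
    """
    n = max(1, int(fs * window_ms / 1000.0))  # smoothing window in samples
    rect = [abs(x) for x in signal]           # full-wave rectification
    out = []
    acc = 0.0
    for i, v in enumerate(rect):
        acc += v
        if i >= n:                            # drop the sample leaving the window
            acc -= rect[i - n]
        out.append(acc / min(i + 1, n))       # running mean of the window
    return out
```

Peaks in this envelope can then be picked and paired to segment the cardiac cycle into S1 and S2 events before any time–frequency analysis.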
