Search Results (145)

Search Parameters:
Keywords = loudspeaker

14 pages, 1974 KiB  
Article
Effect of Transducer Burn-In on Subjective and Objective Parameters of Loudspeakers
by Tomasz Kopciński, Bartłomiej Kruk and Jan Kucharczyk
Appl. Sci. 2025, 15(15), 8425; https://doi.org/10.3390/app15158425 - 29 Jul 2025
Viewed by 199
Abstract
Speaker burn-in is a controversial practice in the audio world, based on the belief that new devices reach optimal performance only after a certain period of use. Supporters claim it improves component flexibility, reduces initial distortion, and enhances sound quality—especially in the low-frequency range. Critics, however, emphasize the lack of scientific evidence for audible changes and point to the placebo effect in subjective listening tests. They argue that modern manufacturing and strict quality control minimize differences between new and “burned-in” devices. This study cites a standard describing a preliminary burn-in procedure, specifying the exact conditions and duration required. Objective tests revealed slight changes in speaker impedance and amplitude response after burn-in, but these differences are inaudible to the average listener. Notably, significant variation was observed between speakers of the same series, attributed to production line tolerances rather than use-related changes. The study also explored aging processes in speaker materials to better understand potential long-term effects. However, subjective listening tests showed that listeners rated the sound consistently across all test cases, regardless of whether the speaker had undergone burn-in. Overall, while minor physical changes may occur, their audible impact is negligible, especially for non-expert users. Full article

32 pages, 6763 KiB  
Article
Noise Levels Due to Commercial and Leisure Activities in Urban Areas: Experimental Validation of a Numerical Model Fed with Crowd Density Estimation Using Computer Vision
by Óscar Ramón-Turner, Jacob D. R. Bordón, Asunción González-Rodríguez, Javier Lorenzo-Navarro, Modesto Castrillón-Santana, Guillermo M. Álamo, Román Quevedo-Reina, Carlos Romero-Sánchez, Antonio T. Ester-Sánchez, Cristina Medina, Fidel García, Orlando Maeso and Juan J. Aznárez
Sensors 2025, 25(12), 3604; https://doi.org/10.3390/s25123604 - 8 Jun 2025
Viewed by 539
Abstract
Noise levels of anthropogenic origin in urban environments have reached thresholds that pose serious public health and quality of life problems. This work aims to examine these noise levels, the underlying causes of their increase, and possible solutions through the implementation of predictive models. To address this problem, as a first step, a simplified mathematical model capable of accurately predicting anthropogenic noise levels in a given area is developed. As variables, this model considers the crowd density, estimated using an Artificial Neural Network (ANN) capable of detecting people in images, as well as the geometric and architectural characteristics of the environment. To verify the model, several protocols were developed for collecting experimental data. In a first phase, these experimental measurements were carried out in controlled environments, using loudspeakers as noise sources. In a second phase, the measurements were carried out in real environments, accounting for the specific noise sources present in each setting. The difference in sound levels between the model and the measurements is less than 3 dB in 75% of the cases and less than 3.5 dB in 100% of the cases examined in a controlled environment. In the real-world setting, given that the study was carried out on pedestrian streets, the model reproduces most of the noise of anthropogenic origin. Full article
(This article belongs to the Section Intelligent Sensors)
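The validation criterion above reduces to checking what fraction of observation points fall within a dB tolerance of the model prediction. A minimal sketch of that check (the numbers in the usage comment are hypothetical, not the paper's data):

```python
import numpy as np

def within_tolerance_fraction(model_db, measured_db, tol_db=3.0):
    """Fraction of points whose model-vs-measurement level difference
    stays within a given tolerance (both inputs in dB)."""
    diff = np.abs(np.asarray(model_db, float) - np.asarray(measured_db, float))
    return float(np.mean(diff <= tol_db))

# Hypothetical example: 3 of the 4 points fall within 3 dB of the model.
# within_tolerance_fraction([60.1, 62.3, 58.7, 65.0], [61.5, 60.0, 59.1, 69.2])  # -> 0.75
```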

14 pages, 1549 KiB  
Article
Equalizing the In-Ear Acoustic Response of Piezoelectric MEMS Loudspeakers Through Inverse Transducer Modeling
by Oliviero Massi, Riccardo Giampiccolo and Alberto Bernardini
Micromachines 2025, 16(6), 655; https://doi.org/10.3390/mi16060655 - 29 May 2025
Viewed by 2601
Abstract
Micro-Electro-Mechanical Systems (MEMS) loudspeakers are attracting growing interest as alternatives to conventional miniature transducers for in-ear audio applications. However, their practical deployment is often hindered by pronounced resonances in their frequency response, caused by the mechanical and acoustic characteristics of the device structure. To mitigate these limitations, we present a model-based digital signal equalization approach that leverages an equivalent circuit model of the considered MEMS loudspeaker. The method relies on constructing an inverse circuital model based on the nullor, which is implemented in the discrete-time domain using Wave Digital Filters (WDFs). This inverse system is employed to pre-process the input voltage signal, effectively compensating for the transducer frequency response. The experimental results demonstrate that the proposed method significantly flattens the Sound Pressure Level (SPL) over the 100 Hz–10 kHz frequency range, with a maximum deviation of less than 5 dB from the target flat frequency response. Full article
(This article belongs to the Special Issue Exploration and Application of Piezoelectric Smart Structures)
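The paper builds its equalizer as an inverse nullor-based circuit model realized with Wave Digital Filters. As a much simpler point of reference, the sketch below shows a generic regularized frequency-domain inverse filter for a measured transducer response; the function name, regularization value, and file name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def inverse_eq_filter(measured_ir, n_fft=4096, reg=1e-3):
    """Regularized frequency-domain inverse filter for a measured transducer
    impulse response, so that (inverse filter * transducer) is roughly flat.

    reg damps the inversion at deep response notches (Tikhonov-style), at the
    cost of leaving some residual deviation there.
    """
    H = np.fft.rfft(measured_ir, n_fft)          # transducer frequency response
    H_inv = np.conj(H) / (np.abs(H) ** 2 + reg)  # regularized inverse response
    return np.fft.irfft(H_inv, n_fft)            # inverse-filter impulse response

# Usage sketch (hypothetical file and signal names):
# eq = inverse_eq_filter(np.load("mems_ir.npy"))
# drive_signal = np.convolve(audio_in, eq)       # pre-equalized voltage signal
```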

16 pages, 5691 KiB  
Article
Adaptive Binaural Cue-Based Amplitude Panning in Irregular Loudspeaker Configurations
by Shang Zhao, Yunjia Zhou and Zhibin Lin
Appl. Sci. 2025, 15(9), 4689; https://doi.org/10.3390/app15094689 - 23 Apr 2025
Viewed by 372
Abstract
The amplitude panning method is a prevalent technique for controlling sound image directions in stereophonic and multichannel surround systems. However, conventional methods typically achieve accurate localization only with standardized loudspeaker configurations, limiting their applicability in diverse scenarios. This paper presents an adaptive amplitude panning algorithm based on binaural cues, tailored for accurate azimuthal sound image localization in irregular loudspeaker configurations. Our two-stage approach first employs inverse filtering to equalize the loudspeaker magnitude responses. Subsequently, the gains and inter-loudspeaker time delay are optimized based on interaural time difference and interaural cross-correlation values derived from binaural room impulse responses measured with a dummy head. Objective and subjective evaluations on a stereo system demonstrate that the proposed method significantly improves azimuth localization accuracy compared to existing techniques. Full article
(This article belongs to the Section Acoustics and Vibrations)
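For orientation, the classical stereophonic tangent law that such adaptive methods generalize can be sketched in a few lines. The paper itself optimizes gains and delays from measured binaural room impulse responses; this baseline is only a reference point, not the proposed algorithm.

```python
import numpy as np

def tangent_law_gains(target_deg, base_deg=30.0):
    """Classical stereophonic tangent-law amplitude panning.

    Returns (g_left, g_right), power-normalized, for a desired image azimuth
    within a symmetric +/- base_deg loudspeaker pair (positive = left).
    """
    ratio = np.tan(np.radians(target_deg)) / np.tan(np.radians(base_deg))
    g_l, g_r = 1.0 + ratio, 1.0 - ratio
    norm = np.hypot(g_l, g_r)
    return g_l / norm, g_r / norm

# tangent_law_gains(0.0)   # ~ (0.707, 0.707): image centered between the speakers
# tangent_law_gains(30.0)  # -> (1.0, 0.0): image at the left loudspeaker
```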

17 pages, 10294 KiB  
Article
Virtual Sound Source Construction Based on Direct-to-Reverberant Ratio Control Using Multiple Pairs of Parametric-Array Loudspeakers and Conventional Loudspeakers
by Masato Nakayama, Takuma Ekawa, Toru Takahashi and Takanobu Nishiura
Appl. Sci. 2025, 15(7), 3744; https://doi.org/10.3390/app15073744 - 28 Mar 2025
Viewed by 586
Abstract
We propose a new method for constructing a virtual sound source (VSS) based on the direct-to-reverberant ratio (DRR) of room impulse responses (RIRs), using multiple pairs of parametric-array loudspeakers (PALs) and conventional loudspeakers (hereafter referred to simply as loudspeakers). In this paper, we focus on the differences in the DRRs of the RIRs generated by PALs and loudspeakers. The DRR of an RIR is recognized as a key cue for distance perception. A PAL can achieve super-directivity using an array of ultrasonic transducers. Its RIR exhibits a high DRR, characterized by a large-amplitude direct wave and low-amplitude reverberations. Consequently, a PAL makes the VSS appear to be closer to the listener. In contrast, a loudspeaker causes the VSS to be perceived as farther away because the sound it emits has a low DRR. The proposed method leverages the differences in the DRRs of the RIRs between PALs and loudspeakers. It controls the perceived distance of the VSS by reproducing the desired DRR at the listener’s position through a weighted combination of the RIRs emitted from PALs and loudspeakers into the air. Additionally, the proposed method adjusts the direction of the VSS using vector-based amplitude panning (VBAP). Finally, we have confirmed the effectiveness of the proposed method through evaluation experiments. Full article
(This article belongs to the Special Issue Spatial Audio and Sound Design)
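The direct-to-reverberant ratio that the method controls can be estimated from a measured RIR by splitting the energy around the direct-sound peak. A minimal sketch, with the 2.5 ms direct window as an assumed (not paper-specified) choice:

```python
import numpy as np

def direct_to_reverberant_ratio(rir, fs, direct_window_ms=2.5):
    """Estimate the DRR (in dB) of a room impulse response: energy inside a
    short window after the direct-sound peak versus all later energy.
    The 2.5 ms window length is a common but arbitrary choice."""
    peak = int(np.argmax(np.abs(rir)))
    n_win = int(direct_window_ms * 1e-3 * fs)
    direct = np.sum(rir[peak:peak + n_win] ** 2)
    reverberant = np.sum(rir[peak + n_win:] ** 2)
    return 10.0 * np.log10(direct / reverberant)
```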

18 pages, 14199 KiB  
Article
Enhanced Virtual Sound Source Construction Based on Wave Field Synthesis Using Crossfade Processing with Electro-Dynamic and Parametric Loudspeaker Arrays
by Yuting Geng, Ayano Hirose, Mizuki Iwagami, Masato Nakayama and Takanobu Nishiura
Appl. Sci. 2024, 14(24), 11911; https://doi.org/10.3390/app142411911 - 19 Dec 2024
Cited by 1 | Viewed by 1096
Abstract
Wave field synthesis (WFS) can be used to construct virtual sound sources (VSSs) with a loudspeaker array. Conventional methods using a single type of loudspeaker showed limited performance in distance perception. For example, WFS with electro-dynamic loudspeakers (EDLs) has the advantage of constructing VSSs near the loudspeaker, while WFS with parametric array loudspeakers (PALs) has the advantage of constructing VSSs far from the loudspeaker. In this paper, we propose a VSS construction method utilizing crossfade processing with both EDLs and PALs. The contribution of EDLs and PALs was balanced to better synthesize the target sound field. We carried out experiments to evaluate the sound pressure, frequency characteristic, and sound image perception. The experimental results demonstrated that the proposed method can enhance these aspects of the VSS. Full article
(This article belongs to the Special Issue Applied Audio Interaction)
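The abstract does not specify the crossfade law. Purely as an illustration of distance-dependent blending between EDL and PAL contributions, one might weight the two arrays as below; the transition distances and the linear ramp are assumptions, not the authors' design.

```python
import numpy as np

def crossfade_weights(vss_distance_m, near_m=0.5, far_m=2.0):
    """Illustrative distance-dependent crossfade between an electro-dynamic
    loudspeaker (EDL) array and a parametric-array loudspeaker (PAL) array:
    EDLs favored for nearby virtual sources, PALs for distant ones, with a
    linear blend in between. Transition distances here are assumptions."""
    x = np.clip((vss_distance_m - near_m) / (far_m - near_m), 0.0, 1.0)
    return 1.0 - x, x  # (w_edl, w_pal)

# crossfade_weights(0.3)   # -> (1.0, 0.0): EDLs only, very close virtual source
# crossfade_weights(1.25)  # -> (0.5, 0.5): equal blend at the midpoint
```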

12 pages, 1648 KiB  
Article
How Does Deep Neural Network-Based Noise Reduction in Hearing Aids Impact Cochlear Implant Candidacy?
by Aniket A. Saoji, Bilal A. Sheikh, Natasha J. Bertsch, Kayla R. Goulson, Madison K. Graham, Elizabeth A. McDonald, Abigail E. Bross, Jonathan M. Vaisberg, Volker Kühnel, Solveig C. Voss, Jinyu Qian, Cynthia H. Hogan and Melissa D. DeJong
Audiol. Res. 2024, 14(6), 1114-1125; https://doi.org/10.3390/audiolres14060092 - 13 Dec 2024
Cited by 1 | Viewed by 2474
Abstract
Background/Objectives: Adult hearing-impaired patients qualifying for cochlear implants typically exhibit less than 60% sentence recognition under the best hearing aid conditions, either in quiet or noisy environments, with speech and noise presented through a single speaker. This study examines the influence of deep neural network-based (DNN-based) noise reduction on cochlear implant evaluation. Methods: Speech perception was assessed using AzBio sentences in both quiet and noisy conditions (multi-talker babble) at 5 and 10 dB signal-to-noise ratios (SNRs) through one loudspeaker. Sentence recognition scores were measured for 10 hearing-impaired patients using three hearing aid programs: calm situation, speech in noise, and spheric speech in loud noise (DNN-based noise reduction). Speech perception results were compared to bench analyses comprising the phase inversion technique, employed to predict SNR improvement, and the Hearing-Aid Speech Perception Index (HASPI v2), utilized to predict speech intelligibility. Results: The spheric speech in loud noise program improved speech perception by 20 to 32 percentage points compared to the calm situation program. Thus, DNN-based noise reduction can improve speech perception in noisy environments, potentially reducing the need for cochlear implants in some cases. The phase inversion method showed a 4–5 dB SNR improvement for the DNN-based noise reduction program compared to the other two programs. HASPI v2 predicted slightly better speech intelligibility than was measured in this study. Conclusions: DNN-based noise reduction might make it difficult for some patients with significant residual hearing to qualify for cochlear implantation, potentially delaying its adoption or eliminating the need for it entirely. Full article
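The phase inversion technique mentioned in the bench analysis estimates the output SNR from two recordings, one with the noise polarity inverted. A minimal sketch under the usual assumptions (time-aligned recordings, approximately linear processing); it is not the paper's specific bench setup.

```python
import numpy as np

def phase_inversion_snr(rec_plus, rec_minus):
    """Phase-inversion estimate of the output SNR (in dB).

    rec_plus  : device output recorded with speech + noise
    rec_minus : output recorded with speech + polarity-inverted noise
    Summing the two recordings cancels the noise, differencing cancels the
    speech, assuming time-aligned recordings and roughly linear processing."""
    speech = 0.5 * (rec_plus + rec_minus)
    noise = 0.5 * (rec_plus - rec_minus)
    return 10.0 * np.log10(np.sum(speech ** 2) / np.sum(noise ** 2))
```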

13 pages, 16809 KiB  
Article
Determination of the Sound Intensity Vector Field from Synchronized Sound Pressure Waveforms
by Witold Mickiewicz and Michał Raczyński
Appl. Sci. 2024, 14(23), 11299; https://doi.org/10.3390/app142311299 - 4 Dec 2024
Viewed by 1080
Abstract
Visualization of vector acoustic fields using sound intensity measurements is a very interesting field of acoustics and has numerous applications. Unfortunately, sound intensity measurement itself requires specialized measuring instruments (pressure–pressure (PP) or pressure–velocity (PU) probes). This article presents a simplified and low-cost method of sound intensity measurement based on sound pressure measurements made synchronously. This allows for the use of a single microphone and eliminates one of the main problems of the classical PP probe, namely the phase mismatch between two different microphones. The described method was used to visualize the vector acoustic field in front of an active loudspeaker. The results were compared qualitatively and quantitatively with measurements made with a commercial PU intensity probe. The results obtained showed that the proposed method achieves the same level of metrological accuracy. Full article
(This article belongs to the Section Acoustics and Vibrations)
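For context, the classical pressure–pressure intensity estimate that the proposed single-microphone method relates to can be written in a few lines (finite-difference pressure gradient plus time-integrated particle velocity). This is the textbook PP formulation, not the authors' synchronized-waveform procedure.

```python
import numpy as np

def pp_intensity(p1, p2, fs, spacing_m, rho=1.204):
    """Time-averaged active sound intensity from two synchronized pressure
    waveforms (classical pressure-pressure two-microphone estimate).

    p1, p2    : pressure signals at two closely spaced points [Pa]
    spacing_m : microphone spacing along the measurement axis [m]
    rho       : air density [kg/m^3]"""
    p_mid = 0.5 * (p1 + p2)            # pressure at the midpoint
    grad = (p2 - p1) / spacing_m       # finite-difference pressure gradient
    u = -np.cumsum(grad) / (rho * fs)  # particle velocity by time integration
    return float(np.mean(p_mid * u))   # active intensity component [W/m^2]
```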

46 pages, 2469 KiB  
Review
A Review on Head-Related Transfer Function Generation for Spatial Audio
by Valeria Bruschi, Loris Grossi, Nefeli A. Dourou, Andrea Quattrini, Alberto Vancheri, Tiziano Leidi and Stefania Cecchi
Appl. Sci. 2024, 14(23), 11242; https://doi.org/10.3390/app142311242 - 2 Dec 2024
Viewed by 5731
Abstract
A head-related transfer function (HRTF) is a mathematical model that describes the acoustic path between a sound source and a listener’s ear. HRTFs play a crucial role in creating immersive audio experiences through headphones or loudspeakers using binaural synthesis techniques. HRTF measurements can be conducted either with standardised mannequins or with in-ear microphones on real subjects. However, various challenges arise from, for example, individual differences in head shape, pinnae geometry, and torso dimensions, as well as from the extensive number of measurements required for optimal audio immersion. To address these issues, numerous methods have been developed to generate new HRTFs from existing data or through computer simulations. This review paper provides an overview of the current approaches and technologies for generating, adapting, and optimising HRTFs, with a focus on physical modelling, anthropometric techniques, machine learning methods, interpolation strategies, and their practical applications. Full article
(This article belongs to the Special Issue Spatial Audio and Sound Design)
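Binaural synthesis with HRTFs ultimately amounts to convolving a source signal with the left- and right-ear head-related impulse responses for the desired direction. A minimal static sketch; real renderers additionally interpolate between measured directions and handle head tracking, which is what much of the reviewed work addresses.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono source at one fixed direction by convolving it with the
    left/right head-related impulse responses (HRIRs) for that direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # 2-channel binaural signal
```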

15 pages, 772 KiB  
Article
Use of Mobile Phones and Radiofrequency-Emitting Devices in the COSMOS-France Cohort
by Isabelle Deltour, Florence Guida, Céline Ribet, Marie Zins, Marcel Goldberg and Joachim Schüz
Int. J. Environ. Res. Public Health 2024, 21(11), 1514; https://doi.org/10.3390/ijerph21111514 - 14 Nov 2024
Viewed by 1623
Abstract
COSMOS-France is the French part of the COSMOS project, an international prospective cohort study that investigates whether the use of mobile phones and other wireless technologies is associated with health effects and symptoms (cancers, cardiovascular diseases, neurologic pathologies, tinnitus, headaches, or sleep and mood disturbances). Here, we provide the first descriptive results of COSMOS-France, a cohort nested in the general population-based cohort of adults named Constances. Methods: A total of 39,284 Constances volunteers were invited to participate in the COSMOS-France study during the pilot (2017) and main recruitment phase (2019). Participants were asked to complete detailed questionnaires on their mobile phone use, health conditions, and personal characteristics. We examined the association between mobile phone use, including usage for calls and Voice over Internet Protocol (VoIP), cordless phone use, and Wi-Fi usage with age, sex, education, smoking status, body mass index (BMI), and handedness. Results: The participation rate was 48.4%, resulting in 18,502 questionnaires in the analyzed dataset. Mobile phone use was reported by 96.1% (N = 17,782). Users reported typically calling 5–29 min per week (37.1%, N = 6600), making one to four calls per day (52.9%, N = 9408), using one phone (83.9%, N = 14,921) and not sharing it (80.4% N = 14,295), mostly using the phone on the side of the head of their dominant hand (59.1%, N = 10,300), not using loudspeakers or hands-free kits, and not using VoIP (84.9% N = 15,088). Individuals’ age and sex modified this picture, sometimes markedly. Education and smoking status were associated with ever use and call duration, but neither BMI nor handedness was. Cordless phone use was reported by 66.0% of the population, and Wi-Fi use was reported by 88.4%. Conclusion: In this cross-sectional presentation of contemporary mobile phone usage in France, age and sex were important determinants of use patterns. Full article
(This article belongs to the Special Issue Epidemiology of Lifestyle-Related Diseases)

16 pages, 3085 KiB  
Article
Theoretical and Experimental Assessment of Nonlinear Acoustic Effects through an Orifice
by Elio Di Giulio, Riccardo Di Leva and Raffaele Dragonetti
Acoustics 2024, 6(4), 818-833; https://doi.org/10.3390/acoustics6040046 - 30 Sep 2024
Viewed by 2176
Abstract
Nonlinear acoustic effects become prominent when acoustic waves propagate through an orifice, particularly at higher pressure amplitudes, potentially generating vortex rings and transferring acoustic energy into the flow. This study develops and validates a predictive theoretical model for acoustic behaviour both within and outside an orifice under linear conditions. Using transfer matrices, the model predicts the external acoustic field, while finite element numerical simulations are employed to validate the theoretical predictions in the linear regime. The experimental setup includes an impedance tube with a plate and orifice, supported by a custom-built system, where a loudspeaker generates acoustic waves. A single microphone is used to measure acoustic particle velocity and characterize the phenomenon, enabling the identification of the onset of nonlinearity. The experimental data show good agreement with the linear theoretical predictions. This work represents the first observation of nonlinear effects in a free-field environment within a semi-anechoic chamber, eliminating reflections from external surfaces, and demonstrates the efficacy of a purely acoustic-based system (speaker and two microphones) for evaluating speaker velocity and the resulting velocity within the orifice. Full article

26 pages, 11056 KiB  
Article
Design of Differential Loudspeaker Line Array for Steerable Frequency-Invariant Beamforming
by Yankai Zhang, Qian Xiang and Qiaoxi Zhu
Sensors 2024, 24(19), 6277; https://doi.org/10.3390/s24196277 - 27 Sep 2024
Viewed by 1275
Abstract
Differential beamforming has attracted much research interest since it can utilize an array with a small aperture size to form frequency-invariant beampatterns and achieve high directional gains. It has recently been applied to the loudspeaker line array to produce a broadside frequency-invariant radiation pattern. However, designing steerable frequency-invariant beampatterns for the loudspeaker line array has yet to be explored. This paper proposes a method to design a steerable differential beamformer with a loudspeaker line array. We first determine the target differential beampattern according to the desired direction, the main lobe width, and the beampattern order. Then, we transform the target beampattern into the modal domain for representation. The Jacobi–Anger expansion is subsequently used to design the beamformer so that the resulting beampattern matches the target differential beampattern. Furthermore, based on the criterion of minimizing the mean square error between the synthesized beampattern and the ideal one, a multi-constraint optimization problem, which compromises between robustness and mean square error, is formulated to calculate the optimal weighting vector. Simulations and experimental results show that the proposed method achieves steerable frequency-invariant beamforming from 300 Hz to 4 kHz. Full article
(This article belongs to the Special Issue Signal Detection and Processing of Sensor Arrays)
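The target differential beampatterns such designs start from are commonly parameterized as polynomials in cos(theta). A minimal sketch of evaluating that parameterization; the paper's actual design works in the modal domain via the Jacobi–Anger expansion and is not reproduced here.

```python
import numpy as np

def differential_beampattern(theta_rad, coeffs):
    """Evaluate an N-th order differential target beampattern
    B(theta) = sum_n a_n * cos(theta)**n.

    coeffs = [a_0, ..., a_N]; e.g. [0.5, 0.5] is a first-order cardioid and
    [0.25, 0.75] a first-order hypercardioid."""
    return sum(a * np.cos(theta_rad) ** n for n, a in enumerate(coeffs))

# theta = np.linspace(0.0, 2.0 * np.pi, 361)
# cardioid = differential_beampattern(theta, [0.5, 0.5])  # single null at 180 degrees
```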

12 pages, 721 KiB  
Article
Impact of Reverberation on Speech Perception in Noise in Bimodal/Bilateral Cochlear Implant Users with and without Residual Hearing
by Clara König, Uwe Baumann, Timo Stöver and Tobias Weissgerber
J. Clin. Med. 2024, 13(17), 5269; https://doi.org/10.3390/jcm13175269 - 5 Sep 2024
Cited by 1 | Viewed by 1348
Abstract
(1) Background: The aim of the present study was to assess the impact of reverberation on speech perception in noise and spatial release from masking (SRM) in bimodal or bilateral cochlear implant (CI) users and CI subjects with low-frequency residual hearing using combined electric–acoustic stimulation (EAS). (2) Methods: In total, 10 bimodal, 14 bilateral CI users, 14 EAS users, and 17 normal hearing (NH) controls took part in the study. Speech reception thresholds (SRTs) in unmodulated noise were assessed in a co-located masker condition (S0N0) and with a spatial separation of speech and noise (S0N60), in both free field and a loudspeaker-based room simulation with two different reverberation times. (3) Results: There was a significant detrimental effect of reverberation on SRTs and SRM in all subject groups. A significant difference between the NH group and all the CI/EAS groups was found. There was no significant difference in SRTs between any CI and EAS group. Only NH subjects achieved spatial release from masking in reverberation, whereas no beneficial effect of spatial separation of speech and noise was found in any CI/EAS group. (4) Conclusions: The subject group with electric–acoustic stimulation did not show a superior outcome in terms of speech perception in noise under reverberation when the noise was presented towards the better hearing ear. Full article

15 pages, 3841 KiB  
Article
Dispersion Influence of Electroacoustic Transducer Parameters in the Design Process of Miniature Loudspeaker Arrays and Omnidirectional Sound Sources
by Bartlomiej Chojnacki
Sensors 2024, 24(15), 4958; https://doi.org/10.3390/s24154958 - 31 Jul 2024
Cited by 1 | Viewed by 1169
Abstract
Electroacoustic transducers are among the crucial components used in the construction of loudspeaker arrays, and the dispersion in their parameters may influence the performance of a speaker set. Parametric loudspeaker arrays and omnidirectional sound sources have been used for years; however, the possible influence of transducer manufacturing tolerances on the arrays’ performance has not been investigated. Previous research studied the sources of possible dispersion in acoustic measurements carried out with omnidirectional sources, pointing out that problems with the sound sources may be a significant reason for the limited measurement repeatability observed in standardized procedures. This paper examines several common types of miniature loudspeakers, measuring 10 units of each type and assessing the influence of their parameter dispersion both electrically and acoustically. Numerical simulations of omnidirectional sound sources were performed to assess the sensitivity of array performance to driver dispersion. The results show small-signal parameter dispersion reaching 20% variation, and the acoustic measurements show that nominally identical loudspeakers may differ in sensitivity by up to 4 dB across the 10 tested transducers. The analysis of an example multi-transducer array indicates that a sensitivity dispersion higher than 1 dB may lead to significant misperformance of constructed arrays and to measurement deviations with this type of array. Full article
(This article belongs to the Special Issue Acoustic Sensors and Their Applications—2nd Edition)

22 pages, 6169 KiB  
Article
Design of Robust Broadband Frequency-Invariant Broadside Beampatterns for the Differential Loudspeaker Array
by Yankai Zhang, Hongjian Wei and Qiaoxi Zhu
Appl. Sci. 2024, 14(14), 6383; https://doi.org/10.3390/app14146383 - 22 Jul 2024
Cited by 1 | Viewed by 1249
Abstract
The directional loudspeaker array has various applications due to its capability to direct sound generation towards the target listener and reduce noise pollution. Differential beamforming has recently been applied to the loudspeaker line array to produce a broadside frequency-invariant radiation pattern. However, the existing methods cannot achieve a compromise between robustness and broadband frequency-invariant beampattern preservation. This paper proposes a robust broadband differential beamforming design that allows the loudspeaker line array to radiate broadside frequency-invariant radiation patterns. Specifically, we propose a method to determine the ideal broadside differential beampattern by combining multiple criteria, namely null positions, maximizing the directivity factor, and achieving a desired beampattern with equal sidelobes. We then convert this ideal broadside differential beampattern into a target beampattern in the modal domain and propose a robust modal matching method with Tikhonov regularization to optimize the loudspeaker weights in that domain. Simulations and experiments show improved frequency-invariant broadside beamforming over the 250 Hz–4 kHz frequency range compared with existing modal matching and null-constrained methods. Full article
(This article belongs to the Special Issue Noise Measurement, Acoustic Signal Processing and Noise Control)
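The Tikhonov-regularized modal matching step amounts to a regularized least-squares solve for the loudspeaker weights. A minimal sketch, assuming a known modal transfer matrix A and target modal coefficients b; the names and the regularization constant are illustrative, not values from the paper.

```python
import numpy as np

def tikhonov_weights(A, b, lam=1e-2):
    """Regularized least-squares weights w minimizing ||A w - b||^2 + lam ||w||^2.

    A   : (modes x loudspeakers) matrix mapping weights to modal coefficients
    b   : target modal coefficients of the ideal differential beampattern
    lam : Tikhonov constant trading matching accuracy for robustness"""
    AhA = A.conj().T @ A
    return np.linalg.solve(AhA + lam * np.eye(A.shape[1]), A.conj().T @ b)
```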