Technologies and Applications for Drone Audition

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: 26 March 2025

Special Issue Editors


Dr. Kotaro Hoshiba
Guest Editor
Department of Mechanical Engineering, School of Engineering, Tokyo Institute of Technology, I1-27, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8552, Japan
Interests: acoustic signal processing; acoustic measurement; acoustic imaging; robot audition; drone audition; ultrasonics

Prof. Dr. Makoto Kumon
Guest Editor
Field of Robot, Control and Instrumentation, Division of Environmental Science, Faculty of Advanced Science and Technology, Kumamoto University, Kumamoto, Japan
Interests: autonomous systems; control; dynamics

Special Issue Information

Dear Colleagues,

We are pleased to invite you to contribute to the Special Issue "Technologies and Applications for Drone Audition".

Drones/unmanned aerial vehicles (UAVs) are now used in fields such as environmental monitoring, surveying, transportation, search and rescue, agriculture, and forestry, owing to their ability to fly remotely without a crew and to access hazardous areas safely and easily. To support these applications, advanced measurement and sensing technologies have been studied. Besides camera-based measurement and sensing methods, acoustic scene analysis technologies have attracted attention; this research field is known as "drone audition". Applications of drone audition include audio-visual integration and the estimation of targets that cannot be seen because of poor lighting or occlusion, which is particularly valuable for search and rescue missions in disaster-stricken areas. As this research field is growing rapidly, we propose this Special Issue, entitled "Technologies and Applications for Drone Audition", to promote the recognition and development of drone audition worldwide.

This Special Issue aims to collect the latest research on drone audition technologies and their applications, as well as research on related technologies such as noise reduction, laser measurement, global navigation satellite systems (GNSS), simultaneous localization and mapping (SLAM), and flight planning.

In this Special Issue, original research articles and reviews are welcome. The scope may include (but is not limited to) the following:

  • Drone audition
  • Acoustic signal processing for drones/UAVs
  • Acoustic sensing for drones/UAVs
  • Sound source localization using drone/UAV-embedded sensors
  • Sound source separation using drone/UAV-embedded sensors
  • Sound identification/recognition using drone/UAV-embedded sensors
  • Noise reduction of drones/UAVs for drone audition
  • Surrounding environment recognition for drone audition
  • Search and rescue by drone audition technology

We look forward to receiving your contributions.

Dr. Kotaro Hoshiba
Prof. Dr. Makoto Kumon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • drone audition 
  • acoustic signal processing 
  • acoustic sensing 
  • sound source localization 
  • sound source separation 
  • sound identification/recognition 
  • noise reduction 
  • surrounding environment recognition 
  • search and rescue

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)

Research

25 pages, 1012 KiB  
Article
A Performance Assessment on Rotor Noise-Informed Active Multidrone Sound Source Tracking Methods
by Benjamin Yen, Taiki Yamada, Katsutoshi Itoyama and Kazuhiro Nakadai
Drones 2024, 8(6), 266; https://doi.org/10.3390/drones8060266 - 14 Jun 2024
Abstract
This study assesses the performance of recent developments in sound source tracking using microphone arrays on multiple drones. Stemming from a baseline study, which triangulates the spatial spectrum calculated with MUltiple SIgnal Classification (MUSIC) for each drone, otherwise known as Particle Filtering with MUSIC (PAFIM), recent studies extended the method to improve its effectiveness. These extensions include a method to optimise drone placement while tracking the sound source and methods to reduce the influence of high levels of rotor noise in the audio recordings. This study evaluates each of the recently proposed methods under a detailed set of simulation settings that are more challenging and realistic than those of previous studies, and progressively evaluates each component of the extensions. Results show that applying the rotor noise reduction method and the array placement planning algorithm improves tracking accuracy significantly. However, under more realistic input conditions and representations of the problem setting, these methods struggle to achieve decent performance due to factors not considered in their respective studies. Based on the performance assessment results, this study therefore summarises a list of recommendations to resolve these shortcomings, with the prospect of further developments or modifications to PAFIM for improved robustness in more realistic settings.

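The MUSIC-plus-particle-filter pipeline that PAFIM builds on can be illustrated with a minimal, self-contained example. The sketch below is only a toy under assumed values (an 8-element uniform linear array, a single narrowband far-field source, made-up SNR), not the implementation assessed in the paper: it forms the spatial covariance from simulated snapshots, takes the noise subspace, and scans a MUSIC pseudospectrum for the direction of arrival.

    # Toy MUSIC direction-of-arrival sketch (illustrative only; not the PAFIM
    # implementation evaluated in the paper). Array geometry, frequency, and
    # noise level are hypothetical.
    import numpy as np

    c, f = 343.0, 1000.0     # speed of sound [m/s], narrowband frequency [Hz]
    M, d = 8, 0.04           # microphones in a uniform linear array, spacing [m]
    snapshots, true_deg = 200, 35.0

    def steering(theta_deg):
        # Far-field steering vector of the assumed uniform linear array.
        theta = np.deg2rad(theta_deg)
        return np.exp(-2j * np.pi * f * d * np.arange(M) * np.sin(theta) / c)

    rng = np.random.default_rng(0)
    s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
    n = 0.3 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
    X = np.outer(steering(true_deg), s) + n        # simulated array snapshots

    R = X @ X.conj().T / snapshots                 # spatial covariance matrix
    _, vecs = np.linalg.eigh(R)                    # eigenvalues in ascending order
    En = vecs[:, :-1]                              # noise subspace (one source assumed)

    angles = np.arange(-90.0, 90.5, 0.5)
    p = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
    print("estimated DOA:", angles[int(np.argmax(p))], "deg")   # should peak near 35 deg

In the multidrone setting described in the abstract, each drone contributes such a spatial spectrum, and the peaks are then triangulated and tracked with a particle filter; the extensions assessed in the paper act on top of that pipeline.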

23 pages, 7625 KiB  
Article
Proposal of Practical Sound Source Localization Method Using Histogram and Frequency Information of Spatial Spectrum for Drone Audition
by Kotaro Hoshiba, Izumi Komatsuzaki and Nobuyuki Iwatsuki
Drones 2024, 8(4), 159; https://doi.org/10.3390/drones8040159 - 18 Apr 2024
Abstract
We investigated a technology for searching for victims in disaster areas by localizing human-related sound sources, such as voices and emergency whistles, with a drone-embedded microphone array. One of the challenges is the development of sound source localization methods. Such a sound-based search method requires a high resolution, a high tolerance for quickly changing dynamic ego-noise, a large search range, high real-time performance, and high versatility. In this paper, we propose a novel sound source localization method based on multiple signal classification for victim search using a drone-embedded microphone array to satisfy these requirements. In the proposed method, the ego-noise and target sound components are extracted using the histogram information of the three-dimensional spatial spectrum (azimuth, elevation, and frequency) at the current time, and they are separated using continuity. The direction of arrival of the target sound is estimated from the separated target sound component. Since this method requires only simple calculations and does not use previous information, all requirements can be satisfied simultaneously. Evaluation experiments using sound recorded in a real outdoor environment show that the localization performance of the proposed method was higher than that of existing and previously proposed methods, indicating its usefulness.

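To give a rough feel for the kind of processing described in the abstract, the sketch below separates a synthetic three-dimensional spatial spectrum (azimuth x elevation x frequency) into a noise floor and candidate target bins using a histogram-derived threshold, then votes across frequency to pick a direction. The data, grid sizes, and threshold are invented for illustration; this is not the authors' algorithm.

    # Simplified illustration (hypothetical data and threshold; not the proposed
    # method itself): histogram-based separation of a 3D spatial spectrum into a
    # noise floor and target bins, followed by a frequency-continuity vote.
    import numpy as np

    rng = np.random.default_rng(1)
    n_az, n_el, n_f = 72, 19, 64                        # hypothetical grid resolution
    spec = rng.gamma(2.0, 1.0, size=(n_az, n_el, n_f))  # stand-in for a spatial spectrum
    spec[10, 5, 20:40] += 15.0                          # injected "target" ridge in frequency

    # Histogram of spectrum values at the current frame; most values belong to
    # the ego-noise floor, so take a high quantile as the separation threshold.
    hist, edges = np.histogram(spec, bins=100)
    cdf = np.cumsum(hist) / spec.size
    threshold = edges[1:][np.searchsorted(cdf, 0.995)]
    target_mask = spec > threshold

    # Vote per (azimuth, elevation) cell: the estimated direction is the cell
    # with the most above-threshold frequency bins, i.e. continuity in frequency.
    votes = target_mask.sum(axis=2)
    az_idx, el_idx = np.unravel_index(np.argmax(votes), votes.shape)
    print("estimated direction cell (az, el):", az_idx, el_idx)   # expected: 10 5

A sketch like this also hints at the real-time property highlighted in the abstract: only the current frame's spectrum is used, and the operations are simple array arithmetic.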

21 pages, 2080 KiB  
Article
Placement Planning for Sound Source Tracking in Active Drone Audition
by Taiki Yamada, Katsutoshi Itoyama, Kenji Nishida and Kazuhiro Nakadai
Drones 2023, 7(7), 405; https://doi.org/10.3390/drones7070405 - 21 Jun 2023
Cited by 1
Abstract
This paper addresses a placement planning method for drones to improve the performance of sound source tracking by multiple drones equipped with microphone arrays. A drone equipped with a microphone array can locate a person in need of rescue, and by deploying multiple drones, the 3D location of the sound source can be estimated. However, effective drone placement for sound source tracking has not been well explored. Therefore, this paper proposes a new drone placement planning method to improve the performance of sound source tracking. By placing multiple drones close to the sound source and at diverse angles, tracking is expected to be performed with small variance. The placement planning algorithm is also extended to be applicable to multiple sound sources. Through numerical simulations, it is confirmed that the proposed method reduces the sound source tracking error. In conclusion, the contribution of this research is to extend drone audition to active drone audition, in which drones move by themselves to achieve better tracking results.

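As a rough geometric intuition for why placement matters, the sketch below brute-forces positions for two microphone-array drones around an estimated source, preferring short ranges and near-orthogonal bearings, which tends to shrink triangulation uncertainty. The candidate grid, safety distance, and cost weights are invented for illustration; this is not the planning algorithm proposed in the paper.

    # Intuition-only sketch (hypothetical cost and grid; not the paper's planner):
    # choose positions for two microphone-array drones so that their bearings to
    # the estimated source are diverse and their ranges are short.
    import numpy as np
    from itertools import combinations

    source = np.array([5.0, 5.0])                  # current 2D source estimate [m]
    grid = np.array([[x, y] for x in range(0, 11, 2) for y in range(0, 11, 2)], float)

    def cost(p1, p2):
        v1, v2 = source - p1, source - p2
        r1, r2 = np.linalg.norm(v1), np.linalg.norm(v2)
        if min(r1, r2) < 1.0:                      # keep a safety distance from the source
            return np.inf
        # |sin| of the angle between the two bearings: 1.0 when they are orthogonal.
        diversity = abs(v1[0] * v2[1] - v1[1] * v2[0]) / (r1 * r2)
        return (r1 + r2) - 5.0 * diversity         # favour short ranges and diverse bearings

    best = min(combinations(grid, 2), key=lambda pair: cost(*pair))
    print("chosen placements:", best[0], best[1])

The abstract's observation that proximity and angular diversity lead to small tracking variance is exactly what such a cost encodes; the paper's method additionally extends the planning to multiple sound sources.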
