
Open Access Article
Robotics 2019, 8(1), 1; https://doi.org/10.3390/robotics8010001

Representation of Multiple Acoustic Sources in a Virtual Image of the Field of Audition from Binaural Synthetic Aperture Processing as the Head is Turned

Environmental Research Institute, North Highland College, University of the Highlands and Islands, Thurso, Caithness KW14 7EE, UK
Received: 29 August 2018 / Revised: 13 December 2018 / Accepted: 18 December 2018 / Published: 23 December 2018
(This article belongs to the Special Issue Feature Papers)
Abstract

The representation of multiple acoustic sources in a virtual image of the field of audition based on binaural synthetic-aperture computation (SAC) is described through the use of simulated inter-aural time delay (ITD) data. Directions to the acoustic sources may be extracted from the image. ITDs for multiple acoustic sources at an effective instant in time are implied, for example, by multiple peaks in the coefficients of a short-time-base (≈2.25 ms for an antenna separation of 0.15 m) cross-correlation function (CCF) of the acoustic signals received at the antennae. The CCF coefficients for such peaks, at the time delays measured for a given orientation of the head, are then distended over lambda circles in a short-time-base instantaneous acoustic image of the field of audition. Numerous successive short-time-base images of the field of audition generated as the head is turned are integrated into a mid-time-base (up to ~0.5 s) acoustic image of the field of audition. This integration as the head turns constitutes a SAC. The intersections of many lambda circles at points in the SAC acoustic image generate maxima in the integrated CCF coefficient values recorded in the image; the positions of these maxima represent the directions to the acoustic sources. The source locations so derived provide input to a process managing the long-time-base (tens of seconds or more) acoustic image of the field of audition that represents the robot's persistent acoustic environmental world view. The virtual images could optionally be displayed on monitors external to the robot to assist system debugging and inspire ongoing development.
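The first stage summarized above, extracting an ITD from the peak of a short-time-base cross-correlation and mapping it to the half-angle of a lambda circle (the cone of confusion about the inter-aural axis), can be sketched as follows. This is an illustrative sketch under assumed parameters (48 kHz sampling, a white-noise source, invented function names), not the paper's implementation:

```python
import numpy as np

# Assumed parameters for illustration; the 0.15 m separation follows the abstract.
FS = 48_000   # sample rate (Hz)
C = 343.0     # speed of sound in air (m/s)
D = 0.15      # antenna (ear) separation (m)

def itd_from_ccf(left, right, fs=FS):
    """Estimate the ITD (s) as the lag of `right` relative to `left`
    at the peak of their cross-correlation function."""
    ccf = np.correlate(right, left, mode="full")
    lag = np.argmax(ccf) - (len(left) - 1)
    return lag / fs

def lambda_angle(itd, d=D, c=C):
    """Half-angle (rad) of the lambda circle implied by an ITD:
    all directions at this angle to the inter-aural axis share the ITD."""
    return np.arcsin(np.clip(c * itd / d, -1.0, 1.0))

# Simulate a broadband source whose wavefront reaches the left ear
# 10 samples before the right ear.
rng = np.random.default_rng(0)
sig = rng.standard_normal(2048)
delay = 10
left = sig
right = np.roll(sig, delay)   # right ear lags by `delay` samples

est = itd_from_ccf(left, right)   # ≈ delay / FS
ang = lambda_angle(est)           # lambda-circle half-angle for this ITD
```

A single CCF peak constrains the source only to a lambda circle; in the method described here, integrating many such circles over successive head orientations is what collapses the ambiguity to a point in the acoustic image.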
Keywords: robotic sensing; acoustic localization; binaural systems; synthetic aperture
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

Tamsett, D. Representation of Multiple Acoustic Sources in a Virtual Image of the Field of Audition from Binaural Synthetic Aperture Processing as the Head is Turned. Robotics 2019, 8, 1.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Robotics EISSN 2218-6581, published by MDPI AG, Basel, Switzerland