Review

Acoustic Sensors for Air and Surface Navigation Applications

1 School of Engineering, RMIT University, Aerospace Engineering and Aviation Discipline, Bundoora VIC 3083, Australia
2 School of Science, RMIT University, Computer Science and Information Technology Discipline, Melbourne 3000, Australia
* Author to whom correspondence should be addressed.
Sensors 2018, 18(2), 499; https://doi.org/10.3390/s18020499
Submission received: 19 December 2017 / Revised: 30 January 2018 / Accepted: 1 February 2018 / Published: 7 February 2018
(This article belongs to the Special Issue Ultrasonic Sensors 2018)

Abstract

This paper presents the state-of-the-art and reviews the state of research of acoustic sensors used for a variety of navigation and guidance applications on air and surface vehicles. In particular, this paper focuses on echolocation, which is widely utilized in nature by certain mammals (e.g., cetaceans and bats). Although acoustic sensors have been extensively adopted in various engineering applications, their use in navigation and guidance systems is yet to be fully exploited. This technology has clear potential for applications in air and surface navigation/guidance for intelligent transport systems (ITS), especially considering air and surface operations indoors and in other environments where satellite positioning is not available. Propagation of sound in the atmosphere is discussed in detail, with all potential attenuation sources taken into account. The errors introduced in echolocation measurements due to Doppler, multipath and atmospheric effects are discussed, and an uncertainty analysis method is presented for ranging error budget prediction in acoustic navigation applications. Considering the design challenges associated with monostatic and multistatic sensor implementations and looking at the performance predictions for different possible configurations, acoustic sensors show clear promise in navigation and proximity sensing, as well as in obstacle detection and tracking. The integration of acoustic sensors in multi-sensor navigation systems is also considered towards the end of the paper, and a low Size, Weight and Power, and Cost (SWaP-C) sensor integration architecture is presented for possible introduction in air and surface navigation systems.

1. Introduction

The directionality of acoustic waves has long been used for localization by human beings. The term ‘echolocation’ was coined by Donald R. Griffin [1], who described ship captains exploiting sound to ascertain the ship’s surroundings and avoid obstacles in low-visibility environments. Acoustic sensors provide a low Size, Weight, and Power (SWaP) solution, which is low cost, scalable and robust. Moreover, acoustic sensors can provide high-resolution spatial information at short range. Radio-based localization techniques like Global Navigation Satellite Systems (GNSS) are prone to data degradation in dense urban environments and indoors [2]. Other electromagnetic techniques suffer from interference from other sources as well as from metal structures. Optical navigation sensors are also still relatively expensive, and their performance deteriorates in degraded visibility conditions as well as in environments containing optically transparent or opaque objects.
This paper discusses the fundamental principles and the technological considerations governing the use of sound waves for air and surface navigation applications, considering all known parasitic effects. Before presenting the fundamental theory, Section 2 briefly reviews the role of echolocation in nature. In particular, this section delves into the acoustic navigation systems employed by certain mammals, especially bats. Bats exhibit a very sophisticated acoustic echolocation system, which involves frequency, amplitude, and Pulse Repetition Frequency (PRF) modulation according to proximity to prey or conspecifics [3,4,5]. These examples can greatly support the development of new navigation techniques, particularly when targeting emerging multi-Unmanned Aircraft System (UAS) formation flight and swarming. Section 3 addresses the theory governing the propagation of sound, particularly through the atmosphere, introducing all attenuation factors. Section 4 lists the ranging error sources and their modelling. Section 5 explains in detail the different system configurations: monostatic, multistatic, and a hybrid approach combining the two. Monostatic as well as multistatic configurations are of particular interest as they can support a variety of applications in indoor navigation as well as personal mobility, including assistance for the physically disabled. Section 6 details the state-of-the-art in acoustic sensors and delves into the techniques developed for using acoustic sensors in navigation as well as personal mobility. In Section 7, the integration of acoustic sensors into existing multi-sensor navigation systems for air and surface vehicles is discussed. The final section sums up the findings, evaluating the scope of acoustic sensors in various navigation applications and future research trends.

2. Echolocation in Nature

Animals, especially mammals like bats and dolphins, use acoustic waves that vary in frequency, signal duration, and intensity for navigation and tracking. Additionally, bats show the ability to detect and, when needed, compensate for Doppler shift. Some interesting observations from bat echolocation are listed below:
  • Bats can lower their call intensity as they approach strongly reflective objects to prevent the echo sound pressure level from becoming too large. Bats can achieve very high-resolution target detection, with time difference discriminations of 10–12 nanoseconds [3,6,7]. The duration of echolocation signals varies considerably, from individual clicks approximately 50–100 μs long to constant-frequency signals longer than 30 ms. Table 1 lists various bat species and their call type, based on their diet.
  • As seen in Table 1, the echolocation call can consist of a single frequency or multiple frequencies comprising a harmonic series. The pulse interval of the call also varies with proximity to the target. As bats approach their target, the repetition rate of their calls increases to obtain faster localisation updates. The pulse interval of the call also gives an indication of the maximum range at which bats can detect objects. Gleaning bats can passively listen to prey-generated sounds to localise their prey, interrupting echolocation or drastically reducing call intensity shortly before capturing prey [4].
  • Big brown bats (Eptesicus fuscus) may roughly localise the position of their prey by listening to conspecific-generated echoes [5]. This is true for bottlenose dolphins (Tursiops truncatus) as well [8].
Certain bats, like the Mexican free-tailed bat, tend to increase their emission rate when flying in pairs. However, when flying in bigger groups, bats tend to decrease their emission rates, thereby reducing mutual interference [9]. This temporal modulation of emission in bats is similar to timing algorithms employed in electronic communication systems. These algorithms, also referred to as back-off algorithms, introduce probabilistic delays in resending packets lost due to interference [10], as illustrated in the sketch below.
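As an illustration of this analogy, the following minimal Python sketch implements a binary exponential back-off of the kind used in ALOHA-style protocols [10]; the slot time and window cap are illustrative assumptions, not parameters from the cited work:

```python
import random

def backoff_delay(attempt, slot_time=1.0, max_exponent=10):
    """Binary exponential back-off: after each collision (interference),
    wait a random number of slots drawn from a doubling contention window."""
    window = 2 ** min(attempt, max_exponent)   # window doubles with each failed attempt
    return random.randint(0, window - 1) * slot_time

# Repeated collisions spread retransmissions further apart in time,
# reducing mutual interference -- loosely analogous to bats lowering
# their emission rates in larger groups.
for attempt in range(5):
    print(f"attempt {attempt}: wait {backoff_delay(attempt):.1f} slots")
```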
Bats exhibit a variety of behaviours when coping with interference from environmental noise as well as from the calls and echoes of nearby bats. While some species treat the presence of nearby conspecifics as any other source of noise or object in their field of view [13], certain species, like the free-tailed bats (Molossidae), compensate for interference by calling louder or by varying the frequency or duration of their echolocation pulses [14].
Echolocation in bats refers to sensing of the environment based on the Time of Arrival (TOA) of sound waves emitted by the bat. This helps bats navigate as well as track their prey during the night. The strength of the received signal is indicative of the size of the target, and analysis of the frequency spectrum of the echo gives an idea of the surface texture of the target. Most bat echolocation calls are ultrasonic, ranging from 20 to 200 kHz, and the sound intensity can reach up to 130 dB. There have been attempts to investigate echolocation abilities in human beings as well, especially in the visually impaired. In [15], the echolocation abilities of blind and sighted humans are reviewed, suggesting enhanced auditory abilities in the visually impaired compared with normally sighted humans. The effect of prior visual experience on sound localisation in late-blind individuals has been studied in detail in [16].
There has been some research focus on analysing the flight dynamics of bats, and attempts have been made to emulate them [17]. Bats use either their tongue or vocal cords to produce sonar signals [8]. Bats can vary the frequency, signal duration, signal intensity, harmonic composition and pulse interval according to their surroundings. Bats use narrowband signals for ranging of distant targets and broadband signals for localization. Some species of bats also account for Doppler shift by varying their call frequency [3,18]. Attempts have been made to develop biomimetic sonars inspired by bats’ external ears or pinnae (Figure 1) for localization and mapping, referred to as BatSLAM [19].

3. Sound Propagation

Acoustic waves are longitudinal waves that require a material medium to propagate. Fundamentally, sound can be defined as mechanical energy transmitted by pressure waves in a material medium. Acoustic waves are mechanical waves, i.e., they involve rapid to-and-fro displacements or vibrations of molecules in the medium. The velocity of sound in a medium ($c_m$) varies with the bulk modulus ($B$) and density ($\rho$) of the medium as shown in Equation (1). Sound travels faster in a medium with high bulk modulus or stiffness, such as a solid, than in a medium with lower bulk modulus, such as a fluid:

$$c_m = \sqrt{\frac{B}{\rho}} \tag{1}$$
The attenuation rate of sound waves varies with frequency, with higher frequencies attenuating at a faster rate. Attenuation can occur either due to reflection/scattering at interfaces or due to absorption [21]. However, higher frequencies, having short wavelengths, reflect strongly from small objects. Reflection from surfaces causes interference with the incident sound wave, which can be constructive or destructive. Interference depends upon the frequency of the sound as well as the difference between the lengths of the direct and reflected paths [22]. Furthermore, the speed of sound in air varies with temperature, pressure, humidity, and wind, thereby affecting the propagation of sound. The generic equation for sound propagation is:
$$L_p(r) = L_w + \sum_i A_i \tag{2}$$

where $L_p(r)$ is the sound pressure level at distance $r$ from the source (dB); $L_w$ is the sound power level of the source (dB); and $A_i$ is the combination of modifying factors that either attenuate or enhance the transmission of the sound energy as it propagates from source to receiver.
Acoustic sources have both far-field and near-field regions. Wavefronts produced by the sound source in the near-field are not parallel, and the intensity of the wave oscillates with range and with the angle between source and receiver. In the far-field, however, wavefronts are nearly parallel, with intensity varying only with range to a centroid between sound sources, in accordance with the inverse-square law. The near-field distance $r_{nf}$ is given by:

$$r_{nf} = \frac{D^2}{\lambda} \tag{3}$$

where $D$ is the equivalent aperture of the transmitter, given by:

$$D = \frac{3.2}{k \sin\left(\theta_{3\,\mathrm{dB}}/2\right)} \tag{4}$$

where $k$ is the wave number and $\theta_{3\,\mathrm{dB}}$ is the half-power beam angle. The wavefront for a sound source radiating equally in all directions is a sphere of radius $r$, whose intensity $I$ from a source of power $W$ is given by:

$$I = \frac{W}{4\pi r^2} \tag{5}$$
Assuming a point source of sound in an unbounded homogeneous atmosphere, the propagation of sound is affected by just two attenuating effects. The first is geometric attenuation, which depends solely on the distance from the sound source; the second is atmospheric absorption. Sound propagates due to the oscillation of air molecules about their mean position, with a higher frequency of sound leading to a higher rate of oscillation. This vibration of the air molecules leads to loss of energy through two dissipative mechanisms: the first comprises frictional losses, which include both viscous action and heat conduction, while the other involves the interaction of water vapour with the resonance of oxygen and nitrogen molecules. Hence, there are heat conduction losses, shear viscosity losses, and molecular relaxation losses [23].
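To make Equations (3)–(5) concrete, the following minimal Python sketch evaluates the equivalent aperture, near-field distance and far-field intensity; the 40 kHz frequency, 343 m/s speed of sound, 60° half-power beamwidth and 0.1 W source power are illustrative assumptions:

```python
import math

def equivalent_aperture(k, theta_3db):
    """Equivalent transmitter aperture, Eq. (4): D = 3.2 / (k sin(theta_3dB / 2))."""
    return 3.2 / (k * math.sin(theta_3db / 2.0))

def near_field_distance(D, wavelength):
    """Near-field distance, Eq. (3): r_nf = D^2 / lambda."""
    return D ** 2 / wavelength

def intensity(W, r):
    """Far-field intensity of an omnidirectional point source, Eq. (5)."""
    return W / (4.0 * math.pi * r ** 2)

c, f = 343.0, 40e3                      # speed of sound (m/s), frequency (Hz)
wavelength = c / f
k = 2.0 * math.pi / wavelength          # wave number
D = equivalent_aperture(k, math.radians(60.0))
print(near_field_distance(D, wavelength))   # near-field extent (m)
print(intensity(0.1, 1.0))                  # intensity at 1 m (W/m^2)
```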

3.1. Sound Attenuation in Atmosphere

In practical situations, however, the propagation of sound in the atmosphere is affected by additional factors like ground effects, attenuation due to finite barriers and buildings, reflections, wind and temperature gradient effects, and atmospheric turbulence. The atmospheric sound attenuation factors are discussed in detail in Section 3.1.1, Section 3.1.2, Section 3.1.3, Section 3.1.4 and Section 3.1.5.

3.1.1. Geometrical Divergence ($A_{div}$)

Geometrical divergence refers to the spherical spreading in the free field from a point sound source. The attenuating effect is geometric attenuation, which results from the spreading of the radiated sound energy over a sphere of increasing diameter as the wavefront propagates away from the source. Equation (6) shows the relationship between the sound power level of the source, $L_w$, and the sound pressure level, $L_p(r)$, at a distance $r$ from that source. The variation of sound pressure level with distance from the source is shown in Equation (7). Unlike atmospheric attenuation, geometric attenuation is independent of the frequency of the propagating sound wave. From Equation (7), it can be inferred that the sound intensity or sound pressure level $L_p$ decreases by 6 dB per doubling of distance from the source:

$$L_p(r) = L_w + 10\log\left[\frac{1}{4\pi r^2}\right] \tag{6}$$

$$L_p(r_2) = L_p(r_1) + 20\log\left[\frac{r_1}{r_2}\right] \tag{7}$$

The geometrical divergence, in dB, is given by:

$$A_{div} = \left[20\log\left(\frac{d}{d_0}\right) + 11\right] \tag{8}$$

where $d$ is the distance from the sound source to the receiver (m) and $d_0$ is the reference distance of 1 m from an omnidirectional point sound source.
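A one-line implementation of Equation (8) is sketched below; it also illustrates the 6 dB-per-doubling-of-distance behaviour implied by Equation (7):

```python
import math

def a_div(d, d0=1.0):
    """Geometrical divergence attenuation in dB, Eq. (8)."""
    return 20.0 * math.log10(d / d0) + 11.0

# Doubling the distance adds ~6 dB of attenuation, consistent with Eq. (7):
print(a_div(10.0) - a_div(5.0))   # ~6.02 dB
```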

3.1.2. Atmospheric Absorption ($A_{atm}$)

Air absorption becomes significant at higher frequencies and at long ranges, thereby acting as a low-pass filter at long range. The pressure of a planar sound wave at a distance $x$ from a point where the pressure is $P_0$ is given by:

$$P = P_0\, e^{-\alpha x/2} \tag{9}$$

The attenuation coefficient for air absorption, $\alpha$, depends on frequency, humidity, temperature, and pressure, with its value being calculated using Equations (10)–(12) [24]:

$$\alpha = f^2\left[1.84 \times 10^{-11}\left(\frac{T}{T_0}\right)^{1/2}\frac{p_0}{p_s} + \left(\frac{T_0}{T}\right)^{5/2}\left(0.10680\, e^{-3352/T}\,\frac{f_{r,N}}{f^2 + f_{r,N}^2} + 0.01278\, e^{-2239.1/T}\,\frac{f_{r,O}}{f^2 + f_{r,O}^2}\right)\right] \tag{10}$$

where $f$ is the frequency, $T$ is the absolute temperature of the atmosphere in kelvins, $T_0$ is the reference value of $T$ (293.15 K), and $f_{r,N}$ and $f_{r,O}$ are the relaxation frequencies associated with the vibration of nitrogen and oxygen molecules, respectively, given by:

$$f_{r,N} = \frac{p_s}{p_0}\left(\frac{T_0}{T}\right)^{1/2}\left(9 + 280\,H\, e^{-4.17\left[(T_0/T)^{1/3} - 1\right]}\right) \tag{11}$$

$$f_{r,O} = \frac{p_s}{p_0}\left(24.0 + 4.04 \times 10^{4}\,H\,\frac{0.02 + H}{0.391 + H}\right) \tag{12}$$

where $p_s$ is the local atmospheric pressure, $p_0$ is the reference atmospheric pressure (101,325 Pa) and $H$ is the percentage molar concentration of water vapour in the atmosphere, which is given by:

$$H = \frac{\rho_{sat}\, r_h\, p_0}{p_s} \tag{13}$$
where $r_h$ is the relative humidity and $\rho_{sat}$ is given by:

$$\rho_{sat} = 10^{C_{sat}} \tag{14}$$

where $C_{sat}$ is given by:

$$C_{sat} = -6.8346\left(\frac{T_{01}}{T}\right)^{1.261} + 4.6151 \tag{15}$$

with $T_{01}$ = 273.16 K being the triple-point isotherm temperature. Similarly, $\rho_{sat}$ can also be written as [25,26]:

$$\rho_{sat} = \frac{1322.8\, r_h}{T}\exp\left[\frac{25.22\,(T - 273.15)}{T} - 5.31\ln\left(\frac{T}{273.15}\right)\right] \tag{16}$$
Figure 2 shows the variation of the absorption coefficient $\alpha$ with the frequency of sound at 293.15 K, one atmosphere pressure, 20% relative humidity and $H$ = 4.7 × 10⁻³. As there are two relaxation frequencies, associated with oxygen and nitrogen, the frequency dependence of the attenuation coefficient for sound in air has three distinct regions. At very low frequencies, where the sound frequency is much lower than the relaxation frequency of nitrogen molecules, the attenuation is dominated by the vibrational relaxation of nitrogen molecules ($\alpha_1$). The frequency dependence is quadratic, with an apparent bulk viscosity associated with the nitrogen relaxation. In the intermediate region, the frequency is substantially larger than the nitrogen relaxation frequency but still substantially less than the oxygen relaxation frequency; the frequency dependence is again quadratic, with a smaller coefficient and an apparent bulk viscosity associated with oxygen relaxation ($\alpha_2$). In the higher-frequency region, there is quadratic dependence once more, although with an even smaller coefficient, and with the intrinsic bulk viscosity associated with molecular rotation ($\alpha_3$) [24].
Having calculated the absorption coefficient for a given temperature, pressure, relative humidity and percentage molar concentration of water vapour in the atmosphere, the attenuation of sound due to atmospheric absorption over a propagation distance $d$ (m) is given by:

$$A_{atm} = \alpha d / 1000 \tag{17}$$
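The following Python sketch assembles Equations (10)–(15) and (17) into an absorption calculator. It follows the equations as reconstructed above; the output units depend on the convention adopted for Equation (10) (a nepers-to-decibels factor of 8.686 may be required before applying Equation (17) with α in dB/km), so values should be cross-checked against ISO 9613-1 tables before use:

```python
import math

T0 = 293.15        # reference temperature (K)
T01 = 273.16       # triple-point isotherm temperature (K)
P0 = 101_325.0     # reference atmospheric pressure (Pa)

def molar_concentration(T, ps, rh):
    """Percentage molar concentration of water vapour H, Eqs. (13)-(15);
    rh is the relative humidity in percent."""
    c_sat = -6.8346 * (T01 / T) ** 1.261 + 4.6151
    rho_sat = 10.0 ** c_sat
    return rho_sat * rh * P0 / ps

def absorption_coefficient(f, T=293.15, ps=101_325.0, rh=20.0):
    """Atmospheric absorption coefficient alpha, Eqs. (10)-(12)."""
    H = molar_concentration(T, ps, rh)
    # Relaxation frequencies of nitrogen and oxygen, Eqs. (11) and (12)
    f_rN = (ps / P0) * math.sqrt(T0 / T) * (
        9.0 + 280.0 * H * math.exp(-4.17 * ((T0 / T) ** (1.0 / 3.0) - 1.0)))
    f_rO = (ps / P0) * (24.0 + 4.04e4 * H * (0.02 + H) / (0.391 + H))
    classical = 1.84e-11 * math.sqrt(T / T0) * (P0 / ps)
    relaxation = (T0 / T) ** 2.5 * (
        0.10680 * math.exp(-3352.0 / T) * f_rN / (f ** 2 + f_rN ** 2)
        + 0.01278 * math.exp(-2239.1 / T) * f_rO / (f ** 2 + f_rO ** 2))
    return f ** 2 * (classical + relaxation)

# Example: 40 kHz at 20 C, 1 atm, 20% relative humidity
print(absorption_coefficient(40e3))
```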

3.1.3. Ground Effect ($A_{gr}$)

Adding the effect of a bounding ground plane to the sound propagation model allows sound to propagate directly from source to receiver as well as through a secondary propagation path resulting from a reflection off the ground plane, as shown in Figure 3. This secondary propagation path can result in interference between the direct and reflected waves at the receiver. The interference can be constructive or destructive, depending on the relative amplitudes and phases of the direct and reflected waves. The relationship between the direct and reflected waves depends on a variety of factors, including the difference between the direct and reflected path lengths, which is a function of the source–receiver separation distance ($d_p$) and their heights above the ground ($h_s$ and $h_r$), the wavelength of the sound, and the reflective properties of the ground, which can cause variations in the phase and amplitude of the reflected sound wave.
Ground surfaces are classified into three types based on their acoustical properties, characterised by the ground factor ($G$). Hard ground, which has $G = 0$, includes concrete, paving, water, ice and all other low-porosity ground surfaces. Porous ground, which has $G = 1$, consists of grass, trees, foliage and all other ground surfaces suitable for the growth of vegetation. Surfaces that are a combination of hard and porous ground, i.e., with $G$ ranging from 0 to 1, where $G$ represents the fraction of the ground surface that is porous, are known as mixed ground. The total ground attenuation is obtained by summing the attenuation $A_s$ for the source region, specified by the ground factor $G_s$, $A_m$ for the middle region, specified by $G_m$, and $A_r$ for the receiver region, specified by $G_r$, as shown in Equation (18):

$$A_{gr} = A_s + A_m + A_r \tag{18}$$

3.1.4. Screening ($A_{bar}$)

An object shall be considered as a screening obstacle (Figure 4) if it meets the following requirements:
  • The object has a surface density of at least 10 kg/m2;
  • The surface of the object is closed without cracks or gaps;
  • The horizontal dimension of the object normal to the source–receiver line ($l_i + l_r$) is larger than the acoustic wavelength $\lambda$ at the nominal midband frequency for the octave band of interest, i.e., $l_i + l_r > \lambda$ (Figure 5).
The barrier diffraction can be either single diffraction, in the case of thin barriers (Figure 5), or double diffraction, in the case of thick barriers. In the case of more than two barriers, the barrier attenuation can be approximated as a case of double diffraction by choosing the two most effective barriers and neglecting the effect of the others. The effect of diffraction (in dB) for downward sound propagation over the top edge and over the vertical edge, respectively, is given by Equations (19) and (20) [27], where $D_Z$ is the barrier attenuation for each octave band and $A_{gr}$ is the ground attenuation in the absence of the barrier, as described in Section 3.1.3:

$$A_{bar} = D_Z - A_{gr} > 0 \tag{19}$$

$$A_{bar} = D_Z > 0 \tag{20}$$

3.1.5. Wind and Temperature Gradient Effects

As a result of the uneven heating of the Earth’s surface, the atmosphere is constantly in motion. The turbulent flow of air across the rough solid surface of the Earth generates a boundary layer. The lower part of the meteorological boundary layer, called the surface layer, extends over 50–100 m in typical daytime conditions [24]. Turbulent fluxes vary by less than 10% of their magnitude in the surface layer, but the wind speed and temperature gradients are largest there. Turbulence can be modelled as a series of moving eddies with a distribution of sizes. Various turbulence models, such as the Gaussian, von Kármán, and Kolmogorov models, are used in atmospheric acoustics [28,29,30]. It has been shown that turbulence effects decrease as the elevation of the sound source above the ground increases [31].
In the absence of wind, the decrease of temperature with height causes sound waves to bend, or refract, upwards. Wind velocity either adds to or subtracts from the velocity of sound, depending upon whether the source is upwind or downwind of the receiver, the height above ground and temperature inversions. Wind effects tend to dominate over temperature effects when both are present. The general relationship between the speed-of-sound profile $c(z)$, the temperature profile $T(z)$ (in °C) and the wind speed profile $u(z)$ in the direction of sound propagation, at a height $z$, is given by [32]:

$$c(z) = c(0)\sqrt{\frac{T(z) + 273.15}{273.15}} + u(z) \tag{21}$$

3.1.6. Other Sound Attenuation Factors

Various standardised techniques for measuring the attenuation of sound outdoors due to atmospheric effects have been developed in ISO 9613-1 and ISO 9613-2 [27,33]. These include the attenuation of sound due to miscellaneous effects like foliage, housing or industrial sites. The analytical models presented rely on the values of various atmospheric parameters like temperature, pressure, relative humidity, wind speed and time of day. In addition, fog and precipitation can also affect the attenuation of sound [34]. Experiments show that precipitation affects the temperature variation, hence indirectly affecting sound attenuation outdoors [31]. To summarise, the attenuation of sound in the atmosphere can be given by:

$$A = A_{div} + A_{atm} + A_{gr} + A_{bar} + A_{misc} \tag{22}$$

where $A_{misc}$ is the sound attenuation due to other miscellaneous effects like wind and temperature gradient effects, precipitation, foliage, and housing or industrial sites.

4. Echolocation Errors

Table 2 gives the different ranging parameters involved in the design of acoustic sensors. The ranging equation is given by Equation (23), where $R_m$ is the measured range, $R_a$ is the actual range from the transmitter $(x_T, y_T, z_T)$ to the receiver $(x_R, y_R, z_R)$ and $\varepsilon$ is the error in the measured range. The error term $\varepsilon$ comprises mainly the errors due to multipath ($\varepsilon_{Mp}$), Doppler shift ($\varepsilon_{Ds}$) and atmospheric effects ($\varepsilon_{Atm}$):

$$R_m = R_a + \varepsilon \tag{23}$$

where:

$$R_a = \sqrt{(x_T - x_R)^2 + (y_T - y_R)^2 + (z_T - z_R)^2} \tag{24}$$

$$\varepsilon = \varepsilon_{Ds} + \varepsilon_{Mp} + \varepsilon_{Atm} \tag{25}$$

4.1. Doppler Effect

The Doppler effect is the perceived change in sound frequency due to relative motion between the sound source and the receiver. Figure 6 shows the elevation angle ($E_n$) from the nth transmitter ($T_{r_n}$) to the receiver ($R$), the tangential velocity of the transmitter ($v_T$) and the relative bearing, i.e., the azimuth of the Line of Sight (LOS) projection ($\chi_n$). $v_0$ is the velocity of the receiver, and the case $v_0 = v$ results in a null Doppler shift, as no component of the receiver velocity vector lies along the LOS to the transmitter. As is evident from Equation (26), the Doppler shift depends on both the elevation and azimuth angles. The perceived pitch increases as the relative distance between the sound source and receiver decreases. The change in the observed frequency of sound is given as:

$$\Delta f_n = f\left(\frac{|v_n| - |v_a|}{c}\right)\cos\chi_n \sin E_n \tag{26}$$

where $v_n$ is the nth transmitter velocity component along the LOS; $v_a$ is the receiver velocity projection along the LOS; $c$ is the speed of sound (m s⁻¹); $f$ is the sound frequency (Hz); $E_n$ is the elevation angle of the nth transmitter; and $\chi_n$ is the azimuth of the LOS projection.
Considering a simplified case where the transmitter motion and the LOS from the transmitter to the receiver are coplanar, the sound field at times $t$ and $(t + \Delta t)$ for a moving sound source ($T_r$), which has moved to a new point ($T_r'$) in time $\Delta t$, is shown in Figure 7.
The moving sound source emits crests every $\Delta t$ units of time. The Doppler-shifted frequency $f'$ from a moving sound source emitting frequency $f_s$ is given by:

$$f' = \frac{f_s}{1 - M\cos\theta} \tag{27}$$

where $M$ is the Mach number of the sound source and $\theta$ is the direction to the receiver from the sound source at time $(t + \Delta t)$.
Assuming no relative motion between the transmitter and receiver, with the speed of the transmitter being $Mc$, if it takes time $t$ for sound to reach the receiver from the transmitter, the error in range due to the Doppler shift is given by:

$$\varepsilon_{Ds} = Mct\cos\theta \tag{28}$$
where θ is the direction of receiver motion relative to the LOS between the transmitter and the receiver.
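A brief numerical sketch of Equations (27) and (28) follows; the Mach number, speed of sound and time of flight are illustrative values:

```python
import math

def doppler_shifted_frequency(f_s, mach, theta):
    """Doppler-shifted frequency from a moving source, Eq. (27)."""
    return f_s / (1.0 - mach * math.cos(theta))

def doppler_range_error(mach, c, t, theta):
    """Ranging error due to Doppler shift, Eq. (28): eps_Ds = M c t cos(theta)."""
    return mach * c * t * math.cos(theta)

c = 343.0                                            # speed of sound (m/s)
print(doppler_shifted_frequency(40e3, 0.03, 0.0))    # ~41,237 Hz for a closing source
print(doppler_range_error(0.03, c, 0.03, 0.0))       # ~0.31 m over a 30 ms time of flight
```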

4.2. Multipath

An important property of a medium that influences the strength or amplitude of reflected waves is its acoustic impedance, which can be defined as the product of the density of the medium ($\rho$) and the speed of sound ($c_m$) in the medium:

$$Z = \rho c_m \tag{29}$$

Acoustic impedance gives a measure of the sound transmitted and reflected back at the interface of two media. The ratio of the reflected pressure amplitude, $P_r$, to the incident pressure amplitude, $P_i$, called the amplitude reflection coefficient, is given by:

$$R_P = \frac{P_r}{P_i} = \frac{Z_2 - Z_1}{Z_2 + Z_1} \tag{30}$$

Assuming a homogeneous medium, sound propagation, especially at high frequencies, can be approximated as a straight line from the source to the receiver [35]. Assuming a range-independent geometry for a homogeneous medium, the sound waves are subject to multiple reflections, as shown in Figure 8. As most reflecting surfaces are irregular, sound waves experience a significant amount of scattering or diffraction on reflection.
Using ray-tracing [36,37], the reflection point $S$ and a defined point $V$, as shown in Figure 9, should satisfy the equation:

$$(S - V)\cdot n = 0 \tag{31}$$

where sound waves are emitted from point $T_r$ to the receiver location $R$ after reflection at point $S$, $V$ is defined as a point on the reflecting surface, and $n$ is a unit vector normal to that surface. The equation of the line connecting $T_r$ and $R_{image}$ is given by:

$$S = T_r + m\,(R_{image} - T_r) \tag{32}$$

where $m$ is a parameter between 0 and 1. Combining Equations (31) and (32):

$$S = T_r + \frac{n\cdot V - n\cdot T_r}{n\cdot (R_{image} - T_r)}\,(R_{image} - T_r) \tag{33}$$

Assuming specular reflection, the extra path length $L_{mS}$ is given by:

$$L_{mS} = |T_r - S| + |R - S| - |T_r - R| \tag{34}$$

The starting point for performing ray-tracing of acoustic waves is the Helmholtz equation, which can be written in Cartesian coordinates $\mathbf{x} = (x, y, z)$ as:

$$\nabla^2 p + \frac{\omega^2}{c^2(\mathbf{x})}\,p = -\delta(\mathbf{x} - \mathbf{x}_0) \tag{35}$$

where $p$ is the total pressure and $\omega$ is the angular frequency of the source located at $\mathbf{x}_0$. A solution of the Helmholtz equation, called the ray series, is used to obtain the ray equations [38]. The ranging error due to multipath is given by:

$$\varepsilon_{Mp} = \sum_{i=1}^{N}\left[\,|T_{r_i} - S_i| + |R_i - S_i| - |T_{r_i} - R_i|\,\right] \tag{36}$$

where $T_{r_i}$ is the location of the ith acoustic transmitter; $S_i$ is the ith reflection point; $R_i$ is the location of the ith acoustic receiver; and $N$ is the number of reflecting surfaces.
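The single-reflection geometry of Equations (31)–(34) can be illustrated with the image-source construction below, a minimal NumPy sketch; the ground-plane example values are illustrative:

```python
import numpy as np

def extra_path_length(tr, r, v, n):
    """Extra path length for one specular reflection, Eqs. (31)-(34).
    tr, r: transmitter/receiver positions; v: any point on the reflecting
    plane; n: unit normal of the plane."""
    n = np.asarray(n, dtype=float) / np.linalg.norm(n)
    tr, r, v = (np.asarray(p, dtype=float) for p in (tr, r, v))
    # Image of the receiver across the reflecting plane
    r_image = r - 2.0 * np.dot(r - v, n) * n
    # Reflection point S on the line from tr to r_image, Eqs. (32)-(33)
    m = np.dot(n, v - tr) / np.dot(n, r_image - tr)
    s = tr + m * (r_image - tr)
    # L_mS = |tr - S| + |r - S| - |tr - r|, Eq. (34)
    return (np.linalg.norm(tr - s) + np.linalg.norm(r - s)
            - np.linalg.norm(tr - r))

# Ground reflection (z = 0 plane) between two elevated points:
print(extra_path_length(tr=[0.0, 0.0, 2.0], r=[10.0, 0.0, 1.5],
                        v=[0.0, 0.0, 0.0], n=[0.0, 0.0, 1.0]))   # ~0.58 m
```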

4.3. Atmospheric Effects

In the International Standard Atmosphere (ISA), the troposphere extends up to 11 km, and the temperature gradient in the troposphere can be assumed to be constant. Following this assumption, the ranging error due to atmospheric effects is given by:

$$\varepsilon_{Atm} = (c\,t + c_w\,t) - c\,t \tag{37}$$

where $c_w$ is the variation of the speed of sound due to the wind. The variation of the speed of sound due to temperature [39] is given by:

$$c = c_0 + \frac{dc}{dH}\,H \tag{38}$$

where $c_0$ is the speed of sound at sea level; $H$ is the height above sea level; and $\frac{dc}{dH} = \frac{\lambda c_0}{2 T_0}$ is the gradient of the speed of sound, with $T_0$ the sea-level temperature (K) and $\lambda = \frac{dT}{dH}$ the variation of temperature with height. The variation of the speed of sound due to wind is given by:

$$c_w = c_r\cos\delta + v_w \tag{39}$$

where $c_r$ is the speed of sound relative to the air; $\delta$ is the angle of the wavefront normal with the horizontal; and $v_w$ is the horizontal wind velocity.
The magnitude of the horizontal wind velocity near the Earth’s surface is predominantly determined by the prevailing horizontal pressure gradient in the atmosphere and by surface friction [39]. Surface friction arises from the relative motion between the air and the ground surface and has to be accounted for at heights up to 1000 m.

4.4. Ranging Error Analysis

The range measured by acoustic sensors can be written as:
$$R = c_0 t + Mct\cos\theta + \sum_{i=1}^{N}\left[\,|T_{r_i} - S_i| + |R_i - S_i| - |T_{r_i} - R_i|\,\right] + \int_0^t\left(\frac{dc}{dH}\,R\cos\theta + v_w\cos\delta\right)dt$$
$$= c_0 t + Mct\cos\theta + \sum_{i=1}^{N}\left[\,|T_{r_i} - S_i| + |R_i - S_i| - |T_{r_i} - R_i|\,\right] + Rt\cos\theta\,\frac{dc}{dH} + v_w t\cos\delta \tag{40}$$

where $t$ is the time of flight. $R$ can be rewritten as:

$$R = \frac{c_0 t + Mct\cos\theta + \sum_{i=1}^{N}\left[\,|T_{r_i} - S_i| + |R_i - S_i| - |T_{r_i} - R_i|\,\right] + v_w t\cos\delta}{1 - t\cos\theta\,\frac{dc}{dH}} = \frac{c_0 t + Mct\cos\theta + \sum_{i=1}^{N}\left[\,|T_{r_i} - S_i| + |R_i - S_i| - |T_{r_i} - R_i|\,\right] + v_w t\cos\delta}{1 - \frac{t\cos\theta\, c_0 \lambda}{2 T_0}} \tag{41}$$

The uncertainty in range measurement can be obtained by calculating the deviation in the range measurement error from all error sources. The cumulative deviation of the ranging error is given by:

$$\sigma_R = \left[\left(\frac{\partial R}{\partial c_0}\right)^2\sigma_{c_0}^2 + \left(\frac{\partial R}{\partial t}\right)^2\sigma_t^2 + \left(\frac{\partial R}{\partial \theta}\right)^2\sigma_\theta^2 + \left(\frac{\partial R}{\partial T_0}\right)^2\sigma_{T_0}^2 + \left(\frac{\partial R}{\partial \lambda}\right)^2\sigma_\lambda^2 + \left(\frac{\partial R}{\partial M}\right)^2\sigma_M^2 + \left(\frac{\partial R}{\partial c}\right)^2\sigma_c^2 + \left(\frac{\partial R}{\partial T_{r_i}}\right)^2\sigma_{T_{r_i}}^2 + \left(\frac{\partial R}{\partial S_i}\right)^2\sigma_{S_i}^2 + \left(\frac{\partial R}{\partial R_i}\right)^2\sigma_{R_i}^2 + \left(\frac{\partial R}{\partial v_w}\right)^2\sigma_{v_w}^2 + \left(\frac{\partial R}{\partial \delta}\right)^2\sigma_\delta^2\right]^{1/2} \tag{42}$$

where, writing $N_R = c_0 t + Mct\cos\theta + \sum_{i=1}^{N}\left[\,|T_{r_i} - S_i| + |R_i - S_i| - |T_{r_i} - R_i|\,\right] + v_w t\cos\delta$ for the numerator and $D_R = 1 - \frac{t\cos\theta\, c_0 \lambda}{2 T_0}$ for the denominator of Equation (41), the partial derivatives are:

$$\frac{\partial R}{\partial c_0} = \frac{\lambda\, t\cos\theta\, N_R}{2 T_0\, D_R^2} + \frac{t}{D_R} \tag{43}$$

$$\frac{\partial R}{\partial t} = \frac{\lambda\, c_0\cos\theta\, N_R}{2 T_0\, D_R^2} + \frac{c_0 + Mc\cos\theta + v_w\cos\delta}{D_R} \tag{44}$$

$$\frac{\partial R}{\partial \theta} = -\frac{\lambda\, c_0\, t\sin\theta\, N_R}{2 T_0\, D_R^2} - \frac{Mct\sin\theta}{D_R} \tag{45}$$

$$\frac{\partial R}{\partial T_0} = -\frac{\lambda\, c_0\, t\cos\theta\, N_R}{2 T_0^2\, D_R^2} \tag{46}$$

$$\frac{\partial R}{\partial \lambda} = \frac{c_0\, t\cos\theta\, N_R}{2 T_0\, D_R^2} \tag{47}$$

$$\frac{\partial R}{\partial M} = \frac{ct\cos\theta}{D_R} \tag{48}$$

$$\frac{\partial R}{\partial c} = \frac{Mt\cos\theta}{D_R} \tag{49}$$

$$\frac{\partial R}{\partial S_i} = \frac{2}{D_R} \tag{50}$$

$$\frac{\partial R}{\partial T_{r_i}} = \frac{\partial R}{\partial R_i} = 0 \tag{51}$$

$$\frac{\partial R}{\partial v_w} = \frac{t\cos\delta}{D_R} \tag{52}$$

$$\frac{\partial R}{\partial \delta} = -\frac{v_w\, t\sin\delta}{D_R} \tag{53}$$
The error budgeting has been numerically validated in a case study using realistic values of the variables, as shown in Table 3. This analysis gives an error of 0.24 m for a range of 10 m, with more than 95% of the error being due to multipath. However, in real-life situations, the hardware limitations associated with the practical realisation of acoustic sensors can introduce additional errors, which also have to be considered in the overall error budget.
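As a complement to the closed-form partial derivatives of Equations (43)–(53), the sketch below propagates assumed standard deviations through the range model of Equation (41) using numerically evaluated (central-difference) derivatives, per Equation (42). All parameter values are illustrative and do not reproduce the Table 3 case study:

```python
import numpy as np

def measured_range(c0, t, theta, T0, lam, M, c, L_mp, v_w, delta):
    """Range model of Eq. (41); L_mp is the total multipath extra path length."""
    num = c0 * t + M * c * t * np.cos(theta) + L_mp + v_w * t * np.cos(delta)
    den = 1.0 - t * np.cos(theta) * c0 * lam / (2.0 * T0)
    return num / den

def range_sigma(params, sigmas, eps=1e-7):
    """First-order uncertainty propagation, Eq. (42), using central differences."""
    var = 0.0
    for key, sigma in sigmas.items():
        hi, lo = dict(params), dict(params)
        h = eps * max(1.0, abs(params[key]))
        hi[key] += h
        lo[key] -= h
        dR = (measured_range(**hi) - measured_range(**lo)) / (2.0 * h)
        var += (dR * sigma) ** 2
    return float(np.sqrt(var))

params = dict(c0=343.0, t=0.03, theta=0.1, T0=288.15, lam=-0.0065,
              M=0.03, c=343.0, L_mp=0.2, v_w=5.0, delta=0.2)
sigmas = dict(t=1e-5, theta=0.01, L_mp=0.1, v_w=1.0)   # assumed 1-sigma values
print(range_sigma(params, sigmas))                     # cumulative range sigma (m)
```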

5. Sensor Arrangements

5.1. Monostatic Approach

A major limitation of the multistatic system (Section 5.2) is that it works only in predetermined environments. To overcome this limitation, a monostatic approach can be applied for obstacle detection and tracking. The generalised SONAR (Sound Navigation and Ranging) equation is given by:

$$SNL = SL - 2TL + TS - (NL - DI) \tag{54}$$

where $SNL$ is the signal-to-noise ratio of the returning echo; $SL$ is the source sound level; $2TL$ is the two-way transmission loss; $TS$ is the target strength; $NL$ is the noise level; and $DI$ is the directivity index.

This approach utilises a collocated transmitter and receiver (transceiver). The transmission losses are a function of spherical spreading and atmospheric attenuation. The basic range equation for the monostatic approach is:

$$\rho_k^t(t_k) = (t_k - t^t)\,c/2 \tag{55}$$

where $\rho_k^t$ is the actual measurement, $t_k$ is the nominal time of reception, $t^t$ is the nominal time of emission, and $c$ is the speed of sound.
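A minimal numerical sketch of Equations (54) and (55) is given below; all input levels and times are illustrative:

```python
def echo_snr(SL, TL, TS, NL, DI):
    """Generalised sonar equation, Eq. (54): SNL = SL - 2 TL + TS - (NL - DI)."""
    return SL - 2.0 * TL + TS - (NL - DI)

def monostatic_range(t_rx, t_tx, c=343.0):
    """Two-way time-of-flight range, Eq. (55)."""
    return (t_rx - t_tx) * c / 2.0

print(echo_snr(SL=110.0, TL=30.0, TS=-10.0, NL=40.0, DI=10.0))   # echo SNR (dB)
print(monostatic_range(t_rx=0.0623, t_tx=0.0040))                # ~10 m
```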

5.2. Multistatic Approach

In a familiar environment, for example inside a building, a multistatic sensor approach can be applied that utilises base stations (BS) at known fixed locations in the test environment. Similar work has been done [40], utilising transmitters at known locations to calculate the mobile station (MS) or receiver coordinates. A platform fitted with an acoustic receiver ($R$), as shown in Figure 10, can utilise distance measurements from multiple transmitters to calculate its position. An optimised arrangement of transmitters ensures that the Position Dilution of Precision (PDOP), as given in Equation (58), is kept within an acceptable threshold [41,42,43]. The optimised geometry also helps keep the cost of the system down:

$$\text{PDOP} = \frac{\sqrt{\sigma_x^2 + \sigma_y^2 + \sigma_z^2}}{\sigma_R} = \sqrt{\frac{1}{h_1^2} + \frac{1}{h_2^2} + \frac{1}{h_3^2} + \frac{1}{h_4^2}} \tag{58}$$

where $\sigma_x^2$, $\sigma_y^2$, and $\sigma_z^2$ are the variances of the estimation errors along each axis, $\sigma_R$ is the standard deviation of the range measurements, and $h_1$, $h_2$, $h_3$, and $h_4$ are the altitudes of the tetrahedron formed by joining unit vectors from the four BSs and one MS [44].
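Equation (58) can equivalently be evaluated from the geometry matrix of unit LOS vectors, as is common in GNSS analysis; the sketch below uses that formulation with illustrative BS coordinates (the BS heights are staggered to avoid a singular geometry):

```python
import numpy as np

def pdop(base_stations, receiver):
    """PDOP from the geometry matrix of unit LOS vectors, an equivalent of
    the tetrahedron-altitude form of Eq. (58)."""
    los = np.asarray(base_stations, dtype=float) - np.asarray(receiver, dtype=float)
    e = los / np.linalg.norm(los, axis=1, keepdims=True)
    G = np.hstack([e, np.ones((len(e), 1))])   # ranging directions + clock-bias column
    Q = np.linalg.inv(G.T @ G)                 # covariance shape matrix
    return float(np.sqrt(np.trace(Q[:3, :3])))

bs = [[0, 0, 3.0], [10, 0, 2.5], [0, 10, 2.5], [5, 5, 4.0]]
print(pdop(bs, [5.0, 5.0, 1.0]))
```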
The distance measurements are made based on time-of-flight (TOF) measurements. The general range equation for the multistatic sensor arrangement is:
$$\rho_k^t(t_k) = (t_k - t^t)\,c \tag{59}$$

where $\rho_k^t$ is the actual measurement, $t_k$ is the nominal time of the receiver clock $k$ at reception, $t^t$ is the nominal time of the transmitter clock at emission, and $c$ is the speed of sound. After taking into consideration the clock biases, the propagation delay in air, the multipath error and the random measurement noise, the complete expression for the range measurement becomes:

$$r_k^t(t_k) = \rho_k^t(t_{r,k}) - (dt_k - dt^t)\,c + P_{k,t}^t(t_k) + d_{k,T}^t(t_k) + d_t^t(t_k) + d_{k,t}(t_k) + \varepsilon^t \tag{60}$$

where $\rho_k^t(t_{r,k})$ is the geometric distance (m); $dt_k$ is the receiver clock error (s); $dt^t$ is the transmitter clock error (s); $P_{k,t}^t(t_k)$ is the propagation delay in air under standard conditions (m); $d_{k,t}(t_k)$ is the error due to the hardware code delay at the receiver (m); $d_t^t(t_k)$ is the error due to the hardware code delay at the transmitter (m); $d_{k,T}^t(t_k)$ is the multipath error (m); and $\varepsilon^t$ is the random measurement noise (m).

The multipath error depends on the relative geometry of the transmitter and the receiver with respect to the surrounding reflective surfaces as well as on their reflective properties. The coordinates of the receiver as well as the timing information are derived from the simultaneous/sequential observation of four (or more) transmitters. Assuming a constant clock error for the measurements to all transmitters and neglecting all other error terms, the following system of equations is obtained:

$$r_k^n(t) = \sqrt{(x^n - x_k)^2 + (y^n - y_k)^2 + (z^n - z_k)^2} + v\, dt_k \quad (n = 1, 2, 3, 4) \tag{61}$$
Considering this system of equations, a minimum of four BSs is required to yield four equations, which are solved to provide the positioning solution. One approach to solving this time-independent non-linear system of equations is to linearise it and apply a recursive least-squares algorithm for positioning. With more than four BSs used for calculating the position of the MS, the problem is over-determined and can be solved in the least-squares sense to yield an optimal estimate of the MS location. In practice, the solution is obtained iteratively, starting from an initial guess of the receiver position ($x_0$, $y_0$, $z_0$) as shown in Figure 11, where:
$$r_i^2 = (x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2 \tag{62}$$

$$r_{oi}^2 = (x_o - x_i)^2 + (y_o - y_i)^2 + (z_o - z_i)^2 \tag{63}$$

$$x = x_0 + \Delta x \tag{64}$$

$$y = y_0 + \Delta y \tag{65}$$

$$z = z_0 + \Delta z \tag{66}$$

where $\Delta x$, $\Delta y$ and $\Delta z$ are the differences between the true solution and the initial guesses (corrections to the nominal values). Equation (62) can also be written as:

$$r_i^2 = (\Delta x + x_o - x_i)^2 + (\Delta y + y_o - y_i)^2 + (\Delta z + z_o - z_i)^2 \tag{67}$$

After simplification and disregarding the higher-order error terms:

$$\sum_{i=1}^{n}\left(r_i^2 - r_{oi}^2\right) = \sum_{i=1}^{n} 2\begin{bmatrix} x_o - x_i & y_o - y_i & z_o - z_i \end{bmatrix}\begin{Bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{Bmatrix} \tag{68}$$
Equation (68) can be written as:
$$A\,\Delta r = b \tag{69}$$

where:

$$A = 2\begin{bmatrix} x_o - x_1 & y_o - y_1 & z_o - z_1 \\ x_o - x_2 & y_o - y_2 & z_o - z_2 \\ \vdots & \vdots & \vdots \\ x_o - x_n & y_o - y_n & z_o - z_n \end{bmatrix}, \quad \Delta r = \begin{Bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{Bmatrix} \quad \text{and} \quad b = \begin{bmatrix} r_1^2 - r_{01}^2 \\ r_2^2 - r_{02}^2 \\ \vdots \\ r_n^2 - r_{0n}^2 \end{bmatrix} \tag{70}$$

Denoting the ignored error term by $e$, Equation (69) can be written as:

$$e = A\,\Delta r - b \tag{71}$$

or:

$$J = e^T e = (A\,\Delta r - b)^T (A\,\Delta r - b) \tag{72}$$

This is an unconstrained minimisation problem: $J$ needs to be minimised with respect to the unknown values of $\Delta r$. Thus:

$$0 = \frac{\partial J}{\partial \Delta r} = \frac{\partial}{\partial \Delta r}\left(\Delta r^T A^T A\,\Delta r - 2 b^T A\,\Delta r + b^T b\right) \tag{73}$$

$$= 2\left(A^T A\right)\Delta r - 2 A^T b \tag{74}$$

Pre-multiplying Equation (74) by $\frac{1}{2}\left(A^T A\right)^{-1}$ yields the least-squares solution for the linear algebraic equations, which can be written as:

$$\Delta r = A^{**}\, b \tag{75}$$

where:

$$A^{**} = \left(A^T A\right)^{-1} A^T \tag{76}$$

$A^{**}$ is called the pseudo-inverse of $A$ for overdetermined systems. Note that the order of the matrix $\Delta r$ remains 3 × 1, irrespective of the number of rows in $A$, as long as there are a minimum of three rows. The number of rows in $A$ denotes the number of BSs that are in range. To correct the initial guess, a linear (first-order Taylor) expansion of $\rho_k$ is adopted. Let $n(x_n, y_n, z_n)$ denote a preliminary best estimate. The linearised equation is:
$$\frac{x_n - x^s}{R_n^s}\,\Delta x_k + \frac{y_n - y^s}{R_n^s}\,\Delta y_k + \frac{z_n - z^s}{R_n^s}\,\Delta z_k + \Delta dt_k = \Delta R^s \tag{77}$$

where $R_n^s$ is the nominal range measurement to the sth BS and $\Delta R^s$ is the difference between the actual and nominal range measurements. In addition to the ranging errors that affect the sensor position accuracy, the relative geometry of the BSs and the MS affects the estimation. The linearised equations for $s$ BSs are given by:

$$\begin{bmatrix} \Delta x_k \\ \Delta y_k \\ \Delta z_k \\ \Delta dt_k \end{bmatrix} = \left(B^T\, \sigma_0^2\, R_\rho^{-1}\, B\right)^{-1} B^T\, \sigma_0^2\, R_\rho^{-1} \begin{bmatrix} \Delta R^1 \\ \Delta R^2 \\ \Delta R^3 \\ \vdots \\ \Delta R^s \end{bmatrix} \tag{78}$$

where $B$ is the matrix of coefficients of the linear set of equations, $R_\rho$ is the covariance matrix of the pseudorange errors, and $\sigma_0^2$ is a scale factor known as the a priori variance of unit weight.
Various other multilateration algorithms have been proposed in the literature, presenting numerical methods that iteratively solve the simultaneous position equations from an initial estimate [45,46,47]. A comparison of a linear least-squares estimator, a non-linear least-squares estimator, and an iteratively reweighted least-squares technique was also made [46], where it was shown that the non-linear least-squares method performed best. In [48], a probabilistic model of the error in distance measurement for trilateration using Extended Kalman Filtering (EKF) was developed. However, most iterative algorithms converge locally and hence are sensitive to the initial estimate. An inappropriate initial estimate can lead to convergence to a wrong local optimum in the vicinity of the initial estimate, described as mirroring in [40]. Besides, iterative algorithms are computationally intensive. Various closed-form trilateration algorithms have also been discussed in the literature [49,50,51,52]. In [53], a closed-form algorithm for solving the least-squares trilateration problem is discussed, which works for overdetermined systems as well as in cases where there is no real solution. Additionally, global optimisation techniques can also be explored, but their computational complexity can render them unsuitable for real-time applications.
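A minimal NumPy implementation of the iterative least-squares scheme of Equations (62)–(76), under the simplifying assumptions of error-free ranges and known BS coordinates, is sketched below:

```python
import numpy as np

def trilaterate(bs, ranges, x0, iters=10):
    """Iterative least-squares multilateration, Eqs. (62)-(76): linearise about
    the current guess and apply the pseudo-inverse correction of Eq. (75)."""
    bs = np.asarray(bs, dtype=float)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(bs - x, axis=1)       # nominal ranges r_oi, Eq. (63)
        A = 2.0 * (x - bs)                       # rows 2[x_o - x_i, y_o - y_i, z_o - z_i]
        b = np.asarray(ranges) ** 2 - d ** 2     # r_i^2 - r_oi^2, Eq. (68)
        dr, *_ = np.linalg.lstsq(A, b, rcond=None)
        x = x + dr                               # apply correction, Eqs. (64)-(66)
        if np.linalg.norm(dr) < 1e-9:
            break
    return x

bs = [[0, 0, 3.0], [10, 0, 2.5], [0, 10, 2.5], [5, 5, 4.0]]
truth = np.array([4.0, 6.0, 1.2])
ranges = np.linalg.norm(np.asarray(bs) - truth, axis=1)   # noise-free ranges
print(trilaterate(bs, ranges, x0=[5.0, 5.0, 0.0]))        # converges near `truth`
```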

5.3. Combination of Multistatic and Monostatic Approaches

A third approach combines the monostatic and multistatic approaches. This hybrid approach enables relative navigation among platforms equipped with monostatic transceivers. Swarms of platforms communicate with each other via on-board acoustic sensors to calculate their position as well as to augment the knowledge of their environment. The transmitters of the multistatic system are fixed at known locations. The transmitters relay their unique identification information and transmission time along with the acoustic signal to enable positioning based on multilateration. Figure 12 shows the schematic of relative navigation in the case of three independent transceivers. $T_i$ is the ith transmitter fixed at a known location, while $TR_j$ is the jth transceiver on board the platform, with $r_{ij}$ being the distance between the ith transmitter and the jth transceiver at a given instant of time.

6. Overview of State-of-the-Art Acoustic Sensors

Acoustic sensors, mostly ultrasonic, have been widely used in navigation, especially indoor navigation and personal mobility. In [54], an ultrasonic indoor positioning system based on Time Difference of Arrival (TDOA) is discussed. The proposed system consists of fixed active ultrasonic transmitters and passive receivers arranged in an omnidirectional hexagonal pattern on the mobile platform. The indoor navigation system discussed in [40] is based on Time of Arrival (TOA) to calculate the coordinates of the receiver in real time. Synchronisation between the transmitters, at fixed known positions on the ceiling, and the mobile receiver enables a Time Division Multiple Access (TDMA) implementation for positioning of the receiver using multilateration. In [55], a combination of globally and locally referenced Local Positioning Systems (LPS) is introduced to cover an extensive indoor environment. In addition, certain acoustic LPS employ Direct Sequence Code Division Multiple Access (DS-CDMA) techniques [56], where the receiver on board the robot determines its position using the TDOA between a reference beacon and the other beacons, with each beacon transmitting a unique 255-bit Kasami code. More advanced Code Division Multiple Access (CDMA)-based acoustic positioning systems determine the TOA at the receiver using signal correlation [57], as used in all modern Radio Frequency (RF)-based communication systems, including GNSS.
Certain navigation systems [58,59,60,61] use both RF and ultrasonic signals to calculate the position of the mobile platform. The Cricket indoor location system [58] uses RF signals for synchronising the ultrasonic transmitters and the receiver on the mobile platform. In Dolphin [59], the relative position of a node placed on the ceiling is calculated by trilateration using the TOA of ultrasonic signals from three nodes. The RF signal from the nodes contains node location information as well as time synchronisation. A commercial off-the-shelf (COTS) hardware architecture for a self-driving miniature vehicle, developed using a real-time operating system (RTOS), is presented in [62]. The 1:10 scale self-driving vehicle has three ultrasonic sensors, three infrared sensors and a camera. In [63], an ultrasonic relative positioning system is developed for a group of ground-based robots. Moving in formation, the robots can determine the distance and orientation of nearby robots based on TOF evaluation of ultrasonic pulses as well as an RF communication link. In [60], personal tracking is achieved with the help of a portable unit called the Bat, which consists of a radio transceiver, controlling logic and an ultrasonic transducer. Ultrasound receiver units are placed on the ceiling at known points and are interconnected. The Bats are triggered by a radio message from the base station, which synchronises the trigger with the receivers. In [61], a multi-block navigation system is developed which utilises RFID transmitters to trigger the relevant beacons to send an ultrasonic signal for localisation of the mobile object.
Table 4 lists some representative COTS ultrasonic ranging sensors operating at frequencies ranging from 40 kHz to 500 kHz. It can be observed that as the operating frequency of the ultrasonic ranging sensor increases, the detection range decreases mainly due to higher attenuation. As described in Section 2, bats can vary the frequency of their echolocation calls based on the distance from their prey or an obstruction, which allows them to achieve optimal range and angular resolution performance in various conditions.
Various acoustic navigation aids for the visually impaired have been investigated. An ultrasonic FM mobility aid is proposed in [64], while in [65] a bio-inspired mobility aid based on the echolocation of bats is discussed. This device uses downswept FM ultrasound emissions to detect obstacles, which are perceived as localised sound images corresponding to the direction and size of the obstacles. In [66], a digital signal processor-based ultrasonic navigation aid for the visually impaired is presented: the nearest obstacle in front of the user is detected using two-dimensional echolocation and binaural audio feedback is provided. GuideCane [67] and UltraCane [68] use ultrasonic sensors in a cane to help the visually impaired navigate quickly and safely in the presence of obstacles and hazards.

7. Integration of Acoustic Sensors in Multi-Sensor Navigation Systems

Currently, air and surface vehicles rely on a combination of GNSS, Inertial Measurement Units (IMU) and, in some cases, Vision-Based Navigation (VBN) and an Aircraft Dynamics Model (ADM) virtual sensor for navigation and guidance [2]. Integration of an ANS (Acoustic Navigation System) with the existing NGS (Navigation and Guidance System) enables accurate and reliable positioning, even in low-visibility indoor environments, using low Size, Weight and Power, and Cost (SWaP-C) sensors. GNSS signals are prone to data degradation or complete loss of signal in dense urban environments as well as indoors due to multipath effects, interference, or antenna obscuration [36,69]. High-precision IMUs employ relatively high-cost, high-weight and high-volume inertial components (e.g., ring laser and fibre optic gyroscopes), rendering them unsuitable for many ground and air vehicles like cars, Remotely Piloted Aircraft Systems (RPAS), etc. Although commercially available Micro Electromechanical Systems (MEMS) devices can support the design of low-cost IMUs, their accuracy decreases steeply with operating time. The performance of VBN sensors is affected by low-visibility conditions, which can be due to night/low-light conditions, smoke, fog or precipitation, as well as by the presence of transparent or opaque objects [70]. An ADM virtual sensor is dependent on the aircraft’s physical sensors and is based on the assumption that the aircraft is a rigid body with a constant and static mass distribution [71]. An integration of sensor data achieved through multi-sensor data fusion algorithms can lead to an improved positioning solution for air and ground platforms. The performance analysis of different multi-sensor integrated architectures, including the Extended Kalman Filter (EKF)-based VBN-IMU-GNSS (EVIG), the EKF-based VBN-IMU-GNSS-ADM (EVIGA) and the Unscented Kalman Filter (UKF)-based VIGA (UVIGA), has shown that these integration schemes provide improvements in position, velocity, and attitude (PVA) data in all flight phases of air vehicles, when compared to individual sensor measurements. In particular, the EVIGA and UVIGA systems achieve horizontal/vertical position accuracies in line with International Civil Aviation Organization (ICAO) CAT-I and CAT-II requirements [71]. Figure 13 presents the NGS architecture including the ANS, resulting in an ANS-VIGA (AVIGA) integration scheme.
In this integrated system, the data output rate is 2 Hz for the ANS, 20 Hz for the VBN and 2 Hz for GNSS, augmenting the MEMS-IMU running at 100 Hz. The IMU position and velocity information is compared with the Global Positioning System (GPS) position and velocity to form the measurement input of the data fusion block. A similar process is also applied to the attitude data, the differences of which are used in the data-fusion algorithms. The data-fusion algorithm provides estimates of the PVA errors, which are then removed from the sensor measurements to obtain the corrected navigation states. The corrected PVA as well as the estimates of the accelerometer and gyroscope biases are used to update the IMU raw measurements. The attitude data provided by the ADM augmentation and by the IMU are compared to feed the data-fusion block at 100 Hz. The attitude data provided by the VBN and IMU sensors are compared at 20 Hz. The best estimate of attitude is compared with the IMU attitude to obtain the corrected attitude. By employing a UKF, the AVIGA system performance in terms of attitude data accuracy can be increased, in addition to a significant extension of the ADM validity time. Additionally, a pre-filter can also be used to pre-process the Six Degrees-of-Freedom (6-DoF) ADM navigation solution, and another pre-filter can be employed to process the ANS data to remove any outliers. In order to select the navigation sensors based on their performance (accuracy, availability, continuity, and integrity), a sensor selection and prioritisation approach is employed using Adaptive Boolean Decision Logic (ABDL), as shown in Figure 13.
Multiple operating modes can be implemented using the ABDL, wherein each mode is a unique combination of sensors. The selection of sensors is dictated by the flight phase, integrity alerts (both preventive and reactive) and other factors; a minimal sketch of such decision logic is given below. Table 5 lists some of the typical characteristics of COTS sensors used for navigation in small-sized platforms.
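As an illustration of the concept, a minimal sketch of such Boolean decision logic is given below; the mode names, sensor fields and selection rules are hypothetical and do not reproduce the actual ABDL implementation:

```python
from dataclasses import dataclass

@dataclass
class SensorStatus:
    available: bool      # sensor currently producing data
    integrity_ok: bool   # no preventive/reactive integrity alert raised

def select_mode(gnss, ans, vbn, indoor):
    """Hypothetical sensor selection/prioritisation: each branch returns a
    unique sensor combination (operating mode) for the data-fusion filter."""
    if not indoor and gnss.available and gnss.integrity_ok:
        return "GNSS-VBN-IMU"      # nominal outdoor mode
    if ans.available and ans.integrity_ok and vbn.available and vbn.integrity_ok:
        return "ANS-VBN-IMU"       # GNSS-denied, good visibility
    if ans.available and ans.integrity_ok:
        return "ANS-IMU"           # GNSS-denied, low visibility
    return "IMU-ADM"               # dead-reckoning fallback

print(select_mode(gnss=SensorStatus(False, False),
                  ans=SensorStatus(True, True),
                  vbn=SensorStatus(False, False),
                  indoor=True))    # -> ANS-IMU
```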

8. Conclusions and Recommendations for Future Research

This paper presents acoustic wave propagation and its applications in air and surface vehicle navigation. Taking inspiration from echolocating mammals, especially bats, novel acoustic navigation techniques are introduced. Various sound wave attenuation factors, such as geometric divergence, atmospheric absorption, ground effect, screening, and wind and temperature gradient effects, are discussed in detail. Mathematical error modelling for acoustic range measurements, taking into consideration Doppler shift, multipath and atmospheric effects, is presented. Different acoustic sensor arrangements for navigation, namely monostatic, multistatic and a combination of the two, are discussed, and the state-of-the-art in acoustic sensors and their applications in navigation are presented. Finally, the integration of acoustic sensors in multi-sensor navigation systems is discussed.
The use of acoustic sensors for air and surface vehicle navigation holds very high potential, especially when addressing low SWaP-C requirements. Current research trends indicate that future acoustic sensors will implement sophisticated bio-inspired features, which may significantly enhance their range and resolution performance [75,76]. In future applications, acoustic sources might also be used as Signals of Opportunity (SoOP) for navigation in GNSS-challenged/denied environments [77]. Although acoustic sensor networks have been utilised in the design of smart city networks [78] and for intelligent transport systems [79], their potential has not yet been fully exploited in air and surface navigation systems. Acoustic sensors are therefore expected to attract considerable research and industry efforts in the near future, owing to a growing interest in alternative sources of navigation data in urban/indoor environments for a variety of civil and military applications. In order to undertake the development of either monostatic or multistatic acoustic sensors, a careful analysis of aero-acoustic effects is required, especially for aviation applications. In particular, it is essential to assess the magnitude of potential in-band and out-of-band interferences (e.g., engine, propellers, aerodynamic noise) and to implement appropriate engineering solutions that mitigate/prevent the negative effects of such interferences on sensor performance. For instance, the use of larger propellers in rotary-wing aircraft might significantly reduce in-band interference for acoustic sensors operating at higher frequencies. Similar considerations apply to the acoustic emissions produced by gas-turbine engines (i.e., size and number of rotor blades). In this case, however, noise reduction and frequency tailoring can be obtained by adopting other engineering solutions, such as chevrons, liners and noise-absorptive materials [80].

Acknowledgments

The research presented in this article was supported by the home University. The research activity was contributed solely by the authors, and did not involve other contributors.

Author Contributions

R.K. is the lead author and main contributor to the paper. He performed the literature review and wrote the initial draft of the introduction, the theory of sound propagation and the state-of-the-art in acoustic sensors. S.R. contributed to the development of the multi-sensor data fusion section, besides contributing to the literature review and the revision process of the manuscript. A.G. contributed to the echolocation error analysis and the development of the code for Figure 2. He also contributed to the literature review, the revision of the manuscript and the replies to reviewer comments. R.V.S. provided feedback on the development of the manuscript and extensively contributed to addressing reviewer comments. R.S. defined the overall structure of the manuscript, assisted in its preparation and participated in the internal revision process.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Griffin, D.R. Echolocation by blind men, bats and radar. Science 1944, 100, 589–590. [Google Scholar] [CrossRef] [PubMed]
  2. Sabatini, R.; Moore, T.; Hill, C. Avionics-based GNSS integrity augmentation for unmanned aerial systems sense-and-avoid. In Proceedings of the 27th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2014), Tampa Convention Center, Tampa, FL, USA, 8–12 September 2014. [Google Scholar]
  3. Jones, G. Echolocation. Curr. Biol. 2005, 15, R484–R488. [Google Scholar] [CrossRef] [PubMed]
  4. Russo, D.; Jones, G.; Arlettaz, R. Echolocation and passive listening by foraging mouse-eared bats Myotis myotis and M. blythii. J. Exp. Biol. 2007, 210, 166–176. [Google Scholar] [CrossRef] [PubMed]
  5. Chiu, C.; Xian, W.; Moss, C.F. Flying in silence: Echolocating bats cease vocalizing to avoid sonar jamming. Proc. Natl. Acad. Sci. USA 2008, 105, 13116–13121. [Google Scholar] [CrossRef] [PubMed]
  6. Yovel, Y.; Geva-Sagiv, M.; Ulanovsky, N. Click-based echolocation in bats: Not so primitive after all. J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 2011, 197, 515–530. [Google Scholar] [CrossRef] [PubMed]
  7. Simmons, J.A.; Ferragamo, M.; Moss, C.F.; Stevenson, S.B.; Altes, R.A. Discrimination of jittered sonar echoes by the echolocating bat, Eptesicus fuscus: The shape of target images in echolocation. J. Comp. Physiol. A 1990, 167, 589–616. [Google Scholar] [CrossRef] [PubMed]
  8. Xitco, M.J.; Roitblat, H.L. Object recognition through eavesdropping: Passive echolocation in bottlenose dolphins. Learn. Behav. 1996, 24, 355–365. [Google Scholar] [CrossRef]
  9. Adams, A.M.; Davis, K.; Smotherman, M. Suppression of emission rates improves sonar performance by flying bats. Sci. Rep. 2017, 7, 41641. [Google Scholar] [CrossRef] [PubMed]
  10. Abramson, N. THE ALOHA SYSTEM: Another alternative for computer communications. In Proceedings of the AFIPS ‘70 Fall Joint Computer Conference, Houston, TX, USA, 17–19 November 1970; pp. 281–285. [Google Scholar]
  11. Jones, G.; Teeling, E.C. The evolution of echolocation in bats. Trends Ecol. Evolut. 2006, 21, 149–156. [Google Scholar] [CrossRef] [PubMed]
  12. Animal Diversity Web. Available online: http://animaldiversity.org/ (accessed on 26 July 2017).
  13. Cvikel, N.; Levin, E.; Hurme, E.; Borissov, I.; Boonman, A.; Amichai, E.; Yovel, Y. On-board recordings reveal no jamming avoidance in wild bats. Proc. R. Soc. London B Biol. Sci. 2015, 282, 20142274. [Google Scholar] [CrossRef] [PubMed]
  14. Ulanovsky, N.; Fenton, M.B.; Tsoar, A.; Korine, C. Dynamics of jamming avoidance in echolocating bats. Proc. R. Soc. London B Biol. Sci. 2004, 271, 1467–1475. [Google Scholar] [CrossRef] [PubMed]
  15. Kolarik, A.J.; Cirstea, S.; Pardhan, S.; Moore, B.C. A summary of research investigating echolocation abilities of blind and sighted humans. Hear. Res. 2014, 310, 60–68. [Google Scholar] [CrossRef] [PubMed]
  16. Tao, Q.; Chan, C.C.; Luo, Y.-J.; Li, J.-J.; Ting, K.-H.; Lu, Z.-L.; Whitfield-Gabrieli, S.; Wang, J.; Lee, T.M. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals. Brain Topogr. 2017, 30, 364–379. [Google Scholar] [CrossRef] [PubMed]
  17. Ramezani, A.; Shi, X.; Chung, S.-J.; Hutchinson, S. Bat Bot (B2), a biologically inspired flying machine. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 3219–3226. [Google Scholar]
18. Schnitzler, H.-U.; Vergl, J. Die Ultraschall-Ortungslaute der Hufeisen-Fledermäuse (Chiroptera-Rhinolophidae) in verschiedenen Orientierungssituationen. Zeitschrift für Vergleichende Physiologie 1968, 57, 376–408. (In German)
19. Steckel, J.; Peremans, H. BatSLAM: Simultaneous localization and mapping using biomimetic sonar. PLoS ONE 2013, 8, e54076.
20. Big-eared-townsend-fledermaus. Available online: https://commons.wikimedia.org/wiki/File:Big-eared-townsend-fledermaus.jpg (accessed on 2 October 2017).
21. Zagzebski, J.A. Essentials of Ultrasound Physics; Mosby: Maryland Heights, MO, USA, 1996.
22. Sound Propagation Theory & Methodologies. Available online: http://www.npl.co.uk/upload/pdf/sound-propagation-theory-methodol-appendix_a.pdf (accessed on 19 April 2017).
23. Bass, H.; Sutherland, L.; Zuckerwar, A.; Blackstock, D.; Hester, D. Atmospheric absorption of sound: Further developments. J. Acoust. Soc. Am. 1995, 97, 680–683.
24. Attenborough, K. Sound propagation in the atmosphere. In Springer Handbook of Acoustics; Springer: Berlin, Germany, 2014; pp. 117–155.
25. Kneizys, F.X.; Shettle, E.; Abreu, L.; Chetwynd, J.; Anderson, G. Users Guide to LOWTRAN 7; Air Force Geophysics Lab: Hanscom AFB, MA, USA, 1988.
26. Sabatini, R.; Richardson, M. Airborne Laser Systems Testing and Analysis; The Research and Technology Organisation: Neuilly-sur-Seine, France, 2010.
27. International Organization for Standardization. ISO 9613-2:1996: Acoustics—Attenuation of Sound During Propagation Outdoors—Part 2: General Method of Calculation; ISO: Geneva, Switzerland, 1996.
28. Daigle, G.; Piercy, J.; Embleton, T. Line-of-sight propagation through atmospheric turbulence near the ground. J. Acoust. Soc. Am. 1983, 74, 1505–1513.
29. Juvé, D.; Blanc-Benon, P.; Chevret, P. Sound propagation through a turbulent atmosphere: Influence of the turbulence model. In Proceedings of the Sixth International Symposium on Long Range Sound Propagation, NRC Canada, Ottawa, ON, Canada, 1994; pp. 270–282.
30. Von Karman, T. Progress in the statistical theory of turbulence. Proc. Natl. Acad. Sci. USA 1948, 34, 530–539.
31. Johnson, M.A.; Raspet, R.; Bobak, M.T. A turbulence model for sound propagation from an elevated source above level ground. J. Acoust. Soc. Am. 1987, 81, 638–646.
32. Zaporozhets, O.; Tokarev, V.; Attenborough, K. Aircraft Noise: Assessment, Prediction and Control; CRC Press: Boca Raton, FL, USA, 2011.
33. International Organization for Standardization. ISO 9613-1:1993: Acoustics—Attenuation of Sound During Propagation Outdoors—Part 1: Calculation of the Absorption of Sound by the Atmosphere; ISO: Geneva, Switzerland, 1993.
34. Wiener, F.M.; Keast, D.N. Experimental study of the propagation of sound over ground. J. Acoust. Soc. Am. 1959, 31, 724–733.
35. Crocker, M.J. Handbook of Acoustics; John Wiley & Sons: Hoboken, NJ, USA, 1998.
36. Sabatini, R.; Moore, T.; Hill, C. A New Avionics-Based GNSS Integrity Augmentation System: Part 1–Fundamentals. J. Navig. 2013, 66, 363–384.
37. Bijjahalli, S.; Ramasamy, S.; Sabatini, R. A novel vehicle-based GNSS integrity augmentation system for autonomous airport surface operations. J. Intell. Robot. Syst. 2017, 87, 379–403.
38. Jensen, F.B.; Kuperman, W.A.; Porter, M.B.; Schmidt, H. Computational Ocean Acoustics; Springer Science & Business Media: Berlin, Germany, 2000.
39. Ruijgrok, G.J. Elements of Aviation Acoustics; Delft University Press: Delft, The Netherlands, 2004.
40. Kapoor, R.; Ramasamy, S.; Gardi, A.; Bieber, C.; Silverberg, L.; Sabatini, R. A novel 3D multilateration sensor using distributed ultrasonic beacons for indoor navigation. Sensors 2016, 16, 1637.
41. Phillips, A.H. Geometrical determination of PDOP. Navigation 1984, 31, 329–337.
42. Roa, J.O.; Jiménez, A.R.; Seco, F.; Prieto, J.C.; Ealo, J. Optimal placement of sensors for trilateration: Regular lattices vs meta-heuristic solutions. In International Conference on Computer Aided Systems Theory; Springer: Berlin/Heidelberg, Germany, 2007; pp. 780–787.
43. Kapoor, R.; Ramasamy, S.; Gardi, A.; Sabatini, R. A bio-inspired acoustic sensor system for UAS navigation and tracking. In Proceedings of the 2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC), St. Petersburg, FL, USA, 17–21 September 2017; pp. 1–7.
44. Sabatini, R.; Moore, T.; Ramasamy, S. Global navigation satellite systems performance analysis and augmentation strategies in aviation. Prog. Aerosp. Sci. 2017, 95, 45–98.
45. Foy, W.H. Position-location solutions by Taylor-series estimation. IEEE Trans. Aerosp. Electron. Syst. 1976, AES-12, 187–194.
46. Navidi, W.; Murphy, W.S.; Hereman, W. Statistical methods in surveying by trilateration. Comput. Stat. Data Anal. 1998, 27, 209–227.
47. Coope, I. Reliable computation of the points of intersection of n spheres in R^n. ANZIAM J. 2000, 42, 461–477.
48. Pent, M.; Spirito, M.; Turco, E. Method for positioning GSM mobile stations using absolute time delay measurements. Electron. Lett. 1997, 33, 2019–2020.
49. Fang, B.T. Trilateration and extension to global positioning system navigation. J. Guid. Control Dyn. 1986, 9, 715–717.
50. Ziegert, J.C.; Mize, C.D. The laser ball bar: A new instrument for machine tool metrology. Precis. Eng. 1994, 16, 259–267.
51. Manolakis, D.E. Efficient solution and performance analysis of 3-D position estimation by trilateration. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1239–1248.
52. Thomas, F.; Ros, L. Revisiting trilateration for robot localization. IEEE Trans. Robot. 2005, 21, 93–101.
53. Zhou, Y. A closed-form algorithm for the least-squares trilateration problem. Robotica 2011, 29, 375–389.
54. Yayan, U.; Yucel, H.; Yazici, A. A low cost ultrasonic based positioning system for the indoor navigation of mobile robots. J. Intell. Robot. Syst. 2015, 78, 541–552.
55. Gualda, D.; Ureña, J.; García, J.C.; Lindo, A. Locally-referenced ultrasonic–LPS for localization and navigation. Sensors 2014, 14, 21750–21769.
56. Ureña, J.; Hernández, A.; Jiménez, A.; Villadangos, J.M.; Mazo, M.; García, J.; García, J.J.; Álvarez, F.J.; De Marziani, C.; Pérez, M. Advanced sensorial system for an acoustic LPS. Microprocess. Microsyst. 2007, 31, 393–401.
57. Seco, F.; Prieto, J.C.; Ruiz, A.R.J.; Guevara, J. Compensation of multiple access interference effects in CDMA-based acoustic positioning systems. IEEE Trans. Instrum. Meas. 2014, 63, 2368–2378.
58. Priyantha, N.B. The Cricket Indoor Location System; Massachusetts Institute of Technology: Cambridge, MA, USA, 2005.
59. Fukuju, Y.; Minami, M.; Morikawa, H.; Aoyama, T. DOLPHIN: An Autonomous Indoor Positioning System in Ubiquitous Computing Environment. In Proceedings of the WSTFES 2003 IEEE Workshop on Software Technologies for Future Embedded Systems, Hokkaido, Japan, 15–16 May 2003.
60. Harter, A.; Hopper, A.; Steggles, P.; Ward, A.; Webster, P. The anatomy of a context-aware application. Wirel. Netw. 2002, 8, 187–197.
61. Park, J.; Choi, M.; Zu, Y.; Lee, J. Indoor localization system in a multi-block workspace. Robotica 2010, 28, 397–403.
62. Berger, C.; Hansson, J. COTS-architecture with a real-time OS for a self-driving miniature vehicle. In Proceedings of the SAFECOMP 2013 Workshop ASCoMS (Architecting Safety in Collaborative Mobile Systems) of the 32nd International Conference on Computer Safety, Reliability and Security, Toulouse, France, 24–27 September 2013.
63. Rivard, F.; Bisson, J.; Michaud, F.; Létourneau, D. Ultrasonic relative positioning for multi-robot systems. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 323–328.
64. Kay, L. An ultrasonic sensing probe as a mobility aid for the blind. Ultrasonics 1964, 2, 53–59.
65. Ifukube, T.; Sasaki, T.; Peng, C. A blind mobility aid modeled after echolocation of bats. IEEE Trans. Biomed. Eng. 1991, 38, 461–465.
66. Mihajlik, P.; Guttermuth, M.; Seres, K.; Tatai, P. DSP-based ultrasonic navigation aid for the blind. In Proceedings of the IMTC 2001 18th IEEE Instrumentation and Measurement Technology Conference Rediscovering Measurement in the Age of Informatics (Cat. No.01CH 37188), Budapest, Hungary, 21–23 May 2001; pp. 1535–1540.
67. Ulrich, I.; Borenstein, J. The GuideCane-applying mobile robot technologies to assist the visually impaired. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2001, 31, 131–136.
68. Marion, A.; Michael, A. Assistive Technology for Visually Impaired and Blind People; Springer: London, UK, 2008.
69. Sabatini, R.; Moore, T.; Hill, C. A new avionics-based GNSS integrity augmentation system: Part 2–Integrity flags. J. Navig. 2013, 66, 501–522.
70. Sabatini, R.; Richardson, M.; Bartel, C.; Shaid, T.; Ramasamy, S. A low-cost vision based navigation system for small size unmanned aerial vehicle applications. J. Aeronaut. Aerosp. Eng. 2013, 2, 1–16.
71. Cappello, F.; Ramasamy, S.; Sabatini, R. A low-cost and high performance navigation system for small RPAS applications. Aerosp. Sci. Technol. 2016, 58, 529–545.
72. Grabe, V.; Bülthoff, H.H.; Giordano, P.R. On-board velocity estimation and closed-loop control of a quadrotor UAV based on optical flow. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 491–497.
73. Wu, A.; Johnson, E.N.; Kaess, M.; Dellaert, F.; Chowdhary, G. Autonomous Flight in GPS-Denied Environments Using Monocular Vision and Inertial Sensors. J. Aerosp. Inf. Syst. 2013, 10, 172–186.
74. Pfeifer, N.; Glira, P.; Briese, C. Direct georeferencing with on board navigation components of light weight UAV platforms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, B7.
75. Schillebeeckx, F.; De Mey, F.; Vanderelst, D.; Peremans, H. Biomimetic sonar: Binaural 3D localization using artificial bat pinnae. Int. J. Robot. Res. 2011, 30, 975–987.
76. Aiordachioaie, D.; Frangu, L.; Epure, S. Airborne ultrasonic image generation with biomimetic sonar head. IET Radar Sonar Navig. 2013, 7, 933–949.
77. Kapoor, R.; Ramasamy, S.; Gardi, A.; Sabatini, R. UAV Navigation using Signals of Opportunity in Urban Environments: A Review. Energy Procedia 2017, 110, 377–383.
78. Jin, J.; Gubbi, J.; Marusic, S.; Palaniswami, M. An information framework for creating a smart city through internet of things. IEEE Internet Things J. 2014, 1, 112–121.
79. Fazenda, B.; Atmoko, H.; Gu, F.; Guan, L.; Ball, A. Acoustic based safety emergency vehicle detection for intelligent transport systems. In Proceedings of the 2009 ICCAS-SICE, Fukuoka, Japan, 18–21 August 2009; pp. 4250–4255.
80. Perkins, H.D.; Paxson, D.E.; Snyder, C.A. Advanced Engine Designs and Concepts Beyond the Geared Turbofan. In Encyclopedia of Aerospace Engineering; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 2016; pp. 1–13.
Figure 1. Bat pinnae of Townsend's big-eared bat, Corynorhinus townsendii [20].
Figure 2. Attenuation of sound in air as a function of frequency (log-log plot).
Figure 3. Effect of ground on the received sound pressure level.
Figure 4. Obstacle between the source (S) and the receiver (R).
Figure 5. Diffraction of sound by a thin barrier.
Figure 6. Reference geometry for Doppler shift analysis.
Figure 7. Doppler shift sound field.
Figure 8. Multipath.
Figure 9. Geometric reflection model.
Figure 10. Multistatic sensor arrangement.
Figure 11. Actual and estimated position of receiver.
Figure 12. Relative navigation of multiple transceivers.
Figure 13. AVIGA architecture.
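
For readers wishing to reproduce a curve like the one in Figure 2, the following is a minimal sketch (not the authors' published code) of the pure-tone atmospheric absorption model of ISO 9613-1 [33] and Bass et al. [23]; the default temperature, humidity and pressure values are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def attenuation_db_per_m(f, T=293.15, rh=50.0, p=101.325):
    """Pure-tone atmospheric absorption (dB/m) per ISO 9613-1 [33] /
    Bass et al. [23]. f: frequency (Hz), T: temperature (K),
    rh: relative humidity (%), p: ambient pressure (kPa)."""
    p_r, T_0, T_01 = 101.325, 293.15, 273.16
    # Molar concentration of water vapour (%)
    C = -6.8346 * (T_01 / T) ** 1.261 + 4.6151
    h = rh * (10.0 ** C) * (p_r / p)
    # Relaxation frequencies of oxygen and nitrogen (Hz)
    f_rO = (p / p_r) * (24.0 + 4.04e4 * h * (0.02 + h) / (0.391 + h))
    f_rN = (p / p_r) * (T / T_0) ** -0.5 * (
        9.0 + 280.0 * h * np.exp(-4.170 * ((T / T_0) ** (-1.0 / 3.0) - 1.0)))
    # Absorption coefficient (dB/m)
    return 8.686 * f ** 2 * (
        1.84e-11 * (p_r / p) * (T / T_0) ** 0.5
        + (T / T_0) ** -2.5 * (
            0.01275 * np.exp(-2239.1 / T) / (f_rO + f ** 2 / f_rO)
            + 0.1068 * np.exp(-3352.0 / T) / (f_rN + f ** 2 / f_rN)))

freqs = np.logspace(3, 6, 300)  # 1 kHz to 1 MHz
plt.loglog(freqs, attenuation_db_per_m(freqs))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Attenuation (dB/m)")
plt.show()
```

At 40 kHz, 20 °C and 50% relative humidity this model gives roughly 1.3 dB/m, which illustrates why the higher-frequency sensors in Table 4 are limited to short detection ranges.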
Table 1. Echolocation call types for different bat species based on diet [11,12].

| Diet | Echolocation Call Type | Bat Species |
|---|---|---|
| Fruits | Broadband clicks of short duration | Egyptian fruit bat |
| Moths, beetles, flies and other insects | Narrowband with dominant fundamental harmonic | Eastern red bat |
| Flying insects and small fruits | Multiharmonic narrowband, faintly audible to humans | Black-bearded tomb bat |
| Aquatic insects like midges, crane flies and black flies | Short, broadband, with dominant fundamental harmonic | Daubenton's bat |
| Large insects, spiders and small vertebrates | Short, multiharmonic broadband | Greater false vampire bat |
| Moths | Long, multiharmonic broadband | Madagascar sucker-footed bat |
| Butterfly, moths and beetles | Constant frequency (CF) and frequency modulated (FM) | Greater horseshoe bat |
| Beetles, moths, flies, wasps, and flying ants | Downswept FM narrowband | Big brown bat |
| Beetles, moths, flies, and small insects | FM broadband | Townsend's big-eared bat |
Table 2. Acoustic sensor ranging parameters.

| Type | Parameters |
|---|---|
| Design parameters | Transmitted power, carrier frequency and PRF |
| Measured observables | Range, velocity, azimuth and elevation |
| Environmental parameters | Temperature, wind, humidity and environmental layout |
| Performance indicators | Position accuracy and maximum range |
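
The design parameters and performance indicators in Table 2 are directly coupled: for a pulsed monostatic sensor, the pulse repetition frequency (PRF) bounds the maximum unambiguous range, since each echo must return before the next pulse is transmitted. A minimal sketch of this relation (the 20 Hz PRF is a hypothetical value, not one quoted in the paper):

```python
def max_unambiguous_range(prf_hz, c=343.0):
    """Maximum unambiguous range (m) of a pulsed echo-ranging sensor:
    the round-trip travel time at this range equals one pulse interval."""
    return c / (2.0 * prf_hz)

print(max_unambiguous_range(20.0))  # ~8.6 m at a 20 Hz PRF
```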
Table 3. Ranging parameters.

| Variable | Value (Unit) |
|---|---|
| Speed of sound at sea level (c0) | 340.27 m/s |
| Time of flight (t) | 0.009 s |
| Mach number for the sound source (M) | 0.024 |
| Direction of receiver motion to the LOS (θ) | 30 deg |
| Variation of temperature with height (λ) | 0.0065 K/m |
| Speed of sound emitted by the source (c) at 20 °C | 343 m/s |
| Distance between the i-th transmitter and receiver (\|TriRi\|) | 10 m |
| Distance between the i-th transmitter and reflection point (\|TriSi\|) | 2 m |
| Distance between the i-th receiver and reflection point (\|RiSi\|) | 8.328 m |
| Sea-level temperature (T0) | 288 K |
| Horizontal wind velocity (vw) | 2.95 m/s |
| Angle of wavefront normal with the horizontal (δ) | 30 deg |
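
As a worked example, the Table 3 values yield the basic ranging, Doppler and multipath quantities used in the error analysis. The Doppler expression below is one common convention for a source closing along the line of sight and is shown for illustration only; the paper's exact formulation may differ.

```python
import math

# Values taken from Table 3
c0, c20 = 340.27, 343.0                 # speed of sound: sea level / 20 °C (m/s)
t = 0.009                               # time of flight (s)
M, theta = 0.024, math.radians(30.0)    # Mach number, angle to the LOS
d_TrR, d_TrS, d_SR = 10.0, 2.0, 8.328   # direct and reflected path legs (m)

rng = c0 * t                            # one-way range from time of flight: ~3.06 m
doppler = 1.0 / (1.0 - M * math.cos(theta))  # received/emitted frequency: ~1.021
excess = d_TrS + d_SR - d_TrR           # multipath excess path length: 0.328 m
delay = excess / c20                    # multipath delay: ~0.96 ms
print(rng, doppler, excess, delay)
```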
Table 4. Commercially available ultrasonic ranging sensors.

| Ultrasonic Sensor | Manufacturer | Transducer Frequency | Detection Range (mm) |
|---|---|---|---|
| MA40SR/S | Murata | 40 kHz | Sound pressure level (SPL) dependent |
| MB8450 | MaxBotix | 42 kHz | 500–5000 |
| MA58MF14-7N | Murata | 58 kHz | SPL dependent |
| UC6000-30GM-E6R2-V15 | Pepperl + Fuchs | 65 kHz | 350–6000 |
| XX630A3PCM12 | Telemecanique Sensors | 75 kHz | 203–8000 |
| 3RG6014-3AD00-PF | Pepperl + Fuchs | 80 kHz | 600–6000 |
| UC4000-30GM-IUR2-V15 | Pepperl + Fuchs | 85 kHz | 200–4000 |
| UM30-214113 | Sick | 120 kHz | 350–3400 |
| UB2000-F54-I-V15 | Pepperl + Fuchs | 175 kHz | 80–2000 |
| UC2000-30GM-IUR2-V15 | Pepperl + Fuchs | 180 kHz | 80–2000 |
| BUS M18M1-GPXI-12/100-S92G | Balluff | 200 kHz | 120–1300 |
| T30UIPAQ | Banner | 228 kHz | 150–1000 |
| UGT507 | ifm electronic | 230 kHz | Maximum of 1200 |
| UNDK 30U6103/S14 | Baumer | 240 kHz | 100–1000 |
| UNDK 20U 6912 | Baumer | 290 kHz | 60–400 |
| XX518A3PAM12 | Telemecanique Sensors | 300 kHz | 51–508 |
| UB400-12GM-E5-V1 | Pepperl + Fuchs | 310 kHz | 30–400 |
| BUS M30M1-PPX-03/25-S92K | Balluff | 320 kHz | 30–350 |
| XXV18B1PBM12 | Telemecanique Sensors | 360 kHz | 3–50 |
| UB500-18GM75-E5-V15 | Pepperl + Fuchs | 380 kHz | 30–500 |
| UB300-18GM40-E5-V1 | Pepperl + Fuchs | 390 kHz | 30–300 |
| UM30-212113 | Sick | 400 kHz | 60–350 |
| XX512A1KAM8 | Telemecanique Sensors | 500 kHz | 25–152 |
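
The frequency column in Table 4 also explains the range column: a higher carrier frequency gives a shorter wavelength, and hence finer spatial resolution, but suffers stronger atmospheric absorption (Figure 2), so the maximum detection range shrinks. A quick wavelength check for three of the listed sensors:

```python
C_AIR = 343.0  # speed of sound at 20 °C (m/s)

# (sensor, transducer frequency in Hz): three rows from Table 4
for name, f in [("MA40SR/S", 40e3), ("UM30-214113", 120e3), ("XX512A1KAM8", 500e3)]:
    print(f"{name}: wavelength = {1000.0 * C_AIR / f:.2f} mm")
# 40 kHz -> ~8.6 mm; 120 kHz -> ~2.9 mm; 500 kHz -> ~0.7 mm
```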
Table 5. Transport-grade COTS navigation sensor characteristics [72,73,74].

| Sensor | Data Output Rate (Hz) | Size (L × W × H) (cm³) | Weight (g) | Power (W) |
|---|---|---|---|---|
| MEMS-based Inertial Navigation System | 100 | 12 | 15–50 | <0.5 |
| Vision-based navigation | 20 | 64 | 50–100 | ~1 |
| GNSS receiver | 2 | 10 | 20–60 | <0.4 |
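
The disparate output rates in Table 5 are what a multi-sensor integration scheme such as AVIGA (Figure 13) must reconcile. The following is a skeletal sketch of one way to schedule such a loop, assuming a hypothetical filter cycle clocked by the fastest sensor; the function bodies are placeholders, not the AVIGA implementation.

```python
def predict_with_ins():   pass  # placeholder: inertial time update (100 Hz)
def update_with_vision(): pass  # placeholder: vision measurement update
def update_with_gnss():   pass  # placeholder: GNSS measurement update

def fusion_cycle(n_ticks=100):
    """One second of fusion clocked at the 100 Hz INS rate (rates from Table 5)."""
    for k in range(n_ticks):
        predict_with_ins()          # every tick: 100 Hz
        if k % 5 == 0:
            update_with_vision()    # every 5th tick: 20 Hz
        if k % 50 == 0:
            update_with_gnss()      # every 50th tick: 2 Hz

fusion_cycle()
```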
