Article

Preliminary Study of a Millimeter Wave FMCW InSAR for UAS Indoor Navigation

Department of Industrial Engineering, University of Naples Federico II, Piazzale Tecchio 80, Naples 80125, Italy
*
Author to whom correspondence should be addressed.
Sensors 2015, 15(2), 2309-2335; https://doi.org/10.3390/s150202309
Submission received: 4 November 2014 / Accepted: 13 January 2015 / Published: 22 January 2015
(This article belongs to the Section Remote Sensors)

Abstract

Small autonomous unmanned aerial systems (UAS) could be used for indoor inspection in emergency missions, such as damage assessment or the search for survivors in dangerous environments, e.g., power plants, underground railways, mines and industrial warehouses. Two basic functions are required to carry out these tasks, that is autonomous GPS-denied navigation with obstacle detection and high-resolution 3D mapping with moving target detection. State-of-the-art sensors for UAS are very sensitive to environmental conditions and often fail in the case of poor visibility caused by dust, fog, smoke, flames or other factors that are met as nominal mission scenarios when operating indoors. This paper is a preliminary study concerning an innovative radar sensor based on the interferometric Synthetic Aperture Radar (SAR) principle, which has the potential to satisfy stringent requirements set by indoor autonomous operation. An architectural solution based on a frequency-modulated continuous wave (FMCW) scheme is proposed after a detailed analysis of existing compact and lightweight SAR. A preliminary system design is obtained, and the main imaging peculiarities of the novel sensor are discussed, demonstrating that high-resolution, high-quality observation of an assigned control volume can be achieved.

1. Introduction

Unmanned aerial systems (UAS) are commonly defined as medium-small scale uninhabited aerial vehicles able to attain stable flight operation thanks to a control system that can be programmed to follow a certain flight path or can be remotely controlled from a ground station. Today, UAS are moving toward autonomous sense and detect functions [1,2] and are performing missions with increasing levels of autonomy and complexity, such as repetitive reconnaissance and surveillance, whereby human presence onboard is undesirable or inadvisable. Outdoor flying unmanned vehicles have received a considerable amount of research and industrial attention over the years. Although limitations exist concerning UAS inclusion in air space, today, complete systems are available for military and civilian applications [3,4].

On the contrary, there is still much to be done in the area of indoor or urban autonomous operation, both for vehicle navigation and for monitoring or exploration. The application to unknown building interiors and very cluttered urban or natural environments is one of the most demanding issues envisioned for UAS, since it requires the real-time capability: (i) to detect and identify very different objects, such as buildings, walls, caves, infrastructures or underground facilities, in problematic and unpredictable illumination conditions; (ii) to navigate through complex-shaped passageways, even avoiding non-stationary obstacles; and (iii) to gather and relay information. Differently from outdoor applications, the use of very compact and extremely lightweight small UAS or micro aerial vehicles (MAV) represents an additional strong constraint when indoor flight operations must be performed. Target mission scenarios include high risk indoor inspection, e.g., nuclear power plant failure and leakage or tunnel roof collapse in mines, but also the search for survivors in cluttered, dense urban environments or indoors, such as underground railways or industrial warehouses. Pipeline inspection and nuclear, biological or chemical (NBC) emergency reconnaissance represent additional dangerous applications that could take full advantage of small UAS and MAV operations. Completely different scenarios, but similar capabilities, are required in planetary exploration. Specifically, in past decades, rovers have emerged as one of the most important tools for planetary exploration. Important drawbacks of rover systems are the limited coverage they can achieve and the uncertainty of the terrain. For planetary and planet-like bodies, when a significant atmosphere is present, the above limitations can be overcome by aerial vehicles. In addition to Earth, several planets, such as Venus, Mars, Jupiter, Saturn, Uranus and Neptune, but also Saturn's moon Titan, are endowed with an adequate atmosphere. Aerial vehicles proposed and investigated for planetary exploration include [5–7] airplanes and gliders, helicopters, balloons and airships. The most investigated solutions are based on lighter-than-atmosphere robotic airships combining the long-term airborne capability of balloons with the maneuverability of airplanes or helicopters.

The introduced applications involve flight operation in GPS-denied and substantially unknown environments with a potentially large communication latency (planetary explorations) or extended communication blackout periods (indoor emergencies). The accomplishment of two basic functions is required to carry out these tasks: fully autonomous navigation with obstacle detection/avoidance capability and high resolution 3D mapping and monitoring of the target area, including moving target detection. Unless the small UAS is provided with hovering capability, autonomous navigation clearly presents the most stringent time requirements. Regarding obstacle avoidance, in theory, accurate geometric models of the operational environment combined with thematic information and the description of all of the present objects could reduce the need for continuous and real-time sensing. However, those data are often neither updated nor available at the required spatial resolution and accuracy. Furthermore, unexpected obstacles, for instance as a consequence of an accident that requires investigation, can appear anytime and anywhere; hence, real-time mapping capabilities are required, too.

The set of data needed to perform these tasks cannot be provided by sensors that are potentially adequate under conventional operating conditions, such as laser scanners and optical cameras, owing to their physical size, weight, strong dependence on illumination conditions and possible poor visibility caused by environmental factors. Conversely, radar sensors are able to operate in any illumination condition, and microwave carrier frequencies allow for coherent signal detection to be performed, thus resulting in significantly increased sensitivity and instant access to range information. In addition, high-resolution 3D mapping can be provided by combining the Synthetic Aperture Radar (SAR) technique with radar interferometry [8,9]. This also makes velocity information available via Doppler processing, which is a valuable feature for sensors operating onboard moving platforms. Finally, millimeter wave radar technology has been receiving increasing interest for application in small UAS [10,11] thanks to the limited size and power requirements and the capability to penetrate smoke and fire [12,13].

The objective of this work is to assess the main features, possible architectural schemes and technical solutions and to carry out a preliminary design of a very innovative radar sensor for novel autonomous operations onboard small UAS. Table 1 summarizes the key driving issues in the preliminary design that will be presented in the paper. First of all, it should be noted that for matching with the considered operational scenarios, the sensor must be compact, lightweight and characterized by low power consumption. In addition, it has to guarantee very high 3D resolution and accuracy, as well as the capability to perform real-time onboard processing in order to support autonomous navigation, exploration and mapping in completely unknown and unstructured environments.

2. System Architecture

2.1. State-of-the-Art Analysis

In the last decade, several compact and lightweight SARs have been developed and tested for different purposes and applications. Table 2 lists the most relevant systems together with their main features, as available today in the open literature. All of them are devoted to outdoor operations, such as surveillance and remote sensing applications, and work in side-looking mode with limited pointing capability. Vision-based navigation through those radar sensors has not been implemented yet. None of these systems satisfies all of the constraints of Table 1. Real-time onboard operation is rarely enabled; resolutions can be insufficient; and in most cases, the mass and power requirements exceed small platform availability. Nonetheless, a few interesting features can be highlighted. MiniSAR by Sandia National Labs [14] and Miniaturized SAR (MISAR) by European Aeronautic Defence and Space Company N.V. (EADS) [11] both include a double gimbal structure, which allows mechanical steering of the antenna to be achieved, thus making SAR interferometry along multiple directions possible. In both cases, two separate antennas, one for transmission and one for reception, are accommodated to implement a frequency-modulated continuous wave (FMCW) scheme. More than half of the listed sensors exploit this architectural scheme, even though they do not possess a gimbal structure. Finally, it is important to remark that AiR-Based REmote Sensing (ARBRES) X-Band SAR [15] and MetaSensing X-Band SAR [16] make use of three antennas, namely two receiving and one transmitting, for performing FMCW single-pass interferometry.

In the following subsections, a critical analysis of some key design solutions is presented, and then, an adequate innovative architecture is proposed.

2.2. Why FMCW SAR

First of all, it is necessary to point out the advantages connected to the use of FMCW SAR. FMCW radar transmits a frequency-modulated signal, as is usual in SAR, but as a continuous wave, differently from most realizations. The received echo, which is delayed by the round-trip delay τ associated with the target range, is mixed with the transmitted signal [17]. For a linear frequency modulation, the output of the mixing process, namely the beat signal, has two Fourier components at different frequencies. The first component is a signal centered at a constant frequency lower than the carrier frequency [18]. The second component is a residual signal centered approximately at twice the carrier frequency, which has less energy with respect to the former component [17] and is filtered out. The process involving both the mixing of transmitted and received signals and the low-pass filtering of the beat signal is also called deramp-on-receive.

The aforementioned low, constant frequency in the beat signal, which is computed by differentiating the phase term of the beat signal with respect to time, is labeled as the beat frequency. The beat frequency holds strong relevance in FMCW radar, as it is directly proportional to the target range through the sweep rate and the propagation velocity, thus allowing the system to compute the range by measuring the beat frequency. The theoretical value for the range resolution is [17]:

$$d_r = \frac{c}{2B}\tag{1}$$
where c is light velocity and B is the transmitted bandwidth. Actually, Equation (1) is equivalent to the conventional pulsed radar theoretical range resolution [8,36]. However, it is important to remark that the FMCW range compressed signal is obtained in the frequency domain rather than in the time domain.
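
To make the beat-frequency-to-range relation concrete, the following minimal sketch (not part of the paper's design) computes the beat frequency of a point target for an assumed linear sweep and inverts it back to range; the bandwidth and sweep duration are example values only.

```python
# Minimal sketch (illustrative values, not the paper's design): beat frequency
# of a point target for a linear FMCW sweep and the range recovered from it.
c = 3e8          # propagation velocity (m/s)
B = 1.5e9        # transmitted bandwidth (Hz), example value
PRI = 1 / 125.0  # sweep duration (s), example value

def beat_frequency(R):
    """Beat frequency for a target at range R: sweep rate times round-trip delay."""
    return (B / PRI) * (2.0 * R / c)

def range_from_beat(f_b):
    """Invert the relation: R = f_b * c * PRI / (2 B)."""
    return f_b * c * PRI / (2.0 * B)

d_r = c / (2.0 * B)   # theoretical range resolution, Equation (1)
print(f"range resolution d_r = {d_r * 100:.1f} cm")
print(f"target at 10 m -> beat frequency {beat_frequency(10.0) / 1e3:.1f} kHz")
print(f"recovered range: {range_from_beat(beat_frequency(10.0)):.2f} m")
```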

The FMCW scheme guarantees decisive advantages with respect to conventional pulsed SAR, especially when compact systems have to be realized. Continuous transmission, i.e., a unity duty cycle η = 1, involves less transmitted peak power, which makes possible significant simplifications in the power generation and conditioning unit, along with a strong reduction in power requirements with respect to pulsed systems. In addition, deramp-on-receive relies on the sampling of the beat signal bandwidth BB instead of the whole transmitted bandwidth B. This means that even a GHz bandwidth can be easily handled by a MHz sampling frequency fS, because BB ≪ B, thus allowing simpler and cheaper hardware equipment.
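
The sampling advantage of deramp-on-receive can be quantified with a short sketch; the maximum range and sweep parameters below are illustrative assumptions, not design values.

```python
# Sketch of the deramp-on-receive sampling advantage: the beat-signal bandwidth
# B_B is set by the farthest target of interest, not by the transmitted
# bandwidth B. All numbers are example assumptions.
c = 3e8
B = 1.5e9          # transmitted bandwidth (Hz)
PRI = 1 / 125.0    # sweep duration (s)
R_max = 30.0       # farthest range of interest (m)

B_B = (B / PRI) * (2.0 * R_max / c)  # maximum beat frequency = beat bandwidth
f_S = 2.0 * B_B                      # Nyquist rate for the real-valued beat signal

print(f"transmitted bandwidth B   = {B / 1e9:.2f} GHz")
print(f"beat-signal bandwidth B_B = {B_B / 1e3:.1f} kHz")
print(f"required sampling rate    = {f_S / 1e3:.1f} kHz "
      f"(vs. {2 * B / 1e9:.0f} GHz without deramping)")
```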

The FMCW's peculiar features in comparison to traditional pulsed technology are consequent to the motion during continuous transmission. A better understanding of motion effects on the signal is given by [37] in which the following equation is reported for the beat signal in the two-dimensional spatial frequency domain:

$$S_B(K_r, K_x) = \exp\left(-jK_x v t\right)\exp\left(-jR_0\sqrt{K_r^2 - K_x^2}\right)\tag{2}$$
where Kr and Kx are the spatial frequencies in the range and azimuth directions, respectively, v is the platform velocity, R0 is the distance of the closest approach and t is the time referring to the signal transmission/reception at velocity c. The second exponential in Equation (2) coincides with the beat signal of conventional pulsed SAR in the two-dimensional spatial frequency domain, whereas the first is a space invariant term that takes into account the motion during transmission. This term becomes equal to one in conventional SAR, because of the start-stop approximation, which assumes that the radar is stationary during the pulse transmission-reception, because v ≪ c. The start-stop approximation is traditionally exploited to explain raw SAR image formation [8]. As a direct consequence of Equation (2), in general, conventional algorithms for SAR image formation would result in FMCW SAR image degradation. More complex reference functions have to be adopted in these cases [38].

However, specific conditions exist in which the start-stop approximation can be considered valid for FMCW SAR, too. Even though continuous transmission is used, it is possible to define the concept of the pulse repetition interval (PRI) for FMCW radar as the sweep duration, i.e., the time the transmitted frequency takes to shift from the minimum to the maximum frequency, or equivalently, the time between the start of two consecutive sweeps. It is clear that the latter definition gives PRI almost the same meaning as for pulsed SAR, although it refers to a sweep instead of a chirp (see Figure 1). Based on the introduced PRI, the pulse repetition frequency can be defined as the reciprocal of the PRI.

The Nyquist sampling theorem requires PRI to be small enough in order to properly sample the azimuth Doppler history. In detail, provided that the sampling requirements are satisfied [38], each sweep represents a sample of the Doppler history in the same way as a pulse of conventional SAR. Hence, both fast time t and slow time tN (i.e., referring to radar motion at velocity v) can be introduced for FMCW SAR, too. On the other hand, a longer sweep duration would produce several samples in the azimuth Doppler history within each sweep, thus making start-stop approximation less acceptable. The remainder of this paper focuses on the case in which start-stop approximation is valid [16,38].

As in conventional SAR, the FMCW SAR target response exhibits a Doppler bandwidth, BD, generated by the variation of the observation angle and, therefore, by the variation of the radial velocity:

$$B_D = \frac{2v}{\lambda}\left[\sin\left(\theta_{sq} + \frac{\theta_{az}}{2}\right) - \sin\left(\theta_{sq} - \frac{\theta_{az}}{2}\right)\right]\tag{3}$$
where λ is the carrier wavelength, θsq is the squint angle and θaz is the beamwidth in the azimuth direction. Hence, provided that proper motion compensation algorithms are exploited [17,38], the theoretical FMCW SAR azimuth resolution is:
$$d_a = \frac{v}{B_D} = \frac{l_{az}}{2}\tag{4}$$
where laz is the antenna length. Equation (4) is exactly the same equation that holds for conventional pulsed SAR.
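
As an illustration of Equations (3) and (4), the following sketch evaluates the Doppler bandwidth and the corresponding theoretical azimuth resolution for assumed antenna and platform values close to those selected later in the paper.

```python
# Sketch of Equations (3) and (4): Doppler bandwidth and theoretical azimuth
# resolution for assumed antenna and platform values.
import numpy as np

lam = 3.2e-3            # wavelength (m)
v = 0.5                 # platform velocity (m/s)
l_az = 0.02             # antenna length (m)
theta_az = lam / l_az   # azimuth beamwidth (rad), about 9 deg here
theta_sq = 0.0          # squint angle (rad)

B_D = (2.0 * v / lam) * (np.sin(theta_sq + theta_az / 2.0)
                         - np.sin(theta_sq - theta_az / 2.0))   # Equation (3)
d_a = v / B_D                                                   # Equation (4)

print(f"Doppler bandwidth B_D = {B_D:.1f} Hz")
print(f"azimuth resolution d_a = {d_a * 100:.2f} cm (l_az/2 = {l_az / 2 * 100:.2f} cm)")
```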

As expected, the result of range and azimuth compression is a bi-dimensional sinc function multiplied by two complex exponentials, the former depending on both the minimum platform to target distance and a reference distance Rref used for the processing [39], the latter depending only on the reference distance and system parameters. Namely:

$$s(f_R, t_N) = \mathrm{sinc}\left[\pi\left(f_R + \frac{2B\left(R_0 - R_{ref}\right)}{c\,\mathrm{PRI}}\right)\left(\mathrm{PRI} - \frac{2R_0}{c} - \frac{v^2 t_N^2}{cR_0}\right)\right]\,\mathrm{sinc}\left[B_D\left(t_N - \frac{x_0}{v}\right)\right]B_D\,\exp\left[-j\frac{4\pi}{\lambda}\left(R_0 - R_{ref}\right)\right]\exp\left(j\pi\frac{B}{\mathrm{PRI}}\tau_{ref}^2\right)\tag{5}$$
where fR is the range frequency, x0 is the position of the target along the azimuth direction with respect to the scene center and τref is the time delay of the echo at reference range Rref, which corresponds to the range of the scene center. The first exponential resembles the exponential term of the pulsed SAR 2D-focused signal and again can be exploited to perform interferometry (see Section 2.3). Moreover, it has to be noted that the signal of Equation (5), unlike the pulsed SAR 2D-focused signal, is better described in the range-time domain, as range frequency fR is directly proportional to the range in FMCW SAR. Finally, the amplitude of the resulting signal depends on the Doppler bandwidth.

The implementation advantages of FMCW SAR must be weighed against some drawbacks that this scheme exhibits. In general, data processing is more complex with respect to pulsed SAR, because deramp-on-receive produces an unwanted phase term, called the residual video phase (RVP), which must be removed. In addition, moving targets can introduce ambiguities in range measurement. Indeed, owing to the longer observation time compared to a conventional system, targets can move through several resolution cells within a sweep [38], causing the Doppler effect not to be negligible. Several solutions have been proposed to correctly determine the range, even in the presence of moving targets, including triangular frequency modulation [17,18] to determine the range and Doppler information within a single time interval. Non-linearities in transmitted and received signals cause an additional erroneous phase term in the beat signal, therefore leading to deteriorated range resolution [38]. Typical algorithms for non-linearity correction work under the assumption that non-linearity effects depend linearly on time delay, which is true for small distances. This is the case of indoor applications. The assumption fails for long-range observations, causing the computational load to increase. Hardware and software solutions are known in the literature [17,38], such as voltage-controlled oscillators (VCO) and direct digital synthesizers (DDS), or approaches based on approximations of non-linearity. Finally, the simultaneous signal transmission and reception generate signal leakage in the reception chain. Specifically, due to the extremely high transmitted-to-received power ratio, saturation or damage of equipment can occur if even a small leakage of transmitted power is present [18]. Good isolation is therefore required, and typically, separated transmitting and receiving antennas in both bistatic and quasi-monostatic configurations are exploited. Considering that reasonably well-assessed solutions are available today to deal with the discussed drawbacks, and taking into account its advantages for the considered applications, the FMCW SAR scheme is selected herein as the basis for the system architecture.
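
As an example of the range/Doppler ambiguity issue, the sketch below illustrates the triangular-modulation idea mentioned above: the Doppler shift enters the up- and down-slope beat frequencies with opposite signs, so the two measurements can be inverted for range and radial velocity. The sweep-slope duration and sign convention are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' implementation) of triangular frequency modulation:
# the Doppler shift adds to the beat frequency on one sweep slope and subtracts
# on the other, so range and radial velocity can be separated.
c = 3e8
lam = 3.2e-3    # wavelength (m)
B = 1.5e9       # sweep bandwidth (Hz)
T = 4e-3        # duration of one sweep slope (s), example assumption

def beat_pair(R, v_r):
    """Up- and down-slope beat frequencies for range R and closing velocity v_r
    (sign convention chosen for illustration)."""
    f_range = (B / T) * (2.0 * R / c)   # range-induced component
    f_dopp = 2.0 * v_r / lam            # Doppler shift
    return f_range - f_dopp, f_range + f_dopp

def solve(f_up, f_down):
    """Recover range and radial velocity from the two beat frequencies."""
    f_range = 0.5 * (f_up + f_down)
    f_dopp = 0.5 * (f_down - f_up)
    return f_range * c * T / (2.0 * B), f_dopp * lam / 2.0

f_up, f_down = beat_pair(R=10.0, v_r=1.0)
print(solve(f_up, f_down))   # -> (10.0, 1.0)
```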

2.3. Why SAR Interferometry

SAR interferometry is a technique that exploits phase information, obtained from two or more SAR images, in order to compute target height and position in a three-dimensional environment. It can be considered a well-assessed technology for conventional pulsed SAR [8,9]. As regards FMCW SAR, the 2D-focused SAR signal (see Equation (5)) shows that the phase of the azimuth sinc samples the target range in multiples of the wavelength and can therefore be utilized to perform interferometry. It has to be noted that it is necessary to remove the additional contribution to the phase given by the reference range distance, which is typically the distance to the center of the scene illuminated by the beamwidth, and can therefore be different in the two images to be correlated. SAR interferometry has been successfully tested on data collected by FMCW SAR [16], and it is considered a key asset towards the operational scenario considered in this work.

2.4. Selected Scheme

Based on the state-of-the-art analysis, a system architecture that is potentially able to satisfy all requirements listed in Table 1 is shown in Figure 2. The selected scheme is an interferometric FMCW SAR, equipped with three antennas, one transmitting and two receiving, mounted on a double gimbal structure. Among various factors, interferometric measurement resolution and accuracy are strongly dependent on antenna separation knowledge and control. Furthermore, the proposed system is compact and operates on a single platform, i.e., the two antennas could be rigidly connected and simultaneously pointed to specific targets by adequately rotating a double gimbal to change the baseline (i.e., antenna separation with respect to the target). Hence, it is expected to achieve adequate performance. It is worth noting that: (i) although electronic antenna steering would be favorable for fast and accurate sweeping of the whole hemispherical field-of-view, the creation of adequate baseline components to extract phase measurements is based on antenna mechanical re-orientation; consequently, the design and development of a double gimbal has been considered to ease the realization of both the antenna and the electronics; (ii) depending on the platform selected for the mission, for instance a quadrotor, antenna mechanical re-orientation can be achieved by either rotation of the platform itself or the combined action of the platform and double gimbal.

In addition, an autonomous processing unit (PU), committed to real-time onboard data processing, is included in the scheme. Radar data are stored onboard in a mass memory unit. These data are exploited by the PU to directly command the double gimbal pointing system. The PU also sends information to the UAS navigation unit via a direct interface data link. Communication from the navigation unit to the PU is also necessary to support image processing and data extraction. Finally, the PU interfaces with the radio frequency transmitter to send stored data to the ground station via a wireless data link, when available.

3. Preliminary System Design

3.1. Preliminary Design Process

The design process is outlined in Figure 3: circles represent input parameters, which have been chosen according to the system requirements (Table 1), the system architecture (Figure 2) and the application, whereas boxes return the sought values. The input parameters of the design process are chosen first. Table 3 lists the input parameters that vary within a minimum and maximum value, whereas Table 4 lists the ones that assume a constant value in the implemented design process.

The resolution requirements in range, azimuth and height directions are chosen according to the expected performance, whereas boundaries on platform velocity and maximum and minimum range distances depend on the application. In our case, these are set by the dynamics of a small UAS flying in an indoor environment and performing loitering maneuvers. In addition, a typical value for an indoor differential radar cross-section has been considered. The following sub-sections report a brief explanation of the peculiar blocks specific to the FMCW SAR design. An example of the overall system characteristics is finally derived, accordingly.

3.2. Ambiguities and Antenna Width

Range ambiguity for an FMCW radar may occur owing to the continuous transmission of a frequency modulated signal when an echo from a target arrives at the receiver after the end of the sweep that generated it. As a result, the received signal will be mixed with a different sweep, and the target will appear closer than it really is (see Figure 4). The unambiguous range is therefore equal to the round-trip distance covered by the wave in a single sweep, namely:

$$R_u = \frac{c\,\mathrm{PRI}}{2}\tag{6}$$

Therefore, under the hypothesis that the whole swath width is less than the unambiguous range, the following inequalities shall be satisfied to avoid echo ambiguities and bandwidth undersampling:

$$\frac{c}{2\left(R_{FR} - R_{NR}\right)} > \mathrm{PRF} > 2B_D\tag{7}$$
where the subscripts FR and NR refer to far- and near-range, respectively. The difference RFR − RNR depends on the antenna aperture in elevation, and is hence inversely proportional to the antenna width. Since the considered distances and the Doppler bandwidth are small, Equation (7) does not yield strict bounds on the antenna dimensions. Hence, the antenna width d can be quite small and may be chosen according to other requirements, e.g., the radar equation, heat dissipation and technological restrictions.
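
A quick numerical check of Equations (6) and (7) is sketched below; the platform-to-wall distance, antenna width and Doppler bandwidth are assumed example values used only to show that the inequality is far from binding indoors.

```python
# Quick numerical check of Equations (6) and (7); platform-to-wall distance,
# antenna width and Doppler bandwidth are assumed example values.
import numpy as np

c = 3e8
PRF = 125.0
PRI = 1.0 / PRF
R_u = c * PRI / 2.0          # unambiguous range, Equation (6)

lam = 3.2e-3                 # wavelength (m)
d = 0.01                     # antenna width in elevation (m)
theta_r = lam / d            # elevation beamwidth (rad)
theta = np.deg2rad(60.0)     # off-nadir angle
H = 2.0                      # assumed distance of the platform from the wall (m)

R_NR = H / np.cos(theta - theta_r / 2.0)   # near-range edge of the swath
R_FR = H / np.cos(theta + theta_r / 2.0)   # far-range edge of the swath
B_D = 50.0                                 # Doppler bandwidth (Hz), cf. Equation (3)

print(f"unambiguous range R_u = {R_u / 1e3:.0f} km (far beyond indoor distances)")
print(f"upper bound c / (2 (R_FR - R_NR)) = {c / (2 * (R_FR - R_NR)) / 1e6:.0f} MHz")
print(f"Equation (7) satisfied: {c / (2 * (R_FR - R_NR)) > PRF > 2 * B_D}")
```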

3.3. Transmitted Power

Transmitted power can be computed by the following formula derived in [40]:

$$P_T = \frac{\mathrm{SNR}\,(4\pi)^3 R_{max}^4\, k_B F_N T_N B_N}{G_T G_R \lambda^2 \sigma^0 d_{r_{gr}} d_a N_R N_A}\tag{8}$$
which takes into account the range and azimuth compression gains, NR and NA, respectively. In Equation (8), the subscripts T and R refer to transmitting and receiving antenna gains (G), BN is the noise bandwidth and drgr is the ground range resolution.

For rectangular antennas, the gain at the boresight is expressed in [41,42] as:

$$G = k_e\frac{4\pi A}{\lambda^2}\tag{9}$$
where ke is an efficiency factor, typically equal to 0.65, and A the antenna area. Under the hypothesis of identical transmitting and receiving antennas and by expressing compression gains as in [43], Equation (8) becomes:
$$P_T = \frac{\mathrm{SNR}\,4\pi R_{max}^3\, k_B F_N T_N B_N\, l_{az} v}{\eta\, k_e^2 A^2 \sigma^0 d_{r_{gr}} d_a B}\tag{10}$$

Concerning the transmitted power, it is important to point out that in FMCW SAR, noise bandwidth BN is equal to sampling frequency fS [44]. This is an additional advantage over conventional SAR, in which the noise bandwidth is equal to the transmitted one.
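
The following sketch evaluates the power budget of Equations (8) and (9) with parameters similar to those of Tables 4 and 5. The expressions used for the compression gains NR and NA (range time-bandwidth product and number of integrated sweeps) are modelling assumptions made for illustration, not necessarily the exact figures adopted by the authors.

```python
# Sketch of the power budget of Equations (8) and (9) with parameters close to
# Tables 4 and 5. The compression gains N_R and N_A are modelled as the range
# time-bandwidth product and the number of integrated sweeps (assumptions made
# for illustration).
import numpy as np

SNR = 10 ** (20 / 10)                     # 20 dB
F_N = 10 ** (15 / 10)                     # 15 dB noise figure
sigma0 = 10 ** (-20 / 10)                 # -20 dB differential scattering coefficient
k_B, T_N = 1.38e-23, 290.0
lam, B, PRI, f_S = 3.2e-3, 1.5e9, 1 / 125.0, 68.327e3
v, R_max, theta = 0.5, 30.0, np.deg2rad(60.0)
d_r, d_a, l_az, d = 0.10, 0.10, 0.02, 0.01

A = l_az * d                              # antenna area
G = 0.65 * 4 * np.pi * A / lam ** 2       # Equation (9), identical Tx/Rx antennas
B_N = f_S                                 # noise bandwidth = sampling frequency
d_r_gr = d_r / np.sin(theta)              # ground-range resolution
N_R = B * PRI                             # assumed range compression gain
N_A = lam * R_max / (2 * d_a * v) / PRI   # assumed number of integrated sweeps

P_T = (SNR * (4 * np.pi) ** 3 * R_max ** 4 * k_B * F_N * T_N * B_N /
       (G ** 2 * lam ** 2 * sigma0 * d_r_gr * d_a * N_R * N_A))   # Equation (8)
print(f"required transmitted power ~ {P_T * 1e6:.3f} microwatt (well below 1 mW)")
```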

3.4. Interferometry

Plane wave approximation (pwa) is a typical assumption exploited to perform interferometry and to compute interferometric phase φ. With reference to the geometry depicted in Figure 5, this leads to:

$$\phi_1 = \frac{2\pi}{\lambda}\left(R_{2,1} - R_{1,1}\right) \approx \frac{2\pi}{\lambda}B_{int}\sin(\theta - \alpha)\tag{11}$$
where Bint is the interferometric baseline defined as the modulus of the antenna separation vector and α is the baseline roll angle. In Equation (11) and following, φi represents the interferometric phase of the i-th point and Rj,i the distance between the j-th antenna and the i-th point. Therefore, the differential phase between two points in adjacent range cells, with separation in height Δh and separation in slant range dr = R1,2 − R1,1, is:
$$\Delta\Phi_{pwa} = \phi_2 - \phi_1 = \frac{2\pi}{\lambda}B_{int}\left[\sin(\Delta\theta + \theta - \alpha) - \sin(\theta - \alpha)\right]\tag{12}$$
where:
$$\Delta\theta = \cos^{-1}\left(\frac{R_{1,1}\cos\theta - \Delta h}{R_{1,1} + d_r}\right) - \theta\tag{13}$$
is the variation in the off-nadir angle related to the difference in height.
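
A minimal numerical example of Equations (11)–(13) is given below for an assumed close-range geometry; the baseline, roll angle and ranges are illustrative values only.

```python
# Minimal example of Equations (11)-(13) for an assumed close-range geometry.
import numpy as np

lam = 3.2e-3
B_int, alpha = 0.03, np.deg2rad(40.0)   # baseline (m) and roll angle
theta = np.deg2rad(60.0)                # off-nadir angle
R11, dr, dh = 3.0, 0.10, 0.10           # first-point range, slant-range and height steps

dtheta = np.arccos((R11 * np.cos(theta) - dh) / (R11 + dr)) - theta   # Equation (13)
phi1 = 2 * np.pi / lam * B_int * np.sin(theta - alpha)                # Equation (11), pwa
dPhi_pwa = 2 * np.pi / lam * B_int * (np.sin(dtheta + theta - alpha)
                                      - np.sin(theta - alpha))        # Equation (12)

print(f"Delta_theta = {np.degrees(dtheta):.2f} deg")
print(f"phi_1 (pwa) = {phi1:.2f} rad, Delta_Phi_pwa = {dPhi_pwa:.2f} rad (< 2*pi)")
```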

For a close-range (cr) application, as is the aim of the present work, the plane wave approximation is not valid anymore. Hence, Equation (11) must be generalized as:

$$\phi_1 = \frac{2\pi}{\lambda}\left(R_{2,1} - R_{1,1}\right) = \frac{2\pi}{\lambda}\left[\sqrt{R_{1,1}^2 + B_{int}^2 - 2R_{1,1}B_{int}\sin(\theta - \alpha)} - R_{1,1}\right]\tag{14}$$
thus leading to differential phase:
$$\Delta\Phi_{cr} = \frac{2\pi}{\lambda}\left[\sqrt{R_{1,2}^2 + B_{int}^2 - 2R_{1,2}B_{int}\sin(\Delta\theta + \theta - \alpha)} - \sqrt{R_{1,1}^2 + B_{int}^2 - 2R_{1,1}B_{int}\sin(\theta - \alpha)} + R_{1,1} - R_{1,2}\right]\tag{15}$$

The percentage error resulting from the adoption of the plane wave approximation (12) in a close-range application can be calculated as:

$$\varepsilon_{\Delta\Phi} = \frac{\Delta\Phi_{cr} - \Delta\Phi_{pwa}}{2\pi}\times 100\tag{16}$$

Figure 6 shows the percentage error function for various θ, Δθ, Bint and R. The error increases for larger Bint and closer targets, as the lines of sight of the two antennas become less and less parallel. Finally, increasing the off-nadir angle θ shifts the function towards larger α, while its periodic behavior remains evident.
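
The comparison between the exact close-range phase and its plane wave approximation can be reproduced with the sketch below. The explicit two-dimensional antenna/target geometry is an assumption made for illustration in the spirit of Figure 5, not the authors' exact configuration.

```python
# Sketch comparing the exact close-range differential phase with its plane wave
# approximation (cf. Equations (12), (15) and (16)). The explicit 2D geometry
# below (antenna positions, target points) is an illustrative assumption.
import numpy as np

lam = 3.2e-3
B_int, alpha = 0.03, np.deg2rad(40.0)
theta = np.deg2rad(60.0)
R11, dr, dh = 3.0, 0.10, 0.10
dtheta = np.arccos((R11 * np.cos(theta) - dh) / (R11 + dr)) - theta   # Equation (13)

# antenna positions (ground-range x, height z); the second antenna is displaced
# so that the linearized path difference matches Equation (11)
ant1 = np.array([0.0, 0.0])
ant2 = ant1 - B_int * np.array([np.cos(alpha), np.sin(alpha)])
p1 = R11 * np.array([np.sin(theta), -np.cos(theta)])
p2 = (R11 + dr) * np.array([np.sin(theta + dtheta), -np.cos(theta + dtheta)])

def phase(p):
    """Exact interferometric phase of a point (path difference between antennas)."""
    return 2 * np.pi / lam * (np.linalg.norm(p - ant2) - np.linalg.norm(p - ant1))

dPhi_cr = phase(p2) - phase(p1)                                       # cf. Equation (15)
dPhi_pwa = 2 * np.pi / lam * B_int * (np.sin(dtheta + theta - alpha)
                                      - np.sin(theta - alpha))        # Equation (12)
eps = (dPhi_cr - dPhi_pwa) / (2 * np.pi) * 100                        # Equation (16)
print(f"exact {dPhi_cr:.3f} rad, pwa {dPhi_pwa:.3f} rad, error {eps:.2f} %")
```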

3.4.1. Interferometric Baseline

A new method to design the interferometric baseline for close-range applications is required. Equation (15) does not allow Bint to be obtained directly from the other parameters, so it is necessary to address an indirect solution. The one hereby proposed envisages exploiting the numerical representation of Equation (15), given a certain geometry, as a function of a range of values for both Bint and α. One of the requirements for the correct reconstruction of height variation is that the difference in phases between two adjacent pixels is no greater than 2π. Therefore, all of the couples:

$$\left(B_{int}, \alpha\right) : \Delta\Phi_{cr}\left(B_{int}, \alpha\right) > 2\pi\tag{17}$$
are discarded, whereas all of the other values could represent a good choice, depending on the application. The value of the maximum allowable interferometric baseline:
$$B_{int} : \Delta\Phi_{cr}\left(B_{int}\right) = 2\pi\tag{18}$$
referred to as the critical baseline [9], is shown in Figures 7 and 8 for various operating conditions.

As expected, Figure 7 shows that when the range increases, the critical baseline increases, as well. This means that, depending on the size of the antennas, there is a minimum achievable interferometric baseline, which imposes a bound on the smallest distance at which it is possible to perform interferometry. Based on this consideration, the minimum values for Rmin listed in Table 3 have to be updated accordingly.

However, it has to be pointed out that this minimum distance is also strongly related to the height variation between points in adjacent range cells. Namely, if Δh is smaller than expected, then interferometry can be performed at an even smaller range distance (see Figure 8).
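
The critical-baseline condition of Equations (17) and (18) can be evaluated numerically, for instance with the simple sweep below; the geometry values are illustrative, and a proper root finder could replace the brute-force search.

```python
# Numerical search for the critical baseline of Equations (17) and (18):
# sweep B_int and keep the largest value whose differential phase magnitude
# stays below 2*pi (illustrative geometry; a root finder would also work).
import numpy as np

lam = 3.2e-3
alpha, theta = np.deg2rad(40.0), np.deg2rad(60.0)
R11, dr, dh = 3.0, 0.10, 0.10
dtheta = np.arccos((R11 * np.cos(theta) - dh) / (R11 + dr)) - theta

def dphi_cr(B_int):
    """Exact two-antenna differential phase (cf. Equation (15), sign convention
    as in the previous sketch)."""
    r21 = np.sqrt(R11**2 + B_int**2 + 2 * R11 * B_int * np.sin(theta - alpha))
    r22 = np.sqrt((R11 + dr)**2 + B_int**2
                  + 2 * (R11 + dr) * B_int * np.sin(dtheta + theta - alpha))
    return 2 * np.pi / lam * ((r22 - (R11 + dr)) - (r21 - R11))

baselines = np.linspace(1e-3, 0.20, 2000)
valid = baselines[np.abs([dphi_cr(b) for b in baselines]) <= 2 * np.pi]
print(f"critical baseline ~ {valid.max() * 100:.1f} cm at R = {R11} m")
```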

3.5. System Parameters

In Section 3.1, the input parameters for the design of an innovative FMCW SAR system, resulting from both the requirements and the envisaged missions, have been presented (see Table 3). In the remainder of this section, attention will be paid to further assumptions, which have been made to achieve a combination of working parameters (see Table 5) by exploiting the design block diagram depicted in Figure 3 and by accounting for the radar and interferometry constraints previously discussed.

In order to propose an advanced configuration, the most stringent input values from Table 3 have been chosen for theoretical three-dimensional resolution. Furthermore, the mission profile contributed to the choice of both platform velocity v, small enough to move in unknown environments, and the expected difference in height Δh, set equal to the height resolution. Finally, the off-nadir angle θ, which influences both transmitted power PT and interferometric performance, has been chosen to achieve an adequate baseline. It is worth noting that, since the radar is designed to operate indoors at close range, the transmitted power is much lower than the values of the existing compact, lightweight systems listed in Table 2. Nonetheless, the parameters reported in Table 5 must be considered as nominal ones. From the practical point of view, the system must be able to collect useful data under extremely different operating conditions depending on the observation geometries, the synthetic aperture formation and the effective baseline. The next section will focus on these problems, which are critical for the proposed system.

4. Assessment of Three-Dimensional Mapping Capabilities

A typical operational scenario for the proposed system is well represented by a parallelepiped, whose dimensions are depicted in Figure 9. Specifically, concerning indoor exploration, this parallelepiped can represent an example of a warehouse in which the sensor is requested to operate. The same scenario is valid also for planetary exploration, where the parallelepiped can be conceived of as a relatively small control volume that encloses scatterers, which vary depending on the application.

The design values proposed in the previous section (see Table 5) allow both acceptable values of SNR for the whole range of distances to be obtained and the start-stop approximation to be exploited. Concerning geometric resolution, it is worth highlighting that a practically rectangular resolution element is achieved when a conventional side-looking monostatic SAR is considered. Specifically, this is possible because the azimuth (along-track) and range (across-track) directions are orthogonal and the sampling frequency and pulse repetition frequency (PRF) are tuned correspondingly, accounting for multilook processing, too [45]. On the contrary, the proposed system is designed to look in general along directions not perpendicular to the motion of the platform. As a result, image pixels no longer cover rectangular areas, but differently skewed ones. Hence, in order to get satisfactory resolutions, it is of primary importance both to introduce a set of figures of merit to decide whether an image is acceptable or not and to evaluate the system performance in the control volume.

4.1. Geometric Model

The target position in three-dimensional space is determined by the intersection of three surfaces:

$$R = \left|\mathbf{P} - \mathbf{T}\right|\tag{19a}$$
$$f_D = \frac{2\,\mathbf{v}\cdot\mathbf{l}}{\lambda}\tag{19b}$$
$$\phi = \frac{2\pi}{\lambda}\left(R_2 - R_1\right)\tag{19c}$$
namely the range sphere, Doppler cone and phase hyperboloid [9].

Given a Cartesian coordinate system, whose origin is in the vertex O and axes along the edges of parallelepiped OD, OA and OC in Figure 9, P and T represent the antenna and target positions in Equation (19), whereas l represents the line of sight vector. It is worth noting that, if plane wave approximation is valid [9], the phase hyperboloid Equation (19c) degenerates into a cone.
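
For a single point target, the three observables of Equation (19) can be evaluated directly, as in the following sketch; the target point and the baseline vector are arbitrary example assumptions, while the platform state is taken from Table 7.

```python
# Sketch of the three observables of Equation (19) for a point target. The
# platform state comes from Table 7; the target point and the baseline vector
# are arbitrary example assumptions.
import numpy as np

lam = 3.2e-3
P = np.array([15.0, 2.0, 2.0])         # antenna position (m), Table 7
v = np.array([0.5, 0.0, 0.0])          # platform velocity (m/s), Table 7
T = np.array([10.0, 0.0, 1.0])         # example target on the wall y = 0
baseline = np.array([0.0, 0.03, 0.0])  # assumed 3 cm baseline vector

R1 = np.linalg.norm(P - T)                 # range sphere, Equation (19a)
l_hat = (T - P) / R1                       # line-of-sight unit vector
f_D = 2.0 * np.dot(v, l_hat) / lam         # Doppler cone, Equation (19b)
R2 = np.linalg.norm(P + baseline - T)
phi = 2 * np.pi / lam * (R2 - R1)          # phase hyperboloid, Equation (19c)

print(f"R = {R1:.2f} m, f_D = {f_D:.1f} Hz, phi = {phi:.2f} rad")
```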

4.1.1. Range Sphere-Doppler Cone Intersection

The gradient method can be exploited to assess the effects of pixel shape in the presence of the squint angle within the whole three-dimensional environment. The application of the gradient method requires the introduction of more general definitions of the range and Doppler (azimuth) directions as the directions of the fast time gradient ∇t and of the Doppler frequency gradient ∇fD, respectively [46]. In addition, a further hypothesis of motion at constant velocity within the integration time is assumed. It is worth noting that the gradient method, traditionally applied considering terrain, can be extended to each wall in the case of indoor navigation to get a three-dimensional awareness.

The characteristics of range and Doppler isolines, caused by the intersection of both the range sphere and Doppler cone with walls, are analyzed herein. In detail, the unambiguous area is defined in the plane of each wall as the geometric locus that simultaneously satisfies the following three criteria:

  • the angle Ω of intersection between the iso-range and iso-Doppler contour lines falls within the interval [Ωmin, Ωmax],

  • the spatial resolutions computed along the range and Doppler directions are not worse than those required in Table 1,

  • the area of an illuminated pixel (i.e., the area bounded by two adjacent iso-range and iso-Doppler lines) is smaller than a threshold Apixel related to the required cell resolution.

Consequently, the ambiguous area is the complement of the unambiguous one. The aforementioned criteria physically mean that within the ambiguous area, the shape of the resolution cell does not allow the target position on the wall plane to be established with the desired accuracy, owing to the size of the resolution cell and the geometry of both the isolines and the pixel. Furthermore, it is worth noting that a phase value can be assigned to a point observable in both the range and Doppler domain, that is a point that lies in the unambiguous area, thus making interferometry possible.
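
The sketch below shows how the first two criteria can be checked pointwise with the gradient method: for an assumed point on a wall, it projects the range and Doppler-frequency gradients onto the wall plane and evaluates the isoline intersection angle and the local ground-range resolution. The wall, target point and printed thresholds are example assumptions.

```python
# Pointwise check of the first two criteria with the gradient method: project
# the range and Doppler-frequency gradients onto the wall plane (here the wall
# y = 0, an assumption) and evaluate the isoline intersection angle and the
# local ground-range resolution. Platform state from Table 7; target point and
# wall are example assumptions.
import numpy as np

lam, c, B = 3.2e-3, 3e8, 1.5e9
P = np.array([15.0, 2.0, 2.0])     # antenna position (Table 7)
v = np.array([0.5, 0.0, 0.0])      # platform velocity (Table 7)
n = np.array([0.0, 1.0, 0.0])      # unit normal of the wall y = 0
Q = np.array([10.0, 0.0, 1.0])     # example point on the wall

def in_plane(g):
    """Component of a gradient lying in the wall plane."""
    return g - np.dot(g, n) * n

R = np.linalg.norm(Q - P)
l_hat = (Q - P) / R
grad_R = in_plane(l_hat)                                          # range gradient on the wall
grad_fD = in_plane(2 / lam * (v - np.dot(v, l_hat) * l_hat) / R)  # Doppler gradient on the wall

cos_O = np.dot(grad_R, grad_fD) / (np.linalg.norm(grad_R) * np.linalg.norm(grad_fD))
Omega = np.degrees(np.arccos(np.clip(cos_O, -1.0, 1.0)))  # isoline intersection angle
d_r_ground = (c / (2 * B)) / np.linalg.norm(grad_R)       # local ground-range resolution

print(f"Omega = {Omega:.1f} deg (required: 45-135 deg)")
print(f"local ground-range resolution = {d_r_ground * 100:.1f} cm")
```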

The imaging performance is estimated considering the parameters listed in Table 6. The azimuth or Doppler resolution depends on the integration time or synthetic aperture duration. The integration time should be defined as the time span for which a given target is illuminated by the main lobe of the transmitting antenna and remains within the main lobe of the receiving one. For the considered system and environment, the integration time is a function of the distance and of the relative geometry between the sensor and the target. Hence, it varies from point to point within the control volume. However, since this actual integration time is, in general, not known, the performance analysis is addressed in this section by supposing a constant integration time. This means that the integration time must be interpreted herein as the time span used for SAR focusing, which is assumed constant for all of the imaged targets. The value for integration time reported in Table 6 is also compliant with the possible platform dynamics and antenna apertures assumed in the simulation. As a consequence, a range of distances at which the theoretical azimuth resolution (Equation (4)) can be achieved will exist. Farther points may suffer from worse resolution owing to the increasing distance between two adjacent iso-range or iso-Doppler curves, which results in a larger imaging pixel. Nonetheless, as shown in the following, the degraded pixel is still compliant with the minimum required resolution and pixel area threshold (Table 6) over sufficiently large areas within the test environment.

Quantitatively, a preliminary analysis of the mapping capability is carried out with the platform at a specific location. The antenna is located at position P with a velocity v (see Table 7) at half the integration time. The selected velocity and integration time give the theoretical azimuth resolution at a distance of about 3 m (and synthetic aperture equal to 0.5 m), but acceptable values are obtained even at longer distances, as shown in Figures 10 and 11. In more detail, Figure 10 shows the three terms that contribute to the ambiguous area (shaded) and the shape of the resolution element within the unambiguous area. The total unambiguous area is about 47% of the total area, and the walls having observable areas are depicted in Figure 11. It should be noted that points lying within areas, whose size depends on the distance (i.e., the farther the wall, the larger the size), around the projection of the velocity direction on walls are not observable, owing to forward-looking ambiguities. In addition, points inside a circle, whose radius depends on the distance, around projections of the platform on the walls, are not observable, owing to the poor ground range resolution. Front and rear walls are not observable, as the vector normal to their surfaces is parallel to the velocity vector, thus resulting in parallel range and Doppler isolines. Furthermore, most of the wall ABFE is not observable. It is worth noting that even though the azimuth resolution satisfies the requirements of Table 6, the effects of both the ground range resolution and intersection angle Ω due to the distance strongly affect the observation capability.

The presented results suggest that the whole control volume can be mapped by exploiting the platform agility to move and point the beam.

4.2. Layover

Layover is a well-known geometric distortion of SAR images affecting targets that have the same range and velocity relative to the platform in three-dimensional space [40,45]. Layover does not affect the capability to image an area of interest, but can cause the inversion of the position of scatterers and geometric distortion, resulting in interpretation problems. With reference to the considered control volume, the most critical zones affected by layover are the edges and corners generated by the intersection of two or three walls, which have at least two layover points [45]. However, this is not a specific problem of the proposed system, since it affects any radar observation, and SAR data processing algorithms do not typically remove layover areas. In addition, the exploitation of multi-aspect InSAR data has demonstrated good capabilities in terms of the recognition and removal of layover areas [47]. Even though these techniques have been tested on different scenarios, i.e., layover generated by small and large buildings in urban areas, they are expected to be useful for the proposed system. Indeed, since it is expected that the required multi-aspect interferometric acquisitions will constitute the system operating mode in order to increase the percentage of the covered area within the control volume (see Section 4.1.1), the techniques already successfully demonstrated to cope with layover will certainly be exploited.

5. Conclusions

In this paper, the first steps towards the overall feasibility study and design of an innovative radar sensor for autonomous operations in GPS-denied indoor environments by flying small UAS have been taken. The work can be summarized as follows:

  • After the state-of-the-art analysis of existing small SAR sensors, FMCW has been identified as a suitable scheme to be exploited in combination with InSAR technology for applications requiring both high-resolution performance and compact and lightweight systems. Millimeter wavelengths have been selected thanks to their atmospheric penetration characteristics, even in environments with smoke and flames, and to limit antenna dimensions. The peculiar features of the FMCW scheme have thus been discussed, also giving a comparison with well-assessed pulsed SAR technology.

  • Based on the FMCW features, a system design procedure has been defined, outlining guidelines to trade off the design choices based on the specific mission requirements and operative environments.

  • Imaging peculiarities have been discussed in terms of the resolution.

The presented results demonstrate that high-resolution, high-quality observation of an assigned control volume is possible, provided that an adequate flight trajectory is selected. The advantage of FMCW with respect to the pulse architecture in terms of sampling frequency and real-time data handling suggests that the transmission of both raw data and processed images to the ground station could be easily achieved. It is clear that for autonomous navigation, onboard real-time data processing operations are required, such as interferogram formation, simultaneous localization and mapping procedures and structured data handling and storage, all of which are very demanding on the system processor. In addition, very long missions could produce an extremely large amount of data to be stored onboard. Nevertheless, it can be expected that future enhancements in miniaturization and customization of both processors and data storage devices will make the aforementioned problems affordable.

Acknowledgments

This work has been supported by Regione Campania with the European Social Fund “P.O. Campania 2007/2013-2014/202”.

Author Contributions

A.F. Scannapieco developed the system design, performed simulations to assess system mapping capabilities and contributed to the writing phase; A. Renga studied and developed the system architecture and contributed to the writing phase; A. Moccia conceived the idea presented in this paper, supervised the project and contributed to the writing phase.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fasano, G.; Accardo, D.; Moccia, A.; Carbone, G.; Ciniglio, U.; Corraro, F.; Luongo, S. Multi-sensor-based fully autonomous non-cooperative collision avoidance system for unmanned air vehicles. AIAA J Aerosp. Comput. Inf. Commun. 2008, 5, 338–360. [Google Scholar]
  2. Accardo, D.; Fasano, G.; Forlenza, L.; Moccia, A.; Rispoli, A. Flight test of a radar-based tracking system for UAS sense and avoid. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 1139–1160. [Google Scholar]
  3. Gundlach, J. Designing Unmanned Aircraft Systems: A Comprehensive Approach; American Institute of Aeronautics and Astronautics, Inc.: Reston, VA, USA, 2012. [Google Scholar]
  4. Austin, R. Unmanned Aircraft System: UAVS Design, Development and Deployment; John Wiley & Sons, Ltd: Chichester, UK, 2010. [Google Scholar]
  5. Elfes, A.; Hall, J.L.; Kulczycki, E.A.; Clouse, D.S.; Morfopoulos, A.C.; Montgomery, J.F.; Cameron, J.M.; Ansar, A.; Machuzak, R.J. Autonomy architecture for Aerobot exploration of Saturnian moon Titan. IEEE Aerosp. Electron. Syst. Mag. 2008, 23, 16–24. [Google Scholar]
  6. Shrotri, K.; Khalid, A.; Gunduz, M.E.; Manyapu, K.; Fait Sumer, Y.; Schrage, D.P. Marvin-Near Surface Methane Detection on Mars. Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2007; pp. 1–16.
  7. Noth, A.; Enge, W.; Siegwart, R. Recent progress on the Martian Solar Airplane Sky-Solar. Proceedings of the 9th ESA Workshop on Advanced Space Technology for Robotics and Automation (ASTRA 2006), ESTEC, Noordwijk, The Netherlands, 28–30 November 2006.
  8. Moccia, A. Synthetic Aperture Radar. In Encyclopedia of Aerospace Engineering; Blockley, R., Shyy, W., Eds.; John Wiley & Sons, Ltd.: Chichester, UK, 2010; Volume 5, pp. 1–13. [Google Scholar]
  9. Rosen, P.A.; Hensley, S.; Joughin, I.R.; Li, F.K.; Madsen, S.N.; Rodriguez, E.; Goldstein, R.M. Synthetic aperture radar interferometry. IEEE Proc. 2000, 88, 331–382. [Google Scholar]
  10. Zaugg, E.; Long, D. Theory and application of motion compensation for LFMCW SAR. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2990–2998. [Google Scholar]
  11. Edrich, M. Ultra-lightweight synthetic aperture radar based on a 35 GHz FMCW sensor concept and online raw data transmission. IEE Proc. Radar Sonar Navig. 2006, 153, 129–134. [Google Scholar]
  12. Gibbins, C.J.; Chadha, R. Millimetre-wave propagation through hydrocarbon flame. IEEE Proc. 1987, 134, 169–173. [Google Scholar]
  13. Mphale, K.; Heron, M. Absorption and Transmission Power Coefficients for Millimeter Waves in a Weakly Ionised Vegetation Fire. Int. J. Infrared Millim. Waves. 2007, 28, 865–879. [Google Scholar]
  14. MiniSAR Radar. Available online: http://www.ee.sc.edu/classes/Spring14/elct861/Class_Notes/SAND2005-MiniSAR-fact-sheet.pdf (accessed on 28 October 2014).
  15. Aguasca, A.; Acevo-Herrera, R.; Broquetas, A.; Mallorqui, J.J.; Fabregas, X. ARBRES: Light-Weight CW/FM SAR Sensors for Small UAVs. Sensors 2013, 13, 3204–3216. [Google Scholar]
  16. Meta, A.; Imbembo, E.; Trampuz, C.; Coccia, A.; de Luca, G. A selection of MetaSensing airborne campaigns at L-, X- and Ku-band. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 4571–4574.
  17. Griffiths, H.D. New ideas in FM radar. Electron. Commun. Eng. J. 1990, 2, 185–194. [Google Scholar]
  18. Brooker, G.M. High Range-Resolution Techniques. In Introduction to Sensors for Ranging and Imaging; SciTech Publishing: Raleigh, NC, USA, 2009; pp. 303–356. [Google Scholar]
  19. Kirk, J.C. Evolution of lite-weight SAR/MTI technology. Proceedings of the IEEE Radar Conference (RadarCon), Pasadena, CA, USA, 4–8 May 2009; pp. 1–4.
  20. Henke, D.; Meier, E. Moving target tracking in single-channel SAR. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 6813–6816.
  21. The BYU microSAR System. Available online: http://www.mers.byu.edu/yinsar/microSAR_descrip3.pdf (accessed on 28 October 2014).
  22. Zaugg, E.; Edwards, M.; Long, D.; Stringham, C. Developments in compact high-performance synthetic aperture radar systems for use on small Unmanned Aircraft. Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 23–27 May 2011; pp. 1–14.
  23. NuSAR Naval Research Laboratory (NRL) Unmanned Aerial Vehicle (UAV) Synthetic Aperture Radar. Available online: http://www.sdl.usu.edu/programs/nusar.pdf (accessed on 28 October 2014).
  24. Wilson, M.L.; Von Berg, D.L.; Kruer, M.; Holt, N.; Anderson, S.A.; Long, D.G.; Margulis, Y. DUSTER: Demonstration of an Integrated LWIR-VNIR-SAR imaging system. Proceedings of the SPIE 6946, Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications V, Orlando, FL, USA, 17 March 2008.
  25. Kinghorn, A.; Nejman, A. PicoSAR—An advanced lightweight SAR system. Proceedings of the European Radar Conference (EuRAD), Rome, Italy, 30 September–2 October 2009; pp. 168–171.
  26. Knapskog, A.O.; Brovoll, S.; Torvik, B. Characteristics of ships in harbour investigated in simultaneous images from TerraSAR-X and PicoSAR. Proceedings of the IEEE Radar Conference (RadarCon), Marriott Crystal Gateway, Arlington, VA, USA, 10–14 May 2010; pp. 422–427.
  27. Almorox-González, P.; González-Partida, J.-T.; Burgos-García, M.; Dorta-Naranjo, B.P.; de la Morena-Alvarez-Palencia, C.; Arche-Andradas, L. Portable high resolution LFMCW radar sensor in millimeter-wave band. Proceedings of the International Conference on Sensor Technologies and Applications (SENSORCOMM), Valencia, Spain, 10–14 October 2007; pp. 5–9.
  28. González-Partida, J.; Almorox-González, P.; Burgos-Garcia, M.; Dorta-Naranjo, B. SAR System for UAV Operation with Motion Error Compensation beyond the Resolution Cell. Sensors 2008, 8, 3384–3405. [Google Scholar]
  29. Edwards, M.; Madsen, D.; Stringham, C.; Margulis, A.; Wicks, B.; Long, D. microASAR: A small, robust LFMCW SAR for operation on UAVs and small aircraft. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Boston, MA, USA, 7–11 July 2008; pp. V-514–V-517.
  30. Zaugg, E.; Edwards, M.; Margulis, A. The SlimSAR: A small, multifrequency, Synthetic Aperture Radar for UAS operation. Proceedings of the IEEE Radar Conference (RadarCon), Marriott Crystal Gateway, Arlington, VA, USA, 10–14 May 2010; pp. 277–282.
  31. Zaugg, E.; Edwards, M. Experimental results of repeat-pass SAR employing a visual pilot guidance system. Proceedings of the IEEE Radar Conference (RadarCon), Atlanta, GA, USA, 7–11 May 2012; pp. 629–634.
  32. Miniature Radar Developed for Lightweight Unmanned Aircraft. Available online: http://www.barnardmicrosystems.com/UAV/features/synthetic_aperture_radar.html (accessed on 1 January 2015).
  33. NanoSAR B Datasheet. Available online: http://www.imsar.com/uploads/files/45_NanoSAR_B_Data_Sheet.pdf (accessed on 28 October 2014).
  34. NanoSAR C Datasheet. Available online: http://www.imsar.com/uploads/files/46_NanoSAR_C_Data_Sheet.pdf (accessed on 28 October 2014).
  35. Johannes, W.; Essen, H.; Stanko, S.; Sommer, R.; Wahlen, A.; Wilcke, J.; Wagner, C.; Schlechtweg, M.; Tessmann, A. Miniaturized high resolution Synthetic Aperture Radar at 94 GHz for microlite aircraft or UAV. Proceedings of the 2011 IEEE Sensors, University of Limerick, Limerick, Ireland, 28-31 October 2011; pp. 2022–2025.
  36. Ulaby, F.T.; Moore, R.K.; Fung, A.K. Microwave Remote Sensing: Active and Passive, Vol. II: Radar Remote Sensing and Surface Scattering and Emission Theory; Addison-Wesley Publishing Company: Reading, MA, USA, 1982. [Google Scholar]
  37. Meta, A.; Hoogeboom, P.; Ligthart, L. Signal processing for FMCW SAR. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3519–3532. [Google Scholar]
  38. Meta, A. Signal Processing of FMCW Synthetic Aperture Radar Data. Ph.D. Thesis, Delft University of Technology, Delft, The Netherlands, 2006. [Google Scholar]
  39. Hou, H.-P.; Qu, C.-W.; Sun, H.-B.; Song, R.-G. Research on FMCW SAR signal characteristic and improved azimuth matched filtering algorithm. Proceedings of the 2nd Asian-Pacific Conference on Synthetic Aperture Radar (APSAR), Xi'an, China, 26–30 October 2009; pp. 290–293.
  40. Curlander, J.C.; McDonough, N.R. The Radar Equation. In Synthetic Aperture Radar Systems and Signal Processing; Wiley-Interscience: New York, NY, USA, 1991; pp. 71–125. [Google Scholar]
  41. Collin, R.E. Receiving Antennas. In Antennas and Radiowave Propagation, 4th ed.; McGraw-Hill: New York, NY, USA, 1985; pp. 293–336. [Google Scholar]
  42. Skolnik, M.I. Radar Antennas. In Introduction to Radar Systems, 2nd ed.; McGraw-Hill: New York, NY, USA, 1980; pp. 223–277. [Google Scholar]
  43. Franceschetti, G.; Lanari, R. Fundamentals. In Synthetic Aperture Radar Processing; CRC Press: Boca Raton, FL, USA, 1999; pp. 1–71. [Google Scholar]
  44. Charvat, G.L. Frequency Modulated Continuous Wave (FMCW) Radar. In Small and Short-Range Radar Systems, 1st ed.; CRC Press: Boca Raton, FL, USA, 2014; pp. 69–135. [Google Scholar]
  45. Sullivan, R. Synthetic aperture radar. In Radar Handbook; Skolnik, M.I., Ed.; McGraw-Hill: New York, NY, USA, 2008; pp. 17.1–17.37. [Google Scholar]
  46. Thiele, A.; Cadario, E.; Schulz, K.; Thoennessen, U.; Soergel, U. Building recognition from multi-aspect high-resolution InSAR data in urban areas. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3583–3593. [Google Scholar]
  47. Moccia, A.; Renga, A. Spatial Resolution of Bistatic Synthetic Aperture Radar: Impact of Acquisition Geometry on Imaging Performance. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3487–3503. [Google Scholar]
Figure 1. Comparison between pulse repetition interval (PRI) (a) in FMCW SAR and (b) in conventional pulsed SAR. The plots are not to scale for clarity.
Figure 2. System architecture.
Figure 3. Design process guidelines: Block diagram.
Figure 4. FMCW ambiguity in range: The first sweep reflection from the furthest target (red line) is between the transmitted signal (black line) and the second sweep reflection from the closest target (blue line), so that the furthest target is imaged closer.
Figure 5. Interferometric observation geometry.
Figure 6. Percentage error between the true and approximated differential interferometric phases under various operating conditions (the three curves correspond to θ = 15°, θ = 45°, θ = 75°).
Figure 7. Critical baseline for various operating conditions. For each plot, dr = 10 cm and Δh = 10 cm have been considered.
Figure 8. Effect of height variation on the critical baseline. For each plot, dr = 10 cm has been considered.
Figure 9. Platform and sensor moving in a simplified operational scenario. The platform and target position vectors, the line of sight unit vector, the velocity vector and the target distance to the antennas are depicted, too (not to scale, for clarity).
Figure 10. Plane OAED. Ambiguous area (shaded) and contributions: intersection angle (green contour), resolution (blue contour) and pixel size (red contour). For clarity, the distance between two close isolines does not represent the true system resolution.
Figure 11. Total unambiguous area (in red, about 47% of the control volume surface) for the position and velocity reported in Table 7. Note that the observable walls are not depicted in the figure.
Table 1. Basic design guidelines of the proposed innovative SAR system.

Main Constraints
Mass: < 1 kg
Size: < 1500 cm³
Maximum dimension: < 30 cm
Antenna maximum length: < 10 cm
Power consumption: < 10 W
Real-time onboard processing

Expected Performances
3D Mapping without ground truth
3D geometric resolution: 10–20 cm
Field-of-view: Hemispherical
Operation in the presence of smoke and fire

Possible Technical Solutions
SAR
Radar interferometry
Millimeter wave radar
Table 2. The main features of existing compact lightweight SAR systems (N/A = not available).

System | Mass (kg) | Size (cm³) | Power Consumption (W) | Transmitted Power (W) | Resolution (m) | Maximum Range (km) | Bandwidth (MHz) | Carrier Frequency (GHz) | Scheme | Onboard Real-Time Data Processing | Single-Pass Interferometry
Lite-weight UAV Radar (LUAVR) [19] | 9 | 32,774 | 100 | 1 | 0.1 | 10 | 1800 | 35 | FMCW SAR | Yes | No
MISAR [11,20] | 4 | 10,000 | 100 | 1 | 0.5 | 4 | 300 | 35 | FMCW SAR | No | No
Brigham Young University (BYU) MicroSAR [21,22] | 2.7 | 2295.38 | 16 | 1 | 1 | 0.7 | 90 | 5.55 | FMCW SAR | No | No
MiniSAR [14] | 14 | 250 | 250 | 60 | 0.3 | 10 | 3000 | 16.8 | Pulsed SAR | Near-real time | No
NuSAR [23,24] | 8.62 | N/A | 160 | 25 | 0.3 | 0.7 | 500 | 9.75 | Pulsed SAR | Yes | No
PicoSAR [25,26] | 10 | 10,797 | 300 | 1 | 0.3 | 20 | 768 | 9.7 | Pulsed SAR | Yes | No
Radar de Apertura Sintética Miniaturizado Aéreo (MINISARA) [27,28] | 2.5 | 7296 | N/A | 1 | 0.07 | 2.97 | 2000 | 34 | FMCW SAR | N/A | No
BYU MicroASAR [29] | 3.3 | 1880.7 | 135 | 1 | 0.75 | N/A | 200 | 5.43 | FMCW SAR | No | No
SlimSAR [30,31] | 4.54 | N/A | 150 | 4 | 0.23 | N/A | 660 | 9.28 | FMCW SAR | No | No
NanoSAR [32] | 0.91 | 1674 | 15 | 1 | 0.3 | 1 | 500 | 10.25 | Pulsed SAR | No | No
NanoSAR B [33] | 1.59 | 1458.49 | 30 | 1 | 0.3 | 4 | N/A | N/A | Pulsed SAR | No | Yes
NanoSAR C [34] | 1.18 | 1409.29 | 25 | 1 | 0.3 | 3 | N/A | N/A | Pulsed SAR | No | Yes
Millimeterwave Radar using Analog and New Digital Approach (MIRANDA) [35] | 2.2 | 4459.13 | 20 | 0.1 | 0.15 | 2 | 1000 | 94 | FMCW SAR | No | No
ARBRES SAR [15] | 2.5 | 5950 | 50 | N/A | 1.5 | N/A | 100 | 9.65 | FMCW SAR | N/A | Yes
MetaSensing SAR [16] | N/A | N/A | N/A | N/A | 0.4 | N/A | 450 | 9.65 | FMCW SAR | N/A | Yes
Table 3. Input parameters for the system design.

Symbol | Parameter | Unit | Minimum Value | Maximum Value
dr | Range resolution | cm | 10 | 20
da | Azimuth resolution | cm | 10 | 20
dh | Height resolution | cm | 10 | 20
v | Platform velocity | m·s−1 | 0.25 | 2.00
θ | Off-nadir angle | ° | 15 | 75
θsq | Squint angle | ° | −45 | 45
Rmax | Maximum distance | m | 25.0 | 30.0
Rmin | Minimum distance | m | 0.5 | 3.0
Δh | Height difference between two points in adjacent range cells | cm | 5 | 20
NBIT | Number of bits | – | 16 | 32
Table 4. Constant parameters for the system design.

Symbol | Parameter | Unit | Value
fc | Carrier frequency | GHz | 94
λ | Wavelength | mm | 3.2
c | Speed of light | m·s−1 | 3 × 10^8
kB | Boltzmann's constant | J·K−1 | 1.38 × 10^−23
TN | Temperature of system | K | 290
FN | Noise figure | dB | 15
SNR | Signal-to-noise ratio | dB | 20
σ0 | Differential scattering coefficient | dB | −20
η | FMCW SAR duty cycle | – | 1
Table 5. Selected working parameters.

Symbol | Parameter | Unit | Value
dr | Range resolution | cm | 10
da | Azimuth resolution | cm | 10
v | Platform velocity | m·s−1 | 0.50
θ | Off-nadir angle | ° | 60
Rmax | Maximum range | m | 30
Rmin | Minimum range | m | 1.5
NBIT | Number of bits | – | 16
dh | Height resolution | cm | 10
B | Transmitted bandwidth | GHz | 1.50
fS | Sampling frequency | kHz | 68.327
PRF | Pulse repetition frequency | Hz | 125
d | Antenna width | m | 0.01
θr | Antenna beamwidth in the range direction | ° | 18
laz | Antenna length | m | 0.02
θaz | Antenna beamwidth in the azimuth direction | ° | 9
PT | Transmitted power | mW | < 1
α | Baseline roll angle | ° | 40
Bint | Interferometric baseline | cm | 3
Δφ | Phase resolution at the interferometer | ° | 11
Δh | Height difference between two points in adjacent range cells | cm | 10
Table 6. Additional parameters for observation.

Symbol | Parameter | Unit | Value
Tint | Integration time | s | 4
Ωmin | Lower bound on intersection angle | ° | 45
Ωmax | Upper bound on intersection angle | ° | 135
Apixel | Pixel area threshold | m² | 0.04
kres | Minimum required resolution | m | 0.20
Table 7. Position and velocity of the antenna halfway through the integration time.

Px (m) | Py (m) | Pz (m) | vx (m·s−1) | vy (m·s−1) | vz (m·s−1)
15 | 2 | 2 | 0.5 | 0 | 0
