Preliminary Study of a Millimeter Wave FMCW InSAR for UAS Indoor Navigation

Small autonomous unmanned aerial systems (UAS) could be used for indoor inspection in emergency missions, such as damage assessment or the search for survivors in dangerous environments, e.g., power plants, underground railways, mines and industrial warehouses. Two basic functions are required to carry out these tasks, namely autonomous GPS-denied navigation with obstacle detection and high-resolution 3D mapping with moving target detection. State-of-the-art sensors for UAS are very sensitive to environmental conditions and often fail in the case of poor visibility caused by dust, fog, smoke, flames or other factors that are met as nominal mission scenarios when operating indoors. This paper is a preliminary study concerning an innovative radar sensor based on the interferometric Synthetic Aperture Radar (SAR) principle, which has the potential to satisfy the stringent requirements set by indoor autonomous operation. An architectural solution based on a frequency-modulated continuous wave (FMCW) scheme is proposed after a detailed analysis of existing compact and lightweight SARs. A preliminary system design is obtained, and the main imaging peculiarities of the novel sensor are discussed, demonstrating that high-resolution, high-quality observation of an assigned control volume can be achieved.


Introduction
Unmanned aerial systems (UAS) are commonly defined as medium-small scale uninhabited aerial vehicles able to attain stable flight operation thanks to a control system that can be programmed to follow a certain flight path or can be remotely controlled from a ground station. Today, UAS are moving toward autonomous sense and detect functions [1,2] and are performing missions with increasing levels of autonomy and complexity, such as repetitive reconnaissance and surveillance, whereby human presence onboard is undesirable or inadvisable. Outdoor flying unmanned vehicles have received a considerable amount of research and industrial attention over the years. Although limitations exist concerning UAS inclusion in air space, today, complete systems are available for military and civilian applications [3,4].
On the contrary, there is still much to be done in the area of indoor or urban autonomous operation, both for vehicle navigation and for monitoring or exploration. The application to unknown building interiors and very cluttered urban or natural environments is one of the most demanding issues envisioned for UAS, since it requires the real-time capability: (i) to detect and identify very different objects, such as buildings, walls, caves, infrastructures or underground facilities, in problematic and unpredictable illumination conditions; (ii) to navigate through complex-shaped passageways, even avoiding non-stationary obstacles; and (iii) to gather and relay information. Unlike outdoor applications, indoor flight operations impose the additional strong constraint of using very compact, extremely lightweight small UAS or micro aerial vehicles (MAV). Target mission scenarios include high-risk indoor inspection, e.g., nuclear power plant failure and leakage or tunnel roof collapse in a mine, but also the search for survivors in cluttered, dense urban environments or indoors, such as underground railways or industrial warehouses. Pipeline inspection and nuclear, biological or chemical (NBC) emergency reconnaissance represent additional dangerous applications that could take full advantage of small UAS and MAV operations. Completely different scenarios, but similar capabilities, are required in planetary exploration. Specifically, in past decades, rovers have emerged as one of the most important tools for planetary exploration. Important drawbacks of rover systems are the limited coverage they can achieve and the uncertainty of the terrain. For planetary and planet-like bodies, when a significant atmosphere is present, the above limitations can be overcome by aerial vehicles.
In addition to Earth, several planets, such as Venus, Mars, Jupiter, Saturn, Uranus and Neptune, but also the Saturn moon, Titan, are endowed with an adequate atmosphere. Aerial vehicles proposed and investigated for planetary exploration include [5][6][7] airplanes and gliders, helicopters, balloons and airships. The most investigated solutions are based on lighter-than-atmosphere robotic airships combining the long-term airborne capability of balloons with the maneuverability of airplanes or helicopters.
The introduced applications involve flight operation in GPS-denied and substantially unknown environments with a potentially large communication latency (planetary exploration) or extended communication blackout periods (indoor emergencies). The accomplishment of two basic functions is required to carry out these tasks: fully autonomous navigation with obstacle detection/avoidance capability and high-resolution 3D mapping and monitoring of the target area, including moving target detection. Unless the small UAS is provided with hovering capability, autonomous navigation clearly presents the most stringent time requirements. Regarding obstacle avoidance, in theory, accurate geometric models of the operational environment combined with thematic information and the description of all of the present objects could reduce the need for continuous and real-time sensing. However, those data are often neither updated nor available at the required spatial resolution and accuracy. Furthermore, unexpected obstacles, for instance produced by the very accident that requires investigation, can appear anytime and anywhere; hence, real-time mapping capabilities are required, too.
The set of data needed to perform these tasks cannot be provided by sensors that are potentially adequate under conventional operating conditions, such as laser scanners and optical cameras, owing to their physical size, weight, strong dependence on illumination conditions and possible poor visibility caused by environmental factors. Conversely, radar sensors are able to operate in any illumination condition, and microwave carrier frequencies allow for coherent signal detection to be performed, thus resulting in significantly increased sensitivity and instant access to range information. In addition, high-resolution 3D mapping can be provided by combining the Synthetic Aperture Radar (SAR) technique with radar interferometry [8,9]. This also makes velocity information available via Doppler processing, which is a valuable feature for sensors operating onboard moving platforms. Finally, millimeter wave radar technology has been receiving increasing interest for application in small UAS [10,11] thanks to the limited size and power requirements and the capability to penetrate smoke and fire [12,13]. The objective of this work is to assess the main features, possible architectural schemes and technical solutions and to carry out a preliminary design of a very innovative radar sensor for novel autonomous operations onboard small UAS. Table 1 summarizes the key driving issues in the preliminary design that will be presented in the paper. First of all, it should be noted that for matching with the considered operational scenarios, the sensor must be compact, lightweight and characterized by low power consumption. In addition, it has to guarantee very high 3D resolution and accuracy, as well as the capability to perform real-time onboard processing in order to support autonomous navigation, exploration and mapping in completely unknown and unstructured environments.

State-of-the-Art Analysis
In the last decade, several compact and lightweight SARs have been developed and tested for different purposes and applications. Table 2 lists the most relevant systems together with their main features, as available today in the open literature. All of them are devoted to outdoor operations, such as surveillance and remote sensing applications, and work in side-looking mode with limited pointing capability. Vision-based navigation through those radar sensors has not been implemented yet. None of these systems satisfies all of the constraints of Table 1. Real-time onboard operation is rarely enabled; resolutions can be insufficient; and in most cases, the mass and power requirements exceed small platform availability. Nonetheless, a few interesting features can be highlighted. MiniSAR by Sandia National Labs [14] and the Miniaturized SAR (MISAR) by the European Aeronautic Defence and Space Company N.V. (EADS) [11] both include a double gimbal structure, which allows mechanical steering of the antenna to be achieved, thus making SAR interferometry along multiple directions possible. In both cases, two separate antennas, one for transmission and one for reception, are accommodated to implement a frequency-modulated continuous wave (FMCW) scheme. More than half of the listed sensors exploit this architectural scheme, even though they do not possess a gimbal structure. Finally, it is important to remark that the AiR-Based REmote Sensing (ARBRES) X-Band SAR [15] and the MetaSensing X-Band SAR [16] make use of three antennas, namely two receiving and one transmitting, for performing FMCW single-pass interferometry.
In the following subsections, a critical analysis of some key design solutions is presented, and then, an adequate innovative architecture is proposed.

Why FMCW SAR
First of all, it is necessary to point out the advantages connected to the use of FMCW SAR. FMCW radar transmits a frequency-modulated signal, as is usual in SAR, but as a continuous wave, differently from most realizations. The received echo, which is delayed by the round trip delay τ associated with the target range, is mixed with the transmitted signal [17]. For a linear frequency modulation, the output of the mixing process, namely the beat signal, has two Fourier components at different frequencies. The first component is a signal centered at a constant frequency lower than the carrier frequency [18]. The second component is a residual signal centered approximately at twice the carrier frequency, which has less energy than the former component [17] and is filtered out. The process involving both the mixing of transmitted and received signals and the low-pass filtering of the beat signal is also called deramp-on-receive. The aforementioned low, constant frequency in the beat signal, which is computed by differentiating the phase term of the beat signal with respect to time, is labeled as the beat frequency. The beat frequency holds strong relevance in FMCW radar, as it is directly proportional to the target range through the sweep rate and the propagation velocity, thus allowing the system to compute the range by measuring the beat frequency. The theoretical value for the range resolution is [17]:

$\delta_r = \frac{c}{2B}$ (1)

where c is the light velocity and B is the transmitted bandwidth. Actually, Equation (1) is equivalent to the conventional pulsed radar theoretical range resolution [8,36]. However, it is important to remark that the FMCW range-compressed signal is obtained in the frequency domain rather than in the time domain. The FMCW scheme guarantees decisive advantages with respect to conventional pulsed SAR, especially when compact systems have to be realized.
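The deramp-on-receive principle described above can be verified with a minimal numerical sketch: a delayed copy of a linear sweep is mixed down to a single beat tone whose frequency maps linearly to range. All parameter values below are illustrative, not the design values of this paper.

```python
import numpy as np

c = 3e8          # propagation velocity [m/s]
B = 1e9          # swept bandwidth [Hz] -> theoretical resolution c/(2B) = 0.15 m
T = 1e-3         # sweep duration [s]
fs = 10e6        # beat-signal sampling frequency [Hz] (MHz range, well below B)
R_true = 30.0    # target range [m]

t = np.arange(0, T, 1 / fs)
k = B / T                      # chirp (sweep) rate [Hz/s]
tau = 2 * R_true / c           # round-trip delay
# Deramp-on-receive: mixing the transmitted sweep with the delayed echo leaves
# a tone at the beat frequency k * tau (the residual component near twice the
# carrier frequency is assumed already removed by the low-pass filter).
beat = np.exp(2j * np.pi * k * tau * t)

spec = np.abs(np.fft.fft(beat))
f_b = np.fft.fftfreq(len(t), 1 / fs)[np.argmax(spec)]
R_est = c * f_b * T / (2 * B)  # invert f_b = 2 * B * R / (c * T)
print(round(R_est, 2))         # -> 30.0
```

Note that range compression is obtained simply by a Fourier transform of the beat signal, i.e., in the frequency domain, as remarked in the text.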
Continuous transmission, i.e., a unity duty cycle η = 1, involves a lower transmitted peak power, which makes possible significant simplifications in the power generation and conditioning unit, along with a strong reduction in power requirements with respect to pulsed systems. In addition, deramp-on-receive relies on the sampling of the beat signal bandwidth $B_B$ instead of the whole transmitted bandwidth B. This means that even a GHz bandwidth can easily be handled by a MHz sampling frequency $f_S$, because $B_B \ll B$, thus allowing simpler and cheaper hardware equipment.
The FMCW's peculiar features in comparison to traditional pulsed technology are a consequence of the motion during continuous transmission. A better understanding of motion effects on the signal is given in [37], in which the following equation is reported for the beat signal in the two-dimensional spatial frequency domain:

$S(K_r, K_x) \propto \exp\left(-j v t K_x\right) \exp\left(-j R_0 \sqrt{K_r^2 - K_x^2}\right)$ (2)

where $K_r$ and $K_x$ are the spatial frequencies in the range and azimuth directions, respectively, v is the platform velocity, $R_0$ is the distance of the closest approach and t is the time referring to the signal transmission/reception at velocity c. The second exponential in Equation (2) coincides with the beat signal of conventional pulsed SAR in the two-dimensional spatial frequency domain, whereas the first is a space-invariant term that takes into account the motion during transmission. This term becomes equal to one in conventional SAR because of the start-stop approximation, which assumes that the radar is stationary during the pulse transmission-reception, because $v \ll c$. The start-stop approximation is traditionally exploited to explain raw SAR image formation [8]. As a direct consequence of Equation (2), in general, conventional algorithms for SAR image formation would result in FMCW SAR image degradation. More complex reference functions have to be adopted in these cases [38].
However, specific conditions exist in which the start-stop approximation can be considered valid for FMCW SAR, too. Even though continuous transmission is used, it is possible to define the concept of the pulse repetition interval (PRI) for FMCW radar as the sweep duration, i.e., the time the transmitted frequency takes to shift from the minimum to the maximum frequency, or equivalently, the time between the start of two consecutive sweeps. It is clear that the latter definition leads to essentially the same meaning of the PRI as for pulsed SAR, although it refers to a sweep instead of a chirp (see Figure 1). Based on the introduced PRI, the pulse repetition frequency can be defined as the reciprocal of the PRI. The Nyquist sampling theorem requires the PRI to be small enough in order to properly sample the azimuth Doppler history. In detail, provided that the sampling requirements are satisfied [38], each sweep represents a sample of the Doppler history in the same way as a pulse of conventional SAR. Hence, both fast time t and slow time $t_N$ (i.e., referring to radar motion at velocity v) can be introduced for FMCW SAR, too. On the other hand, a longer sweep duration would produce several samples of the azimuth Doppler history within each sweep, thus making the start-stop approximation less acceptable. The remainder of this paper focuses on the case in which the start-stop approximation is valid [16,38].
As in conventional SAR, the FMCW SAR target response exhibits a Doppler bandwidth, $B_D$, generated by the variation of the observation angle and, therefore, by the variation of the radial velocity:

$B_D = \frac{2v}{\lambda}\left[\sin\left(\theta_{sq} + \frac{\theta_{az}}{2}\right) - \sin\left(\theta_{sq} - \frac{\theta_{az}}{2}\right)\right]$ (3)

where λ is the carrier wavelength, $\theta_{sq}$ is the squint angle and $\theta_{az}$ is the beamwidth in the azimuth direction. Hence, provided that proper motion compensation algorithms are exploited [17,38], the theoretical FMCW SAR azimuth resolution is:

$\delta_{az} = \frac{l_{az}}{2}$ (4)

where $l_{az}$ is the antenna length. Equation (4) is exactly the same equation that holds for conventional pulsed SAR.
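The Doppler bandwidth and the resulting Nyquist bound on the sweep rate (Section 2.1) can be checked numerically. The sketch below uses a standard form of the Doppler bandwidth expression consistent with the symbols defined above; the wavelength, velocity and antenna length are illustrative assumptions, not the paper's design point.

```python
import math

# Illustrative values: a millimeter-wave sensor on a slow indoor UAS
lam = 0.004            # carrier wavelength [m] (~75 GHz)
v = 1.0                # platform velocity [m/s]
l_az = 0.02            # antenna length in azimuth [m]
theta_sq = 0.0         # squint angle [rad]
theta_az = lam / l_az  # azimuth beamwidth [rad], standard aperture approximation

# Doppler bandwidth generated by the observation-angle variation (Equation (3))
B_D = (2 * v / lam) * (math.sin(theta_sq + theta_az / 2)
                       - math.sin(theta_sq - theta_az / 2))
PRF_min = B_D          # the PRF (1/PRI) must sample the Doppler history
delta_az = l_az / 2    # theoretical azimuth resolution (Equation (4))
print(round(B_D, 1), round(PRF_min, 1), delta_az)
```

For small beamwidths and zero squint, $B_D \approx 2v/l_{az}$, so a slow indoor platform needs only a very modest sweep repetition frequency to avoid azimuth ambiguities.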
As expected, the result of range and azimuth compression is a bi-dimensional sinc function multiplied by two complex exponentials, the former depending on both the minimum platform-to-target distance and a reference distance $R_{ref}$ used for the processing [39], the latter depending only on the reference distance and the system parameters. In the resulting expression (Equation (5)), $f_R$ is the range frequency, $x_0$ is the position of the target along the azimuth direction with respect to the scene center and $\tau_{ref}$ is the time delay of the echo at the reference range $R_{ref}$, which corresponds to the range from the scene center. The first exponential resembles the exponential term of the pulsed SAR 2D-focused signal and again can be exploited to perform interferometry (see Section 2.3). Moreover, it has to be noted that the signal of Equation (5), unlike the pulsed SAR 2D-focused signal, is better described in the range-time domain, as the range frequency $f_R$ is directly proportional to the range in FMCW SAR. Finally, the amplitude of the resulting signal depends on the Doppler bandwidth.

The implementation advantages of FMCW SAR must be weighed against some drawbacks that this scheme exhibits. In general, data processing is more complex with respect to pulsed SAR, because deramp-on-receive produces an unwanted phase term, called the residual video phase (RVP), which must be removed. In addition, moving targets can introduce ambiguities in range measurement. Indeed, owing to the longer observation time compared to a conventional system, targets can move through several resolution cells within a sweep [38], causing the Doppler effect not to be negligible. Several solutions have been proposed to correctly determine the range, even in the presence of moving targets, including triangular frequency modulation [17,18] to determine the range and Doppler information within a single time interval.
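The triangular-modulation idea mentioned above can be sketched in a few lines: on the up-ramp the Doppler shift subtracts from the range-induced beat, on the down-ramp it adds, so the two measured beats can be solved for range and radial velocity. Sign conventions and all parameter values here are illustrative assumptions.

```python
c = 3e8
B, T = 1e9, 1e-3           # sweep bandwidth [Hz] and ramp duration [s]
lam = 0.004                # carrier wavelength [m]
k = B / T                  # chirp rate [Hz/s]

def beat_pair(R, v_r):
    """Beat frequencies on the up- and down-ramps of a triangular sweep.
    Convention assumed here: a closing target (v_r > 0) raises the Doppler."""
    f_r = k * 2 * R / c    # range-induced component
    f_d = 2 * v_r / lam    # Doppler component
    return f_r - f_d, f_r + f_d

def solve(f_up, f_down):
    """Recover range and radial velocity from the two measured beats."""
    f_r = (f_up + f_down) / 2
    f_d = (f_down - f_up) / 2
    return c * f_r / (2 * k), f_d * lam / 2

f_up, f_down = beat_pair(R=25.0, v_r=2.0)
R_est, v_est = solve(f_up, f_down)
print(round(R_est, 3), round(v_est, 3))   # -> 25.0 2.0
```

Both unknowns are thus obtained within a single triangular period, which is why this waveform is a common remedy for the moving-target range ambiguity.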
Non-linearities in the transmitted and received signals cause an additional erroneous phase term in the beat signal, therefore leading to deteriorated range resolution [38]. Typical algorithms for non-linearity correction work under the assumption that non-linearity effects depend linearly on the time delay, which is true for small distances. This is the case for indoor applications. The assumption fails for long-range observations and causes the computational load to increase. Hardware and software solutions are known in the literature [17,38], such as voltage-controlled oscillators (VCO) and direct digital synthesizers (DDS), or approaches based on approximations of the non-linearity. Finally, the simultaneous signal transmission and reception generates signal leakage in the reception chain. Specifically, due to the extremely high transmitted-to-received power ratio, saturation or damage of equipment can occur if even a small leakage of transmitted power is present [18]. Good isolation is therefore required, and typically, separated transmitting and receiving antennas in either bistatic or quasi-monostatic configurations are exploited. Considering that reasonably mature solutions are available today to deal with the discussed drawbacks, and taking into account its advantages for the considered applications, the FMCW SAR scheme is selected herein as the basis for the system architecture.

Why SAR Interferometry
SAR interferometry is a technique that exploits the phase information obtained from two or more SAR images in order to compute target height and position in a three-dimensional environment. It can be considered a well-assessed technology for conventional pulsed SAR [8,9]. As regards FMCW SAR, the 2D-focused SAR signal (see Equation (5)) shows that the phase of the azimuth sinc samples the target range as a multiple of the wavelength and can therefore be utilized to perform interferometry. It has to be noted that it is necessary to remove the additional contribution to the phase given by the reference range distance, which is typically the distance to the center of the scene illuminated by the beamwidth, and which can therefore be different in the two images to be correlated. SAR interferometry has been successfully tested on data collected by FMCW SAR [16], and it is considered a key asset towards the operational scenario considered in this work.
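The core idea, that the focused phase samples range as a multiple of the wavelength, can be illustrated with a minimal sketch: the wrapped interferometric phase recovers the path difference between the two receive antennas unambiguously as long as it stays below the wavelength-scale ambiguity bound. The factor p below is an assumption that selects the acquisition convention (p = 1 for one transmit and two receive antennas in a single pass, p = 2 for repeat pass); all values are illustrative.

```python
import math

lam = 0.004                    # wavelength [m]
p = 1                          # 1: single TX / two RX (single pass); 2: repeat pass
R1, R2 = 10.0, 10.0 + 0.0007   # slant ranges from the two receive antennas [m]

# The interferometric phase measures the path difference, but only modulo 2*pi
phi = 2 * math.pi * p * (R2 - R1) / lam
phi_wrapped = math.atan2(math.sin(phi), math.cos(phi))

# With |R2 - R1| < lam / (2 * p) the path difference is recovered unambiguously
dR_est = phi_wrapped * lam / (2 * math.pi * p)
print(round(dR_est * 1e3, 4))  # path difference in millimetres
```

Larger path differences wrap around, which is exactly why the baseline must be designed against a 2π bound, as done in Section 3.3.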

Selected Scheme
Based on the state-of-the-art analysis, a system architecture that is potentially able to satisfy all of the requirements listed in Table 1 is shown in Figure 2. The selected scheme is an interferometric FMCW SAR, equipped with three antennas, one transmitting and two receiving, mounted on a double gimbal structure. Among various factors, interferometric measurement resolution and accuracy are strongly dependent on the knowledge and control of the antenna separation. Furthermore, the proposed system is compact and operates on a single platform, i.e., the antennas can be rigidly connected and simultaneously pointed to specific targets by adequately rotating the double gimbal to change the baseline (i.e., the antenna separation with respect to the target). Hence, it is expected to achieve adequate performance. It is worth noting that: (i) although electronic antenna steering would be favorable for fast and accurate sweeping of the whole hemispherical field-of-view, the creation of adequate baseline components to extract phase measurements is based on antenna mechanical re-orientation; consequently, the design and development of a double gimbal has been considered to ease the realization of both the antenna and the electronics; (ii) depending on the platform selected for the mission, for instance a quadrotor, antenna mechanical re-orientation can be achieved by either rotation of the platform itself or the combined action of the platform and the double gimbal.
In addition, an autonomous processing unit (PU), committed to real-time onboard data processing, is included in the scheme. Radar data are stored onboard in a mass memory unit. These data are exploited by the PU to directly command the double gimbal pointing system. The PU also sends information to the UAS navigation unit via a direct interface data link. Communication from the navigation unit to the PU is also necessary to support image processing and data extraction. Finally, the PU interfaces with the radio frequency transmitter to send stored data to the ground station via a wireless data link, when available.

Figure 2. System architecture.

Preliminary Design Process
The design process is outlined in Figure 3: circles represent input parameters, which have been chosen according to the system requirements (Table 1), the system architecture (Figure 2) and the application, whereas boxes return the sought values. The input parameters of the design process are chosen first. Table 3 lists the input parameters that vary between a minimum and a maximum value, whereas Table 4 lists the ones that assume a constant value in the implemented design process. The resolution requirements in the range, azimuth and height directions are chosen according to the expected performance, whereas the boundaries on platform velocity and maximum and minimum range distances depend on the application. In our case, it is the dynamics of the small UAS flying in an indoor environment performing loitering maneuvers. In addition, a typical value for an indoor differential radar cross-section has been considered. The following sub-sections report a brief explanation of the peculiar blocks, specific to the FMCW SAR design. An example of the overall system characteristics is finally derived, accordingly.

Figure 3. Design process guidelines: Block diagram.

Ambiguities and Antenna Width
Range ambiguity for an FMCW radar may occur, owing to the continuous transmission of a frequency-modulated signal, when an echo from a target arrives at the receiver after the end of the sweep that generated it. As a result, the received signal will be mixed with a different sweep, and the target will appear closer than it is in reality (see Figure 4). The unambiguous range is therefore equal to the round-trip distance covered by the wave in a single sweep, namely:

$R_{ua} = \frac{c \cdot PRI}{2}$ (6)

Figure 4. FMCW ambiguity in range: The first sweep reflection from the furthest target (red line) is between the transmitted signal (black line) and the second sweep reflection from the closest target (blue line), so that the furthest target is imaged closer.
Therefore, under the hypothesis that the whole swath width is less than the unambiguous range, the inequalities of Equation (7) shall be satisfied to avoid echo ambiguities and bandwidth undersampling, where the subscripts FR and NR refer to far- and near-range, respectively. The difference $R_{FR} - R_{NR}$ depends on the antenna aperture, and hence inversely on the antenna width in elevation. Since the considered distances and the Doppler bandwidth are small, Equation (7) does not yield strict bounds on the antenna dimensions. Hence, the antenna width d can be quite small and may be chosen according to other requirements, e.g., the radar equation, heat dissipation and technological restrictions.
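The two constraints, echoes returning within the sweep that generated them and the beat bandwidth fitting the sampling frequency, are easy to check numerically. The sketch below uses illustrative indoor-scale numbers, not the paper's design values, and assumes complex sampling of the beat signal.

```python
c = 3e8
B, PRI = 1e9, 1e-3          # sweep bandwidth [Hz] and sweep duration [s]
R_NR, R_FR = 1.0, 50.0      # near- and far-range of the swath [m]
fs = 2e6                    # beat-signal (complex) sampling frequency [Hz]

k = B / PRI                 # chirp rate
R_ua = c * PRI / 2          # unambiguous range: round trip within one sweep
f_b_max = k * 2 * R_FR / c  # largest beat frequency occurring in the swath

ok_range = (R_FR - R_NR) < R_ua   # echoes mix with the sweep that generated them
ok_sampling = fs >= f_b_max       # beat bandwidth fits the sampling frequency
print(round(R_ua), round(f_b_max), ok_range and ok_sampling)
```

At indoor distances the unambiguous range (here 150 km) is orders of magnitude larger than the swath, which is why Equation (7) is not a driving constraint for this application.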

Transmitted Power
Transmitted power can be computed by the formula derived in [40] (Equation (8)), which takes into account the range and azimuth compression gains, $N_R$ and $N_A$, respectively. In Equation (8), the subscripts T and R refer to the transmitting and receiving antenna gains (G), $B_N$ is the noise bandwidth and $dr_{gr}$ is the ground range resolution. For rectangular antennas, the gain at the boresight is expressed in [41,42] as:

$G = k_e \frac{4\pi A}{\lambda^2}$ (9)

where $k_e$ is an efficiency factor, typically equal to 0.65, and A is the antenna area. Under the hypothesis of identical transmitting and receiving antennas and by expressing the compression gains as in [43], Equation (8) becomes Equation (10). Concerning the transmitted power, it is important to point out that in FMCW SAR, the noise bandwidth $B_N$ is equal to the sampling frequency $f_S$ [44]. This is an additional advantage over conventional SAR, in which the noise bandwidth is equal to the transmitted one.
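Two of the ingredients above can be sketched numerically: the boresight gain of a rectangular aperture (Equation (9)) and the noise-bandwidth advantage of FMCW, where $B_N = f_S$ rather than the transmitted bandwidth. The antenna size and bandwidths below are illustrative assumptions.

```python
import math

lam = 0.004            # wavelength [m]
k_e = 0.65             # aperture efficiency (typical value, Equation (9))
A = 0.02 * 0.02        # rectangular antenna area [m^2] (illustrative)

G = k_e * 4 * math.pi * A / lam**2      # boresight gain, Equation (9)
G_dB = 10 * math.log10(G)

# FMCW advantage: the noise bandwidth equals the beat-signal sampling
# frequency f_S instead of the transmitted bandwidth B
f_S, B = 2e6, 1e9
noise_advantage_dB = 10 * math.log10(B / f_S)
print(round(G_dB, 1), round(noise_advantage_dB, 1))
```

Even a 2 cm x 2 cm aperture yields ~23 dB of gain at millimeter wavelengths, while the reduced noise bandwidth buys roughly 27 dB of SNR in this example, which is the main reason the transmitted power budget of the indoor design is so low.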

Interferometry
Plane wave approximation (pwa) is a typical assumption exploited to perform interferometry and to compute the interferometric phase φ. With reference to the geometry depicted in Figure 5, this leads to:

$\varphi = \frac{4\pi}{\lambda} B_{int} \sin(\theta - \alpha)$ (11)

where $B_{int}$ is the interferometric baseline, defined as the modulus of the antenna separation vector, and α is the baseline roll angle. In Equation (11) and in the following, $\varphi_i$ represents the interferometric phase of the i-th point and $R_{j,i}$ the distance between the j-th antenna and the i-th point. Therefore, the differential phase between two points in adjacent range cells, with separation in height ∆h and separation in slant range $dr = R_{1,2} - R_{1,1}$, is given by Equation (12), where ∆θ is the variation in the off-nadir angle related to the difference in height (Equation (13)). For a close-range (cr) application, as is the aim of the present work, the plane wave approximation is not valid anymore. Hence, Equation (11) must be generalized as:

$\varphi_i = \frac{4\pi}{\lambda} (R_{2,i} - R_{1,i})$ (14)

thus leading to the differential phase of Equation (15). The percentage error resulting from the adoption of the plane wave approximation (12) in a close-range application can be calculated as in Equation (16). Figure 6 shows the percentage error function for various θ, ∆θ, $B_{int}$ and R. The error increases for larger $B_{int}$ and closer targets, as the lines of sight of the two antennas become less and less parallel. Finally, increasing the off-nadir angle θ causes a shift of the function towards larger α, although, obviously, the periodic behavior of the function is preserved.
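The pwa breakdown at close range can be quantified directly by comparing the approximated path difference $B_{int}\sin(\theta - \alpha)$ against the exact one from the geometry. The 2D across-track geometry and all parameter values below are illustrative assumptions.

```python
import math

B_int, alpha = 0.05, math.radians(10)   # baseline [m] and roll angle
R, theta = 3.0, math.radians(45)        # close-range target geometry

# Antenna 1 at the origin; x horizontal, z up; target below at off-nadir theta
tx, tz = R * math.sin(theta), -R * math.cos(theta)
ax, az = B_int * math.cos(alpha), B_int * math.sin(alpha)   # antenna 2

R1 = R
R2 = math.hypot(tx - ax, tz - az)

dR_exact = R1 - R2                        # exact path difference
dR_pwa = B_int * math.sin(theta - alpha)  # plane wave approximation

err_pct = 100 * abs(dR_pwa - dR_exact) / abs(dR_exact)
print(round(err_pct, 2))
```

At R = 3 m with a 5 cm baseline the pwa path difference is already off by about 1%, consistent with the trend discussed above: the shorter the range and the larger the baseline, the less parallel the two lines of sight.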

Interferometric Baseline
A new method to design the interferometric baseline for close-range applications is required. Equation (15) does not allow $B_{int}$ to be obtained directly from the other parameters, so it is necessary to adopt an indirect solution. The one proposed here envisages exploiting the numerical representation of Equation (15), given a certain geometry, as a function of a range of values for both $B_{int}$ and α. One of the requirements for the correct reconstruction of the height variation is that the difference in phase between two adjacent pixels is no greater than 2π. Therefore, all of the couples:

$(B_{int}, \alpha) : \left|\Delta\varphi(B_{int}, \alpha)\right| > 2\pi$ (17)

are discarded, whereas all of the other values could represent a good choice, depending on the application. The value of the maximum allowable interferometric baseline:

$B_c = \max\left\{B_{int} : \left|\Delta\varphi(B_{int}, \alpha)\right| \le 2\pi\right\}$ (18)

referred to as the critical baseline [9], is shown in Figures 7 and 8 for various operating conditions. As expected, Figure 7 shows that when the range increases, the critical baseline increases as well. This means that, depending on the size of the antennas, there is a minimum achievable interferometric baseline, thus imposing a bound on the smallest distance at which it is possible to perform interferometry. Based on this consideration, the minimum values for $R_{min}$ listed in Table 3 have to be updated accordingly. However, it has to be pointed out that this minimum distance is also strongly related to the height variation between points in adjacent range cells. Namely, if ∆h is smaller than expected, then interferometry can be performed at an even smaller range distance (see Figure 8).
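The indirect procedure described above amounts to a numerical scan: sweep the baseline, evaluate the exact (non-pwa) differential phase between two adjacent cells, and keep the largest baseline whose phase step stays below 2π. The sketch below implements this for one fixed roll angle; the 2D geometry, the height step and all numbers are illustrative assumptions.

```python
import math

lam = 0.004
R, theta = 2.0, math.radians(45)     # close-range geometry
dh = 0.1                             # height step between adjacent cells [m]
alpha = math.radians(10)             # baseline roll angle

def exact_phase(bx, bz, px, pz):
    """phi = (4*pi/lam) * (R2 - R1): generalized (non-pwa) interferometric phase."""
    R1 = math.hypot(px, pz)
    R2 = math.hypot(px - bx, pz - bz)
    return 4 * math.pi / lam * (R2 - R1)

p1 = (R * math.sin(theta), -R * math.cos(theta))
p2 = (p1[0], p1[1] + dh)             # same ground position, dh higher

B_crit = 0.0
for i in range(1, 2001):             # scan baselines up to 0.2 m in 0.1 mm steps
    b = i * 1e-4
    bx, bz = b * math.cos(alpha), b * math.sin(alpha)
    dphi = exact_phase(bx, bz, *p2) - exact_phase(bx, bz, *p1)
    if abs(dphi) <= 2 * math.pi:
        B_crit = b                   # largest baseline still within the 2*pi bound
print(round(B_crit, 4))
```

For this geometry the critical baseline comes out at a few centimetres, of the same order as the antenna spacing that a small gimballed UAS payload could actually provide.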

System Parameters
In Section 3.1, the input parameters, due to both the requirements and the envisaged missions, for the design of an innovative FMCW SAR system have been shown (see Table 3). In the remainder of this section, attention will be paid to further assumptions, which have been made to achieve a combination of working parameters (see Table 5) by exploiting the design block diagram depicted in Figure 3 and by accounting for the radar and interferometry constraints previously discussed. In order to propose an advanced configuration, the most stringent input values from Table 3 have been chosen for the theoretical three-dimensional resolution. Furthermore, the mission profile contributed to the choice of both the platform velocity v, small enough to move in unknown environments, and the expected difference in height ∆h, set equal to the height resolution. Finally, the off-nadir angle θ, which influences both the transmitted power $P_T$ and the interferometric performance, has been chosen to achieve an adequate baseline. It is worth noting that, since the radar is designed to operate indoors at close range, the transmitted power is much lower than the values of the existing compact, lightweight systems listed in Table 2. Nonetheless, the parameters reported in Table 5 must be considered as nominal ones. From the practical point of view, the system must be able to collect useful data under extremely different operating conditions depending on the observation geometries, the synthetic aperture formation and the effective baseline. The next section will focus on these problems, which are critical for the proposed system.

Assessment of Three-Dimensional Mapping Capabilities
A typical operational scenario for the proposed system is well represented by a parallelepiped, whose dimensions are depicted in Figure 9. Specifically, concerning indoor exploration, this parallelepiped can represent an example of a warehouse in which the sensor is requested to operate. The same scenario is valid also for planetary exploration, where the parallelepiped can be conceived of as a relatively small control volume that encloses scatterers, which vary depending on the application. The design values proposed in the previous section (see Table 5) allow both acceptable values of SNR for the whole range of distances to be obtained and the start-stop approximation to be exploited.
Concerning geometric resolution, it is worth highlighting that a practically rectangular resolution element is achieved when a conventional side-looking monostatic SAR is considered. Specifically, this is possible because the azimuth (along-track) and range (across-track) directions are orthogonal, and the sampling frequency and pulse repetition frequency (PRF) are tuned correspondingly, accounting for multilook processing, too [45]. On the contrary, the proposed system is designed to look, in general, along directions not perpendicular to the motion of the platform. As a result, image pixels no longer cover rectangular, but differently-skewed areas. Hence, in order to get satisfactory resolutions, it is of primary importance both to introduce a set of figures of merit to decide whether an image is acceptable or not and to evaluate the system performance in the control volume.

Geometric Model
The target position in three-dimensional space is determined by the intersection of three surfaces: namely the range sphere, Doppler cone and phase hyperboloid [9]. Given a Cartesian coordinate system, whose origin is in the vertex O and axes along the edges of parallelepiped OD, OA and OC in Figure 9, P and T represent the antenna and target positions in Equations (19), whereas l represents the line of sight vector. It is worth noting that, if plane wave approximation is valid [9], the phase hyperboloid Equation (19c) degenerates into a cone.
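The three-surface intersection of Equations (19) can be demonstrated with a small numerical sketch: given the measured range, Doppler frequency and path difference, a Newton iteration recovers the target position. The antenna positions, velocity and target are illustrative assumptions; note that the geometry is symmetric about the plane containing the velocity and baseline vectors, so the initial guess selects between the two mirror (left/right) solutions.

```python
import numpy as np

lam = 0.004
P1 = np.array([0.0, 0.0, 2.0])          # transmitting/receiving antenna [m]
P2 = P1 + np.array([0.05, 0.0, 0.02])   # second receiving antenna (baseline)
v = np.array([1.0, 0.0, 0.0])           # platform velocity [m/s]

T_true = np.array([3.0, 1.5, 0.0])      # target (unknown to the solver)

def measurements(T):
    r1, r2 = np.linalg.norm(T - P1), np.linalg.norm(T - P2)
    f_D = (2 / lam) * v @ (T - P1) / r1   # Doppler cone
    return np.array([r1,                  # range sphere
                     f_D,
                     r1 - r2])            # phase hyperboloid (path difference)

m = measurements(T_true)

# Newton iteration with a numeric Jacobian, from a rough initial guess
T = np.array([2.0, 2.0, 0.5])
for _ in range(30):
    F = measurements(T) - m
    J = np.empty((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = 1e-6
        J[:, j] = (measurements(T + e) - measurements(T - e)) / 2e-6
    T = T - np.linalg.solve(J, F)
print(np.round(T, 4))
```

The third equation is far less sensitive than the other two (its gradient scales with the baseline-to-range ratio), which is the numerical counterpart of the baseline-design considerations of Section 3.3.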

Range Sphere-Doppler Cone Intersection
The gradient method can be exploited to assess the effects of pixel shape in the presence of the squint angle within the whole three-dimensional environment. The application of the gradient method requires the introduction of more general definitions of the range and Doppler (or azimuth) directions, namely the directions of the fast-time gradient ∇t and of the Doppler frequency gradient ∇f_D, respectively [46]. In addition, a further hypothesis of motion at constant velocity within the integration time is assumed. It is worth noting that the gradient method, traditionally applied considering terrain, can be extended to each wall in the case of indoor navigation to obtain three-dimensional awareness.
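A minimal sketch of the gradient method on a single wall is given below. It assumes the standard expressions for the range gradient (the unit line-of-sight vector) and the Doppler gradient (the velocity component orthogonal to the line of sight), projects both onto the wall plane and returns the intersection angle Ω between the iso-range and iso-Doppler lines; the wavelength and geometry are illustrative assumptions:

```python
import numpy as np

WAVELENGTH = 3.9e-3   # m, assumed millimeter-wave carrier

def isoline_angle_on_wall(P, T, v, n):
    """Intersection angle (deg) between iso-range and iso-Doppler lines
    at target T on a wall with unit normal n, for antenna P moving with
    velocity v. The gradients of fast time (range) and Doppler frequency
    are projected onto the wall plane; the angle between the projected
    gradients equals the angle between the isolines."""
    los = T - P
    r = np.linalg.norm(los)
    l_hat = los / r
    grad_r = l_hat                                        # range gradient
    grad_fd = 2.0 / (WAVELENGTH * r) * (v - np.dot(v, l_hat) * l_hat)
    project = lambda g: g - np.dot(g, n) * n              # onto wall plane
    g1, g2 = project(grad_r), project(grad_fd)
    cos_omega = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return np.degrees(np.arccos(np.clip(cos_omega, -1.0, 1.0)))

# Squinted geometry on the floor (normal along z): Omega departs from 90 deg.
omega_squint = isoline_angle_on_wall(np.array([0.0, 0.0, 2.0]),
                                     np.array([3.0, 2.0, 0.0]),
                                     np.array([2.0, 0.0, 0.0]),
                                     np.array([0.0, 0.0, 1.0]))
# Broadside geometry: the isolines are orthogonal.
omega_broadside = isoline_angle_on_wall(np.array([0.0, 0.0, 2.0]),
                                        np.array([0.0, 5.0, 0.0]),
                                        np.array([1.0, 0.0, 0.0]),
                                        np.array([0.0, 0.0, 1.0]))
```

The broadside case recovers the conventional side-looking result (Ω = 90°), while squinted geometries yield progressively more skewed pixels.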
The characteristics of the range and Doppler isolines, generated by the intersection of the range sphere and the Doppler cone with the walls, are analyzed herein. In detail, the unambiguous area is defined in the plane of each wall as the geometric locus that simultaneously satisfies the following three criteria:
• the angle Ω of intersection between the iso-range and iso-Doppler contour lines falls within the interval [Ω_min, Ω_max];
• the spatial resolutions computed along the range and Doppler directions are no coarser than required in Table 1;
• the area of an illuminated pixel (i.e., the area bounded by two adjacent iso-range and iso-Doppler lines) is smaller than a threshold A_pixel related to the required cell resolution.
Consequently, the ambiguous area is the complement of the unambiguous one. The aforementioned criteria physically mean that within the ambiguous area, the shape of the resolution cell does not allow the target position on the wall plane to be established with the desired accuracy, owing to the size of the resolution cell and the geometry of both the isolines and the pixel. Furthermore, it is worth noting that a phase value can be assigned to a point observable in both the range and Doppler domain, that is a point that lies in the unambiguous area, thus making interferometry possible.
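The three criteria amount to a simple per-pixel test, sketched below with hypothetical thresholds (the actual limits come from Tables 1 and 6):

```python
# Assumed, illustrative thresholds (the real ones are set by Tables 1 and 6):
OMEGA_MIN, OMEGA_MAX = 30.0, 150.0   # deg, acceptable isoline intersection angles
RES_MAX = 0.30                        # m, coarsest acceptable resolution
A_PIXEL_MAX = 0.09                    # m^2, pixel-area threshold A_pixel

def is_unambiguous(omega_deg, res_range, res_doppler, pixel_area):
    """Return True if a wall pixel satisfies all three unambiguous-area
    criteria: isoline intersection angle, range/Doppler resolutions and
    pixel area. A pixel failing any criterion belongs to the ambiguous
    (complementary) area."""
    angle_ok = OMEGA_MIN <= omega_deg <= OMEGA_MAX
    res_ok = res_range <= RES_MAX and res_doppler <= RES_MAX
    area_ok = pixel_area <= A_PIXEL_MAX
    return angle_ok and res_ok and area_ok
```

Evaluating this predicate over a grid of wall points yields the unambiguous/ambiguous partition discussed above; only points that pass can also be assigned an interferometric phase.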
The imaging performance is estimated considering the parameters listed in Table 6. The azimuth (Doppler) resolution depends on the integration time, i.e., the synthetic aperture duration. The integration time should be defined as the time span for which a given target is illuminated by the main lobe of the transmitting antenna and remains within the main lobe of the receiving one. For the considered system and environment, the integration time is a function of the distance and of the relative geometry between the sensor and the target; hence, it varies from point to point within the control volume. However, since this actual integration time is, in general, not known, the performance analysis is addressed in this section by assuming a constant integration time. This means that the integration time must be interpreted herein as the time span used for SAR focusing, which is assumed constant for all of the imaged targets. The value of the integration time reported in Table 6 is also compliant with the possible platform dynamics and antenna apertures assumed in the simulation. As a consequence, there exists a range of distances at which the theoretical azimuth resolution (Equation (4)) can be achieved. Farther points may suffer from worse resolution owing to the increasing distance between two adjacent iso-range or iso-Doppler curves, which results in a larger imaging pixel. Nonetheless, as shown in the following, the degraded pixel is still compliant with the minimum required resolution and pixel area threshold (Table 6) over sufficiently large areas within the test environment. Quantitatively, a preliminary analysis of the mapping capability is carried out with the platform at a specific location. The antenna is located at position P, moving with velocity v (see Table 7), at the midpoint of the integration time.
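The range dependence described above can be made concrete with a short sketch of the stripmap-like azimuth resolution formula, δ_az = λR/(2L) with synthetic aperture L = vT_int. The numbers below are illustrative assumptions, not the design values of Tables 6 and 7:

```python
# Assumed, illustrative parameters (not the values of Tables 6 and 7):
WAVELENGTH = 3.9e-3   # m, ~77 GHz millimeter-wave carrier
V = 1.0               # m/s, platform speed
T_INT = 0.5           # s, constant integration time used for focusing

def azimuth_resolution(r):
    """Theoretical azimuth resolution (m) at slant range r for the
    synthetic aperture L = V * T_INT: delta_az = lambda * r / (2 * L)."""
    L = V * T_INT     # synthetic aperture length (0.5 m here)
    return WAVELENGTH * r / (2.0 * L)

# Resolution coarsens linearly with distance for a fixed aperture:
res_3m = azimuth_resolution(3.0)
res_10m = azimuth_resolution(10.0)
```

With these assumed values the aperture is 0.5 m, and the centimeter-level resolution available at close range degrades linearly with distance, which is why farther points eventually violate the resolution and pixel-area criteria.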
The selected velocity and integration time yield the theoretical azimuth resolution at a distance of about 3 m (with a synthetic aperture equal to 0.5 m), but acceptable values are obtained even at longer distances, as shown in Figures 10 and 11. In more detail, Figure 10 shows the three terms that contribute to the ambiguous area (shaded) and the shape of the resolution element within the unambiguous area. The unambiguous area amounts to about 47% of the total area, and the walls having observable areas are depicted in Figure 11. It should be noted that points around the projection of the velocity direction onto the walls are not observable, owing to forward-looking ambiguities; the size of these areas depends on the distance (i.e., the farther the wall, the larger the area).
In addition, points inside a circle, whose radius depends on the distance, around the projections of the platform on the walls are not observable, owing to the poor ground range resolution. The front and rear walls are not observable, as the vector normal to their surfaces is parallel to the velocity vector, resulting in parallel range and Doppler isolines. Furthermore, most of the wall ABFE is not observable. It is worth noting that, even though the azimuth resolution satisfies the requirements of Table 6, the effects of both the ground range resolution and the intersection angle Ω due to the distance strongly affect the observation capability.

Table 7. Position and velocity of the antenna halfway through the integration time.

Figure 10. Plane OAED. Ambiguous area (shaded) and contributions: intersection angle (green contour), resolution (blue contour) and pixel size (red contour). For clarity, the distance between two close isolines does not represent the true system resolution.
The presented results suggest that the whole control volume can be mapped by exploiting the platform's agility to move and to point the beam. Note that the walls without observable areas are not depicted in Figure 11.

Layover
Layover is a well-known geometric distortion of SAR images affecting targets that have the same range and velocity relative to the platform in three-dimensional space [40,45]. Layover does not impair the capability to image an area of interest, but it can invert the apparent position of scatterers and introduce geometric distortion, resulting in interpretation problems. With reference to the considered control volume, the zones most critically affected by layover are the edges and corners generated by the intersection of two or three walls, which have at least two layover points [45]. However, this is not a problem specific to the proposed system, since layover affects any radar observation, and SAR data processing algorithms do not typically remove layover areas. In addition, the exploitation of multi-aspect InSAR data has demonstrated good capabilities in recognizing and removing layover areas [47]. Even though these techniques have been tested on different scenarios, i.e., layover generated by small and large buildings in urban areas, they are expected to be useful for the proposed system. Indeed, since multi-aspect interferometric acquisitions are expected to constitute the system operating mode, in order to increase the percentage of the covered area within the control volume (see Section 4.1.1), these proven layover-mitigation techniques can certainly be exploited.

Conclusions
In this paper, the first steps towards the overall feasibility study and design of an innovative radar sensor for autonomous operations of small flying UAS in GPS-denied indoor environments have been taken. The work can be summarized as follows:
• After a state-of-the-art analysis of existing small SAR sensors, FMCW has been identified as a suitable scheme to be exploited in combination with InSAR technology for applications requiring both high-resolution performance and compact, lightweight systems. Millimeter wavelengths have been selected thanks to their atmospheric penetration characteristics, even in environments with smoke and flames, and in order to limit antenna dimensions. The peculiar features of the FMCW scheme have thus been discussed, also giving a comparison with well-established pulsed SAR technology.
• Based on the FMCW features, a system design procedure has been developed, outlining guidelines to trade off the design choices according to the specific mission requirements and operating environments.
• Imaging peculiarities have been discussed in terms of resolution.
The presented results demonstrate that high-resolution, high-quality observation of an assigned control volume is possible, provided that an adequate flight trajectory is selected. The advantage of FMCW over the pulsed architecture in terms of sampling frequency and real-time data handling suggests that the transmission of both raw data and processed images to the ground station could be easily achieved. It is clear that autonomous navigation requires onboard real-time data processing operations, such as interferogram formation, simultaneous localization and mapping procedures and structured data handling and storage, all of which are very demanding on the system processor. In addition, very long missions could produce an extremely large amount of data to be stored onboard. Nevertheless, it can be expected that future enhancements in the miniaturization and customization of both processors and data storage devices will make the aforementioned problems manageable.