Small Multicopter-UAV-Based Radar Imaging: Performance Assessment for a Single Flight Track

Abstract: This paper deals with a feasibility study assessing the reconstruction capabilities of a small Multicopter-Unmanned Aerial Vehicle (M-UAV) based radar system, whose flight positions are determined by using the Carrier-Phase Differential GPS (CDGPS) technique. The paper describes the overall radar imaging system in terms of both hardware devices and data processing strategy for the case of a single flight track. The data processing is cast as the solution of an inverse scattering problem and is able to provide focused images of on-surface targets. In particular, the reconstruction is approached through the adjoint of the functional operator linking the unknown contrast function to the scattered field data, which is computed by taking into account the actual flight positions provided by the CDGPS technique. For this inverse problem, we provide an analysis of the reconstruction capabilities by showing the effect of the radar parameters, the flight altitude and the spatial offset between target and flight path on the resolution limits. A measurement campaign is carried out to demonstrate the imaging capabilities in controlled conditions. Experimental results referred to two surveys performed on the same scene but at two different UAV altitudes verify the consistency of these results with the theoretical resolution analysis.


Introduction
Radar imaging performed by UAV platforms [1], and more in detail by M-UAV platforms [2], is attracting considerable attention in the remote sensing community as a cost-effective solution to cover wide and/or not easily accessible regions with high operative flexibility [3]. Indeed, M-UAVs have vertical lift capability, allow take-off and landing from very small areas without the need for long runways or dedicated launch and recovery systems, and are able to move in all directions. These peculiar features allow their use at any location [3] and under different flight modes, thus introducing new possibilities in radar imaging measurements [4]. For instance, the M-UAV vertical lift capability can be exploited to perform vertical apertures and implement high-resolution vertical tomography, which is useful in structural monitoring [5]. On the other hand, circular flights are suitable to generate holographic and tomographic radar images [6]. Furthermore, M-UAVs allow waypoint flights in autopilot mode and pre-programmed flights with auto-triggering. This introduces the possibility of designing sophisticated flight strategies, such as specific grid acquisitions devoted to investigating the area of interest or repeat-pass tracks aimed at performing interferometric acquisitions [7].
Differently from RTS, the CDGPS technique can achieve centimeter accuracy even in harsh operational scenarios where an unobstructed line of sight (LOS) between the ground-based GPS receivers and the flying platform may not be available [20].
Once an accurate estimate of the 3D flight path has been obtained, a high-resolution radar imaging algorithm is proposed for the case of a single flight track. The radar imaging approach is able to account for the spatial coordinates of the measurement points provided by the CDGPS technique and states the radar imaging as an electromagnetic inverse scattering problem. The inverse problem is linearized by resorting to the Born approximation [22] and the inversion is carried out by means of the adjoint operator [23]. The reconstruction capabilities of the proposed radar imaging system are investigated in terms of resolution limits by a theoretical/numerical analysis, which makes it possible to foresee how the measurement parameters (location of the measurement points and working frequency band) affect the reconstruction performance [24]. Finally, a measurement campaign carried out at an authorized site for amateur UAV testing flights in Acerra, a small town on the outskirts of Naples (Italy), is presented as an experimental assessment of the integrated use of the CDGPS positioning procedure and the adopted imaging approach. The experimental results provide a proof of concept of the imaging performance of the proposed small M-UAV-based radar imaging system.
The paper is organized as follows. Section 2 describes the small M-UAV-based radar imaging system and the strategy adopted to estimate the actual flight path. Section 3 deals with data processing and presents the reconstruction performance analysis. Section 4 reports the experimental validation of the small M-UAV-based radar imaging system. A final discussion on the system performance and the achieved results is reported in Section 5 and conclusions end the paper in Section 6.

Imaging System
The small M-UAV imaging system already presented in [17] is improved with a second ground-based GPS station in order to exploit the CDGPS technique (see Figure 1). The system has the following main components, which are briefly described (see [17] for more details):
• Small M-UAV platform: a DJI F550 hexacopter able to fly at very low speeds (about 1 m/s), thus ensuring a small spatial sampling step and the ability to take off and land from a very small area;
• Radar system: the PulsON P440 radar is a light and compact time-domain device transmitting ultra-wideband pulses (about 1.7 GHz bandwidth centered at the carrier frequency of 3.95 GHz) with a low power consumption [25]. The radar system is rigidly mounted on the UAV body (strapdown installation) and no gimbal is adopted. The limited attitude dynamics experienced during flights (very low ground speed and wind speed conditions resulting in small and almost constant roll/pitch angles), the relatively large radar antenna lobes and the limited baseline between the radar antenna and the drone center of mass are such that attitude/pointing knowledge does not play a significant role;
• GPS receivers/antennas: two single-frequency Ublox LEA-6T devices are chosen, one mounted onboard the UAV and the other one used as a ground-based station. Both are connected to an active patch antenna. The ground antenna is placed directly on the ground (Figure 1b) in order to get from CDGPS a direct estimate of the height above ground for the antenna mounted on the drone;
• CPU controller: a Linux-based Odroid XU4 is devoted to managing the data acquisition for both the radar system and the onboard GPS receiver, while assuring their time synchronization.
The possibility to estimate the trajectory of the UAV platform depends on the quality of the onboard navigation sensors. By using a standalone onboard GPS receiver, the achievable absolute positioning accuracy is given in a global reference frame, such as WGS84 (World Geodetic System 1984), and is defined according to the specifications provided by the US Department of Defense [26]. Absolute GPS localization errors are estimated as the product of the User Equivalent Range Error (UERE), which is the effective accuracy of the localization errors along the pseudo-range direction, and the Horizontal Dilution of Precision (HDOP) or Vertical Dilution of Precision (VDOP). These latter are dimensionless factors depending on the geometry of the tracked satellite constellation. When reasonably short flights are considered, several error sources (i.e., broadcast clock, broadcast ephemeris, group delay, ionospheric delay and tropospheric delay) are strongly correlated both in space and time [20] and introduce a positioning error which is an almost constant but unknown bias. In addition, the use of a proper processing strategy, such as carrier-smoothing [20], allows a reduction of the measurement noise [28], thus improving the standalone GPS performance.
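The UERE × DOP error budget recalled above can be made concrete with a tiny sketch. All numbers below are hypothetical placeholders chosen for illustration, not the specification of the adopted receiver:

```python
# Illustrative sketch of the standalone-GPS error budget: the 1-sigma
# horizontal/vertical position errors are the UERE scaled by HDOP/VDOP.
# The input values are hypothetical, not from the paper or the LEA-6T datasheet.

def gps_position_error(uere_m, hdop, vdop):
    """Return (horizontal, vertical) 1-sigma position errors in meters."""
    return uere_m * hdop, uere_m * vdop

horiz, vert = gps_position_error(uere_m=5.0, hdop=1.2, vdop=1.8)
print(f"horizontal: {horiz:.1f} m, vertical: {vert:.1f} m")
```

Note how the vertical error is systematically larger than the horizontal one for typical constellations (VDOP > HDOP), which is why the height bias is the critical quantity for imaging, as discussed next.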
As shown in [17], in the frame of radar imaging, it is important to have accurate knowledge of the relative positions of the UAV radar system with respect to the investigated spatial region. Therefore, the constant and unknown bias affecting the horizontal positions provided by a standalone GPS does not play any role in focusing the targets (which, however, will not be reliably localized in the WGS84 reference system), whereas the bias affecting the vertical position (UAV height) may compromise satisfactory radar imaging performance.
In this paper, we exploit a strategy based on the use of CDGPS, which is a method for improving the positioning or timing performance of GPS by exploiting at least one motionless GPS receiver working as a reference station. Here, the CDGPS method is implemented by using two GPS receivers (one mounted onboard the UAV and the other one used as a reference ground station), which store the data into a local hard drive.
Each receiver collects single-frequency observables, i.e., a pseudo-range and a carrier-phase measurement for each tracked GPS satellite. It is well known that carrier-phase measurements show significantly reduced measurement noise (on the order of 1/100 of the signal wavelength, i.e., mm scale) with respect to pseudo-range ones, but ambiguities appear, so carrier-phase observables are biased measurements [28,29]. If one is able to resolve the ambiguity, very high accuracy positioning is enabled. This can be achieved by differential techniques, i.e., CDGPS, where differences between the measurements collected by two relatively close receivers are computed. Such differential measures are not affected by errors common to the two receivers, due to ionosphere, troposphere and clock errors, and suitable processing is implemented to filter out pseudo-range noise, thus deriving an estimate of the carrier-phase ambiguities. If a connection through a radio link is established between the UAV and the ground station, CDGPS processing can be performed in real time, which is referred to as "Real-Time Kinematic" (RTK). Offline CDGPS processing is, instead, used in this work, which is typically referred to as "Post-Processing Kinematic" (PPK).
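The error-cancellation idea behind differencing can be illustrated numerically. The following toy sketch (all ranges, biases and noise levels are invented for illustration) shows how the between-receiver single difference removes the errors common to the two receivers, leaving only the geometry and a small differential noise:

```python
import random

# Toy sketch of between-receiver single differencing. Assumption: both
# receivers track the same satellite, so the satellite clock and
# atmospheric delays are common to the two pseudo-ranges.
random.seed(0)

true_range_rover = 20_000_123.0   # hypothetical geometric range rover->satellite [m]
true_range_base  = 20_000_456.0   # hypothetical geometric range base->satellite [m]
common_bias = 87.3                # satellite clock + atmospheric delay [m], common term

# Pseudo-ranges: geometry + common errors + independent receiver noise
rho_rover = true_range_rover + common_bias + random.gauss(0, 1.0)
rho_base  = true_range_base  + common_bias + random.gauss(0, 1.0)

# The single difference cancels the common bias; only the range difference
# (-333.0 m here) and the differential receiver noise remain.
single_diff = rho_rover - rho_base
print(single_diff)
```

The same mechanism applied to carrier-phase observables is what enables the centimeter-level baselines discussed below, once the remaining integer ambiguities are estimated.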
Depending on the working environment, platform dynamics and receiver quality, two different types of CDGPS solutions can be obtained, i.e., fixed or float solutions [30]. The former is the most accurate one, being able to guarantee up to sub-cm accuracy in the determination of the relative position between the receivers, exploiting the property of carrier-phase ambiguities to become, under suitable measurement combinations and for properly designed receivers, integer numbers. The fixed solution can be robustly generated by processing multi-frequency GPS data and can be obtained, although with reduced time availability, by using single-frequency receivers, which typically rely on the float solution, i.e., they consider carrier-phase ambiguities as real numbers. This is the case of the presented system architecture. Hence, for most of the time epochs, a realistic estimate of the carrier-phase ambiguities can be robustly generated by the adopted single-frequency receivers and the achieved accuracy thus degrades to the order of 10 cm. The error is reduced to a very few cm when fixed solutions are available.
Herein, CDGPS processing is carried out by using the open-source software RTKlib [21]. In particular, the post-processing analysis tool RTKPOST is used, which inputs RINEX observation data and navigation message files (from GPS, GLONASS, Galileo, QZSS, BeiDou and SBAS), and can compute the positioning solutions by various processing modes (such as Single-Point, DGPS/DGNSS, Kinematic, Static, PPP-Kinematic and PPP-Static). In this regard, the "Kinematic" positioning mode is chosen, which corresponds to PPK, with integer ambiguity resolution set to "Fix and hold". RTKPOST outputs the E/N/U coordinates of the flying receiver with respect to the base-station, together with a flag relevant to the solution type (float/fixed). This flag, and the processing residuals, can be used as an estimate of the achieved positioning accuracy.
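The solution files produced by RTKPOST can be screened by this quality flag before imaging. A minimal, hedged Python sketch follows: the column layout (date, time, e, n, u, Q, ns, standard deviations) is assumed from RTKlib's E/N/U output format, and the two sample records are invented for illustration:

```python
# Hedged sketch: filtering an RTKPOST-style solution by its quality flag
# (Q = 1: fixed, Q = 2: float). The sample records below are made up.
SAMPLE_POS = """\
% (e/n/u-baseline, Q=1:fix, 2:float)
2020/02/20 10:30:00.000   1.2345  -0.5678   5.0123   1   9   0.003 0.004 0.008
2020/02/20 10:30:00.200   1.2400  -0.5650   5.0200   2   9   0.080 0.090 0.150
"""

def parse_enu(text):
    """Yield (e, n, u, q) tuples from an RTKPOST-style E/N/U solution file."""
    for line in text.splitlines():
        if line.startswith('%') or not line.strip():
            continue  # skip header/comment lines
        fields = line.split()
        e, n, u = map(float, fields[2:5])  # baseline components [m]
        q = int(fields[5])                 # solution-type flag
        yield e, n, u, q

# Keep only the epochs with a fixed (most accurate) solution
fixed = [rec for rec in parse_enu(SAMPLE_POS) if rec[3] == 1]
print(len(fixed))  # → 1
```

In practice, float epochs need not be discarded; the flag simply informs the expected accuracy (decimeter for float, centimeter for fixed) attached to each measurement position used in the imaging.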

Radar Signal Processing
This section describes the signal processing strategy adopted to process the data collected by the radar system. The various stages of the data processing are summarized in the block diagram of Figure 2. According to this scheme, the input information is the raw radargram (B-scan) collected by the radar, which represents the received radar signal collected at each measurement position (along the flight path) versus the fast-time (i.e., the wave travel time). The final output of the reconstruction procedure is a focused and easily interpretable image depicting the scene under test.

Time-Domain Pre-Processing
As a first stage of the overall reconstruction procedure, a time-domain pre-processing of the radargram is performed by applying the following operations [31][32][33]: zero-timing, background removal and time-gating.
The zero-timing consists of setting the starting instant of the fast-time axis in such a way that the range of the signal reflected by the air-soil interface at the first measurement point of the flight trajectory is coincident with the UAV flight height estimated by the CDGPS processing.
The background removal is a filtering procedure that allows mitigating the effects of the strong coupling between the transmitting and receiving radar antennas, which is a spatially constant signal. This filter replaces each single radar trace (A-scan) of the radargram with the difference between it and the average of all the traces of the radargram collected along the flight trajectory.
The time-gating procedure selects the interval (along the fast-time) of the radargram, where signals scattered from targets of interest occur. This allows a reduction of environmental clutter and noise effects. Herein, the UAV altitude is exploited to define a suitable time window around the time where reflection of the air-soil interface occurs.
After the time-domain pre-processing stage, each trace in the radargram is transformed into the frequency domain by using the Fast Fourier Transform (FFT) algorithm. Then, the frequency-domain data are processed according to the radar imaging approach detailed in the next subsection.
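The background removal and frequency-domain conversion steps can be sketched in a few lines of NumPy on a synthetic radargram; the array sizes and the coupling model below are arbitrary illustration values, not the actual acquisition parameters:

```python
import numpy as np

# Minimal sketch of the pre-processing chain on a synthetic radargram
# (rows: A-scans along the track, columns: fast-time samples).
rng = np.random.default_rng(0)
n_traces, n_samples = 100, 256
radargram = rng.normal(size=(n_traces, n_samples))  # toy target echoes + noise
# Antenna coupling: a strong signal identical in every trace (spatially constant)
radargram += 5.0 * np.ones((n_traces, 1)) * np.hanning(n_samples)

# Background removal: subtract the average trace from each A-scan, which
# suppresses the spatially constant antenna-coupling contribution.
filtered = radargram - radargram.mean(axis=0, keepdims=True)

# Frequency-domain conversion: FFT of each trace along the fast-time axis.
spectra = np.fft.rfft(filtered, axis=1)

print(np.abs(filtered.mean(axis=0)).max())  # ~0: constant component removed
```

Time-gating would be one extra slicing step (keeping only the fast-time samples in a window around the expected interface reflection) before the FFT; it is omitted here for brevity.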

Radar Imaging Approach
Let us refer to the 3D scenario sketched in Figure 3. The ultra-wideband radar transceiver onboard the UAV illuminates the scene with transmitting and receiving antennas pointed at nadir (down-looking mode), i.e., at a zero incidence angle with respect to the normal to the air-soil interface. The radar can be considered operating in monostatic mode, since transmitting and receiving antennas have a negligible offset in terms of the probing wavelength. At each measurement point along the flight trajectory Γ, the transceiver records the signals scattered from the targets over the angular frequency range Ω = [ω_min, ω_max]. Therefore, multimonostatic and multifrequency data are collected.

The trajectory Γ has an arbitrary shape in space and each measurement point is described by the position vector r_m = x_m x̂ + y_m ŷ + z_m ẑ. The targets are supposed to be located in the planar investigation domain D, which is coincident with the air-soil interface assumed at z = 0. The time dependence e^(jωt) is assumed and dropped. The radar signal model is based on the following assumptions: (i) the antennas have a broad radiation pattern; (ii) the targets are in the far-field region with respect to the radar antennas; (iii) a linear model of the scattering phenomenon is assumed, hence the mutual interactions between the targets are neglected [22]. Accordingly, the scattered signal E_s at each measurement point r_m is expressed by the following linear integral equation [16,34]:

E_s(r_m, ω) = L[σ] = I(ω) ∫_D σ(r) (e^(−j2k_0|r_m − r|)/|r_m − r|^2) dr, (1)

where I(ω) is the spectrum of the transmitted pulse, σ(r) is the unknown reflectivity function at a point r = x x̂ + y ŷ in D, k_0 = ω/c_0 is the propagation constant in free-space (c_0 ≈ 3·10^8 m/s is the speed of light) and |r_m − r| is the distance between the measurement point and the generic point of the investigation domain D.
It is worth noting that the spectrum I(ω) may be assumed unitary within the system bandwidth and therefore, for notation simplicity, it will be omitted. The linear operator L maps the space of the unknown object function (reflectivity of the scene) into the space of data (measured scattered field). The reflectivity function σ(r) accounts for the difference between the electromagnetic properties of the targets (dielectric permittivity, electrical conductivity) and the free space ones. Accordingly, the targets are searched for as anomalies with respect to the free-space scenario and appear in the "focused image" as the regions where the modulus of the reflectivity function is different from zero.
The radar imaging is faced as the inversion of the linear integral Equation (1) and this is performed by computing the adjoint of the forward scattering operator L [23]:

σ̃(r) = L†[E_s] = ∫_Γ ∫_Ω E_s(r_m, ω) (e^(j2k_0|r_m − r|)/|r_m − r|^2) dω dr_m, (2)

where L† is the adjoint operator of L and σ̃ is the retrieved reflectivity.
The adjoint inversion scheme given by Equation (2) is also referred to as frequency-domain back-projection [35], since the measured signal is back-projected to the point where it is generated and the image is formed as the coherent summation of these contributions.
The numerical implementation of the inversion is performed by discretizing Equation (2) by applying the Method of Moments [36]. The scattered field is discretized into M × N data, where M is the number of measurement points (x_m, y_m, z_m), m = 1, 2, . . . , M and N is the number of angular frequencies ω_n, n = 1, 2, . . . , N sampling the work frequency bandwidth Ω. The domain D is discretized by P × Q pixels (x_p, y_q), where p = 1, 2, . . . , P and q = 1, 2, . . . , Q (see Figure 4). After removing unessential constants, the inversion scheme in Equation (2) is rewritten in discrete form as:

σ̃(x_p, y_q) = Σ_{m=1}^{M} Σ_{n=1}^{N} E_s(r_m, ω_n) e^(j2k_n|r_m − r_pq|), (3)

where r_pq = x_p x̂ + y_q ŷ and k_n = ω_n/c_0. According to the assumption of antennas having a broad radiation pattern, Equation (3) sums coherently the multi-frequency data collected along the whole trajectory Γ for each pixel in D. Therefore, the radar image is obtained by computing Equation (3) for all pixels in D and plotting the magnitude of the retrieved reflectivity values normalized with respect to their maximum value.
In this process, the precise measurement positions of the UAV obtained with the CDGPS processing are considered. The exploitation of the positioning information allows obtaining accurate images, as already pointed out in the airborne radar imaging context [24,37].
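The coherent summation in Equation (3) can be sketched in a few lines of NumPy. The sketch below uses toy values (an ideal rectilinear track, a band resembling the radar's 1.7 GHz bandwidth, a 1D image cut) rather than the campaign settings, and generates synthetic point-target data with the Born model of Equation (1):

```python
import numpy as np

# Toy sketch of the discretized adjoint/backprojection: coherent summation
# of multimonostatic, multifrequency data for each image pixel.
c0 = 3e8
freqs = np.linspace(3.1e9, 4.8e9, 64)   # ~1.7 GHz band around 3.95 GHz
k = 2 * np.pi * freqs / c0              # free-space wavenumbers

# Measurement points along an ideal rectilinear track at h = 5 m
xm = np.linspace(-3, 3, 121)
meas = np.stack([xm, np.zeros_like(xm), np.full_like(xm, 5.0)], axis=1)

# Synthetic data for a point target on the ground at (0.5, 0, 0)
target = np.array([0.5, 0.0, 0.0])
r_t = np.linalg.norm(meas - target, axis=1)
data = np.exp(-2j * k[None, :] * r_t[:, None]) / r_t[:, None] ** 2  # M x N

# 1D along-track image cut on the ground plane (for brevity)
xs = np.linspace(-1, 1, 41)
image = np.zeros(xs.size, dtype=complex)
for p, xp in enumerate(xs):
    r = np.linalg.norm(meas - np.array([xp, 0.0, 0.0]), axis=1)
    # coherent summation with the conjugated phase kernel
    image[p] = np.sum(data * np.exp(2j * k[None, :] * r[:, None]))

peak = xs[np.argmax(np.abs(image))]
print(peak)  # → focuses near the target x-position, ~0.5
```

Extending the loop over a 2D pixel grid (and feeding the actual CDGPS-estimated positions into `meas`) gives the full image-formation step described above.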

Resolution Analysis
This subsection aims at investigating the spatial resolution limits of the proposed M-UAV radar imaging system. The analysis covers the effect of the measurement parameters on the resolution limits in the image plane D. To achieve this goal, we compute the point spread function (PSF) of the system, i.e., the reconstruction of a point-like target [23]. For a point-like target located at r_t = x_t x̂ + y_t ŷ and having unitary reflectivity, the related scattered field is expressed, according to Equation (1), as:

E_s(r_m, ω) = e^(−j2k_0|r_m − r_t|)/|r_m − r_t|^2. (4)

After plugging Equation (4) into the adjoint inversion formula Equation (2), we get the following expression for the PSF:

PSF(r, r_t) = ∫_Γ ∫_Ω (e^(−j2k_0|r_m − r_t|)/|r_m − r_t|^2) (e^(j2k_0|r_m − r|)/|r_m − r|^2) dω dr_m, (5)

allowing the evaluation of the resolution as a function of the system parameters and the flight trajectory. Hence, Equation (5) is useful, on the one hand, for planning the measurement campaign according to the requirements of the applicative context of interest and, on the other hand, for investigating how deviations with respect to the nominal flight path affect the achievable imaging performance. Before proceeding further, it is worth recalling the resolution formulas holding for an ideal rectilinear flight path. These formulas provide useful insight into radar imaging also under non-ideal motion and allow foreseeing, at least in a qualitative way, the effect of the main measurement parameters.
Let us consider the geometry sketched in Figure 5, where the UAV moves at a fixed height h following a rectilinear trajectory directed along the x-axis. The along-track resolution ∆x is determined by the central frequency f_c of the radar (with corresponding wavelength λ_c) and the maximum view angle θ fixed by the half-length of the synthetic aperture [32]:

∆x = λ_c/(4 sin θ), (6)

that in the small angle approximation rewrites as [38]:

∆x ≈ λ_c/(4θ). (7)

The range resolution is related to the radar system bandwidth B by the classical formula [39]:

∆r = c_0/(2B). (8)

The across-track resolution ∆y is evaluated from the projection of the 3D target reconstruction over the image plane (see Figure 5b). If r denotes the target range with respect to the antenna, then the 3D target reconstruction is the cylindrical shell having its axis coincident with the measurement line and its inner and outer radius equal to r − ∆r and r + ∆r, respectively. Note that only a part of the shell is shown in Figure 5b, for the sake of clarity. The across-track resolution ∆y is calculated as the intersection of the cylindrical shell with the image plane z = 0 and is given by

∆y = √((r + ∆r)² − h²) − √((r − ∆r)² − h²), with r = √(h² + d²), (9)

where d is the across-track distance between the flight trajectory and the target (see Figure 5a,b). According to Equation (9), the across-track resolution gets worse when the UAV flies at a higher altitude h and, when the target is illuminated at nadir (d = 0, so that r = h and the inner circle of radius h − ∆r does not reach the image plane), it turns out that:

∆y|_{d=0} = √((h + ∆r)² − h²) = √(2h∆r + ∆r²), (10)

i.e., the across-track resolution is finite and larger than the range resolution ∆r.
Equation (9) also reveals that, for a fixed value of h and ∆r, ∆y improves as long as the target moves away from the measurement line. Most notably, the asymptotic value of the across-track resolution is found as d approaches infinity:

∆y_∞ = lim_{d→∞} ∆y = 2∆r. (11)

Based on the results in Equations (10) and (11), the following inequality holds:

2∆r ≤ ∆y ≤ √(2h∆r + ∆r²). (12)

Note that if the system bandwidth B goes to zero, the range resolution ∆r becomes infinite and it is no longer possible to resolve targets along the direction perpendicular to the track, as already noticed in [24]. Figure 6 depicts the across-track resolution ∆y as a function of the target offset d and the flight altitude h. The contour plot has been produced by applying Equation (9) and considering the bandwidth of the radar system (i.e., B = 1.7 GHz) introduced in Section 2.
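The shell-intersection geometry of the across-track resolution can be checked numerically. A short sketch follows, assuming the construction described above (intersection of the range annulus [r − ∆r, r + ∆r] with the ground plane) and the B = 1.7 GHz system bandwidth; the inner term is clamped at zero to cover the near-nadir regime where the inner circle does not reach the ground:

```python
import math

# Numerical check of the across-track resolution geometry for a track at
# height h and a target at across-track offset d.
c0 = 3e8

def range_resolution(bandwidth_hz):
    """Classical range resolution c0 / (2B)."""
    return c0 / (2 * bandwidth_hz)

def across_track_resolution(h, d, dr):
    """Width of the range-shell intersection with the ground plane z = 0."""
    r = math.hypot(h, d)  # slant range to the target
    outer = math.sqrt(max((r + dr) ** 2 - h ** 2, 0.0))
    inner = math.sqrt(max((r - dr) ** 2 - h ** 2, 0.0))  # clamped near nadir
    return outer - inner

dr = range_resolution(1.7e9)                          # ~0.088 m
near = across_track_resolution(h=5.0, d=0.5, dr=dr)   # near nadir: coarse
far = across_track_resolution(h=5.0, d=50.0, dr=dr)   # large offset: -> 2*dr
print(round(dr, 3), round(near, 3), round(far, 3))
```

The run confirms the trend stated in the text: the across-track resolution is coarse near nadir and tightens toward its asymptotic value of twice the range resolution as the offset d grows.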
As previously pointed out, the resolution degrades when increasing the flight altitude h for a fixed value of d or when reducing d for a fixed value of h. Figure 7 provides an example of the PSF computed according to Equation (5) by considering an investigation domain D = [−3, 3] m × [−3, 3] m, which is discretized by square image pixels with size 0.01 m, and two different values of the target offset (i.e., d = 0 m and d = 2 m). The scattered field data are sampled evenly with 0.01 m step along the trajectory Γ at a flight altitude h = 5 m. Figure 7a,b reports the PSF reconstruction for the case of a rectilinear trajectory covering the interval [−3, 3] m along x. Figure 7a,b shows that a focused spot along and across the track is obtained in correspondence of the target position and the along-track resolution does not change when the target is located at the radar nadir (d = 0 m) or at the point (0, 2) m. Conversely, the across-track resolution improves when the target is far from the nadir, as predicted by Equation (9). However, in this latter case, a false target appears at the specular position with respect to the flight trajectory, i.e., at (0, −2) m. This phenomenon is the so-called left-right ambiguity [40] and is due to the radar's inability to discriminate left (y > 0) and right (y < 0) targets located at the same distance with respect to the measurement line.
In addition, Figure 7c,f shows that, as expected, even with a slight trajectory deviation with respect to the rectilinear path, the PSF is no longer symmetric with respect to the trajectory. Most notably, when the target is placed at (0, 2) m (see Figure 7d,f), the false target due to the left-right ambiguity appears distorted and with a lower intensity with respect to Figure 7b. Indeed, when the trajectory is not rigorously rectilinear, the left and right targets are in some way discriminated by the radar because their echoes have different propagation delays at each measurement point. However, the beneficial effect provided by the trajectory curvature in mitigating the false target becomes less relevant when the flight altitude h increases, since left and right targets produce scattering signals with "more similar" propagation delays. This statement is corroborated by the images in Figure 8a,b, which are analogous to Figure 7c,d but for a flight altitude h = 10 m. As expected, by increasing the flight altitude, the across-track resolution degrades regardless of the position of the target and the left-right ambiguity problem turns out to be more evident. The amplitude of the false target seen in Figure 8b is, indeed, stronger compared to the one observed in Figure 7d.

The along- and across-track resolution values referred to the considered numerical examples are summarized in Table 1.

Table 1. Along- and across-track resolution values.

Experimental Results
The M-UAV radar imaging system has been experimentally tested at an authorized site for amateur UAV testing flights in Acerra, Naples, Italy. The experiment aimed at testing the ability of the CDGPS technique to estimate the UAV position with the accuracy required for target imaging and, thus, to verify the capability of the overall radar imaging system. The experiment was carried out during a sunny day with a weak wind state. Two metallic trihedral corner reflectors, having a size D = 0.40 m × 0.40 m × 0.57 m and referred to as Target 1 and Target 2, were used as on-ground targets placed at a relative distance of 10 m from each other along the flight direction; one of them (i.e., Target 2) was covered with a cardboard box (see Figure 9).
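As a side note on target visibility, the peak radar cross section of a triangular trihedral reflector with orthogonal edges of length a can be estimated with the textbook relation sigma = 4*pi*a^4 / (3*lambda^2); applied to a = 0.40 m at 4 GHz, it predicts a strong echo. This is a generic far-field estimate, not a value reported in the paper.

```python
import math

C0 = 299_792_458.0  # speed of light (m/s)

def trihedral_peak_rcs(a, freq):
    """Peak RCS (m^2) of a triangular trihedral corner reflector, edge a (m)."""
    lam = C0 / freq
    return 4.0 * math.pi * a**4 / (3.0 * lam**2)

sigma = trihedral_peak_rcs(0.40, 4.0e9)
print(f"peak RCS: {sigma:.2f} m^2 ({10 * math.log10(sigma):.1f} dBsm)")
```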
The UAV was manually piloted and two surveys at different altitudes, in the following referred to as Track 1 and Track 2, were carried out. Both tracks were performed on the same scenario by positioning the UAV nearly at the same starting point (x, y). Track 1 had a duration of 17.5 s and covered a path 31.4 m long at an average altitude h = 4 m; along this track, data were gathered at 251 unevenly spaced measurement points. Track 2 had a duration of 21.7 s and covered a 33 m long path at an average altitude h = 10 m; along this track, data were gathered at 331 unevenly spaced measurement points. The radar parameters set for the data acquisition are summarized in Table 2. Note that we considered flight altitude values in the range 5-10 m to operate with a suitable signal-to-noise ratio. Indeed, a major constraint in our system is the limited transmit power of the radar, whose maximum level is declared to be approximately −13 dBm by the manufacturer.
The raw radargrams, i.e., the data collected during the two surveys, are depicted in Figure 10a,b, while the filtered radargrams (after the time-domain pre-processing stage) are given in Figure 11a,b. It is worth pointing out that the horizontal axis shows the slow-time, i.e., the duration of the flight in seconds, while the vertical axis is the fast-time, i.e., the observation time window during which the data are gathered for each radar position, once the time-zero correction has been performed. The fast-time is expressed in nanoseconds. The white dotted line represents the air/soil interface, achieved by converting the variable UAV flight altitude h estimated by the CDGPS into an equivalent travel time t_h by using the formula t_h = 2h/c_0.

From Figures 10 and 11, one can observe that the CDGPS provides an accurate estimation of the flight altitudes and the targets' responses are visible as hyperbolas whose apex occurs at the fast-time where the nadir surface reflection is observed. Moreover, Figure 10a,b shows that clutter signals, due to metallic awnings located on the entry side of the flight site, appear at fast-times greater than 70 ns in Figure 10a and 90 ns in Figure 10b.
These undesired signals, as well as the mutual coupling between transmitting and receiving antennas, are removed by a time-domain pre-processing (see Figure 11a,b). The filtered radargrams have been obtained by performing the background removal and setting as fast-time gating window the portion occurring 6 ns before and 24 ns after the air-soil interface response seen at nadir. The filtered data have been transformed into the frequency domain by sampling the radar bandwidth [3.1, 4.8] GHz into 341 evenly spaced frequency samples and have been processed according to the inversion procedure described in Section 3.1.
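As an illustration, the background removal and fast-time gating described above can be sketched as follows (array sizes, sampling step, and interface time are illustrative values, not those of the actual surveys):

```python
import numpy as np

def preprocess(radargram, dt, t_iface, t_before=6e-9, t_after=24e-9):
    """Background removal plus fast-time gating of a radargram.

    radargram : (n_fast, n_slow) array, fast-time samples x measurement points
    dt        : fast-time sampling step (s)
    t_iface   : air/soil interface two-way time (s) seen at nadir
    """
    # Background removal: subtract the mean trace, which suppresses antenna
    # coupling and other signals that are constant along the track
    filtered = radargram - radargram.mean(axis=1, keepdims=True)

    # Fast-time gating: keep only the window around the interface response
    t = np.arange(radargram.shape[0]) * dt
    gate = (t >= t_iface - t_before) & (t <= t_iface + t_after)
    filtered[~gate, :] = 0.0
    return filtered

# Toy example: 512 fast-time samples at 0.1 ns, 251 traces
rng = np.random.default_rng(0)
data = rng.standard_normal((512, 251))
out = preprocess(data, dt=0.1e-9, t_iface=27e-9)
```

The mean-trace subtraction removes signals that are stationary along the track, such as the antenna coupling, while the gate discards echoes occurring well before and after the interface response.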
Before showing the focused radar images, we provide quantitative data about the positioning accuracy of the UAV. Specifically, Table 3 summarizes the maximum positioning errors achieved with the CDGPS technique along Tracks 1 and 2. These errors are the standard deviations provided by the RTKlib tool, which measure the positioning errors along the three coordinate axes based on a priori error models and error parameters [21]. The maximum errors in the horizontal plane are always smaller than the error along z, which is 9.4 cm in the worst case (Track 2).
The focused images of the surveyed scenario are depicted in Figure 12a,b for Track 1 and Track 2, respectively. These images have been obtained by considering a square planar investigation domain D at z = 0 m, whose origin corresponds to the starting point of the UAV tracks in the x-y plane and whose side is 18 m.
The domain D has been evenly discretized by pixels having side 0.01 m.
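The focusing step itself, i.e., the adjoint of the operator linking the contrast function to the scattered field, essentially amounts to a coherent back-projection of the multi-frequency data onto the pixels of D using the CDGPS-estimated antenna positions. A minimal free-space sketch follows; the grid, frequencies, and the absence of any amplitude weighting are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light (m/s)

def adjoint_image(data, freqs, positions, pixels):
    """Coherent back-projection (adjoint operator) of multi-frequency data.

    data      : (n_freq, n_pos) complex scattered-field samples
    freqs     : (n_freq,) frequencies in Hz
    positions : (n_pos, 3) antenna positions (e.g., from CDGPS)
    pixels    : (n_pix, 3) points of the image domain D
    """
    k = 2.0 * np.pi * freqs / C0                  # free-space wavenumbers
    image = np.zeros(pixels.shape[0], dtype=complex)
    for p, pos in enumerate(positions):
        r = np.linalg.norm(pixels - pos, axis=1)  # pixel-to-antenna distances
        # Conjugate-phase kernel compensating the two-way propagation delay
        image += np.exp(2j * np.outer(k, r)).T @ data[:, p]
    return np.abs(image)

# Toy scene: ideal point target at (0, 1, 0) m, straight track at h = 5 m
freqs = np.linspace(3.1e9, 4.8e9, 11)
track = np.column_stack([np.linspace(-5, 5, 21),
                         np.zeros(21), np.full(21, 5.0)])
target = np.array([0.0, 1.0, 0.0])
k = 2.0 * np.pi * freqs / C0
rt = np.linalg.norm(track - target, axis=1)
data = np.exp(-2j * np.outer(k, rt))              # point-target response
pixels = np.column_stack([np.zeros(31),
                          np.linspace(0.0, 3.0, 31), np.zeros(31)])
img = adjoint_image(data, freqs, track, pixels)
```

In the toy check, the image maximum falls at the pixel closest to the true target position, since all phase terms add coherently there.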
In Figure 12a,b, the dotted white line represents the M-UAV trajectory as estimated by the CDGPS and projected onto the investigated domain. According to the analysis presented in Section 3.2, Figure 12a shows that, when the targets are illuminated at nadir, i.e., when the distance d approaches zero, single spots appear and no ambiguities occur. Conversely, false targets due to the left-right ambiguity problem appear when the UAV flight path does not cover the targets (see Figure 12b). However, coherently with the PSFs shown in Figure 8b, the false targets appear slightly distorted and with lower intensity compared to the real target reconstructions, owing to the trajectory curvature. As a result, it is possible to discriminate the actual targets from the ambiguous ones. Table 4 reports the experimental along- and across-track resolution values as estimated from Figure 12a,b for both targets. For comparison, the table reports the theoretical resolution values referred to a rectilinear flight path at the average altitudes h = 4 m and h = 10 m. The experimental and theoretical resolution values are quite consistent. Notably, the experimental along-track resolution decreases slightly when the flight altitude increases and the target offset d is not null, while the across-track one improves when d increases. It is worth pointing out that the corner reflectors emphasize the radar echoes but they are not actually ideal point targets. Consequently, some discrepancies in the resolution values are expected, and this outcome is confirmed by the comparison between the experimental and theoretical data reported in Table 4.

Discussion
This work deals with a feasibility study on small UAV-based radar imaging when the scene under investigation is probed with a single measurement line and the imaging domain is a plane at a fixed altitude. The considered acquisition geometry is the simplest one and its achievable imaging capabilities have been studied in Section 4. Regarding the along-track resolution, this parameter is influenced by the maximum illumination angle, which in turn depends on the flight altitude and the length of the synthetic aperture. The flight height and the horizontal displacement between the target and the UAV, instead, influence the across-track resolution. Targets far from the radar nadir are generally better resolved across-track than those seen at nadir; however, an inherent limitation in the imaging arises due to the left-right ambiguity problem. This phenomenon is partially mitigated in the presence of horizontal deviations of the UAV with respect to the ideal rectilinear trajectory. Additionally, flying at a higher altitude can be convenient to enlarge the area of coverage, but such a choice generally produces a worsening of the spatial resolution both along- and across-track.
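For illustration, the trends discussed above can be made quantitative with standard, textbook SAR-type relations: the along-track limit lambda_c/(4 sin theta_max), with theta_max = atan(L/2h) set by the aperture length L and the altitude h, and the ground-range (across-track) limit c/(2B sin theta), with theta = atan(d/h) the view angle of a target offset by d from nadir. These generic formulas reproduce the qualitative behaviour described in this section but are not claimed to coincide with the paper's Equation (9).

```python
import math

C0 = 299_792_458.0  # speed of light (m/s)

def along_track_res(lam_c, L, h):
    """Along-track resolution limit for a synthetic aperture of length L."""
    theta_max = math.atan2(L / 2.0, h)   # maximum illumination angle
    return lam_c / (4.0 * math.sin(theta_max))

def across_track_res(B, d, h):
    """Ground-range (across-track) resolution for a target offset d from nadir."""
    theta = math.atan2(d, h)             # view angle; degenerates at nadir
    if theta == 0.0:
        return float("inf")              # no across-track resolution at nadir
    return C0 / (2.0 * B * math.sin(theta))

lam_c = C0 / 3.95e9                      # centre of the 3.1-4.8 GHz band
B = 1.7e9                                # bandwidth
for h in (4.0, 10.0):
    print(h, along_track_res(lam_c, 30.0, h), across_track_res(B, 2.0, h))
```

Both limits degrade as h grows, while the across-track one improves as the offset d increases, in agreement with the discussion above.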
A further point worth discussing concerns the inability of the present imaging configuration to provide unambiguous and high-resolution 3D target reconstructions. To clarify this point, it is useful to refer to Figure 13, which shows how the reconstruction of the target changes when the image plane is not the correct one. In particular, Figure 13 shows how a point target located on the plane D0 at z = 0 is imaged on three planes D0, D1, D2 placed at different altitudes, i.e., z = 0, z = z1, and z = z2.
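The horizontal mislocalization on a wrong image plane can be derived from the range geometry alone. The following short derivation is a sketch under stated assumptions (track at altitude h, point target on the z = 0 plane at horizontal distance d from the track, cylindrical symmetry of the single-track reconstruction); it is consistent with the qualitative behaviour described in the text but is not quoted from the paper.

```latex
% Range from a track point at altitude h to a target on the z = 0 plane
% at horizontal distance d from the track:
R = \sqrt{h^{2} + d^{2}}
% The single-track reconstruction is cylindrically symmetric around the
% track, so on an image plane at height z the target appears at the
% horizontal distance d' satisfying
(h - z)^{2} + d'^{2} = h^{2} + d^{2}
\quad\Longrightarrow\quad
d' = \sqrt{d^{2} + z\,(2h - z)}
% Horizontal offset between reconstructed and true positions
% (at nadir, d = 0, this reduces to \Delta d = \sqrt{z(2h - z)}):
\Delta d = d' - d
```

For a target below the image plane (z > 0), the square root is real, which matches the observation that such a target is detected but mislocalized; for a target above the image plane, the corresponding equation has no real solution, so the image plane does not intersect the 3D reconstruction and the target goes undetected.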
If the image plane coincides with the plane where the target is located, i.e., D0, the target is reconstructed at the correct position. When the image plane is different from D0, i.e., D1 or D2, due to the cylindrical symmetry of the 3D target reconstruction, the target is imaged in the considered plane at a position different from the true one. The position of the reconstructed target coincides with the intersection point between the 3D reconstruction and the plane where the imaging is carried out. Furthermore, due to the left-right ambiguity, two specular targets appear on both sides of the track (see the red rectangles on planes D1 and D2). The spatial offset in the x-y plane between the true target and the reconstructed one for an image plane at height z can be derived after straightforward geometrical considerations; this result holds also in the more general case when the target is not illuminated at the radar nadir (as it is in Figure 13), d being the horizontal distance between the target and the track.

The geometry in Figure 13 also reveals that the target can be detected (but not correctly localized) when the imaging plane is placed at a higher elevation with respect to the target. Indeed, in this case, it is still possible to find two intersection points between the 3D target reconstruction and the image plane. Conversely, the target cannot be identified at all when it is located above the image plane, since the latter no longer intersects the 3D target reconstruction.

A numerical example showing the effect of the elevation of the image plane is presented for a multi-target scenario. Specifically, the example refers to the rectilinear trajectory and simulation parameters already considered in Section 3.2. The scene comprises three point targets T1, T2, T3 aligned along the flight track and located at the coordinates (−2, 0, 0) m, (0, 0, 0.2) m, and (2, 0, 0.4) m. The reconstruction results achieved on three image planes at z = 0, 0.2, and 0.4 m are displayed in Figure 14a-c, respectively. As can be observed in Figure 14a, only the target T1 is imaged and correctly localized in the plane z = 0 m, while the targets T2 and T3 are not detected because they are located above the image plane. When the image plane is fixed at z = 0.2 m, the target T2 is the only one to be correctly localized, while T1 is imaged at a different location with a spatial offset with respect to the true position. The target T3 is still not detectable because its elevation is greater than the height of the image plane. Finally, Figure 14c shows the reconstruction in the plane z = 0.4 m. In this case, all targets are detected but only T3 is correctly localized. Table 5 compares the true and reconstructed targets' positions achieved in each image plane. The maximum of each spot in the images of Figure 14a-c is considered as the estimate of the targets' positions. Note that the ± sign appears in the presence of the left-right ambiguity problem.

An improvement of the approach in terms of resolution and left-right ambiguity suppression toward high-resolution 3D imaging can be achieved by collecting wideband scattered field data along multiple (parallel) measurement tracks. A similar measurement configuration has been recently studied in the single-frequency case [24]. The theoretical and experimental assessment of such a configuration in the multifrequency case will be the subject of future research.

As a further upgrade of the radar imaging system, the possibility of using a gimbal, as suggested in [41,42], will be considered to achieve greater flexibility in the data acquisition.


Conclusions
A proof of concept of a Multicopter Unmanned Aerial Vehicle (M-UAV) radar imaging system has been developed by integrating a miniaturized commercial radar system onboard a small M-UAV. The imaging system has been equipped with two Global Positioning System (GPS) receivers, the first one located onboard the M-UAV platform and the second one used as a ground-based station, with the aim of exploiting the Carrier-Phase Differential GPS (CDGPS) technique. The latter allows estimating the 3D M-UAV flight path with centimeter accuracy. Moreover, an advanced imaging approach, based on the adjoint of the inverse scattering operator, has been adopted to obtain focused images of on-surface targets in the case of a single flight track. This approach exploits the 3D M-UAV trajectory estimate provided by the CDGPS in the reconstruction stage.
A theoretical/numerical analysis has been preliminarily conducted to evaluate the effect of the overall system and measurement configuration parameters on the imaging performance. In addition, a proof-of-concept measurement campaign has been performed. The flight tests have been carried out by manually piloting the UAV at an authorized site for amateur flights, and the experimental results have demonstrated the capability of the system to obtain very good imaging results, comparable to those foreseen by the theoretical analysis. This was possible thanks to the accurate UAV positioning estimation, i.e., an accurate knowledge of the measurement points, which is a key factor for a reliable focusing of the targets. It is worth noting that, despite the simple and light radar system adopted in this work, the high working frequency band and the centimetric probing wavelength (7.5 cm at 4 GHz) made obtaining an accurate UAV positioning estimate significantly challenging. Such an estimate is necessary for a reliable focusing procedure, which requires knowledge of the measurement points along the flight trajectory with an accuracy comparable with the probing wavelength.
A final comment is dedicated to future developments. To overcome the ambiguity effects caused by the nadir antenna pointing, non-rectilinear trajectories, such as circular or slanted flights, are worth considering. The planning of a measurement campaign involving this kind of flight is the subject of current work. In addition, further flight tests will be conducted to assess the subsurface imaging capability of the system. Moreover, waypoint following and grid surveys will be exploited to regularly sample the area of interest, and multi-constellation/multi-frequency GNSS will be tested. In this frame, more sophisticated flight/navigation modes and 3D tomographic imaging approaches based on multiple measurement lines will be exploited, in order to open novel remote sensing perspectives in structural monitoring and cultural heritage contexts.